How Media Advertising Got Upended by the Internet

In this excellent summary paper, ‘Did Google and Facebook kill the media revenue model?’, Frederic Filloux takes a deep look at the evolution of media advertising over the last decades. It initially benefited paper media, and it has been completely upended by the internet. The interesting part is that its actual value has also plummeted, making it cheaper for people to advertise but also diminishing the possible revenue stream for the media.

“The inefficiency of advertising in print, radio, and TV has always been its historical flaw.” By providing a far more targeted solution, internet advertising suddenly and dramatically increased advertising efficiency. In addition, it suddenly became possible to measure the effectiveness of a campaign much better, and thus to improve it.

In addition, the greater availability of advertising channels created a major deflation of advertising expenditure. The post contains staggering graphs (from which the illustration of this post is extracted) showing how total advertising value plummeted in the last decade, with total media advertising expenditure diminishing by more than 25% in most developed countries (after a historical increase well above inflation, so it is also a sort of correction).

Could printed media have reacted earlier? Maybe, but as Frederic Filloux concludes, the changes were so systemic and overwhelming that a few exceptions may have managed to transform sufficiently, but certainly not the entire industry. “Most of the legacy media were in denial. They acted too late and too little. But they were not in a position to do otherwise.”

In the end, media advertising is another area where the internet has upended the value chain, while at the same time providing more value to advertisers. It should be seen as a rather positive shift for the overall value chain and for consumers, were it not for the overwhelming position of Google and Facebook.

How Social Media Currently Rewards Bad Behavior

This article explains the position of Ellen Pao (ex-CEO of Reddit, and now quite opposed to the Silicon Valley giants for a number of reasons, including accusations of gender discrimination) in ‘Social Media Reward Bad Behavior’, as does a similar interview on Inc.com, ‘Why the Trolls Are Winning the Internet‘. Her point is that social media today rewards bad behavior, because it is not managed in the interest of the people, and possibly because the teams managing the current tools are not sufficiently diverse.

“It makes me really sad, because the internet is such a powerful tool, and it introduced this idea that you could connect with anyone. And it’s been turned into this weapon used to hurt and harass people.” She is quite strong in her words about the impact of social media on users today.

One of the reasons she mentions is that “One of the big problems is that these platforms were built by homogeneous teams, who didn’t experience the harassment themselves, and who don’t have friends who were harassed. Some of them still don’t understand what other people are experiencing and why change is so important.”

An important point is that she does not believe that this problem can be addressed at the scale of the current social networks. “I don’t think it’s possible anymore except at very small scale, because the nature of interactions at scale has become very attention-focused: ‘The angrier and meaner I am online, the more attention I get.’ This has created a high-energy, high-emotion, conflict-oriented set of interactions. And there’s no clear delineation around what’s a good or a bad engagement. People just want engagement.”

All in all, her view on the possibility of social media changing quickly is quite negative, because of their interest in keeping people engaged and spending more time on their platforms. Still, she provides an interesting path for improvement, which is to ensure increased diversity in the social media teams.

Refer also to our previous post ‘How Facebook Model is Addiction and Growth – and Why It Can’t Change’.

How Social Network Legal Protection for Content Should Be Reviewed

Since their inception, all social networks have been protected under US law by a provision called Section 230. Quartz’s update on Section 230 provides quite comprehensive coverage of the issue. Basically, social networks operate under the status of content distributors, not publishers, and thus take no responsibility for the content itself, which shields them from any lawsuit based on content. While this contributed immensely to their development, as they have grown we can observe that this cannot apply any longer, and social networks have had to take measures themselves to monitor and regulate their content.

There are many voices now calling to reconsider whether this section should continue to apply to the major internet content providers. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In reality, social networks do not just distribute: they also produce some content, and they routinely decide which content to put first and make visible, which in itself is almost akin to producing meta-content. Because of their ubiquity, they need to regulate the content they show. De facto, the amount of content regulation they enforce nowadays proves that they cannot be satisfied with being mere distributors; they are inching closer to being publishers, which have to keep an eye on the content they broadcast.

It won’t be easy to change it: “Curiously, some Big Tech companies have come around to support efforts to weaken Section 230. Facebook and Google, for example, were early supporters of the bill that eventually became FOSTA and Facebook CEO Mark Zuckerberg has called for more reform. These small concessions could head off more onerous regulation down the road. But the more cynical read is that the biggest Big Tech companies would gain an advantage over smaller competitors who lack the resources to navigate the legal morass that would follow the repeal of Section 230.”

It will be quite interesting to watch this change unfold in the next few months, and to see what impact it will have on social network content and governance. At least legislators understand that large social networks cannot be considered neutral distributors, and some liability for content will be enforced in the near future.

How Modern Inequality is also Information Inequality

As the Fourth Revolution progresses, we hear a lot about the rise of inequality, mainly in terms of finance and income. But modern inequality is also very much, and increasingly, informational. We already discussed this, for example, in the post ‘How the Transformation of the Press Business Model Makes Access to Quality Information More Difficult‘, but let’s take a wider view of the situation.

There are good-quality news outlets out there that try to stick to journalistic principles. They may be oriented one way or the other, and the editor may favor a certain view of things, but this is generally known as the editorial line of the media. The thing is, access to these media increasingly has to be paid for. Be it the traditional press, new news portals or edited aggregators, access increasingly requires a subscription.

If you can’t afford subscriptions, or if that’s not culturally on your priority list, what remains? Public news outlets that are free, some traditional outlets that still manage to be ad-funded, or social networks. And it is this reliance on social networks that is at the origin of quite a number of issues today, such as the polarization of society and the rise of more extreme groups. In the past, the newspaper was displayed on crowded streets for all to read, but not any more.

Thus, in the past few years, inequality in access to information has grown drastically, to the point of becoming a real societal concern. It certainly needs to be fixed even more urgently than income inequality, because the situation may create substantial disruption, with groups of citizens living increasingly in parallel worlds.

How to Deal with the Upcoming Flood of AI-Generated Copywriting

Following up on our previous post about the transformation caused by AI in music, this excellent post addresses the issue of AI-generated text content: ‘The internet is not ready for the flood of AI-generated text‘. As it is becoming easy for AI to generate copywriting that could pass for human-written, we can expect the internet to be flooded with far more text than humans can currently generate. And it is not ready to manage this flood of content!

One of the subtle ways AI-generated text may take over the internet is the ability to quickly generate different versions of a text and find out which one is the most engaging, thus becoming increasingly better rated on platforms and overtaking human-generated text. In a world where what matters is to grab attention, this could easily become a discriminating factor.
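
As a purely illustrative sketch of the mechanism described above (the variant names, click rates, and the epsilon-greedy policy are my own assumptions, not taken from the article), an engagement-maximizing pipeline could keep rotating AI-generated variants of the same message and progressively favour whichever one gets clicked the most:

```python
import random

# Hypothetical AI-generated variants of the same piece of copy.
variants = ["variant_a", "variant_b", "variant_c"]
shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def pick_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: mostly exploit the best-performing variant,
    occasionally explore the others."""
    if random.random() < epsilon or all(shows[v] == 0 for v in variants):
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / max(shows[v], 1))

def record_outcome(variant: str, clicked: bool) -> None:
    shows[variant] += 1
    clicks[variant] += int(clicked)

# Simulated feedback loop: the catchiest variant ends up shown most often.
true_click_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.08}
for _ in range(10_000):
    v = pick_variant()
    record_outcome(v, random.random() < true_click_rates[v])

print("most shown:", max(variants, key=lambda v: shows[v]))  # typically variant_c
```

Nothing in this loop cares whether the text is true or useful, only whether it gets clicked, which is precisely why it scales so well for machine-written content.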

“Technologies such as GPT-3 may dramatically impact the world of misinformation and disinformation, creating an infinite supply of fake news” – and in the process creating more inequality between those who can pay for access to quality-vetted information and the others.

What can be done? In addition to having machines that can detect machine-generated content, “One of the most obvious first steps forward, which should be put in place for every output of tools such as GPT-3 no matter how much or how little human editing was involved, is labeling of AI-generated content so that people know what they are reading.”
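
The labeling idea quoted above could start as something very simple: attaching provenance metadata to every generated output. Here is a small hypothetical sketch (the field names and the wrapper are my own, not a proposal from the article):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class LabeledContent:
    text: str
    ai_generated: bool           # the label readers and platforms can rely on
    model: Optional[str] = None  # which system produced it, if any
    human_edited: bool = False   # whether a person reworked the output
    created_at: str = ""

def label_output(text: str, model: str, human_edited: bool = False) -> LabeledContent:
    """Wrap a model output with an explicit AI-generated label."""
    return LabeledContent(
        text=text,
        ai_generated=True,
        model=model,
        human_edited=human_edited,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

# Usage: whatever generation tool sits upstream, its output carries the label.
draft = label_output("Example machine-written paragraph...", model="gpt-3-like")
print(asdict(draft))
```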

Welcome to a world where human-generated content will become a rarity for which we may need to pay a bit more!

How GPS Became Irreplaceable While Free

We no longer think much about how miraculous the Global Positioning System (GPS) is. Still, it takes a lot of high technology (including relativistic time corrections on the satellites!) to provide us with what is now an everyday service we depend on. We actually take it for granted, even though we would be quite lost without it.
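
To give a rough sense of why those relativistic corrections matter, here is a back-of-the-envelope calculation using the commonly cited textbook figures (my own illustration, not numbers from the article): satellite clocks gain roughly 45 microseconds per day from general relativity and lose about 7 from special relativity, and left uncorrected the net drift would translate into kilometres of ranging error every single day.

```python
# Back-of-the-envelope check of the commonly cited GPS clock-drift figures.
C = 299_792_458  # speed of light, m/s

gr_gain_us_per_day = 45.9  # clocks run faster in weaker gravity (general relativity)
sr_loss_us_per_day = 7.2   # clocks run slower due to orbital speed (special relativity)

net_us_per_day = gr_gain_us_per_day - sr_loss_us_per_day   # ~38.7 microseconds/day
range_error_km_per_day = net_us_per_day * 1e-6 * C / 1000  # time error -> distance

print(f"net clock drift: ~{net_us_per_day:.1f} microseconds per day")
print(f"ranging error if uncorrected: ~{range_error_km_per_day:.1f} km per day")
```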

Is GPS now part of the minimum infrastructure that we need as humans, like internet access? It has certainly drastically changed the way we navigate. Like many technologies before it, it started from Cold War military efforts, but it has progressively been opened to civilian use. For free. To the point that we now can’t part with it.

Let’s imagine for a minute what would happen if the system became fully unavailable. We all use it one way or another in our daily lives, and even more so in certain industries like logistics. A lot of the efficiency gains in many activities come from GPS availability.

Still, this service is available for free, like many services we now take for granted, as a by-product of something developed for military purposes. There is an issue with the US controlling the signal, which is being addressed by other blocs of nations launching their own systems. This will provide redundancy. It is still amazing that something so useful is available for free.

In the Collaborative Age, a lot of the basic infrastructure becomes increasingly available for free or cheap. Maybe we should be careful not to take these services too much for granted, and have some backup solutions in case they suddenly disappear.

How Facebook’s Handling of Political Ads Must Be Better Scrutinized

We can observe that Facebook is increasingly under pressure about its political impact. This interesting Mashable article, ‘Facebook wants NYU researchers to stop sharing the political ad data it keeps secret‘, provides insights into how secretive the platform is about its handling of political ads.

Apparently, the fact that New York University is conducting research and publishing key statistics on Facebook political ads does not sit well with Facebook itself, which would probably prefer to wash its laundry internally.

“Not only do you see how much money each campaign is spending; you also get a breakdown of topics the ads for each candidate cover, the dollar amount going into each one, and the specifics of how ads are targeted toward each candidate’s hoped-for voters. It’s not necessarily comprehensive information, since it depends on how much data volunteers are able to gather. But it’s more transparency than Facebook has provided on the political ad spending hosted by the platform.”

Apparently such transparency is a problem for the network, when it should certainly be public knowledge, as a way to check that elections are not unduly influenced.

The reticence of Facebook to encourage such research is another clue that something needs to change in the way it tends to influence users.

How the Internet Can Also Be Used to Foster Democracy

This worthwhile Guardian article, ‘How Taiwan’s ‘civic hackers’ helped find a new way to run the country‘, describes the important g0v experiment carried out there (g0v.asia). By focusing on areas of agreement rather than on the disagreements that tend to split communities, they have built a platform that gives hope that the internet can really be used to foster democracy.

Of course this experiment could only come from Taiwan where the need for democracy is particularly essential due to the ambitions of its mainland neighbor.

The Guardian article explains how this started in 2014 and how important it has now become in the local political landscape, with even a minister having emerged from this movement.

Interestingly, a cornerstone of the approach is radical transparency about everything in the public sphere – making information and data much more accessible to the citizens.

But what I find most interesting is that “the discussants found themselves in an entirely new kind of online space – exactly the opposite of a social media platform that encourages strife“. “As people expressed their views, rather than serving up the comments that were the most divisive, it gave the most visibility to those finding consensus – consensus across not just their own little huddle of ideological fellow-travellers, but the other huddles, too. Divisive statements, trolling, provocation – you simply couldn’t see these.”
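
As a purely illustrative sketch of how a consensus-first feed could rank statements (my own simplification, not the actual algorithm behind the Taiwanese platform): instead of boosting whatever divides, score each statement by its lowest approval rate across the different opinion groups, so that only statements every group can live with rise to the top.

```python
# Hypothetical approval rates (0..1) for each statement, per opinion group.
approval = {
    "statement_1": {"group_a": 0.90, "group_b": 0.85, "group_c": 0.80},
    "statement_2": {"group_a": 0.95, "group_b": 0.10, "group_c": 0.20},  # divisive
    "statement_3": {"group_a": 0.70, "group_b": 0.75, "group_c": 0.65},
}

def consensus_score(per_group: dict) -> float:
    """A statement is only as consensual as its least convinced group."""
    return min(per_group.values())

ranked = sorted(approval, key=lambda s: consensus_score(approval[s]), reverse=True)
for s in ranked:
    print(s, round(consensus_score(approval[s]), 2))
# statement_2 sinks to the bottom even though one group loves it.
```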

So it is quite possible to use the internet in a way that fosters agreement, unlike the traditional social networks we have grown used to, which rather do the contrary. This is quite an important message, and I look forward to this type of platform becoming increasingly widespread.

How to Detect Mistakes in Statistical Analysis

This extremely useful paper reminds us of common statistical mistakes made in articles and papers: ‘Ten common statistical mistakes to watch out for when writing or reviewing a manuscript‘.

Those are:

  • absence of an adequate control condition or group
  • interpreting comparisons between two effects without directly comparing them
  • inflating the number of units of analysis
  • spurious correlations (for example, driven by a single extreme value)
  • using samples that are too small
  • circular analysis (retrospectively selecting features of the data to characterize the dependent variables, resulting in a distortion of the resulting statistical test)
  • too much flexibility of analysis
  • failure to correct for multiple comparisons in exploratory analysis
  • over-interpreting non-significant results
  • confusing correlation and causation

Quite a useful checklist to use the next time you review a paper based on statistical analysis!
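
As a small illustration of the ‘spurious correlations’ item in the checklist (synthetic data, purely for demonstration), a single extreme value can manufacture a strong correlation out of pure noise:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Two completely unrelated variables...
x = rng.normal(size=30)
y = rng.normal(size=30)

# ...plus one extreme observation added to both.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)

r_clean, p_clean = pearsonr(x, y)
r_out, p_out = pearsonr(x_out, y_out)

print(f"without the outlier: r = {r_clean:+.2f} (p = {p_clean:.2f})")
print(f"with one outlier:    r = {r_out:+.2f} (p = {p_out:.1e})")
# The single extreme point produces a 'significant' correlation that is not there.
```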

How Most Internet Services Are Poor at Helping You Discover New Things

This thoughtful blog post by Seth Godin, ‘Who is good at discovery?‘, reminds us that most internet services are poor at helping us discover new things. Some, like Netflix, are better, but many are really poor.

“Google built its entire business on the mythology of discovery, persuading millions of entrepreneurs and creators that somehow, SEO would help them get found, at the very same time they’ve dramatically decreased organic search results to maximize revenue.”

Intrinsically, and increasingly, internet services tend to propose new things that fit our existing preferences in order to keep us hooked. I find it increasingly difficult to get connected to something new through them. This is not the case for the traditional press and magazines, my personal network of peers and connections, or the references in the books I read, which continue to let me discover new things far more than the entire internet combined.

When you search on the internet, you had better know what you would like to find, because serendipity is not going to happen by itself. Worse, on “YouTube–if you follow the ‘recommended’ path for just a handful or two of clicks, you’ll end up with something banal or violent.”

Don’t rely on the internet to find new things to discover. Rather rely on your network and traditional sources!

How Internet Activity Does Not Seem to Increase Exponentially Any More

I recently looked at those graphs produced every year about everything that happens on the internet in one minute. This is the graph for 2020. What is interesting is to compare it with similar graphs from previous years.

The absolute numbers are overwhelming (4.7 million videos viewed on YouTube every minute, 59 million messages on Messenger and WhatsApp), in particular when one remembers that there are only 1,440 minutes in a day. At the same time, they have evolved somewhat linearly over the past 4-5 years (doubling over the period), at least based on similar representations. There seems to be a physical limit to our online activity!
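
For scale, here is a quick conversion of the per-minute figures quoted above into daily totals, simply multiplying by the 1,440 minutes in a day:

```python
MINUTES_PER_DAY = 24 * 60  # 1,440

per_minute = {
    "YouTube videos viewed": 4_700_000,
    "Messenger + WhatsApp messages": 59_000_000,
}

for activity, count in per_minute.items():
    print(f"{activity}: ~{count * MINUTES_PER_DAY / 1e9:.1f} billion per day")
# YouTube videos viewed: ~6.8 billion per day
# Messenger + WhatsApp messages: ~85.0 billion per day
```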

Of course, each video or message may itself have become heavier, with higher definition or richer content, so this may not represent the actual growth of traffic. Still, it is interesting to observe that we no longer seem to be in the exponential growth that was observed when digital started to spread at the start of the century.

Maybe this is an important piece of context for new online services intending to penetrate the market, in particular where it already seems quite mature (developed countries).

How To Deal with Frequent Mistakes of Artificial Intelligence

Artificial Intelligence is quite often mistaken, and that is something we must know and understand (see for example the previous post ‘How Deployment of Facial Recognition Creates Many Issues‘). The best example I have read of humans not reacting adequately to this problem is highlighted in the post ‘Pourquoi l’intelligence artificielle se trompe tout le temps’ (Why AI is always mistaken – in French) when it evokes the Kasparov vs Deep Blue chess match (recounted in English in ‘Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution‘).

At some stage during the game, the computer did something that looked quite stupid. It actually was stupid, but one could also believe it was brilliantly unconventional! Kasparov was destabilized. In reality, it was simply a mistake by the AI program! “The world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.”

Because of the manner in which AI gets trained, it will necessarily produce a high rate of mistakes and errors when implemented. The challenge for us is to identify those occasions and not get destabilized by them.

First, we should probably get a systematic warning associated with AI output about the possibility of a mistake. And we should remain conscious and critically aware of that possibility by running some simple checks on the adequacy of the output.
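
A minimal sketch of what such a systematic warning could look like (the threshold, the confidence value, and the sanity check below are placeholders of my own, not a recommendation from the quoted posts): wrap every AI output with its confidence and flag anything below a threshold, or anything failing a cheap sanity check, for human review.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FlaggedOutput:
    prediction: str
    confidence: float
    needs_review: bool
    reason: str = ""

def wrap_with_warning(
    prediction: str,
    confidence: float,
    sanity_check: Callable[[str], bool],
    threshold: float = 0.9,
) -> FlaggedOutput:
    """Attach an explicit 'this may be wrong' flag to an AI output."""
    if confidence < threshold:
        return FlaggedOutput(prediction, confidence, True, "low model confidence")
    if not sanity_check(prediction):
        return FlaggedOutput(prediction, confidence, True, "failed sanity check")
    return FlaggedOutput(prediction, confidence, False)

# Usage with a trivial sanity check (here: the answer must not be empty).
result = wrap_with_warning("Nc3", confidence=0.62, sanity_check=lambda p: bool(p.strip()))
print(result)  # needs_review=True, reason='low model confidence'
```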

This high error rate of AI is of course a problem for high-reliability applications, and we should also see the emergence of techniques to correct this problem or to provide technological checks and balances against inadvertent mistakes that could have real consequences.

Still just knowing that AI is prone to making mistakes is something important we need to recognise and be able to respond to.
