How Advertising Is Often Wrongly Blamed as the Source of Internet Problems

This Atlantic article ‘The Internet’s Original Sin‘ is a useful reminder of the damaging effects of a web funded by advertising. It proposes, as many pieces have before, an alternative funding model.

“Advertising became the default business model on the web, “the entire economic foundation of our industry,” because it was the easiest model for a web startup to implement, and the easiest to market to investors. Web startups could contract their revenue growth to an ad network and focus on building an audience.”

Of course, funding through advertising has led to some effects that are not quite positive: the development of means to defeat search algorithms, the need to pay to get promoted anywhere, and, most importantly, the trend for social networks to increase stickiness by making sure you only see content that conforms to your worldview.

I am not sure, however, that we should blame advertising so much. Historically, newspapers, radio and TV stations have also been funded mainly through advertising. This is not new. What is new is the power of digital to take advertising to a new level of personalization, up to showing a personal view of the internet to each user, and the fact that the advertising market has now become global. In the case of newspapers, radio and TV, regulations were introduced to ensure a balanced approach to what was being broadcast. That is probably what is missing for the internet now.

It may be difficult to introduce regulations because they would need to be global and the internet has become a playground for power ambitions, but it is definitely possible to impose regulations nationally or by region, and that is what should be done.

How the Split of the Internet Is Linked to the Strategic Value of User Data

Following up on our previous post ‘How the Internet Is Getting Increasingly Split‘, let’s reflect for a moment on the reason for this. I don’t believe it is just censorship. Of course the censorship motivation applies to many non-democratic countries, but the reason is probably deeper and has been highlighted by the TikTok events: ownership of, and access to, user data.

Access to user data allows all sorts of manipulations, as people can be targeted individually based on their preferences and hot buttons. It also provides an insight into the private lives of individuals and may help compromise them. In brief, it provides a strategic advantage that can be used to disrupt or manipulate social situations. It is a useful source of information for cyberwar, as shown by the manipulation historically performed on elections in America, Britain and less developed nations.

The recognition of the strategic value of user data is an interesting issue on the brink of the exponential development of the Internet of Things (IoT): even more data linked to our private lives will be generated, often without our being conscious of what is really happening. This will in turn create even more pressure to prevent foreign powers from having access to user data, necessarily promoting an increased split of the internet. Global companies will have to develop strategies to locate data in the countries where it is generated and to provide assurance about its possible use by foreign powers.

The strategic value of user data has now been recognized, as well as its potential negative usage. And one can expect more consequences in the near future.

How the Internet Is Getting Increasingly Split

A good summary of what is currently happening on the internet is given in these posts by Darin Stewart: ‘Welcome to the Splinternet‘ and ‘TikTok is just the latest victim of the fracturing Internet‘. Of course the trend has been around for some time, but it is now definitely clear that the internet is no longer global, but multiple.

“What was promised as the great agent of globalization is rapidly becoming an enabler of isolationism. The borderless, digital frontier international businesses and organizations aligned themselves to is fragmenting. New borders and checkpoints are emerging.” “Technical fragmentation currently prevents roughly 25% of internet users, most in emerging markets, from accessing 70% of the Web. Political fragmentation has already divided the Internet into East and West, but recent developments are further divvying up the web into strongly bordered regional federations.”

This fragmentation has been driven by legal aspects (data protection laws), copyright and commercial issues, political issues (China being the best-known example, with India also participating), etc. It is quite interesting that this trend runs parallel to the movement to pull back from globalization.

“Over time it creates a parallel reality that is extremely difficult to break out of. When amplified by the walled garden effect users are separated from non-aligned segments of the web as firmly as if they were on different networks altogether.”

When travelling it is possible to overcome some of these access limitations, but when staying in one’s country only advanced tricks make it possible. Most people will increasingly be participating in a more limited version of the internet. And that probably reinforces the current issue of people being increasingly caught in the bubble of their own opinions and social networks.

How Non-Conformists Must Find New Safe Spaces

In this comprehensive post, ‘The Four Quadrants of Conformism‘, Paul Graham addresses the type of person that actually moves the world (hint: they are quite few in number). He also exposes the challenges raised by the increase in conformism in the current world.

The quadrants of conformism plot the degree of conformism (conventional-minded versus independent-minded) against a passive/aggressive axis. This creates roughly four types of people, and the type would be linked more to personality than to cultural influence. There are more conventional-minded than independent-minded people, and fewer aggressive people than passive ones.

“Why do the independent-minded need to be protected, though? Because they have all the new ideas. To be a successful scientist, for example, it’s not enough just to be right. You have to be right when everyone else is wrong. Conventional-minded people can’t do that. For similar reasons, all successful startup CEOs are not merely independent-minded, but aggressively so. So it’s no coincidence that societies prosper only to the extent that they have customs for keeping the conventional-minded at bay.”

“In the last few years, many of us have noticed that the customs protecting free inquiry have been weakened.” We are reverting to a pre-Enlightenment situation where people were expected to be passive and conventional. The fact that universities are becoming places where intolerance is prevalent, while they have historically, on the contrary, been places of tolerance and investigation, is a worry. This safe space has not been replaced by the internet or other safe locations.

“Though I’ve spent a lot of time thinking about this situation, I can’t predict how it plays out. Could some universities reverse the current trend and remain places where the independent-minded want to congregate? Or will the independent-minded gradually abandon them? I worry a lot about what we might lose if that happened.”

“But I’m hopeful long term. The independent-minded are good at protecting themselves. If existing institutions are compromised, they’ll create new ones. That may require some imagination. But imagination is, after all, their specialty.”

Our increasingly conventional society is a worry, but I believe we underestimate the possibilities for free inquiry as information becomes increasingly available. Innovators used to have to be close to universities and their incomparable libraries; this constraint is now obsolete, and virtual locations will develop that offer the same possibilities.

How to Overcome Bias in Project Estimates by Involving Generalists in Systemic Reviews

To finish our current series of posts exploring the excellent book ‘Range: Why Generalists Triumph in a Specialized World‘ by David Epstein, I noted how the concepts developed about generalists versus specialists also apply to the field of project definition. It takes generalists and a diverse set of viewpoints to test the adequacy of a project definition file and its associated estimate.

“Bent Flyvbjerg, chair of Major Programme Management at Oxford University’s business school, has shown that around 90 percent of major infrastructure projects worldwide go over budget (by an average of 28 percent) in part because managers focus on the details of their project and become overly optimistic. Project managers can become like Kahneman’s curriculum-building team, which decided that thanks to its roster of experts it would certainly not encounter the same delays as did other groups. Flyvbjerg studied a project to build a tram system in Scotland, in which an outside consulting team actually went through an analogy process akin to what the private equity investors were instructed to do. They ignored specifics of the project at hand and focused on others with structural similarities. The consulting team saw that the project group had made a rigorous analysis using all of the details of the work to be done. And yet, using analogies to separate projects, the consulting team concluded that the cost projection of £320 million (more than $400 million) was probably a massive underestimate.”

“This is a widespread phenomenon. If you’re asked to predict […], the more internal details you learn about any particular scenario […] the more likely you are to say that the scenario you are investigating will occur.”

This is why we observe again and again the immense benefits of having projects reviewed independently by people who have a generalist overview and are not emotionally involved with the project, in order to obtain objective feedback. While this is what we promote, the fact that the review is systemic and performed by generalists is also an essential part of the value delivered, and I will highlight it more in the future.
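
To make the outside-view idea concrete, here is a minimal sketch in Python of a reference-class adjustment: instead of arguing over the project’s internal details, the estimate is uplifted by the cost overruns observed on structurally similar past projects. The overrun figures and the percentile choice are purely illustrative assumptions, not data from the book.

```python
# Minimal sketch of an "outside view" (reference class) adjustment, in the
# spirit of Flyvbjerg's approach. The overrun figures below are purely
# illustrative placeholders, not data from the book.

def outside_view_estimate(inside_estimate, reference_overruns, percentile=0.8):
    """Adjust an inside-view estimate using cost overruns observed on
    structurally similar past projects (expressed as fractions,
    e.g. 0.28 for a 28% overrun)."""
    overruns = sorted(reference_overruns)
    # pick the overrun at the requested percentile of the reference class
    index = min(int(percentile * len(overruns)), len(overruns) - 1)
    uplift = overruns[index]
    return inside_estimate * (1 + uplift)

# Illustrative reference class: overruns of comparable tram/rail projects
reference_overruns = [0.10, 0.15, 0.25, 0.28, 0.40, 0.55, 0.80]

budget = 320  # £ million, the inside-view estimate quoted above
print(f"Median-uplift estimate: £{outside_view_estimate(budget, reference_overruns, 0.5):.0f}m")
print(f"P80-uplift estimate:    £{outside_view_estimate(budget, reference_overruns, 0.8):.0f}m")
```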

How the Collaborative Age Value Is In The Platform

In this post, ‘Why newspapers fail‘, Frederic Filloux mentions a few reasons. The one that struck me is that they concentrated on the wrong thing: distribution rather than aggregation and the development of a platform to reap the value of user data.

“The news industry took the opposite stance. Deprived of customers’ data, it found itself blind to what kind of online services the audience was craving. As a result, publishers left numerous markets wide open, like free classified and auctions taken by Craigslist and eBay (before Schibsted set in), large news aggregators and the entire system that flourished thanks to RSS feeds. It is actually funny to see many news outlets now engaged in costly acquisitions to get back the services they should have developed in the first place.”

Today the value lies in customer data, and the platforms and links in the chain that have access to it generate the most value. This explains the valuation of social network companies and of other platforms like Google. All other services are doomed to be dependent on the platform gods.

If you want to create value today, you eventually need to produce a platform that concentrates user data: the real source of value in this early Collaborative Age!

How to Rebuild Your Missing Best Friend with AI

I found this post inspiring: ‘SPEAK, MEMORY – When her best friend died, she rebuilt him using artificial intelligence‘. The approach is quite simple: build a bot with AI, feed it all the messages and interactions with your late best friend, and let the AI work its miracle to provide you with interaction.
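
To give a feel for the mechanics, here is a toy sketch of how a message archive could be turned into a conversational bot. The team in the article trained a neural network on the messages; this is only a much simpler retrieval-based stand-in that picks the past reply whose context best matches the prompt, and the sample messages are invented.

```python
# Toy sketch of turning a message archive into a bot. The article's team
# trained a neural network on the messages; this far simpler stand-in just
# retrieves the past reply whose context best matches the prompt.
# The message data below is a hypothetical placeholder.

def tokenize(text):
    return set(text.lower().split())

def best_reply(prompt, archive):
    """archive: list of (message_received, reply_sent) pairs from the friend."""
    prompt_words = tokenize(prompt)
    def overlap(pair):
        return len(prompt_words & tokenize(pair[0]))
    context, reply = max(archive, key=overlap)
    return reply

archive = [
    ("how was your day", "Long day, but I found a great little cafe."),
    ("what are you reading", "Still stuck on the same novel, it's wonderful."),
]
print(best_reply("How was your day today?", archive))
```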

I find the idea exciting and unsettling at the same time. This solution offers a kind of immortality (at least on the basis of past expression), but it also raises questions about what AI, with its limitations, will really produce.

In the post it seems that this approach helped the person overcome her grief, but it may also create a situation where grieving is suspended because of the impression that your friend is still there with you.

Therefore, this is an idea to be handled with caution. At the same time, we can expect the concept to become more prevalent as AI performance increases and as the amount of data we generate digitally during our lifetimes grows.

We are moving further toward a digital world inhabited by multiple versions of ourselves, some of which will survive our death. Interesting world!

How Our Perception of Knowledge Is Shifting to Relative Knowledge

In this quite tedious post, ‘Knowledge is crude: Far from being a touchstone of the truth, knowledge is a stone-age concept that harms our dealings with the modern world‘, some interesting concepts are developed about how our view of knowledge needs to change as we move into the Collaborative Age.

My understanding of the thesis of the post is that knowledge is increasingly relative, and more based on statistical evidence. It is much less absolute and certain than we previously considered knowledge to be.

Specifically, knowledge considered as something shared between people increasingly becomes an alignment of opinions rather than more certain knowledge that has been independently vetted and settled.

I am quite convinced that we have realized over the past few decades how temporary knowledge is and how it can be put into question by new evidence. We now know that scientific knowledge and theories only wait for the next piece of evidence to contradict them and thus create the need for new, better theories.

In the Collaborative Age, we will increasingly see knowledge as relative and ready to be upended. Tools to support this are already there, such as online encyclopedias. The challenge, of course, is to ensure that knowledge remains grounded and does not become just another set of conspiracy theories. We still have to invent the quality criteria for relative knowledge. Let’s get to work.

How AI Algorithms Now Get Generated by Natural Selection

In this breathtaking post, ‘Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them‘, the latest developments in AI algorithm generation are described.

It does not yet look like it really works for advanced algorithms, but it is quite possible that algorithms will very quickly evolve that produce novel solutions to certain simple problems such as image recognition. This is quite an exciting, and troubling, development. It was bound to arrive, though, with the development of ‘genetic algorithms’ that simulate natural selection.
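
For readers unfamiliar with the underlying principle, here is a minimal sketch of a genetic algorithm: a population of candidates is scored, the fittest are kept, and mutated recombinations refill the population. It evolves a toy bit string, not AI architectures as in the system described in the post; the population size, mutation rate and target are arbitrary choices.

```python
# Minimal sketch of a genetic algorithm, to illustrate the "natural selection"
# principle behind evolved programs. This toy evolves a bit string toward a
# target pattern; it is not the system described in the post.

import random

TARGET = [1] * 20                      # the "ideal" individual we want to evolve
POP_SIZE, GENERATIONS, MUTATION = 30, 60, 0.02

def fitness(ind):
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def mutate(ind):
    return [bit ^ 1 if random.random() < MUTATION else bit for bit in ind]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # selection: keep the fitter half as parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # reproduction: recombine and mutate parents to refill the population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)), "of", len(TARGET))
```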

Of course, the novelty is to apply this to AI algorithms, which are in themselves heavier and more cumbersome to handle. Still, it gives quite an interesting perspective on what we can expect in the short and medium term. Scary!

How to Ensure Data Is Not a New Toxic Waste

There are quite a number of discussions about the ambiguous status of data in the new Collaborative Age. On one side it is celebrated as the new oil (refer to our post How Data Really is the New Oil, and Better); on the other side, some argue that it is rather a toxic waste, as in this interesting column ‘Data – the new oil, or potential for a toxic oil spill?‘

The point of the article relates to data security and the harm that can be done through data theft and the possible recombination of stolen data with other data sources that have also been stolen. With zillions of data points generated every day, the argument is that one day or another, sensitive data will leak and produce toxic effects on the wider data landscape and digital environment.

Specifically, the article mentions “Re-identification of anonymized data-sets [which] is a hot research topic for computer science today” and the fact that breaches are additive in nature, progressively weakening privacy and exposing sensitive data.
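
To illustrate what re-identification by linkage can mean in practice, here is a toy sketch: two datasets that each look harmless can be joined on a handful of quasi-identifiers such as birth date, gender and postcode. All records and field names below are fictional, and real attacks are of course far more sophisticated.

```python
# Toy illustration of re-identification by linking quasi-identifiers
# (the classic birth date + gender + postcode combination). The records
# below are entirely fictional placeholders.

anonymized_health = [
    {"birth": "1975-03-02", "gender": "F", "postcode": "02139", "diagnosis": "asthma"},
    {"birth": "1982-11-19", "gender": "M", "postcode": "75011", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "A. Martin", "birth": "1982-11-19", "gender": "M", "postcode": "75011"},
    {"name": "B. Dupont", "birth": "1990-06-07", "gender": "F", "postcode": "69003"},
]

QUASI_IDS = ("birth", "gender", "postcode")

def link(records_a, records_b):
    """Match records that share all quasi-identifiers."""
    index = {tuple(r[k] for k in QUASI_IDS): r for r in records_b}
    for r in records_a:
        match = index.get(tuple(r[k] for k in QUASI_IDS))
        if match:
            yield match["name"], r["diagnosis"]

for name, diagnosis in link(anonymized_health, public_register):
    print(f"{name} is likely the person with {diagnosis}")
```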

Of course, unclean data (refer to our post on data hygiene) is another form of toxic waste that may contaminate the wider data ecosystem if it is used as a basis for training AI algorithms or for other reference applications.

The large amounts of data available today are a great source of value and at the same time are fraught with risks, as with any new technology. Which will win out? My optimistic self is rather confident that the benefits will outweigh the risks, but that does not detract from the need to reinforce security and privacy.

Let’s make sure data is the source of value and not a toxic waste.

How Data Hygiene Becomes Essential

At an AI conference recently, I was struck by the mention of new jobs such as data hygienist and AI trainer. I had not realized how important data hygiene was, to the point of becoming a profession in its own right!

Data hygiene is in reality quite critical to AI development. Poor data hygiene is certain to create all sorts of issues and false positives, and to lengthen dramatically the time it takes for an AI algorithm to learn its task.

Data hygiene is actually hard work because of the sheer size of the databases to clean up and the need to distinguish rubbish from legitimate data points. It requires specific tools and particular attention, not to mention time. Hence it is a significant investment, but one that apparently proves quite worthwhile compared with the benefits.
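
As a concrete, if simplistic, illustration of what the clean-up involves, here is a minimal sketch of three basic hygiene steps: removing exact duplicates, dropping incomplete records and filtering implausible values. The schema, records and thresholds are hypothetical; real pipelines use dedicated tools and far richer rules.

```python
# Minimal sketch of basic data-hygiene steps: deduplication, dropping
# incomplete records, and filtering implausible outliers. The schema and
# thresholds are hypothetical, just to make the steps concrete.

raw_records = [
    {"id": 1, "age": 34, "income": 42_000},
    {"id": 1, "age": 34, "income": 42_000},    # exact duplicate
    {"id": 2, "age": None, "income": 55_000},  # missing value
    {"id": 3, "age": 212, "income": 61_000},   # implausible age
    {"id": 4, "age": 29, "income": 38_000},
]

def clean(records):
    seen, cleaned = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:                          # 1. drop exact duplicates
            continue
        seen.add(key)
        if any(v is None for v in r.values()):   # 2. drop incomplete rows
            continue
        if not (0 <= r["age"] <= 120):           # 3. drop implausible values
            continue
        cleaned.append(r)
    return cleaned

print(clean(raw_records))   # keeps records 1 and 4 only
```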

Previously we did not care so much about the quality of the data in our databases, although there has always been the old adage about garbage in, garbage out. Now we need a much higher quality level, and apparently it is quite a challenge to achieve.

Welcome to the world of data hygiene and data hygienists!

How Easy It Is to Fool Artificial Intelligence

I love this funny post: ‘Hackers stuck a 2-inch strip of tape on a 35mph speed sign and successfully tricked 2 Teslas into accelerating to 85mph‘. The point here is not really about Tesla reliability, but about how easy it still is to trick Artificial Intelligence recognition tools.

In this particularly funny example, the researchers made only a slight change to the speed limit sign, and it was enough to trick the sign recognition algorithm that watches the road and determines the applicable speed (see the image). This type of system is increasingly prevalent in cars, generally just to update the applicable speed limit that is provided as guidance to the driver.

What is really impressive here is obviously how easy it seems to be to fool Artificial Intelligence-based recognition software. If that is the case for something so obvious and mundane, what are the consequences for more complex applications like face recognition? Are they just as easy to fool?
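
The underlying weakness can be illustrated with a toy example. The sketch below applies a fast-gradient-sign-style perturbation to a hypothetical linear "sign classifier": a tiny, uniform nudge to every input component is enough to flip its decision. This is an illustration of the general principle under invented assumptions, not the physical sticker attack performed by the researchers.

```python
# Toy sketch of an adversarial perturbation on a linear classifier, in the
# spirit of the "fast gradient sign" attack. This illustrates the general
# weakness, not the physical sticker attack described in the post.

import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "sign classifier": score > 0 means the sign reads "35",
# score < 0 means it reads "85". Weights and input are random placeholders.
w = rng.normal(size=100)
x = rng.normal(size=100)
x = x + w * (1.0 - w @ x) / (w @ w)      # shift x so its score is exactly +1.0 ("35")

def score(v):
    return float(w @ v)

# Fast-gradient-sign style perturbation: move every component a tiny step
# epsilon in the direction that decreases the score the fastest.
epsilon = 1.05 * score(x) / np.abs(w).sum()   # just enough to cross the boundary
x_adv = x - epsilon * np.sign(w)

print("per-component change:", round(epsilon, 4))   # tiny, like a strip of tape
print("original score: ", round(score(x), 3))        # +1.0  -> reads "35"
print("perturbed score:", round(score(x_adv), 3))    # negative -> reads "85"
```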

Artificial Intelligence does not seem to be quite completely robust yet. Some progress is still needed!
