How We Need to Increase Efforts to Protect Against Inadequate AI-Generated Content

Following up on the previous post ‘How We Underestimate the Availability of AI-Generated Content‘, this interesting post ‘AI-generated fake content could unleash a virtual arms race‘ goes one step further by examining the consequences of this technology.

“[the exercise of generating a fake AI-generated website provided] a glimpse into a potentially darker digital future in which it is impossible to distinguish reality from fiction.

“Such a scenario threatens to topple the already precarious balance of power between creators, search engines, and users. The current flow of fake news and propaganda already fools too many people, even as digital platforms struggle to weed it all out. AI’s ability to further automate content creation could leave everyone from journalists to brands unable to connect with an audience that no longer trusts search engine results and must assume that the bulk of what they see online is fake.”

The real issue is that machines can generate content much faster than humans, while all social networks rely mainly on humans to weed out inadequate content. Those tools could thus be “weaponized […] to unleash a tidal wave of propaganda [that] could make today’s infowars look primitive“.

There is thus a definite and urgent need to develop “increasingly better tools to help us determine real from fake and more human gatekeepers to sift through the rising tide of content.”


How We Underestimate the Availability of AI-Generated Content

Just take a minute to visit ‘This Marketing Blog Does Not Exist‘. It looks like a genuine blog, just like this one, right? Wrong: it has been entirely AI-generated, including the headshot of the supposed writer. And the texts do seem to make sense at first glance.

We are reaching a situation where we are no longer in a position to distinguish AI-generated content from human content. Speech generators produce real-sounding audio. Soon we won’t be able to distinguish deep-fake videos from real ones (the picture shows a snapshot from a deep-fake video of Obama compared with an extract from a real video).

For end-users, there is a definite need to clearly identify content that is AI-generated. For some people, this also creates an unprecedented opportunity to swindle or otherwise abuse the confidence of readers and viewers at scale.

In a year when AI engines are being made widely available to the public, I believe we greatly underestimate the impact of these technologies on the current world, and how an increasing portion of what we read, hear and watch is AI-generated fiction. A wake-up call may be needed!


How the USA’s Decadence May Be Due to Not Achieving the Proper Institutional Transformation

In the very interesting book ‘Hedge: A Greater Safety Net for the Entrepreneurial Age‘ by Nicolas Colin, consideration is given to the changes our institutions need in order to deal with the Collaborative Age. In this context, the evolution of the institutional setup in the USA takes on particular importance, the USA being the birthplace of the current Fourth Revolution. Colin’s conclusion is that this transformation is not happening, which weakens the USA as the economic giant of the age and marks the beginning of its decadence.

“Being the dominant power in a given techno-economic age is not only about nurturing the dominant corporations of the day. It’s also about building the institutions needed to bring about economic security and prosperity. America has been the hotbed of three consecutive technological revolutions. But now that we’re deep into the current age of ubiquitous computing and networks, it’s entirely possible that the US will know the same fate as Germany at the dawn of the age of steel and heavy engineering. Despite having a head start and everything needed to succeed, it could come up short and, taken aback by its own demise, experience the worst decades in its history.”

The issues are well known: the lack of a proper minimum safety net, in particular for people not working in the large corporations typical of the industrial age; problematic institutions preventing real reforms from passing; a pioneer economy deeply at odds with the need to protect the environment; etc.

I like the historical comparison, as it shows the importance of transforming institutions to fit economic evolution. In the long term, that is what makes the difference. Whether institutions can change fast enough remains to be seen.


How To Deal with Frequent Mistakes of Artificial Intelligence

Artificial Intelligence is quite often mistaken, and that’s something we must know and understand (see for example the previous post ‘How Deployment of Facial Recognition Creates Many Issues‘). The best illustration I’ve read of humans failing to react adequately to this is in the post ‘Pourquoi l’intelligence artificielle se trompe tout le temps’ (‘Why artificial intelligence gets it wrong all the time’ – in French), which recalls the Kasparov vs Deep Blue chess match (recounted in English in ‘Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution‘).

At some stage during the game, the computer made a move that looked quite stupid. It was indeed stupid, but one could nevertheless believe it was brilliantly unconventional! Kasparov was destabilized, when in reality the move was simply a mistake by the AI program. “The world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.”

Because of the manner in which AI is trained, it will necessarily produce a high rate of mistakes and errors when deployed. The challenge for us is to identify those occasions and not be destabilized by them.

First, AI output should probably come with a systematic warning about the possibility of a mistake. Second, we should remain conscious and critically aware of that possibility by running some simple checks on the adequacy of the output.
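As a minimal sketch of the first idea – assuming a classifier that exposes a scikit-learn-style predict_proba interface, with an arbitrary illustrative threshold of my own choosing – the warning could be attached automatically to any low-confidence output:

```python
# Minimal sketch: attach an explicit warning to low-confidence predictions.
# Assumes a classifier exposing a scikit-learn-style predict_proba interface;
# the 0.8 threshold is an arbitrary, illustrative choice.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlaggedPrediction:
    label: int
    confidence: float
    warning: Optional[str]  # set when the output should be double-checked

def predict_with_warning(model, x, threshold: float = 0.8) -> FlaggedPrediction:
    probas = model.predict_proba([x])[0]  # per-class probabilities
    label = int(probas.argmax())
    confidence = float(probas.max())
    warning = None
    if confidence < threshold:
        warning = (f"Low confidence ({confidence:.0%}): treat this output "
                   "as possibly mistaken and verify it independently.")
    return FlaggedPrediction(label, confidence, warning)
```

The point is not the specific threshold, but that the machine’s own doubt is surfaced to the human instead of being hidden behind a single confident-looking answer.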

This high error rate is of course a problem for high-reliability applications, and we should also see the emergence of techniques to correct it, or of technological checks and balances to avoid inadvertent mistakes that could have real consequences.

Still, simply knowing that AI is prone to making mistakes is something important that we need to recognise and be able to respond to.


How Deployment of Facial Recognition Creates Many Issues

In this Reuters investigation ‘Rite Aid deployed facial recognition systems in hundreds of U.S. stores‘, the major problems of deploying this technology at scale are exposed. In the end, it seems that the pharmacy chain has actually given up using it for the moment.

The primary intent of the deployment was security and theft prevention. Beyond shortcomings in informing the public that the technology was in use, there appear to have been many false positive identifications, in particular of people of color. In addition, the paper notes the technology’s links to China, which reflects the fear that facial recognition data may be misused or the system manipulated.

Of course, facial recognition software could be put to positive uses such as individualized service, but other technologies would allow that too. The current lack of reliability of the technology, and the fact that it is deployed without a proper avenue of appeal for wrongly identified people, are both concerns. This probably calls for strong regulation of who can access the data collected on the public and of what is done with it.


How Advertising Is Often Wrongly Blamed As the Source of Internet Problems

This Atlantic paper ‘The Internet’s Original Sin‘ provides a reminder of the damaging effects of a web funded by advertising. It proposes, as many papers have before, an alternative funding model.

“Advertising became the default business model on the web, “the entire economic foundation of our industry,” because it was the easiest model for a web startup to implement, and the easiest to market to investors. Web startups could contract their revenue growth to an ad network and focus on building an audience.”

Of course, funding through advertising has led to some effects that are far from positive: the development of means to defeat search algorithms, the need to pay to get promoted anywhere, and, most importantly, the tendency of social networks to increase stickiness by making sure you only see content that conforms to your worldview.

I am not sure, however, that we should blame advertising so much. Historically, newspapers, radio and TV stations have also been funded mainly through advertising; this is not new. What is new is the power of digital to take advertising to an unprecedented level of personalization, up to showing each user a personal view of the internet, and the fact that the advertising market has become global. For newspapers, radio and TV, regulations were introduced to ensure a balanced approach to what was broadcast. That is probably what is missing for the internet now.

It may be difficult to introduce such regulations, because they would need to be global and the internet has become a playground for power ambitions, but it is definitely possible to impose them nationally or regionally, and that is what should be done.


How the Split of the Internet Is Linked to the Strategic Value of User Data

Following up on our previous post ‘How Internet is Getting Increasingly Split‘, let’s reflect for a moment on the reason for this. I don’t believe it is just censorship. Of course, the censorship motivation applies to many non-democratic countries, but the reason is probably deeper and has been highlighted by the TikTok events: ownership of, and access to, user data.

Access to user data allows all sorts of manipulations, as people can be targeted individually based on their preferences and hot buttons. It also provides insight into the private lives of individuals and may help compromise them. In brief, it provides a strategic advantage that can be used to disrupt or manipulate social situations. It is a useful source of information for cyberwar, as shown by the manipulations historically performed on elections in America, Britain and less developed nations.

The recognition of the strategic value of user data is a particularly interesting issue on the brink of the exponential development of the Internet of Things (IoT): even more data linked to our private lives will be generated, often without our being conscious of what is really happening. This will in turn create even more pressure to prevent foreign powers from accessing user data, necessarily promoting a further split of the internet. Global companies will have to develop strategies to locate data in the countries where it is generated and to provide guarantees against its use by foreign powers.

The strategic value of user data has now been recognized, as has its potential for negative use. One can expect more consequences in the near future.


How Conversations with Artificial Intelligence Become Realistic

In the post ‘Conversations with GPT-3‘ we get some interesting insight into the experience of conversing with an artificial intelligence, based on the largest natural-language AI around, released in July 2020.

According to Wikipedia, “Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a for-profit San Francisco-based artificial intelligence research laboratory”. It is one of the largest AI systems produced so far.

“GPT-3 has been trained on most of what humanity has publicly written. All of our greatest books, scientific papers, and news articles. We can present our problems to GPT-3, and just like it transcended our capabilities in Go, it may transcend our creativity and problem solving capabilities and provide new, novel strategies to employ in every aspect of human work and relationships.”

The post presents some examples of the texts predicted by the AI. The conversations are quite astounding, even if, of course, what “Wise Being” (the name given to the machine) says is extracted from what we could call common knowledge. Take for example the conversation around Love.
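For the curious, here is a minimal sketch of how such a dialogue could be driven through the completion-style API OpenAI exposed for GPT-3 at the time; the prompt wording and the “Wise Being” persona framing are my own illustrative assumptions, not the post’s actual setup:

```python
# Minimal sketch of a GPT-3 conversation turn, assuming access to OpenAI's
# 2020-era completion API (pip install openai). The persona and prompt
# wording are illustrative, not those used in the post.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = ("The following is a conversation with a wise being that has "
          "read most of what humanity has publicly written.\n"
          "Human: What is love?\n"
          "Wise Being:")

response = openai.Completion.create(
    engine="davinci",    # the largest GPT-3 engine available at launch
    prompt=prompt,
    max_tokens=100,      # length of the generated reply
    temperature=0.7,     # some creativity, but not incoherent
    stop=["Human:"],     # stop before the model writes the human's next turn
)
print(response.choices[0].text.strip())
```

The stop sequence is what makes the exchange feel like a turn-by-turn conversation: it prevents the model from continuing on to write the human’s next line itself.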

The conclusion is that very soon we’ll be chatting with AI bots without even realizing it. They will regurgitate our entire body of knowledge in the right way, providing deep reach into our collective culture and production.


How Non-Conformists Must Find New Safe Spaces

In his comprehensive post ‘The Four Quadrants of Conformism‘, Paul Graham addresses which type of person actually moves the world (hint: they are quite few in number). He also exposes the challenges raised by increasing conformism in the current world.

The quadrants of conformism are obtained by plotting a degree-of-conformism axis against a passive/active axis. This creates roughly four types of people, and the type would be linked more to personality than to cultural influence. There are more conventional-minded than independent-minded people, and fewer active/aggressive people than passive ones.

“Why do the independent-minded need to be protected, though? Because they have all the new ideas. To be a successful scientist, for example, it’s not enough just to be right. You have to be right when everyone else is wrong. Conventional-minded people can’t do that. For similar reasons, all successful startup CEOs are not merely independent-minded, but aggressively so. So it’s no coincidence that societies prosper only to the extent that they have customs for keeping the conventional-minded at bay.”

“In the last few years, many of us have noticed that the customs protecting free inquiry have been weakened.” We are reverting to a pre-Enlightenment situation where people were expected to be passive and conventional. The fact that universities are becoming places where intolerance is prevalent, while they have historically been, on the contrary, places of tolerance and investigation, is a worry. This safe space has not been replaced by the internet or other safe locations.

“Though I’ve spent a lot of time thinking about this situation, I can’t predict how it plays out. Could some universities reverse the current trend and remain places where the independent-minded want to congregate? Or will the independent-minded gradually abandon them? I worry a lot about what we might lose if that happened.”

“But I’m hopeful long term. The independent-minded are good at protecting themselves. If existing institutions are compromised, they’ll create new ones. That may require some imagination. But imagination is, after all, their specialty.”

Our increasingly conventional society is a worry, but I believe we underestimate the possibilities for free inquiry as information becomes ever more available. Innovators used to have to be close to universities and their incomparable libraries; this constraint is now obsolete, and virtual locations will develop that offer the same possibilities.


How to Overcome Bias in Project Estimates by Involving Generalists in Systemic Reviews

To finish our current series of posts exploring the excellent book ‘Range: Why Generalists Triumph in a Specialized World‘ by David Epstein, I noted how the concepts developed about generalists vs specialists also apply to the field of project definition. It takes generalists and a diverse set of viewpoints to test the adequacy of a project definition file and its associated estimate.

“Bent Flyvbjerg, chair of Major Programme Management at Oxford University’s business school, has shown that around 90 percent of major infrastructure projects worldwide go over budget (by an average of 28 percent) in part because managers focus on the details of their project and become overly optimistic. Project managers can become like Kahneman’s curriculum-building team, which decided that thanks to its roster of experts it would certainly not encounter the same delays as did other groups. Flyvbjerg studied a project to build a tram system in Scotland, in which an outside consulting team actually went through an analogy process akin to what the private equity investors were instructed to do. They ignored specifics of the project at hand and focused on others with structural similarities. The consulting team saw that the project group had made a rigorous analysis using all of the details of the work to be done. And yet, using analogies to separate projects, the consulting team concluded that the cost projection of £320 million (more than $400 million) was probably a massive underestimate.

“This is a widespread phenomenon. If you’re asked to predict […], the more internal details you learn about any particular scenario […] the more likely you are to say that the scenario you are investigating will occur.”

This is why we observe again and again the immense benefit of independent project reviews performed by people who have a generalist overview and are not emotionally involved with the project, so as to obtain objective feedback. While this is what we promote, the fact that such a review is systemic and performed by generalists is also an essential part of the value delivered. I will highlight this more in the future.
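To make the consultants’ “outside view” concrete, here is a minimal sketch of reference-class forecasting under my own simplifying assumptions – the overrun ratios below are made-up figures, not data from the book or from Flyvbjerg’s database:

```python
# Minimal sketch of reference-class forecasting: adjust an inside-view
# estimate using the distribution of overruns on structurally similar
# past projects. All numbers are invented for illustration.
import statistics

# Hypothetical final-cost / initial-estimate ratios from comparable projects
# (the "reference class").
historical_overrun_ratios = [1.05, 1.10, 1.22, 1.28, 1.33, 1.45, 1.60]

def outside_view_estimate(inside_estimate, ratios, percentile=80):
    """Scale an inside-view estimate by a high percentile of past overruns.

    Taking a high percentile (rather than the mean) builds in a margin
    against the optimism bias of the inside view.
    """
    cut_points = statistics.quantiles(ratios, n=100)  # percentile cut points
    uplift = cut_points[percentile - 1]
    return inside_estimate * uplift

# Applied to the tram project's £320m inside-view cost projection:
print(f"Outside-view estimate: £{outside_view_estimate(320, historical_overrun_ratios):.0f}m")
```

The specific numbers do not matter; the mechanism does: the estimate is corrected by the experience of the reference class rather than by ever more internal detail about the project itself.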


How Learning Approaches Must Be Different in Complexity: Upending the 10,000-Hour Rule

Following on from our previous post ‘How Generalists Are Necessary for the Collaborative Age‘, let’s continue our exploration of the excellent book ‘Range: Why Generalists Triumph in a Specialized World‘ by David Epstein. One of the main topics of the book is to show that the famous 10,000-hour rule for mastering an area of knowledge actually applies only to certain types of activities bound by clear rules: chess, music, golf. It does not apply to mastering complexity or to any activity that does not share those characteristics.

“The bestseller Talent Is Overrated used the Polgar sisters and Tiger Woods as proof that a head start in deliberate practice is the key to success in “virtually any activity that matters to you.” The powerful lesson is that anything in the world can be conquered in the same way. It relies on one very important, and very unspoken, assumption: that chess and golf are representative examples of all the activities that matter to you.”

The concept of the 10,000-hour rule to master a practice is thus upended. Worse, “In 2009, Kahneman and Klein [found that] whether or not experience inevitably led to expertise, they agreed, depended entirely on the domain in question“. Sometimes, even, “In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons.”

Thus, in the real, complex world, actual learning must happen differently than by repeating the same exercise many times in a predictable environment. It probably requires exposure to many different situations. Learning also cannot be expected to be continuous: it is probably discontinuous, with ‘aha’ moments separated by the slow maturing of new understanding.

Quite some thoughts that upend a lot of common knowledge, and still more that call traditional education into question.


How Zero Interest Rates May Affect the Innovation Economy

This very interesting post ‘The Social Consequences of Zero Interest Rates‘ examines the possible long-term impact of this situation on innovation and the economy, taking as a model Japan, where the situation has prevailed longer than anywhere else.

The article shows that innovation has decreased significantly in Japan over recent decades, since the 1990s, which marked the end of Japan’s post-war catch-up and development phase. “Innovation ultimately has a lot to do with time preference in economic terms. Real innovations often only pay off years later, which is why innovative companies have to be prepared for a long haul. Zero interest rates counteract the power of innovation, because they almost always go hand in hand with higher time preference.” At the same time, wages stagnate and part-time employment grows. According to the author, all this negative evolution can be associated with high public debt and low interest rates.

This approach is interesting; however, I tend to observe the contrary: faced with very low interest rates, clever money looks for other places with potential gains, and innovative startups tend to be quite awash with money these days – raising funds has rarely been so easy. Money also tends to be invested in shares and other high-risk assets (which explains the high levels of the stock market). There are quite a few other factors at work in Japan that could explain decreased innovation, for example the rigidity of the labour market and traditional industrial-age employment practices.

What is certain is that low interest rates increase the price of assets and proportionally make it more difficult to acquire them on the basis of wages, decreasing people’s actual purchasing power and increasing inequality. However, the impact on innovation is not as obvious to me. What are your views?
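To see why low rates mechanically inflate asset prices, consider the textbook perpetuity valuation (a simplified illustration with made-up numbers, not figures from the post): an asset paying a fixed annual cash flow C, discounted at rate r, is worth roughly PV = C / r, so the price explodes as r approaches zero while wages stay flat.

```python
# Textbook perpetuity valuation with made-up numbers (not from the post):
# an asset paying a fixed annual cash flow C, discounted at rate r, is
# worth PV = C / r, so the price rises sharply as rates fall.
annual_cash_flow = 10_000  # e.g. the yearly rent of a flat, in euros

for rate in (0.05, 0.02, 0.01, 0.005):
    price = annual_cash_flow / rate
    print(f"discount rate {rate:.1%}: asset price = {price:>12,.0f}")

# 5% -> 200,000; 1% -> 1,000,000; 0.5% -> 2,000,000: a tenfold price rise
# with no change in the underlying income, nor in the buyer's wages.
```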
