How Machine Learning Will Lead to More Conformity and Less Creativity

One thing most people forget is that machine learning, the essence of today’s Artificial Intelligence (AI), is basically about reproducing the same patterns as those fed in during the learning process.

A typical machine learning neural network

Therefore, the introduction of AI will first lead to an increase in conformity. Anything out of the ordinary (i.e. outside the set of circumstances used for the learning process) will cause problems, misbehavior and defects.
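
This effect can be sketched in a few lines of Python. The data and model below are purely illustrative: a simple model fitted on a narrow range of circumstances looks fine inside that range, but misbehaves as soon as it is asked about anything outside it.

```python
import numpy as np

# Training data drawn from a narrow set of circumstances: x in [0, 1],
# where the true relationship y = x**2 looks almost linear.
x_train = np.linspace(0.0, 1.0, 50)
y_train = x_train ** 2

# Fit a simple linear model: it reproduces the patterns it was fed.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def predict(x):
    return slope * x + intercept

# Inside the training circumstances, the model looks fine...
in_range_error = abs(predict(0.5) - 0.5 ** 2)

# ...but outside of them ("out of the ordinary"), it fails badly.
out_of_range_error = abs(predict(5.0) - 5.0 ** 2)

print(in_range_error, out_of_range_error)
```

The same limitation applies, at a much larger scale, to neural networks: they interpolate well within the patterns they were trained on, and extrapolate poorly beyond them.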

If we take this observation further, it will not be possible to have AI achieve any kind of disruption. Disruption can be created by the human mind, as history shows. So, for a while there will be a significant difference between AI and the human mind: the ability to think out of the box and to create disruptive patterns. Or, exactly what is generally covered by the word ‘creativity’.

The massive irruption of AI in our lives will force some amount of conformity on us and that is a danger. At the same time, creativity will remain an unrivaled feature of the human mind.


How the Collaborative Age Abundance May Change our World

Cory Doctorow’s excellent Locus column ‘Cory Doctorow: The Jubilee: Fill Your Boots‘ raises interesting questions about the ultimate evolution of society in the Collaborative Age – what I would call the Abundance utopia.

The point Cory Doctorow makes is that it should be possible to live in a world of abundance by leveraging the rhythms of nature to minimize our environmental impact. The point is then to accept that things don’t necessarily work all the time. This drawback can be compensated for by technology-enabled coordination.

Technology hints at another model, one that hybridizes the pre-industrial rhythms of work and play and the super-modern ability to use computers to solve otherwise transcendentally hard logistics and coordination problems.

Using bright green, high tech coordination tools, we can restore the pastoral green, artisanal autonomy that privileges mindful play over mindless work. The motto of Magpie Killjoy’s Steampunk zine was “love the machine, hate the factory.” Love the dividends of coordinated labor, hate the loss of freedom we suffer when we have to coordinate with others.

I strongly encourage reading the column, and entertaining the thought that, freed from the need to coordinate large organizations, we could live a life far closer to the rhythms of nature while enjoying abundance.


How We Can Evolve To Be A Totally Different Person

Traditionally, many psychological and personality tests assume that their results won’t change dramatically over a lifetime. New research shows that this is a misconception: during our lifetime we change significantly and dramatically. Read the Quartz article ‘You’re a completely different person at 14 and 77, the longest-running personality study ever has found’.

The famous Myers-Briggs personality test

I have observed that the results of personality tests tend to be quite stable over 3-to-5-year periods, and this is quite a common observation. However, people do evolve and have different experiences, and what these studies show is that over a lifetime (50+ years) our preferences are no longer correlated with those we had initially.

“The longer the interval between two assessments of personality, the weaker the relationship between the two tends to be,” the researchers write. “Our results suggest that, when the interval is increased to as much as 63 years, there is hardly any relationship at all.”

This is great news because it demonstrates that we can change ourselves if we want to, and that there does not seem to be any limit to our capability to completely overhaul ourselves.

So, ready for change?


How to Reconcile Two Opposite Sides of Human Factor

In too many accidents, ‘human error’ is found to be at fault. Autopilots and Artificial Intelligence are developed with the aim of reducing the frequency of accidents. At the same time, only humans can deal with certain unexpected situations and find ways to manage them. These are two sides of the coin of the ‘human factor’, and we are struggling to reconcile them.

One of the issues of the Fourth Revolution is that the border between intelligent automation and the area that still requires active human input is moving fast. Commercial aircraft flying is already largely automated. In a few years, automobile driving will be automated to drastically lower accident rates. Still, there will always be situations where humans need to take over because they go beyond what the automation can deal with. This means that, increasingly, humans will mostly monitor automated systems while at the same time being required to deal with extraordinary situations.

This is because humans are expected to be able to take a wider, more systemic view of the situation and find a way to move forward. This type of intervention will be prone to a high rate of failure, particularly when there is little time to analyse the situation. Still, we will continue to rely on this human intervention in extreme cases, sometimes with unsatisfactory results.

Finding exactly how humans can contribute best and setting up the right ergonomics so that this intervention is effective is a key area of research.

How strange it is that we complain loudly about the fallibility of humans while still expecting them to deal with those extraordinary situations automated systems can’t handle!


How to Deal With Pareto in the Collaborative Age

The Pareto principle, also called the 80/20 principle, is a characteristic of complex systems: a small part of the system accounts for 80% of its effects, sales or whatever is being considered. However, the way this insight is used needs to be considered carefully.

Serious publications like the Harvard Business Review are now considering that ‘AI Is Going to Change the 80/20 Rule‘. The paper explains that Big Data can be used to better understand where the Pareto distributions lie, and to help change marketing or production parameters accordingly.

Of course, if one finds that 80% of the value or profit is generated by 20% of the sales, the tendency will be to slash the 80% of unprofitable sales and concentrate on the high-value ones. But that is not necessarily the cleverest decision to take.

For example, new products and innovative services will not be part of the top profitable sales. Is it a good idea to slash them if they represent the future? Also, clients that currently belong to the long tail may suddenly become part of the core business.

Therefore the trick is not just to identify what creates the most value, but to know how to manage the long tail of the 80% not-so-profitable business. This part might still be indispensable to the entire setup. Don’t slash it without thinking!
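
As a toy illustration with entirely made-up revenue figures, the head and the tail of a Pareto-like distribution can be computed in a few lines:

```python
# Hypothetical yearly revenue per client, sorted from largest to smallest.
revenues = [400, 250, 120, 60, 40, 30, 25, 20, 18, 12,
            10, 8, 7, 6, 5, 4, 3, 3, 2, 2]

total = sum(revenues)
top_clients = revenues[: len(revenues) // 5]   # the top 20% of clients
head_share = sum(top_clients) / total          # their share of revenue
tail_share = 1.0 - head_share                  # the long tail's share

print(f"top 20% of clients -> {head_share:.0%} of revenue")
```

Note that the long tail here still represents close to 20% of revenue, and possibly tomorrow’s core clients: the distribution is worth knowing, but it is not an instruction to cut.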


Why You Should Not Tell Your Friends Luck is the Reason for their Success

“Don’t try to tell your successful friends that they’re lucky. We saw that when Obama gave his speech in 2012 and Elizabeth Warren gave a similar speech, people didn’t like that. Those speeches were completely reasonable […], but people didn’t hear the reasonable part. The message they heard was that they didn’t deserve their success.” This recommendation is given by Robert Frank, the author of ‘Success and Luck: Good Fortune and the Myth of Meritocracy‘, in an interview.

A large part of success is luck, that is not to be denied. However, we still ascribe most of it to hard work and talent. The right way to investigate the share of luck is to ask the right question.

He continues: “That’s not the message of those speeches. If you want people to think about the fact that they’ve been lucky, don’t tell them that they’ve been lucky. Ask them if they can think of any examples of times when they might have been lucky along their path to the top.”

“I’ve tried this many, many times and can report to you that the successful people who would get angered and defensive if they were reminded that they were lucky, instead don’t get angry or defensive at all when they think about the question, “Can you think of examples of times when you were lucky?” Instead their eyes light up, they try to think of examples, they recount one to you, and that prompts them to remember another one, they tell you about that one too, and soon they’re talking about investments we ought to be making.”


How History Forgets That Searching for Truth Entails Risks

“Because biographies of famous scientists tend to edit out their mistakes, we underestimate the degree of risk they were willing to take. And because anything a famous scientist did that wasn’t a mistake has probably now become the conventional wisdom, those choices don’t seem risky either,” writes Paul Graham in an excellent short post, ‘The Risk of Discovery‘.

“Biographies of Newton, for example, understandably focus more on physics than alchemy or theology. The impression we get is that his unerring judgment led him straight to truths no one else had noticed. How to explain all the time he spent on alchemy and theology? Well, smart people are often kind of crazy.” (And it seems Newton’s dog helped burn his alchemy writings as well.)

There are at least two interesting learning points from this reflection:

  • People who truly seek new truths at the border of knowledge will seem a bit crazy and will investigate potential avenues, some of which might not prove fruitful in the end. They will also call mainstream knowledge into question, which can be dangerous for them.
  • History only highlights what becomes the new mainstream knowledge, forgetting the rest and deleting it from collective memory. But that is reductive, because we don’t know what will become mainstream in the future.

So it is quite normal to take risks if we strive to advance science and find new truths. Taking risks is part of it. Let’s not be stopped by it!


How Drone Hives Are Becoming a Reality

Did you think that hives of drones coming at us to overwhelm our defenses were the stuff of Hollywood movies? They are becoming real, and they have the potential to change the battlefield significantly, in particular because hives of drones do not require each drone to be individually controlled, which makes the technology very resilient.

The military usage of small drones is probably the new factor to be taken into account on the battlefield. There has been news of the Islamic State using small commercial drones carrying explosives (see for example ‘Pentagon confirms new threat from ISIS: exploding drones‘).

One very impressive video, shown below, demonstrates the usage of drone hives, still under development. Hives of small drones are dropped from fighter jets and then behave like hives: each drone is not individually piloted; the hive has a collective behavior, with each drone reacting to the others’. That’s really impressive, in particular the buzz of the drone hive homing in at the end of the video!


How Personal Independence and Autonomy are Different Concepts

Independence and autonomy might seem quite similar but there is a substantial difference: autonomy does not preclude asking for support and help, while independence does.

Before proceeding further, let’s note that we apply these terms here in the personal sense and not in the diplomatic sense.

This distinction between the two concepts is essential because it shows that being independent is far more limiting than being autonomous. Autonomy implies being able to take one’s own decisions while at the same time drawing on help and support from others to reach one’s goals.

This is why we should strive personally for autonomy, not independence.


How Democracy is Adapted to a Complex World

Democracy is the political regime best adapted to complexity. The reason is that it allows bifurcations to happen at every election, i.e. depending on the country, every 4 to 7 years. Those changes can be unexpected and worrying, but they happen more frequently and, one hopes, less abruptly than in other political regimes.

US election surprise!

Elections always create surprises, in particular in troubled times, and this was amply demonstrated in 2016, when several Western countries saw a reaction against the establishment from people who feel left aside by the world’s transformation (Brexit, the Trump election).

Being able to implement such important changes at this frequency is a good property for a system set up to manage a complex world.

Other political regimes only allow such changes much less frequently; as a result, changes there are more abrupt and can even degenerate into civil war.

We concur wholeheartedly with Churchill’s saying that “democracy is the worst form of government except all the others that have been tried“! And this conclusion on democracy should be kept in mind when we are not happy with election results.


How New Regulatory Approaches Could Be Structured for AI-Driven Technology

Following up on our post ‘How Artificial Intelligence Challenges Our Regulatory Approach to System Risk‘, in this post let’s discuss some possible new regulatory approaches.

  • Capturing lessons learnt and redistributing them to the entire ecosystem is a cornerstone of safety enhancement. In the case of Artificial Intelligence (AI), this is much facilitated by the possibility of remote updates, as demonstrated successfully by Tesla.
  • Implementing a statistical approach instead of a deterministic one. Some statistical risk analysis approaches have been available for years in the form of fault trees, used to determine the probability of a feared accident. However, this only works in environments where statistical failure data for components is available, and with limited changes to the environment and the system. New statistical approaches will have to be developed based on specific testing of the entire AI-related system. These approaches need to be developed theoretically and empirically, and they remain the major challenge of the years to come.
  • Rules governing the operability of the system in case of component failure will have to be strictly defined and enforced (with how many sensors out of order is it still safe to drive autonomously?), because degraded situations are the most difficult and cumbersome to regulate.
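
The sensor question in the last bullet can at least be framed quantitatively. Assuming, purely for illustration, that sensor failures are independent (a strong assumption in practice), a k-out-of-n availability figure follows directly from the binomial distribution:

```python
from math import comb

def prob_at_least_k_working(n, k, p_fail):
    """Probability that at least k of n independent sensors are working,
    given a per-sensor probability of being out of order."""
    p_ok = 1.0 - p_fail
    return sum(comb(n, m) * p_ok**m * p_fail**(n - m)
               for m in range(k, n + 1))

# E.g. a vehicle that needs at least 6 of its 8 sensors working,
# each with a (hypothetical) 1% chance of failure on a given trip.
availability = prob_at_least_k_working(n=8, k=6, p_fail=0.01)
print(f"{availability:.6f}")
```

Real regulation would of course have to account for correlated failures (a single mud splash can blind several sensors at once), which is exactly why degraded modes are so hard to regulate.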

Developing these new statistical approaches to safety demonstration is an exciting problem facing all regulators. I am looking for some science behind this, so if any reader has useful links, please share!


How Artificial Intelligence Challenges Our Regulatory Approach to System Risk

Our current approaches to the regulation of system risk management and the prevention of deadly accidents remain very much deterministic. In the most critical applications, such as nuclear power plants or aircraft controls, regulatory authorities require a deterministic demonstration of the links between inputs and outputs. Superfluous code that is not used needs to be removed, just in case. Older processors are used whose reactions are fully known.

How to fully test HAL’s reactions to all possible events?

With the advances of Artificial Intelligence, this won’t be possible any more, in particular because devices become black boxes that have learned to behave in a certain manner, most of the time, when exposed to certain stimuli. A deterministic proof of the relationship between input and output is impossible, and we don’t quite know how it really works inside. Safety can only be a statistical measure.
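
One statistical tool that hints at the scale of the challenge is the classical ‘rule of three’: if a black-box system shows zero failures in n independent trials, an approximate 95% upper confidence bound on its failure rate is about 3/n. A quick sketch (the trial count below is hypothetical):

```python
import math

def failure_rate_upper_bound(n_trials, confidence=0.95):
    """Approximate upper confidence bound on a black-box failure rate
    after observing ZERO failures in n independent trials
    (the classical 'rule of three', generalised as -ln(1 - c) / n)."""
    return -math.log(1.0 - confidence) / n_trials

# Testing an AI system over 1,000,000 failure-free trials only
# demonstrates a failure rate below ~3 per million, at 95% confidence.
bound = failure_rate_upper_bound(1_000_000)
print(bound)
```

This is why a purely statistical safety demonstration of very low failure rates requires enormous test volumes, and why regulators will need approaches beyond brute-force testing.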

This situation is a substantial challenge for the regulatory authorities that will have to regulate safety-critical applications based on AI, such as automated driving. Most current regulatory approaches will become obsolete.

Some regulatory authorities have identified this challenge but most have not, although this will constitute a real revolution in regulation.
