I have observed that the results of personality tests tend to be quite stable over 3-to-5-year periods, and this is a common observation. However, people do evolve and have different experiences, and what these studies show is that over a lifetime (50+ years) our preferences are no longer correlated with those we started with.
“The longer the interval between two assessments of personality, the weaker the relationship between the two tends to be,” the researchers write. “Our results suggest that, when the interval is increased to as much as 63 years, there is hardly any relationship at all.”
This is great news because it demonstrates that we can change ourselves if we want to, and that there does not seem to be any limit to our capability to completely overhaul ourselves.
In too many accidents, 'human error' is found to be at fault. Autopilots and Artificial Intelligence are developed with the aim of diminishing the frequency of accidents. At the same time, only humans can deal with certain unexpected situations and find ways to manage them. These are the two sides of the coin of the 'human factor', and we are struggling to reconcile them.
One of the issues of the Fourth Revolution is that the border between intelligent automation and the area that still requires active human input is moving fast. Commercial aircraft flying is already largely automated. In a few years, automobile driving will be automated to drastically lower accident rates. Still, there will always be situations that go beyond what the automation can deal with, where humans need to take over. And this means that, increasingly, humans will largely monitor automated systems while at the same time being required to deal with extraordinary situations.
This is because humans are expected to take a wider, more systemic view of the situation and find a way to move forward. This type of intervention will be prone to a high rate of failure, in particular when there is little time to analyse the situation. Still, we will continue to rely on this human intervention in extreme cases, sometimes with unsatisfactory results.
Finding exactly how humans can contribute best and setting up the right ergonomics so that this intervention is effective is a key area of research.
How strange it is that we complain loudly about the fallibility of humans and at the same time expect them to deal with those extraordinary situations automated systems can't handle!
The Pareto principle, also called the 80/20 principle, is a characteristic of complex systems: a small part of the system accounts for 80% of its effects, sales, or whatever is being considered. However, the way it is used needs to be considered carefully.
Serious publications have taken this up: a Harvard Business Review article argues that 'AI Is Going to Change the 80/20 Rule'. The article explains that Big Data can be used to better understand where the Pareto distributions lie and to adjust marketing or production parameters accordingly.
Of course, if one finds that 80% of the value or profit is generated by 20% of the sales, the tendency will be to slash the 80% of unprofitable sales and concentrate on the high-value ones. But that is not necessarily the cleverest decision to take.
For example, new products and innovative services will not be part of the top profitable sales. Is it a good idea to slash them if they represent the future? Also, clients that currently belong to the long tail may suddenly become part of the core business.
Therefore the trick is not just to identify what creates most value, but to know how to manage the long tail of the 80% not-so-profitable business. This part might still be indispensable to the entire setup. Don't slash it without thinking!
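As a minimal sketch of the kind of analysis involved, the split between the profitable head and the long tail can be computed from per-client revenues. The figures below are purely illustrative, not real data:

```python
# Minimal sketch: share of revenue generated by the top 20% of clients.
# The revenue figures below are illustrative, not real data.

def pareto_split(revenues, top_fraction=0.2):
    """Return (top_share, tail_share): fractions of total revenue coming
    from the top `top_fraction` of clients vs. the long tail."""
    ordered = sorted(revenues, reverse=True)
    cutoff = max(1, round(len(ordered) * top_fraction))
    total = sum(ordered)
    top = sum(ordered[:cutoff])
    return top / total, (total - top) / total

# A skewed, Pareto-like distribution: a few large clients, many small ones.
revenues = [500, 300, 80, 60, 40, 30, 20, 15, 10, 5]
top_share, tail_share = pareto_split(revenues)
print(f"top 20% of clients: {top_share:.0%} of revenue")
print(f"long tail (80%):    {tail_share:.0%} of revenue")
```

The point of the post is precisely that the `tail_share` number, however small, is not automatically a candidate for elimination.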
“Don’t try to tell your successful friends that they’re lucky. We saw that when Obama gave his speech in 2012 and Elizabeth Warren gave a similar speech, people didn’t like that. Those speeches were completely reasonable […], but people didn’t hear the reasonable part. The message they heard was that they didn’t deserve their success.” This recommendation is given in an interview by Robert Frank, the author of ‘Success and Luck: Good Fortune and the Myth of Meritocracy‘.
A large part of success is luck, that is not to be denied. However, we still ascribe most of it to hard work and talent. The trick is to investigate the share of luck by asking the right question.
He continues: “That’s not the message of those speeches. If you want people to think about the fact that they’ve been lucky, don’t tell them that they’ve been lucky. Ask them if they can think of any examples of times when they might have been lucky along their path to the top.”
“I’ve tried this many, many times and can report to you that the successful people who would get angered and defensive if they were reminded that they were lucky, instead don’t get angry or defensive at all when they think about the question, “Can you think of examples of times when you were lucky?” Instead their eyes light up, they try to think of examples, they recount one to you, and that prompts them to remember another one, they tell you about that one too, and soon they’re talking about investments we ought to be making.”
“Because biographies of famous scientists tend to edit out their mistakes, we underestimate the degree of risk they were willing to take. And because anything a famous scientist did that wasn’t a mistake has probably now become the conventional wisdom, those choices don’t seem risky either.” writes Paul Graham in an excellent short post, ‘the risk of discovery‘.
“Biographies of Newton, for example, understandably focus more on physics than alchemy or theology. The impression we get is that his unerring judgment led him straight to truths no one else had noticed. How to explain all the time he spent on alchemy and theology? Well, smart people are often kind of crazy.” (and it seems Newton’s dog helped burn his alchemy writings as well).
There are at least two interesting learning points from this reflection:
People who truly seek new truths at the border of knowledge will seem a bit crazy and will investigate potential avenues, some of which might not prove fruitful in the end. And they will question mainstream knowledge, which can be dangerous for them.
History only highlights what becomes the new mainstream knowledge, forgetting about the rest and deleting it from collective memory. But that is reductive, because we don't know what will become mainstream in the future.
So it is quite normal to take risks if we strive to advance science and find new truths. Taking risk is part of it. Let's not be stopped by it!
Did you think that swarms of drones coming upon us to overcome our defenses were the stuff of Hollywood movies? They are becoming real, and they have the potential to change the battlefield significantly, in particular because drone swarms do not require each drone to be individually controlled, which makes the technology very resilient.
A very impressive video, shown below, demonstrates the use of drone swarms, a technology still under development. Swarms of small drones are dropped from fighter jets and then behave collectively. The point is that no drone is individually piloted; the swarm exhibits a collective behavior, with each drone reacting to the others. That's really impressive, in particular the buzz of the drone swarm homing in at the end of the video!
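This kind of collective behavior can be sketched with boids-style rules (cohesion and alignment with nearby neighbours), a standard textbook illustration of decentralized flocking. The actual control laws of military drone swarms are not public, so the toy 1D model below is purely conceptual, with all parameters chosen for illustration:

```python
# Toy sketch of swarm behaviour: each drone reacts only to its neighbours,
# with no central pilot. Boids-style rules (cohesion, alignment) are a
# standard illustration; the real control laws of military drone swarms
# are not public, so this model and its parameters are purely conceptual.
import random

def step(positions, velocities, radius=5.0):
    """One time step of a 1D swarm: each drone blends its own velocity
    with its neighbours' mean velocity (alignment) and steers towards
    the neighbours' centre (cohesion)."""
    new_v = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        neigh = [j for j in range(len(positions))
                 if j != i and abs(positions[j] - p) < radius]
        if neigh:
            centre = sum(positions[j] for j in neigh) / len(neigh)   # cohesion
            mean_v = sum(velocities[j] for j in neigh) / len(neigh)  # alignment
            v = 0.6 * v + 0.3 * mean_v + 0.1 * (centre - p)
        new_v.append(v)
    return [p + v for p, v in zip(positions, new_v)], new_v

random.seed(1)
pos = [random.uniform(0, 20) for _ in range(10)]
vel = [random.uniform(-1, 1) for _ in range(10)]
for _ in range(100):
    pos, vel = step(pos, vel)
# In a connected swarm, the local rules tend to align the drones' velocities
# over time, so the group moves as one with no central controller.
spread = max(vel) - min(vel)
print(f"velocity spread after 100 steps: {spread:.3f}")
```

No drone in this sketch knows the state of the whole swarm, which is exactly what makes the approach resilient: losing one drone does not break the others.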
Independence and autonomy might seem quite similar but there is a substantial difference: autonomy does not preclude asking for support and help, while independence does.
Before proceeding further, let’s note that we apply those terms here in the personal sense and not in the diplomatic sense.
This distinction between the two concepts is essential because it shows that being independent is far more limiting than being autonomous. Autonomy implies being able to make one's own decisions while at the same time drawing on help and support from others to reach one's goals.
This is why we should strive personally for autonomy, not independence.
Democracy is the political regime best adapted to complexity. The reason is that it allows bifurcations to happen at every election, i.e. every 4 to 7 years depending on the country. Those changes can be unexpected and worrying, but they happen more frequently and, one hopes, less abruptly than in other political regimes.
Elections always create surprises, in particular in troubled times, as was amply demonstrated in 2016, when several western countries saw a reaction against the establishment by people who feel left aside by the world's transformation (Brexit, the Trump election).
Being able to implement such important changes at this frequency is a good property of a system set up to manage a complex world.
Other political regimes in fact allow such changes much less frequently; the changes are therefore more abrupt and can even degenerate into civil wars.
We concur wholeheartedly with Churchill's saying that "democracy is the worst form of government except all the others that have been tried"! And this conclusion on democracy should be kept in mind when we are not happy with election results.
Enhancing lessons learnt and redistributing them to the entire ecosystem is a cornerstone of safety enhancement. This is much easier in the case of Artificial Intelligence (AI) thanks to the possibility of remote updates, as demonstrated successfully by Tesla.
Implementing a statistical approach instead of a deterministic one. Some statistical risk analysis approaches have been available for years in the form of fault trees, used to determine the probability of a feared accident. However, this only works in environments where statistical failure data for components is available, and with limited changes to the environment and the system. New statistical approaches will have to be developed, based on specific testing of the entire AI-related system. These approaches need to be developed theoretically and empirically, and they remain the major challenge of the years to come.
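The classical fault-tree approach mentioned above can be sketched very simply: independent component failure probabilities are combined through AND and OR gates to estimate the probability of the feared top event. All failure rates and the sensor/power scenario below are illustrative assumptions, not data from any real system:

```python
# Minimal fault-tree sketch: combining independent component failure
# probabilities through AND/OR gates to estimate the probability of a
# feared top event. All failure probabilities below are illustrative.

def p_and(*probs):
    """AND gate: the event occurs only if all inputs fail (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """OR gate: the event occurs if any input fails (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical feared event: loss of obstacle detection. It occurs if the
# main sensor AND its backup both fail, OR if a shared power supply fails.
p_sensor, p_backup, p_power = 1e-3, 1e-2, 1e-5
p_top = p_or(p_and(p_sensor, p_backup), p_power)
print(f"probability of feared event: {p_top:.2e}")
```

This is exactly what breaks down for an AI black box: there is no component breakdown with known failure rates to plug into the tree, hence the need for new, system-level statistical approaches.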
Rules governing operability of the system in case of component failure will have to be strictly defined and enforced (with how many sensors out of order is it still safe to drive autonomously?), because degraded situations are the most difficult and cumbersome to regulate.
The problem of the new statistical approaches to safety demonstration is an exciting problem facing all regulators. I am looking for some science behind this, if any reader has useful links please share!
Our current approaches to the regulation of system risk management and the prevention of deadly accidents remain very much deterministic. In the most critical applications, such as nuclear power plants or aircraft controls, regulatory authorities require a deterministic demonstration of the links between inputs and outputs. Superfluous code that is not used must be removed, just in case. Older processors whose reactions are fully known are used.
With the advances of Artificial Intelligence, this won't be possible any more, in particular because the devices become black boxes that have learned to behave in a certain manner most of the time when exposed to certain stimuli. A deterministic proof of the relationship between input and output is impossible, and we don't quite know how it really works inside. It can only be a statistical measure.
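To make this concrete, here is a sketch of what a statistical (rather than deterministic) safety claim can look like: run the black-box system through many independent trials and, if no failures are observed, derive an upper confidence bound on its true failure rate. The zero-failure case has a well-known closed form (the classical "rule of three": the 95% upper bound is roughly 3/n). The trial count is an arbitrary illustration:

```python
# Sketch of a statistical safety claim for a black-box system: after n
# independent trials with zero observed failures, an exact binomial
# argument bounds the true failure probability. For 95% confidence the
# bound is close to the classical "rule of three" value of 3/n.

def failure_rate_upper_bound(n_trials, n_failures=0, confidence=0.95):
    """Exact binomial upper bound for the zero-failure case:
    solve (1 - p)^n = 1 - confidence for p."""
    if n_failures != 0:
        raise NotImplementedError("this sketch covers the zero-failure case only")
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

n = 100_000  # e.g. 100,000 simulated driving scenarios with no accident
bound = failure_rate_upper_bound(n)
print(f"95% upper bound on failure rate: {bound:.2e} (rule of three: {3 / n:.2e})")
```

Note what such a demonstration does and does not say: it never proves the system cannot fail, only that its failure rate is below a bound with a stated confidence, and the bound shrinks only linearly with the amount of testing, which is why testing-based safety cases for rare accidents are so expensive.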
This situation is a major challenge for the regulatory authorities that will have to regulate safety-critical applications based on AI, such as automated driving. Most current regulatory approaches will become obsolete.
Some regulatory authorities have identified this challenge but most have not, although this will constitute a real revolution in regulation.
There are numerous definitions of leadership. Seen from the complexity view, a leader is someone who is able to create locally, more or less broadly, some alignment inside a complex organization.
In a complex system it is certainly difficult to create any sort of alignment. Contributors all have their own interests and are highly interdependent, linked and related to other contributors. However, when one is able to create a dynamic movement and bring along the necessary contributors, astonishing things can happen. That is probably what leadership in a complex world means.
This may be a new definition of leadership. At the same time I believe it is a useful approach to this issue. Seen from that perspective, a number of leadership practices become clearer and more founded in actual science.
As a leader, impress movement on complexity. It will be even more powerful than you believe.
Complex and chaotic systems can be described by mathematical equations that are in fact an extension and generalization of the equations of Quantum Mechanics. That's what Ilya Prigogine (Nobel Prize winner in 1977) explains in his excellent book 'The Laws of Chaos' (apparently not available in English, unfortunately).
We have argued numerous times that one of the precursors of the Fourth Revolution is the emergence of Quantum Mechanics, or at least of the limits found to the Newtonian Mechanics that founded the Industrial Age. The science of complexity and chaos is even newer. The finding that an extension and generalization of the mathematics of Quantum Mechanics is needed to describe it confirms our observation that it constitutes a further step towards the underlying paradigm of the Collaborative Age.
Complexity is still vastly misunderstood because it creates a rupture with the comfortable deterministic view of the world which we entertained for centuries. Its probabilistic nature and the fact that mere observation changes the observed world (as in Quantum Mechanics) make it even more fascinating.
Welcome to the world beyond Quantum Mechanics and the Uncertainty Principle.