“Don’t try to tell your successful friends that they’re lucky. We saw that when Obama gave his speech in 2012 and Elizabeth Warren gave a similar speech, people didn’t like that. Those speeches were completely reasonable […], but people didn’t hear the reasonable part. The message they heard was that they didn’t deserve their success.” This recommendation is given by Robert Frank, the author of ‘Success and Luck: Good Fortune and the Myth of Meritocracy’, in an interview.
A large part of success is luck; that is not to be denied. However, we still ascribe most of it to hard work and talent. The right way to investigate the share of luck is to ask the right question.
He continues: “That’s not the message of those speeches. If you want people to think about the fact that they’ve been lucky, don’t tell them that they’ve been lucky. Ask them if they can think of any examples of times when they might have been lucky along their path to the top.”
“I’ve tried this many, many times and can report to you that the successful people who would get angered and defensive if they were reminded that they were lucky, instead don’t get angry or defensive at all when they think about the question, “Can you think of examples of times when you were lucky?” Instead their eyes light up, they try to think of examples, they recount one to you, and that prompts them to remember another one, they tell you about that one too, and soon they’re talking about investments we ought to be making.”
“Because biographies of famous scientists tend to edit out their mistakes, we underestimate the degree of risk they were willing to take. And because anything a famous scientist did that wasn’t a mistake has probably now become the conventional wisdom, those choices don’t seem risky either,” writes Paul Graham in an excellent short post, ‘The Risk of Discovery’.
“Biographies of Newton, for example, understandably focus more on physics than alchemy or theology. The impression we get is that his unerring judgment led him straight to truths no one else had noticed. How to explain all the time he spent on alchemy and theology? Well, smart people are often kind of crazy.” (and it seems Newton’s dog helped burn his alchemy writings as well).
There are at least two interesting learning points from this reflection:
People who truly seek new truths at the border of knowledge will seem a bit crazy: they will investigate potential avenues, some of which might not prove fruitful in the end, and they will call mainstream knowledge into question, which can be dangerous for them.
History only highlights what becomes the new mainstream knowledge, forgetting about the rest and deleting it from collective memory. That is reductive, because we don’t know what will become mainstream in the future.
So it is quite normal to take risks if we strive to advance science and find new truths. Taking risks is part of the endeavour. Let’s not let that stop us!
Did you think that swarms of drones coming at us to overcome our defenses were something from a Hollywood movie? They are becoming real, and they have the potential to change the battlefield significantly, in particular because drone swarms do not require each drone to be individually controlled – this makes the technology very resilient.
A very impressive video of this usage, still under development, is shown below. Swarms of small drones are dropped from fighter jets and then behave as a single swarm: no drone is individually piloted; the swarm has a collective behavior, with each drone reacting to the behavior of the others. That’s really impressive, in particular the buzz of the drone swarm homing in at the end of the video!
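To make the idea of collective behavior without a central pilot concrete, here is a toy flocking sketch in Python. It is an illustration only, not any real drone control law: the coefficients and the simple cohesion/separation rules are invented for the example. Each drone computes its next move purely from the positions of the others.

```python
# Toy flocking model: each drone steers using only its neighbors' positions.
# Cohesion pulls a drone toward the centroid of the others; separation pushes
# it away from neighbors that come too close. No central controller exists.

def swarm_step(positions, cohesion=0.05, separation=0.15, min_dist=1.0):
    new_positions = []
    for i, (x, y) in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        cx = sum(p[0] for p in others) / len(others)
        cy = sum(p[1] for p in others) / len(others)
        dx = cohesion * (cx - x)  # steer toward the local centroid
        dy = cohesion * (cy - y)
        for ox, oy in others:     # push away from too-close neighbors
            if abs(ox - x) + abs(oy - y) < min_dist:
                dx -= separation * (ox - x)
                dy -= separation * (oy - y)
        new_positions.append((x + dx, y + dy))
    return new_positions

# Four drones dropped far apart; after some steps they contract into a
# cohesive group around their common centre (5, 5), with no drone piloted.
drones = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(50):
    drones = swarm_step(drones)
```

The emergent grouping comes only from each drone's local reactions, which is the point: the swarm keeps functioning even if individual drones are lost.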
Independence and autonomy might seem quite similar but there is a substantial difference: autonomy does not preclude asking for support and help, while independence does.
Before proceeding further, let’s note that we apply those terms here in the personal sense and not in the diplomatic sense.
This distinction between the two concepts is essential because it shows that being independent is far more limiting than being autonomous. Autonomy implies being able to make one’s own decisions while at the same time drawing on help and support from others to reach one’s goals.
This is why we should strive personally for autonomy, not independence.
Democracy is the political regime best adapted to complexity. The reason is that it allows bifurcations to happen at every election, i.e., depending on the country, every 4 to 7 years. Those changes can be unexpected and worrying, but they happen more frequently and – one hopes – less abruptly than in other political regimes.
Elections always create surprises, particularly in troubled times, as was heavily demonstrated in 2016, when in several western countries there was a reaction against the establishment from people who feel left aside by the world’s transformation (Brexit, the Trump election).
The ability to implement such important changes at this frequency is a good property for a system set up to manage a complex world.
Other political regimes in fact allow such changes only much less frequently; the changes are therefore more abrupt and can even degenerate into civil wars.
We wholeheartedly concur with Churchill’s saying that “democracy is the worst form of government except all the others that have been tried”! This conclusion on democracy should be kept in mind when we are not happy with election results.
Drawing out lessons learnt and redistributing them to the entire ecosystem is a cornerstone of safety enhancement. In the case of Artificial Intelligence (AI) it is much facilitated by the possibility of remote updates, as Tesla has successfully demonstrated.
Implementing a statistical approach instead of a deterministic one. Some statistical risk analysis approaches have been available for years in the form of fault trees, used to determine the probability of a feared event. However, this only works in environments where statistical failure data for components is available, and with limited changes to the environment and the system. New statistical approaches will have to be developed, based on specific testing of the entire AI-related system. These approaches need to be developed both theoretically and empirically and remain the major challenge of the years to come.
Rules governing operability of the system in case of component failure will have to be strictly defined and enforced (with how many sensors out of order is it still safe to drive autonomously?), because degraded situations are the most difficult and cumbersome to regulate.
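Such an operability rule could, in its simplest form, look like the following sketch. The sensor types and thresholds are invented for illustration; they do not come from any actual regulation.

```python
# Hedged sketch of a degraded-mode operability rule.
# Minimum number of working sensors per type (hypothetical values).
MIN_WORKING = {"lidar": 1, "camera": 2, "radar": 1}

def may_drive_autonomously(working_sensors):
    """working_sensors maps sensor type -> number currently operational."""
    return all(working_sensors.get(kind, 0) >= needed
               for kind, needed in MIN_WORKING.items())

print(may_drive_autonomously({"lidar": 1, "camera": 2, "radar": 1}))  # True
print(may_drive_autonomously({"lidar": 1, "camera": 1, "radar": 1}))  # False
```

The regulatory difficulty is precisely in justifying those thresholds: each entry in the table needs a safety demonstration behind it.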
The question of new statistical approaches to safety demonstration is an exciting one facing all regulators. I am looking for some science behind this – if any reader has useful links, please share!
Our current approaches to the regulation of system risk management and the prevention of deadly accidents remain very much deterministic. In the most critical applications, such as nuclear power plants or aircraft controls, regulatory authorities require a deterministic demonstration of the links between inputs and outputs. Superfluous code that is not used needs to be removed, just in case. Older processors whose reactions are fully known are used.
With the advances of Artificial Intelligence, this won’t be possible any more, in particular because the devices become black boxes that have learned to behave in a certain manner most of the time when exposed to certain stimuli. A deterministic proof of the relationship between input and output is impossible, and we don’t quite know how the system really works inside. Safety can only be a statistical measure.
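One simple statistical measure of this kind can be sketched as follows. The test harness and numbers are invented for illustration; the statistical ingredient is real, though: the classical “rule of three”, which says that if n independent test scenarios all pass, an approximate 95% upper confidence bound on the true failure rate is 3/n.

```python
# Sketch of a statistical safety claim for a black-box system.
# If all n scenarios pass, the "rule of three" bounds the failure rate by 3/n.

def run_scenario(i):
    # Stand-in for exercising the real black-box system on scenario i;
    # in this toy, every scenario passes.
    return True

n = 100_000
failures = sum(0 if run_scenario(i) else 1 for i in range(n))
if failures == 0:
    upper_bound = 3 / n  # approx. 95% confidence upper bound on failure rate
    print(f"95% upper bound on failure rate: {upper_bound:.1e}")
else:
    print(f"Observed failure rate: {failures / n:.1e}")
```

The sobering implication is the scale: demonstrating a failure rate below, say, one in a billion this way requires billions of representative test scenarios, which is why new theoretical approaches are needed.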
This situation is a substantial challenge for the regulatory authorities that will have to regulate safety-critical applications based on AI, such as autonomous driving. Most current regulatory approaches will become obsolete.
Some regulatory authorities have identified this challenge but most have not, although this will constitute a real revolution in regulation.
There are numerous definitions of leadership. Seen from the complexity view, a leader is someone who is able to create locally, more or less broadly, some alignment inside a complex organization.
In a complex system it is certainly difficult to create any sort of alignment. Contributors all have their own interests and are highly interdependent. However, when one is able to create a dynamic movement and bring along the necessary contributors, astonishing things can happen. That’s probably what leadership in a complex world means.
This may be a new definition of leadership. At the same time, I believe it is a useful approach to the issue. Seen from that perspective, a number of leadership practices become clearer and better grounded in actual science.
As a leader, impart movement in a complex world. It will be even more powerful than you believe.
Complex and chaotic systems can be described by mathematical equations that are in fact an extension and generalization of the equations of Quantum Mechanics. That’s what Ilya Prigogine (Nobel Prize winner in 1977) explains in his excellent book ‘The Laws of Chaos’ (apparently not available in English, unfortunately).
We have argued numerous times that one of the precursors of the Fourth Revolution is the emergence of Quantum Mechanics, or at least the limits found to Newtonian Mechanics, which founded the Industrial Age. The science of complexity and chaos is even newer. Finding that an extension and generalization of the mathematics of Quantum Mechanics is needed to describe it confirms our observation that it constitutes a further step towards the underlying paradigm of the Collaborative Age.
Complexity is still vastly misunderstood because it creates a rupture with the comfortable deterministic view of the world which we entertained for centuries. Its probabilistic nature and the fact that mere observation changes the observed world (as in Quantum Mechanics) make it even more fascinating.
Welcome to the world beyond Quantum Mechanics and the Uncertainty Principle.
The premise is that the intrinsic complexity and sophistication of the empire or organization increases over time up to a point where additional complexity is detrimental, in particular in the face of sudden external change. The institution is then unable to cope with the change. “When societies fail to respond to reduced circumstances through orderly downsizing, it isn’t because they don’t want to, it’s because they can’t.”
I find this model intriguing because, from my perspective, complexity rather increases reactivity and adaptation. I think the author mistakes complication for complexity. Adding layers of bureaucracy in a futile attempt at control is complication. Properly maintained complexity is rather an antidote to inflexibility. We should certainly fight organizational complication (and its embodiment, bureaucracy) but rather welcome complexity.
Research shows that we definitely have different ethical standpoints depending on the language we use. In particular, it would seem we are more deliberate (rational) when using a foreign language. There are several explanations for this – the effort needed to operate in the foreign language, or the fact that our native language carries many more emotional associations than a foreign one.
Whatever the deep explanation, this creates significant issues when working internationally, for example when negotiating an agreement with a counterpart in his or her native language. The fact that the party speaking a foreign language will be more deliberate and less emotional is rarely considered.
While the concentration of power is quite unavoidable in today’s complex world, we can still thrive in it. Of course, the institutions that hold the power and the wealth might not have the best intentions, and we should not be too naive. But thanks to the newly available technology of the Fourth Revolution, there is an intrinsic counter-power to this situation:
anybody can publish to the world, for free (or close to it),
we can coordinate, re-group and communicate globally, for free (or close to it),
it is possible to start a business for a lot less money than before, and to have instantly a global footprint,
we can travel anywhere far more cheaply than ever before (relative to average earning power).
The sheer size of those actors also has an interesting drawback, which can be increasingly observed: they don’t know what to do with their money. Share buy-backs are increasingly widespread, a sure sign that those organizations don’t know what to invest their resources in. This is great news, because it has probably never been easier to get money to fund new initiatives and ventures. These resources will necessarily flow into much smaller setups that are nimble enough to take advantage of the opportunities of today’s world.
One can also argue that these huge organizations struggle to control themselves and what they are actually doing.
Hence, although this might be a problem in some respects, I do not find that the concentration of power we observe is a major impediment to taking initiative and developing new ventures – on the contrary.