How to Account for the Cobra Effect

In complex systems, actions or decisions tend to have unintended consequences. This is known as the ‘Cobra effect’, a name popularized by a 2001 book by the economist Horst Siebert. It is explained with many funny examples on Wikipedia (Cobra effect article) and Quartz (Cobra effect post).

The name itself stems from an attempt by the British colonial government to eradicate cobras in Delhi. A reward paid for each snake led people to breed cobras to collect the bounty, and when the reward scheme was cancelled, all those bred cobras were released, ultimately producing the opposite of the intended effect.

This reminds us that in complex systems, where people may take decisions to game the system, decisions that look straightforward can have a different or even opposite effect. We therefore need to systematically test the effect of a decision on a smaller scale before rolling it out widely. And sometimes the solution to a problem is not the obvious one, but rather something that looks only remotely connected to the problem at hand!
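To make the small-scale-testing point concrete, here is a deliberately crude simulation of such a bounty policy. Every number in it is invented for illustration; it is in no way a model of the historical episode.

```python
# Toy simulation of a perverse incentive: a bounty on cobras. Every
# parameter here is invented for illustration; this is not a model of
# the historical Delhi episode.

def simulate(bounty_years, years=10, initial_wild=1000,
             natural_growth=0.05, kill_rate=0.2, breeding_rate=0.5):
    """Track the wild population plus a captive stock bred for the bounty."""
    wild, bred = initial_wild, 0
    for year in range(years):
        wild += int(wild * natural_growth)
        if year < bounty_years:
            wild -= int(wild * kill_rate)      # the bounty removes wild snakes...
            bred += int(wild * breeding_rate)  # ...but people breed more for the reward
        elif bred:
            wild, bred = wild + bred, 0        # bounty cancelled: bred snakes released
    return wild

print("no bounty:  ", simulate(bounty_years=0))   # baseline
print("with bounty:", simulate(bounty_years=5))   # ends up worse than doing nothing
```

Even this toy model makes the failure mode visible: the policy looks effective while it runs, and the damage only appears once it is cancelled.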


How Our Brains Filter Perception More and at a Lower Level Than We Thought

In this excellent article ‘To Pay Attention, the Brain Uses Filters, Not a Spotlight‘ we are reminded that filtering is a major activity of our brain, with particular emphasis on attention management. However, it obviously has some drawbacks.

“Somehow, even with massive amounts of information flooding our senses, we’re able to focus on what’s important and act on it.” By studying this ability to focus, researchers have found that our brain uses deeply ingrained filtering mechanisms, below our cortical regions. “The attentional searchlight metaphor was backward: The brain wasn’t brightening the light on stimuli of interest; it was lowering the lights on everything else.”

Moreover, “[the] findings indicate that the brain casts extraneous perceptions aside earlier than expected. And filtering is starting at that very first step, before the information even reaches the visual cortex.” In other words, a substantial amount of information is filtered out without ever reaching any level of consciousness: we unconsciously filter far more than we would believe!
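As a toy numerical analogy (my own illustration, not the researchers’ model), the difference between the two metaphors fits in a few lines: both make the attended stimulus stand out, but the filter does it by reducing overall activity rather than adding any.

```python
import numpy as np

# Toy analogy of the two metaphors (my own illustration, not the study's
# model): four equally salient stimuli, with attention on stimulus 2.
signals = np.ones(4)
target = 2

spotlight = signals.copy()
spotlight[target] *= 3.0          # "brighten the light" on the target

filtered = signals * 0.3          # "lower the lights" on everything...
filtered[target] = 1.0            # ...except the target

for name, s in (("spotlight", spotlight), ("filter", filtered)):
    print(f"{name:9s} relative salience = {s[target] / s.sum():.2f}, "
          f"total activity = {s.sum():.2f}")
```

Both schemes leave the target relatively dominant, but the filter achieves it with a fraction of the total activity, which is roughly the economy the article describes.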

Interesting new research paths are also described, in particular how perception and movement become interlinked at a very low level in the brain.

In any case, it becomes obvious that our filtering mechanisms are useful and dangerous at the same time, and they are so deeply ingrained in our brains that we cannot even hope to become conscious of them. Food for thought!


How Complexity Can Emerge Even from Simple Systems

In this interesting article ‘Physicists discover surprisingly complex states emerging out of simple synchronized networks‘ we discover that complexity can emerge even from simple systems.

In fact, we know that complexity can already emerge from the ‘three body problem’, which is unpredictable beyond a certain time horizon. Those are passive elements, however, and scientists have now shown how complexity can also emerge from active machines or beings: “Caltech researchers have shown experimentally how a simple network of identical synchronized nanomachines can give rise to out-of-sync, complex states”.

“The findings experimentally demonstrate that even simple networks can lead to complexity, and this knowledge, in turn, may ultimately lead to new tools for controlling those networks. For example, by better understanding how heart cells or power grids display complexity in seemingly uniform networks, researchers may be able to develop new tools for pushing those networks back into rhythm.”
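A classic way to reproduce this flavor of result on a laptop is a Kuramoto-style network of identical coupled oscillators. The sketch below is a generic textbook setup, not the Caltech team’s actual equations.

```python
import numpy as np

# Generic sketch (not the Caltech experiment's equations): identical
# oscillators coupled in a ring, Kuramoto-style. Depending on the random
# initial phases, the network may lock into uniform synchrony or settle
# into a structured, non-uniform "twisted" state.
n, coupling, dt, steps = 8, 0.8, 0.01, 20000
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = np.ones(n)                      # identical natural frequencies

for _ in range(steps):
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    theta += dt * (omega + coupling * (np.sin(left - theta) + np.sin(right - theta)))

# Order parameter: r = 1 means perfect synchrony; r < 1 signals a
# non-uniform collective state despite the identical units.
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")
```

The striking part is that every unit and every coupling is identical; the complexity comes purely from the network’s collective dynamics.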

Complexity is more widespread than we think, and how to embrace it is a major issue in the modern world. Let’s work towards this!


How Important It Is to Distinguish Between Finite and Infinite Games

In this excellent speech ‘What game theory teaches us about war’ (YouTube), Simon Sinek reminds us that there is a great difference between playing finite and infinite games. This brings us back to the book ‘Finite and Infinite Games‘ by James Carse.

There are two types of games: finite games and infinite games. A finite game has known players, fixed rules and an agreed-upon objective (baseball, for example). An infinite game has known and unknown players, changeable rules, and an objective of perpetuating the game.

He reminds us then that business or strategy is an infinite game, and finite players won’t last long. “The game of business is an infinite game. The concept of business has existed longer than every single company that exists right now, and it’ll exist long after all the companies that exist right now go away. The funny thing about business is the number of companies that are playing finite: they’re playing to win, they’re playing to be the best, they’re playing to beat the quarter or the year. And they’re always frustrated by that company that has an amazing vision, a long-term vision, that seems to drive them crazy. Over the long term that player will always win, and the other player will run out of resources or the will, and they’ll go out of business“.
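The arithmetic behind “they’ll run out of resources” fits in a toy sketch; all numbers below are invented.

```python
# Toy arithmetic of the argument above -- all numbers are invented.
# The "finite" player outspends its income to beat every quarter; the
# "infinite" player spends sustainably and just stays in the game.

def run(quarters=40, income=10, finite_spend=15, infinite_spend=8):
    finite_cash = infinite_cash = 100
    for q in range(1, quarters + 1):
        finite_cash += income - finite_spend
        infinite_cash += income - infinite_spend
        if finite_cash <= 0:
            return f"finite player out of resources after quarter {q}"
    return "both players still in the game"

print(run())   # finite player out of resources after quarter 20
```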

There can be some differing views about how to play infinite games (see this post on whether the current unpredictability of US foreign policy could be a way to play the infinite game: ‘Why Simon Sinek is wrong about Game Theory’).

Still, this distinction is important. It reminds us that when we play infinite games, rules are not so essential, as they are constantly reinvented; and strategies such as unpredictability can be valid ways to keep other players on their toes. Are you playing a finite or an infinite game?


How the Failure of Autonomous Cars Shows the Limits of Self-Taught AI

There was great hope of quickly achieving autonomous driving, but it appears that this dream has to be postponed by quite a few years. A good summary is given in this Quartz article ‘Autonomous vehicles: self-driving car wreck’.

The key point I find is the following. “AV researchers assumed driving enough test miles would lead to self-driving cars, an idea that emerged from an influential 2009 white paper by Google researchers, ‘The Unreasonable Effectiveness of Data’. It demonstrated how […] sufficient data could solve (most) problems“.

“Driving, it turns out, isn’t one of them. The open road is too complex, and there are too many unexpected dangers to design a self-driving system from data alone. AV companies are now shifting gears and building ‘safety cases’ borrowed from the aviation and safety industries that identify and solve for possible points of failure. This detour means AVs will arrive later than once thought.”

This extract shows the limits of self-taught AI and the associated certification challenges. The future probably lies in a mix of AI and deterministic programming.
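What could such a mix look like? Here is a hypothetical sketch: a learned policy proposes an action, and a small, deterministic rule layer, auditable in a safety case, retains veto power. All names and thresholds below are invented, not any vendor’s actual design.

```python
# Hypothetical sketch of mixing AI with deterministic programming: a learned
# policy proposes, a hand-written rule layer disposes. All names and
# thresholds are invented; this is not any AV vendor's actual architecture.

MAX_SPEED_MS = 13.0   # deterministic limit that can be argued in a safety case
MIN_GAP_M = 8.0       # hard minimum distance to any detected obstacle

def learned_policy(sensors):
    """Stand-in for a neural network: returns (target_speed, brake)."""
    return sensors["model_speed"], False

def safety_envelope(sensors, speed, brake):
    """Deterministic guard: overrides the model when a hard rule is violated."""
    if sensors["gap_to_obstacle_m"] < MIN_GAP_M:
        return 0.0, True                       # non-negotiable: stop
    return min(speed, MAX_SPEED_MS), brake     # clamp to the certified limit

sensors = {"model_speed": 20.0, "gap_to_obstacle_m": 25.0}
speed, brake = safety_envelope(sensors, *learned_policy(sensors))
print(f"commanded: {speed} m/s, brake={brake}")   # 13.0 m/s, brake=False
```

The attraction of this pattern is that the rule layer is small enough to be inspected and certified line by line, even when the learned part is not.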

This failure is probably more important than generally noted: it shows that the hope of purely self-taught AI technology solving complex problems is possibly an illusion. It has not yet stopped the AI tech bubble, so let’s expect some more disappointments in this area!


How Beneficial Checklists Can Be

In this excellent Quartz summary ‘Checklists‘, the history and benefits of this tool are explained in detail. Most amazing is the level of benefit that can be extracted from such a simple tool, which we should certainly use more often.

As to benefits: “In the WHO’s initial pilot study of eight hospitals in eight international cities the checklist was associated with a one-third reduction in deaths and complications from surgery.” And checklists only became mainstream in medical care in the 2000s, with a WHO initiative!

The history is interesting too: “it was systemic complexity that gave rise to the first formal checklist in the 1930s” – when crews realized they needed a checklist before taking off in a new, ultra-complicated bomber airplane. Checklists are thus a tool for tackling complicated or even complex situations.
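The mechanism is simple enough to fit in a few lines. Here is a minimal sketch, with items merely in the spirit of a pre-takeoff checklist rather than the historical one:

```python
# Minimal checklist runner -- the whole point of the tool is that no step
# can be silently skipped. Items are illustrative, in the spirit of a
# pre-takeoff checklist (not the historical 1930s list).

PRE_TAKEOFF = [
    "flight controls free and correct",
    "trim set for takeoff",
    "fuel quantity checked",
    "flaps set",
    "doors and windows locked",
]

def run_checklist(items, check):
    failed = [item for item in items if not check(item)]
    if failed:
        raise RuntimeError(f"checklist incomplete: {failed}")
    print(f"all {len(items)} items verified")

# In practice `check` prompts a human; here we simulate an all-clear run.
run_checklist(PRE_TAKEOFF, check=lambda item: True)
```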

We often underestimate the power of such tools for dealing with repetitive but complicated situations. Let’s systematize checklists!


How to Overcome the Science Reproducibility Crisis

Following up on our previous post on “How Fake Science is Strongly on the Rise and Endangers Us“, the issue of reproducibility in science is also coming up strongly: even papers and findings long recognized as legitimate are being called into question by the inability to reproduce their results, most notably in the human sciences. This post ‘Why Your Company Needs Reproducible Research‘ provides a good summary of the issues at stake.

Recent efforts at reproducing psychology results found that “[o]nly about 40% of the findings could be successfully replicated, while the rest were either inconclusive or definitively not replicated.” Similar proportions are obtained in business-related research.

While this may be due to very human biases, like the pressure to show results from research, and to the inherent complexity of the environment around some experiments, there is definitely a need for more thorough replication requirements before results are confirmed. This places a greater burden on researchers, but it is probably necessary in a world seeing increasing amounts of fake science.
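On the computational side, a first step is cheap. As a minimal sketch (my own habit, not the linked post’s prescription): pin the random seed and record the run context alongside every result, so the analysis can be rerun identically later.

```python
# Minimal reproducibility scaffold (a sketch, not the linked post's method):
# pin the random seed and record the run context alongside every result.

import json
import platform
import random
import sys

SEED = 42
random.seed(SEED)

manifest = {
    "seed": SEED,
    "python": sys.version.split()[0],
    "platform": platform.platform(),
}

sample = [round(random.gauss(0, 1), 4) for _ in range(5)]
print(json.dumps(manifest, indent=2))
print("sample:", sample)   # identical on every rerun with the same seed
```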

Science will always progress by invalidating previous results or narrowing their boundaries of validity. This is a normal process; still, we need to ensure the reproducibility of results before they spread as invariant truths.


How We Should Not Use Software to Compensate for Systems That Are Not Properly Designed

The now famous crisis of the Boeing 737 Max shows us that there are limits to what software can compensate for when a system is not properly designed. This post ‘Boeing 737 Max: Software patches can only do so much‘ is worth reading.

It actually boils down to a system-engineering issue. Adding layer upon layer of fixes to compensate for functionality on legacy systems only works up to a certain point. The author “cautioned his customers against using software as a patch for systems that for economic reasons or reasons of expediency, were not purpose-built. This applies not just to the most complex heterogeneous networks of systems but also small devices.” This leads to “spaghetti architecture, or architecture by committee“, missing the important step of first listing all requirements and making sure the overall system is consistent and free of systemic flaws.
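Here is a caricature of what “layer upon layer of fixes” looks like in code. This is a generic invented example, not the 737 Max’s actual control logic: each patch narrows one failure mode but interacts with the previous ones, until nobody can list the requirements the code actually implements.

```python
# Invented caricature of patch-on-patch design (NOT the 737 Max's actual
# logic): each fix special-cases the previous one instead of revisiting
# the underlying requirements.

def commanded_angle(sensor_a, sensor_b):
    angle = sensor_a
    if abs(sensor_a - sensor_b) > 5:   # patch 1: sensors disagree, trust B
        angle = sensor_b
    if angle > 15:                     # patch 2: clamp implausible values
        angle = 15
    if sensor_b < 0:                   # patch 3: ...unless B looks broken,
        angle = sensor_a               # which silently undoes patch 1
    return angle

# Three patches already produce interactions no single author intended:
print(commanded_angle(20, -3))   # 20 -- patch 3 also bypassed patch 2's clamp
```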

More generally, “the old stuff just doesn’t migrate well; it needs to be redesigned from scratch“. In the next few years we can expect that platforms used across many industries will indeed need to be redesigned to overcome their obsolescence and take advantage of modern technology.

Existing platforms can only be upgraded to a certain point of complexity and layering before we lose control. Beyond that, they need to be redesigned from scratch.


How Innovation is Actually Behavior Change

In her post ‘Innovation is About Behavior Change‘, Valeria Maltoni makes, I believe, an excellent point. Innovation or invention is not about the tangible product; it is about how it changes habits and behavior.

This explains why so many inventions that seem like quite a breakthrough never spread: the associated behavior change did not happen. Maybe because of inertia, maybe because something else happened at the same time that pulled behavior change in the opposite direction.

It is a lesson for all inventors and innovators: don’t just focus on how marvelous your product is. Spend most of your effort working on the behavior that needs to change for its adoption. Work on habits, on the social aspects of behavior, and on anything that will make your innovation indispensable on a day-to-day basis.

Innovation that does not consider behavior change is doomed. And as a business angel, I will treat this as a major criterion when judging the adequacy of a startup’s development plan.


How We Should Join ‘Team Human’ in the World of Social Media

‘Team Human’ is a movement created by Douglas Rushkoff through his TED talk ‘How to be Team Human in the digital future‘. I’m definitely in!

It starts from a rather depressing statement about social media: “Does social media really connect people in new, interesting ways? No, social media is about using our data to predict our future behavior. Or when necessary, to influence our future behavior so that we act more in accordance with our statistical profiles. The digital economy — does it like people? No.” My opinion is not so extreme, but it is certain that social media has, in part, been designed to be addictive, and for a purpose.

His concern is that technology moguls now seem to have stopped caring about people. “It’s funny, I used to be the guy who talked about the digital future for people who hadn’t yet experienced anything digital. And now I feel like I’m the last guy who remembers what life was like before digital technology. It’s not a matter of rejecting the digital or rejecting the technological. It’s a matter of retrieving the values that we’re in danger of leaving behind and then embedding them in the digital infrastructure for the future.”

“Join ‘Team Human.’ Find the others. Together, let’s make the future that we always wanted.”


How to Manage Unintended Consequences of Technology

New technology always has unintended consequences, and possibly unintended usage. In this interesting post ‘Managing the Unintended Consequences of Technology‘, some takeaways from the 2018 Unintended Consequences of Technology Conference are presented.

The main recommendations I noted from the post are:

  • hire a more diverse workforce (to better anticipate unexpected usage)
  • de-bias the data sets used for developing new technology, to avoid unexpected algorithmic discrimination (see the sketch after this list)
  • develop a product impact advisory board
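On the second recommendation, a minimal first step is to audit outcome rates per group in the training data before any model sees it. The sketch below uses invented data and hypothetical column names.

```python
# First step toward "de-bias the data sets": audit outcome rates per group
# before training. Data and column names here are invented for illustration.

from collections import Counter

rows = [
    {"label": "approved", "group": "A"}, {"label": "approved", "group": "A"},
    {"label": "approved", "group": "A"}, {"label": "approved", "group": "B"},
    {"label": "denied",   "group": "B"}, {"label": "denied",   "group": "B"},
]

def approval_rate_by_group(rows):
    totals, approved = Counter(), Counter()
    for row in rows:
        totals[row["group"]] += 1
        approved[row["group"]] += row["label"] == "approved"
    return {g: approved[g] / totals[g] for g in totals}

# A large gap between groups in the *training data* is a red flag that a
# model trained on it will learn and amplify the same discrimination.
print(approval_rate_by_group(rows))   # {'A': 1.0, 'B': 0.333...}
```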

In a complex world I believe it is quite impossible to predict everything technology will be used for, but we can certainly try to avoid unexpected consequences. I am a strong believer in the need for more diversity to avoid blind spots related to cultural and social backgrounds.

The fact that there are conferences on the subject looks like quite a good starting point for working on it!


How the ‘Buy Slow, Sell Fast’ Advice of Stockbrokers is Wrong

In this Marginal Revolution post ‘The Buying Slow but Selling Fast Bias‘ by Alex Tabarrok, a long-quoted piece of stockbroker wisdom is shown to be wrong. Data scientists have found that ‘Buy Slow, Sell Fast’ is not the best strategy: it should rather be ‘Buy Slow, Sell Slow’.

According to the research quoted in the post, buying slowly and deliberately is not a problem; it is on the selling side that things go wrong. From the research article: “We use a unique data set to show that financial market experts – institutional investors with portfolios averaging $573 million – exhibit costly, systematic biases. A striking finding emerges: while investors display clear skill in buying, their selling decisions underperform substantially – even relative to strategies involving no skill such as randomly selling existing positions – in terms of both benchmark-adjusted and risk-adjusted returns. We present evidence consistent with limited attention as a key driver of this discrepancy, with investors devoting more attentional resources to buy decisions than sell decisions.”
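To see why “randomly selling existing positions” is a meaningful no-skill benchmark, here is a toy backtest with invented data and rules (not the paper’s method): when past returns carry no information, a snap “dump the recent loser” rule does no better than selling at random; the paper’s striking finding is that real investors did even worse.

```python
import random

# Toy illustration of a no-skill selling benchmark (invented data, not the
# paper's method): compare "sell the recent loser" against selling a random
# position, when past returns say nothing about future ones.

random.seed(7)

def trial(n=5):
    past = [random.gauss(0.0, 0.05) for _ in range(n)]     # drives the snap decision
    future = [random.gauss(0.01, 0.05) for _ in range(n)]  # unknown at sale time
    def keep_return(sold):
        return sum(r for i, r in enumerate(future) if i != sold) / (n - 1)
    return (keep_return(past.index(min(past))),   # heuristic: dump the loser
            keep_return(random.randrange(n)))     # baseline: sell at random

results = [trial() for _ in range(100_000)]
heuristic = sum(h for h, _ in results) / len(results)
baseline = sum(b for _, b in results) / len(results)
print(f"sell-the-loser: {heuristic:.4%}   random-sell: {baseline:.4%}")
# Both come out around 1%: the fast heuristic adds nothing over no skill.
```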

Coming back to the thinking-fast-and-slow framework now familiar thanks to Daniel Kahneman, this tends to demonstrate that in most cases a slow, reflective approach beats a fast, reactive one – and that this shows even in the testosterone-laden world of financial trading!

Even in stressful situations, it pays off to think slow, or to think twice, before making a decision!
