How to Detect Mistakes in Statistical Analysis

This extremely useful paper reminds us of common statistical mistakes made in articles and papers: ‘Ten common statistical mistakes to watch out for when writing or reviewing a manuscript’.

These are:

  • absence of an adequate control condition or group
  • interpreting comparisons between two effects without directly comparing them
  • inflating the number of units of analysis
  • spurious correlations (for example, driven by a single outlier value; see the sketch after this list)
  • using samples that are too small
  • circular analysis (retrospectively selecting features of the data to characterize the dependent variables, resulting in a distorted statistical test)
  • too much flexibility of analysis
  • failure to correct for multiple comparisons in exploratory analysis
  • over-interpreting non-significant results
  • confusing correlation and causation
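To make the spurious-correlation pitfall concrete, here is a minimal sketch (my own illustration in Python with numpy and scipy, not something prescribed by the paper): a single outlier is enough to manufacture a strong, ‘significant’ Pearson correlation between two variables that are independent by construction.

```python
# A sketch of the spurious-correlation mistake: one extreme value can
# manufacture a "significant" correlation out of pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=20)
y = rng.normal(size=20)                # independent of x by construction

r0, p0 = stats.pearsonr(x, y)          # near zero, non-significant

# Append a single aberrant observation far from the cloud of points.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r1, p1 = stats.pearsonr(x_out, y_out)  # now large and "significant"

print(f"without outlier:  r = {r0:+.2f} (p = {p0:.2f})")
print(f"with one outlier: r = {r1:+.2f} (p = {p1:.1e})")

# A scatter plot of the raw data, or a rank-based statistic such as
# Spearman's rho, exposes the problem immediately.
rho, p_rho = stats.spearmanr(x_out, y_out)
print(f"Spearman rho = {rho:+.2f} (p = {p_rho:.2f})")
```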

Quite a useful checklist to use the next time you review a paper based on statistical analysis!


How We Need to Increase Efforts to Protect Against Inadequate AI-Generated Content

Following up on the previous post ‘How We Underestimate the Availability of AI Generated Content’, this interesting post ‘AI-generated fake content could unleash a virtual arms race’ goes one step further, looking at the consequences of this technology.

“[the exercise of generating a fake AI-generated website provided] a glimpse into a potentially darker digital future in which it is impossible to distinguish reality from fiction.

Such a scenario threatens to topple the already precarious balance of power between creators, search engines, and users. The current flow of fake news and propaganda already fools too many people, even as digital platforms struggle to weed it all out. AI’s ability to further automate content creation could leave everyone from journalists to brands unable to connect with an audience that no longer trusts search engine results and must assume that the bulk of what they see online is fake.”

The issue is really that machines can generate content much faster than humans, and that all social networks rely mainly on humans to weed out inadequate content. Those tools could thus be “weaponized […] to unleash a tidal wave of propaganda [that] could make today’s infowars look primitive”.

There is thus a definite and urgent need to develop “increasingly better tools to help us determine real from fake and more human gatekeepers to sift through the rising tide of content.”


How Diversity Is Shown to Improve Academic Research Results

This Nature article ‘These labs are remarkably diverse — here’s why they’re winning at science’ makes the point, based on a study of scientific papers, that diversity fosters creativity and improves academic outcomes.

Of course, this study showing that diversity is beneficial is based on the citation counts of scientific papers versus the names of contributors, which may not be fully representative of the importance of the research. Still, it is interesting to see full-fledged, data-based research demonstrate the benefits of diversity.

The diversity of experience, cultures and viewpoints is quite essential for creativity, and the article gives quite a few examples, in particular the input of Maori culture into research, and other multi-cultural research teams such as a lab in Okinawa set up so that diversity and multiple cultures are built into the team. The article also mentions the challenges of working in diverse teams, such as language and cultural behaviours.

Another building block in the demonstration of how diversity is beneficial for creativity and value creation.


How to Deal with Frequent Mistakes of Artificial Intelligence

Artificial Intelligence is quite often wrong, and that is something we must know and understand (see for example the previous post ‘How Deployment of Facial Recognition Creates Many Issues’). The best example I have read of humans not reacting adequately to this problem is highlighted in the post ‘Pourquoi l’intelligence artificielle se trompe tout le temps’ (Why AI is always wrong – in French) when it evokes the Kasparov vs Deep Blue chess match (recounted in English in ‘Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution’).

At some stage during the game, the computer made a move that looked quite stupid. It was in fact a mistake by the AI program, but one could just as well believe it was brilliantly unconventional, and Kasparov was destabilized. “The world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.”

Because of the manner in which AI is trained, it will necessarily produce a significant rate of mistakes and errors when implemented. The challenge for us is to identify those occasions and not be destabilized by them.

First, AI output should probably come with a systematic warning about the possibility of a mistake. Second, we should remain conscious and critically aware of that possibility by running some simple checks on the adequacy of the output.
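As a purely illustrative sketch of such a ‘simple check’ (the 0.9 threshold, the predict_proba interface in the style of scikit-learn classifiers, and the escalation handler are my assumptions, not something from the post), one could gate the AI output on the model’s own confidence and route uncertain results to a human:

```python
# A minimal sketch: act only on confident model outputs, and flag
# low-confidence predictions for human review instead of trusting them.
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # arbitrary; to be tuned per application


def checked_prediction(model, features):
    """Return (label, confidence, needs_human_review)."""
    # predict_proba in the style of scikit-learn classifiers (assumption)
    proba = model.predict_proba(features.reshape(1, -1))[0]
    label = int(np.argmax(proba))
    confidence = float(proba[label])
    return label, confidence, confidence < CONFIDENCE_THRESHOLD


# Hypothetical usage: escalate anything the model is unsure about.
# label, conf, review = checked_prediction(clf, sample)
# if review:
#     route_to_human(sample, label, conf)  # hypothetical escalation handler
```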

This high error rate of AI is of course a problem for high-reliability applications, and we should also see the emergence of techniques to correct this problem, or of technological checks and balances to avoid inadvertent mistakes that could have actual consequences.

Still, simply knowing that AI is prone to making mistakes is something important that we need to recognise and be able to respond to.


How California Fires Show How Nature Plays Catch-Up

I obviously like unconventional viewpoints, and here is one on the 2020 California forest fires: ‘This is Not Fine’ by Alex Tabarrok, which mentions a study showing that those fires were just nature catching up with the normal natural fire rate in California.

“Academics believe that between 4.4 million and 11.8 million acres burned each year in prehistoric California. Between 1982 and 1998, California’s agency land managers burned, on average, about 30,000 acres a year. Between 1999 and 2017, that number dropped to an annual 13,000 acres.” In addition, when I visited California, it was explained to me that Giant Sequoias can only reproduce if there are fires, because fire is what triggers the seeds to start growing. So, obviously, we have tried to convince ourselves that fire is not normal, whereas it is a normal behaviour of the ecosystem.
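To make the scale of that gap concrete, here is a back-of-the-envelope calculation on the figures quoted above (my own arithmetic, not from the post):

```python
# Simple arithmetic on the figures quoted above.
prehistoric_low, prehistoric_high = 4.4e6, 11.8e6  # acres burned per year
managed_burns_1999_2017 = 13_000                   # acres burned per year

print(f"Prehistoric burning was roughly "
      f"{prehistoric_low / managed_burns_1999_2017:.0f}x to "
      f"{prehistoric_high / managed_burns_1999_2017:.0f}x "
      f"the recent managed-burn rate.")
# -> roughly 338x to 908x
```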

The post goes on to state various reasons why there is not more controlled burning every year, but what is really important to note here is that ecosystems will not always bend to the wishes of humans, and at some point there will be a catch-up. We should be able to foresee this situation rather than complain about it.

Listening to the centuries-old rhythm of nature and ecosystems is certainly a good way to start when deciding where and how we live.


How We Need to Be Graceful in Times of Crisis

I like this quote attributed to Dustin Poirier (an American mixed martial artist): “When times are good, be grateful, and when times are tough, be graceful”. I find it fits particularly well with the current economic crisis.

I was of course particularly touched by the recommendation to be graceful when times are tough. This is hard, and requires a lot of awareness. Too many people tend to focus on their own interest and lose gracefulness in tough times. This can be observed every day in the current Covid economic crisis.

The quote makes me realize, however, that the appropriate mindset for dealing with tough times also comes with the need to be grateful when success hits. It means recognizing that we are not the sole cause of our success and that many others have contributed; and that is also why it is important, when times are tough, to be graceful with everyone and everything that surrounds us and contributes to our being.

Gracefulness in crisis is not easy when stress is high and the horizon is blocked by unknowns. I still believe it is a good mindset to strive for. Are we all sufficiently graceful in the current crisis?


How to Explain the Excessive Usage of Personality Test Results

In a professional environment or out of personal curiosity, we have all taken personality tests that indicate our strengths and weaknesses. Why are those reductive tests so popular and so widely used? In David Epstein’s book ‘Range: Why Generalists Triumph in a Specialized World’, the author takes the position that they respond to our need, and to organisations’ need, to classify and pigeonhole us.

A lucrative career, personality-quiz and counseling industry survives on that notion. “All of the strengths-finder stuff, it gives people license to pigeonhole themselves or others in ways that just don’t take into account how much we grow and evolve and blossom and discover new things.” “But people want answers, so these frameworks sell. It’s a lot harder to say, ‘Well, come up with some experiments and see what happens.’”

On one hand, I find those tests quite insightful: they generate useful thinking about oneself and about what we should reinforce or change; on the other hand, it is true that they tend to classify us. Incidentally, the best advice is certainly not to try to improve weaknesses, but rather to further enhance those strengths that make us so distinctive.

Organisations are then advised to seek diversity (i.e. a set of people whose test results spread nicely across categories), and may outright reject applications on the basis of test results.

Those tests are an extremely reductionist approach to our personality, and they do not account at all for the fact that we may evolve. Taking important decisions on their basis and letting them classify us into categories is certainly excessive. They should remain an interesting insight into our personality, but should not be relied upon beyond that.


How a Crisis Dramatically Accelerates Changes

What I find absolutely amazing in the current worldwide Covid crisis is how it is accelerating changes, in particular changes that had already started but were not obvious, or where inertia allowed unstable situations to persist.

This is the case in the economic and business field, where there is an increased differentiation between the winners and losers of the current crisis. The losers were often those with a precarious market and financial situation: they could survive, sometimes barely, in a stable world; the crisis acts as a revealer.

This is also the case in world politics and strategy: the Covid crisis has been a substantial catalyst for China’s actions in Hong Kong and more generally on the world stage, and here again the crisis has acted as an accelerator of events that could already be anticipated.

Finally, it is often the case on the personal level: the changes in the way we work, in our world vision and in our daily lives accelerate a transformation that was bound to happen as digital and communication technologies become more widespread and available.

The Covid crisis is provoking a deep transformation of the world, but as I see it, most of it is just an incredible acceleration of changes that were already written or in the works.


How to Overcome Bias in Project Estimates by Involving Generalists in Systemic Reviews

To finish our current series of posts exploring the excellent book ‘Range: Why Generalists Triumph in a Specialized World’ by David Epstein, I noted how the concepts it develops about generalists vs specialists also apply in the field of project definition. It takes generalists and a diverse set of viewpoints to test the adequacy of a project definition file and its associated estimate.

“Bent Flyvbjerg, chair of Major Programme Management at Oxford University’s business school, has shown that around 90 percent of major infrastructure projects worldwide go over budget (by an average of 28 percent) in part because managers focus on the details of their project and become overly optimistic. Project managers can become like Kahneman’s curriculum-building team, which decided that thanks to its roster of experts it would certainly not encounter the same delays as did other groups. Flyvbjerg studied a project to build a tram system in Scotland, in which an outside consulting team actually went through an analogy process akin to what the private equity investors were instructed to do. They ignored specifics of the project at hand and focused on others with structural similarities. The consulting team saw that the project group had made a rigorous analysis using all of the details of the work to be done. And yet, using analogies to separate projects, the consulting team concluded that the cost projection of £320 million (more than $400 million) was probably a massive underestimate.

This is a widespread phenomenon. If you’re asked to predict […], the more internal details you learn about any particular scenario […] the more likely you are to say that the scenario you are investigating will occur.”
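As an illustration of this ‘outside view’, here is a minimal sketch of reference-class forecasting; the overrun ratios below are invented for the example, and Flyvbjerg’s actual method and data are of course richer than this:

```python
# A minimal sketch of reference-class forecasting: ignore the project's
# internal details and derive the budget from the distribution of
# cost-overrun ratios observed on structurally similar past projects.
import numpy as np

# actual_cost / estimated_cost of analogous past projects (invented values)
reference_class = np.array([1.05, 1.10, 1.20, 1.28, 1.35, 1.50, 1.80, 2.10])

base_estimate = 320e6  # the inside-view estimate, in GBP

# Accept, say, a 20% chance of exceeding the budget: apply the
# 80th-percentile historical overrun as an uplift.
p80_uplift = np.percentile(reference_class, 80)
outside_view_budget = base_estimate * p80_uplift

print(f"inside view:        £{base_estimate / 1e6:.0f}m")
print(f"outside view (P80): £{outside_view_budget / 1e6:.0f}m "
      f"(uplift x{p80_uplift:.2f})")
```

The point of the design is that the uplift comes from what happened to structurally similar projects, not from the project’s own bottom-up details.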

This is why we observe again and again the immense benefits of having projects reviewed independently by people with a generalist overview who are not emotionally involved with the project, so as to obtain objective feedback. While this is what we promote, the fact that such a review is systemic and performed by generalists is also an essential part of the value delivered. I will highlight it more in the future.


How Humans Will Crush Machines in Open-Ended Real World Problems

Following our previous post (‘How Learning Approaches Must Be Different in Complexity: Upending the 10,000 h Rule’), let’s continue our exploration of the excellent book ‘Range: Why Generalists Triumph in a Specialized World’ by David Epstein. Beyond questioning traditional learning techniques, and more generally pointing out the limits of specialization, he makes the point that in an increasingly automated world, the generalists who have a broad, integrating picture are the ones who will be in demand.

“The more a task shifts to an open world of big-picture strategy, the more humans have to add.” “The bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialization. It is the ability to integrate broadly.” Reference is made here to open-ended or infinite games, as opposed to the closed or finite games that are won by specialists (refer to our post ‘How Important It Is to Distinguish Between Finite and Infinite Games’).

Therefore, “in open-ended real-world problems we’re still crushing the machines.” This distinction between simple and complex, open and closed problems is really essential in defining the approaches needed and the competencies required.

Humans’ strength is the capability to decide in complex, open-ended problems, and this is what we now need to emphasize in terms of education, careers and recognition.


How Learning Approaches Must Be Different in Complexity: Upending the 10,000 h Rule

Following on from our previous post ‘How Generalists Are Necessary for the Collaborative Age’, let’s continue our exploration of the excellent book ‘Range: Why Generalists Triumph in a Specialized World’ by David Epstein. One of the main topics of the book is showing that the famous 10,000 hours rule for mastering an area of knowledge is actually only applicable to certain types of activities that are bound by clear rules: chess, music, golf. It does not apply to mastering complexity or any activity that does not share those characteristics.

“The bestseller Talent Is Overrated used the Polgar sisters and Tiger Woods as proof that a head start in deliberate practice is the key to success in “virtually any activity that matters to you.” The powerful lesson is that anything in the world can be conquered in the same way. It relies on one very important, and very unspoken, assumption: that chess and golf are representative examples of all the activities that matter to you.”

The concept of the 10,000 h rule to master a practice is thus upended. Worse, “In 2009, Kahneman and Klein [found that] whether or not experience inevitably led to expertise, they agreed, depended entirely on the domain in question”. Sometimes, even, “In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons.”

Thus, in the real, complex world, actual learning must happen differently than by repeating the same exercise many times in a predictable environment. It probably requires exposure to many different situations. Learning also cannot be expected to be continuous: it is probably discontinuous, with ‘aha’ moments separated by a slow maturing of new understanding.

Quite some thoughts that upend a lot of common knowledge, and still more thoughts that put traditional education into question.


How Generalists Are Necessary for the Collaborative Age

I highly recommend the book ‘Range: Why Generalists Triumph in a Specialized World’ by David Epstein. It has provided quite a few interesting insights for me, which will be the subject of the next few posts.

Those who have been following this blog will know that I have expressed many times the idea that the Collaborative Age calls for generalists, contrary to the specialists fostered by the Industrial Age (for example here and here). This book confirms this intuition in a very convincing way, and goes further to show that complex systems can only be dealt with by generalists, and that being a specialist can be quite dangerous when making decisions beyond the bounds within which the specialization is valid.

“Highly credentialed experts can become so narrow-minded that they actually get worse with experience, even while becoming more confident—a dangerous combination.”

And specialization can indeed lead to poor real-life outcomes. For example, “One revelation in the aftermath of the 2008 global financial crisis was the degree of segregation within big banks. Legions of specialized groups optimizing risk for their own tiny pieces of the big picture created a catastrophic whole. To make matters worse, responses to the crisis betrayed a dizzying degree of specialization-induced perversity.”

This realization is spreading through more and more organisations, and through society, when it comes to choosing someone to lead a complex endeavour. The best candidates are generalists, or at least people who have been exposed to many things beyond their main area of interest: “the most common [path to excellence] was a sampling period, often lightly structured with some lessons and a breadth of instruments and activities, followed only later by a narrowing of focus, increased structure, and an explosion of practice volume.”

I have always been convinced, and I am more and more convinced, that the well-rounded individual exposed to widely varied experiences and fields of knowledge is the new type of leader we will be looking for in an increasingly complex Collaborative Age. And this is probably the biggest challenge for our learning and academic institutions today.
