How to Take Complex Problems One Chunk at a Time

I realize that a lot of my value often lies in helping clients faced with complex problems carve out smaller, more digestible chunks: narrowing the scope down to something manageable that can readily become a scope for action.

Of course this process is at the same time frustrating (not all issues get addressed, only the priority ones), and there is a definite risk of creating a locally optimized solution that is not optimal at all for the entire system. However, it is often the only way to move into action.

I realize that this act often requires leadership and the ability to take a risk, and it does not come naturally to everyone. The major risk of creating a local solution incompatible with later developments should be addressed by first going through an elucidation phase, sufficient to clarify the system and avoid this situation as much as possible. And at the same time, it is essential to make progress and to resolve the priority issue under consideration.

Split the elephant into small chunks to eat it; otherwise we would often not get anywhere. Just take care to do it right.

How to Deal With Chaos

The question is widespread: how do we deal with chaos, or, to use a trendy word, with a VUCA world (Volatile, Uncertain, Complex and Ambiguous)?

I like one answer provided by Leo Babauta in a blog post on how to deal with chaos: “When chaos and messiness come our way, it’s not necessarily a bad thing. It’s not inherently stressful and anxiety-inducing. It’s just that our minds don’t usually like these things. We want order and simplicity.

So the problem isn’t the external situation. It’s our internal ideals. We want order and simplicity, not to be interrupted, not to be overwhelmed. The ideal of orderliness is causing our frustration, stress, anxiety, not other people, not a chaotic situation.

The ideal of orderliness causes our difficulties. And we created the ideal. Therefore, we are causing our own difficulties.

The good news is that, if we created the ideal, we have the power to change it.

The interesting part is to identify how our embedded ideal of orderliness and stability influences the way we look at the world. That is definitely something we need to overcome to thrive in the Collaborative Age, and it needs to be part of the education of every young generation: change is the new normal.

How The Way to Regulate the Safety of Software Needs to Change

In a previous post, ‘How Artificial Intelligence Challenges Our Regulatory Approach to System Risk‘, we discussed how regulators are challenged by new software, and particularly by AI, which makes system reactions unpredictable.

Modern cars are software-driven

Before reaching that stage, the predictability of conventional code is already in question. In highly regulated industries such as nuclear or aerospace, regulators have historically been very strict about the use of commercial code, requiring the removal of all ‘non-functional code’ for the particular application. However, that makes software very expensive, because it needs to be redeveloped on an ad-hoc basis.

As underlined in the Atlantic paper ‘The Coming Software Apocalypse‘, other industries such as the automotive industry have never invested in that space, although the complexity of their code has increased dramatically. As mentioned in a previous post, the only solution is to move to automatic code generation based on system modeling. The regulatory issue then becomes certifying and auditing the software that generates the code, and the system model that drives it. This significantly shifts the focus of regulation: instead of certifying the end product, the production chain needs to be reviewed and certified.

Regulators should drive this transition in the way code is generated from system models and in how it will be certified and approved. They seem a bit slow in jumping into that space, but hopefully it will come very soon.

How to Develop Safe Software – Stop Manual Coding!

Following up on the previous post ‘How Most Traditionally-Developed Software is Failure-Prone‘, the same Atlantic paper ‘The Coming Software Apocalypse‘ provides a possible solution: automated software production based on system modeling.

system modeling

System engineering and system modeling are a modern, powerful way to describe complicated systems and to embed all applicable requirements in a systematic manner. They also include powerful verification and validation techniques that allow the behavior of the system to be checked exhaustively.

Software code can be automatically generated from the system model. This unique code, specific to the application, might not be easy to read, but it can be proven to be consistent with the intent. In particular, it will not contain the 90% of supposedly unused code that plagues most modern software platforms. And it can be fully verified and validated thanks to dedicated tools.
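To make the idea concrete, here is a minimal, purely hypothetical sketch in Python, not any real industrial code-generation tool: a tiny turn-signal controller is described as a declarative system model, the model is exhaustively verified, and application-specific code is then generated from it. All state, input and function names here are invented for the example.

```python
# Hypothetical sketch of model-based code generation (illustrative names only).

MODEL = {
    "states": ["off", "left", "right"],
    "initial": "off",
    "inputs": ["lever_up", "lever_down", "cancel"],
    "transitions": {
        ("off", "lever_up"): "right",
        ("off", "lever_down"): "left",
        ("off", "cancel"): "off",
        ("left", "cancel"): "off",
        ("left", "lever_up"): "right",
        ("left", "lever_down"): "left",
        ("right", "cancel"): "off",
        ("right", "lever_down"): "left",
        ("right", "lever_up"): "right",
    },
}

def verify(model):
    """Exhaustively check the model BEFORE any code exists: every
    (state, input) pair must be handled, and every transition must
    land on a declared state."""
    for s in model["states"]:
        for i in model["inputs"]:
            assert (s, i) in model["transitions"], f"unhandled: {s}/{i}"
    for target in model["transitions"].values():
        assert target in model["states"], f"unknown state: {target}"

def generate(model):
    """Emit the controller source from the model. The generated code is
    specific to this application and contains nothing unused."""
    lines = ["def step(state, event):", "    table = {"]
    for (s, i), t in sorted(model["transitions"].items()):
        lines.append(f"        ({s!r}, {i!r}): {t!r},")
    lines += ["    }", "    return table[(state, event)]"]
    return "\n".join(lines)

verify(MODEL)                 # the model is checked exhaustively
namespace = {}
exec(generate(MODEL), namespace)   # the generated, application-specific code
assert namespace["step"]("off", "lever_up") == "right"
```

Note where the trust sits in this sketch: what would need certification is the model plus the `verify`/`generate` pipeline, not the generated `step` function itself.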

This might spell the doom of manual coding, but it seems to be the right way forward to eradicate random errors from software that has become too complex over the years.

How Most Traditionally-Developed Software is Failure-Prone

Modern software is often built by piling layer upon layer of code. This makes such software intrinsically unsafe and impossible to test in all possible situations. This excellent post in The Atlantic, ‘The Coming Software Apocalypse‘, describes both the extent of the problem and a possible solution.

In this first post we will concentrate on the intrinsically failure-prone character of traditionally-developed software. One striking example is developed in the piece: after a few accidents involving car speed regulation systems, experts examined the code and “described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws. Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control.”
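To make the bit-flip failure mode concrete, here is a small illustrative sketch with invented variable names (nothing here is taken from an actual engine controller): a single flipped bit inverts a safety decision, and a classic mitigation, redundant storage with majority voting, masks the fault.

```python
# Illustrative only: how one flipped bit can invert a safety-critical decision.

THROTTLE_CLOSED = 0   # stored bit pattern 0b0
THROTTLE_OPEN = 1     # stored bit pattern 0b1

def apply_throttle(command):
    # Without redundancy, the controller trusts a single stored bit.
    return "accelerate" if command == THROTTLE_OPEN else "idle"

command = THROTTLE_CLOSED
assert apply_throttle(command) == "idle"

# A memory fault (e.g. a cosmic-ray upset) flips the lowest bit:
corrupted = command ^ 0b1
assert apply_throttle(corrupted) == "accelerate"   # unintended acceleration

# A common mitigation: store redundant copies and take a majority vote.
def apply_throttle_voted(copies):
    open_votes = sum(1 for c in copies if c == THROTTLE_OPEN)
    return "accelerate" if open_votes > len(copies) // 2 else "idle"

# One flipped copy out of three no longer changes the outcome.
assert apply_throttle_voted([corrupted, command, command]) == "idle"
```

The point of the sketch is the asymmetry: the unprotected version fails on a single fault, while the voted version requires a majority of independent faults to misbehave.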

Particularly scary when we consider that cars are nowadays among the most sophisticated machines on the planet! And a huge challenge for regulators too!

The solution is to implement a new way to produce software: automatic generation from a systems model basis. We’ll examine this in the next post.

What Resilience Management Approaches Should Include

As a follow-up to our post ‘How Risk Management Must Evolve into Resilience Management‘, let’s examine the approaches that would be required in a new discipline of Resilience Management.

First, Resilience Management should be an extension of, and a complement to, Risk Management. Risk Management works, and this foundation should not be dismissed.

Resilience Management should focus on the ‘unknown-unknowns’, developing scenarios generally believed to be highly unlikely or too extreme, to examine how the system would respond. Resilience Management is the appropriate response to unpredictable complex systems.

In addition, Resilience Management should measure the capability of the system to adapt: reconfiguring itself, reassigning resources or seeking additional ones, and adapting its structure.

To be effective, Resilience Management should also address a higher-level system than the one too often examined. For example, it should consider airspace safety management and not just the safety of individual aircraft; or the entire capability of a society to respond to a major industrial accident, instead of the safety of a single plant.

The main focus of Resilience Management should be the capability for the system to effectively respond (and not just react) at all times.

Resilience Management is a new discipline to be invented and developed for the Collaborative Age. Are you ready to take up the challenge?

How Risk Management Must Evolve into Resilience Management

Over the past decades, Risk Management has emerged and developed into a major management discipline. But it has now shown its limits.

Houston floods, 2017

One of the issues with the current approach to Risk Management is that it tends to address risk mitigation in a static manner, without considering that quick response and evolving systems can be a better answer to unexpected situations. In addition, traditional risk management approaches handle the ‘known-unknowns’ well, but not at all the ‘unknown-unknowns’, even in a generic manner.

Resilience Management should be the appropriate future extension of Risk Management. It adds the dimension of being able to withstand unexpected events, or events beyond the bounds of the system design. Its mitigation actions can include evolution, and even revolution, of the system in a dynamic manner.

As recent storms and unexpected natural or man-made disasters have shown, Resilience Management is a discipline to develop to enhance our capability to respond to the unexpected.

The Welcome Complexity Manifesto is Published

I am very proud to announce the publication of the Welcome Complexity Manifesto (in French only at this time; the English version is being checked and should be released soon).

This manifesto is a collective effort spearheaded by Michel Paillet. Welcome Complexity is a non-profit organisation being created in France with the aim of providing resources and tools to deal with the increasing complexity of our world. It is deeply rooted in academic research and also brings together a number of professionals who actively help organisations deal with those challenges.

The manifesto aims to clarify the problems at stake and gives directions for action.

Welcome Complexity is now working on a Body of Knowledge book that aims to give practical tools and advice for tackling the most complex problems of our current world.

A must-read manifesto available worldwide on all e-bookshops. It is sold at the minimum zero-margin price. Here are the links for Amazon.fr:

Enjoy the read as much as I enjoyed helping to publish it!

How the History of Murphy’s Law is an Inspiration

The Quartz column ‘Murphy’s Law is totally misunderstood and is in fact a call to excellence‘ and the linked pages on the history of Murphy’s law provide an interesting insight into the history and initial meaning of that law.

The gee-whiz experiment, around which Murphy’s law was conceived

It all happened around hazardous tests conducted during the effort to break the sound barrier. The original meaning would have been rather more aligned with a risk analysis approach: “When a reporter asked about the project’s inherent danger, Stapp allegedly replied that the team was guided by a principle he called “Murphy’s Law.” As Stapp put it, errors and malfunctions were an inescapable reality of any undertaking. Instead of using that fact as reason to quit, the engineers used it as motivation to excel. The only way to avoid catastrophe was to envision every possible scenario and plan against it.”

Of course this is not contradictory with the current understanding of Murphy’s law, ‘Anything that can go wrong will go wrong’, but it takes on a more positive, preventive meaning. Murphy’s law is in fact an inspiration to consider all possible failure modes in a design, whether it is a technical system or any other human endeavor.

How Artificial Intelligence Needs to Be Regulated

In July, there was a lot of media coverage of Elon Musk’s declarations about Artificial Intelligence before a US Governors Assembly. See for example Fortune’s ‘Elon Musk says that Artificial Intelligence is the Greatest Threat We Face as a Civilization‘.

According to Elon Musk, “AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late.” Musk then drew a contrast between AI and traditional targets for regulation, saying “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”

His point is that, if unregulated, AI might learn to manipulate in order to achieve goals that would be harmful to (some) humans. Elon Musk has access to the latest AI developments, and those capabilities might be difficult for outsiders to grasp. In any case, his warning should be heard, and regulation might be a good thing: at the end of the day, AI might be used as a weapon, and weapons are generally regulated. The challenge of safety certification for AI-driven objects could be the right way to tackle the issue.

How We Can Protect our Business from Competition in the Future (the Moat)

In ‘The New Moats: Why Systems of Intelligence™ are the Next Defensible Business Model‘, the author Jerry Chen makes the point that traditional defensive moats around businesses are becoming obsolete and that the new moat is built around intelligent technology.

A Moat Around My Castle

“Companies that focus too much on technology without putting it in context of a customer problem will be caught between a rock and a hard place — or as I like to say, between open source and a cloud place.”

Jerry Chen goes on to explain that he believes the new competitive defences will be built around systems of intelligence that can combine several data sources to create substantial value: “In all of these markets, the battle is moving from the old moats, the sources of the data, to the new moats, what you do with the data.“

Personally, I only agree half-way, in particular because Jerry Chen places a lot of expectations on Artificial Intelligence. Things might evolve that way, but for the moment, from my experience trying to create value-added applications for organizations, it is the engagement of the users around the data that creates value. The meaning is given by the users’ experience (although this might need to be facilitated and supported to properly define those items of value).

Yes, future defences against competition will lie in the clever development of data meaning. But let’s not forget the engagement of people around the data-set, and the softer component of value creation. Here lies the real value of the future.

How to Identify Groupthink

General Patton said, “If everyone is thinking alike, then somebody isn’t thinking.“ He was probably wary of groupthink.

Patton Thinking

In any case I tend to agree: if there is too much agreement on something non-trivial, something is wrong. Either we don’t have the right people in the room, or they feel compelled to agree with the majority view.

I like to be the one offering a different approach or opinion. I know it takes guts to tell a high-powered executive with obviously flawed plans what people don’t dare tell him straight. It’s risky in some organizations. Maybe that’s why I became an independent consultant.

But remember: if people agree too easily on a controversial subject, something’s wrong in the organization.
