How The Way to Regulate the Safety of Software Needs to Change

In a previous post, ‘How Artificial Intelligence Challenges Our Regulatory Approach to System Risk’, we discussed how regulators are challenged by modern software, and particularly by AI, whose responses can be inherently unpredictable.

Modern cars are software-driven

Even before AI enters the picture, the predictability of conventional software is already in question. In highly regulated industries such as nuclear and aerospace, regulators have historically been very strict about the use of commercial software, requiring that all ‘non-functional code’ be removed for the particular application. However, this makes software very expensive, because it has to be redeveloped on an ad-hoc basis for each application.

As underlined in the Atlantic article ‘The Coming Software Apocalypse’, other industries such as the automotive industry have never invested in that kind of scrutiny, even though the complexity of their code has increased dramatically. As mentioned in a previous post, the only workable solution is to move to code auto-generation from system models. The regulatory issue then becomes certifying and auditing the software that generates the code, and the system model that drives it. This significantly shifts the focus of regulation: instead of certifying the end product, the production chain itself needs to be reviewed and certified.
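To make that shift concrete, here is a minimal sketch, in Python and purely illustrative rather than any particular industrial toolchain, of code auto-generation from a system model: a small, declarative state-machine model of a hypothetical brake controller, and a generator that emits the controller code from it. Under this approach, the regulator's attention goes to the model and to the generator, not to the emitted code.

# Minimal sketch of model-based code generation (illustrative only).
# The safety argument shifts from reviewing the generated controller
# line by line to reviewing the model and the generator that produces it.

# Hypothetical system model: a tiny state machine for a brake controller.
MODEL = {
    "initial": "IDLE",
    "transitions": [
        {"from": "IDLE", "event": "pedal_pressed", "to": "BRAKING"},
        {"from": "BRAKING", "event": "pedal_released", "to": "IDLE"},
    ],
}

def generate_controller(model: dict) -> str:
    """Emit Python source implementing the state machine described by the model."""
    lines = [
        "class Controller:",
        "    def __init__(self):",
        f"        self.state = {model['initial']!r}",
        "",
        "    def handle(self, event):",
    ]
    for t in model["transitions"]:
        lines += [
            f"        if self.state == {t['from']!r} and event == {t['event']!r}:",
            f"            self.state = {t['to']!r}",
            "            return self.state",
        ]
    lines.append("        return self.state  # unknown events leave the state unchanged")
    return "\n".join(lines)

if __name__ == "__main__":
    source = generate_controller(MODEL)
    print(source)            # the generated artifact, no longer reviewed by hand
    namespace: dict = {}
    exec(source, namespace)  # compile and exercise the generated controller
    ctrl = namespace["Controller"]()
    assert ctrl.handle("pedal_pressed") == "BRAKING"
    assert ctrl.handle("pedal_released") == "IDLE"

The point of the sketch is that the generated source never needs to be written or inspected by a person: if the model and the generator are trusted, the output can be trusted, which is exactly why certification effort should move upstream to those two artifacts.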

Regulators should drive this transition, defining how code is generated from system models and how those generators and models will be certified and approved. They have been slow to move into this space, but will hopefully do so soon.
