Following up on our post 'How Artificial Intelligence Challenges Our Regulatory Approach to System Risk', let's discuss some possible new regulatory approaches.
- Capturing lessons learnt and redistributing them across the entire ecosystem is a cornerstone of safety enhancement. Artificial Intelligence (AI) makes this much easier thanks to the possibility of remote updates, as Tesla has successfully demonstrated.
- Implementing a statistical approach instead of a deterministic one. Some statistical risk analysis approaches have been available for years in the form of fault trees, which determine the probability of a feared accident. However, this only works in environments where statistical failure data for components is available, and where changes to the environment and the system remain limited. New statistical approaches will have to be developed based on specific testing of the entire AI-related system. These approaches need to be developed both theoretically and empirically, and they remain the major challenge of the years to come.
- Rules governing operability of the system in case of component failure will have to be strictly defined and enforced (with how many sensors out of order is it still safe to drive autonomously?), because degraded situations are the most difficult and cumbersome to regulate.
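To make the fault-tree idea above concrete, here is a minimal sketch of how a top-event probability is computed from component failure probabilities, assuming independent failures. The gate structure and the probability values are purely illustrative assumptions, not data from any real system.

```python
from math import prod

def gate_probability(gate, inputs):
    """Probability of a gate's output event, assuming independent inputs.

    gate: "AND" (all inputs must fail) or "OR" (any one failing suffices).
    inputs: failure probabilities, or nested (gate, inputs) tuples.
    """
    probs = [gate_probability(*p) if isinstance(p, tuple) else p
             for p in inputs]
    if gate == "AND":
        return prod(probs)
    # OR gate: complement of "no input fails"
    return 1 - prod(1 - p for p in probs)

# Hypothetical tree: the feared accident occurs if the primary sensor
# fails AND either the backup sensor or the fallback logic also fails.
top = gate_probability("AND", [1e-3, ("OR", [1e-2, 1e-4])])
print(top)  # about 1.01e-05
```

The independence assumption is exactly what breaks down for AI-heavy systems: when failures correlate through a shared learned model, this simple multiplication no longer holds, which is why new approaches are needed.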
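A strictly defined operability rule for degraded situations could look like the sketch below: a table of minimum working sensor counts per type, checked before autonomous operation is permitted. The sensor types and thresholds are hypothetical illustrations, not actual regulatory values.

```python
# Hypothetical degraded-mode rule table: sensor names and minimum
# counts are illustrative assumptions only.
MINIMUM_OPERATIONAL = {"camera": 2, "radar": 1, "lidar": 1}

def autonomy_permitted(working_sensors):
    """Allow autonomous driving only if every sensor type meets
    its minimum working count."""
    return all(working_sensors.get(kind, 0) >= minimum
               for kind, minimum in MINIMUM_OPERATIONAL.items())

print(autonomy_permitted({"camera": 3, "radar": 2, "lidar": 1}))  # True
print(autonomy_permitted({"camera": 1, "radar": 2, "lidar": 1}))  # False
```

The appeal of such a rule for regulators is that it is enforceable and auditable; the open question raised above is how to set the thresholds with statistical confidence rather than by convention.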
Developing these new statistical approaches to safety demonstration is an exciting challenge facing all regulators. I am looking for some science behind this; if any reader has useful links, please share!