How Artificial Intelligence Needs to Be Regulated

In July, Elon Musk's remarks on Artificial Intelligence before a meeting of US governors drew extensive media coverage. See, for example, Fortune's 'Elon Musk Says Artificial Intelligence Is the Greatest Threat We Face as a Civilization'.

"AI's a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it's too late," Musk remarked. He then drew a contrast between AI and traditional targets for regulation: "AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not."

His point is that, left unregulated, AI might learn to manipulate its environment to achieve goals harmful to (some) humans. Musk has access to the latest AI developments, and those outside the field may find it difficult to grasp current capabilities. In any case, his warning deserves to be heard, and regulation might be a good thing: AI could be used as a weapon, and weapons are generally regulated. A regulatory approach would do no harm, and the challenge of safety certification for AI-driven objects could be the right way to tackle the issue.