How the Failure of Autonomous Cars Shows the Limits of Self-Taught AI

There was great hope of quickly achieving autonomous driving, but it appears that this dream must be postponed for quite a few years. A good summary is given in this Quartz article, ‘Autonomous vehicles: self-driving car wreck’.

The key point, I find, is the following: “AV researchers assumed driving enough test miles would lead to self-driving cars, an idea that emerged from an influential 2009 white paper by Google researchers, ‘The Unreasonable Effectiveness of Data’. It demonstrated how […] sufficient data could solve (most) problems.”

Driving, it turns out, isn’t one of them. The open road is too complex, and there are too many unexpected dangers, to design a self-driving system from data alone. AV companies are now shifting gears and building “safety cases” borrowed from the aviation and safety industries that identify and solve for possible points of failure. This detour means AVs will arrive later than once thought.

This extract shows that there are limits to self-taught AI, along with the associated certification challenges. The future probably lies in a mix of AI and deterministic programming.

This failure probably matters more than is generally acknowledged: it suggests that the hope of purely self-taught AI solving complex problems may be an illusion. It has not yet deflated the AI tech bubble, so let’s expect more disappointments in this area!
