How Easy It Is to Fool Artificial Intelligence

I love this funny post: ‘Hackers stuck a 2-inch strip of tape on a 35mph speed sign and successfully tricked 2 Teslas into accelerating to 85mph’. The point here is not really about Tesla’s reliability, but about how easy it still is to trick Artificial Intelligence recognition tools.

In this particularly funny example, the researchers only slightly altered the speed limit sign, and that was enough to trick the sign-recognition algorithm that watches the road and determines the applicable speed (see the image). This type of system is increasingly common in cars, generally just to update the applicable speed limit that is displayed as guidance to the driver.

What is really impressive here is obviously how easy it seems to be to fool Artificial Intelligence-based recognition software. If that’s the case for something so obvious and mundane, then what are the consequences for more complex applications like face recognition? Are they just as easy to fool?
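The tape trick is a physical-world cousin of what the research literature calls adversarial examples: small, carefully chosen changes to an input that flip a model’s output. Below is a minimal, purely hypothetical sketch in Python (a toy logistic-regression classifier on synthetic data, not Tesla’s system or the researchers’ actual method) showing how little an input needs to change to cross a model’s decision boundary.

```python
# Purely illustrative sketch (not the real tape attack): train a tiny
# logistic-regression "classifier" on synthetic data, then flip its
# prediction with a small per-pixel perturbation of one input.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: 256-dimensional "images" whose classes differ
# only slightly (per-pixel means -0.1 vs +0.1, noise of std 1).
n, d = 200, 256
X0 = rng.normal(loc=-0.1, scale=1.0, size=(n, d))
X1 = rng.normal(loc=+0.1, scale=1.0, size=(n, d))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train the classifier with plain gradient descent on the logistic loss.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

# Take a correctly classified class-1 example.
x = next(row for row in X1 if predict(row) == 1)

# For a linear model, the smallest uniform per-pixel change that flips
# the decision is |score| / sum(|w|); nudging every pixel just past that
# amount, against the sign of its weight, crosses the decision boundary.
score = x @ w + b
eps = 1.05 * score / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

print("original prediction:   ", predict(x))      # 1
print("adversarial prediction:", predict(x_adv))  # 0
print("per-pixel change:      ", round(eps, 3))   # small vs. pixel noise (std 1)
```

The toy model is linear, so the flip can be computed exactly; real vision networks are attacked in essentially the same spirit, by following gradients, and in the physical case by printing or taping a perturbation onto the object itself.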

Artificial Intelligence does not yet seem to be fully robust. Some progress is still needed!
