How AI-based Systems Show Substantial Limits in Addressing Extremes

In this interesting post, ‘Deepnews.ai, progress report #3’, Frederic Filloux explains the struggle to get an AI framework to actually work on a specific task: checking the veracity of news.

The idea is to develop an AI-based application able to give a quality rating to news articles. As stated in the post, using a neural network the system can rate the quality of a news article with 80% accuracy. The interesting part, of course, is that the system sometimes fails on pieces that are highly accurate and interesting but very different from the average – and those can be highly valuable, Pulitzer Prize potentials.

There are two learning points from these experiments:

  • The mysterious and, so to speak, dangerous beauty of A.I. models is that they are rarely fully understood by their creators.
  • Once trained, an AI will correctly appraise information close to the average, but it will be at a loss with data that is significantly off the charts, rating it in absurd ways – AI algorithms thus promote conformity (the sketch after this list illustrates the effect).
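
This failure mode is easy to reproduce. The following minimal Python sketch (invented toy data, not Deepnews.ai's actual model or features) trains a small neural network on inputs drawn from a narrow “average” range, then queries it both inside and far outside that range: the in-range prediction tracks the true value, while the out-of-range one is an essentially arbitrary extrapolation.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy stand-in for article features: values drawn from the
    # "average" range the model will see during training.
    X_train = rng.uniform(0.0, 1.0, size=(500, 1))
    # Ground-truth "quality" is a simple smooth function of the feature.
    y_train = np.sin(2 * np.pi * X_train[:, 0])

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

    # In-distribution input: the prediction is close to the true value.
    print(model.predict([[0.5]]), np.sin(2 * np.pi * 0.5))
    # Out-of-distribution input (the "Pulitzer-grade outlier", far from
    # the training range): the prediction is an arbitrary extrapolation.
    print(model.predict([[5.0]]), np.sin(2 * np.pi * 5.0))

The model has no notion that 5.0 is unlike anything it was trained on; it simply returns a number, with no signal that the input is off the charts. That silent confidence on outliers is exactly what makes exceptional articles get absurd ratings.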

AI is a great tool, but its limits need to be understood. I am particularly concerned that it may force conformity onto social systems.
