How We Need to Increase Efforts to Protect Against Inadequate AI-Generated Content

Following up on the previous post ‘How We Underestimate the Availability of AI Generated Content’, this interesting piece, ‘AI-generated fake content could unleash a virtual arms race’, goes one step further by looking at the consequences of this technology.

“[The exercise of generating a fake AI-generated website provided] a glimpse into a potentially darker digital future in which it is impossible to distinguish reality from fiction.

Such a scenario threatens to topple the already precarious balance of power between creators, search engines, and users. The current flow of fake news and propaganda already fools too many people, even as digital platforms struggle to weed it all out. AI’s ability to further automate content creation could leave everyone from journalists to brands unable to connect with an audience that no longer trusts search engine results and must assume that the bulk of what they see online is fake.”

The real issue is that machines can generate content far faster than humans can review it, while social networks rely mainly on human moderators to weed out inadequate content. Such generation tools could thus be “weaponized […] to unleash a tidal wave of propaganda [that] could make today’s infowars look primitive”.

There is thus an urgent need to develop “increasingly better tools to help us determine real from fake and more human gatekeepers to sift through the rising tide of content.”
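To make the idea of such tools a little more concrete, here is a minimal sketch of what automated screening could look like in a moderation queue. It assumes the Hugging Face `transformers` library and the publicly available `roberta-base-openai-detector` checkpoint; the model choice and the confidence threshold are illustrative assumptions on my part, not something described in the quoted article.

```python
# Minimal sketch: flag text that a classifier considers likely machine-generated.
# Assumes the Hugging Face `transformers` library and the public
# `roberta-base-openai-detector` checkpoint (labels "Real"/"Fake");
# the model and the 0.9 threshold are illustrative choices only.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

def looks_machine_generated(text: str, threshold: float = 0.9) -> bool:
    """Return True if the detector labels the text 'Fake' with high confidence."""
    result = detector(text, truncation=True)[0]
    return result["label"] == "Fake" and result["score"] >= threshold

# Example usage: screen one item from a hypothetical moderation queue.
sample = "This is an example paragraph that a moderation pipeline might screen."
print(looks_machine_generated(sample))
```

Of course, a classifier like this only shifts the arms race the article describes: as generators improve, detectors have to be retrained, which is exactly why human gatekeepers remain part of the equation.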
