How Deep-fakes Become a Public Concern for Companies

In this interesting post by Darin Stewart, 'Executives Need a Deep-Fake Defense Strategy', the issue of deep-fakes being used for fraud is highlighted. Examples show how deep-fakes can convince employees that they are actually being addressed by their CEO, leading them to take inappropriate actions.

AI and machine learning permeate modern business and communications. It should come as no surprise that these technologies are being turned to illicit purposes. Deepfakes are audio, images, and videos that appear real but are actually AI-generated, synthetic creations. They are just the latest manifestation of disinformation in what the RAND Corporation describes as a culture of “truth decay.”

Swindlers may increasingly use these techniques in the future. They may also be used to compromise individuals and exert pressure on them, or deployed in public or political settings to destroy the reputation of adversaries.

Therefore, finding ways to detect deep-fakes is becoming an important issue, and considerable resources will need to be devoted to this challenge. We can expect deep-fake detectors to be embedded in our communication software in the near future. In any case, we need to be ever more watchful for signs of deception in any communication!
