Using generative AI, a small team can spread the next global disinformation campaign to billions in less than a day. Here’s how to fight it.
In July 1983, one of history’s most insidious 'fake news' campaigns took root in India. It began with a letter in Delhi's 'Patriot' newspaper, allegedly written by an anonymous American scientist. The letter claimed AIDS was the result of a Pentagon biological experiment that had gone out of control.
The letter — as Johns Hopkins School of Advanced International Studies professor Thomas Rid writes in his book Active Measures: The Secret History of Disinformation and Political Warfare — was in fact part of a KGB 'dezinformatsiya' plot called Operation DENVER.
Rid detailed how, in 1987, elements of the Patriot 'story' eventually found their way onto America’s CBS Evening News via a series of plants and (mis)quotes in newspapers across Asia, Africa, South America, and Europe. Moscow soon ‘disavowed’ the story, but it had taken hold. By 2005, hip-hop megastar Kanye West would sing ‘I know the government administer AIDS’ (sic) in his somewhat ironically named song Heard 'Em Say.
Operation DENVER took 22 years and dozens of secret agents to go from an anonymous letter in a small Indian newspaper to a 'Billboard Hot 100' chartbuster. The next global disinformation campaign could be planned, set up, and executed by a small team, from a single room, in less than one day. This war could have devastating effects off the battlefield. It will start with a flood of stories, images, and videos created by generative artificial intelligence (AI) and published on dozens of websites, also built and deployed instantly by AI.
These will be amplified on social media by millions of instantly generated bots, targeting hundreds of millions of real people with near-perfect 'deepfake' audio, video, and images.
Twitter’s new algorithm, which boosts content from ‘blue check’ users, would unwittingly help ‘verified’ disinformation spread faster. This is exactly what happened on May 22, 2023, when a ‘blue check’ account falsely claiming affiliation with Bloomberg News tweeted an AI-generated photo depicting an explosion outside the Pentagon. The image spread quickly, and media houses around the world even covered it as fact.
This single blue-check account tweeting a single AI-generated photo caused the S&P 500 to shed a reported $500 billion in market cap before the news was disproved. Imagine the devastation when a network of 1,000 malicious blue-check accounts is wielded in unison. The cost of 1,000 blue checks? $8,000.
No one will be safe from this war. The simultaneous simplicity and sophistication of generative AI software means it can be used to fight well-funded governments, corporations, and celebrities, or even trivial workplace or schoolyard rivalries. Overwhelmed with information — real and fake — we will be unable to separate fact from fiction.
Solutions for this tech nightmare rely not just on code, but on policy and preparation.
To start, all content — whether created by humans using Microsoft Word, Adobe Photoshop, or Apple Final Cut Pro, or by generative AI such as Bard and DALL-E — could be signed with a digital fingerprint. Further edits would log not only what changes were made, but also the identity of the editor. Media and tech companies such as the BBC, Adobe, Intel, Microsoft, and Canon have joined hands in one such effort: the Coalition for Content Provenance and Authenticity (C2PA).
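The idea can be sketched in a few lines. This is a deliberately simplified illustration, not the real C2PA scheme: actual C2PA manifests use X.509 certificates and embedded metadata, while here an HMAC over the content hash stands in for the creator's signature, and all function names and the `SECRET_KEY` placeholder are hypothetical.

```python
import hashlib
import hmac
import json

# Placeholder for a real signing key; C2PA would use certificate-based keys.
SECRET_KEY = b"creator-private-key"

def sign_content(content: bytes, author: str) -> dict:
    """Attach a provenance record: content hash, author, and a signature."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"sha256": digest, "author": author, "edits": []}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return record

def log_edit(record: dict, editor: str, change: str) -> None:
    """Append an edit entry identifying the editor, then re-sign the record."""
    record["edits"].append({"editor": editor, "change": change})
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()

def verify(record: dict, content: bytes) -> bool:
    """Check both the content hash and the signature over the edit history."""
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```

Any tampering with the content or the edit log breaks verification, which is the property such fingerprinting schemes rely on.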
Media projects such as the International Fact-Checking Network (IFCN), reinforced by AI-powered fact-checking companies such as Logically, also play a vital role in identifying and red-flagging fake news. Working off open-source standards, red-flagged content should be blacklisted on platforms such as Google, Wikipedia, Instagram, Twitter, and even WhatsApp to stop or slow its spread. Red-flagging would also allow journalists and readers to spot misinformation before unwittingly reporting it as fact.
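At its simplest, such a shared blacklist is a registry of content hashes that platforms consult before amplifying an upload. The sketch below is hypothetical (the registry and function names are illustrative, not any real platform API):

```python
import hashlib

# Hypothetical shared registry of content flagged by fact-checkers
# (e.g., IFCN signatories). Stored as SHA-256 hashes of the raw bytes.
red_flag_registry: set = set()

def flag_content(content: bytes) -> None:
    """A fact-checker adds flagged content to the shared registry."""
    red_flag_registry.add(hashlib.sha256(content).hexdigest())

def should_suppress(content: bytes) -> bool:
    """A platform checks an upload against the registry before amplifying it."""
    return hashlib.sha256(content).hexdigest() in red_flag_registry
```

In practice, exact hashes break as soon as an image is re-encoded or cropped, so real systems favour perceptual hashes that match near-duplicates; the exact-hash version above only shows the lookup flow.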
Regulators have a role, too, in framing policies that support the creation and adoption of such standards, as well as mechanisms to catch and penalise entities that subvert them.
What if malicious content broke through before getting red-flagged?
Tech and media companies must build mechanisms to issue 'recalls' that alert users who were exposed to disinformation. Facebook pioneered this in 2020, when it warned users who were fed posts identified as COVID-19 misinformation. This must become a standard practice.
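A recall mechanism of this kind presupposes that the platform logs who was exposed to which post, so a later flag can trigger targeted alerts. A minimal sketch, with hypothetical names throughout:

```python
from collections import defaultdict

# Hypothetical exposure log: post ID -> set of user IDs who saw the post.
exposure_log = defaultdict(set)

def record_view(post_id: str, user_id: str) -> None:
    """Log that a user was shown a post."""
    exposure_log[post_id].add(user_id)

def recall(post_id: str, correction: str) -> list:
    """Build the alert messages to send to every user exposed to a flagged post."""
    return [f"To {user}: {correction}" for user in sorted(exposure_log[post_id])]
```

The design question platforms face is retention: exposure logs must be kept long enough for fact-checks to catch up, yet not so long that they become a privacy liability.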
The problem, of course, is most people are unwilling to accept they have been deceived. Countering this requires aggressive public education — in schools and online — about disinformation techniques and how to recognise and avoid falling for them.
Governments, which protect millions of lives, and companies with billions of dollars at stake must prepare, too. Start by creating a cross-functional 'Red Team' drawing from your legal, communications, HR, marketing, IT, finance, security, and operations teams. Have them map the parts of your organisation that are vulnerable to disinformation. Set up 'command centres' to track news and social media for disinformation in real time. Update your crisis playbook so each team knows how to respond — internally and externally — when under attack.
You can now use special apps to digitally sign text, images, and video content, proving they were created by you. Do this, and make official, authenticated content easy to find on your website. Also, consider engaging experts who use forensic tools to detect manipulated content. Don't forget to educate rank-and-file staff on what to do if they spot potentially malicious information. Finally, update your playbooks as often as is reasonable.
Even with these measures in place, disinformation will break through. Human beings are fallible, and all technology eventually becomes obsolete. While no one knows for sure what the future holds, let's take a page out of Maya Angelou's book: if we hope for the best and are prepared for the worst, we'll be unsurprised by anything in between.
It's time to get prepared.
This article was first published in afaqs.