AI Misinformation: How It Works and Ways to Spot It
Determining what's real online is getting more difficult as AI and deepfakes spread across social media platforms. But there are steps you can take to deal with it.
A year and a half before the 2024 presidential election, the Republican National Committee began running attack ads against US President Joe Biden. This time around, however, the committee did something different.
It used generative artificial intelligence to create a political ad filled with images depicting an alternative reality with a partisan slant — what it wants us to believe the country will look like if Biden gets reelected. The ad flashes images of migrants coming across US borders in droves, a world war imminent, and soldiers patrolling the streets of barren US cities. And at the top left corner of the video, a small, faint disclaimer — easy to miss — notes, "Built entirely with AI imagery."
It's unclear what prompts the RNC used to generate this video. The committee didn't respond to requests for more information. But it appears to have been built around ideas like "devastation," "governmental collapse" and "economic failure."
Political ads aren't the only place we're seeing misinformation pop up via AI-generated images and writing. And they won't always carry a warning label. Fake images of Pope Francis wearing a stylish puffer jacket, for instance, went viral in March, suggesting incorrectly that the religious leader was modeling an outfit from luxury fashion brand Balenciaga. A TikTok video of Paris streets littered with trash amassed more than 400,000 views this month, and all the images were completely fake.
Generative AI tools like OpenAI's ChatGPT and Google Bard have been the most talked-about technology of 2023, with no sign of a letup. They're showing up in virtually every field, from computer programming to journalism to education, and the technology is being used for social media posts, major TV shows and book writing. Companies such as Microsoft are investing billions in AI.
Generative AI tools — built using huge amounts of data, often gobbled up from across the internet, and sometimes from proprietary sources — are programmed to answer a query or respond to a prompt by generating text, images, audio or other forms of media. Tasks such as making photos, writing code and creating music can easily be done with AI tools; simply adjust your prompt until you get what you want. The technology has sparked creativity for some, while others worry about the potential threats these systems pose.
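To make that prompt-and-response loop concrete, here's a minimal sketch in Python. It assumes the OpenAI Python client is installed and an API key is set in the environment; the model name and prompt are purely illustrative, not a recommendation.

# Minimal sketch: ask a generative model for text from a plain-language prompt.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable are set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Write a two-sentence caption for a photo of a city street."}
    ],
)

print(response.choices[0].message.content)  # the generated text

The same loop, adjusting the prompt and regenerating until the output looks right, is what makes these tools as easy to misuse as they are to use.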
Problems arise when we can't tell AI from reality. Or when AI-generated content is intentionally made to trick people — so not just misinformation (wrong or misleading information) but disinformation (falsehoods designed to mislead or cause harm). Those aiming to spread misinformation can use generative AI to create fake content at little cost, and experts say the output can do a better job fooling the public than human-created content.
The potential harm from AI-generated misinformation could be serious: It could affect votes or rock the stock market. Generative AI could also erode trust and our shared sense of reality, says AI expert Wasim Khaled.
"As AI blurs the line between fact and fiction, we're seeing a rise in disinformation campaigns and deepfakes that can manipulate public opinion and disrupt democratic processes," said Wasim Khaled, CEO and co-founder of Blackbird.AI, a company that provides artificial intelligence-powered narrative and risk intelligence to businesses. "This warping of reality threatens to undermine public trust and poses significant societal and ethical challenges."
AI is already being used for misinformation purposes even though the tech giants that created the technology are trying to minimize risks. While experts aren't sure if we have the tools to stop the misuse of AI, they do have some tips on how you can spot it and slow its spread.
What is AI misinformation and why is it effective?
Technology has always been a tool for misinformation. Whether it's an email from a relative filled with wild conspiracy theories, Facebook posts about COVID-19 or robocalls spreading false claims about mail-in voting, those who want to fool the public will use tech to accomplish their goals. It's become such a serious problem in recent years — thanks in part to social media providing a ramped-up distribution tool for misinformation peddlers — that US Surgeon General Dr. Vivek Murthy called it an "urgent threat" in 2021, saying COVID misinformation was putting lives at risk.
Generative AI technology is far from perfect — AI chatbots can give answers that are factually wrong and AI-created images can have an uncanny valley look — but it's easy to use. It's this ease of use that makes generative AI tools ripe for misuse.
Misinformation created by AI comes in different forms. In May, Russian state-controlled news outlet RT.com tweeted a fake image of an explosion near the Pentagon in Washington, DC. Experts cited by NBC say the image was likely created by AI, and it went viral on social media, causing a dip in the stock market.
NewsGuard, an organization that rates the trustworthiness of news sites, found more than 300 sites it refers to as "unreliable AI-generated news and information websites." These sites have generic but legitimate-sounding names, and the content they produce has included false claims such as celebrity death hoaxes and reports of events that never happened.
These examples may seem like obvious fakes to savvier online users, but the content created by AI is improving and becoming harder to detect. It's also becoming more compelling, which helps malicious actors who are trying to push an agenda through propaganda.
"AI-generated misinformation tends to actually have greater emotional appeal," said Munmun de Choudhury, an associate professor at Georgia Tech's School of Interactive Computing and co-author of a study looking at AI-generated misinformation published in April.
"You can just use these generative AI tools to generate very convincing, accurate-looking information and use that to advance whatever propaganda or political interest they are trying to advance," said de Choudhury. "That type of misuse is one of the biggest threats I see going forward."
Bad actors using generative AI can boost the quality of their misinformation by giving it a stronger emotional appeal, but there are instances where AI doesn't need to be told to create false info. It can do so all on its own, and the result can then be spread unwittingly.
Misinformation isn't always intentional. AI can generate its own false information, called a hallucination, said Jevin West, an associate professor at the University of Washington Information School and co-founder of the Center for an Informed Public, in his Mini MisinfoDay presentation in May.
When AI is given a task, it's supposed to generate a response based on real-world data. In some cases, however, AI will fabricate sources — that is, it's "hallucinating." That can mean references to books that don't exist or news articles that purport to be from well-known websites like The Guardian.
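One practical check for hallucinated citations is to confirm that a cited link actually resolves to a real page. The sketch below uses Python's requests library; treat it as a rough first pass, since a URL that loads can still host false content and a dead link isn't automatic proof of fakery. The example URL is hypothetical.

# Rough check: does a cited URL resolve at all? Hallucinated references often don't.
# Assumes `pip install requests`.
import requests

def link_exists(url: str) -> bool:
    """Return True if the URL responds with a successful status code."""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False  # bad domains, timeouts, connection errors

cited_url = "https://www.theguardian.com/an-article-that-may-not-exist"  # hypothetical example
if link_exists(cited_url):
    print("The link resolves; now check the content itself.")
else:
    print("The link doesn't resolve; treat the citation with suspicion.")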
Google's Bard struck a nerve with company employees who tested the AI before it was made available to the public in March. Those who tried it out said the tech was rushed and that Bard was a "pathological liar." It also gave bad, if not dangerous, advice on how to land a plane or scuba dive.
This double whammy of plausible, compelling AI-created content is bad enough. But it's some people's eagerness to believe the fake content that helps it go viral.
What to do about AI misinformation?
When it comes to combating AI misinformation, and the dangers of AI in general, the developers of these tools say they're working to reduce the harm the technology may cause. Some of their moves, however, seem to run counter to those intentions.
Microsoft, which invested billions of dollars into ChatGPT creator OpenAI, laid off 10,000 employees in March, including the team responsible for making sure ethical principles were in place when AI was used in Microsoft products.
When asked about the layoffs on an episode of the Freakonomics Radio podcast in June, Microsoft CEO Satya Nadella said that AI safety is a critical part of product making.
"Work that AI safety teams are doing are now becoming so mainstream," Nadella said. "We're actually, if anything, doubled down on it. ... To me, AI safety is like saying 'performance' or 'quality' of any software project."
The companies that created the technology say they're working on reducing the risk of AI. Google, Microsoft, OpenAI and Anthropic, an AI safety and research company, formed the Frontier Model Forum on July 26. The objective of this group is to advance AI safety research, identify best practices, and collaborate with policymakers, academics and other companies.
Government officials are also looking to address the issue of AI safety. US Vice President Kamala Harris met with leaders of Google, Microsoft and OpenAI in May about the potential dangers of AI. Two months later, those leaders made a "voluntary commitment" to the Biden administration to reduce the risks of AI. The European Union said in June that it wants tech companies to begin labeling AI-created content even before it passes legislation requiring them to do so.
What online giants are doing about AI misinformation
To combat AI-generated misinformation ahead of the 2024 US presidential election, Google will require, starting in mid-November, that political ads made with AI carry a disclosure.
"All verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events," says Google's updated policy, which also applies to content on YouTube. "This disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users. This policy will apply to image, video and audio content."
Meta is introducing a similar requirement for political ads on Instagram and Facebook, starting Jan. 1.
"Advertisers will have to disclose whenever a social issue, electoral or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered," Meta's new policy says.
What you can do to avoid gen AI misinformation
There are AI tools available to detect content created by AI, but they're not up to par yet. De Choudhury's study found that these misinformation-detecting tools need continual learning to keep pace with AI-generated misinformation.
In July, OpenAI took down its own tool for detecting AI-written text, citing its low rate of accuracy.
Khaled says a bit of skepticism and attention to detail go a long way toward determining whether a piece of content is AI-generated.
"AI-generated content, while advanced, often has subtle quirks or inconsistencies," he said. "These signs may not always be present or noticeable, but they can sometimes give away AI-generated content."
Four things to consider when trying to determine whether something is generated by AI or not:
Look for AI quirks: Odd phrasing, irrelevant tangents or sentences that don't quite fit the overall narrative are signs of AI-written text. With images and videos, changes in lighting, strange facial movements or odd blending of the background can be indicators that they were made with AI. For images, a quick metadata check, sketched after this list, can also offer a clue.
Consider the source: Is this a reputable source such as the Associated Press, the BBC or The New York Times, or is it coming from a site you've never heard of?
Do your own research: If a post you see online looks too crazy to be true, then check it out first. Google what you saw in the post and see if it's real or if it's just more AI content that went viral.
Get a reality check: Take a timeout and talk with people you trust about the stuff you're seeing. It can be harmful to keep yourself in an online bubble where it's becoming harder to tell what's real and what's fake.
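On the metadata point mentioned above, here's a minimal sketch using the Pillow imaging library to read a file's EXIF tags. Photos straight from a camera or phone usually carry tags such as make, model and capture time, while many AI-generated images carry none. It's a weak signal only, since metadata is easily stripped from real photos, and the file name here is hypothetical.

# Weak signal: real camera photos often carry EXIF metadata; many AI images don't.
# Absence of metadata is a hint, not proof. Assumes `pip install Pillow`.
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("suspicious_photo.jpg")  # hypothetical file
exif = image.getexif()

if not exif:
    print("No EXIF metadata found; treat the image with extra skepticism.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")  # e.g. Make, Model, DateTime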
What continues to work best when fighting any kind of misinformation, whether it's generated by humans or AI, is simply not to share it.
"No. 1 thing we can do is think more, share less," West said.
Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.