A Recent Analysis Co-Authored by Google Researchers Sheds Light on AI-Generated Misinformation

A recent analysis co-authored by Google researchers highlights the rapid growth of AI-generated misinformation online. Published as a preprint last week, the study was written by researchers from Google, Duke University, and several fact-checking and media organizations. It introduces a large new dataset of misinformation fact-checked by websites such as Snopes, dating back to 1995.

According to the researchers, the data reveals that AI-generated images have quickly risen in prominence, becoming nearly as popular as more traditional forms of manipulation. The study, first reported by 404 Media after being spotted by the Faked Up newsletter, found that "AI-generated images made up a minute proportion of content manipulations overall until early last year."

Last year saw the release of new AI image-generation tools by major tech players, including OpenAI, Microsoft, and Google itself. As a result, AI-generated misinformation is now "nearly as common as text and general content manipulations," the paper said.

The researchers observed that the increase in fact-checking of AI images coincided with a general wave of AI hype, which may have led fact-checking websites to focus on the technology. However, the dataset indicates that fact-checking of AI content has slowed in recent months, while traditional text and image manipulations have been on the rise.

The study also found that video hoaxes now make up roughly 60 percent of all fact-checked claims that include media.

AI has been used to generate fake images of real people, with concerning effects. For instance, fake nude images of Taylor Swift circulated widely earlier this year. The images were reportedly created with Microsoft's AI image-generation software, which the company licenses from ChatGPT maker OpenAI. The incident prompted Microsoft to close a loophole that had allowed such images to be generated.

The rise of AI-generated misinformation has caused headaches for social media companies and for Google itself. Fake celebrity images have featured prominently in Google image search results in the past, thanks to SEO-driven content farms, and using AI to manipulate search results violates Google's policies.

To address the problem of AI fakes, Google has launched initiatives such as digital watermarking, which identifies AI-generated images with a mark that is invisible to the human eye. The company, along with Microsoft, Intel, and Adobe, is also exploring giving creators the option to add a visible watermark to AI-generated images.
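To make the idea of an invisible watermark concrete, the sketch below hides a short tag in the least-significant bits of an image's pixel values, a change of at most one brightness level that the human eye cannot perceive. This is a toy illustration only, not Google's actual watermarking method (which is designed to survive cropping, compression, and other edits); the `TAG` payload and function names are hypothetical.

```python
import numpy as np

# Hypothetical payload marking the image as AI-generated.
TAG = b"AI"

def embed_watermark(pixels: np.ndarray, payload: bytes = TAG) -> np.ndarray:
    """Write each payload bit into the least-significant bit (LSB)
    of consecutive pixel values."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = pixels.flatten().copy()
    assert out.size >= bits.size, "image too small for payload"
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return out.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, length: int = len(TAG)) -> bytes:
    """Recover `length` bytes from the LSBs of consecutive pixel values."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits.astype(np.uint8)).tobytes()

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(read_watermark(marked))  # b'AI'
    # Flipping an LSB shifts a value by at most 1 of 255 levels,
    # which is imperceptible to the human eye.
    print(np.abs(marked.astype(int) - image.astype(int)).max())  # 1
```

A naive LSB mark like this is easily destroyed by resaving or resizing the image, which is why production systems embed the signal more robustly across the image rather than in individual bits.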