How Newsrooms Can Stand Out in the Age of AI
News organizations have long used artificial intelligence (AI) for tasks such as automation, transcription, and content personalization. The emergence of generative AI, exemplified by models like ChatGPT, has sparked discussion about the opportunities, risks, and ethical implications for journalism. Rather than dismissing the technology, newsrooms should explore how to leverage it to their advantage while addressing the challenges it presents.
One area of concern is intellectual property. Large language models (LLMs) depend heavily on high-quality input, especially when accuracy is paramount: feed a model poor-quality material, and its output will be correspondingly poor. To get that quality, these models are trained on vast amounts of content, including work that publishers such as The Guardian, The New York Times, and other major high-quality outlets have invested significant resources in producing. Yet publishers have received neither compensation nor acknowledgment for the use of their content.
On the one hand, this situation gives media organizations an opportunity to highlight the value they create and to seek intellectual-property claims and payment from the makers of generative AI models. On the other, it poses a major business risk: publishers must urgently reassess how they protect their intellectual property and navigate this ongoing issue. If the public can access high-quality, distilled intellectual property through AI, publishers' entire business model is in danger: consumers will opt for the easier option rather than maintain an ongoing subscription to these news sources.
Why is this? News publishers have been making deliberate efforts to improve the user experience, evolving from simple content providers into platforms that give audiences control over how they consume news. Generative AI could let users customize the format of the content they consume. Users might set the tone of the articles they read, which could entrench existing biases, or choose to receive information in layman’s terms, stripping a piece of its complexity altogether. While this raises a whole other set of ethical questions for journalists, the shift also has implications for publishers' bottom line and ad revenue. To avoid catastrophe, news providers can and should prioritize building direct relationships with their audience. They need to position themselves as “destinations” that offer unique value beyond being mere content producers.
AI also poses an obstacle to accuracy and trustworthy information. There have already been instances in which AI-generated content containing errors was published without proper verification. A model that is not retrained or updated with new knowledge in due time will gleefully invent information and present it as fact, and AI is already flooding the news landscape with low-quality, pink-slime-esque content. News brands, however, may be able to turn this shortcoming to their advantage: they can differentiate themselves by becoming trusted sources of high-quality content and by being transparent about their research processes. Human-verified, reliable information that helps audiences make informed decisions and navigate the complexities of the world will become a rarity in the age of AI, and that rarity is a chance to redirect audience habits and stay relevant.
Quality, originality, research, and verification are four things AI cannot guarantee in its current state. As for the future? It is difficult to say. AI is advancing alarmingly fast, and the language models as we know them today could look entirely different in as little as a year. Journalists and newsrooms must keep engaging with what AI has to offer in order to stay relevant alongside the new technology, or journalism itself risks being inundated with stories written by non-human entities.