Big Tech's Rollback of Security Measures: A Threat to the Fight Against Misinformation

Ahead of the 2024 election cycle, the world's largest tech companies have made controversial decisions to roll back security measures originally implemented to combat misinformation. YouTube and Meta—Facebook's parent company—are at the forefront of these policy changes, drawing concern from lawmakers and consumer advocacy leaders. This shift raises questions about the delicate balance between protecting political speech and curbing the spread of harmful misinformation.

YouTube recently confirmed its decision to reverse its election integrity policy, which aimed to remove content that propagated falsehoods about fraud, errors, or glitches in the 2020 presidential election. The policy was established in December 2020, after several states had certified their election results. YouTube justified the reversal by asserting that the policy could "limit political speech" without significantly reducing the risk of real-world harm or violence. In doing so, YouTube is essentially allowing misleading information about the election to persist on its platform and baseless challenges to future elections to thrive in an uncontrolled space.

Meta, for its part, made headlines by reinstating the Instagram account of Robert F. Kennedy Jr., a prominent anti-vaccination voice who was previously removed from the platform for spreading misinformation about COVID-19 vaccines, claims that even Kennedy's own family has refuted. The reinstatement, prompted by Kennedy's candidacy in the 2024 Democratic primary, raises concerns about the prioritization of political figures over public health and safety. While Meta maintains separate rules for political figures, granting them greater leeway in their speech, restoring the account of an individual known for spreading misinformation undermines efforts to combat false information surrounding COVID-19.

The regulation of political speech on social media platforms has always been a contentious issue in the United States. While traditional broadcast channels are prohibited from censoring political ads that contain falsehoods, the rules for social media platforms remain less defined, in part because Congress has passed no legislation to regulate big tech companies. Meta regulates political speech differently than it does standard posts; the platform holds political figures and world leaders to a distinct set of rules that do not apply to ordinary citizens. YouTube appears to be heading in a similar direction, potentially creating a disparity between the regulation of political and non-political content.

So what options remain for fighting the spread of misinformation and disinformation when big tech companies appear to be inviting a free-for-all?

Joan Donovan, a prominent misinformation expert and author of the book Meme Wars, argues that “the problem isn’t that election misinformation exists, but rather that social media as a product amplifies misinformation-at-scale, which makes it everyone’s problem now." 

Kathleen Hall Jamieson, director of the Annenberg Public Policy Center, says the answer to the inundation of misinformation is the inverse: inundation of correct information. "Flood the zone with the best available information, [to] make sure that when the misinformation gets up there, you've got corrective context with good information up next to it,” she said.

The security and reliability of social media accounts have frequently come into question of late, in part due to Twitter's takeover by right-wing misinformation enthusiast Elon Musk. Twitter's mass layoffs markedly compromised the platform's ability to combat misinformation and respond to potential threats, and Musk recklessly reinstated accounts belonging to offenders such as David Duke, the former Grand Wizard of the Ku Klux Klan, and former President Donald Trump, who used misinformation to incite an insurrection against the United States. These developments raise questions about whether platforms have the resources and personnel to adequately address the challenges misinformation will pose during the upcoming election cycle, and about what the consequences of that cycle will be.

Other widely used platforms, including Hulu and Spotify, plan to begin running political ads that will be subject to the prohibition on censoring them, meaning more people may be exposed to misinformation and disinformation deployed for political gain. Overall, the 2024 election cycle paints a worrisome picture for the state of democracy: if misinformation continues to infiltrate politics in the United States, it is only a matter of time before a bad actor takes advantage, with catastrophic consequences for the institutions we hold dear.

Alan Herrera is the Editorial Supervisor for the Association of Foreign Press Correspondents (AFPC-USA), where he oversees the organization’s media platform, foreignpress.org. He previously served as AFPC-USA’s General Secretary from 2019 to 2021 and as its Treasurer until early 2022.

Alan is an editor and reporter who has worked on interviews with such individuals as former White House Communications Director Anthony Scaramucci; Maria Fernanda Espinosa, the former President of the United Nations General Assembly; and Mariangela Zappia, the former Permanent Representative to Italy for the U.N. and current Italian Ambassador to the United States.

Alan has spent his career managing teams as well as commissioning, writing, and editing pieces on subjects like sustainable trade, financial markets, climate change, artificial intelligence, threats to the global information environment, and domestic and international politics. Alan began his career writing film criticism for fun and later worked as the Editor on the content team for Star Trek actor and activist George Takei, where he oversaw the writing team and championed progressive policy initiatives, with a particular focus on LGBTQ+ rights advocacy.