Commentary

Meta’s Oversight Board is unprepared for a historic 2024 election cycle

Darrell M. West, Senior Fellow - Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

Natasha White, Research Intern - The Brookings Institution

July 3, 2024


  • The record number of global elections taking place in 2024 will test how Meta deploys its election-related content moderation policies and language capabilities across its 3 billion daily users.
  • Meta’s 22-person Oversight Board is responsible for binding decisions regarding the implementation of Meta’s content moderation policies, but the board appears underprepared to moderate misinformation for elections across 64 countries.
  • Non-English-speaking communities are at increased misinformation risk; Meta’s Oversight Board should revise its counter-misinformation spending to allocate additional funds to non-English-language and local content moderation efforts.
Photo: The European Commission informed Meta, owner of Facebook, Instagram, and WhatsApp, that its policy of asking users to either accept personalized advertising across its services or pay may violate the European Union’s digital “gatekeeper” rules. Source: Deutsche Presse-Agentur GmbH/REUTERS

It’s time for Meta to bolster and enforce its election-related content moderation policies and language capabilities.

The 2024 election cycle presents a unique challenge for social media giants like Meta, which bear responsibility for moderating the spread of election-related misinformation on their platforms. With over 3 billion daily users across the Meta ecosystem of Facebook, Instagram, and WhatsApp, and a record number of national elections taking place globally this year, the stakes surrounding Meta’s role as a conduit for false and misleading election information, and its Oversight Board’s power to check it, have never been higher.

Early flexes of AI’s disinformation muscle

In January, an AI-generated voice clone of President Joe Biden went viral, encouraging New Hampshire voters to abstain from voting in the state’s 2024 presidential primary. The recording was later linked to a consultant working for a rival candidate, whose commissioning of the call violated federal and state laws against voter suppression in elections. The robocall is proving to have been a mere preview of coming attractions: the ready availability of AI sound, image, and now video-generating tools has made deepfake election content considerably easier to produce, and considerably more common.

But the threat to elections isn’t limited to deepfakes and misleading AI content on social media. Many worry that campaigns will use their own social media channels to spread false information, and that tech companies won’t adequately intervene as their platforms continue to host democracy-compromising content ahead of elections.

The problem is not new, but it faces increased scrutiny because of the leaps in technological progress made since the last major round of elections in 2020 and the correspondingly greater complexity of the threats. After investigations found that social media played a key role in election interference in 2016, leading social media firm Meta created an oversight board ahead of the 2020 U.S. election, with the stated goal of “[exercising] independent judgment over some of the most difficult and significant content decisions” on its platforms.

Meta’s Oversight Board: Champions of accountability or mere spectators?

The board, composed of 22 multinational, cross-industry experts, exists as an autonomous body that judges a selection of user appeals of content decisions and hands Meta binding rulings on whether posts should be reinstated or removed under Meta’s content policy. The board also plays a key role in shaping the future of Meta’s content policy through its policy recommendations, to which Meta must respond within 60 days. According to the Oversight Board’s Transparency Report for the second half of 2023, however, an astounding 41.8% of its recommendations were declined, still awaiting implementation, or otherwise not acted upon by Meta.

Despite the accountability the Oversight Board provides, the 22-person body appears woefully underprepared to deliver the much-needed moderation of election-charged misinformation in a year when 64 countries, which collectively hold nearly 50% of the world’s population, head to the polls. Over the last year and a half, like many other tech firms, Meta significantly downsized its civic integrity and trust and safety teams. Following the 2020 elections, the firm dissolved its civic integrity team, and in October 2023, it terminated over 180 content moderators based in Kenya. On April 29, Stephen Neal, chairperson of the Oversight Board Trust, shared that the board would be laying off members of its own staff in order “to further optimize [its] operations by prioritizing the most impactful aspects of [their] work.” Most crucially, these layoffs will affect staff who support the board with administrative, research, and translation tasks.

In the meantime, legislative processes in the United States are struggling to keep up and set guidelines for tech firms. In the past year, over 100 bills that seek to regulate AI’s election-related disinformation potential have been introduced or passed in 39 state legislatures. The Supreme Court recently heard arguments over a pair of disputed laws from Texas and Florida that would let the government determine what political content social media companies must keep online, shielding content from ‘selective’ moderation by companies like Meta. In response, the tech industry and its proponents have argued that platforms have the right to curate what their users see. But what degree of moderation is realistic and possible for Meta’s increasingly limited Oversight Board?

Lost in translation: Non-English-speaking communities at increased misinformation risk

The global implications of Meta’s and similar companies’ de-emphasis of manipulated media and misinformation policies have already become visible, especially in countries where English is not the first language. During Brazil’s 2022 election campaign, political violence reached staggering heights, with posts on Meta’s platforms amplifying calls for violence. In June 2023, Meta’s Oversight Board ruled that a clip of a Brazilian general urging people to “hit the streets” promoted political violence and urged the social media firm to remove the post. In this case and many others, moderation came too late.

In the United States, where only 38% of first-generation Latino immigrants report English proficiency, election misinformation leaves many communities especially vulnerable. Home to 25% of the nation’s Hispanic population, and with 5.8% of its residents having limited English proficiency, California will be a key state in which election information must be moderated in both English and Spanish. For immigrants and communities with low English proficiency in America, language barriers are often compounded by distrust in democratic systems and an overreliance on social media sites like Facebook and WhatsApp for news. With fewer resources to debunk election-related deepfakes and misinformation, these communities are more frequently targeted by misinformation campaigns in their native languages, which are less moderated and less frequently fact-checked.

In its Transparency Report for the second half of 2023, Meta’s Oversight Board addressed its role in election-related content only in relation to the tech company’s need to update its lists of banned language for countries holding elections in 2024. Considering that the social media-based threats to election integrity already seen in 2024 far outstrip such quick fixes, the Oversight Board should revise its counter-misinformation spending, allocating a greater portion of funds to non-English-language and local content moderation efforts in countries holding national elections. On Meta’s Facebook platform alone, 87% of counter-misinformation funds cover English-language cases, although English speakers account for just 9% of global Facebook users. When it comes to voting and election information, the persistence of such trends across Meta’s other platforms, Instagram and WhatsApp, suggests a disturbing reality about the neglect of non-English speakers in Meta’s content moderation efforts.

No room for oversight: The time for Meta to safeguard democratic institutions is now

To ensure that election-related content moderation in languages other than English does not fall by the wayside, Meta should consider crafting special guidelines for countries holding elections in 2024. In particular, Meta might expedite election-specific policies, improve the geographic equity of the cases the board hears from regions outside the U.S. and Canada, and increase its responsiveness to, and enforcement rate of, the Oversight Board’s rulings and recommendations. The Oversight Board itself has identified these, among others, as problem areas. So why, as elections draw closer, does Meta still not act to protect its most vulnerable users?


Acknowledgements and disclosures

    Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.