How to cope with an infodemic

As people around the globe struggle with the impacts of the COVID-19 pandemic, we are also coping with a parallel infodemic. Popularized by the World Health Organization, the term refers to the dynamics of our modern information space, where trustworthy information is difficult to distinguish from an overwhelming din of competing, and in some cases conflicting, voices. Indeed, with COVID-19, there are already countless cases of false rumors, purposeful misinformation, and baseless conspiracy theories spreading both online and off.

Rumors and misinformation are nothing new to the crisis context. Researchers have long described rumoring, the act of creating and spreading rumors, as a natural response to the conditions of uncertainty and anxiety that accompany crisis events. The sociologist Tamotsu Shibutani described rumoring as a collective sensemaking process in which people work together to generate shared interpretations of an unfolding event, building on each other’s speculations. This sensemaking process has positive effects: when rumors turn out to be true, they bring with them important informational and psychological benefits.

However, our collective efforts at sensemaking also make us acutely vulnerable during times of crisis to the spread of both accidental misinformation and intentional disinformation. With its inherent and persistent uncertainty, the COVID-19 pandemic is a perfect storm for the spread of misinformation. The science of how the disease spreads and how it can be treated is uncertain and dynamic, and these scientific questions will take time to resolve. And that uncertainty makes us both anxious and vulnerable.

How tech platforms have responded

For social media companies, the infodemic poses an acute challenge. Amid the uncertainty of an unfolding crisis, they need to provide a space for their users to surface and vet accurate information, and that requires allowing potentially inaccurate information to be shared and discussed. Yet they also need to prevent the deliberate spread of known disinformation. From Russian trolls to partisan political operatives to medical con men, there is no shortage of bad actors seeking to leverage the crisis for political and financial ends.

The good news is that the major platforms are at least taking action. They have implemented stricter policies and taken stronger action against COVID-19 misinformation than they have against, for example, political misinformation. Most notably:

  • Facebook has taken a range of actions including banning advertisements that attempt to exploit the crisis (for example by using misinformation to sell medical products), adding banners directing users to authoritative information about COVID-19, using external fact checks to add labels to misinformation about the coronavirus, and creating a “hub” where users can find updates from health authorities.
  • Twitter updated its “safety policy,” extending its definition of harm, and asserted that it would ban tweets that “could place people at a higher risk of transmitting COVID-19,” for example by denying experts’ recommendations, advocating for ineffective or harmful treatments, denying scientific facts about the virus, or spreading unverified claims that “incite people to action and cause widespread panic.” The platform also implemented new procedures to verify and promote the content of “authoritative voices” on the coronavirus.
  • Medium articulated a policy encouraging authors to rely on “journalism best practices” and describing a “risk analysis” framework that would inform decisions to remove content. Similar to Twitter, this framework relied on perceptions of harm (including what and how severe the consequences of the misinformation would be) and focused on specific kinds of health claims. The policy explicitly prohibited denial of social distancing recommendations, promotion of unproven alternative treatments, and the spread of certain conspiracy theories.

Many other platforms made COVID-specific changes to their policies as well. First Draft has a great overview of policy statements and actions taken in response to COVID-19 by several different platforms.

The platforms appear to be acknowledging, and accepting, their power and responsibility to stem the flow of misinformation. By performing risk assessments based on both content and reach, they are recognizing that potential harm is magnified by social influence and algorithmic amplification within their platforms. Though misinformation often begins at the fringes of the media ecosystem, its potential harm grows dramatically when it is amplified by influencers, and Twitter deserves credit for applying its new policies to high-profile political operatives, media pundits, and even elected leaders. Perhaps for the first time, the platforms are acknowledging that some of the worst misinformation comes from “the top.” This is a good first step toward a healthier information environment.

Sensemaking and censorship during COVID-19

During an acute crisis like the COVID-19 pandemic, people may be more accepting of these restrictive policies, but it is important to caution that the “emergency powers” platforms invoke today could persist beyond the pandemic. It is crucial that we scrutinize these policies now and ensure they are not carried forward into other times and contexts without renewed attention.

First, state and platform censorship of certain content could dampen the collective sensemaking process that is vital both for information transfer and for coping psychologically with impacts of the event. Consider “social credit” policies in China that punish social media users for sharing what the Chinese government considers misinformation. These policies may limit the spread of rumors but likely also chill speech, reducing the spread of accurate information and content critical of the government.

Silencing voices that challenge official response organizations, and to some extent even privileging the messages of those organizations as “authoritative voices,” may not be as straightforwardly positive as it seems. During an event like this one, populations need to be able to criticize government responses and challenge government claims that conflict with other evidence. Without the early whistleblowers in Wuhan (who were accused of spreading false rumors), this outbreak might have spread further, faster. And in the U.S., there is emerging criticism of the CDC’s early recommendations against wearing masks, which may have misled people about their efficacy. These are both cases where information that conflicted with the messages of official government response organizations, information that might have been labeled as “misinformation,” helped us get closer to the truth.

Information sharing is an innately human response to crisis events. Social media platforms enable people to come together and share information at unprecedented scales—and in new ways. In just a few years, these platforms have become part of the critical infrastructure of crisis response. Researchers of disaster sociology remind us that human behavior during crisis events is often pro-social, and recent studies document people using social media platforms in altruistic ways—for example, to find and share critical information and to organize volunteer efforts. These platforms have also become a place where people converge to make sense of the event and deal with its psychological and social impacts.

Fine-grained policing of content may inadvertently silence the collective sensemaking process that is so vital for people coping with the pandemic’s complex impacts. By focusing instead on the influencers who select and mobilize content for political or reputational gain, and not on the sensemakers who are trying to understand a frightening, dynamic situation, the platforms can significantly dampen the spread of misinformation while still providing a place for people to come together to cope with the impacts of the pandemic.

Kate Starbird is an associate professor in the Department of Human Centered Design & Engineering at the University of Washington and the director of the Emerging Capacities of Mass Participation (emCOMP) Laboratory.

Facebook and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
