The promise of restorative justice in addressing online harm

As public pressure has increased for social media platforms to take action against online harassment and abuse, most of the policy debate has centered on Section 230 of the Communications Decency Act, which shields social media companies like Facebook and Twitter from being sued over most content users publish on their sites. The presumptive Democratic presidential nominee, Joe Biden, has said he wants to revoke it because of platforms’ tolerance of hate speech. President Donald Trump appears to agree, but for a different reason: what he claims is the platforms’ anti-conservative bias.

Whether Section 230 is tweaked, repealed, or unchanged, platforms will likely respond to online harm in fundamentally the same way they are now. Echoing the American criminal justice system, the main strategy that platforms use is to remove offensive material—and sometimes the users who post it—from communities.

Instead of removing content and users, we argue for a different approach to content moderation, one based on the principles of restorative justice: focus on repairing harm rather than punishing offenders. Offenders are (usually) capable of remorse and change, and victims are better served by processes that meet their specific needs than by processes aimed at punishing those who harmed them. Transformative justice—a closely related idea—emphasizes that an incident of harm is an opportunity for a community to change the conditions and norms that enabled it, to try to prevent the same harm from happening again. By building content moderation policies and practices around restorative and transformative justice, social media platforms have the opportunity to make their spaces healthier and more resilient.

Broken windows policing for the internet

Currently, social media platform companies rely on several layers of content moderation, including policy teams that craft site-wide rules restricting certain kinds of content, algorithms that detect words or images that may violate policies, paid content moderators who classify flagged content on an industrial scale, and volunteer moderators who attend to smaller communities or topics. All of these efforts are directed at finding content that violates platform policies and removing it. When users repeatedly post forbidden content, they typically face increasing restrictions on their ability to participate on the platform. Users interpret these restrictions as punishments, even if the platforms themselves don’t typically use the word “punish.” The result is a punitive system that uses silencing and expulsion to achieve compliance with platform policies.

The problems with this approach to harm in online spaces are similar to the well-known limitations of the criminal legal system. Punishment itself is generally ineffective as a deterrent for those who harm others and rarely addresses the needs of those who have been harmed. This punitive system also does not encourage offenders to learn about the harm they have done and work to repair it, nor does it change the conditions and norms that facilitated the harm in the first place.

Commercial content moderation is in some ways the internet equivalent of broken windows policing—the debunked theory that swiftly punishing low-level disorder makes a community safer and healthier. Similarly, AI systems that sweep platforms for banned language and images provide a superficial level of cleanliness, which relieves some of the public pressure on platforms to address the problem. But the underlying systems that allow people to harass remain intact.

In both the criminal legal system and commercial content moderation, the victims’ needs are generally an afterthought. In the criminal legal system, the person who has been harmed generally has little influence on the process of addressing the harm. Victims may be asked to testify, to submit an impact statement, or otherwise support prosecutors’ efforts to convict, but they typically have little or no control over the process. In some cases, they are offered counseling and financial restitution, such as through victims’ compensation funds. Platforms provide even fewer opportunities than the criminal legal system for victims to participate in a process or access advocates, support, or reparations. 

This approach doesn’t work well for offenders either. When private companies such as Facebook take this approach, they don’t provide offenders with the same protections that exist in the criminal legal system. Platforms have none of the due process obligations of the court system, such as explaining the nature of a violation, standardizing punishments to mitigate bias, or clarifying the appeals process.

A new approach to internet justice

Restorative justice emerged as a term in the 1990s but grew from theories and practices developed since the 1960s by different groups of activists, academics, and justice and social-work practitioners, including victim-offender mediation and family group conferencing. In general, restorative justice stresses “restoring” the victim and the community to where they were before the offense, and it is used in some juvenile justice, criminal justice, and family courts. Transformative justice tends to operate in grassroots organizations and typically has the more radical aim of “transforming” the community—often without involving the criminal legal system—so that the harm is not repeated.

Restorative and transformative justice practices both center on relationships and communication, typically through one-on-one meetings with facilitators and small group conversations that often include the victim, the offender, and people connected to them. Howard Zehr, a prominent proponent of restorative justice, explains that such meetings are typically part of a process designed to bring everyone with “a stake in a specific offense [together] to collectively identify and address harms, needs[,] and obligations in order to heal and put things as right as possible.” The group collectively develops a plan to repair the harm and, if possible, to eventually reintegrate the person who has committed the harm into the community.

Empirical studies have consistently found promise in restorative justice processes in a variety of contexts, including schools and workplaces. Both victims and offenders are generally more satisfied with the restorative justice process than the criminal legal system, and studies have found substantial reductions in repeat offending for both violence and property crime. Some U.S. schools that have implemented restorative justice practices have benefited from improved school climate, dramatically decreased suspension and expulsion rates, and reductions in bullying.

Some schools have used in-person restorative justice practices to respond to and attempt to prevent online bullying. The approach has been used at the college level too. At Dalhousie University in Nova Scotia, Canada, male dentistry students created a private Facebook group that included sexist, misogynist, and homophobic remarks and images, as well as posts targeting some of their female classmates. Once these issues came to light in 2014, the school opted for a restorative justice process to address them.

At the conclusion of a series of meetings and workshops, the members of the Facebook group wrote, in a collective statement: “We have come to accept our personal and shared responsibility for … the harmful ways in which we were building connection with one another. … We recognize more clearly the prejudice and discrimination that exists inside and outside of dentistry. … It may be impossible to undo the harms but, we commit, individually and collectively to work day by day to make positive changes in the world.”

The female participants in the restorative justice process wrote in their collective statement of their satisfaction with the outcomes. They explained that as a small, tight-knit community, they didn’t want to take a punitive approach and simply expel all the members of the Facebook group from the dentistry program. Others argued that the process was designed primarily to rehabilitate Dalhousie’s reputation and that the offenders should have faced steeper consequences. Such disagreements are understandable: Collective processes to repair harm are often more complex and contested than simply removing offenders from communities—or simply deleting content from platforms.

Tackling harm in online communities

Our ongoing research pursues a big question: How could restorative and transformative justice practices help address harm within online communities? From our initial interviews and workshops with online community moderators, we found that some are already using these practices informally. This shouldn’t be surprising, because these are simply the practices of a healthy, connected community. First, when harm happens, communities should prioritize the victim and their needs. Second, restorative and transformative justice stress that the offender is part of the community too. Instead of simply removing them or their content without further discussion, the goal should be to try to help them understand the harm, repair it, and, ideally, rejoin the community.

In most commercial content moderation systems, once a user makes a report, attention typically turns exclusively to the piece of content that violates policy: Is it really against the rules? Should it be flagged, removed, or demoted? Then, in some cases, platforms also address the user account that posted the content: Should the user be banned? Generally, the person who was being harassed is not consulted or updated again beyond a cursory note thanking them for reporting the issue. Usually, the best-case scenario for the victim is that the harmful content just disappears. Ideally, the harasser stops because the report to the platform helped them realize they crossed a line. But more often than not, harassers perceive these consequences to be arbitrary, unfair, and punitive, and they either find other ways to continue harassing the victim or eventually stop out of boredom or frustration.

In a restorative and transformative justice approach to moderation, the focus would be on the victim’s immediate safety and health, and on addressing the harm based on the needs of that person. What could the platform do to help the victim feel safe and respected in the space? The initial response might not even involve the harasser; for many victims, one of the most important things is for the community to validate that what they experienced was indeed harmful and wrong.

Restorative and transformative justice may also be most effective in smaller communities where people are invested in participating and where preserving their reputations matters to them. One participant in our research explained: “For community repair you need some semblance of shared community and adherence to or willingness to engage with a shared value set. You can’t do that with anon [anonymous] randos.”

But it’s not impossible in larger communities. One moderator of a large general discussion community whom we interviewed explained that despite having tens of thousands of users, the community has established norms for interacting on the site over its two decades of history and has a thorough onboarding process for new users. Working from that baseline, they explained that sometimes a small intervention from moderators can “nudge the conversation” and that community members routinely help other users understand why a comment might be offensive or insensitive.

Restorative and transformative justice isn’t the right approach for every incident of online harassment. Practitioners warn that it shouldn’t be used for conflict resolution, meaning that facilitators need to work to make sure that offenders are aware they’ve caused harm and can engage in good faith before they bring people together for a conversation. One of our interviewees explained: “If people are showing up in a very combative format, that is a great sign for you that they are not ready to do a circle [a small group meeting].”

The problem of scale

The principles of restorative and transformative justice require creative, individual responses to different kinds of harm. This means that even though different processes would share common practices, they cannot simply scale. The way a community addresses Islamophobic hate speech in one country will be very different from the way another addresses a case of revenge porn elsewhere. Successful restorative and transformative justice approaches would require platforms to hire practitioners with cultural competency, train them extensively, and compensate them fairly. Many transformative justice practitioners insist that facilitators should be closely connected to the communities they work in, which gives them the legitimacy to support people in collectively addressing systemic problems. As one transformative justice handbook explains, “People who are part of the culture in which oppressive practices and abusive behavior takes place are in the best position to challenge cultural relativism … [and the] cultural norms that support abusive power and systemic oppression.”

Restorative and transformative justice are resource-intensive because their primary tool is communication—one-on-one with a facilitator and in small groups—which can’t be properly carried out by a bot or an underpaid contract worker. Effectively addressing harm won’t be cheap or easy, but the current strategy of trying to minimize costs by applying one-size-fits-all solutions and taking a hands-off approach can have devastating outcomes.

If social media platforms want to retain their users long-term, we recommend they invest in restorative and transformative justice approaches. Training and supporting community moderators to use these practices could pay dividends later on, including reduced recidivism and an improved community climate.

Social media platforms are already set up for one-on-one and group conversations. What they lack in order to use restorative and transformative justice practices is trained facilitators. Large platform companies like Facebook have already demonstrated a willingness to invest in resources to train community moderators. Compensating community moderators for training and for restorative and transformative justice facilitation could give them the resources to address harm in their communities. Outside of these groups, platform companies could make quality restorative justice facilitation available to users as an alternative to simply flagging or reporting content.

In our research, one of our interviewees, a restorative justice practitioner, was hopeful about using it in online communities. They imagined a world in which trained community facilitators helped users learn about and practice the fundamental concepts of restorative justice, such as gaining an understanding of victim-centered approaches and how offenders can take responsibility and be accountable for repairing harm. And they thought that this approach could have broader social implications too: “To actually address violence, we need to scale up … I think about all the opportunities for people to practice accountability in an online space and how that could benefit everything.”

Amy A. Hasinoff is Associate Professor of Communication at the University of Colorado Denver.
Anna D. Gibson is a PhD candidate in the Department of Communication at Stanford University.
Niloufar Salehi is Assistant Professor at the School of Information at UC Berkeley.

Facebook and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 
