Big tech threats: Making sense of the backlash against online platforms

Dozens of cardboard cut-outs of Facebook CEO Mark Zuckerberg sit outside the U.S. Capitol Building as part of an Avaaz.org protest in Washington, April 10, 2018. (REUTERS/Leah Millis)

Introduction

Not long ago, information technology was heralded as a tool of democratic progress. Some referred to the Arab Spring uprisings that swept the Middle East as the “Facebook Revolution” because activists used social media to organize and rally fellow citizens. Online platform technologies, it was believed, helped promote equality, freedom, and democracy by empowering citizens to publish their ideas and broadcast their everyday realities unconstrained by gatekeepers, communicate freely with one another, and advocate for political reform.

In recent years, however, doubts have surfaced about the effects of information technology on democracy, and a growing tech-skeptic chorus is drawing attention to the ways in which it disrupts democratic life. No country is immune. From New Zealand to Myanmar to the United States, terrorists, authoritarian governments, and foreign adversaries have weaponized the internet. Russia’s online influence campaign during the 2016 U.S. presidential election demonstrated how easily and effectively bad actors can leverage platform technologies to pursue their own interests. Revelations about Cambridge Analytica, the political consulting firm hired by Donald Trump’s presidential campaign, which acquired personal data from as many as 87 million Facebook users, exposed Facebook’s failure to monitor the information third parties collect through its platform and to prevent its misuse.

The concern extends beyond isolated incidents to the heart of the business model undergirding many of today’s large technology companies. The advertising revenue that fuels the attention economy leads companies to create new ways to keep users scrolling, viewing, clicking, posting, and commenting for as long as possible. Algorithms designed to accomplish this often end up displaying content curated to entertain, shock, and anger each individual user. The ways in which online platforms are currently engineered have thus come under fire for exacerbating polarization, radicalizing users, and rewarding engagement with disinformation and extremist content. Not only have many large technology companies underinvested in protecting their platforms from abuse; they have designed services that amplify existing political tensions and spawn new political vulnerabilities.
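
To make the attention-economy mechanism concrete, consider a minimal sketch of engagement-optimized ranking of the kind critics describe. Everything in it (the signals, the weights, the names) is a hypothetical assumption; no platform’s actual ranking system is public or this simple.

    # Hypothetical sketch of engagement-optimized feed ranking.
    # Signals and weights are illustrative assumptions, not any
    # platform's real system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_clicks: float      # model's per-impression estimates
        predicted_comments: float
        predicted_shares: float
        predicted_dwell_secs: float

    def engagement_score(post: Post) -> float:
        # Reactions that keep users engaged are weighted most heavily,
        # which is why content that provokes comments and shares
        # (often the most shocking or anger-inducing) tends to rise.
        return (1.0 * post.predicted_clicks
                + 4.0 * post.predicted_comments
                + 6.0 * post.predicted_shares
                + 0.1 * post.predicted_dwell_secs)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Nothing here checks accuracy or civility: the objective is
        # attention, so the feed optimizes for attention.
        return sorted(posts, key=engagement_score, reverse=True)

Nothing in such a scoring function rewards accuracy or civility, which is precisely the structural critique: the objective is attention, and polarizing content happens to maximize it.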


Countries around the world have responded to this growing threat by launching investigations, passing new laws, and commissioning reports. The U.S., meanwhile, has lagged behind other governments even in the face of well-documented abuses during the 2016 election. It has been slower to rein in “big tech” in part because of a fear of state overreach, a constitutional and cultural commitment to free speech, and a reluctance to constrain the capacity of dynamic companies to innovate.

The steps taken by governments around the world, on the other hand, can be explained by some broad principles shared across borders. A growing international consensus holds that the ways in which today’s dominant online platforms are designed pose an inherent threat to democracy. Across a number of countries, lawmakers share the view that the structural design of the attention economy has given rise to disinformation and its rapid spread online. Today’s powerful technologies, they argue, have coarsened public discourse by satiating the appetite for political tribalism, serving up information, true or false, that accords with each user’s ideological preferences. They believe the ways in which dominant platforms filter and spread information online present a serious political threat not only to newer, more fragile democracies but also to long-standing Western liberal democracies.

While lawmakers in the U.S. are beginning to critique the ways in which online platforms have failed to police their own technologies, there remains a reluctance to respond to the digital economy’s negative side effects by establishing terms to regulate the flow of information and classifying certain content as unacceptable. This, many believe, would violate First Amendment free speech rights. Meanwhile, other countries have identified a clearer regulatory role to mitigate the threat online platforms pose to democratic societies.

A similar divide between the actions taken in Europe and the U.S. on online privacy issues has taken shape. Europe has responded forcefully to protect users’ online privacy, bolstering its already robust set of privacy laws when it passed the General Data Protection Regulation in the spring of 2016. The law is widely recognized as the toughest and most comprehensive digital privacy law on the books and is grounded in a cultural attachment to protecting the right of individuals to control access to their personal information.

Across the Atlantic, the U.S. embraces a different concept and culture of privacy. The American privacy regime largely focuses on protecting individuals from state intrusion and companies from red tape. At a time when individual companies hold an unprecedented amount of personal information on their users, the U.S. currently lacks a comprehensive federal privacy law governing the collection and use of personal data by technology companies.

A shared view of the market dynamics that lead to concentration in the digital economy has also begun to develop abroad. Competition enforcement agencies across a range of countries view data as an important source of market power that has given rise to a few dominant “data-opolies” that have amassed troves of users’ personal information. Lawmakers concerned about declining competition in the technology sector have argued that the digital economy does not require a whole new set of principles to guide competition enforcement but that enforcement should home in on the ways in which large technology companies are using data to weaken competition and leverage their dominant position to strengthen their hold on the market.

The consumer welfare framework, with its focus on achieving low prices, has long guided American antitrust enforcement and stands in stark contrast to the nascent framework being developed abroad. U.S. antitrust authorities, for their part, are now beginning to consider modernizing enforcement to adapt to the market realities of the digital age. For many years, the predominant view held that intervening in the tech sector would make it less dynamic. But mounting evidence that concentration in the tech sector can slow or even stifle innovation has fostered an openness to promoting greater competition in the digital economy through updated antitrust doctrines and metrics.

With a better understanding of the principles undergirding both foreign and domestic responses to the threats posed by big tech, each subsequent section of this paper lays out the specific dimensions of the political and economic problems that have arisen in the digital age, the policy responses and proposals pursued abroad, and the ideas guiding debate in the U.S. The goal of this paper is to serve as a resource so that as U.S. lawmakers consider how to improve transparency in online advertising, protect user privacy, mitigate the threat posed by harmful content, empower content creators dependent on online platforms, and ensure competition in the digital economy, they can draw on the experience of other democratic governments around the world.

Political advertising

Just a few years after its launch, Facebook announced it would begin running advertisements. The move empowered companies to target consumers with remarkable precision based on the massive amount of personal information the social media site holds on its users. Nearly twelve years later, spending on digital advertising outpaces spending on traditional advertising, including television, radio, and newspapers.

Digital advertising, however, has not merely enabled companies selling everything from designer clothing to groceries to reach potential customers. Online platforms have also provided a new mechanism for political campaigns, political action committees, and private citizens with their own agendas to target voters. Unlike political advertisements broadcast on television or radio, which are subject to stringent disclosure requirements, online political advertisements face few constraints. This omission has allowed bad actors to leverage the power of online platforms to curate messages to each voter’s ideological preferences and biases. Little oversight of the ads run on online platforms has compounded the problem. Facebook’s algorithm, for example, once allowed advertisers to target users interested in “How to burn jews.”

Between June 2015 and May 2017, the pro-Kremlin Internet Research Agency was able to purchase roughly 3,000 Facebook advertisements intended to sow division and discord in the U.S. during a highly contentious presidential campaign and political transition. In testimony to the Senate Committee on the Judiciary’s Subcommittee on Crime and Terrorism, Colin Stretch, Facebook’s General Counsel at the time, estimated that advertisements linked to the IRA’s fake accounts reached approximately 126 million Facebook users, none of whom knew their source.

Since Moscow’s successful online influence campaign in 2016, some social media sites have introduced new requirements for those trying to purchase online political advertisements. This attempt at self-regulation has produced unsatisfactory results, however. During the 2018 midterm election, those who paid for online political advertisements on Facebook were able to remain anonymous despite the social media company’s requirement that purchasers verify and disclose their identity.

Lawmakers around the world are beginning to push for public disclosure rules that would require online platforms to maintain a record of all online political advertisements and inform users who paid for the political advertisements they are shown. Lawmakers in the United Kingdom have proposed creating a publicly searchable database of online advertisements that would detail who paid for each ad, the issue covered, the period it ran online, and the demographic groups targeted. In December 2018, Canada modified its federal election laws to include new online advertisement transparency rules requiring online platforms to create a publicly accessible registry of any political advertisements they publish, detailing who paid for each ad. While Facebook plans to run political advertisements ahead of Canada’s upcoming election, Google has announced it will not sell political advertisements in Canada after the law’s implementation.

While some U.S. states have passed regulations to govern online ad transparency, federal proposals to do the same have stalled. Major online platforms are not currently required by law to publicly disclose who purchased an online advertisement, how much the individual or organization paid, the audience targeted, or the number of views received.


In October 2017, Sens. Amy Klobuchar, Mark Warner, and John McCain introduced the Honest Ads Act, which proposes new disclosure requirements for online political advertisements. The bill, if passed, would require online platforms with more than 50 million unique monthly visitors to keep a publicly accessible record of advertisements purchased by any individual or group spending more than $500. The record would include a digital copy of the advertisement, who paid for it, a description of the targeted audience, the number of views received, the dates displayed online, and the rate charged. The Honest Ads Act was incorporated into HR 1, House Democrats’ sweeping anti-corruption and voting rights bill, which passed the House in March, but it is unlikely the bill will be brought to the Senate floor for a vote. Since HR 1’s passage, Sens. Klobuchar, Warner, and Lindsey Graham have reintroduced the Honest Ads Act. Senate Majority Leader Mitch McConnell, however, worries that advertising disclosure requirements might raise First Amendment concerns. While there is a bipartisan appetite for online advertising transparency, lawmakers may fail to improve upon the status quo before the 2020 presidential election.
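
The disclosure record the bill envisions is easy to picture as a data structure. Below is a minimal sketch built only from the fields enumerated above; the field names and example values are hypothetical, since the bill specifies contents, not a schema.

    # Hypothetical record in the public ad file the Honest Ads Act would
    # require of platforms with over 50 million unique monthly visitors,
    # covering purchasers spending more than $500. Names are illustrative.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PoliticalAdRecord:
        ad_copy_url: str           # digital copy of the advertisement
        purchaser: str             # who paid for it
        audience_description: str  # description of the targeted audience
        views: int                 # number of views received
        first_shown: date          # dates displayed online
        last_shown: date
        rate_charged_usd: float    # rate charged

    example = PoliticalAdRecord(
        ad_copy_url="https://example.org/ads/123",  # placeholder URL
        purchaser="Example Advocacy Group",
        audience_description="adults 18-34 interested in energy policy",
        views=48_213,
        first_shown=date(2018, 10, 1),
        last_shown=date(2018, 10, 21),
        rate_charged_usd=1250.00,
    )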

User privacy

Online platforms that rely on targeted advertising to generate revenue are in the business of amassing as much personal information on their users as possible. For years, tech companies have been able to collect, use, and share users’ data largely unconstrained. A New York Times investigation found that Facebook gave a number of large technology companies access to users’ personal data, including their private messages. In another investigation, the Wall Street Journal found that smartphone apps holding highly sensitive personal data, including information on users’ menstrual cycles, regularly share data with Facebook. While Facebook users can prohibit the social media site from using their data to serve them targeted advertisements, they cannot prevent Facebook from collecting their personal data in the first place.

Meanwhile, high-profile data breaches have highlighted the inability of some of the largest tech companies to protect users’ information from misuse. Cambridge Analytica, a political-data firm linked to Donald Trump’s presidential campaign, targeted voters in the run-up to the 2016 presidential election by collecting private information from as many as 87 million Facebook users, most of whom had not agreed to let Facebook release their information to third parties. The campaign used this data to target personalized messages to voters and “individually whisper something in each of their ears,” as whistleblower Christopher Wylie described it. Just months after the Cambridge Analytica story broke, hackers broke into Facebook’s computer network and exposed nearly 50 million users’ personal information.

While users enjoy free access to many tech platforms, they are handing over their personal information with little understanding of the amount, nature, or application of the data tech companies hold on them and little ability to stop its collection. The Cambridge Analytica scandal revealed that entire political systems and processes, not just individual users, are vulnerable when large tech companies fail to properly handle users’ data and leave the door open to those interested in exploiting social and political rifts.

The European Union has made online user privacy a top priority, establishing itself as a global leader on the issue after it passed its General Data Protection Regulation. The law sets out new requirements for obtaining user consent to process data, mandates data portability, requires organizations to notify users of data breaches in a timely fashion, and allows steep fines to be imposed on organizations that violate the regulation. Less than a year after GDPR’s passage, French officials levied a hefty $57 million fine against Google for failing to inform users about its data-collection practices and obtain consent for targeted advertising. After confronting pressure from the European Commission, Facebook agreed to make clear to users that it offers its services for free by utilizing personal data to run targeted advertisements. In Ireland, Facebook is facing several investigations into its compliance with European data protection laws. These moves signal Europe’s commitment to tough enforcement under its new privacy regime.

Lawmakers in Australia and Canada are considering adopting a privacy framework similar to the EU’s GDPR. The Australian Competition & Consumer Commission recently called for amending the country’s Privacy Act to strengthen notification requirements for those collecting consumers’ personal information, require consumers’ explicit consent before platforms collect their data, allow consumers to withdraw consent and erase personal information, and increase the penalties for those who violate consumer privacy. Meanwhile, Canada’s House of Commons’ Standing Committee on Access to Information, Privacy and Ethics has called on the Canadian government to implement a new data privacy law that would give the Privacy Commissioner the authority to impose fines for noncompliance. Canada’s privacy commissioner has already announced plans to take Facebook to court for violating Canadian users’ privacy in the Cambridge Analytica data breach. The move follows the decision by the United Kingdom Information Commissioner’s Office to issue a $645,000 fine against Facebook for failing to prevent Cambridge Analytica from accessing users’ information without their consent.

In the U.S., the Federal Trade Commission is currently investigating Facebook’s failure to prevent the Cambridge Analytica data breach. Facebook expects a $5 billion fine for violating its consent decree with the FTC, under which the company agreed not to share users’ personal data without their consent. If the FTC issues a $5 billion fine against Facebook, it would be the largest penalty a U.S. regulator has ever levied against a technology company. The investigation may also result in structural remedies, including a requirement that Facebook create new positions devoted to strengthening user privacy and company compliance, overseen by an independent committee established by the FTC. Commissioners, however, disagree over whether Facebook CEO Mark Zuckerberg should be held personally liable for any future violations. Still, some argue that individual enforcement cases cannot substitute for comprehensive regulation.

As such, federal lawmakers have begun drafting online privacy legislation. In the past two years, four Senate bills regarding user privacy have been introduced, and senators on the Committee on Commerce, Science, and Transportation are currently drafting their own proposal, which would preempt California’s GDPR-style consumer privacy law. Both Democratic and Republican lawmakers have expressed concern over platforms’ treatment of users’ data and agree that a law addressing data privacy is needed. At a time of partisan gridlock, there may be a unique opportunity for a bipartisan push to pass a national privacy law.


Harmful content

While innovative tech companies have enabled new ways for individuals to develop meaningful connections with one another, online platforms have also created the largest forum for bad actors to post and disseminate violent and extremist content, fake news, and disinformation. In March, Facebook’s automatic detection system failed to stop a gunman from livestreaming his massacre at two mosques in Christchurch; the livestream ran for nearly 30 minutes before the social media company removed it. In 2017, neo-Nazis used Facebook to organize the Unite the Right rally in Charlottesville, Virginia, which drew several hundred white nationalists to the college town, where a counter-protester was murdered.

For a long time, online platforms have argued that they are not publishers and therefore not responsible for any harmful content their users post and circulate. Rather than policing their own platforms, tech companies have largely relied on users to flag inappropriate content. Recently, however, online platforms have taken steps to reduce the spread of harmful content, acknowledging they have a responsibility to more closely monitor the content they host. Facebook, for example, has announced new features and product updates it plans to roll out to monitor harmful content, including restrictions on its live video service. In May, the company banned a number of high-profile anti-Semites and provocateurs and extended the ban to Instagram, which it owns.

Meanwhile, an international push to stipulate tech companies’ responsibilities for harmful content has gained steam.

In 2018, Germany implemented a new law requiring online platforms to remove hate speech. Cognizant of the important role online platforms play in informing voters, French lawmakers recently passed a law that empowers judges to order tech platforms to remove disinformation during election campaigns, and they will soon consider whether the country’s online hate speech laws need updating.

In the wake of the Christchurch mosque massacres, Australian lawmakers passed a law that subjects online platforms to huge fines, and tech executives to jail time, if violent material is not removed from platforms in a timely manner. By contrast, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron have worked together to coordinate an international response, drafting the “Christchurch Call,” a non-binding pledge between governments and tech companies that sets out expectations regarding the removal of violent and extremist content. While tech companies will face no penalties if they fail to comply, the move signals growing international pressure for platforms to do a better job policing themselves, especially when it comes to content connected to real-world violence. Eighteen countries and five American tech companies—Amazon, Facebook, Google, Microsoft, and Twitter—signed onto the accord.

The most aggressive regulatory effort to date to rein in harmful content comes from the United Kingdom, where leading consumer protection regulators have called for establishing new government powers to regulate harmful content online, including extremist and violent content, disinformation, and material that exploits children. The U.K. government proposes a new regulatory body funded by tech companies and empowered to issue fines, block access to websites, and hold top executives of tech companies liable for the content on their platforms. The House of Commons’ Digital, Culture, Media and Sport Committee will soon hold hearings on the government’s proposal. In an interview with Business Insider, the U.K.’s digital minister urged “other governments [to] follow our lead.”

Driven by a concern that regulating harmful content online might violate Americans’ constitutional right to free speech, U.S. lawmakers are reluctant to consider any measures to rein it in. In fact, the Trump administration declined to sign onto the “Christchurch Call,” citing free speech concerns. In a statement explaining its decision, the administration noted that “the best tool to defeat terrorist speech is productive speech.” The move offers yet another illustration that the U.S. understanding of how best to manage the threat posed by harmful content online is increasingly out of step with the path pursued by other countries.

Nonetheless, U.S. lawmakers may soon force platforms to accept greater liability for the content they host. In April 2018, lawmakers amended Section 230 of the Communications Decency Act to allow prosecutors and sex trafficking victims to take websites to court if advertisements or posts facilitate trafficking. The move indicated that the legal regime that has long insulated online platforms from liability for the content they host may confront future challenges.


While tackling online privacy has attracted bipartisan support among U.S. lawmakers, the debate over reining in harmful content online is rife with partisan division. Prominent Republicans, including the president, argue that big tech is suppressing conservative speech. Just recently, the Senate Judiciary subcommittee held a hearing on “Technological Censorship and the Public Discourse,” in which Republican senators claimed that Facebook, Google, and Twitter stifle conservative speech. (Facebook’s recent move to ban extremists, which fell heavily on white nationalists as well as anti-Semites, may add fuel to this charge.) Democrats believe such claims distract from a bigger problem: Platforms have failed to aggressively police hate speech and disinformation. The growing bipartisan chorus to rein in big tech may mask significant differences in how each party views the threat posed by big tech.

Balance of power between content creators and platforms

While dominant online platforms have created a new home on the internet for fake news and conspiracy theories, they have also become an indispensable tool for the circulation of legitimate content. One in five U.S. adults regularly consumes news on social media. Online platforms have undoubtedly come to occupy a significant place in the information ecosystem, but in doing so they have also threatened the economic viability of the media organizations whose content they circulate.

Big tech’s dominance in digital advertising has hurt news outlets’ advertising revenue: Facebook and Google alone accounted for 60% of total digital advertising spending in 2018.

As a result, print and online media have struggled to sustain themselves financially. The economic realities confronting the news industry are dismal. Newsroom employment declined 23% between 2008 and 2017, and the U.S. lost 1,800 newspapers between 2004 and 2018. While newsrooms across the country have laid off reporters or ceased production, journalistic content has been used by online platforms to attract and engage users.

A number of countries have grown increasingly concerned about the future of journalism in the digital economy. The Australian Competition and Consumer Commission argues that digital platforms pose a serious challenge to the provision of journalism and has called for establishing a regulatory authority to oversee and monitor how digital platforms display news and identify how algorithms affect the production of news. Meanwhile, a report commissioned by the British government proposes creating a code of conduct to govern the relationship between news publishers and online platforms, investigating the online advertising industry to ensure fair competition, and providing tax relief to publishers to support public-interest news.

The European Union has already taken steps to level the playing field between content creators and online platforms. A recently passed copyright directive requires tech companies to enter into licensing agreements with content creators (including media companies) in order to share their content on a platform. The directive also holds platforms liable for any copyrighted content displayed without the proper rights and licensing.

Some U.S. lawmakers have expressed concern about the future of journalism in the digital age. In 2009, the Senate Committee on Commerce, Science, and Transportation held a subcommittee hearing on “The Future of Journalism.” That same year, Sen. Ben Cardin introduced the Newspaper Revitalization Act, which would have enabled newspapers to operate as nonprofits, making advertising and subscription revenue tax-exempt and enabling tax-deductible contributions to support coverage. On the campaign trail, entrepreneur turned Democratic presidential candidate Andrew Yang has proposed creating a government program that would place experienced journalists in local newsrooms around the country and establishing a Local Journalism Fund that would provide grants to local news outlets. While a number of organizations and coordinated efforts focused on sustaining local and investigative reporting have sprung up in the intervening years, the journalism industry’s health has not yet become a primary focus for U.S. lawmakers.

Competition

This imbalance of power between newsrooms and social media sites is mirrored throughout the digital economy, from third-party vendors selling on Amazon Marketplace to app developers making their products available for download on the App Store. Content creators and businesses hold little power to extract fairer terms from the large online platforms that have become indispensable business partners.

This reflects, in part, the inherent nature of the digital economy, which is made up of highly concentrated markets that favor dominance. A platform becomes more valuable to each individual user as the total number of users increases, and the more connections a user builds on a platform, the greater the switching costs that user incurs. This network effect makes it incredibly difficult for potential competitors to enter the market.
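
The arithmetic behind this dynamic is straightforward. A common heuristic (Metcalfe’s law, an assumption here rather than anything claimed by regulators) holds that a network’s value grows roughly with the number of possible connections among its users, so an incumbent with twenty times the users of an entrant is not twenty times more valuable but closer to four hundred:

    # Toy illustration of network effects via the Metcalfe's-law
    # heuristic: value ~ possible user pairs, n * (n - 1) / 2.
    # The user counts are made up for illustration.
    def network_value(users: int) -> int:
        return users * (users - 1) // 2

    incumbent_users, entrant_users = 2_000_000, 100_000
    ratio = network_value(incumbent_users) / network_value(entrant_users)
    print(f"user ratio: {incumbent_users / entrant_users:.0f}x")  # 20x
    print(f"value ratio: {ratio:.0f}x")                           # ~400x

On this rough model, an entrant cannot compete on features alone: most of a platform’s value lives in the connections users have already built elsewhere, which is exactly the switching cost described above.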

The massive amount of personal data platforms hold on their users also tips tech markets toward concentration. The more data a platform holds on its users, the more effectively it can customize the articles, photos, and posts an individual user is likely to enjoy, creating a feedback effect that has allowed a few platforms to dominate.

While these market dynamics inherently constrain competition, some tech companies have deliberately undermined competition to entrench their dominance. Big tech companies have bought up hundreds of start-ups, depriving the market of potential competitors; Google alone has acquired more than 200 start-ups since its founding. As a result, many venture capitalists and entrepreneurs have internalized a strategy of trying to be bought out by dominant tech companies instead of trying to compete against them. Major online platforms have also unfairly prioritized their own products and services and priced products below cost to undercut competitors.

Margrethe Vestager, the European Union Commissioner for Competition, has set the standard for tough enforcement against tech companies that weaken competition. In 2017, the European Commission levied a record $2.7 billion fine against Google for prioritizing its own online shopping service in search results. In 2018, the Commission broke this record when it brought a $5 billion antitrust fine against Google for using its mobile operating system to entrench the dominance of its other services (like Search and Chrome). This March, the Commission hit Google with a third fine of $1.7 billion for blocking advertising rivals. Vestager has also launched a preliminary probe into whether Amazon uses data on third-party merchants to compete against them in its own marketplace and will decide in the next few months whether to open a formal investigation. EU lawmakers, meanwhile, have already agreed to new regulation requiring platforms to be more transparent with the businesses and content creators that rely on them to reach consumers.

Meanwhile, Germany’s competition authority recently ruled that Facebook’s practice of combining user data across its services without users’ consent violates competition law. This is the first time a major competition enforcer in the EU has found a company in violation of competition law for failing to comply with data protection principles.

At the heart of these enforcement actions is an emerging international consensus that data is a new, under-examined source of market power in the tech sector. This understanding led the Canadian House of Commons’ Standing Committee on Access to Information, Privacy and Ethics to suggest moving competition enforcement in the tech sector “away from price-centric tools” and toward evaluating the value of data at stake between merging companies. Acknowledging the ability of data collection to weaken competition, the committee has also recommended establishing principles of data portability and system interoperability.

An emerging view also holds that antitrust and regulatory action may be needed to rein in big tech. A government-appointed Digital Competition Expert Panel in the United Kingdom recently concluded that neither antitrust action to ensure markets operate freely and competitively nor government intervention through regulation will be sufficient on its own. The panel calls for modernizing competition enforcement in the digital age by establishing a pro-competition digital markets unit, resetting merger assessment in digital markets, prioritizing scrutiny of mergers in digital markets (including assessing harm to innovation and potential impacts on competition), and performing retrospective evaluation on previously approved mergers. The panel also recommends developing principles for data mobility, identifying certain companies as having “strategic market status” and prohibiting them from prioritizing their own products and services on their platform, and creating open standards for user data to ensure consumers can easily transition to using another platform. In a similar vein, a recently released European Commission report on competition policy states “there is no general answer to the question of whether competition law or regulation is better placed to deal with the challenges arising from digitisation of the economy.” The report goes on to note that “competition law enforcement and regulation are not necessarily substitutes, but most often complements and can reinforce each other.”

The U.K. expert panel and the European Commission report both take the view that ensuring competition in the digital economy does not require changing the fundamental aims of competition law, but simply modernizing its enforcement. The European Commission report, for instance, warns that under-enforcement could threaten consumer welfare and argues that “even where consumer harm cannot be precisely measured, strategies employed by dominant platforms aimed at reducing the competitive pressure they face should be forbidden in the absence of clearly documented consumer welfare gains.” The emerging view is that the digital age challenges enforcers not to revise the goals of competition law but to rethink how they apply its existing principles.


In the U.S., a consumer welfare standard exclusively focused on low prices has failed to capture concentration in the tech sector as dominant technology companies have evaded scrutiny by offering their services for free or at a low cost. American antitrust enforcers, however, are beginning to re-evaluate this approach.

The Federal Trade Commission recently set up its own task force to examine competition in the technology sector. The task force will assess previously approved acquisitions and study antitrust enforcement in technology markets. Bruce Hoffman, the director of the FTC’s Bureau of Competition, has said the FTC could use its lookback authority to reverse mergers if necessary. The FTC also plans to use its authority to collect non-public information from tech firms to study the inner workings of tech companies and their privacy and competition practices.

Meanwhile, some U.S. lawmakers have already proposed ways to modernize antitrust enforcement for the digital age. Sen. Klobuchar, for example, introduced the Consolidation Prevention and Competition Promotion Act in September 2017, which would amend the Clayton Antitrust Act by banning acquisitions by any company with a market cap higher than $100 billion. The bill also calls for placing the burden of proof on companies to demonstrate that consolidation won’t limit competition.

Others have called for regulating tech companies like utilities. Trump’s former chief strategist Steve Bannon, for example, has argued that online platforms, like cable, have become a kind of modern necessity and should therefore be regulated in a similar manner. Opponents of this approach contend that regulating tech companies like utilities represents a concession to their dominance and to the market realities that make it difficult for innovative competitors to dethrone today’s dominant players.

On the campaign trail, Sen. Elizabeth Warren released her own plan to rein in big tech which calls for both tougher antitrust enforcement and utility-style regulation. The first part of her plan focuses on strengthening antitrust enforcement by appointing regulators who would reverse anti-competitive tech mergers. The second part of her plan calls for enacting legislation that would designate large tech companies such as Amazon, Facebook, and Google as “platform utilities” and ban those companies from selling their own products and services on the platform they operate. Under this proposal, Amazon Marketplace and Amazon Basics, which currently sells products on Amazon Marketplace, would be split into two separate companies banned from sharing data with one another.

Just recently, Facebook co-founder Chris Hughes penned a 6,000-word op-ed calling on lawmakers to break up Facebook. The argument that major online platforms have become too big is now under serious consideration in mainstream policy circles. Whether a breakup could be justified under an antitrust framework focused on consumer welfare remains an open question.

Conclusion

Around the world, governments are experimenting with how best to confront the problems posed by the digital economy, from the ways in which online platforms empower bad actors to the profound influence dominant technology companies exert over our personal and economic lives. But not every challenge the digital economy has introduced can be effectively managed by passing new laws, levying steep fines, or imposing structural mandates. There is only so much regulatory action can do to address the fact that catering to our appetite for political tribalism has become so profitable.

While Zuckerberg calls Facebook a “global community,” it has become increasingly clear to many that the algorithms that fuel the attention economy stoke polarization rather than quell it, creating negative externalities that threaten a healthy democracy. As a Washington Post headline on recent policy changes at Facebook reads, “Facebook is trying to stop its own algorithms from doing their job.”

A massive redesign could discourage the ideological echo chambers that currently proliferate online or deprioritize incendiary posts, but it is unlikely platforms will be able to entirely prevent harmful content from making its way online. While Facebook has received criticism for failing to invest in more employees who can monitor harmful content on the platform and verify the factual legitimacy of articles shared, no number of humans can reasonably evaluate the volume of content posted every second on the largest platforms, and current technology alone is not advanced enough to sufficiently monitor disinformation and hate speech.

Lawmakers in several countries, including Canada, France, and the United Kingdom, have proposed digital literacy initiatives that would help citizens identify disinformation online and avoid spreading it themselves. Such initiatives would also teach users the value of pausing to think before publishing a post or comment online. Platform technologies have long aimed to remove “friction” from the user experience, but the Center for Humane Technology has criticized this design principle for creating addictive technologies that discourage thoughtful engagement online.

While many countries have identified a critical role for government in developing new rules and promoting competitive markets in the digital economy, some also see a role for government in helping citizens and democracy maintain power at a time of rapid technological change. France’s Policy Planning Staff and Institute for Strategic Research argue that “citizens concerned with the quality of public debate” are in charge of its protection. “[I]t is the duty of civil society to develop its own resilience,” they note in a report on information manipulation. They argue, however, that governments “cannot afford to ignore a threat that undermines the foundations of democracy” and “should come to the aid of civil society.” Similarly, the authors of the United Kingdom House of Commons’ report on disinformation and fake news contend that citizens who engage thoughtfully online and “regulation to restore democratic accountability” can “make sure the people stay in charge of the machines.”

The global push to craft rules for the digital economy is well under way, and in the U.S., lawmakers who were once cheerleaders of Silicon Valley are now declaring the end of an era of self-regulation. With a federal response to online political advertising transparency, user privacy, and competition in technology markets still in its nascent stage, the U.S. can look to the rest of the world for ideas.

It may well be that the challenge of the digital information sector is beyond the ability of single nations working by themselves to meet. If so, it is time to start thinking about the kinds of international agreements and institutions that could do collectively what individual countries cannot do for themselves.

In the first instance, anyway, these arrangements would have to be restricted to countries that share a broad understanding of the importance of individual liberty and a free civil society. An international regime that encompassed authoritarian governments as well as liberal democracies would be a cure worse than the disease. It is not clear, for example, that the People’s Republic of China would ever be willing to embrace the kinds of individual and civic rights for information technologies that liberal democracies consider nonnegotiable. Still, it would make sense to begin exploring what the governments of free societies might be able to agree on across their differences of law and civic cultures.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Amazon, Facebook, Google, and Microsoft provide general, unrestricted support to The Brookings Institution. The findings, interpretations, and conclusions posted in this piece are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

