Commentary

Are the FTC’s tools strong enough for digital challenges?

May 10, 2023

One can only watch the revitalized activities of the Federal Trade Commission (FTC) and cheer, “Hooray for Lina Khan!” As Chair of the agency, she has shown focus, vision, and grit in asserting FTC authority over many of the challenges created by the digital economy.
In a period of only nine days—April 25 to May 3—the FTC announced initiatives to address unfair or deceptive acts and practices involving artificial intelligence (AI) and proposed banning Meta Platforms from targeting young users. These come on top of two years of antitrust aggressiveness and consumer protection assertiveness.
But both actions raise the question, “Are the tools strong enough for the task?”
Both the AI and Meta actions point to the limitations that Chair Khan and the agency face as a result of being tied to industrial-era statutes and procedures. Because no other agency has picked up the challenge (or because other agencies are less creative and aggressive), the FTC has become the primary digital enforcer. But the realities of the recent AI and Meta actions illustrate why public-interest protections need to be broader than the legacy authority of old statutes.
Meta Penalty
The Meta action was a “penalty” targeted only at the company, not a “regulation” that would apply to all similarly situated digital platforms and their exploitation of young Americans. The FTC’s authority to take this action stems from its 2020 settlement in which it fined the company formerly known as Facebook $5 billion for violating the privacy terms and conditions promised to users. While other consent decrees in place with Google and Snap could perhaps form the basis for similar actions against those platforms, the reach of the combined actions would still fall short of the broad collection of platforms that siphon information from young users in order to target messages at them.
The FTC’s decision was an “order to show cause” that gave Meta thirty days to reply. The path forward, however, is fraught with issues. Democratic Commissioner Alvaro Bedoya, while concurring with the order, issued a statement questioning whether modifying an old enforcement order was the best way to move forward. “[T]he relevant question is not what I would support as a matter of policy,” he wrote. “Rather, when the Commission determines how to modify an order, it must identify a nexus between the original order, the intervening violations, and the modified order… I have concerns about whether such a nexus exists.”
Meta called the decision “a political stunt.” To be sure, the company had already modified its policy on targeting young users: it now prohibits advertisers from targeting ads based on user interests, visits to certain websites, or gender. The modifications, however, do not prevent targeting young users based on data collected from them, such as age or location.
If the FTC moves ahead—seeking a decision in the fall of this year—it will most likely be challenged in court. Meta, like others unhappy with the Khan Commission, can be expected to file a suit disputing the agency’s authority. That would mean a further delay (probably a year) before a court decision, and longer if the case goes to the Supreme Court. Even if the FTC ultimately prevails, the order would apply only to Meta’s Facebook, Instagram, and Horizon Worlds services.
The FTC has identified the problem of online safety and is using its traditional tools. But are those tools enough?
Artificial Intelligence
The FTC’s AI initiative pledged to “uphold America’s commitment to the core principles of fairness, equality, and justice.” In a powerfully concise statement of intention, Chair Khan observed, “There is no AI exemption to the laws on the books.” Whether the scope of the “laws on the books” is sufficient for the new realities of AI is up for debate. The FTC’s enabling statute, to which the Chair refers, was written in 1914 to deal with an industrial economy far different from today’s AI realities.
The FTC’s statutory authority does grant it power to deal with old-fashioned problems, even when they are perpetrated with new-fangled AI. The agency, for instance, can use its authority to move against AI-generated scams and other unfair or deceptive acts or practices. Much more problematic, however, is the reach of old statutory authority when it comes to the broader aspects of AI, such as transparency into how various AI models operate. Does the FTC, for instance, have the authority to regulate AI management systems to mitigate broad risks, or to establish expectations for human responsibility in the writing of AI code?
Calls to close gaps in the oversight of AI come from diverse constituencies ranging from the U.S. Chamber of Commerce to the CEO of the developer of ChatGPT. The Chamber of Commerce, not an organization known for embracing government regulation, has called for a “risk-based regulatory framework” to define and protect the responsible use of AI. Sam Altman, CEO of OpenAI, told an interviewer, “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
AI promises to become pervasive. Are the FTC’s traditional tools up to the task?
Looking Beyond Traditional Tools
We do not live in traditional times. As the only watchdog in sight, the FTC is right to apply its authority to protect the public against non-traditional threats. Fitting new realities into old statutes, however, can be awkward. What’s more, these efforts are an invitation for those who don’t like such protections to go to court to dispute the FTC’s authority.
In the industrial era, technology unleashed the power of new production tools. In response, government developed a set of countervailing tools to protect the public interest. In the internet era, new online capabilities have unleashed the power of digital tools. Because Congress has yet to respond with a new set of public interest tools focused on digital realities, the job of protecting Americans has fallen to agencies such as the FTC.
Twenty-first-century Americans deserve better than twentieth-century solutions. At a time when other Western democracies are stepping up to protect their citizens’ digital rights, the United States needs its own set of digital protection tools, whether by beefing up the FTC’s authority or by creating a new, focused digital agency.
Google and Meta are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.