Artificial intelligence (AI) is one of the transformative technologies of our time. It is reshaping entire sectors, including healthcare, education, e-commerce, transportation, and defense—and in many ways, is the defining force of the coming years. Ultimately, though, it is the policies and principles established by people—lawmakers, regulators, software developers, and ethicists—that will determine the trajectory of this emerging technology.
That is the message that Brookings scholars Darrell West and John Allen convey in their new book, “Turning Point: Policymaking in the Era of Artificial Intelligence.” On August 10, West and Allen joined Nicol Turner Lee for a webinar discussion to explain what AI is, discuss its use in leading sectors, outline the ethical and societal ramifications of AI deployment, and recommend a policy and governance blueprint to maximize the advantages of AI.
Artificial intelligence: Definitions, use cases, and challenges
Artificial intelligence is a term for “computer systems [that] can learn from data, text, or images and make intentional and intelligent decisions based on that analysis.” The recent growth in computing power, data, and algorithms has accelerated the capability of AI to make predictions or decisions and even assume human-like elements; in turn, this has propelled advancements in augmented reality, connected vehicles, hyperwar, and more. These terms (and others) are defined in West and Allen’s glossary of AI and emerging technologies.
In a global pandemic, it is easy to see how AI can be an asset to any coordinated strategy and response. For example, scientific researchers use AI to scan coronavirus-related literature to identify treatments and produce health recommendations more quickly and efficiently. Contact tracers can also use data analytics to track and measure the pandemic’s spread. Furthermore, as schools and universities invest in online or hybrid courses, AI can help deliver personalized learning experiences for students.
Even before the coronavirus outbreak, AI demonstrated the potential to transform markets and societies. In addition to education and healthcare, AI is being incorporated in the transportation sector. In particular, autonomous vehicles are expected to grow into a multi-billion dollar market—and ride-sharing companies, long-distance delivery trucks, and mass transit systems may be some of the earliest adopters. But, much like a law of physics, an action taken in one industry will have an impact on others. If autonomous vehicles decrease the number of traffic accidents, there will be secondary effects in the car insurance industry. And if healthcare providers and schools expand their virtual offerings, challenges such as data privacy, algorithmic bias, and inequitable broadband access will grow accordingly. These issues will be relevant long after the pandemic subsides.
Private and public sector recommendations to address AI
Some of these challenges can be addressed in the private sector by the companies that develop and deploy AI. West and Allen outlined steps that companies can take to promote fair and equitable AI: they should hire ethicists, enact AI ethics codes, and institute AI review boards to assess products before they reach the market. In addition, companies should evaluate how unrepresentative datasets lead to discriminatory algorithmic outcomes—such as if an algorithm, based on biased criminal justice records, yields inaccurate predictions related to recidivism. It is crucial for companies to establish ethical principles or guardrails to make AI more safe, transparent, and understandable.
But it is also crucial for governments to enact laws, regulations, and policies that promote the responsible use of AI—that both harness the benefits of AI and mitigate its harms. To help U.S. lawmakers draft legislation related to artificial intelligence and emerging technologies, West and Allen recommended restoring the Office of Technology Assessment within Congress. Equally important, they emphasized the need to enforce existing laws and regulations for AI-related incidents, including current rules on privacy, anti-discrimination, and competition. Finally, West and Allen proposed that government agencies consider AI impact statements: private companies that develop large-scale, publicly funded AI systems could use these to analyze the societal and ethical consequences of their products and find ways to mitigate them.
On a global scale, AI could affect relationships with U.S. trade partners like China and the European Union, or even shift the broader geopolitical landscape. On the defense front, AI can increase the efficiency, safety, and capacity of weapons systems—and it is important to implement processes to train military leaders to use it responsibly. Ultimately, any negotiated settlements or decisions between nations to either invest in or voluntarily constrain AI will come with significant economic, diplomatic, and national security considerations. In this situation, the importance of enforceable, adaptable, and equitable principles becomes clear. As Allen explained: “Historically, there has been a latency of policy following technology … we can’t afford to have that latency [with AI].”
Agenda

August 10: Discussion

Moderator
Nicol Turner Lee, Director - Center for Technology Innovation; Senior Fellow - Governance Studies (@drturnerlee)

Panelist
Darrell M. West, Senior Fellow - Governance Studies, Center for Technology Innovation, Center for Effective Public Management; Douglas Dillon Chair in Governmental Studies