Commentary

Who should answer the ethical questions surrounding artificial intelligence?

Image caption: A TV crew interviews Sophia, a robot integrating the latest technologies and artificial intelligence.

Continuing advancements in artificial intelligence (AI) for use in both the public and private sectors warrant serious ethical consideration. As AI grows more capable, the issues of transparency, fairness, privacy, and accountability associated with these technologies become more pressing. Many developers in the private sector acknowledge the threats AI poses and have created their own codes of ethics to guide responsible development. However, many experts believe government regulation may be required to resolve issues ranging from racial bias in facial recognition software to the use of autonomous weapons in warfare.

On Sept. 14, the Center for Technology Innovation hosted a panel discussion at the Brookings Institution to consider the ethical dilemmas of AI. Brookings scholars Christopher Meserole, Darrell West, and William Galston were joined by Charina Chou, the global policy lead for emerging technologies at Google, and Heather Patterson, a senior research scientist at Intel.

Dual uses of artificial intelligence

The panelists recognized the dual uses of these technologies to both benefit and harm society. For example, AI holds the potential to assist law enforcement but also to limit civil liberties and privacy. On one hand, a kidnapped child could be located far more quickly through facial recognition software. On the other, the same software enables surveillance that tracks a person’s movements without their consent or knowledge. Judging the ethics of specific technologies developed through AI becomes murky when the same software could produce clear benefits but also real risks.

As Darrell West pointed out, facial recognition software has already shown racial biases rooted in demographically unrepresentative training data. These systems are significantly more accurate at identifying Caucasian faces than minority ones, meaning false identification is more likely for minority groups. Correcting for unrepresentative training data would help prevent these algorithms from exacerbating historical discrimination.
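The kind of disparity West describes can be surfaced with a straightforward audit: run the matching system on labeled pairs of images and compare how often it produces false matches for each demographic group. The sketch below is purely illustrative; the `matcher` callable, the pair format, and the group labels are hypothetical stand-ins, not any vendor's actual interface.

```python
from collections import defaultdict

def false_match_rates(matcher, impostor_pairs):
    """Compute a face matcher's false-match rate for each demographic group.

    matcher: callable (probe_image, gallery_image) -> bool, True if the matcher
             judges the two images to show the same person
    impostor_pairs: iterable of (probe_image, gallery_image, group) tuples where
                    the two images are known to show *different* people
    Returns {group: false-match rate}, i.e. how often the matcher wrongly
    declares a match for members of that group.
    """
    errors = defaultdict(int)   # false matches observed per group
    totals = defaultdict(int)   # impostor pairs evaluated per group
    for probe, gallery, group in impostor_pairs:
        totals[group] += 1
        if matcher(probe, gallery):
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}
```

A large gap between groups' rates is the disparity described above; a complete audit would also compare false non-match rates on genuine pairs, since both error types carry real-world costs.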

In a different realm, AI development in the military poses other accountability challenges. If the military deployed autonomous weapons in warfare, who bears responsibility if a faulty firing kills innocent civilians? It is unclear to what degree the military, the government, and the software developer would each be held responsible. Complicating matters, other nations might also develop autonomous weaponry. According to a recent Brookings survey, American support for using AI in warfare increases significantly if adversaries are known to be developing similar technologies.

Putting ethical principles into practice

The panel emphasized the need for transparency in how principles of ethical AI are applied in practice. In cases where ethical principles conflict with each other, decisions will need to be made about how to prioritize them. Self-driving cars provide one example: in the case of an imminent collision, should a self-driving car prioritize the safety of the rider or a pedestrian? William Galston argued that resolving these moral dilemmas should accurately reflect public opinion, not just the views of the software developer or the car company producing these autonomous vehicles.
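One way to make that point concrete: whatever tradeoff a manufacturer adopts ultimately lives somewhere in software as an explicit choice. The toy sketch below is not how any manufacturer actually programs its vehicles; every name and value is hypothetical. It simply shows that such a priority can be written down and therefore inspected, which is what transparency about ethical principles requires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CollisionPolicy:
    """Hypothetical weights applied when no available maneuver avoids all harm."""
    occupant_weight: float
    pedestrian_weight: float

def choose_maneuver(maneuvers, policy):
    """Pick the maneuver with the lowest policy-weighted expected harm.

    maneuvers: list of (name, expected_occupant_harm, expected_pedestrian_harm)
    """
    def weighted_harm(option):
        _, occupant_harm, pedestrian_harm = option
        return (policy.occupant_weight * occupant_harm
                + policy.pedestrian_weight * pedestrian_harm)
    return min(maneuvers, key=weighted_harm)[0]

# Whoever sets these two numbers is answering the ethical question -- which is
# why the choice arguably should reflect public opinion rather than a default
# buried in a developer's code.
equal_weighting = CollisionPolicy(occupant_weight=1.0, pedestrian_weight=1.0)
```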

By developing a standardized code of ethics, involving ethicists in the conversation, and training developers in ethical considerations, we can limit the harm AI has the potential to cause. Through proactive public and private regulation, we can continue to develop AI for the benefit of society while remaining wary of its potential negative consequences.

Julia Slisz contributed to this blog post.


Intel and Google are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.
