![A stethoscope over a graph](https://www.brookings.edu/wp-content/uploads/2023/06/20230531_THP_HCEvent_shutterstock_582412642.jpg?quality=75&w=500)
1:30 pm – 2:30 pm EDT
Past Event
The notions of ethical and accountable artificial intelligence (AI)—also referred to as “responsible AI”—have been adopted by many stakeholders across government, industry, civil society, and academia. Transparency, fairness, security, and inclusivity are core elements of widely asserted responsible AI frameworks, but how each group interprets and operationalizes them can vary. Further, there is some debate over whether responsible AI frameworks can address the explicit and implicit biases embedded within these systems and ensure equity in predictive decisions, especially in employment, health care, financial services, and criminal justice.
On May 10, the Center for Technology Innovation at Brookings hosted a webinar to unpack what is meant by “responsible AI” and how different sectors are building corollary frameworks to increase the technology’s accountability. Panelists also discussed the roles of self-regulation, public policies, and consumer feedback.
Viewers submitted questions for speakers by emailing [email protected] or via Twitter at @BrookingsGov by using #AIBias.