Commentary

Responsible AI use in global financial markets

June 20, 2024


  • Nicol Turner Lee, director of the Center for Technology Innovation, recently co-authored “Responsible Artificial Intelligence in Financial Markets: Opportunities, Risks & Recommendations,” a report for the Technology Advisory Committee of the Commodity Futures Trading Commission (CFTC).
  • The report explores the expansion of AI tools in markets regulated by the CFTC and recommends that the Commission convene a roundtable of registered entities to encourage the adoption of NIST’s AI Risk Management Framework.
  • The report also recommends that the CFTC prioritize aligning its AI policies with those of other regulatory agencies, such as the Securities and Exchange Commission.
Signage is seen outside of the US Commodity Futures Trading Commission (CFTC) in Washington, D.C., U.S., August 30, 2020. REUTERS/Andrew Kelly

In 1999, the Commodity Futures Trading Commission (CFTC) established the Technology Advisory Committee (TAC) to help the agency navigate issues at the intersection of technology, law, policy, and finance; the committee is currently sponsored by Commissioner Christy Goldsmith Romero. In 2023, Nicol Turner Lee, director of the Brookings Center for Technology Innovation, was appointed to serve as a member and co-chair of the Subcommittee on Emerging and Evolving Technologies, for which she co-authored the subcommittee’s inaugural report for the Commission, titled “Responsible Artificial Intelligence in Financial Markets: Opportunities, Risks & Recommendations.” The report takes on added significance as the Biden-Harris administration works toward safe, secure, and trustworthy artificial intelligence (AI), principles and action statements codified in the recent White House Executive Order.

The increasing adoption of AI in the financial services sector stands to reshape the industry overall, from retail to back-end services and operations. Prepared by a group of distinguished subcommittee experts who work in the derivatives and trading markets, as well as on AI, the report compiles research and findings on the opportunities and challenges of AI adoption and use by the CFTC’s regulated entities, including trading organizations in the U.S. derivatives markets, such as futures, options, and swaps. In particular, the agency has jurisdiction over designated contract markets, swap execution facilities, derivatives clearing organizations, and swap data repositories. The report sets the context, defines important terms (including “responsible AI”), and reviews the current AI policy landscape alongside existing and hypothetical vulnerabilities in the markets the agency oversees. The report also highlights valuable use cases of AI in financial services, including fraud detection, risk management, and the identification, execution, and back-testing of trading strategies.

The report concludes with five detailed recommendations to the CFTC. The proposals urge the Commission to use its convening power to host a roundtable of registered entities that might consider adopting the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). The report also proposes that the Commission perform a legal gap analysis for autonomous AI systems, align its AI policies with those of other agencies such as the U.S. Securities and Exchange Commission (SEC), and engage staff in domestic and international dialogues on the appropriate use of AI models in autonomous financial systems.

In theory, AI represents a potentially valuable tool for the CFTC’s internal and external stakeholders, as well as for the global financial market in general. Despite that potential value, however, the use of AI by CFTC-registered entities will require further exploration and discussion, including raising awareness about how automated decision-making models function and what governance they require. Because little is known about which CFTC-registered entities currently leverage AI, or how transparent and explainable these firms’ systems have been to date, regulators and customers should further explore the technologies being used (e.g., predictive, algorithmic, generative, or other frontier models) and their use cases. Other considerations include responsible development, the quality of training data, the extent of human involvement in autonomous trading models, data privacy, auditing and oversight, and the breadth of internal talent at the CFTC to perform all or some of these suggested activities.

The White House EO on AI urges federal agencies and regulatory commissions to think carefully and substantively about how they plan to develop, use, and evaluate AI technologies within the context of their functions. The CFTC report both represents a comprehensive approach to identifying and mitigating the risks of AI in scenarios such as financial trading and presents an opportunity for further discovery by the Commission and related agencies, including the U.S. Department of the Treasury, which has also started its own exploration of AI use cases. Recently, President Biden nominated Commissioner Goldsmith Romero to chair the Federal Deposit Insurance Corporation. It is quite likely that she, too, will bring a deliberative AI strategy to that organization, which also plays a vital role in the national economy.

Acknowledgements and disclosures

Nicol Turner Lee would like to acknowledge the research support of Joshua Turner and Jack Malamud on this blog and the final CFTC report.