Large Language Models level up – Better, faster, cheaper

A 2024 mid-year update on generative AI

July 31, 2024


Key takeaways:

  • Large Language Model (LLM) capabilities are advancing at breakneck speed.
  • Recent advancements in LLMs can potentially offer significant productivity boosts for knowledge workers.
  • This paper demonstrates three dozen practical applications of LLMs.
  • Continuous learning and responsible use of LLMs are crucial to maximize their benefits and minimize potential risks.
Editor's note:

This paper was originally published in the Journal of Economic Literature in July 2024. It represents a mid-year update to our earlier report on “Generative AI for Economic Research.”

Executive summary

The landscape of Large Language Models (LLMs) and other generative AI has evolved rapidly since the beginning of 2024. Recent progress has been characterized by better performance, growing context windows allowing LLMs to process more data at once, better recall, faster processing, and falling costs. These changes have led to qualitatively new ways of applying LLMs in cognitive work. This paper summarizes the main innovations since the beginning of the year and demonstrates updated use cases of cutting-edge LLMs for economic research and similar activities, classified along six domains: ideation and feedback, writing, background research, coding, data analysis, and mathematical derivations.

Leading AI labs have released significant updates to their LLM offerings in recent months, including vision capabilities and real-time sound processing. OpenAI’s GPT-4o, Anthropic’s Claude 3.5, and Google DeepMind’s Gemini series represent the cutting edge of publicly available models. LLMs can now be accessed through web-based chatbots, real-time voice assistants, web-based experimentation platforms, and Application Programming Interfaces (APIs), offering varying levels of customization and integration. The rise of powerful open-source models like Meta’s Llama 3 series and Mistral’s models offers new opportunities for transparency, innovation, and cost-effective research applications. Advances in computational capacity and LLM efficiency are making it increasingly feasible to run smaller but capable models on local machines, offering benefits in terms of data privacy and offline accessibility. Features such as OpenAI’s Advanced Data Analysis tool and Anthropic’s Artifacts enable more sophisticated data processing and analysis directly within LLM interfaces.
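To illustrate what API access looks like in practice, the sketch below queries a chat model programmatically. It is a minimal example, not taken from the paper: it assumes the openai Python package (version 1 or later) is installed and an API key is available in the OPENAI_API_KEY environment variable, and the model name, prompt, and temperature setting are chosen purely for illustration.

# Minimal sketch of calling an LLM through a provider API.
# Assumes `pip install openai` and an API key exported as OPENAI_API_KEY;
# the model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any available chat model can be substituted here
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Summarize the main drivers of inflation in three bullet points."},
    ],
    temperature=0.2,  # lower values make the output more deterministic
)

print(response.choices[0].message.content)

The same pattern extends to batch workflows, for example looping over many abstracts or data descriptions and collecting the responses programmatically, which is the main practical difference between API access and a web-based chatbot.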

These developments have significant implications for all cognitive work, including economic research, and offer the potential for substantial productivity gains. All white-collar workers are well-advised to stay informed about developments in LLM technology, keep abreast of the latest capabilities, and explore how these tools can best be integrated into their workflows, since the rapid pace of innovation suggests that capabilities and best practices will continue to evolve. This paper summarizes the new developments and provides an updated collection of three dozen examples and use cases.


Acknowledgements and disclosures

Copyright American Economic Association; reproduced with permission.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, and individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.