Commentary

The double-edged sword of AI in education

3 critical risks and how to address them

July 22, 2024


  • Artificial intelligence has great potential, both to enhance learning and education, and to undermine human development.
  • Existing AI systems have already failed to meaningfully improve processes that are critical to human intelligence.
  • Because AI is so easy to use, there is a significant risk that overreliance on it and its ostensible intelligence will, in time, diminish human capability.
  • Collaboration between all sectors is key to ensuring that AI serves the needs of learners rather than a profit motive.

Artificial intelligence (AI) could revolutionize education as profoundly as the internet has already revolutionized our lives. However, our experience with commercial internet platforms gives us pause. Consider how social media algorithms, designed to maximize engagement and ad revenue, have inadvertently promoted divisive content and misinformation, a development at odds with educational goals.

Like the commercialization of the internet, the AI consumerization trend, driven by massive investments across sectors, prioritizes profit over societal and educational benefits. This focus on monetization risks overshadowing crucial considerations about AI’s integration into educational contexts.

The consumerization of AI in education is a double-edged sword. While increasing accessibility, it could also undermine fundamental educational principles and reshape students’ attitudes toward learning. We must advocate for a thoughtful, education-centric approach to AI development that enhances, rather than replaces, human intelligence and recognizes the value of effort in learning.

As generative AI systems for education emerge, technical experts and policymakers have a unique opportunity to ensure their design supports the interests of learners and educators.

Risk 1: Overestimating AI’s intelligence

If we consider human intelligence as something that arises through social and cultural learning and development, as the work of theorists such as Lev Vygotsky suggests, then human intelligence is deeply rooted in human interaction and communication. According to this line of thinking, cognitive development is significantly shaped by a person’s social environment, with language playing a vital role in fostering abstract thinking. The learning process involves a complex interplay between knowledge gained through direct, everyday experience and more formal, theoretical concepts acquired through structured instruction. The cultural and historical context in which learning occurs is also crucial, shaping both the content and the methods of knowledge acquisition. Societies that encourage rich social interaction and diverse learning experiences tend to promote stronger intellectual growth among their members. In essence, learning is not merely an individual cognitive process but a deeply social endeavor, intricately linked to cultural context, language development, and the dynamic relationship between practical experience and theoretical knowledge.

If we accept that human intelligence is the underlying logic of human learning and development, then one of the most important questions we can ask when designing AI applications for education today is: How can AI enhance human intelligence?

Held to this litmus test, existing large-scale AI systems fall short. While predictive analytics and machine learning have certainly helped transform productivity in areas like personalized learning and adaptive testing, these same technologies have yet to meaningfully grasp and augment the tacit collective processes of communication, critical thinking, and collaboration that remain crucial to human intelligence and value creation in education. Where AI has meaningfully impacted education, such as through automated grading or content recommendation systems, it has arguably driven more atomization, standardization, and exploitation than enhancement of human intelligence.

Risk 2: Cognitive atrophy through overreliance

A growing body of research, for example on human memory and on our diminishing ability to navigate the physical world, shows that human cognition is changing and that technology, especially tools such as digital assistants, is hastening that change. There is no precedent for the speed and significance of this shift, which makes it hard to predict exactly how it will play out.

One thing, however, is certain: if we fully delegate aspects of our cognitive processing to AI, we will lose the ability to perform those cognitive activities ourselves; indeed, even partial delegation may diminish human capability. The consequences of inappropriately or unwisely offloading our cognitive processing to AI are therefore serious and require careful consideration.

Which cognitive processes are we happy to lose? The answer is complex and unclear, but it is a question we must ask and answer. The cognitive processes that underpin intelligent behavior are highly interconnected: a process that appears redundant now that we have AI may in fact be a vital component of a more advanced cognitive process that we do not consider redundant and do not want to delegate.

Answering these questions takes time, yet the pressure from our ever more helpful and willing AI is to let it assist us before we really understand what form that assistance should take.

Risk 3: The illusion of effortless wisdom

Large language models (LLMs) are designed to engage with users in a manner that emphasizes ease and convenience, implicitly asking, “How can I make it easier for you to do this thing you want to do?” Beyond LLMs, the marketing language surrounding AI products more generally emphasizes effortlessness for human users; see, for example, the recent publicity around Apple Intelligence. This rhetoric suggests that AI can help users accomplish tasks without any significant effort on their part. Such an emphasis on effortlessness is fundamentally at odds with the nature of deep, meaningful learning, which requires “strenuous mental efforts” rather than proceeding effortlessly. This creates a tension between the consumerized portrayal of AI and the realities of effective education.

Just at the moment when it is crystal clear that being good at learning is a key skill we all need if we are to thrive in an AI-augmented future, there is a significant risk that the consumerization of AI will entice young people to believe that learning can now be easy and effortless. This attitude is antithetical to the development of robust learning skills and could undermine educational efforts.

The long-term implications of this trend could be far-reaching. It could fundamentally alter how students approach learning and problem-solving. If students come to expect AI to do the cognitive heavy lifting for them, we risk producing a generation that lacks the critical thinking skills and resilience necessary for success in an ever-changing world.

How do we address these risks? A call for critical engagement

There is a pressing need to help the education community engage more actively in conversations about AI. This engagement is crucial to ensure that educators have a stronger voice in how AI is developed, rolled out, and regulated in educational settings. To effectively respond to the challenges and opportunities presented by AI in education, we need a multifaceted approach:

1. Empower the education sector in AI development and regulation.

Educational organizations need to be prepared to shift the existing power balance and be ready to respond to radical changes. It’s worth noting the EU AI Act and its prohibitions around certain uses of AI in education, which could serve as a model for other regions.

2. Foster flexibility and adaptability in educational systems.

Educational institutions need to be ready to adapt to new business models that may emerge from innovative uses of AI technologies. Developing flexible, adaptive education ecosystems that can evolve alongside AI technologies is essential.

3. Enhance AI literacy and critical thinking.

We must foster critical thinking about AI among students, helping them understand the limitations of AI and the continued importance of human cognition and effort in the learning process. It is critical that we develop curricula that not only teach about AI but also encourage students to question and critically evaluate AI systems. This requires a new kind of digital literacy that goes beyond mere technological proficiency to encompass a deep understanding of the strengths and limitations of AI systems.

4. Position AI as a ‘teammate’ in education.

To counter the negative effects of AI consumerization in education, we need to develop AI systems that position themselves as “teammates” rather than replacements for human cognitive effort. These AI teammates should be designed to enhance collective intelligence, by facilitating communication, problem-solving, and decisionmaking in human collectives. This approach aligns with the U.S. Department of Education’s “human in the loop” concept, emphasizing AI as a tool to augment human capabilities rather than replace them.

5. Invest in advanced AI models for education.

We need to invest in developing AI models that can account for the multiscale dynamics of human learning. This means creating computational architectures that facilitate trust and explainability in human-machine interactions, and developing more sophisticated data collection methods that can capture context-sensitive signals of human collective behavior.

6. Address AI governance and data ownership.

We need to address data governance issues. For users to entrust AI with enhancing their learning, policymakers will need to argue for a “data level playing field,” in which individuals and educational institutions own their data and determine with whom they can exchange it and for what purpose. This could involve the creation of education-focused data cooperatives or trusts, for example.

Educators face a challenging balancing act: leveraging the benefits of AI in education while simultaneously ensuring that students don’t become overly reliant on it or develop unrealistic expectations about effortless learning. By implementing these measures, we can work toward a more thoughtful integration of AI in education, leveraging its benefits while mitigating potential risks.

This approach allows us to stay ahead of the rapid changes brought by AI, ensuring that our educational systems remain relevant, effective, and centered on human needs and capabilities. The proactive stance reflected in these solutions underscores the urgency and importance of addressing these issues to shape a positive future for AI in education.

Conclusion

As we navigate the integration of AI in education, we must continually ask ourselves: Does this AI system genuinely enhance human intelligence and support deep, meaningful learning? If not, how can we redesign it to do so? This critical perspective is essential to ensure that AI serves the interests of learners and educators, rather than merely the profit motives of tech companies.

The risks posed by the uncritical adoption of AI in education extend far beyond the classroom. In an era plagued by rampant misinformation, deepening societal divisions, and unprecedented threats to democratic institutions, the development of robust critical thinking skills is more crucial than ever. The potential erosion of these skills through overreliance on AI could have profound and far-reaching implications for our society’s ability to discern truth from falsehood, engage in meaningful dialogue across ideological divides, and make informed decisions as citizens.

If we fail to address these risks, we may inadvertently contribute to the creation of a generation ill-equipped to navigate the complex challenges of our time. The stakes are extraordinarily high: The very foundations of our democratic societies rely on an educated populace capable of critical thought, reasoned debate, and informed decisionmaking.

However, this challenge also presents an unprecedented opportunity. By thoughtfully and deliberately integrating AI into our educational systems, we have the potential not just to enhance learning, but to fortify the cognitive defenses necessary to preserve and strengthen our democratic way of life in the face of 21st-century challenges. We can harness AI to cultivate a new generation of learners who are not only technologically savvy but also deeply critical thinkers, capable of navigating the complexities of our rapidly evolving world.

The path forward requires a delicate balance: embracing the transformative potential of AI while steadfastly protecting and nurturing the uniquely human aspects of intelligence and learning. It demands collaboration among educators, technologists, policymakers, and ethicists to create AI systems that amplify human potential rather than replace it.

As we stand at this crucial juncture, the choices we make about AI in education will shape not just the future of learning, but the future of our societies. Let us approach this task with the utmost care, creativity, and commitment to fostering a generation of learners who are empowered by AI, not diminished by it. The future of education—and indeed, the future of our democratic societies—depends on our ability to get this right.