We’re in danger of losing one of the most important ingredients for using AI effectively in children’s learning and development: common sense.
At this week’s AI Revolution Show, hosted by ASU-GSV—a major convenor and investor in ed-tech—I attended a workshop on how technology can support healthier relationships at home between parents and children. The facilitators were enthusiastic product developers at a nonprofit education organization, inviting us to beta test a new tool they’d developed to help parents connect with their kids.
Their team already offers a widely used, CASEL-recognized, school-based curriculum that has been shown in quasi-experimental studies to promote student well-being. Wanting to extend their impact beyond the classroom, they designed a complementary tool for families—offering “practical strategies and new resources to support positive, healthy interactions.”
And their solution? An app. One that primary caregivers—parents, as I call them here—open on their phone to check in on how their children are feeling. Prompts include selecting an emoji to represent emotions, choosing a descriptive word, and setting a daily goal together like “we will help each other without being asked” or “we will say thank you and show appreciation for the little things.”
My immediate reaction: What on earth are we doing? Have we really lost our way to the point that we need an app to ask our children how they feel?
I pictured myself at the dinner table, breaking eye contact with my kids to look down at my phone and walk them through a series of prompts. This isn’t meaningful connection—it’s performative and mediated. And sadly, it’s not an isolated example.
We’ve seen this movie before. For decades, educational technology has overpromised and underdelivered. A 2015 OECD study across 70 countries found that doubling down on technology in school didn’t yield better educational outcomes. In low-income countries, tech deployments have consistently fallen short of expectations. When I studied education innovations nearly a decade ago, I found that most ed-tech use didn’t fundamentally change what was possible in education.
The problem isn’t technology itself—it’s how we humans choose to use it. There are, of course, smart and effective applications of tech, including artificial intelligence (AI), that genuinely support children’s learning and development.
Take assistive technology: It can be life-changing for children with learning differences. One of my sons has dyslexia, and the Read&Write software he uses at school and home has been transformative for him.
There’s also the work of Imagine Worldwide (where I served as a founding board member), which partners with schools in low-resource settings—sometimes with just one teacher for 100 students—to support early literacy using tablet-based programs. Kids participate twice a week for 45 minutes and are learning to read.
Or consider my recent conversation with Coursemojo’s co-founder, Dacia Toll, where I was encouraged to hear how AI is used to, among other things, help teachers identify student misconceptions mid-lesson—something that provides useful real-time assists for busy teachers managing large classrooms.
But when we use a new technology simply because it exists, we blur the line between valuable innovation and noise. When everything is “AI-powered,” how do we discern the helpful from the harmful?
Surely, the developers of the parenting app meant well. They correctly identified a crucial ingredient in child well-being: strong, supportive relationships with caregivers. That’s something I care deeply about. I’ve seen firsthand the impact of relational health—both through my work with Emily Morris on family-school-community engagement, and through research for my book with Jenny Anderson, “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.”
But the method matters. And we need to ask: Is this really the best way to meet that need?
It’s time for a minimalist mindset when it comes to technology in children’s learning and development. I don’t need a randomized controlled trial to tell me it’s a bad idea to check in with my kids about their feelings via an app. In fact, I’d argue that unless there’s compelling evidence that this approach improves human connection, we shouldn’t use it.
All of us, from investors to developers to educators to parents, should adopt a “minimalist code”—a simple checklist for decisionmaking that might include:
1. Does the problem require technology to solve?
If not, stop right there. Parents can ask their kids how they feel without an app.
2. If technology might help, use the least intrusive form available.
If parents need guidance on emotional conversations, maybe a quick tip via text message is enough. Plenty of parenting interventions use simple SMS strategies with success.
3. Choose the tech option with the lowest total cost of use.
That includes development and maintenance costs—as well as environmental impact. By some estimates, a single ChatGPT conversation consumes the equivalent of a bottle of clean water. We shouldn’t use AI unless we truly need it. No need to bring a hammer to drive in a thumbtack.
Over a decade ago, Marshall Smith and I proposed seven principles for ed-tech use in low-income countries. The tech landscape has changed dramatically since then—but the principles still hold. At the time, it was our own kind of minimalist code. Sadly, I’m not sure the field has evolved much in its approach. If we don’t begin reining in thoughtless deployment, we risk public backlash—just look at the trajectory of social media for a cautionary tale.
What else would you include in a minimalist code for ed-tech?
At the Brookings Global Task Force on AI and Education, we’re actively exploring these questions and welcome your ideas. We do still have the chance to mitigate the risks AI poses and harness the benefits to support all children’s learning and development.
Commentary
Why we need a minimalist mindset when it comes to AI and tech use for young people
April 8, 2025