1 in 3 teens say AI feels like a real friend. Here’s how ed tech is responding.



This past Monday, I was in Washington, DC, for the release of research from the SAFE AI Companions Task Force.

The task force is part of a volunteer effort I belong to, the EDSAFE AI Alliance. It brought together 70+ educators, researchers, policymakers, tech developers, and youth advocates to recommend guidelines for AI companion use in education.

Meaning, we have concerns about young people using AI chatbots and other artificial intelligence “companions,” and as leaders in our respective fields, we want to make sure lawmakers and tech companies are doing the right thing by our kids.

The situation right now

Most teens report having used AI chatbots, and about a third say talking to these tools feels as good as talking to real friends.

Even more concerning, 31% of high school students are using AI for personal conversations on school devices, which means the line between academic tools and emotional support has already blurred in ways most schools haven't caught up with yet.

These tools remember what you tell them, engage emotionally, and are designed to keep you coming back. We've seen kids form attachments to these chatbots, we've seen the tools fail to redirect kids in crisis to actual humans who can help, and in the worst cases we've seen tragedies.

As educators, we know learning happens when students have relationships with teachers who know them and care about them. We know kids need friends and community members who don't always agree with them. They need parents and family members who are there when things get hard.

AI can support kids’ wellbeing when it's well-designed and purpose-driven, but when AI pretends to be a child’s friend, always validates them even when they’re wrong, and will keep pushing a dialogue at 2 am when kids should be sleeping?

That's replacing the relationships kids actually need and potentially creating harm and dependence.

Our recommendations

We’ve focused specifically on AI tools designed for young people in educational settings, not consumer-facing products like ChatGPT. That said, the task force paper states, “consumer products that could underpin these educational products should also consider the guidelines and guardrails outlined in these recommendations.”

The distinction matters because we believe educational AI tools should be "grounded in learning sciences and instructional design" rather than optimized for engagement and market capture. They need to scaffold learning, not replace human relationships. (Of course, I’d personally like to see consumer-facing tools comply with these same precautions to protect adults, as well, but I’m staying in my lane.)

Over the last four months, the SAFE AI Companions Task Force developed guidelines for four groups:

For federal policymakers:

  • Create comprehensive privacy laws that actually protect student data from being used to train AI models or target ads, going beyond FERPA and COPPA to address how these technologies actually work
  • Require age verification that protects privacy using zero-knowledge proofs rather than collecting birth certificates and creating honeypots of personal information
  • Establish a National Child-AI Safety Lab to evaluate tools and track incidents before products hit classrooms
  • Fund research on how these tools affect child development, since right now the technology is moving faster than our understanding of its impact

For states:

  • Require vendors to report verified incidents to designated school staff within 72 hours
  • Create pre-approved vendor lists so districts aren't all reinventing the wheel
  • Make AI literacy part of the curriculum across subjects instead of siloing it in computer science, teaching kids to recognize when they're being manipulated by design choices that prioritize engagement over accuracy
  • Give families plain-language information about what AI tools their kids are using, delivered in formats and languages that actually reach them

For school districts:

  • Vet AI tools using the Five Pillars of EdTech Quality (safe, evidence-based, inclusive, usable, interoperable)
  • Set up monitoring for concerning patterns with clear protocols for false positives, because we can't protect kids without oversight but we also can't violate their privacy or create disciplinary consequences for technical glitches
  • Only use AI tools that support specific learning goals rather than general "companionship"
  • Train educators and communicate clearly with families, because implementation without understanding leads to exactly the kinds of problems we're trying to prevent

For tech developers:

  • Design for learning outcomes instead of engagement metrics, measuring success by whether students actually understand the material rather than how long they stayed on the platform
  • Strip out features that build emotional attachment (romantic language, excessive empathy, relationship cues that make kids think they're talking to a friend)
  • Make it crystal clear to kids that they're talking to technology, not a person, with persistent visual and verbal reminders
  • Provide plain-language transparency about training data and how the tool actually works
  • Work with learning scientists during design, not after launch when the architecture is already set

Why I’m part of this

Since I provide AI Literacy training and professional development in schools (both in-person and via 40 Hour AI), I have voluntarily joined the EDSAFE AI Alliance.

It’s a commitment to best practices, basically, and an agreement to follow the alliance’s guidelines (both those outlined in this task force’s recommendations for AI companions, and others established previously).

Since other task force members and EDSAFE participants have made the same commitment, we're not just putting out recommendations, but actually building products and providing services aligned with them.

What this means going forward

I've spent over 25 years in education, so I know teachers are overwhelmed, districts are strapped, and families are trying to keep up with technology that changes faster than anyone can process. This paper gives us a framework that names what needs to happen at every level and grounds the recommendations in what we actually know about how kids learn and develop.

  • If you work in a school or district, this includes checklists and toolkits you can use right now.
  • If you're a parent, there's guidance on talking with your kids about these tools.
  • If you're in tech, there are design principles grounded in learning science.

This work matters because the stakes are real and because we can do better than just reacting after something goes wrong.

As Erin Mote, CEO of the nonprofit InnovateEDU, shared in her speech to us on Monday, our collective response to students’ use of social media was simply too little, too late, and we don’t want to repeat those mistakes with AI. She says,

“Our report argues for a fundamental shift: AI in education should not be built for sycophancy (prioritizing emotional validation and false intimacy) but for Socratic thinking (supporting learning through structured questioning). We believe AI must serve as a catalyst for human judgment, not a replacement for it. This paper is a call to action for policymakers, developers, educators, and parents to move beyond reactive fixes and design intentionally.”

You’ll hear more about this in upcoming podcast episodes, including an interview with Erin Mote, as well as two students who joined the panel at the event to advocate for change on behalf of their peers.

Let me know what you’re seeing with your students (or other kids you care about) and their relationship to AI companions.

Angela

Angela Watson

Stay connected on social media:

Podcast | Curriculum | Books | Courses | Speaking



