The situation right now

Most teens report having used AI chatbots, and about a third say talking to these tools feels as good as talking to real friends. Even more concerning, 31% of high school students are using AI for personal conversations on school devices, which means the line between academic tools and emotional support has already blurred in ways most schools haven't caught up with yet.

These tools remember what you tell them, engage emotionally, and are designed to keep you coming back. We've seen kids form attachments to these chatbots, we've seen the tools fail to redirect kids in crisis to actual humans who can help, and in the worst cases we've seen tragedies.

As educators, we know learning happens when students have relationships with teachers who know them and care about them. We know kids need friends and community members who don't always agree with them. They need parents and family members who are there when things get hard. AI can support kids’ wellbeing when it's well-designed and purpose-driven, but when AI pretends to be a child’s friend, always validates them even when they’re wrong, and will keep pushing a dialogue at 2 a.m. when kids should be sleeping? That's replacing the relationships kids actually need, and potentially creating harm and dependence.

Our recommendations

We’ve focused specifically on AI tools designed for young people in educational settings (not consumer-facing products like ChatGPT). The task force paper states, “consumer products that could underpin these educational products should also consider the guidelines and guardrails outlined in these recommendations.” The distinction matters because we believe educational AI tools should be "grounded in learning sciences and instructional design" rather than optimized for engagement and market capture. They need to scaffold learning, not replace human relationships.
(Of course, I’d personally like to see consumer-facing tools comply with these same precautions to protect adults as well, but I’m staying in my lane.)

Over the last four months, the SAFE AI Companions Task Force developed guidelines for four groups:

For federal policymakers:
For states:
For school districts:
For tech developers:
Why I’m part of this

Since I provide AI Literacy training and professional development in schools (both in-person and via 40 Hour AI), I have voluntarily joined the EDSAFE AI Alliance. It’s a commitment to best practices, basically, and an agreement to follow the alliance’s guidelines (both those outlined in this task force’s recommendations for AI companions, and others that were established previously). Since other task force members and EDSAFE participants have made the same commitment, we're not just putting out recommendations; we're actually building products and providing services aligned with them.

What this means going forward

I've spent over 25 years in education, so I know teachers are overwhelmed, districts are strapped, and families are trying to keep up with technology that changes faster than anyone can process. This paper gives us a framework that names what needs to happen at every level and grounds the recommendations in what we actually know about how kids learn and develop.
This work matters because the stakes are real and because we can do better than just reacting after something goes wrong. As Erin Mote, CEO of the nonprofit InnovateEDU, shared in her speech to us on Monday, our collective response to students’ use of social media was simply too little, too late, and we don’t want to repeat those mistakes with AI. She says, “Our report argues for a fundamental shift: AI in education should not be built for sycophancy (prioritizing emotional validation and false intimacy) but for Socratic thinking (supporting learning through structured questioning). We believe AI must serve as a catalyst for human judgment, not a replacement for it. This paper is a call to action for policymakers, developers, educators, and parents to move beyond reactive fixes and design intentionally.”

You’ll hear more about this in future podcast episodes, including an interview with Erin Mote, as well as with two students who were on the panel at the event, advocating for change on behalf of their peers. Let me know what you’re seeing with your students (or other kids you care about) and their relationship to AI companions.

Angela