What Guardrails Should AI Companies Build to Protect Learning?
About this listen
In the past few months, new AI tools known as “agentic AI” have emerged. These browser-based assistants can surf the web on a user’s behalf. While they were designed for tasks like booking airline tickets or scheduling meetings, students can also direct them to log into learning management systems and take quizzes for them. Anna Mills, a longtime English instructor, has called on AI companies to add a simple guardrail to keep these tools from assisting in academic fraud, just as they refuse to help with hacking or other unethical acts. The situation raises questions about how AI companies are responding to educators’ calls for safeguards to protect learning.
- LinkedIn post by Anna Mills calling for AI companies to add guardrails to protect learning.
- “Statement on Educational Technologies and AI Agents” by the Modern Language Association.
- Video demo by Anna Mills showing an agentic AI browser taking quizzes in the name of a student.
- “Tech companies don’t care that students use their AI agents to cheat,” in The Verge.
- Perplexity ad on social media.
- “The Adoption and Usage of AI Agents: Early Evidence from Perplexity,” on arXiv.