
What Guardrails Should AI Companies Build to Protect Learning?




In the past few months, new AI tools known as “agentic AI” browsers have emerged. These browsers let users deploy AI assistants that surf the web on their behalf. While they were designed for tasks like booking airline tickets or scheduling meetings, students can also direct the bots to log into learning management systems and take quizzes for them. Anna Mills, a longtime English instructor, has called on AI companies to add a simple guardrail keeping these tools from assisting in academic fraud, just as they refuse to help with hacking and other unethical acts. The situation raises questions about how AI companies are responding to educators’ calls for safeguards that protect learning.

LinkedIn post by Anna Mills calling for AI companies to add guardrails to protect learning.

“Statement on Educational Technologies and AI Agents” by the Modern Language Association.

Video demo by Anna Mills showing an Agentic AI browser taking quizzes in the name of a student.

“Tech companies don’t care that students use their AI agents to cheat,” in The Verge.

Perplexity ad on social media.

“The Adoption and Usage of AI Agents: Early Evidence from Perplexity,” on arXiv.
