
Protect & Prevent - Should Schools Be Allowed to Monitor Students' School Accounts, Devices, or Social Media Interactions?
About this listen
This past week we saw a story in the Associated Press with an interesting headline: "Students have been arrested for false alarms from AI surveillance." The story relates an interesting situation. It seems a 13-year-old girl made an offensive joke while chatting online with her classmates, which triggered the school's AI surveillance software. Everyone seemed to agree (including mother and student) that the comments were "wrong" and "stupid," but the context showed they were not a threat (per the article). The article goes on to say that "surveillance systems in American schools increasingly monitor everything students write on school accounts and devices." Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids' online activities, looking for signs that students might hurt themselves or others.

The girl had been texting with friends on a chat function tied to her school email. The AI function identified the perceived threat, and school authorities and law enforcement were alerted. She was arrested, spent the night in jail, and was strip-searched. Later the court ordered eight weeks of house arrest, a psychological evaluation, and 20 days at an alternative school. There is now a lawsuit filed against the school.

This is a fascinating case, and I welcome comments from Million Kids followers. Email Opal at info@millionkids.org or make a comment on the podcast. Since this organization combats sextortion and online exploitation, I could not help but wonder if this technology could be used when a student sends or receives CSAM or a nude, or is receiving demands for money from someone, especially if that communication originates outside the United States. Perhaps we could save lives.

The article is especially interesting to me because it lays out all of the atrocities of the situation, but only at the end does it tell you that the incident took place two years ago. AI has changed dramatically in two years.
It is my hope that there has been substantial refinement in the detection systems, so that school administrators and law enforcement can sort out when a student is making a crude remark or a bad joke, and when it is a real threat that needs immediate attention. Whether we agree or disagree with AI monitoring, we also need to educate our kids that statements about causing harm, bombs, killing, guns, or threats of any kind will be taken seriously, especially on a system provided by a school organization. Be sure to join us.