Monitor Kids Online With Artificial Intelligence?
There’s no question that young people can come to harm online. Child predators, cyberbullies, and other dangers lurk in every form of online activity. Parents, naturally, want to protect their kids, but they can’t monitor every social media post, email, and Skype chat. So many parents are turning to what can be called AI-powered child surveillance services. Schools also are monitoring students even when the students are off-campus, in hopes of preventing mass shootings, suicides, and other tragedies. Read on for the scoop...
Parental Controls Get AI
Parents want to protect their kids from a variety of dangers that lurk online. Left unsupervised, kids may not know whom to (dis)trust, what dangers lurk in dark corners of the Web, and how to manage social media interactions. Schools have always been a breeding ground for teasing and bullying, but the Internet amplifies those signals and beams them at kids 24/7.
In texting slang, "jk" is shorthand for "just kidding." But it's often used after an ill-considered, mean-spirited or threatening comment. You've probably heard stories of children who have tragically ended their own lives, or taken horrendous measures to get revenge, after being subjected to cyber-bullying. Let's look at how some new tools and technology can help to thwart those unhappy endings, and figure out when "jk" is actually "JK, but not really."
Firms like Bark Technologies, Gaggle, and Securly monitor what kids write, read, and view online, using apps installed on individuals’ devices or on school district networks. They look for concerns ranging from use of profanity to expressions of suicidal intentions. Some services, like Securly, block students from accessing blacklisted sites using school resources. All of them send alerts to parents and school administrators when something of concern pops up. If a “concern” is serious and exigent, like a bomb threat, law enforcement may be alerted too.
Bark Technologies, which launched its school service in February, 2017, claims to be monitoring 2.6 million students in 1,100 school districts. It issues 35,000 to 55,000 alerts on a typical day, most of them about profanity. Bark claims to have uncovered 16 school shooting threats in time to thwart them.
Gaggle.net has been around for 20 years. Since July, 2018, it claims to have prevented 447 suicides among students at the 1,400 school districts it serves. Gaggle also flagged 240 kids who brought weapons to school with intent to harm someone.
Securly Inc claims to serve 10,000 schools and 10 million students. It recently flagged a student who had searched Google for “how to make a bomb” and “how to kill yourself.” A human analyst who reviewed the automated alert contacted the school.
Successes like these are possible because kids often share their anger, despair, or violent intentions on social media, via private messages, and so on. Their peers spread word of contraband rapidly, too. Monitoring services are very good at flagging disturbing communications and alerting the appropriate level of authority.
Early child-monitoring (better known as “parental control”) software, dating back to the mid-1990s, was pretty primitive. It relied on lists of keywords to filter web sites or flag student communications. Today, machine learning and artificial intelligence provide the ability to flag an Instagram photo of a gun in a backpack, or a TikTok video in which verbal threats are made. Modern AI software can even evaluate how serious a threat is, although panels of human analysts still play a role in referrals to schools or law enforcement.
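To see why those old keyword lists fall short, here's a minimal sketch of a 1990s-style filter. The word list and function are purely illustrative, not any vendor's actual code:

```python
# Illustrative only: a simple keyword filter of the kind early
# "parental control" software used. Modern services like Bark or
# Gaggle rely on machine-learning models instead of lists like this.
FLAGGED_WORDS = {"bomb", "kill", "gun"}  # hypothetical blocklist

def flag_message(text: str) -> bool:
    """Return True if any flagged word appears in the message."""
    words = set(text.lower().split())
    return bool(words & FLAGGED_WORDS)

# The weakness is the lack of context: harmless slang gets flagged,
# while coded or misspelled threats slip through unnoticed.
print(flag_message("this math test is going to kill me"))  # True (false positive)
print(flag_message("meet me behind the gym after class"))  # False (context is lost)
```

The false positive above is exactly the "jk, but not really" problem in reverse: a keyword match can't tell hyperbole from a threat, which is why modern systems layer AI scoring and human analysts on top.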
The Privacy/Security Tradeoffs
These monitoring services have helped hundreds of emotionally disturbed students, and may well have prevented dozens of school shootings. Still, their pervasive surveillance raises privacy and free speech concerns, at least among students. Parents and school administrators seem to feel the trade-off is worthwhile.
What happens on social media can move onto a school campus very quickly. Some “trash talking” on Facebook over a weekend often leads to an on-campus fight the following Monday. So schools are naturally in favor of monitoring students’ social media activity whether the kids are at school or not.
Amanda Lenhart, a researcher who has studied how teens use the Internet, says that it’s difficult for adults to correctly interpret kids’ interactions online. “Even if you have people directly looking at posts they won’t know what they’re looking at,” Lenhart told Wired magazine. “That could be exacerbated by an algorithm that can’t possibly understand the context of what it was seeing.”
What do you think? Should parents and schools have unfettered access to students’ social interactions? Is the trade-off of privacy for security worthwhile in this case? Can software ever tell when a teen is joking and when a threat is serious? Or should we always take a "better safe than sorry" approach? Your thoughts on this topic are welcome. Post your comment or question below...
This article was posted by Bob Rankin on 25 Feb 2019
Copyright © 2005 - Bob Rankin - All Rights Reserved