Chatbots and Teens: What could go wrong?

Only a few miles away from our offices, in the city of Rancho Santa Margarita, California, the parents of a teen sued OpenAI, the Artificial Intelligence (AI) company and maker of ChatGPT, after their child, who had relied on ChatGPT for advice and counsel, died by suicide. OpenAI recently announced new parental controls in an effort to minimize risk to youth. Parents can now configure settings to restrict access in many ways. In addition, it offers new features to detect a child in crisis and can be set to send parents text or email notifications after a human reviewer has assessed the potential crisis. OpenAI claims it will disclose only enough information to warn parents and will not compromise the child’s privacy or release communications between the child and ChatGPT. OpenAI is also working to accurately detect the age of the user to ensure protections for minors. So how is this working?

“ChatGPT parental control notification problems include delayed notifications, potential bypass of controls, and a lack of direct visibility into conversations. Issues such as the notification delay of up to 24 hours for serious alerts, and the ability for teens to simply log out or unlink their accounts, indicate that the current system has vulnerabilities that need to be addressed. Additionally, parental controls may not be default-safe, as some settings might not apply if a user is not signed in.” (Google Gemini referencing Consumer Reports and the Washington Post)

While perhaps not rock solid, this is yet another set of dangers, and yet another set of parental controls, that parents must monitor and manage to protect their children online. Is OpenAI really suggesting that parents, knowing the dangers of AI, put their children’s lives in the hands of new, untested AI parental-control protections?

We have seen this before. Parental controls have always been an afterthought for computer, operating system, videogame and social media companies. Videogame developer Blizzard Entertainment (also only a few miles from us) introduced parental controls at about the same time that lawsuits emerged from parents alleging that its games were addictive. The same has happened with the social media companies, and many of those lawsuits are ongoing.

But what if videogame, social media and AI companies are not the problem? What if the problem is simply that children have too easy access to technologies that are not safe for them? What if the solution is simply technology that specifically helps parents protect their children from our society’s unbridled acceptance of new and potentially unsafe technologies?

That is exactly what the Sentinel LaunchPad does. Rather than add another layer of parental controls, the LaunchPad makes it easy for parents. Set the child’s age, answer some basic questions about their schedule and your family’s lifestyle, and professionally guided settings are applied automatically. Parents can choose whether they want their child to have access to AI and, if so, when and in what modes of operation. Parents can selectively, through the System Settings App, enable or restrict access to specific AI websites in specific modes.
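To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of how a child’s age and a schedule answer might map to a starting profile. The `Profile` class, the `recommend_profile` function, the age brackets, the hours, and the allowed-site list are all invented for illustration; they are not the LaunchPad’s actual settings or code.

```python
# Purely illustrative sketch; names and values are hypothetical, not LaunchPad code.
from dataclasses import dataclass, field


@dataclass
class Profile:
    ai_access: bool                                      # is any AI chat site reachable at all?
    allowed_hours: tuple                                 # daily window (start hour, end hour)
    allowed_ai_sites: set = field(default_factory=set)   # per-site allow list parents can edit


def recommend_profile(age: int, school_nights: bool) -> Profile:
    """Map an age and one schedule answer to a professionally guided starting profile."""
    if age < 13:
        # Younger children start with AI access off entirely.
        return Profile(ai_access=False, allowed_hours=(0, 0))
    cutoff = 20 if school_nights else 22
    return Profile(
        ai_access=True,
        allowed_hours=(15, cutoff),
        allowed_ai_sites={"chat.openai.com"},  # parents can add or remove specific sites
    )


# Example: a 14-year-old with school-night limits gets access from 3 p.m. to 8 p.m.
print(recommend_profile(14, school_nights=True))
```

The point of the sketch is simply that a handful of plain-language answers from a parent can be translated into concrete, enforceable settings, rather than asking the parent to configure every toggle by hand.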

But ultimately, restriction is not the answer. Parents will need to educate and guide their children for the successful introduction of social media, artificial intelligence and other potentially valuable resources that also carry significant risks. This can only be accomplished if access is carefully managed and use is monitored. With the LaunchPad, parents can easily enable AI or social media access only at times when they can be with the child. When being with the child in person is not possible, or as the child matures, parents can, from anywhere in the world, monitor their child’s use of AI with periodic high-definition screen images. Parents can, in near real time, even communicate with their child about what they are seeing and, if necessary, remove them from undesired content.
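For readers curious what periodic screen monitoring looks like in the abstract, here is a small illustrative sketch, again hypothetical and not LaunchPad code. It assumes the Pillow imaging library for screenshots, and `notify_parent` is a stand-in that only records where a capture was saved; a real system would deliver the image to the parent’s device.

```python
# Illustrative sketch only; this is not the LaunchPad implementation.
import time
from datetime import datetime

from PIL import ImageGrab  # pip install Pillow


def notify_parent(path: str) -> None:
    """Stand-in for delivery to the parent's app; here we simply log the file path."""
    print(f"Screen capture ready for parent review: {path}")


def monitor(interval_seconds: int = 300) -> None:
    """Capture the screen on a fixed interval and hand each image to the parent."""
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        path = f"capture-{stamp}.png"
        ImageGrab.grab().save(path)   # full-resolution screenshot of the display
        notify_parent(path)
        time.sleep(interval_seconds)


if __name__ == "__main__":
    monitor()
```

The design choice worth noting is periodic sampling rather than continuous recording: the parent gets enough visibility to start a conversation in near real time without capturing every moment of the child’s screen.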

As a psychologist, computer engineer, and entrepreneur, I am both excited and terrified by all these new technologies. We at Sentinel Computers have a history of developing systems to treat problematic and addictive screen use, and we are proud to now be working to help parents prevent these and other harms for future generations.

For information on other efforts to protect children from unsafe technologies, check out the directory website we sponsor, putgenieback.org.
