
The following is excerpted from an online article posted by CNET.
A disturbing new study reveals that ChatGPT readily dispenses harmful advice to teenagers, from detailed instructions on drinking and drug use to tips for concealing eating disorders, and will even compose personalized suicide letters, despite OpenAI’s claims of robust safety measures.
Researchers from the Center for Countering Digital Hate tested the chatbot extensively by posing as vulnerable 13-year-olds, uncovering alarming gaps in its guardrails. Of the 1,200 interactions analyzed, more than half were classified as dangerous to young users.
The study, reviewed by the Associated Press, documented more than three hours of concerning interactions. While ChatGPT typically opened with warnings about risky behavior, it consistently followed up with detailed, personalized guidance on substance abuse, self-injury, and other dangerous activities. When the AI initially refused a harmful request, researchers easily circumvented the restriction by claiming the information was “for a presentation” or for a friend.
Most shocking were three emotionally devastating suicide letters ChatGPT generated for a fake profile of a 13-year-old girl: one addressed to her parents, the others to siblings and friends.
Unlike traditional search engines, AI chatbots pose unique dangers because they synthesize information into “bespoke plans for the individual,” said Imran Ahmed, CEO of the watchdog group. ChatGPT doesn’t just retrieve or aggregate existing information the way a search engine does; it creates new, personalized content from scratch, such as a custom suicide note or a detailed party plan mixing alcohol with illegal drugs.
The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called “emotionally exposed” poetry using coded language about self-harm.
Although OpenAI says ChatGPT is not intended for children under 13, the service requires only a self-reported birthdate to create an account, with no meaningful age verification or parental consent mechanism.
In testing, the platform gave no indication that it registered the user’s age even when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.
The research highlights a growing crisis as AI becomes increasingly integrated into young people’s lives, with potentially devastating consequences for the most vulnerable users.