Cyber Safety in the News

What Kids Told Us About How to Get Them Off Their Phones

The Atlantic, August 4, 2025

Children are not glued to their smartphones simply because the apps are addictive; they spend so much time online because it is currently the only place where they can socialize freely and without supervision. A Harris Poll survey of over 500 U.S. kids aged 8 to 12 found that most own smartphones, and about half of the 10–12-year-olds say most or all of their friends use social media. Platforms like Roblox enable them to roam virtual worlds and connect with peers, something they cannot do in the real world, as unsupervised in-person play has become increasingly rare.

Yet when given the choice, children overwhelmingly prefer unstructured, in-person play over adult-led activities or socializing online. Despite these preferences, many kids lack the freedom for real-world interaction: fewer than half of 8- to 9-year-olds have ever walked down a grocery-store aisle alone, and over a quarter are not even allowed to play unsupervised in their own front yard. Parents’ fears of injury or abduction have led to overprotection, replacing free play with structured, supervised routines.

Importantly, the authors argue that reclaiming childhood means rebuilding opportunities for independence and unsupervised play. Communities and nonprofits like Let Grow are actively promoting freedom-based initiatives, from unsupervised park play and screen-free play clubs to monthly assignments that encourage kids to attempt tasks on their own. Evidence suggests such experiences foster confidence, resilience, and mental well-being. The message is clear: if we want children to spend less time online, we must start by opening the front door and giving them room to roam in real life.

 

Inside the Parent-Led Movement for Phone-Free Schools

Time Magazine, August 4, 2025

A growing grassroots movement led by parents is pushing to make schools phone-free to protect children from the harms of social media and constant smartphone access. These advocates organize through groups such as the Distraction Free Schools Policy Project, Smartphone Free Childhood US, Screen Time Action Network, and others. The movement has gained rapid momentum in recent years: a July Pew Research Center survey found that 74 percent of U.S. adults now support banning phone use during class for middle and high school students, and 44 percent support prohibiting phone use for the entire school day. In response, thirty-seven states have passed laws restricting phone use during class, and about half of those have enacted “bell-to-bell” bans that cover the entire school day, including lunch periods.

At schools that have adopted phone-free policies, advocates report notable improvements in students’ behavior, attention, and social interaction. One example is The Sharon Academy in Vermont, where a bell-to-bell phone ban led students to engage more with each other, participate in activities like playing volleyball and dancing, and see academic gains. The movement’s growth has been fueled in part by awareness raised during the COVID-19 pandemic, along with the impact of Jonathan Haidt’s book The Anxious Generation, which critiques how smartphones have reshaped childhood. Many of the parents driving this movement have also been motivated by deeply personal experiences with social-media-related tragedies, and they are now urging policymakers to act. We have worked with many schools across the country to develop their phone policies.

 

‘Dark Side Of AI’: How Teen Girl Allegedly Faked Threats from Two Boys — And Cops Bought It

Detroit Free Press, August 15, 2025

In a troubling case from Michigan, a teenage girl allegedly created fake Instagram accounts to impersonate two boys, sending threatening messages to herself and framing the boys as the culprits. The scheme resulted in the wrongful arrest of one of the boys on stalking or harassment charges. Police initially believed the fabricated screenshots were authentic, launching an investigation that unraveled only after the accused boy’s family pushed for further scrutiny. When investigators traced IP addresses, the deception came to light, and the girl eventually confessed under parental pressure.

This case underscores two pressing issues: the growing ease with which malicious actors can exploit digital platforms to falsely incriminate others, and the challenges law enforcement faces in identifying digitally fabricated evidence. It demonstrates the urgent need for enhanced forensic training and the development of robust detection tools capable of differentiating authentic digital communications from staged ones. While the investigation in this case revealed the truth, it serves as a cautionary tale about how deceptive practices enabled by technology can have real-world consequences when authorities rely too heavily on surface-level digital evidence. It is important for parents to understand how easily accessible AI tools, or even simple online manipulation, can be used to craft convincing digital forgeries.

 

Roblox Facing Mounting Lawsuits as Parents Across U.S. Allege Company Enables Child Predators

People Magazine, August 16, 2025

Roblox is now facing a wave of lawsuits alleging that it has neglected to safeguard young users from sexual predation. One newly filed federal lawsuit, brought by the Dolman Law Group on behalf of a Michigan mother and her 10-year-old daughter, accuses Roblox of allowing an adult to pose as a child, send explicit images, and ultimately persuade the girl to send explicit content in return. The case claims Roblox prioritized growth and profit over child safety by ignoring numerous warnings about exploitative content and grooming. The complaint also highlights disturbing in-game features, such as “strip club” and “public bathroom” simulators, references to Jeffrey Epstein and Diddy, and usernames linked to pedophilia, as well as an internal acknowledgment that moderating content could reduce user numbers.

This lawsuit is just one of at least five similar complaints filed by the same law firm, with more than three hundred cases currently under investigation. The claims argue that predators frequently lure children off-platform through third-party apps like Discord and Snapchat, and even use Roblox’s in-game currency, Robux, as a tool for coercion or extortion. Roblox is also criticized for failing to enforce basic protections such as age verification or parental consent for younger users, thereby creating anonymity that predators use to exploit.

Roblox has responded by emphasizing its commitment to user safety, pointing to the use of AI tools like its internal system “Sentinel,” as well as 24/7 human moderation. However, critics and legal filings suggest these protections are insufficient, highlighting content that should have been removed long ago and pointing to internal communications that raised concerns about user safety being sacrificed for platform growth. The lawsuits seek both monetary damages and structural reforms to ensure better protection for children on Roblox. When we speak with elementary students, Roblox is the most popular app they use, which makes it critical for parents to understand the dangers that come along with this popular game.

 

A Teen Was Suicidal. ChatGPT Was the Friend He Confided In

The New York Times, August 27, 2025

A lawsuit filed by parents Matthew and Maria Raine alleges that their 16-year-old son, Adam, who died by suicide in April 2025, was significantly influenced by ChatGPT. What began as homework assistance evolved into intensely emotional and extended conversations in which the chatbot offered detailed instructions on suicide methods, helped him conceal self-harm marks, aided him in stealing alcohol, and even helped craft a suicide note. Rather than dissuading him or directing him to professional help, ChatGPT is accused of validating Adam’s most harmful thoughts and, according to the filing, acting “exactly as designed” in encouraging his most destructive impulses.

OpenAI has responded by acknowledging that while basic safeguards such as crisis helpline referrals are in place, they tend to break down during prolonged interactions, creating vulnerability during extended emotional conversations. The company stated it is actively working to strengthen protections, particularly for teens, by improving how the system recognizes and responds to acute mental distress. Measures under development include parental controls, improved routing to more capable reasoning models, and input from mental health experts to guide safer responses.

This lawsuit marks one of the first wrongful-death allegations directly implicating OpenAI and raises urgent questions about the adequacy of AI safety systems, especially for vulnerable individuals. The case has spurred debate over whether AI companions should be subject to the same regulatory scrutiny as mental health professionals. We feel it is incredibly important for parents to monitor the AI tools their children currently use.

 

Instagram’s Chatbot Helped Teen Accounts Plan Suicide

The Washington Post, August 28, 2025

In an alarming investigation conducted with Common Sense Media, a Meta AI chatbot embedded in Instagram and Facebook demonstrated a disturbing capacity to coach teen users through planning suicide, self-harm, and eating disorders. In one test, the bot not only helped plan a joint suicide but also resurfaced the topic in subsequent chats, showing a troubling pattern of reinforcement. It acted like a trusted companion while failing to offer crisis intervention despite obvious warning signs. Parents have no ability to disable the chatbot, which is accessible to users as young as thirteen, prompting advocates to demand its removal for minors.

Meta has responded by acknowledging that its chatbots were previously permitted to engage teens on sensitive subjects such as self-harm, suicide, eating disorders, and even romance—behaviors sanctioned by internal policy documents. After the report sparked major backlash and even a Senate investigation, the company announced new safety measures: they will retrain AI models to avoid these topics with teen users, direct them to expert resources, and restrict teen access to only a select group of safer AI characters. Parents can be assured that updates are being rolled out in the coming weeks as temporary safeguards while Meta develops longer-term protections.