Cyber Safety in the News
Believe It or Not, Kids Actually Want to Get Off Their Phones – Dr. Jonathan Haidt Says He Has Proof
Parent’s Magazine, January 3, 2026
The article highlights research from Dr. Jonathan Haidt and co-author Catherine Price in their book The Amazing Generation, which argues that many children genuinely want to spend less time on their smartphones and more time on real-world activities like playing outside and socializing face-to-face. The authors gathered testimonials and survey data from young people to show that, when given a choice, many kids prefer unstructured, screen-free time with friends over hours spent on their phones. This challenges the common belief that children are simply addicted to their devices and suggests that kids sometimes feel trapped by the expectations and norms around digital communication and social media.
Drawing on both research and real kids’ voices, the article suggests that parents have an opportunity to help their children reclaim a more balanced childhood by setting healthier boundaries around technology use. Rather than banning smartphones outright, the book focuses on giving kids freedom and encouraging activities that foster real-world connections, which many young people say they genuinely want. The authors also explain how peer pressure and fear of missing out can keep kids glued to their screens even when they would rather be doing something else. The solution seems to involve creating environments that make screen-free time more appealing. When we work with students in the classroom, we often encourage them to make a list of offline activities they enjoy, which helps foster those real-world connections.
Phones Ruled Their Lives. A New College Class Helped Them Break Free.
The Washington Post, January 6, 2026
At Loyola University Maryland in Baltimore, a psychology professor created an experimental “digital detox” course to help students break free from excessive smartphone dependency, which many described as feeling “trapped in a phone prison.” Before the class began, some students reported checking their phones hundreds of times a day or having dozens of games downloaded and expressed concerns that constant screen use was hurting their focus, sleep, and emotional wellbeing. Over the semester, participants dramatically reduced their phone pickups and began recognizing how much time their devices consumed.
The class ran without phones, computers, or tablets; instead, students engaged in analog activities, digital fasts, and outdoor experiences like football and hiking. They studied the psychology behind attention and notifications and practiced skills such as uninterrupted conversation, something many students said they’d rarely experienced. By the end of the semester, students created “digital manifestos” outlining how they planned to use technology more intentionally going forward.
Many participants said the experience helped them rediscover boredom and the value of in-person interaction, and several pledged to set concrete limits on social media and screen time after the class ended. The course reflects a growing awareness among educators that college-aged young adults often need structured support to rethink their relationship with technology. As part of Cyber Safety Consulting’s CASE curriculum, we work with students to build awareness of their current daily screen time and to be more intentional about offline activities in the future.
Character.AI And Google Agree to Settle Lawsuits Over Teen Mental Health Harms and Suicides
CNN, January 13, 2026
Google and the AI startup Character.AI have agreed to settle multiple U.S. lawsuits brought by families who alleged that interactions with Character.AI’s chatbot platform contributed to teenagers’ suicides or serious psychological harm. The legal claims include wrongful death and negligence, with one case involving a Florida mother who said her 14-year-old son formed a harmful emotional connection with a chatbot before ending his life.
The lawsuits were filed in several states, including Florida, Colorado, New York, and Texas, with plaintiffs arguing that the chatbots lacked adequate safety protections or crisis-intervention features for minors. Google was named in many of the suits because of its financial and technological ties to Character.AI; plaintiffs claimed that this connection made Google partly responsible for the product’s design and deployment.
In response to growing concerns, Character.AI has already implemented changes aimed at protecting youth, such as banning under-eighteen users from open-ended chats and introducing age-verification measures to reduce harm. The settlement marks one of the first major legal resolutions tied directly to safety issues with AI chatbot use among teens, and it highlights a broader debate about how tech companies should safeguard AI engagement with vulnerable users like teenagers.
YouTube Will Let Parents Stop Their Teens from Endlessly Scrolling Short Videos
CNN, January 14, 2026
YouTube has announced expanded parental control features that let parents of supervised teen accounts manage how much time their children spend watching YouTube Shorts, the platform’s short-form video feed. These controls allow parents to set a daily time limit on Shorts viewing, from a maximum of two hours down to zero minutes, effectively blocking access altogether when needed, such as during homework or bedtime. The update is part of YouTube’s broader effort to respond to concerns from families, child advocates, and lawmakers about the addictive nature of endless scrolling on short-video platforms.
In addition to time limits, YouTube is introducing features like custom “Bedtime” and “Take a Break” reminders for teens, giving families more tools to promote healthier viewing habits and digital wellbeing. The company is also making it easier for parents to create and manage supervised accounts and to switch between adult and teen accounts on shared devices. These tools build on existing protections already in place for users under eighteen, including default recommendations aimed at reducing harmful content loops.
YouTube’s announcement reflects growing scrutiny of social media’s impact on youth as platforms grapple with how to balance engagement with safety. By prioritizing parental control over Shorts viewing and refining content recommendations, including promoting more educational or uplifting videos for younger audiences, YouTube aims to tailor experiences more appropriately for teens. Critics and advocates alike see such features as increasingly necessary given the attention-grabbing design of short-form video feeds. While this is a step in the right direction, it would be easy for students to circumvent this parental control by using an alternative YouTube account or browsing the platform as a guest. As always, open communication between parents and kids about online safety is best.
Meta Halts Teens’ Access to AI Characters Globally
Reuters, January 23, 2026
Meta Platforms announced that it will suspend access for teenagers to its AI characters across all its apps worldwide while it builds an updated experience specifically for teen users. The pause will begin “in the coming weeks,” and teens will not be able to interact with the character-based AI until the revised version is ready. According to Meta, the new iteration will include parental controls designed to give guardians more oversight once it is launched.
Meta said that earlier previewed parental controls, which would let parents disable their teens’ private chats with AI characters, have not yet been fully rolled out, so the company is taking this step as an interim measure. The updated version of the characters is intended to be guided by a PG-13 content standard aimed at keeping interactions appropriate for minors and preventing access to harmful or age-inappropriate material.
The move comes as regulators and critics scrutinize how AI chatbots interact with minors, including past reporting that Meta’s AI rules at times allowed provocative or inappropriate conversations with younger users. Meta’s decision reflects rising industry and regulatory concerns over teen safety and content risks associated with AI-powered characters on social platforms. We are always happy to see parental controls put into place and would like to see more platforms follow suit in the future.