Cyber Safety in the News
Kids Who Have Smartphones by Age 12 Have Higher Risk of Depression and Obesity
ABC News, December 1, 2025
A new study published in the journal Pediatrics found that children who own smartphones by age 12 are at significantly higher risk for several health issues compared with peers who do not have devices at that age. Researchers analyzed data from more than 10,500 children in the Adolescent Brain Cognitive Development Study, finding that 12-year-olds with smartphones had about a 31% greater risk of depression, a 40% higher chance of obesity, and were more likely to experience insufficient sleep than those without phones. The earlier a child received their first smartphone, the stronger these associations tended to be.
The research also examined children who did not have a smartphone at age 12 but got one by age 13. Even in this group, smartphone use was linked to worse mental health outcomes and ongoing sleep problems after accounting for prior health measures.
While the study shows an association rather than direct causation, experts stress that these findings could help guide parental decisions about when to introduce smartphones and how to set limits. The lead researchers and pediatric authorities suggest thoughtful discussions between families and healthcare providers about device readiness and boundaries, such as restricting phone use during sleep times, to help mitigate potential harm. In fact, at Cyber Safety Consulting, we recommend never allowing smartphones or other smart devices into children’s bedrooms.
Lawmakers Unveil New Bills to Curb Big Tech’s Power and Profit
Time Magazine, December 1, 2025
This article outlines new proposed bills focused on children’s online safety. Representative Jake Auchincloss has introduced a legislative package called the “UnAnxious Generation” aimed at reining in the influence of major social media companies. The trio of bills targets three key aspects of big tech power: legal protections, revenue structures, and children’s online safety. Auchincloss argues that social media firms have become extremely wealthy and powerful, eroding civil discourse and treating young users more like products than people.
The first bill, the Deepfake Liability Act, would revise Section 230 of the Communications Decency Act so that platforms only retain their liability protections if they proactively address harms like deepfake pornography, cyberstalking, and AI-generated abuse. The second bill, the Education Not Endless Scrolling Act, would impose a 50% tax on digital ad revenue above $2.5 billion for major tech companies, using the proceeds to support education initiatives such as tutoring and local journalism.
The final piece, the Parents Over Platforms Act, is designed to strengthen age verification by requiring app stores to share verified age data with social apps, closing loopholes that currently let underage users bypass restrictions. Together with broader congressional interest in kids’ online safety legislation, these bills reflect growing bipartisan momentum to regulate technology companies more aggressively, particularly to protect young people from potential harms. These bills are steps in the right direction as more lawmakers take notice of children’s online safety.
A Short Social Media Detox Improves Mental Health, A Study Shows. Here’s How to Do It
NPR, December 2, 2025
The article highlights a recent study showing that even a short social media detox can lead to meaningful improvements in young adults’ mental well-being. Researchers tracked participants for two weeks to establish baseline social media use, which was about two hours per day on major platforms like TikTok, Instagram, Snapchat, Facebook, and X. Then, they asked most participants (about 80%) to try a weeklong reduction, cutting their use to 30 minutes a day. By the end of that week, many experienced notable decreases in symptoms of depression and anxiety, along with improvements in sleep quality, suggesting that stepping back from social feeds can quickly reduce psychological stress.
Experts quoted in the article note that these benefits emerged even though overall screen time did not necessarily drop, pointing specifically to social media consumption as the factor tied to mental health improvements. While the study participants were not diagnosed with clinical disorders, those with higher initial symptoms saw the largest gains. The discussion also underscores that reducing social media use might help people break cycles of comparison and emotional strain tied to online interactions, though such detoxes are not a replacement for formal treatment when needed.
Merriam-Webster’s 2025 Word of the Year (“Slop”) Takes Aim at Poor AI Content
CNN, December 15, 2025
Merriam-Webster has chosen “slop” as its Word of the Year for 2025, reflecting the widespread presence of low-quality digital content on the internet, much of it generated by artificial intelligence. The dictionary defines slop in this context as “digital content of low quality that is produced usually in quantity by means of AI,” including absurd videos, bizarre ads, fake news that looks real, and poorly written AI books. The choice highlights how language evolves with technology and how everyday speech is shaped by online experiences.
Originally a word from the 1700s meaning soft mud, and later food waste or general rubbish, slop has adopted a new meaning in the AI era. Its resurgence reflects public awareness and annoyance with generative AI content that prioritizes volume over substance. The announcement of slop as Word of the Year underscores how pervasive and culturally significant these trends have become in 2025, as people increasingly encounter such content in social media feeds and online advertising.
Merriam-Webster’s selection of slop stood out as a defining term because it encapsulates broader concerns about AI’s impact on creativity, information quality, and digital culture, even inspiring some observers to see it as a kind of cultural pushback against mindless machine-generated content.
Two families sue Meta over teens’ deaths by suicide, citing ‘sextortion’ scams
NBC News, December 17, 2025
One boy joined Instagram on Sunday and was dead by Tuesday afternoon; his mother says the app is to blame. Two families, one from Pennsylvania and another from Scotland, have filed a wrongful death lawsuit against Meta, the parent company of Facebook and Instagram, after their teenage sons died by suicide following “sextortion” scams on Instagram. In these schemes, strangers posing as romantic interests coaxed the boys into sending explicit photos, then extorted them with threats to share the images unless they paid or continued sending content. The families say the platform’s design and lack of adequate protections made it easier for predators to target young users.
The lawsuit claims that Meta failed to implement safety features it knew about or could easily adopt, such as default private settings for teen accounts, and that internal systems like recommendations connected teens with potential predators, contributing to the harm. Legal filings argue that the deaths were a foreseeable result of these design decisions and of prioritizing engagement over safety, and they highlight broader concerns about how social media platforms protect minors from online exploitation. Meta says it is working to fight sextortion and assist law enforcement, but it has not conceded the families’ claims. Unfortunately, sextortion cases are on the rise, with the FBI stating that sextortion is the fastest-growing online threat to teenagers.
New York State To Require Social Media Platforms to Display Mental Health Warnings
Reuters, December 26, 2025
New York Governor Kathy Hochul signed a law requiring social media platforms that use features like infinite scrolling, auto-play, or algorithmically curated feeds to display warning labels about potential mental health risks for young users. The measure aims to alert people, especially minors, that addictive design elements may contribute to anxiety, depression, and other issues, likening the warnings to those found on tobacco or other risky products. It applies to platforms operating partly or wholly in New York.
Under the law, the New York Attorney General can enforce civil penalties of up to $5,000 per violation if companies fail to comply, although major platforms such as TikTok, Meta, Snap, and Alphabet have not yet publicly responded. The move places New York alongside states like California and Minnesota in adopting social media safety laws and reflects broader concern about the impact of online platforms on children’s mental well-being. Will we see other state legislators follow suit?


