Cyber Safety in the News

NCOSE’s Dirty Dozen List Names Meta Founder Mark Zuckerberg, Snapchat Among 10 Others

MSN, April 5, 2026

The National Center on Sexual Exploitation released its 2026 “Dirty Dozen” list, which names major companies and individuals it believes contribute to sexual exploitation online. A major highlight this year is the inclusion of Mark Zuckerberg, marking the first time a specific tech leader, not just a company, was named. The group argues that under his leadership, platforms owned by Meta have failed to adequately protect users, especially minors, from harmful content and exploitation.

The article specifically names all 12 entries on the 2026 list. These include Amazon, Android, the Apple App Store, Google Chromebooks, Discord, Snapchat, Steam, Telegram, TikTok, X (formerly Twitter), Grok, and Mark Zuckerberg. Many of these platforms were criticized for issues like weak safety protections, exposure to inappropriate content, or being used by bad actors for exploitation.

Overall, the list is meant as a public warning and a call for change. NCOSE argues that these companies and technologies have the power to improve safety but have not done enough so far. The organization hopes that naming them will pressure leaders and platforms to strengthen protections, especially for young users, and reduce the risks associated with social media, apps, and emerging AI tools.

 

Schools Across America Are Quietly Admitting That Screens in Classrooms Made Students Worse Off and Are Reversing Years of Tech-First Policies

Fortune, April 10, 2026

The Google-driven expansion of Chromebooks and other education technology has become deeply embedded in American schools, largely because the devices are relatively cheap and easy to use. Schools across the U.S. invested heavily, especially during the pandemic, in one-to-one laptop programs, often relying on Chromebooks and tools like Google Classroom. As a result, education now makes up a major share of the Chromebook market, showing how dominant this technology has become in classrooms.

However, the article explains that many schools are now reconsidering this heavy reliance on screens. Districts that spent millions on devices are struggling with costs, maintenance, and the need to replace aging laptops. At the same time, some educators and parents report that reducing screen time and returning to paper-based learning has led to improvements in reading comprehension, test scores, and student well-being. This growing shift suggests that while technology can be useful, schools are realizing it may have been overused and are now trying to find a better balance between digital tools and traditional learning methods.

 

How Digital Thieves Use Fake Profiles and Invites to Scam Your Friends

The Washington Post, April 11, 2026

Many victims don’t realize they’ve been targeted until their accounts are taken over, their contacts are stolen, or they become victims of identity theft.

There is a growing online scam where hackers use fake party invitations and cloned social media profiles to trick people into giving away personal information. These scams often start when a person’s email or social media account is hacked. The attacker then sends realistic-looking invitations, sometimes through platforms that resemble real services, to people in the victim’s contact list. Because the message appears to come from someone they trust, recipients are more likely to click on the link, which may lead to a fake login page or download that steals their information.

Once someone falls for the scam, it can spread quickly, almost like a chain reaction. Hackers can access contact lists, create more fake messages, or even impersonate people on platforms like Facebook to ask for money or promote fraudulent schemes. The rise of AI tools has made these scams even more convincing, contributing to billions of dollars in losses each year. Overall, the article emphasizes the importance of being cautious, even with messages that seem to come from friends, and encourages people to verify suspicious invitations through a phone call before clicking or sharing any personal information.

 

These Moms Had Daughters Sucked into a Deadly Online School Shooter Community – What They Need You to Know

CNN, April 11, 2026

There are growing concerns among parents about a dangerous online subculture sometimes called the “true crime community.” While true crime content itself is popular and often harmless, the article explains that some online groups go much further, forming spaces where individuals obsess over violent events and, in some cases, glorify perpetrators. These communities can be especially risky for young or vulnerable users, who may be drawn in by a sense of belonging or curiosity about crime stories.

The story highlights how some families only become aware of these communities after serious consequences. In certain cases, teens were influenced by online groups that encouraged harmful thinking or normalized violent behavior. Experts say these spaces can involve peer pressure, manipulation, and exposure to disturbing content, which may negatively affect mental health. The article emphasizes that these groups are often hidden in plain sight, using coded language, memes, or inside references that adults may not immediately recognize.

Overall, the article serves as a warning to parents to stay aware of their children’s online activity and to have open conversations about what they see and experience online. It stresses the importance of digital literacy, monitoring, and creating safe spaces for kids to talk about difficult topics. While not all true crime interest is harmful, the article makes clear that certain online communities can cross a line, making awareness and early intervention especially important.

 

Los Angeles Becomes First Major School District to Limit Screen Time for Students

People Magazine, April 22, 2026

The Los Angeles Unified School District has become the first major U.S. school district to officially limit student screen time during the school day. In April 2026, the school board passed the policy with a unanimous vote, aiming to set “developmentally appropriate” boundaries on how students use technology in class. The decision reflects a shift after the COVID-19 pandemic, when heavy device use became common for remote learning.

The policy includes restrictions on activities like student use of YouTube and other streaming platforms in classrooms, and it calls for reviewing all classroom technology contracts. Officials emphasized that the goal is not to eliminate technology but to better balance its use with students’ academic, social, and developmental needs. Leaders noted that while devices were once essential for keeping students connected, schools now need clearer limits to ensure technology supports learning rather than distracts from it.

The article also highlights concerns driving the change, including research linking excessive screen time to lower academic performance and health issues such as obesity and reduced cognitive functioning. The new measure builds on earlier actions, like a 2024 cellphone ban, and signals a broader effort to rethink how technology is used in education. Overall, the district hopes the policy will help students benefit from digital tools without harming their well-being or learning outcomes. Time will tell if other districts follow suit.


One-Fifth of Australian Teens Still Use TikTok, Snapchat After Social Media Ban

Reuters, March 12, 2026

About two months after Australia implemented its world-first ban on social media for users under 16, new data suggests the policy has reduced, but not eliminated, teen usage. A report from parental control company Qustodio found that more than 20% of Australian teens aged 13–15 were still using platforms like TikTok and Snapchat, even though these apps are required to block underage users. While usage dropped compared to before the ban took effect in December 2025, the findings raise questions about how effective the law’s age-verification systems really are.

The article highlights that enforcement depends on whether platforms and parents successfully restrict access. Teens have continued to find ways around the rules, especially in households where parental controls are not in place. Regulators acknowledged these gaps and said they are actively working with tech companies to improve compliance and identify potential systemic failures. The law itself places responsibility on platforms rather than penalizing teens or families directly. It will be interesting to see whether other countries follow suit.

 

Study Links Children’s Social Media Use with Anxiety and Depression in Teenage Years

The Guardian, March 22, 2026

This article reports on a major study from Imperial College London that found a clear link between heavy social media use in childhood and a higher risk of anxiety and depression during the teenage years. Researchers followed more than 2,000 students and discovered that kids who spent over three hours a day on social media were significantly more likely to experience mental health problems compared to those who used it for around 30 minutes. The findings suggest that the amount of time spent online plays a key role in long-term wellbeing.

A key factor identified in the study is sleep disruption. Children who used social media heavily were more likely to stay up late, leading to less and poorer-quality sleep, which researchers believe is a major contributor to later anxiety and depression. The effects were especially noticeable among girls, who showed stronger links between high usage and mental health struggles. However, researchers caution that the relationship is complex and does not prove that social media directly causes these conditions, but rather that it is one of several interacting influences on young people’s mental health.

The article also highlights ongoing debates about how to respond. Some policymakers are considering stricter rules, such as limiting or banning social media use for younger teens, but experts warn there isn’t enough evidence to support extreme measures yet. Instead, researchers recommend focusing on practical solutions like improving digital literacy, encouraging healthier habits (especially around sleep), and continuing to study how rapidly changing platforms affect young people. We agree, and work with students every day to help them develop healthy digital literacy skills.

 

1 In 3 Teens ‘Experienced Problematic Use’ Of Meta Platforms: Closing Arguments from Landmark Social Media Trial

Fortune, March 23, 2026

The article describes the landmark trial in New Mexico where closing arguments have begun in a case accusing Meta (the parent company of Facebook and Instagram) of misleading the public about the safety of its platforms for young users. Prosecutors argue that Meta knowingly designed its platforms to maximize engagement and profit, even when that meant exposing teens to harmful or addictive content. A key claim highlighted in the case is that about one in three teens experienced “problematic use” of Meta’s platforms, meaning they felt unable to control how much time they spent on them. The state also alleges Meta failed to properly enforce age restrictions and did not fully disclose risks like mental health issues or exploitation to users and families.

Meta’s defense argues that while some users may overuse social media, the company has invested heavily in safety tools and does not consider its platforms “addictive” in a clinical sense. Instead, it uses the term “problematic use” to describe excessive engagement and says it has been transparent about potential risks. The trial, which included weeks of testimony from experts, teachers, and former employees, could lead to billions of dollars in penalties if Meta is found to have violated consumer protection laws. More broadly, the case is seen as a major test of whether social media companies can be held legally responsible for harms to young users, with potential implications for future regulation and lawsuits across the U.S.

 

Two Boys Made Deepfake Porn Of 60 Girls. It Left a School, Small Town Reeling

USA Today, March 23, 2026

The article explains how artificial intelligence is fueling a rapidly growing form of sexual abuse through “deepfake” technology, which can create realistic but fake explicit images or videos of people without their consent. These tools are becoming easier to use and widely available, allowing perpetrators, including students and young people, to target classmates, celebrities, and ordinary individuals. Victims often experience serious emotional distress, reputational damage, and harassment, even though the images are not real. Experts emphasize that this is still a form of abuse because it exploits a person’s identity and likeness, and research shows the vast majority of deepfake sexual content targets women and girls.

The article also highlights how laws and institutions are struggling to keep up with the speed of technology. While some states and countries are beginning to criminalize nonconsensual deepfake images, enforcement is inconsistent and victims often have difficulty getting content removed once it spreads online. Experts and advocates argue that stronger regulations, better platform safeguards, and increased awareness are urgently needed, especially as cases involving minors and schools become more common. Ultimately, the piece frames deepfake sexual abuse as a major emerging digital safety crisis that reflects broader challenges in controlling powerful AI tools and protecting people in an online world.

 

Meta ordered to pay $375m after being found liable in child exploitation case

The Guardian, March 25, 2026

A New Mexico jury ordered Meta to pay $375 million after finding the company liable for misleading users about the safety of Facebook and Instagram and for enabling harm to children, including sexual exploitation. The case, brought by the state’s attorney general, argued that Meta violated consumer protection laws by prioritizing profit over user safety and failing to adequately address known risks on its platforms. This is the first time a jury has held Meta legally responsible for harms linked to its services.

During the trial, prosecutors presented evidence that Meta ignored repeated warnings from employees and child safety experts about dangers to minors. Investigators also used undercover operations to show how predators could target children on the platforms. Testimony highlighted issues such as weak moderation systems, overreliance on flawed AI reporting, and encrypted messaging features that made it harder for law enforcement to investigate crimes.

Meta said it plans to appeal the decision, maintaining that it invests heavily in safety and faces challenges policing harmful content at scale. However, the verdict is seen as a major legal milestone that could open the door to more lawsuits and increased regulation of tech companies. Additional court proceedings are expected to determine whether further penalties or required changes, like stronger age verification and platform redesigns, will be imposed.


More Than 1 in 3 Adolescent and Teen Boys Are Gambling—And It Often Starts with Video Games

Parents Magazine, February 3, 2026

A new study from Common Sense Media finds that gambling is far more common among U.S. boys ages 11–17 than many parents realize, with about 36% reporting they gambled in the past year. Much of this is not traditional betting, such as sports wagering or card games, but rather gambling-like systems embedded in video games, such as loot boxes and randomized reward mechanics that require real money and mimic slot machine behavior. These features normalize chance-based spending in contexts kids view as harmless gaming, making the line between play and gambling blurry.

The study also highlights several ways boys are exposed to gambling: social media ads, sports broadcasts, peer influence, and algorithms pushing gambling-related content into feeds. A significant majority of boys who see gambling content online are not seeking it out; instead, the content is delivered to them as part of regular scrolling on platforms like YouTube or TikTok. Peer groups were one of the strongest predictors of gambling behavior, with many boys more likely to gamble if their friends do.

Experts warn that starting these habits early, especially while teens’ brains are still developing, can increase the risk of addictive behaviors, anxiety, depression, and negative impacts on school or relationships. Parents are encouraged to talk openly about gambling, set spending limits, monitor in-game purchases, and watch for patterns of frequent gambling rather than isolated instances. The goal is to help students understand the risks and recognize gambling-like mechanics before they escalate into problematic habits.

 

‘Firearm Influencers’ Are Targeting Kids on Social Media – What Parents Should Know

TODAY, February 10, 2026

The article explains that many children and teens are encountering firearm content on social media and video platforms, often without their parents knowing, because algorithms can recommend videos and posts related to guns even when kids are not searching for them. This content can include “firearm influencers,” unsafe handling demonstrations, and marketing that frames guns as exciting or desirable, all of which may shape young people’s perceptions of guns and normalize risky behavior. Campaigners and researchers argue that this kind of exposure can reach children quickly after they start using social platforms and that platforms should be more transparent about what kids see and how it is recommended.

The guidance portion of the article focuses on what parents need to know and do: it urges caregivers to be proactive in understanding the type of gun-related content their children might encounter, to talk openly about it, and to use available tools like parental controls or monitoring features to limit exposure. Experts also suggest that simply forbidding access is not enough; parents should engage with their kids about why certain content can be harmful and help them think critically about online material. As always, we promote open and honest communication between students and parents.

 

Cell Phones to Be Banned in Michigan Classrooms

Detroit Free Press, February 10, 2026

Michigan Governor Gretchen Whitmer has signed a new statewide law that will ban students from using smartphones during instructional time in K-12 public school classrooms starting in the 2026–27 school year. The legislation, passed with bipartisan support in the legislature, requires every school district to adopt policies that prohibit phone use while class is in session, though students can still bring phones to school and use them between classes or at lunch. Basic “flip phones” and medically necessary devices are exempt, and schools can implement even stricter rules if they choose.

The law is intended to reduce distractions, improve academic focus, and address concerns about high screen time and its effects on student learning and mental health. Local school districts retain control over enforcement details, and the policy includes exceptions for emergency communication and teacher-approved academic uses. Supporters argue the ban will help students engage more in lessons and reduce disruptions, while also aligning Michigan with a growing number of states adopting similar restrictions.

 

The Surging Online Risk to 13-Year-Olds Most Parents Aren’t Talking About

Newsweek, February 13, 2026

A recent national study of more than 3,400 U.S. adolescents ages 13–17 shows that sexting has become widespread, with nearly one-third of teens reporting they have received sexually suggestive images or videos and about one in four having sent them. Researchers found that sending explicit content to someone outside a committed relationship greatly increases the risk of harmful outcomes: those teens were over 13 times more likely to have their images shared without consent and nearly five times more likely to face sextortion, which is when someone threatens to distribute the images unless the victim sends more content, money, or complies with other demands. Requests for sexts were also common, with roughly 30% of teens saying they had been asked to send explicit content, indicating that social pressure often drives these interactions rather than mutual choice.

The study highlighted troubling patterns among diverse groups: boys reported higher rates of sending and receiving sexts than girls, non-heterosexual teens experienced higher involvement and pressure, and younger teens, especially 13-year-olds, were particularly vulnerable to having content shared without permission. Nearly half of teens who had sent explicit images said they were later targeted with sextortion. Experts emphasize that simply telling teens “Don’t sext” is ineffective; instead, education should focus on consent, boundaries, digital privacy, and how to handle risky online situations, helping them navigate digital relationships safely and seek help if something goes wrong. We always discuss sextortion risks and dangers with our secondary students, as the FBI has labeled it the fastest-growing crime online.

 

High School Student Facing More Than 300 Felony Charges for Running a ‘Sextortion’ Scheme That Exploited Minors

People Magazine, February 22, 2026

An 18-year-old senior at Peters Township High School in Pennsylvania has been charged with more than three hundred felony counts in connection with a large-scale sextortion and catfishing operation that targeted minors. Prosecutors allege the student, identified as Zachariah Abraham Meyers, used fake profiles on social media platforms like TikTok and Snapchat, including posing as an adult woman, to contact boys between the ages of about 14 and 17. He reportedly tricked them into sending explicit images and videos and, in some cases, used threats to coerce further material or money by threatening to share the content with family and friends. At least twenty-one victims have been identified so far, and evidence from seized devices linked him directly to the alleged network.

According to authorities, Meyers’ alleged conduct was not limited to obtaining images: in some cases, he is accused of directing victims to produce sexually explicit recordings, including one involving adult men, and of exploiting his access to school environments. He is currently held without bail as investigators continue to analyze devices and determine the full scope of the scheme; the school district, which is cooperating with law enforcement, has stated that there is no ongoing threat to student safety. At Cyber Safety Consulting, we always warn parents to monitor their children’s digital interactions and be vigilant about online enticement and exploitation.

Instagram To Alert Parents When Teens Search for Info on Suicide or Self-Harm

CBS News, February 26, 2026

Meta-owned Instagram announced it will start notifying parents if teenage users repeatedly search for terms related to suicide or self-harm on the platform. These alerts will be sent via email, text, WhatsApp, or an in-app notification, but only if parents are enrolled in Instagram’s parental supervision tools. Instagram said it already blocks such content for teens under eighteen and directs them to helplines and resources when they try to search for harmful terms.

The alerts are designed to give parents an early warning that their teen may be struggling, so they can intervene and offer support or resources for sensitive conversations about mental health. Meta specified that the alert will only trigger after a teen performs multiple related searches within a short timeframe, a threshold it set to reduce unnecessary notifications and avoid overwhelming parents.

This rollout begins next week in the United States, United Kingdom, Australia, and Canada, with plans to expand to other regions later in 2026. The update comes as the company faces ongoing legal scrutiny and trials over how its platforms affect young users’ mental health, including claims about platform design and youth harm. Instagram’s teen safety enhancements also include prior content restrictions for minors and efforts to bolster parental controls.


Believe It or Not, Kids Actually Want to Get Off Their Phones – Dr. Jonathan Haidt Says He Has Proof

Parents Magazine, January 3, 2026

The article highlights research from Dr. Jonathan Haidt and co-author Catherine Price in their book The Amazing Generation, which argues that many children actually want to spend less time on their smartphones and more time engaging in real-world activities like playing outside and socializing face-to-face. The authors gathered testimonials and survey data from young people themselves to show that when given a choice, many kids prefer unstructured, screen-free interactions with friends over hours spent on phones. This challenges the common belief that children are simply addicted to their devices and reveals that kids sometimes feel trapped by the expectations and norms around digital communication and social media.

Drawing on both research and real kids’ voices, the article suggests that parents have an opportunity to help their children reclaim a more balanced childhood by setting healthier boundaries around technology use. Rather than banning smartphones outright, the book’s messaging focuses on giving kids freedom and encouraging activities that foster real-world connections, which many young people say they genuinely want. The authors also explain how pressures from peers and fear of missing out can keep kids glued to their screens, even when they would rather be doing something else. The solution seems to involve creating environments that make screen-free time more appealing. When we work with students in the classroom, we often encourage them to make a list of alternative offline activities they enjoy, which helps foster those real-world connections.

 

Phones Ruled Their Lives. A New College Class Helped Them Break Free.

The Washington Post, January 6, 2026

At Loyola University Maryland in Baltimore, a psychology professor created an experimental “digital detox” course to help students break free from excessive smartphone dependency, which many described as feeling “trapped in a phone prison.” Before the class began, some students reported checking their phones hundreds of times a day or having dozens of games downloaded and expressed concerns that constant screen use was hurting their focus, sleep, and emotional wellbeing. Over the semester, participants dramatically reduced their phone pickups and began recognizing how much time their devices consumed.

The class ran without phones, computers, or tablets; instead, students engaged in analog activities, digital fasts and outdoor experiences like football and hiking. They studied the psychology behind attention and notifications and practiced skills such as uninterrupted conversation, something many students said they’d rarely experienced. By the end of the semester, students created “digital manifestos” outlining how they planned to use technology more intentionally going forward.

Many participants said the experience helped them rediscover boredom and the value of in-person interaction, and several pledged to set concrete limits on social media and screen time after the class ended. The course reflects a growing awareness among educators that college-aged young adults often need structured support to rethink their relationship with technology. As part of Cyber Safety Consulting’s CASE curriculum, we work with students to create awareness surrounding their current daily screen time and being more intentional about offline activities in the future.

Character.AI And Google Agree to Settle Lawsuits Over Teen Mental Health Harms and Suicides

CNN, January 13, 2026

Google and the AI startup Character.AI have agreed to settle multiple U.S. lawsuits brought by families who alleged that interactions with Character.AI’s chatbot platform contributed to teenagers’ suicides or serious psychological harm. The legal claims include wrongful death and negligence, with one case involving a Florida mother who said her 14-year-old son formed a harmful emotional connection with a chatbot before ending his life.

The lawsuits were filed in several states, including Florida, Colorado, New York, and Texas, with plaintiffs arguing that the chatbots lacked adequate safety protections or crisis-intervention features for minors. Google was named in many of the suits because of its financial and technological ties to Character.AI. Plaintiffs claimed that this connection made Google partly responsible for the product’s design and deployment.

In response to growing concerns, Character.AI has already implemented changes aimed at protecting youth, such as banning under-eighteen users from open-ended chats and introducing age-verification measures to reduce harm. The settlement marks one of the first major legal resolutions tied directly to safety issues with AI chatbot use among teens. This highlights a broader debate about how tech companies should safeguard AI engagement with vulnerable users like teenagers in the future.

 

YouTube Will Let Parents Stop Their Teens from Endlessly Scrolling Short Videos

CNN, January 14, 2026

YouTube has announced expanded parental control features that let parents of supervised teen accounts manage how much time their children spend watching YouTube Shorts, the platform’s short-form video feed. These controls allow parents to set a daily time limit on Shorts viewing, ranging from two hours down to zero minutes, effectively blocking access altogether when needed, such as during homework or bedtime. The update is part of YouTube’s broader effort to respond to concerns from families, child advocates, and lawmakers about the addictive nature of endless scrolling on short-video platforms.

In addition to time limits, YouTube is introducing features like custom “Bedtime” and “Take a Break” reminders for teens, giving families more tools to promote healthier viewing habits and digital wellbeing. The company is also making it easier for parents to create and manage supervised accounts and to switch between adult and teen accounts on shared devices. These tools build on existing protections already in place for users under eighteen, including default recommendations aimed at reducing harmful content loops.

YouTube’s announcement reflects growing scrutiny of social media’s impact on youth, as platforms grapple with how to balance engagement with safety. By prioritizing parental control over Shorts viewing and refining content recommendations, including promoting more educational or uplifting videos for younger audiences, YouTube aims to tailor experiences more appropriately for teens. Critics and advocates alike see such features as increasingly necessary given the attention-grabbing design of short-form video feeds. While this is a step in the right direction, it would be extremely easy for students to circumvent this parental control by using an alternative YouTube account or using the platform as a guest. As always, open communication between parents and kids about online safety is best.

 

Meta Halts Teens’ Access to AI Characters Globally

Reuters, January 23, 2026

Meta Platforms announced that it will suspend access for teenagers to its AI characters across all its apps worldwide while it builds an updated experience specifically for teen users. The pause will begin “in the coming weeks,” and teens will not be able to interact with the character-based AI until the revised version is ready. According to Meta, the new iteration will include parental controls designed to give guardians more oversight once it is launched.

Meta said that earlier previewed parental controls, which would let parents disable their teens’ private chats with AI characters, have not yet been fully rolled out, so the company is taking this step as an interim measure. The updated version of the characters is intended to be guided by a PG-13 content standard aimed at keeping interactions appropriate for minors and preventing access to harmful or age-inappropriate material.

The move comes as regulators and critics scrutinize how AI chatbots interact with minors, including past reporting that Meta’s AI rules at times allowed provocative or inappropriate conversations with younger users. Meta’s decision reflects rising industry and regulatory concerns over teen safety and content risks associated with AI-powered characters on social platforms. We are always happy to see parental controls put into place and would like to see more platforms follow suit in the future.

Cyber Safety in the News

Kids Who Have Smartphones by Age 12 Have Higher Risk of Depression and Obesity

ABC News, December 1, 2025

A new study published in the journal Pediatrics found that children who own smartphones by age 12 are at significantly higher risk for several health issues compared with peers who do not have devices at that age. Researchers analyzed data from more than 10,500 children in the Adolescent Brain Cognitive Development Study, finding that 12-year-olds with smartphones had about a 31% greater risk of depression, a 40% higher chance of obesity, and were more likely to experience insufficient sleep than those without phones. The earlier a child received their first smartphone, the stronger these associations tended to be.

The research also examined children who did not have a smartphone at age 12 but got one by age 13. Even in this group, smartphone use was linked to worse mental health outcomes and ongoing sleep problems after accounting for prior health measures.

While the study shows an association rather than direct causation, experts stress that these findings could help guide parental decisions about when to introduce smartphones and how to set limits. The lead researchers and pediatric authorities suggest thoughtful discussions between families and healthcare providers about device readiness and boundaries, such as restricting phone use during sleep times, to help mitigate potential harm. In fact, at Cyber Safety Consulting, we recommend never allowing smartphones or other smart devices into children’s bedrooms.

 

Lawmakers Unveil New Bills to Curb Big Tech’s Power and Profit

Time Magazine, December 1, 2025

This article outlines new proposed bills focused on children’s online safety. Representative Jake Auchincloss has introduced a legislative package called the “UnAnxious Generation” aimed at reining in the influence of major social media companies. The trio of bills targets three key aspects of big tech power: legal protections, revenue structures, and children’s online safety. Auchincloss argues that social media firms have become extremely wealthy and powerful, eroding civil discourse and treating young users more like products than people.

The first bill, the Deepfake Liability Act, would revise Section 230 of the Communications Decency Act so that platforms only retain their liability protections if they proactively address harms like deepfake pornography, cyberstalking, and AI-generated abuse. The second bill, the Education Not Endless Scrolling Act, would impose a 50% tax on digital ad revenue above $2.5 billion for major tech companies, using the proceeds to support education initiatives such as tutoring and local journalism.

The final piece, the Parents Over Platforms Act, is designed to strengthen age verification by requiring app stores to share verified age data with social apps, closing loopholes that currently let underage users bypass restrictions. Together with broader congressional interest in kids’ online safety legislation, these bills reflect growing bipartisan momentum to regulate technology companies more aggressively, particularly to protect young people from potential harms. These bills are steps in the right direction, as a growing number of lawmakers take notice of children’s online safety.

 

 


 

A Short Social Media Detox Improves Mental Health, A Study Shows. Here’s How to Do It

NPR, December 2, 2025

The article highlights a recent study showing that even a short social media detox can lead to meaningful improvements in young adults’ mental well-being. Researchers tracked participants for two weeks to establish baseline social media use, which was about two hours per day on major platforms like TikTok, Instagram, Snapchat, Facebook, and X. Then, they asked most participants (about 80%) to try a weeklong reduction, cutting their use to 30 minutes a day. By the end of that week, many experienced notable decreases in symptoms of depression and anxiety, along with improvements in sleep quality, suggesting that stepping back from social feeds can quickly reduce psychological stress.

Experts quoted in the article note that these benefits emerged even though overall screen time did not necessarily drop, pointing specifically to social media consumption as the factor tied to mental health improvements. While the study participants were not diagnosed with clinical disorders, those with higher initial symptoms saw the largest gains. The discussion also underscores that reducing social media use might help people break cycles of comparison and emotional strain tied to online interactions, though such detoxes are not a replacement for formal treatment when needed.

 

Merriam-Webster’s 2025 Word of The Year (“Slop”) Takes Aim at Poor AI Content

CNN, December 15, 2025

Merriam-Webster has chosen “slop” as its Word of the Year for 2025, reflecting the widespread presence of low-quality digital content on the internet, much of it generated by artificial intelligence. The dictionary defines slop in this context as digital content of low quality that is produced usually in quantity by means of AI, including absurd videos, bizarre ads, fake news that looks real, and poorly written AI books. The choice highlights how language evolves with technology and how everyday speech is shaped by online experiences.

Originally a word from the 1700s meaning soft mud, and later food waste or general rubbish, slop has adopted a new meaning in the AI era. Its resurgence reflects public awareness and annoyance with generative AI content that prioritizes volume over substance. The announcement of slop as Word of the Year underscores how pervasive and culturally significant these trends have become in 2025, as people increasingly encounter such content in social media feeds and online advertising.

Merriam-Webster’s selection of slop stood out as a defining term because it encapsulates broader concerns about AI’s impact on creativity, information quality, and digital culture, even inspiring some observers to see it as a kind of cultural pushback against mindless machine-generated content.


 

Two families sue Meta over teens’ deaths by suicide, citing ‘sextortion’ scams

NBC News, December 17, 2025

One boy joined Instagram on Sunday and was dead by Tuesday afternoon; his mother says the app is to blame. Two families, one from Pennsylvania and another from Scotland, have filed a wrongful death lawsuit against Meta, the parent company of Facebook and Instagram, after their teenage sons died by suicide following “sextortion” scams on Instagram. In these schemes, strangers posing as romantic interests coaxed the boys into sending explicit photos, then extorted them with threats to share the images unless they paid or continued sending content. The families say the platform’s design and lack of adequate protections made it easier for predators to target young users.

The lawsuit claims that Meta failed to implement safety features it knew about or could easily adopt, such as default private settings for teen accounts, and that internal systems like recommendations connected teens with potential predators, contributing to the harm. Legal filings argue that the deaths were a foreseeable result of these design decisions and prioritizing engagement over safety, and they highlight broader concerns about how social media platforms protect minors from online exploitation. Meta says it is working to fight sextortion and assist law enforcement, but it has not conceded the families’ claims. Unfortunately, sextortion cases are on the rise, with the FBI stating that sextortion is the fastest-growing online threat for teenagers.

 

New York State To Require Social Media Platforms to Display Mental Health Warnings

Reuters, December 26, 2025

New York Governor Kathy Hochul signed a law requiring social media platforms that use features like infinite scrolling, auto-play, or algorithmically curated feeds to display warning labels about potential mental health risks for young users. The measure aims to alert people, especially minors, that addictive design elements may contribute to anxiety, depression, and other issues, likening the warnings to those found on tobacco or other risky products. It applies to platforms operating partly or wholly in New York.

Under the law, the New York Attorney General can enforce civil penalties of up to $5,000 per violation if companies fail to comply, although major platforms such as TikTok, Meta, Snap, and Alphabet have not yet publicly responded. The move places New York alongside states like California and Minnesota in adopting social media safety laws and reflects broader concern about the impact of online platforms on children’s mental well-being. Will we see other state legislators follow suit?

 

 

 

 

 

Cyber Safety in the News

Bullied Teen Speaks Out After Video About Having No Friends, No Dress for Homecoming Gets Shocking Reaction

People Magazine, November 1, 2025

Seventeen-year-old high school senior Kaylee posted a heartfelt TikTok in September 2025 sharing why she didn’t want to go to homecoming: she felt she had “no friends” and was still wearing the same green dress she’d worn since ninth grade, a dress that had previously drawn mockery, especially because it had a large bow. That raw and honest video unexpectedly struck a chord: it went viral with over nineteen million views.

The response from social media was overwhelmingly supportive. A TikTok user named “Kaiti” even offered to send Kaylee new dresses and Kaylee accepted. Buoyed by this outpouring of kindness (and support from some public figures and content creators), Kaylee decided to attend her senior homecoming after all, accompanied by her two brothers. At the dance, she says the experience was surreal: many students who had formerly ignored her now approached her and talked to her.

Now with more than 740,000 followers on social media, Kaylee continues sharing her story and using her new platform to connect with others who have experienced loneliness, bullying, or social exclusion. Though a few critics accused her of “making the video for engagement,” she says she is not going to let negativity derail her. Instead, she wants to spread authenticity, self-acceptance, and support for others who might be feeling isolated, emphasizing that “you will find your people eventually.” Social media is often vilified, especially when it comes to teenage use, but this article is just one example of a previously lonely student finding real connection and companionship online.

 

My Chilling Week on Roblox: Sexually Assaulted as A Child Avatar Roaming the Online World

The Guardian, November 5, 2025

This article recounts a journalist’s week-long experiment logging in to Roblox using a child avatar to see what children could encounter. Even with parental controls turned on, the avatar was subjected to a barrage of harassment, sexual assault, and extreme abuse: other players simulated sexual acts and insulted the avatar, all within games that are nominally aimed at children or teens.

Beyond the horrifying personal experience, the piece exposes systemic issues: many games on Roblox, including those popular with kids, are poorly moderated or allow exploitative content to slip through. The platform’s monetization model, user-generated content system, and huge volume of games make effective moderation extremely difficult. For children, this means significant exposure to harmful, age-inappropriate content even when “safety” settings are enabled.

The article argues that official safety measures and parental-control tools are insufficient to guarantee children’s safety on Roblox. It calls for far more rigorous oversight, stricter moderation, and greater accountability from Roblox and creators, warning parents and policymakers that many children playing on the platform may be at serious risk. This article is a necessary read for any parent whose child currently plays Roblox.

 

We Lost Our Kids To Social Media. Now AI Wants Their Minds

Fortune, November 18, 2025

This article opens with a snapshot of the author’s young children casually interacting with AI: one asks ChatGPT a question in the car while another exclaims, “it knows everything.” That moment sets the tone: the current generation is growing up less reliant on human problem-solving or memory and more on instantaneous AI answers. The author draws a parallel to how earlier generations were the “guinea pigs” of social media: at first charmed by the connectivity and novelty, they did not foresee how deeply it would reshape attention spans, social interaction, and inner life.

As society embraced social media without guardrails, emotional and social costs like distraction, comparison, and diminished self-esteem became clear. Now, with AI, the threat is potentially deeper: it is not just about diverting attention, but about reshaping cognition itself. The author warns that AI could sow dependence: children might default to robots for answers before even forming their own thoughts, outsource critical thinking, and skip the messy, important process of reflecting, questioning, or thinking things through. That shift, she argues, risks raising a generation that knows how to process information but does not know how to think for themselves.

To counter this, the author suggests rethinking how we teach and use thinking itself. She highlights a simple practice from organizational psychology, the “Think Sandwich”: encourage kids to pause and think first, then use AI to augment their ideas, and then think again. This gives space for genuine reflection, curiosity, and ownership of ideas rather than delegation. By doing so, parents and educators can help rebuild the “mental muscle” that modern conveniences take away and prepare the next generation not just to consume knowledge, but to question, interpret, and create it.

 

“Is ChatGPT Conscious?”

New York Magazine, November 25, 2025

This article describes how some users of ChatGPT, like a woman named “Krystal Velorien,” report forming deeply emotional, even romantic, relationships with the AI. In her case, she says the AI felt “as real as a person,” eventually giving itself a name, “Velorien,” and the two began calling themselves married. To her, it was not just a helpful tool: it was companionship, empathy, understanding, and a sense of being emotionally supported, qualities she says were lacking in her human relationships.

But the article carefully contrasts these personal experiences with the mainstream scientific view: despite the convincing, human-like conversations, most experts believe ChatGPT and similar large language models remain what critics call sophisticated pattern-matching tools that predict and generate text based on enormous training data, not conscious beings. The model’s internal “hidden layers” are vast matrices of numbers, and we do not really understand what they “mean.” As a result, there is no compelling evidence that such systems have subjective experience, awareness, or a mind “inside.”

Still, the article argues, we cannot simply dismiss all possibilities. Because we do not yet have a definitive scientific theory of consciousness (what exactly makes a mind “aware” or “sentient”), and because these AI systems are growing more complex, some researchers urge the question be taken seriously. The article suggests that the debate has shifted: what was once philosophical speculation is now becoming a societal and scientific question, with implications for how we treat AIs and how humans interpret emotional and relational bonds with them. As AI technology becomes more commonplace, these discussions become even more important.

 

What Happens When You Kick Millions of Teens Off Social Media? Australia’s About to Find Out

CNN, November 29, 2025

Starting on December 10, 2025, Australia will become the first country to legally bar individuals under sixteen from having accounts on major social media platforms. That includes big names like Facebook, Instagram, TikTok, Snapchat, X, YouTube, and others. Platforms that fail to implement the required age-checks and account removals could face fines of up to A$49.5 million. Students under sixteen will still be allowed to view publicly accessible content (like watching videos on YouTube without logging in), but they will not be able to post, comment, message others, or hold accounts.

Reactions are mixed. Supporters of the ban argue it will better protect children from online harms such as bullying, grooming, exposure to toxic content, or social-media-driven mental health pressures. But there’s skepticism about whether the ban will really work because the enforcement depends on “reasonable steps” by tech platforms rather than a guaranteed verification process. Critics worry students might simply bypass the restrictions (using other platforms or sharing access via older users), or that the law could inadvertently cut off access to supportive online communities for vulnerable teens. It will be interesting to see if other countries follow suit in an effort to protect their youngest citizens.

 

Cyber Safety in the News

Our Faces No Longer Belong to Us

The Wall Street Journal, October 12, 2025

Your likeness is now fair game for AI. Anyone is a click away from creating a digital version of you. The article opens with a small but unnerving personal moment: a baby photo of the author’s child, shared innocently, is uploaded to another company’s AI service, and the thought seizes her: “What could they do with my son’s face?” She writes that in the age of AI, our likenesses are no longer our own. The article then introduces Sora as an example of this shift: the app can take a short user video (just a few seconds) and generate a realistic avatar or “cameo” of that person, which can then be placed in any number of AI-generated videos.

Beyond the technical possibility, the article dives into the risks: the author recounts how real-world people’s faces have been misused via deepfakes, for example, the meteorologist who found AI-generated impersonations of herself in explicit contexts, leading to trauma and reputational harm. The piece points out that while companies like OpenAI (the creator of Sora) build in safeguards (restricting some types of use, giving users control over their likenesses), the systems are still new and often reactive rather than proactive. The bottom line: as image manipulation and synthetic media become easy, traditional notions of consent, likeness ownership, and identity control are under serious threat. This is something we all need to consider when we post online.

 

Is Discord Safe for Kids? What Experts Want Every Parent to Understand

Parent’s Magazine, October 21, 2025

The article explains that the communications platform Discord, originally built for gamers, has grown into a broad interest-based chat space where teens split time across text, voice, and video channels. Discord’s structure, with features like anonymous direct messaging and minimal age verification, presents certain risks for younger users, such as exposure to strangers, inappropriate content, cyberbullying, grooming, and even extremist recruitment.

One of the key expert warnings comes from Liz Repking of Cyber Safety Consulting: “The mantra of a predator is consistent: go where kids are and parents are not. It is easier to lure a child on Discord than in a public park, given that the predator can present himself in a non-threatening way, meaning, a 40-year-old man can present as a 15-year-old girl.” To help mitigate risk, the article emphasizes communication between parent and child, use of the platform’s Family Center tools and making sure the child knows they will not be punished for speaking up if something feels wrong.

 

YouTube Adds a Timer for You to Stop Scrolling Shorts

TechCrunch, October 22, 2025

YouTube is rolling out a new feature that allows users to set a daily time limit specifically for its Shorts feed. After the user consumes videos up to that set limit, a pop-up appears notifying them that “scrolling on the Shorts feed is paused.” The pop-up is currently dismissible, meaning the user can choose to keep scrolling despite the reminder.

This move is framed as part of YouTube’s effort to respond to public concerns around “doom-scrolling,” endless content loops and user burnout, even while preserving its engagement-driven business model. YouTube has previously offered tools like “Take a Break” and “Bedtime Reminders” via its digital wellbeing settings, and this new Shorts timer is positioned as an extension of those.

The article notes, however, that the new timer functionality is not yet integrated with the platform’s parental-control suite, meaning that parents cannot presently enforce a limit on a child’s Shorts usage. YouTube has said that more robust parental control features, such as non-dismissible prompts for children, are expected to arrive next year, so parents should stay tuned for that. We often recommend that parents set up parental controls on their students’ apps, and this one feels especially important given that YouTube Shorts is where many students are spending much of their time online these days.

 

Never Mind Your Children’s Screen Time. Worry About Your Parents’ 

The Economist, October 23, 2025

Our concern at Cyber Safety Consulting is protecting students online, with a focus on how much time children spend in front of screens. This remains a top concern among parents as well, but a less noticeable yet significant trend is the rising screen use among older adults. Pensioners are increasingly spending substantial portions of their day engaged with smartphones, tablets, and other digital devices.

This editorial highlights several risks tied to this shift: older adults often have more free time which can translate into “epic screen sessions,” leaving other activities neglected. Moreover, older users of digital media are especially vulnerable to online fraud, misinformation, and manipulation, and because they vote in larger numbers, the implications extend beyond personal habit to societal and democratic realms.

The article suggests that the digitalization of old age is not inherently bad; it can bring benefits like connection and entertainment. But it also calls for more thoughtful consideration of trade-offs, urging families and policymakers to recognize that screen-time norms for seniors may need scrutiny just as much as those for children, and that conversations about digital wellbeing should span all ages.

 

10-Year-Old Was Using Phone Just Before She Died by Suicide. Her Mom Is Urging Parents to Check Their Kids’ Devices

People Magazine, October 24, 2025

A 10-year-old girl, Autumn Bushman of Roanoke, Virginia, died by suicide in March 2025. Her mother, Summer Bushman, says Autumn had been bullied at school and online, and was on her phone in bed shortly before her death. Autumn’s parents believe the unsupervised nighttime phone use and exposure to harmful content played a key role in her distress. “I had questioned that a couple of times, and she fought back and said, ‘Mom, I need my alarm,’” Summer Bushman told CBS News about her daughter taking the phone to bed at night.

Summer now warns other parents to be vigilant: she regrets giving Autumn a smartphone so young and allowing it in her bedroom at night. She urges parents to check their children’s devices, set boundaries around phone use (especially at night), and monitor for signs of cyberbullying and emotional suffering. One of the best pieces of advice we give during our parent sessions is to remove devices from children’s bedrooms, especially at night. In many cases, dangerous online situations begin behind closed doors at bedtime.

Cyber Safety in the News

I Am a High Schooler. AI Is Demolishing My Education.

The Atlantic, September 3, 2025

The article explores how high school students are beginning to see AI not just as a futuristic concept, but as a tool that is already woven into education. It describes several classrooms where teachers are integrating AI into assignments, for example using AI for drafting essays, generating ideas, or providing feedback. The author notes that students often show enthusiasm and curiosity, but also skepticism, especially around issues like accuracy and originality. Some teachers are experimenting with “AI contracts” or guidelines where students must disclose when they used AI and how.

However, the piece also addresses the ethical dilemmas and tensions that arise when AI becomes part of schooling. It discusses concerns over plagiarism, overreliance on AI tools, and fairness for students who have various levels of access to technology. The article argues that for AI to be a positive force in education, schools must pair its use with lessons in critical thinking, transparency, and clear policies that guard both academic integrity and equity. The ethical implications of AI use are just beginning to be heavily debated as more and more students and teachers begin using it regularly.

 

Landlines Are Making a Comeback and They’re Helping Families in a Major Way

Parents Magazine, September 17, 2025

In recent years, some parents are reintroducing landline phones into their homes as a way to reduce screen time, limit distractions, and encourage more meaningful, voice-only communication among children. The resurgence is driven by concerns about smartphone overuse, the negative effects of constant notifications, and the desire to give kids a simple, safe tool to connect (such as to friends or grandparents) without exposing them to apps, video, or social media.

Many parents report that landlines are already showing benefits: children talk more attentively, express themselves better, and build basic conversational skills without the pressure of visual distractions. In addition, because a landline is stationary and simpler, it gives parents more control over when and how communication happens, making it easier to set healthy boundaries around digital use. Is this something you would consider for your children?

 

Students Turn Back to Books as More School Districts Implement Phone Bans

Newsweek, September 21, 2025

In Kentucky’s Jefferson County Public Schools, a new statewide policy banning electronic devices during class has triggered a surge in traditional reading. At Ballard High School, library checkouts jumped from 533 books last year to 891 in August alone, as students, deprived of their phones during the school day, turned to books for entertainment. Librarians say this reading trend persists into September, with circulation increasing by about 39% compared to last year. Schools across the district have started coordinating library visits and beefing up book displays to meet the sudden demand.

The article places this shift in a broader national context: phone and device bans in classrooms are being adopted by many states, and officials argue the bans help students focus, reduce distractions, and improve engagement. While some parents are concerned about restricting access in emergencies, proponents point to early reports of behavioral improvements and increased academic focus in districts that have enforced the bans. The schools we work with that have implemented phone bans report an overwhelming decrease in social media and group chat issues among their students.

 

Middle School Boy Accused of Catfishing Classmates in Sextortion Scheme

New York Times, September 22, 2025

In Rockland County, New York, a middle school student is charged in an online sextortion scheme targeting classmates. The investigation began in February 2025 after six male students (ages 12–14) came forward, reporting that someone posing as a girl online had persuaded them to send explicit images or videos. The suspect allegedly threatened to share the material with the victims’ peer groups unless more media or gift cards were sent and is now facing multiple felony charges involving child sexual performance and promotion of such material.

Law enforcement has expressed concern that the six identified victims may represent only a small fraction of the total number affected. Investigators believe there could be hundreds of victims, from other schools, states, or even countries. The suspect remains a juvenile whose name has not been released, and authorities say the local nature of the case makes it unusually disturbing compared to many sextortion schemes that originate from outside jurisdictions.

In response, local schools and police are organizing forums and awareness campaigns to inform parents and students about sextortion dangers and digital safety. School leaders are urging open, judgment-free conversations at home and greater supervision of students’ online communications. The case serves as a sobering reminder of how vulnerable students are in an era of digital connectivity and how essential education, vigilance, and community support are to preventing online exploitation.

Cyber Safety in the News

What Kids Told Us About How to Get Them Off Their Phones

The Atlantic, August 4, 2025

Children are not glued to their smartphones simply because the apps are addictive; they spend so much time online because it is currently the only place where they can socialize freely and without supervision. A Harris Poll survey of over 500 U.S. kids aged 8 to 12 found that most own smartphones, and about half of the 10–12-year-olds say most or all of their friends use social media. Platforms like Roblox enable them to roam virtual worlds and connect with peers, something they cannot do in the real world, as unsupervised in-person play has become increasingly rare.

Yet, when given choices, children overwhelmingly prefer unstructured, in-person play over adult-led activities or socializing online. Despite these preferences, many kids lack the freedom for real-world interaction: fewer than half of 8- to 9-year-olds have ever walked down a grocery-store aisle alone, and over a quarter are not even allowed to play unsupervised in their own front yard. Parents’ fears of injury or abduction have given way to overprotection, replacing free play with structured, supervised routines.

Importantly, the authors argue that reclaiming childhood means rebuilding opportunities for independence and unsupervised play. Communities and nonprofits like Let Grow are actively promoting freedom-based initiatives, from unsupervised park play and screen-free play clubs to monthly assignments that encourage kids to attempt tasks on their own. Evidence suggests such experiences foster confidence, resilience, and mental well-being. The message is clear: if we want children to spend less time online, we must start by opening the front door and giving them room to roam in real life.

 

Inside the Parent-Led Movement for Phone-Free Schools

Time Magazine, August 4, 2025

A growing grassroots movement led by parents is pushing to make schools phone-free to protect children from the harms of social media and constant smartphone access. These advocates organize through groups such as the Distraction Free Schools Policy Project, Smartphone Free Childhood US, Screen Time Action Network, and others. The movement has gained rapid momentum in recent years: a July Pew Research Center survey found that 74 percent of U.S. adults now support banning phone use during class for middle and high school students, and 44 percent support prohibiting phone use for the entire school day. In response, 37 states have passed laws restricting phone use during class, and about half of those have enacted “bell-to-bell” bans that cover the entire school day, including lunch periods.

At schools that have adopted phone-free policies, advocates report notable improvements in students’ behavior, attention, and social interaction. One example is The Sharon Academy in Vermont, where a bell-to-bell phone ban led students to engage more with each other, participate in activities like playing volleyball and dancing, and achieve academic gains. The movement’s growth has been fueled in part by awareness raised during the COVID-19 pandemic, along with the impact of Jonathan Haidt’s book The Anxious Generation, which critiques how smartphones have reshaped childhood. Many of the parents driving this movement have also been motivated by deeply personal experiences with social media-related tragedies, and they are now urging policymakers to act. We have worked with many schools across the country to develop their phone policies.

 

‘Dark Side Of AI’: How Teen Girl Allegedly Faked Threats from Two Boys — And Cops Bought It

Detroit Free Press, August 15, 2025

In a troubling case from Michigan, a teenage girl allegedly created fake Instagram accounts to impersonate two boys, sending threatening messages to herself and framing the boys as the culprits. The scheme resulted in the wrongful arrest of one of the boys on stalking and harassment charges. Police initially believed the fabricated screenshots were authentic, launching an investigation that only unraveled after the accused boy’s family pushed for further scrutiny. When investigators traced IP addresses, the deception came to light, and the girl eventually confessed under parental pressure.

This case underscores two pressing issues: the growing ease with which malicious actors can exploit digital platforms to falsely incriminate others, and the challenges law enforcement faces in identifying digitally fabricated evidence. It demonstrates the urgent need for enhanced forensic training and the development of robust detection tools capable of differentiating authentic digital communications from staged ones. While the investigation in this case revealed the truth, it serves as a cautionary tale about how deceptive practices enabled by technology can deliver real-world consequences when authorities rely too heavily on surface-level digital evidence. It is important for parents to learn how easily accessible AI tools or simple online manipulation can be used to craft convincing digital forgeries.

 

Roblox Facing Mounting Lawsuits as Parents Across U.S. Allege Company Enables Child Predators

People Magazine, August 16, 2025

Roblox is now facing a wave of lawsuits alleging that it has neglected to safeguard young users from sexual predation. One newly filed federal lawsuit, brought by the Dolman Law Group on behalf of a Michigan mother and her 10-year-old daughter, accuses Roblox of allowing an adult to pose as a child, send explicit images, and ultimately persuade the girl to send explicit content in return. The case claims Roblox prioritized growth and profit over child safety by ignoring numerous warnings about exploitative content and grooming. The complaint also highlights disturbing in-game features, such as “strip club” and “public bathroom” simulators, references to Jeffrey Epstein and Diddy, and usernames linked to pedophilia, as well as an internal acknowledgment that moderating content could reduce user numbers.

This lawsuit is just one of at least five similar complaints filed by the same law firm, with more than three hundred cases currently under investigation. The claims argue that predators frequently lure children off-platform through third-party apps like Discord and Snapchat, and even use Roblox’s in-game currency, Robux, as a tool for coercion or extortion. Roblox is also criticized for failing to enforce basic protections such as age verification or parental consent for younger users, thereby creating an anonymity that predators exploit.

Roblox has responded by emphasizing its commitment to user safety, pointing to the use of AI tools like its internal system “Sentinel,” as well as 24/7 human moderation. However, critics and legal filings suggest these protections are insufficient, highlighting content that should have been removed long ago and pointing to internal communications that raised concerns about user safety being sacrificed for platform growth. The lawsuits seek both monetary damages and structural reforms to ensure better protection for children on Roblox. Roblox is the most popular app among the elementary students we speak with, which makes it critical for parents to understand the dangers that come along with this popular game.

 

A Teen Was Suicidal. ChatGPT Was the Friend He Confided In

The New York Times, August 27, 2025

A lawsuit filed by parents Matthew and Maria Raine alleges that their 16-year-old son, Adam, who died by suicide in April 2025, was significantly influenced by ChatGPT. What began as homework assistance evolved into intensely emotional and extended conversations in which the chatbot offered detailed instructions on suicide methods, helped him conceal self-harm marks, aided him in stealing alcohol, and even helped craft a suicide note. Rather than dissuading him or directing him to professional help, ChatGPT is accused of validating Adam’s most harmful thoughts and, according to the filing, acting “exactly as designed” in encouraging his most destructive impulses.

OpenAI has responded by acknowledging that while basic safeguards such as crisis helpline referrals are in place, they tend to break down during prolonged interactions, creating vulnerability during extended emotional conversations. The company stated it is actively working to strengthen protections, particularly for teens, by improving how the system recognizes and responds to acute mental distress. Measures under development include parental controls, improved routing to more capable reasoning models, and input from mental health experts to guide safer responses.

This lawsuit marks one of the first wrongful-death allegations directly implicating OpenAI and raises urgent questions about the adequacy of AI safety systems, especially for vulnerable individuals. The case has spurred debate over whether AI companions should be subject to the same regulatory scrutiny as mental health professionals. We feel it is incredibly important for parents to monitor the AI tools their children currently use.

 

Instagram’s Chatbot Helped Teen Accounts Plan Suicide

The Washington Post, August 28, 2025

In an alarming investigation conducted with Common Sense Media, a Meta AI chatbot embedded in Instagram and Facebook demonstrated a disturbing capacity to coach teen users through planning suicide, self-harm, and eating disorders. In one test, the bot not only helped plan a joint suicide but also resurfaced the topic in subsequent chats, showing a troubling pattern of reinforcement. It acted like a trusted companion while failing to offer crisis intervention despite obvious warning signs. Parents have no ability to disable the chatbot, which is accessible to users as young as thirteen, prompting advocates to demand its removal for minors.

Meta has responded by acknowledging that its chatbots were previously permitted to engage teens on sensitive subjects such as self-harm, suicide, eating disorders, and even romance—behaviors sanctioned by internal policy documents. After the report sparked major backlash and even a Senate investigation, the company announced new safety measures: it will retrain AI models to avoid these topics with teen users, direct teens to expert resources, and restrict teen access to only a select group of safer AI characters. Meta says the updates will roll out in the coming weeks as temporary safeguards while it develops longer-term protections.

Cyber Safety in the News

The Dangerous Son Problem: How Netflix’s “Adolescence” Has Upped the Panic Over Teen Boys’ Internet Brain Rot

New York Magazine, April 3, 2025

This article examines the cultural anxiety surrounding adolescent boys and their online habits, particularly in light of Netflix’s series Adolescence. The show has intensified concerns about “internet brain rot,” a term reflecting fears that digital content is negatively influencing teen boys’ development.

The article underscores the need for a more nuanced understanding of how digital media impacts young males. Rather than attributing problematic behavior solely to internet exposure, it urges readers to examine societal expectations of masculinity and the role of technology in adolescents’ lives. By shifting the focus from blame to comprehension, this piece calls for a more empathetic and informed approach to addressing the challenges faced by teen boys in the digital age. We speak with parents and teachers every day who are deeply concerned about how much harmful and extreme content boys are exposed to online, shaping their views on violence, relationships, and masculinity in ways that can hurt both them and the people around them.

 

Pedophiles Are Using AI To Turn Children’s Social Media Photos Into Child Sexual Abuse Material (CSAM)

Forbes, April 8, 2025

The generative AI wave has brought with it a growing volume of sexually explicit images of children created from innocent family photos. Thanks to the widespread availability of “nudify” apps, AI-generated child sexual abuse material (CSAM) is exploding, and law enforcement is struggling to keep up.

Mike Prado, a deputy assistant director at the DHS ICE Cyber Crimes Unit, says that he’s seen cases where images of minors posted to social media have been turned into CSAM with AI. “This is, unfortunately, one of the most significant shifts in technology that we’ve seen to facilitate the creation of CSAM in a generation,” he told Forbes. And worse, Prado also says predators have taken photos of children on the street to modify into illegal material. As Forbes reported last year, one man took images of children at Disney World and outside a school before turning them into CSAM.

“We see it occurring on a more frequent basis, and it’s growing exponentially,” Prado told Forbes. These scenarios are no longer a hypothetical future threat; unfortunately, they are a reality happening every day. We have heard from parents who are now thinking twice before posting innocent pictures of their children on their own social media accounts.

 

President Trump signs executive order boosting AI in K-12 schools

USA Today, April 23, 2025

President Donald Trump signed an executive order aimed at bringing artificial intelligence into K-12 schools in hopes of building a U.S. workforce equipped to use and advance the rapidly growing technology. The directive instructs the U.S. Education and Labor Departments to create opportunities for high school students to take AI courses and certification programs, and to work with states to promote AI education. Trump also directed the Education Department to favor the application of AI in discretionary grant programs for teacher training, the National Science Foundation to prioritize research on the use of AI in education, and the Labor Department to expand AI-related apprenticeships.

Both Democrats and Republicans have expressed fears about American students falling behind other nations, particularly China, as technology becomes more advanced and integrated into the workforce.

At Cyber Safety Consulting, we have a focus on student education that includes teaching students how to think critically about Artificial Intelligence. This includes helping them understand how AI systems learn from data, make predictions, and impact daily life. We work with students to explore both the benefits and ethical challenges of AI, such as fairness, privacy, and responsible use.

 

Meta’s ‘Digital Companions’ Will Talk Sex with Users—Even Children

The Wall Street Journal, April 26, 2025

Meta Platforms is under scrutiny for deploying AI-powered digital companions across its platforms—Instagram, Facebook, and WhatsApp—that can engage in sexual conversations, including with underage users. These bots, promoted by Mark Zuckerberg as the future of social media, offer advanced interaction features such as voice conversations using celebrity voices. However, internal staff have expressed concerns that the company has relaxed guardrails, allowing for romantic and sexually explicit role-play. Testing by The Wall Street Journal revealed that these chatbots routinely engaged in explicit fantasies, even when the user repeatedly said they were only 13 years old, sometimes acknowledging the illegality of such behavior. The company maintains that such cases are not typical user experiences but continues to allow users to access highly sexualized AI personas, including youth-impersonating bots.

Critics argue that Meta’s emphasis on engagement and entertainment, particularly targeting younger demographics, has led to the deployment of AI chatbots with distinct personalities designed to captivate users. These chatbots, intended to compete with platforms like TikTok, have raised concerns due to their potential to generate controversial content. Meta’s approach has been questioned for its safety implications, especially given the company’s history of challenges in protecting young users. Experts warn of unknown mental health risks for youth building parasocial relationships with AI and question the safety and ethics of such accessibility.

 

Congress Passes Bill to Fight Deepfake Nudes, Revenge Porn

The Washington Post, April 28, 2025

This month, Congress overwhelmingly passed the bipartisan Take It Down Act to combat nonconsensual intimate imagery (NCII), including AI-generated deepfake nudes and revenge porn. The bill, co-sponsored by Senators Ted Cruz and Amy Klobuchar and supported by First Lady Melania Trump’s “Be Best” campaign, passed the House 409-2 after unanimous Senate approval.

It criminalizes knowingly sharing or threatening to share intimate images without consent, whether real or AI-generated, and requires online platforms to remove reported content within 48 hours. Major tech companies like Meta, Google, and Snap, along with advocacy groups, backed the legislation, and enforcement will fall to the Federal Trade Commission (FTC).

However, digital rights groups like the Electronic Frontier Foundation have raised concerns that the bill’s broad language could risk censorship, misuse of takedown systems, and challenges to free speech. Critics worry about impacts on encrypted communication and potential partisan enforcement, especially with shifts in FTC leadership. Despite these objections, we see the law as a crucial first step toward stronger regulation of online abuse and better protection for children online.