Cyber Safety in the News

One-Fifth of Australian Teens Still Use TikTok, Snapchat After Social Media Ban

Reuters, March 12, 2026

About two months after Australia implemented its world-first ban on social media for users under 16, new data suggests the policy has reduced, but not eliminated, teen usage. A report from parental control company Qustodio found that more than 20% of Australian teens aged 13–15 were still using platforms like TikTok and Snapchat, even though these apps are required to block underage users. While usage dropped compared to before the ban took effect in December 2025, the findings raise questions about how effective the law’s age-verification systems really are.

The article highlights that enforcement depends on whether platforms and parents successfully restrict access. Teens have continued to find ways around the rules, especially in households where parental controls are not in place. Regulators acknowledged these gaps and said they are actively working with tech companies to improve compliance and identify potential systemic failures. The law itself places responsibility on platforms rather than penalizing teens or families directly. It will be interesting to see whether other countries follow suit.


Study Links Children’s Social Media Use with Anxiety and Depression in Teenage Years

The Guardian, March 22, 2026

This article reports on a major study from Imperial College London that found a clear link between heavy social media use in childhood and a higher risk of anxiety and depression during the teenage years. Researchers followed more than 2,000 students and discovered that kids who spent over three hours a day on social media were significantly more likely to experience mental health problems compared to those who used it for around 30 minutes. The findings suggest that the amount of time spent online plays a key role in long-term wellbeing.

A key factor identified in the study is sleep disruption. Children who used social media heavily were more likely to stay up late, leading to less and poorer-quality sleep, which researchers believe is a major contributor to later anxiety and depression. The effects were especially noticeable among girls, who showed stronger links between high usage and mental health struggles. However, researchers caution that the relationship is complex: the findings do not prove that social media directly causes these conditions, but rather suggest it is one of several interacting influences on young people’s mental health.

The article also highlights ongoing debates about how to respond. Some policymakers are considering stricter rules, such as limiting or banning social media use for younger teens, but experts warn there isn’t enough evidence to support extreme measures yet. Instead, researchers recommend focusing on practical solutions like improving digital literacy, encouraging healthier habits (especially around sleep), and continuing to study how rapidly changing platforms affect young people. We agree, and work with students every day to help them develop healthy digital literacy skills.


1 In 3 Teens ‘Experienced Problematic Use’ Of Meta Platforms: Closing Arguments from Landmark Social Media Trial

Fortune, March 23, 2026

The article covers closing arguments in a landmark New Mexico trial accusing Meta (the parent company of Facebook and Instagram) of misleading the public about the safety of its platforms for young users. Prosecutors argue that Meta knowingly designed its platforms to maximize engagement and profit, even when that meant exposing teens to harmful or addictive content. A key claim highlighted in the case is that about one in three teens experienced “problematic use” of Meta’s platforms, meaning they felt unable to control how much time they spent on them. The state also alleges Meta failed to properly enforce age restrictions and did not fully disclose risks like mental health issues or exploitation to users and families.

Meta’s defense argues that while some users may overuse social media, the company has invested heavily in safety tools and does not consider its platforms “addictive” in a clinical sense. Instead, it uses the term “problematic use” to describe excessive engagement and says it has been transparent about potential risks. The trial, which included weeks of testimony from experts, teachers, and former employees, could lead to billions of dollars in penalties if Meta is found to have violated consumer protection laws. More broadly, the case is seen as a major test of whether social media companies can be held legally responsible for harms to young users, with potential implications for future regulation and lawsuits across the U.S.


Two Boys Made Deepfake Porn Of 60 Girls. It Left a School, Small Town Reeling

USA Today, March 23, 2026

The article explains how artificial intelligence is fueling a rapidly growing form of sexual abuse through “deepfake” technology, which can create realistic but fake explicit images or videos of people without their consent. These tools are becoming easier to use and widely available, allowing perpetrators, including students and young people, to target classmates, celebrities, and ordinary individuals. Victims often experience serious emotional distress, reputational damage, and harassment, even though the images are not real. Experts emphasize that this is still a form of abuse because it exploits a person’s identity and likeness, and research shows the vast majority of deepfake sexual content targets women and girls.

The article also highlights how laws and institutions are struggling to keep up with the speed of technology. While some states and countries are beginning to criminalize nonconsensual deepfake images, enforcement is inconsistent and victims often have difficulty getting content removed once it spreads online. Experts and advocates argue that stronger regulations, better platform safeguards, and increased awareness are urgently needed, especially as cases involving minors and schools become more common. Ultimately, the piece frames deepfake sexual abuse as a major emerging digital safety crisis that reflects broader challenges in controlling powerful AI tools and protecting people in an online world.


Meta ordered to pay $375m after being found liable in child exploitation case

The Guardian, March 25, 2026

A New Mexico jury ordered Meta to pay $375 million after finding the company liable for misleading users about the safety of Facebook and Instagram and for enabling harm to children, including sexual exploitation. The case, brought by the state’s attorney general, argued that Meta violated consumer protection laws by prioritizing profit over user safety and failing to adequately address known risks on its platforms. This is the first time a jury has held Meta legally responsible for harms linked to its services.

During the trial, prosecutors presented evidence that Meta ignored repeated warnings from employees and child safety experts about dangers to minors. Investigators also used undercover operations to show how predators could target children on the platforms. Testimony highlighted issues such as weak moderation systems, overreliance on flawed AI reporting tools, and encrypted messaging features that made it harder for law enforcement to investigate crimes.

Meta said it plans to appeal the decision, maintaining that it invests heavily in safety and faces challenges policing harmful content at scale. However, the verdict is seen as a major legal milestone that could open the door to more lawsuits and increased regulation of tech companies. Additional court proceedings are expected to determine whether further penalties or required changes, like stronger age verification and platform redesigns, will be imposed.