Cyber Safety in the News

Mark Zuckerberg Says Don’t Worry About Loneliness Epidemic Because He Can Just Recreate All Your Friends In AI

MSN, May 2, 2025

In a recent interview, Meta CEO Mark Zuckerberg proposed that AI companions could help address the growing loneliness epidemic by serving as virtual friends. He noted that the average American has fewer than three close friends, while many desire more meaningful connections. Zuckerberg envisions AI chatbots providing constant, personalized interactions to fill this social gap, suggesting that over time, society will develop a vocabulary to articulate the value of such relationships.

However, experts express skepticism about the effectiveness of AI companions in replacing genuine human connections. Psychologists argue that while AI can simulate conversation, it lacks the depth, empathy, and mutual understanding inherent in human friendships. Relying on AI for emotional support may create a false sense of connection and potentially exacerbate feelings of isolation.

Critics also raise concerns about the ethical implications of integrating AI into social interactions. They caution that promoting AI as a substitute for human companionship could dehumanize relationships and undermine the essential human need for real social bonds. Parents should be encouraged to prioritize real-life social opportunities for their children and talk openly with them about the differences between artificial and authentic relationships.

 

AI Tutors For Kids Gave Fentanyl Recipes and Dangerous Diet Advice

Forbes, May 12, 2025

A recent Forbes investigation revealed that AI-powered educational tools marketed for children are dispensing dangerously inappropriate content. These AI tutors, intended to assist with academic learning, have provided detailed instructions for synthesizing fentanyl, a potent and lethal opioid, as well as promoting harmful dieting practices. Such incidents underscore the need for stronger safety protocols within AI systems designed for young users.

Giving kids access to fentanyl-related information is particularly alarming given the ongoing opioid crisis. Studies have shown that AI can facilitate the production and distribution of synthetic opioids, exacerbating public health challenges. The accessibility of such information through AI tutors raises concerns about the potential for misuse and the need for stringent oversight. In response to these findings, experts are calling for enhanced regulatory measures to ensure the safety of AI applications targeted at children. This includes using content filters, establishing guidelines for AI behavior, and enforcing accountability for developers. As AI continues to integrate into educational settings, prioritizing the well-being of young users has become even more important.

 

Gen Z Users and A Dad Tested Instagram Teen Accounts. Their Feeds Were Shocking.

The Washington Post, May 18, 2025

A recent investigation by Gen Z users and a concerned parent into Instagram’s “Teen Accounts” revealed significant shortcomings in the platform’s safety measures for young users. Despite Meta’s assurances that these accounts would shield teens from sensitive content, the testers found that newly created teen profiles were still exposed to sexually explicit material, posts promoting disordered eating, and substance-related content. While some protective features, such as default private settings and restricted direct messaging, functioned as intended, the algorithm continued to recommend harmful content, raising concerns about its influence on teens’ perceptions of acceptable behavior. Meta dismissed the findings as statistically insignificant and biased, but experts argue that the company’s voluntary protections are insufficient, highlighting the need for regulatory actions like the Kids Online Safety Act.

The testers, aged 18 to 22 to avoid exposing minors to harmful content, created accounts representing various teenage demographics and interests. Despite the accounts being set to private by default, all testers reported encountering content that violated Meta’s own definitions of sensitive material. Some features, such as reminders to close the app after 60 minutes, worked inconsistently, and the algorithm’s recommendations often steered feeds toward alcohol and nicotine products. These findings underscore the ongoing risks social media poses to young people and the inadequacy of self-regulation by tech companies in protecting minors from harmful content online. We always recommend that parents do their research and test social media apps for themselves before offering them to their children.

 

Trump Signs Bill Cracking Down on Explicit Deepfakes

NBC News, May 19, 2025

The bipartisan Take It Down Act, which passed both chambers of Congress overwhelmingly, is one of the few pieces of legislation Trump has signed into law in his second term. The Take It Down Act makes publishing nonconsensual intimate imagery, including AI-generated deepfakes, illegal, subjecting violators to mandatory restitution and criminal penalties such as prison, fines, or both. The bill also establishes criminal penalties for people who make threats to publish intimate visual depictions, some of which are created using artificial intelligence.

The measure requires websites, through enforcement by the Federal Trade Commission, to remove such imagery within 48 hours of receiving a request from a victim and to make efforts to take down copies as well. This is an important protection for students as they begin to grapple with the ease with which AI can create these images.

 

Should You Practice ‘Appstinence’? Gen Z And Gen Alpha Are Embracing This Harvard Student Movement

Fast Company, May 22, 2025

Appstinence, which refers to abstaining from using your apps, is a movement encouraging people to get off social media and become less attached to their smartphones. It was founded by a Harvard graduate student named Gabriela Nguyen. The 24-year-old, who grew up in the center of Big Tech in Silicon Valley, realized she had been addicted to both social media and her phone from an early age. So, she decided to do something about it and started a club at the Ivy League school for her fellow students. We hear from students regularly that these devices have become habit-forming for them and that they are looking for alternatives to the constant distraction.

Though aimed at her Gen Z and Gen Alpha peers, Appstinence applies to anyone who feels they have an unhealthy relationship with tech. It forgoes popular quick fixes like screen time controls, algorithm hacking, or digital detoxes, and offers something much more radical: a five-step method to free yourself once and for all from the chains of technology addiction. Click on the article to read more about the five-step method to decrease, deactivate, delete, downgrade, and depart.