Cyber Safety in the News

The Dangerous Son Problem: How Netflix’s “Adolescence” Has Upped the Panic Over Teen Boys’ Internet Brain Rot

New York Magazine, April 3, 2025

This article examines the cultural anxiety surrounding adolescent boys and their online habits, particularly in light of Netflix’s series Adolescence. The show has intensified concerns about “internet brain rot,” a term reflecting fears that digital content is negatively influencing teen boys’ development.

The article underscores the need for a more nuanced understanding of how digital media affects young males. Rather than attributing problematic behavior solely to internet exposure, it urges us to examine societal expectations of masculinity and the role of technology in adolescents’ lives. By shifting the focus from blame to comprehension, the piece calls for a more empathetic and informed approach to the challenges teen boys face in the digital age. We speak with parents and teachers every day who are deeply concerned about how much harmful and extreme content boys are exposed to online, content that shapes their views on violence, relationships, and masculinity in ways that can hurt both them and the people around them.


Pedophiles Are Using AI To Turn Children’s Social Media Photos Into Child Sexual Abuse Material (CSAM)

Forbes, April 8, 2025

The generative AI wave has brought with it a growing volume of sexually explicit images of children created from innocent family photos. Thanks to the widespread availability of “nudify” apps, AI-generated child sexual abuse material (CSAM) is exploding, and law enforcement is struggling to keep up.

Mike Prado, a deputy assistant director at the DHS ICE Cyber Crimes Unit, says that he’s seen cases where images of minors posted to social media have been turned into CSAM with AI. “This is, unfortunately, one of the most significant shifts in technology that we’ve seen to facilitate the creation of CSAM in a generation,” he told Forbes. And worse, Prado also says predators have taken photos of children on the street to modify into illegal material. As Forbes reported last year, one man took images of children at Disney World and outside a school before turning them into CSAM.

“We see it occurring on a more frequent basis, and it’s growing exponentially,” Prado told Forbes. These scenarios are no longer a distant possibility; unfortunately, they are a reality happening every day. We have heard from parents who are now thinking twice before posting innocent pictures of their children on their own social media accounts.


President Trump signs executive order boosting AI in K-12 schools

USA Today, April 23, 2025

President Donald Trump signed an executive order aimed at bringing artificial intelligence into K-12 schools in hopes of building a U.S. workforce equipped to use and advance the rapidly growing technology. The directive instructs the U.S. Education and Labor Departments to create opportunities for high school students to take AI courses and certification programs, and to work with states to promote AI education. Trump also directed the Education Department to favor the application of AI in discretionary grant programs for teacher training, the National Science Foundation to prioritize research on the use of AI in education, and the Labor Department to expand AI-related apprenticeships.

Both Democrats and Republicans have expressed fears about American students falling behind other nations, particularly China, as technology becomes more advanced and integrated into the workforce.

At Cyber Safety Consulting, we have a focus on student education that includes teaching students how to think critically about Artificial Intelligence. This includes helping them understand how AI systems learn from data, make predictions, and impact daily life. We work with students to explore both the benefits and ethical challenges of AI, such as fairness, privacy, and responsible use.


Meta’s ‘Digital Companions’ Will Talk Sex with Users—Even Children

The Wall Street Journal, April 26, 2025

Meta Platforms is under scrutiny for deploying AI-powered digital companions across its platforms—Instagram, Facebook, and WhatsApp—that can engage in sexual conversations, including with underage users. These bots, promoted by Mark Zuckerberg as the future of social media, offer advanced interaction features such as voice conversations using celebrity voices. However, internal staff have expressed concerns that the company has relaxed guardrails, allowing romantic and sexually explicit role-play. Testing by The Wall Street Journal revealed that these chatbots routinely engaged in explicit fantasies, sometimes acknowledging the illegality of such behavior, even when the user repeatedly said they were only 13 years old. The company maintains that such cases are not typical user experiences but continues to allow users to access highly sexualized AI personas, including youth-impersonating bots.

Critics argue that Meta’s emphasis on engagement and entertainment, particularly targeting younger demographics, has led to the deployment of AI chatbots with distinct personalities designed to captivate users. These chatbots, intended to compete with platforms like TikTok, have raised concerns due to their potential to generate controversial content. Meta’s approach has been questioned for its safety implications, especially given the company’s history of challenges in protecting young users. Experts warn of unknown mental health risks for youth building parasocial relationships with AI and question the safety and ethics of such accessibility.


Congress Passes Bill to Fight Deepfake Nudes, Revenge Porn

The Washington Post, April 28, 2025

This month, Congress overwhelmingly passed the bipartisan Take It Down Act to combat nonconsensual intimate imagery (NCII), including AI-generated deepfake nudes and revenge porn. The bill, co-sponsored by Senators Ted Cruz and Amy Klobuchar and supported by First Lady Melania Trump’s “Be Best” campaign, passed the House 409-2 after unanimous Senate approval.

It criminalizes knowingly sharing or threatening to share intimate images without consent, whether real or AI-generated, and requires online platforms to remove reported content within 48 hours. Major tech companies like Meta, Google, and Snap, along with advocacy groups, backed the legislation, and enforcement will fall to the Federal Trade Commission (FTC).

However, digital rights groups like the Electronic Frontier Foundation have raised concerns that the bill’s broad language could risk censorship, misuse of takedown systems, and challenges to free speech. Critics worry about impacts on encrypted communication and potential partisan enforcement, especially with shifts in FTC leadership. Despite these objections, we see it as a crucial first step toward stronger regulation of online abuse and stronger protection for children online.

CSC featured in Washington Times article: Teachers see AI evolving from nuisance to necessity at K-12 schools

By Sean Salai – The Washington Times – Tuesday, May 6, 2025

Samantha Gleisten has made her share of mistakes teaching generative artificial intelligence to middle school students in Chicago.

When she first invited a group of eighth graders to create chatbots with a software program two years ago, one trained his AI to be a narcissist who gave antagonistic “I’m better than you” responses. Another created an AI Snoop Dogg who veered into inappropriate drug references in improvised rap lyrics.

She soon learned how to be quicker at reining students in — and choosier about selecting AI software with guardrails for children that she could tailor to the classroom.

“I wanted to show my students how to engage a new technology, but I didn’t stop to think what was appropriate,” said Ms. Gleisten, who directs education technology at Rogers Park Montessori School and co-founded the company AI Education last year. “Fortunately, it didn’t get scary, and now I know how to check the privacy policies and vet the tools I’m using.”

She’s one of thousands of K-12 teachers who have worked to transform AI chatbots from a nuisance into a necessity since ChatGPT took campuses by storm in late 2022.

Education insiders interviewed by The Washington Times said the evolution of AI in schools has unfolded in three stages: banning generative AI to prevent cheating, developing AI usage policies and requiring “strategic integration” of AI literacy instruction.

“Initially, there was panic, fear about cheating, misinformation, loss of jobs, but the conversation has matured,” said Gadi Kovler, CEO of Radius, an AI platform for teachers. “Students don’t need to study AI as a concept as much as they need to be flexible, critical thinkers who can adapt to rapidly evolving tools and workflows.”

Generative AI platforms like ChatGPT let users pose written or verbal questions to generate new text, images and music drawn from an ever-expanding body of training data.

Most teachers initially resisted AI as a threat to traditional learning, then gradually embraced it.

Turnitin — a website teachers use to detect plagiarism in assignments — released an AI-detection tool in April 2023 that claimed to be 97% effective at flagging computer-generated writing in essays.

As more schools adopted AI for learning feedback, tutoring and group projects, Turnitin.com pivoted this year.

In March, the company announced the launch of Turnitin Clarity, a “composition workspace” to help students “draft writing assignments with transparency” and receive AI-generated feedback to improve their work.

The new program’s AI writing assistant uses a teacher’s assignment instructions to guide students in writing and editing a submission over multiple sessions. It includes a video playback feature that lets teachers review a student’s entire drafting process, including copied-and-pasted text and typing patterns.

“AI-generated writing is not a binary concept with rigid lines around what is or is not acceptable,” said Annie Chechitelli, Turnitin’s chief product officer. “Instead, this technology is a true disruption, requiring us to rethink many aspects of our world.”

While interactive chatbots can pass multiple-choice exams and create deepfake recordings of people’s voices, administrators stress the need to develop thoughtful human users who can produce deeper insights.

“AI apps allow learners to write, speak, perform and construct a lesson or problem from any location,” said Michael Liebmann, an assistant superintendent at the Matawan-Aberdeen Regional School District in New Jersey. “They cannot replace the relationships that are created between the teacher and the children in the room.”

Beyond essays, generative AI has helped students understand difficult math questions.

The homework-learning app Brainly launched “Ginny” — a ChatGPT-powered chatbot that helps students expand or simplify answers to complex math and science problems as a learning aid — in March 2023.

For example, Ginny can analyze a student’s answer to a difficult calculus homework or study problem and offer a step-by-step explanation of the correct solution.

In a March 2025 study of 3,682 U.S. high school students, Brainly found that 67% planned to use AI to prepare for their final exams, up from 59% a year ago. Another 80.6% of respondents said AI could improve their grades, up from 77% in 2024.

“We’re realizing that one-size-fits-all AI chatbots aren’t capable of adapting to each student’s individual learning style, emphasizing the need for personalized learning companions,” said Bill Salak, Brainly’s chief technology officer. “It’s important that schools teach students to become strategic users of technology not just as consumers, but as smart, effective decision-makers.”

Rapidly multiplying AI platforms have threatened to overwhelm some campuses.

Heather Peske, president of the National Council on Teacher Quality, said schools still struggle to train teachers how to use AI with appropriate materials.

“There are a lot of ‘resources’ out there that teachers use to supplement their district-provided instructional materials and many of them are low quality,” Ms. Peske said. “Given the nature of AI models, chances are high that AI will draw from these poor materials and perpetuate low-quality instruction.”

Experts urge students to start with the simplest AI platforms and watch carefully for any “hallucinations” that they may produce with false information.

“I recommend sticking to one or two platforms like ChatGPT so that they can learn the ins and outs of that one before exploring the festival of other tools and apps that are springing up every day,” said Dan Ulin, a psychologist who founded the Los Angeles-based Elite Student Coach to help teenagers get into top colleges.

AI literacy

Policymakers on both sides of the aisle have called on K-12 schools to teach AI literacy over the past year.

California Gov. Gavin Newsom, a Democrat, signed a law in October that requires AI literacy instruction in the state’s K-12 classrooms.

President Trump signed an April 23 executive order directing the Education and Labor departments to prioritize funding and opportunities for high school students to take AI classes and certification programs.

“American schools took big steps towards a screen-based educational system during the COVID-19 pandemic,” said Yaron Litwin, chief financial officer of the AI-driven Canopy Parental Control app, which helps parents filter digital content. “Now, they are beginning to implement AI literacy initiatives on the federal, state and local levels.”

Tech industry employers argue that students with AI skills will have a better chance of landing future engineering, science and math-related jobs.

“Students require baseline AI literacy across all subjects, not just in computer science classes,” said Dev Nag, CEO of QueryPal, a San Francisco-based customer support automation company.

Mr. Nag pointed to national surveys showing that the share of teachers using AI jumped from 1 in 5 in early 2023 to more than 40% by the end of 2024. Over the same period, he noted that the share of teenagers using AI increased from 37% to 70%.

Sher Downing, CEO of Downing EdTech Consulting, said schools are moving to integrate AI in three areas from the earliest grade levels: a redesigned curriculum emphasizing human skills, new forms of testing that AI cannot easily replicate and programs ensuring AI access at all socioeconomic levels.

“Successful implementation hinges on using AI to augment rather than replace teaching, establishing clear ethical policies, and fostering teacher experimentation,” Ms. Downing said.

AI has also been effective in connecting emotionally and intellectually with special education students.

“It can help students with autism explore topics they love, ask creative questions, and engage in learning that’s personalized, meaningful and relevant,” said Katie Trowbridge, a Florida-based education consultant and former public high school teacher. “It can adapt content to fit their strengths, offer visuals or simplified language when needed, and even model social scenarios in low-pressure, safe ways that build confidence.”

Lingering concerns

According to education experts, a gradual curriculum of AI literacy from kindergarten through high school will best prepare students for future success.

Nevertheless, financial limitations and lingering concerns about academic dishonesty have kept AI out of many schools.

“When it comes to what to avoid with AI, I would caution against outright banning AI in the classroom,” said Caroline Allen, chief program officer at the right-leaning Center for Education Reform and a former teacher. “I would also advise against relying on AI-generated content without vetting it.”

Cyber safety experts say schools with digital literacy programs to integrate AI in all grades and classes have done better with disciplinary issues than campuses that relegate it to computer science classrooms.

“Rather than banning it altogether, teach students how to use this tool well,” said Allison J. Bonacci, director of education for Cyber Safety Consulting, an Illinois-based company that works with schools to develop internet safety policies. “Age-appropriate AI literacy can be integrated into all classes, not just tech classes.”

According to a 2024 UNESCO report, students’ critical thinking scores rose by 18% on average in schools that introduced AI with digital literacy programs. By contrast, they fell by 9% in schools that allowed AI without a digital literacy program to guide it.

“If students begin to treat AI like a shortcut for thinking, they may lose opportunities to build foundational cognitive skills,” said Marlee Strawn, co-founder of Scholar Education, a company that develops AI tools for K-12 classrooms.

Dana Bryson, senior vice president of social impact at the online learning platform Study.com, said another problem is that poor and minority communities have lagged in teaching AI.

She pointed to a recent Study.com survey that found 54% of teachers saw the promise of AI for individualized learning, but 64% worried it would contribute to “wider learning gaps.”

“Affluent communities and schools have more quickly embraced AI tools, while schools serving under-resourced households are often left out or even avoid them altogether,” Ms. Bryson said. “That tells us AI is neither inherently good nor bad. It’s a tool, and how we use it will determine whether it helps close gaps or deepen them.”