WASHINGTON, D.C. – Generative AI is no longer just a tool for cybercriminals; it is now fueling a new and disturbing form of exploitation among children and teenagers, according to new research from Suzuki Law Offices. A 2025 survey by the child-safety nonprofit Thorn highlights just how widespread the use of these tools has become, with “deepfake nudes” now part of the youth social landscape.

Alarming Statistics
The survey reveals a crisis that has moved from the dark web into everyday youth culture:
- 31% of teens are familiar with deepfake nudes.
- 1 in 8 teens knows someone who has been the victim of a deepfake nude.
- 1 in 10 children admits knowing peers who have created deepfake sexual images.
- 1 in 5 children (ages 9-17) has seen non-consensual sexual images reshared online.
This data points to a dangerous shift: AI-driven exploitation is no longer confined to organized criminal groups but has become a form of peer-to-peer harassment in schools and friendship circles.
The Legal and Psychological Fallout
The use of AI-generated sexual images inflicts a twofold harm: irreversible reputational damage and severe psychological trauma for victims. Legal experts note that the law has not kept pace with the technology. Many jurisdictions lack clear definitions for synthetic child abuse content, leaving victims with little legal recourse while offenders often face no consequences.

“We are now seeing AI exploitation in spaces parents never imagined—classrooms, sports teams, even after-school chats,” said Suzuki Law Offices. “That gap leaves minors vulnerable, parents powerless, and offenders emboldened.”
Ease of Access and Enforcement Challenges
Part of the problem is the easy accessibility of “nudify” and “unclothe” apps, which are often marketed with casual branding and require no technical skill to use. Watchdog organizations such as the Internet Watch Foundation have already documented abuse of these apps, which carry no age restrictions or mandatory reporting obligations.
Law enforcement is also struggling to keep up, overwhelmed by the sheer volume of content and hampered by a lack of legal clarity on how to prosecute cases involving fully synthetic images. This “legal gray zone” often prevents prosecutors from pursuing such cases under traditional child pornography statutes.
The research concludes that the threat of AI-driven exploitation is not a future concern but a present reality that is shaping the risks faced by children today.
Read the full report here.