AI Sexual Image Abuse: What Young People Are Experiencing

Survey Snapshot – November 2025 (Canada-Focused)

In November 2025, we ran an anonymous survey focused on Canadian students and young adults, primarily recruited through university communities.

Over 100 people responded, most aged 18–25, along with some teens and older adults, from Canada, the U.S., and Europe.

This page summarizes the key patterns emerging from the data — not in academic language, but in plain terms that reflect the real experiences people shared.

1. Awareness Is Nearly Universal

95% had heard of these tools

Almost everyone had heard of deepfake or "nudify" tools.

This wasn't niche — it was common knowledge across:

  • Teens
  • University students
  • Men and women
  • People from multiple countries

For young people today, AI sexual editing tools are part of the online landscape.

[Chart: Awareness of Deepfake Tools]

2. Harm Is Already Happening — Even to Teens


Even with a modest sample, multiple respondents reported harm:

  • Several young women said a deepfake had been made of them
  • One 15–17-year-old girl said "yes, it happened to me"
  • Others said "maybe," often meaning rumours or unconfirmed images

For teenagers in particular, the responses were stark:

  • High awareness
  • High fear
  • Clear exposure
  • At least one confirmed victim

This mirrors a pattern advocates have warned about: minors are already being targeted.

[Chart: Personal Experience]

3. Many People Know Someone Affected


A meaningful share of respondents answered "yes" or "maybe" when asked if they personally know someone who was victimized.

The "maybe" group is important — it often reflects:

  • Rumours
  • Partial knowledge
  • Suspected images
  • Social circles where this behaviour is normalized

This suggests a real social spread of harm, not isolated incidents.

[Chart: Do You Know Someone Affected?]

4. Women Report More Fear. Men Report More Access.

A gender pattern appeared clearly:

Women

  • More likely to report fear
  • More likely to answer "maybe" about being targeted
  • More likely to know someone else affected

Men (especially 18–25)

  • More likely to say tools are "very easy" to access
  • More likely to say they know someone targeted
  • More likely to report having used a tool themselves (a small number admitted this)

Even in a small survey, this mirrors what frontline advocates already know:

Access often skews male, and harm skews female.

[Chart: Worry Level by Gender]

[Chart: Ease of Access by Gender]
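
For readers curious how breakdowns like these are computed, here is a minimal pandas sketch on made-up example rows; the column names and values are illustrative placeholders, not the actual survey data.

```python
# Minimal sketch of the crosstabs behind the gender charts.
# The rows below are made-up placeholders, NOT the actual survey data.
import pandas as pd

df = pd.DataFrame({
    "gender":  ["woman", "woman", "man", "man", "woman", "man"],
    "worried": ["yes", "maybe", "no", "yes", "yes", "maybe"],
    "access":  ["not sure", "somewhat easy", "very easy",
                "very easy", "somewhat easy", "very easy"],
})

# Share of each worry level within each gender (each row sums to 1.0).
worry_by_gender = pd.crosstab(df["gender"], df["worried"], normalize="index")
print(worry_by_gender.round(2))

# Same breakdown for perceived ease of access.
access_by_gender = pd.crosstab(df["gender"], df["access"], normalize="index")
print(access_by_gender.round(2))
```

Normalizing by row turns raw counts into within-gender shares, which is how charts like the two above are typically built.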

5. Fear Is Widespread Even Among Those Not Directly Targeted

~75% are worried or maybe worried

Around three-quarters of respondents said they were:

  • Worried, or
  • Maybe worried

This includes people who have not been targeted.

They still fear:

  • Their friends being victimized
  • Images being shared without consent
  • Tools circulating in their peer groups

Fear itself is a form of harm, and the data shows it's already widespread.

[Chart: Level of Worry]

6. Young People Believe These Tools Are Extremely Easy to Use

The most consistent response across age groups:

"Very easy."

Across all demographics, especially young men:

  • Over half said very easy
  • Most of the rest said somewhat easy
  • Only a few said "hard" or "not sure"

The perception is clear:

If someone wants to make a deepfake, they can. Easily.

[Chart: Ease of Access to Tools]

7. A Small Number Admitted to Using These Tools

Even in an anonymous form, a small number of respondents, mostly men aged 18–25, admitted to having used one of these tools themselves.

Given the stigma, the true number is likely higher.

This doesn't just show awareness.

It shows active participation among some young men.

Why This Matters

From this dataset, we can already see:

  • Near-universal awareness
  • Confirmed harm, including to minors
  • Widespread fear
  • Tools perceived as very easy to access

This is exactly the environment where large-scale abuse emerges:

High access → high fear → low support → low accountability.

What DarkAI Is Working Toward

Based on the survey results, DarkAI aims to:

1. Map the infrastructure enabling this abuse

Where these services are hosted, how they hide, and how they spread.
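
As a rough illustration of what that mapping can involve, here is a minimal Python sketch that resolves a domain's IP addresses and reads its TLS certificate issuer, which often reveals the fronting CDN. The domain is a made-up placeholder, not a real service.

```python
# Minimal sketch: resolve a domain and inspect its TLS issuer.
# "example-nudify-service.test" is a fictional placeholder domain.
import socket
import ssl

DOMAIN = "example-nudify-service.test"  # hypothetical, not a real service

def resolve_ips(domain: str) -> list[str]:
    """Return the domain's IP addresses; these point at the host or CDN edge."""
    infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def cert_issuer_org(domain: str) -> str:
    """Return the TLS certificate issuer's organization name.
    CDNs typically terminate TLS, so the issuer is a strong hint
    about who fronts the service."""
    ctx = ssl.create_default_context()
    with socket.create_connection((domain, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=domain) as tls:
            issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
            return issuer.get("organizationName", "unknown")

if __name__ == "__main__":
    print(DOMAIN, "->", resolve_ips(DOMAIN))
    print("TLS issuer:", cert_issuer_org(DOMAIN))
```

From the IP addresses and issuer, the usual next step is a WHOIS lookup to identify the hosting provider and its abuse contact.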

2. Provide tools for victims and allies

Including templates and guided workflows, sketched below, for demanding removal from:

  • CDNs
  • Hosting providers
  • Domain registrars
  • Payment processors
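
To make this concrete, here is a minimal sketch of a takedown-notice generator; the contact address, field names, and wording are hypothetical illustrations, not legal language or any provider's actual process.

```python
# Minimal sketch of a takedown-notice generator.
# All names, addresses, and URLs below are hypothetical placeholders.
from string import Template

NOTICE = Template("""\
To: $abuse_contact
Subject: Takedown request - non-consensual intimate imagery

I am reporting content hosted or served through your infrastructure that
depicts a person without their consent, in violation of your
acceptable-use policy.

Offending URL(s):
$urls

Date discovered: $date

Please remove this content and confirm receipt of this notice.
""")

def build_notice(abuse_contact: str, urls: list[str], date: str) -> str:
    """Fill the template; callers supply the provider-specific details."""
    return NOTICE.substitute(
        abuse_contact=abuse_contact,
        urls="\n".join(urls),
        date=date,
    )

if __name__ == "__main__":
    print(build_notice("abuse@example-cdn.invalid",
                       ["https://example-nudify-service.test/img/123"],
                       "2025-11-30"))
```

A guided workflow would wrap a template like this with lookups for the right abuse contact at each provider in the chain.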

3. Support researchers, journalists, and policymakers

The patterns here align closely with what advocacy groups already fear but have lacked the data to demonstrate.

DarkAI aims to provide that data.