Deepfake sexual imagery and “nudification” harassment are a distribution problem, not a photo problem. Attackers can use any public image to create an abusive artifact, then rely on platform churn and reuploads to keep it circulating. The response that works is evidence discipline, account hardening, and a consistent takedown workflow that does not amplify the content.
| Situation | Do this | Why |
|---|---|---|
| If the content is already posted | Preserve evidence first, then report through platform tools and hosts | Removal requests go faster with URLs, screenshots, and timestamps |
| If you are being threatened or extorted | Do not pay, preserve evidence, and escalate to platform safety teams or law enforcement as needed | Payment often increases future pressure and does not stop distribution |
| If you suspect account compromise | Secure your email, change passwords from a clean device, and sign out unknown sessions | Attackers often steal images through account takeover, not only scraping |
| If you want to reduce future risk | Limit public high-resolution photos and tighten privacy settings | Reduces easy scraping and correlation across platforms |
Safety note: preserve evidence and prioritize personal safety before making changes that could escalate conflict. If you are in immediate danger, contact local emergency services.
Preserve evidence without increasing exposure
Evidence is what makes reporting and escalation work. Preserve it quietly.
- Capture screenshots that include the abusive content, the profile name, the URL, and the timestamp.
- Copy direct URLs for the post, the profile, and any reuploads you find.
- Keep a private timeline of reports you submitted (ticket IDs, emails, responses); a minimal log sketch follows this list.
- Avoid reposting the content to “warn others”. That often increases distribution.
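If it helps to keep that timeline in one place, the sketch below shows one possible evidence log: a plain local CSV that each screenshot, URL, and report response gets appended to. This is a minimal illustration, not a required tool; the file name, fields, and example values are all hypothetical, and the log should live somewhere private (for example an encrypted drive), never on the platforms involved.

```python
# evidence_log.py - minimal local evidence log (hypothetical file and field names)
# Keep this file and the screenshots folder private, e.g. on an encrypted drive.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("evidence_log.csv")  # assumption: stored in a private, backed-up location
FIELDS = ["captured_at_utc", "platform", "url", "screenshot_file", "report_ticket", "status", "notes"]

def log_item(platform, url, screenshot_file="", report_ticket="", status="found", notes=""):
    """Append one record: a post, a profile, a reupload, or a report response."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "captured_at_utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
            "platform": platform,
            "url": url,
            "screenshot_file": screenshot_file,
            "report_ticket": report_ticket,
            "status": status,
            "notes": notes,
        })

if __name__ == "__main__":
    # Example entries (hypothetical URLs): the original post, then the platform report.
    log_item("ExamplePlatform", "https://example.com/post/123", screenshot_file="2024-05-01_post123.png")
    log_item("ExamplePlatform", "https://example.com/post/123", report_ticket="REQ-0042", status="reported")
```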
Stop account-based photo theft (the common hidden cause)
Many deepfake harassment campaigns begin with account compromise: the attacker steals private images or uses your accounts to spread the content. Stabilize your control plane.
- Secure your primary email and enable 2FA.
- Change passwords from a clean device and sign out unknown sessions.
- Remove suspicious connected apps and third-party access.
- If suspicious prompts or sign-ins persist after resets, check device integrity: how to detect spyware.
If you need broader incident structure, start with immediate steps after being hacked and how to check if you have been hacked.
Use a stable takedown workflow (platform first, then search)
Removal is rarely a single report. The pattern is: report where the content is hosted, then reduce its discoverability in search and slow further link sharing.
1) Report on the platform where it is hosted
- Use the platform’s reporting category that most directly matches the harm (non-consensual sexual imagery, impersonation, harassment).
- Report the profile and the specific posts. Use direct URLs, not only screenshots.
- If the content is reuploaded repeatedly, keep reporting with the same evidence packet; consistency often helps (a sketch of one reusable packet format follows below).
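To keep those repeat reports consistent, one option, sketched below under the assumption that you keep the hypothetical CSV log described earlier, is to generate the same plain-text evidence packet from the log each time rather than rewriting it for every report.

```python
# report_packet.py - build a consistent report summary from the evidence log (hypothetical names)
import csv
from pathlib import Path

LOG_PATH = Path("evidence_log.csv")  # same hypothetical log file as in the earlier sketch

def build_packet(platform):
    """Return a plain-text summary of every logged URL for one platform."""
    lines = [f"Non-consensual intimate imagery report - {platform}", ""]
    with LOG_PATH.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["platform"] != platform:
                continue
            lines.append(f"- URL: {row['url']}")
            lines.append(f"  First captured (UTC): {row['captured_at_utc']}")
            if row["report_ticket"]:
                lines.append(f"  Prior ticket: {row['report_ticket']} ({row['status']})")
    lines.append("")
    lines.append("The person depicted did not consent to this content. Please remove it.")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_packet("ExamplePlatform"))
```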
2) If removal is slow, report to the host
Some sites respond to hosting-provider abuse reports faster than they respond to user reports. This works best when you can provide the exact URLs.
3) Reduce search visibility where eligible
Search removals do not remove the source, but they can reduce visibility and slow down re-sharing. Use the dedicated workflows for explicit imagery and personal information where eligible.
Common mistake: searching repeatedly and clicking reuploads. That can train recommendation systems and create new distribution signals. Preserve evidence, then limit re-exposure.
Reduce future scraping and impersonation risk
This is not about hiding. It is about controlling where high-resolution images live and who can contact you directly.
- Set accounts to private where appropriate and restrict who can message, tag, or download content.
- Remove older public photos that create easy training material and identity correlation.
- Keep public contact channels separate from your login email, and keep recovery channels private.
- Run a privacy pass: manage your privacy settings for social media and reduce your digital footprint.
When to escalate
Escalate beyond “platform reports” if any of the following are true:
- Threats mention your home, workplace, or family.
- You are being extorted for money or more images.
- The content involves minors or you suspect child sexual abuse material. Do not investigate yourself. Use official reporting channels immediately.
If harassment is the primary problem, use what to do about online harassment as your incident structure. If content removal is the main problem, see how to remove non-consensual intimate imagery.
Deepfake harassment is designed to create panic and isolation. A stable response looks like the opposite: preserve evidence, harden accounts so attackers cannot steal more material, and work takedowns methodically without amplifying the content. Once you control your inbox and your sessions and have a clean evidence packet, the problem becomes a set of repeatable actions rather than an endless emergency.
