A deepfake is synthetic media generated or altered using machine learning to convincingly imitate a real person's face, voice, or actions.
Deepfakes are used for impersonation, fraud, harassment, and sometimes extortion. They can also be used to make social engineering attempts feel 'proven' when they are not.
Why it matters for account recovery
Deepfakes often show up as part of a larger incident: an attacker wants you to click, pay, share a code, or hand over access. If you treat media as proof rather than as a manipulation surface, you can be rushed into following the attacker's script.
Common failure modes and misconceptions
- Believing 'video proof' without independent verification: A deepfake can be persuasive enough to trigger quick decisions under pressure.
- Responding inside the attacker's channel: A scam holds together as long as you let the attacker choose the verification method.
- Oversharing source material: Public high-quality photos, clips, and voice samples can lower the cost of impersonation.
Safe best practices
- Verify identity through a known channel you control, not the channel that delivered the media.
- Secure key accounts so attackers cannot compound the damage through account takeover.
- Preserve evidence and document URLs and timestamps before reporting or requesting removals.
- Reduce exposure of high-quality voice and video samples when feasible for your risk profile.
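The evidence-preservation step above benefits from being systematic: record the source URL, a UTC timestamp, and a cryptographic hash of each saved file so you can later show the material has not changed since capture. Below is a minimal sketch in Python; the `record_evidence` function, the JSONL log file name, and the field names are illustrative assumptions, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_evidence(media_path, source_url, log_path="evidence_log.jsonl"):
    """Append a timestamped, hashed record of a saved media file.

    Illustrative sketch: the SHA-256 hash lets you later demonstrate
    that the preserved file has not been altered since this entry
    was written.
    """
    data = Path(media_path).read_bytes()
    entry = {
        # Timezone-aware UTC timestamp at the moment of capture.
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "source_url": source_url,
        "file": str(media_path),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    # Append as one JSON object per line so the log is easy to audit.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Run this at the time you first save the media, before reporting or requesting removal, so the timestamp and hash predate any dispute about what the content showed.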
Related guides
- Deepfakes explained: why they pose a serious risk
- Reduce deepfake misuse risk: privacy controls, evidence discipline, and safe response
- Deepfake sexual imagery: preserve evidence, secure accounts, and remove content
- How to stop impersonation
Deepfakes change the feel of scams more than the underlying mechanics. The durable defense is the same: verify through a known channel, secure the control plane, and avoid decisions made under time pressure.
