Frequently asked questions


Dive deeper into deepfakes: the technology behind them, the threat they pose, and what we can do to stop them.

What are deepfakes?

Deepfakes are AI-generated voices, images, or videos created without a person's consent and used to produce sexual imagery, commit fraud, or spread misinformation.

Deepfake technology is rapidly advancing and uses deep learning algorithms to create convincing and deceptive content.

Why do they matter?

Deepfakes can steal your face, your voice, and your identity.

They are often used to create sexually abusive material, commit fraud, and harass individuals.

Between 2022 and 2023, deepfake sexual content increased by over 400%, and deepfake fraud increased by 3000%.

What can be done?

Governments must impose obligations throughout the supply chain to stop the creation and spread of deepfakes.

Legislative change is essential to keep humanity in control of deepfakes.

What can I do?

Sign up to our email list to stay up to date on campaign developments and learn more about deepfakes.

The burden of responsibility for the damage caused by deepfakes must be on the creators and users of the technology, not the general public.

As a member of the public, the best thing you can do is stay informed and express your concern about deepfakes to your local representative.


With your help we can prevent the worst of the damage deepfakes can cause.

Common myths


It's hard to make deepfakes.


Anyone with an internet connection can make a deepfake. All they need is a single photo or a 10-second voice clip.


There will always be software that can detect deepfakes.


Even now, software struggles to detect deepfakes. This will only get worse as technology progresses.


Deepfakes are already illegal.


In most jurisdictions, the creation of deepfakes is not itself illegal, and individuals remain free to make deepfake pornography.


Adding watermarks to deepfakes will solve the problem.


It is effectively impossible to create a watermark that cannot be removed, and AI tools make stripping watermarks easier than ever.


Punishing the person who creates a deepfake is enough.


To tackle deepfakes, the entire supply chain must be held accountable, from the developers who build the models to the deployers and end users.