THE SOLUTION

The only effective way to stop deepfakes is for governments to ban them at every stage of production and distribution. Read more about our proposal for banning deepfakes below.

THE STATE OF DEEPFAKES

Deepfakes are AI-generated voices, images, or videos created without consent to produce sexual imagery, commit fraud, or spread misinformation.

The threat of deepfakes continues to grow, which is why action must be taken now.

A GROWING THREAT

Deepfakes are a growing threat to society, and governments must act.

Rapid improvements in AI have made deepfake creation fast, cheap, and easy.

Between 2022 and 2023, deepfake sexual content increased by over 400%, and deepfake fraud increased by 3000%.

No existing laws effectively target or limit the creation and circulation of deepfakes, and current requirements on creators are ineffective.

POLITICAL SUPPORT FOR A BAN

In the UK, a cross-party coalition of politicians has endorsed a statement from Control AI calling for a ban on deepfakes.

PUBLIC SUPPORT FOR A BAN

The public strongly supports a ban on deepfakes, and that support cuts across party lines.

Recent polls show strong support for a deepfake ban in every country surveyed: the UK (86%), France (77%), Italy (74%), and Spain (82%). In the US, 77% of the public supports regulating deepfake technology.

THE SOLUTION

The only effective way to stop deepfakes is for governments to ban them at every stage of production and distribution.

Such a ban must place legal accountability on the companies that provide deepfake technology, the creators of deepfake content, and everyone in between.

ONE

Make the creation and dissemination of deepfakes a crime.

Allow people harmed by deepfakes to sue for damages.

TWO

Hold model developers liable.

They must show that they have applied techniques to prevent their models from being used to create deepfakes. This includes:

1) precluding the model's ability to generate deepfake pornography or commit fraud;

2) showing that such techniques cannot be easily circumvented; and

3) guaranteeing that the datasets used to train their models do not contain illegal material (e.g., child sexual abuse material).

THREE

Hold model providers, service providers, and compute providers liable.

They must show that they have applied techniques to prevent their resources from being used to create deepfakes. This includes:

1) implementing reasonable measures to monitor users;

2) detecting users who are trying to create deepfakes; and

3) restricting access by malicious users.

GLOBAL POLICY LANDSCAPE

Countries are starting to take action on deepfakes, but much more is needed.

CONTACT US

Deepfakes must be banned.

This is not a simple task, but a necessary one. Governments must take bold action to stop this threat.

Get in touch with the Campaign’s team of leading experts in AI policy:

Mark Brakel, Director of Policy, Future of Life Institute

Landon Klein, Director of US Policy, Future of Life Institute

Hamza Chaudhry, US Policy Specialist, Future of Life Institute

Alexandra Tsalidis, Policy Fellow, Future of Life Institute

Fill out this form to contact our campaign team: