Countries are starting to take action on deepfakes, but much more is needed

Deepfakes are AI-generated voices, images, or videos of real people, created without their consent and used to produce sexual imagery, commit fraud, and undermine democracy. The term does not include parody or satire.

Policy Landscape: United Kingdom

The Online Safety Act 2023 places legal responsibility on businesses and individuals operating various online services to ensure the safety of people, particularly children, in the UK. This includes companies that distribute user-generated content to other users or function as search engines. Oversight falls to the communications regulator Ofcom, which will issue guidance on which content to prioritise for prevention and on the systems needed to handle it. Companies must adhere to Ofcom’s codes of practice or devise their own equivalent measures. The legislation also designates certain criminal offences as “priority illegal content”, requiring companies within its scope to implement systems and processes that prevent users from encountering such material.

The Online Safety Act makes it an offence under Section 188 when:

  • Person A “intentionally [shares] a photograph or film which shows, or appears to show, another person (B) in an intimate state”, and

  • B does not consent.

However, A is criminally liable only if:

  • A “does not reasonably believe that B consents” or

  • A shares the photograph or film with the “intention of causing B alarm, distress or humiliation”.

Importantly, in setting out the definition of “photograph or film”, the Act states that these include “an image, whether made or altered by computer graphics or in any other way, which appears to be a photograph or film” (s.187).

Therefore, the Online Safety Act 2023 likely covers the distribution of non-consensual sexual deepfakes: content that has been manipulated to make it appear as though a person is in an intimate state.

However, this creates a nightmare in terms of enforcement. The Act does not address the creation of deepfakes; it criminalizes only the sharing of this type of content, so effective enforcement would require tracking millions of users, often hidden behind anonymous accounts. Furthermore, according to Section 188, a provider of an internet service through which a photograph or film is shared is not to be regarded as a person who shares it. In fact, the Act only creates duties on platform providers to abide by Ofcom’s codes of practice or construct their own, and to take down illegal content upon notice. Importantly, providers of deepfake-generating systems are not covered by the Act.

Crucially, it is not sufficient for the creator of the deepfake to be held criminally liable. An effective solution will require accountability across the entire deepfake supply chain. While there are billions of potential deepfake creators, there are only a few entities that provide access to AI services or train the AI systems that are capable of generating this harmful content. As such, developers of digital platform services or computer programmes capable of altering or otherwise generating deepfakes must also be held to account.

Policy Landscape: United States

At this time, several bills have been proposed to address deepfakes at the federal level. Below is a summary of selected examples and their key provisions. 

Proposed Legislation: The AI Labeling Act of 2023 (S. 2691), proposed by Sens. Schatz and Kennedy and assigned to the Committee on Commerce, Science, and Transportation, which can recommend it for full review by the Senate once it has completed its own review.

Description of Provisions:

  • Each AI system that produces image, video, audio, or multimedia AI-generated content shall include on such content a clear and conspicuous disclosure.

  • The disclosure must include in the metadata an identification that the content is AI-generated, the identity of the tool used to create it, and the date and time the content was created.

  • To the greatest extent possible, the disclosure must also be permanent or difficult to remove.

  • Developers must “implement reasonable procedures to prevent downstream use of such system without the disclosures required”. There is also a duty to include in any licenses a clause prohibiting the removal of the disclosure notice.
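For concreteness, the following is a minimal sketch of what such a metadata disclosure could look like for a generated image. It assumes Python with the Pillow library and PNG text chunks as the carrier; the field names are hypothetical, since S. 2691 does not prescribe a technical schema.

    # Sketch: embedding an AI-generation disclosure in PNG metadata.
    # Field names ("AIGenerated", etc.) are hypothetical; the bill sets no schema.
    from datetime import datetime, timezone

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    image = Image.new("RGB", (64, 64), "white")  # stand-in for a generated output

    disclosure = PngInfo()
    disclosure.add_text("AIGenerated", "true")                 # content is AI-generated
    disclosure.add_text("GeneratorTool", "ExampleModel v1.0")  # identity of the tool
    disclosure.add_text("CreatedAt", datetime.now(timezone.utc).isoformat())

    image.save("disclosed.png", pnginfo=disclosure)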

Proposed Legislation: The AI Disclosure Act of 2023 (H.R. 3831), introduced by Rep. Torres in 2023.

Description of Provisions:

  • Any output generated by AI would need to include: “Disclaimer: this output has been generated by artificial intelligence.”

Proposed Legislation: The DEEPFAKES Accountability Act (H.R. 5586), introduced by Rep. Clarke and referred to the Energy & Commerce Subcommittee on Emergency Management and Technology.

Description of Provisions:

"Advanced technological false personation" records would have to include a contents provenance disclosure, such as a notice that the video was created using artificial intelligence.

Audiovisual content would require no less than one verbal statement to this effect and a written statement at the bottom of the visual.

There would be a criminal penalty for a failure to comply with disclosure requirements where the content is intended to humiliate or harass the person falsely exhibited in a sexual manner.

The criminal penalty would also apply where the failure to comply with disclosure requirements is with the intent to cause violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding, including an election (and such a threat is credible).

Proposed Legislation: The NO FAKES Act of 2023, a draft bill introduced by Sens. Coons, Blackburn, Klobuchar, and Tillis.

Description of Provisions:

  • The draft bill would create a property right over one’s own likeness in relation to digital replicas which are nearly indistinguishable from the actual image, voice, or visual likeness of that individual.

  • The nonconsensual production of a digital replica would be actionable under civil liability. The distribution of an unauthorized digital replica, with knowledge that it is unauthorized, would also create a civil liability claim.

Proposed Legislation: The NO AI FRAUD Act of 2024 (H.R. 6943), introduced by Rep. Salazar along with other House lawmakers.

Description of Provisions:

  • The bill would create a federal civil remedy for victims of “digital forgery.”

  • A “digital forgery” is defined as “any intimate visual depiction of an identifiable individual” created through means including artificial intelligence, regardless of whether the content carries a label.

There are positive elements to many of these proposals; several, for example, create accountability across multiple parts of the value chain. The NO AI FRAUD Act includes a provision holding liable developers who distribute, transmit, or otherwise make available to the public a personalized cloning service. However, bills focusing exclusively on watermarking will have very little effect in practice, given the ease with which labels can be removed, as the sketch below illustrates. Moreover, the harm caused by certain deepfakes, such as those which are sexually explicit, is not mitigated by mere labelling.
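Continuing the earlier sketch, the fragility of metadata-based labels is easy to demonstrate: re-saving the file through Pillow without re-attaching the metadata silently discards the PNG text chunks that carried the disclosure. (This assumes the hypothetical “disclosed.png” produced above.)

    # Sketch: a metadata disclosure does not survive a trivial re-save.
    from PIL import Image

    image = Image.open("disclosed.png")
    print(image.text)  # {'AIGenerated': 'true', 'GeneratorTool': ..., 'CreatedAt': ...}

    image.save("stripped.png")  # no pnginfo passed: the disclosure is dropped

    print(Image.open("stripped.png").text)  # {}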

Policy Landscape: Europe

There are two forthcoming legislative efforts which address deepfakes in Europe: the EU Artificial Intelligence Act (which deals with deepfakes generally) and the Directive Combatting Violence Against Women (which criminalizes nonconsensual sexual deepfakes).

The EU AI Act is set to impose transparency obligations on providers and users of certain AI systems and general-purpose AI models under Article 52. Providers of AI systems generating synthetic audio, image, video, or text content must ensure that the outputs of these systems are marked in a machine-readable format as artificially generated or manipulated. These obligations would not apply where the content forms part of an evidently artistic, creative, satirical, fictional, or analogous work. The Act also grants the European Commission the power to adopt delegated acts concerning the labelling and detection of artificially generated or manipulated content.
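The Act leaves the technical standard for such marking to be specified, for example through the Commission’s delegated acts. Purely as an illustration, a platform-side check for the hypothetical marking convention used in the earlier sketches might look as follows; neither the field name nor the PNG carrier is prescribed by the Act.

    # Sketch: checking for a (hypothetical) machine-readable AI marking.
    from PIL import Image

    def is_marked_ai_generated(path: str) -> bool:
        """Return True if the image carries the hypothetical AI-generation marking."""
        with Image.open(path) as image:
            # Non-PNG formats may not expose text chunks; default to no marking.
            return getattr(image, "text", {}).get("AIGenerated") == "true"

    print(is_marked_ai_generated("disclosed.png"))  # True
    print(is_marked_ai_generated("stripped.png"))   # False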

The proposal for a Directive Combatting Violence Against Women also recently reached political agreement. While the final text for this directive is not yet available, the Council of the European Union has published its mandate, which directly addresses the “non-consensual production, manipulation or altering for instance by image editing, of material that makes it appear as though another person is engaged in sexual activities” (Recital 19). Article 7 makes it an offence to use information and communication technologies to make accessible to the public images, videos, or similar material depicting sexually explicit activities or intimate parts of a person without the consent of those involved. However, this is an offence only if the conduct is likely to cause serious harm. More specifically, Article 7(1)(b) also criminalizes “producing, manipulating or altering and subsequently making accessible to the public” content which makes it appear as though another person is engaged in sexually explicit activities.

Inciting, aiding, and abetting the commission of an act under Article 7 would also be punishable as a criminal offence. In other words, helping someone create a non-consensual deepfake that qualifies under Article 7 would itself constitute a separate offence. While “aiding” is not defined in the Directive, the most logical interpretation of this provision is that providers of deepfake generators would be criminally liable when non-consensual deepfakes made with their tools eventually breach Article 7 by being made accessible to the public.

The Directive also briefly addresses the role of hosting and intermediary platforms. Recital 40 states that Member States should empower “national authorities to issue orders to hosting service providers to remove” or “disable access” to material contravening Article 7.

While both the EU AI Act and the Directive Combatting Violence Against Women go some way toward criminalizing the dissemination of harmful deepfakes and imposing transparency obligations, certain risks remain unaddressed. For example, providers’ duties to monitor whether non-consensual deepfakes are being produced using their models should be clarified, and enforceable obligations beyond codes of conduct should be implemented to prevent the production of harmful deepfakes.

CONTACT US

Deepfakes must be banned.

This is not a simple task, but a necessary one. Governments must take bold action to stop this threat.

Get in touch with the Campaign’s team of leading experts in AI policy:

Mark Brakel, Director of Policy, Future of Life Institute

Landon Klein, Director of US Policy, Future of Life Institute

Hamza Chaudry, US Policy Specialist, Future of Life Institute

Alexandra Tsalidis, Policy Fellow, Future of Life Institute

To contact us regarding policy matters please email policy@futureoflife.org