New Legislation Targets AI Deepfakes
Independent Senator David Pocock has formally introduced a private member's bill into the Australian Parliament, seeking to establish robust protections against the misuse of AI-generated deepfakes. The bill, officially titled 'The Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025,' was tabled on Monday, November 24, 2025. It aims to grant Australians explicit legal rights over their own face and voice, safeguarding against their likeness being exploited for scams, disinformation, and other harmful purposes.
Senator Pocock emphasized the urgent need for such legislation, arguing that the government has failed to keep pace with rapid advances in AI technology. He articulated the core principle behind the bill, saying, 'It seems like a very sensible thing for Australians to be able to say, I own my face, this belongs to me, it is part of who I am'.
Key Provisions and Enhanced Powers
The proposed legislation seeks to amend existing frameworks, specifically the Online Safety Act 2021 and the Privacy Act 1988, to prohibit the non-consensual use of digitally altered or artificially generated audio or visual content depicting a person's face or voice.
Key provisions of the 'My Face, My Rights' Bill include:
- Establishment of a complaints system for the non-consensual sharing of deepfake material.
- Strengthening the powers of the eSafety Commissioner to respond to AI-generated harm, including the authority to issue removal notices and formal warnings.
- Providing clear avenues for civil redress through the courts for individuals who are wrongfully depicted or exploited via deepfake material, including civil penalties.
- Enabling victims to sue for damages for emotional harm, removing the requirement to prove financial loss.
- Defining crucial concepts such as 'deepfake material,' 'non-consensual sharing,' and 'subject of deepfake material'.
The bill also includes specific exemptions for journalism, satire, and good-faith law enforcement activities. It is designed to target those who knowingly or recklessly share non-consensual deepfakes, while sparing individuals who share them unknowingly.
Penalties and Broader Context
Under the proposed laws, individuals found to have shared non-consensual deepfakes could face penalties of up to $165,000, while companies that fail to comply with removal notices issued by the eSafety Commissioner could incur fines of up to $825,000.
Senator Pocock highlighted that the bill addresses a broader range of deepfake harms beyond those covered by the existing Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which primarily focuses on sexually explicit content. He stressed the importance of drawing 'a line in the sand' against non-consensual deepfake creation, noting that similar legislative efforts are underway in countries like China, Spain, and Denmark.
5 Comments
Stan Marsh
The eSafety Commissioner getting more power is a double-edged sword; great for protection, but we need robust oversight to prevent censorship or stifling legitimate expression. It's a difficult balance.
Kyle Broflovski
Overreach! This bill will stifle creativity and free speech.
Eric Cartman
While protecting personal likeness is vital, I worry about the potential for misuse of these powers, especially regarding satire and parody. Where's the line?
Stan Marsh
It's good to see action on deepfakes, but focusing heavily on individual penalties might not deter large-scale malicious actors effectively. We need global cooperation too.
Kyle Broflovski
Great move, Senator Pocock! Protecting citizens from AI misuse is paramount.