Personal Content Rights: Protecting Individuals in the Age of Deepfakes and AI Cloning
by Dinis Cruz and ChatGPT Deep Research, 2025/06/15
Executive Summary
The rise of generative AI has enabled anyone to clone voices, faces, and writing styles with startling realism. From fabricated videos of leaders to AI-generated voice scams, this technology is being weaponized for fraud, defamation, and non-consensual pornography on a massive scale. In one 2019 study, 96% of the 14,000 deepfake videos detected online were pornographic and targeted women without their consent. Criminals have used AI to impersonate a company CFO on a video call and trick employees into transferring $25 million. Even democratic processes are at risk: in early 2024, scammers cloned President Biden’s voice in robocalls to suppress voter turnout. These examples underscore the urgent need for stronger protections of individuals’ “personal content” – their likeness, voice, and distinctive style – against unauthorized AI replication.
We propose establishing “Personal Content Rights” as a new legal framework to give individuals firm control over their own digital persona. This would make it illegal to create or distribute AI-generated content that imitates a real person without that person’s explicit authorization. In effect, cloning someone’s face, voice, or persona without consent would be treated as a serious rights violation – akin to identity theft or fraud – with stiff penalties including fines and liability. Crucially, this regime would require clear labeling and traceability of AI-generated media. Every synthetic image, video, audio, or text depicting a person should carry a tamper-proof watermark or metadata tag indicating it is AI-generated. Such provenance information would allow anyone (or any automated system) to verify whether a piece of content is authentic or a clone, making it far easier to trust what we see and hear.
Key proposals include:

(1) Outlawing unauthorized deepfakes – generative AI impersonations of a person’s likeness or voice would be illegal unless the person has given informed consent. This tackles the root of the problem by making the very act of cloning someone’s identity without permission a punishable offense.

(2) Inalienability of persona rights – an individual’s rights to their face, voice, and persona would be personal and non-transferable (beyond specific licensed uses). Just as human organs cannot be sold on the market, your fundamental “identity assets” cannot be signed away wholesale. Even celebrities would only license their likeness for narrow, agreed purposes (e.g. a film role or an advertisement) rather than permanently selling their digital self.

(3) Robust identity verification frameworks – we need to develop much richer “persona and identity graphs” that assign unique digital identities to people and their various personas. This would help distinguish between individuals who share similar names or appearances, ensuring that permissions and rights attach to the correct person. For example, if two people coincidentally share a name or appearance, a granular identity graph would prevent bad actors from exploiting the lesser-known person’s consent to imitate the famous one.

(4) Mandatory watermarking and provenance metadata – all AI-generated media that depicts or mimics real humans must include an indelible watermark or cryptographic signature indicating its origin. This requirement should be backed by strong regulation (similar to GDPR privacy rules) to compel companies to comply or face hefty fines. Watermarking and signed metadata, recorded through an open standard, would enable a content provenance trail (much like a Git or blockchain log) so that any piece of media can be traced back to its source and creation method.

(5) Preservation of legitimate fair uses – the framework must explicitly protect parody, satire, academic, and other transformative uses. Traditional artistic impersonations (like an SNL sketch or caricature) remain legal because they are usually understood not to be the actual person, especially when appropriately disclaimed. The target here is deceptive impersonation: if it “looks like a duck and quacks like a duck,” it is legally treated as that person’s likeness. In other words, if a synthetic video is readily identifiable as a specific individual, it would fall under these protections. Genuine satire or commentary, where no reasonable viewer would think it is the real person speaking, would remain protected by free speech principles – as already covered under existing law.
In summary, Personal Content Rights would establish that each individual owns their own face, voice, and personal likeness, and unauthorized digital cloning of those attributes is a serious violation. This white paper outlines why such rights are necessary, how they can be implemented technologically and legally, and how they can coexist with innovation and free expression. We argue that proactive steps by forward-looking jurisdictions – even smaller countries or regions – can jump-start these protections, much as Europe led the world with data privacy via GDPR. By enforcing transparency and consent in the use of AI-generated human likeness, society can enjoy generative AI’s benefits (e.g. creative entertainment, assistive tech) without sacrificing personal dignity, agency, and truth in the public sphere. The time to enact these safeguards is now, before deepfakes further erode trust in our information ecosystem.
The Case for Personal Content Rights
In the past, laws like the right of publicity have given people (especially celebrities) some protection against having their name or photo used in ads without permission. But those laws were not designed for a world where anyone’s face or voice can be cloned by AI and made to say or do things they never did. Today, generative AI can produce “deepfakes” – hyper-realistic digital imposters – that invade privacy, damage reputations, and enable fraud on a scale never seen before. Victims of non-consensual deepfake pornography, for instance, face devastating harm with little recourse; once a fake explicit video spreads online, it is nearly impossible to remove. Public figures and private citizens alike are vulnerable to having their identity appropriated in misleading ways. Yet, outside of a patchwork of state laws, there is no comprehensive legal regime expressly banning this conduct.
Personal Content Rights aim to fill this gap by recognizing each individual’s sovereignty over their own digital likeness. Just as copyright law grants an author control over copying of their creative works, these rights would grant a person control over copying of their persona. This includes one’s image, voice, unique expressions, and other attributes that define “who you are” in media. Importantly, this is not limited to celebrities. Every person – regardless of fame – should have the right to not be digitally impersonated without consent. The moral justification is clear: using AI to pose as someone else is a form of identity theft and exploitation. It can deceive viewers and listeners into false beliefs (“audio or video evidence” of the person doing X), and it usurps the person’s autonomy to decide how their likeness is presented to the world. Essentially, it treats a human being’s identity as raw material, which is deeply unethical absent permission. As a group of over 200 artists and musicians put it in an open letter, the unauthorized commercial use of anyone’s voice or image in AI tools is “predatory” and “must be stopped”.
Why not rely on existing laws? Current legal tools like defamation, fraud, or copyright have been used in some deepfake cases, but they are incomplete solutions. Defamation only applies if the fake conveys a false and damaging statement of fact, and public figures must also prove actual malice – a high bar – while even private figures must prove fault and demonstrable harm. Fraud laws require showing a specific intent to deceive someone in a transaction, which may not cover many reputational deepfakes or parody cases. Copyright might protect a famous performer’s recorded voice or image, but AI can generate a new likeness that isn’t a direct copy of any specific photo or recording, thus dodging traditional IP infringement. Moreover, copyright only covers the expression, not the identity of the person portrayed. This is why legal scholars and lawmakers are converging on the idea that a new, specific right is needed – essentially an expansion of the right of publicity to cover AI-generated impersonations. The U.S. Congressional Research Service has even suggested that the advent of realistic AI “replicas” of real people may warrant a federal right of publicity law. In other words, we need to explicitly recognize that you own your persona, and others cannot digitally recreate it without permission. That is the essence of Personal Content Rights.
At its core, this is also a matter of human dignity and autonomy. If anyone can make a digital “clone” of you and have it say things or perform acts in your likeness, your agency is undermined. Imagine a future where a celebrity’s face can appear in any advertisement worldwide without their consent – or where political figures are routinely “sampled” in manipulated videos that distort their positions. Without strong personal content rights, people effectively lose control of their own identity in the digital domain. By establishing these rights, we set a clear ethical and legal line: Your face and voice are you. They cannot be used as a generic asset or public-domain clipart. They are personal and any usage requires personal authorization. This principle is powerful and intuitive, and it should become a cornerstone of digital-era law.
Making Unauthorized Deepfakes Illegal
Under the Personal Content Rights framework, it would be unlawful to create, share, or sell AI-generated content that depicts a person without that person’s explicit consent. This creates a strong deterrent against malicious deepfakes. Importantly, the prohibition encompasses both the act of creating the unauthorized clone and the act of distributing or profiting from it. The rationale is that if something is inherently harmful – like forging someone’s identity – it should be banned at its source. We already see movement in this direction. Tennessee’s recently passed “ELVIS Act” (Ensuring Likeness, Voice, and Image Security Act, 2024) is one of the first laws directly targeting AI deepfakes: it prohibits the unauthorized commercial use of an individual’s likeness or voice, including simulations created by AI. Violators can face injunctions and damages, and notably the law extends this right beyond death, allowing estates of deceased individuals to enforce it. On the U.S. federal level, proposals like the NO FAKES Act of 2023 would impose liability for distributing AI deepfakes or voice clones, and the No AI Fraud Act would do the same for creating such fakes. Likewise, the draft EU AI Act takes a strict stance – it includes controls on deepfake creation and substantial penalties for offenders. All this momentum underscores a growing consensus: it should be a serious legal offense to impersonate someone via AI without permission.
One critical aspect of our proposal is that persona rights cannot be sold or waived wholesale. This draws an analogy to laws forbidding the sale of human organs – there are some things that market forces should not commodify. If we allowed individuals (or more likely, corporations holding their contracts) to simply sell exclusive rights to their digital likeness, we could see abuses where wealthy entities effectively “buy” people’s identities. For example, a studio might attempt to purchase a famous actor’s face and voice rights in perpetuity, or a politician might be pressured to sign away their image rights. This would create a dangerous dynamic and potential exploitation. Under Personal Content Rights, even the individual cannot alienate these rights entirely – they are inherent and inalienable. Of course, licensing for specific uses and limited durations is expected (that’s essentially what actors do when they sign film contracts, allowing their filmed performance to be used in that movie). But any license must be bounded and cannot amount to a permanent transfer of one’s persona. Notably, this concept finds support in practice: when reports circulated in 2022 that Bruce Willis had “sold his face” to a deepfake company, his representatives clarified that no such sale had occurred – and indeed “Bruce couldn’t sell anyone any rights, they are his by default”. In other words, an individual’s right to their likeness is by nature theirs and not an object for absolute sale. Our framework would enshrine this principle in law, preventing a market for personal identities.
Exceptions and nuances. Banning unauthorized deepfakes does not mean banning all impersonations or eliminating creativity. Traditional impersonators, parody artists, and satirists are generally protected, and we would preserve those protections. The key distinction lies in whether the audience is meant to believe the impersonation is real. Parody and satire typically exaggerate or clearly signal that they are not the actual person – their goal is commentary or comedy, not deception. For instance, a comedian dressing up as a politician on a sketch show is understood as parody. Likewise, using someone’s style in a transformative artistic work (e.g. an obvious caricature, or writing fan fiction “in the style of” a famous author) is usually permissible as fair use or freedom of expression. Our focus is on misleading impersonations where a reasonable viewer or listener could be fooled into thinking it’s genuinely the person. To formalize this, the law can adopt an “identifiability” test: if AI-generated content is readily identifiable as depicting a particular person (to an average observer), then it falls under that person’s personal rights. In other words, if it looks like a duck and quacks like a duck – it’s treated as that duck. Under this standard, it wouldn’t matter if a deepfake maker tried to claim “oh, I based it on a lookalike actor” or “the voice isn’t a direct sample”; if the end result is clearly recognized as Person X, then it counts as using Person X’s likeness. This is crucial to prevent loopholes. For example, one could imagine a bad actor hiring an unrelated individual who coincidentally looks or sounds like a celebrity, and then using that to train an AI model. The source might be a different person, but the output is indistinguishable from the celebrity – our rule says that’s still an unauthorized use of the celebrity’s persona. Legal precedent supports this approach: courts have held that using sound-alikes or look-alikes to evoke a famous person in commercials can violate their rights, even if no direct recording was used (the classic Midler v. Ford case, for instance, where a singer won a lawsuit after a car ad used a voice impersonator to mimic her style).
To further guard against abuse, the law should forbid outsourcing consent to the wrong party. Only the individual themselves (or their authorized agent/estate) can grant permission for an AI clone. It would be illegal, for instance, to use a lesser-known person with the same name to sign a release and then claim the deepfake was authorized. Again, the identifiability test covers this – if the content is clearly depicting the famous Alice Smith, you can’t justify it by saying you got permission from a different Alice Smith. In practice, enforcing this may require an identity verification process for consent (which ties into the identity graph ideas discussed later).
Finally, we propose a sensible sunset provision for persona rights after death, analogous to how copyrights expire. This could be on the order of a century after the person’s death (for example). The idea is that long after an individual has passed – say 100 years later – the societal interest in using their image (for historical or creative works) might outweigh the personal rights (since no living person is directly affected, beyond possibly distant descendants). Many jurisdictions already grant post-mortem publicity rights for a number of decades; extending it to something like 100 years would ensure that, in the near-term, estates can control deepfakes of recently deceased public figures (preventing, for example, the gross commercialization of a celebrity immediately after their death without estate approval), but in the long run historical figures enter the public domain of personas. This balances innovation and cultural use (we might, for instance, want the freedom to create a documentary with a realistic reconstruction of Abraham Lincoln’s voice, or have AI narrators that sound like Shakespeare) with respect for those still within living memory. The 100-year term is a policy choice up for debate – the key point is that current generations should have their likeness protected, while truly long-deceased individuals could be considered public heritage.
Building Persona and Identity Graphs for Granular Control
Implementing Personal Content Rights in practice will require more than just laws – we need technical infrastructure to manage identity permissions at scale. One crucial innovation is the development of robust persona and identity graphs. These would be comprehensive databases or networks that map unique human identities to their various attributes (faces, voices, names, nicknames, etc.), relationships, and consent permissions. Unlike today’s social media identity systems (which often confuse similar names, or can be tricked by imposters), an identity graph would strive to uniquely identify each person in the network and record information about the status of their persona rights. For example, it could note that “Jane Smith (born on X date, with biometric hashes of her face/voice) is a private individual who has not authorized any AI use of her likeness,” whereas “Jane Q. Smith (a stage actress) has licensed her voice to XYZ Company for a specific game character, and allowed non-commercial academic research use.” The graph could also link known personas or roles of a single individual – e.g. the persona “Dwayne Johnson” is the same as “The Rock,” so any request to use The Rock’s likeness actually pertains to Dwayne Johnson’s rights.
The idea of having “more nodes and edges” in such graphs is to capture the nuance that identity is not flat. People can share names; one person can have multiple identities (pseudonyms, stage names, avatars); and some attributes might overlap between people. To enforce AI cloning permissions accurately, we need to pinpoint which John Smith is being depicted and whether he consented. This is especially important to avoid the scenario discussed earlier, where a bad actor might try to exploit a namesake. A granular identity graph ensures that an authorization is tied to a specific individual node, not just a name label. In practice, this might work by leveraging verified identifiers – for instance, government ID systems, or blockchain-based self-sovereign IDs, or simply platform-verified profiles. Tech companies and standards bodies could collaborate to create a shared protocol for identity verification in media production. Even Adobe’s Content Credentials (C2PA) system hints at this need: technologist Tim Bray noted that just including an author name in content metadata is insufficient if there are multiple people with that name; ideally, the system would include a permanent unique ID for the creator (e.g. a URL or hash) to truly establish who it is. The same logic applies to identities being portrayed by AI.
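To make this concrete, below is a minimal sketch of what such a persona/identity graph could look like as a data structure. The node types, field names, and use of UUIDs are illustrative assumptions for discussion, not an existing schema or standard:

```python
# Minimal sketch of a persona/identity graph: uniquely identified person nodes,
# linked persona nodes, and explicit, bounded consent grants. All class and
# field names here are illustrative assumptions, not an existing standard.
from dataclasses import dataclass
from typing import Optional
import uuid

@dataclass
class PersonNode:
    """A uniquely identified human being (not just a name label)."""
    person_id: str                    # permanent unique ID (e.g. a UUID or DID)
    legal_name: str
    face_hash: Optional[str] = None   # opt-in biometric reference, hashed
    voice_hash: Optional[str] = None

@dataclass
class PersonaNode:
    """A public-facing persona (stage name, avatar) linked to one person."""
    persona_id: str
    label: str                        # e.g. "The Rock"
    owner_person_id: str              # edge back to the real person

@dataclass
class ConsentGrant:
    """An explicit, bounded licence attached to a specific person or persona."""
    subject_id: str                   # the person_id or persona_id being cloned
    licensee: str                     # who may generate content
    scope: str                        # e.g. "voice, fan greetings, non-commercial"
    expires: str                      # licences are time-bounded, never perpetual

class IdentityGraph:
    def __init__(self) -> None:
        self.persons: dict[str, PersonNode] = {}
        self.personas: dict[str, PersonaNode] = {}
        self.grants: list[ConsentGrant] = []

    def add_person(self, name: str, **biometrics: str) -> PersonNode:
        node = PersonNode(person_id=str(uuid.uuid4()), legal_name=name, **biometrics)
        self.persons[node.person_id] = node
        return node

    def add_persona(self, label: str, owner: PersonNode) -> PersonaNode:
        node = PersonaNode(persona_id=str(uuid.uuid4()), label=label,
                           owner_person_id=owner.person_id)
        self.personas[node.persona_id] = node
        return node

    def resolve(self, persona_id: str) -> PersonNode:
        """Any request against a persona resolves to the real person's rights."""
        return self.persons[self.personas[persona_id].owner_person_id]

# Usage: a stage persona always resolves back to the underlying rights holder.
graph = IdentityGraph()
dwayne = graph.add_person("Dwayne Johnson")
the_rock = graph.add_persona("The Rock", owner=dwayne)
assert graph.resolve(the_rock.persona_id) is dwayne
```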
How would a persona graph work in practice? Imagine an AI video generator that wants to let users create a greeting message in the voice of a famous actor. Under a Personal Content Rights regime, that service would be obligated to check against the identity graph whether the actor has authorized such use. The identity graph might be queried via an API: you input the unique ID or a biometric signature of the target identity, and the graph returns a response like “Not permitted” or “Permitted under conditions X, Y”. If not permitted, the tool must refuse to generate that content (or apply a strict watermark and legal disclaimer if it falls under an allowed parody/transformative use). If permitted (say the actor licensed her voice for fan messages), the output might include an embedded license token from the graph confirming it’s an authorized use. This way, each persona’s preferences and rights are accounted for. Of course, building such a comprehensive identity infrastructure is challenging – it raises privacy and security considerations of its own. But it’s not unprecedented; we already maintain global databases for domain names, for example, and certificate authorities for websites. An identity graph could be decentralized and open, perhaps managed by non-profit coalitions to ensure trust.
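As an illustration of that kind of consent query, the following sketch shows the check a generation service might run before synthesizing a protected identity. The grant record format, field names, and return values are hypothetical; no such public API exists today:

```python
# Hedged sketch of the consent check an AI generation service might run before
# producing content in a protected person's voice or likeness. The grant record
# format, field names, and return values are hypothetical.
from datetime import date

def check_consent(grants: list[dict], subject_id: str,
                  licensee: str, intended_use: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed synthetic use of an identity."""
    for grant in grants:
        if (grant["subject_id"] == subject_id
                and grant["licensee"] == licensee
                and intended_use in grant["scope"]
                and date.fromisoformat(grant["expires"]) >= date.today()):
            return True, "Permitted under conditions: " + grant["scope"]
    return False, "Not permitted: no consent on record for this use"

# Example: an actress has licensed her voice for fan greetings until end of 2026.
grants = [{
    "subject_id": "person:jane-q-smith",   # unique ID from the identity graph
    "licensee": "greetings-app.example",
    "scope": "voice synthesis, fan greetings, non-commercial",
    "expires": "2026-12-31",
}]
print(check_consent(grants, "person:jane-q-smith",
                    "greetings-app.example", "fan greetings"))
# (True, 'Permitted under conditions: voice synthesis, fan greetings, non-commercial')
print(check_consent(grants, "person:jane-q-smith",
                    "ad-network.example", "commercial advert"))
# (False, 'Not permitted: no consent on record for this use')
```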
Notably, such graphs must allow one person to hold multiple personas while keeping them distinct. For example, a popular YouTuber might have an avatar or character they use (a virtual persona with a certain look and voice). That persona might have its own following and even separate licensing (maybe the YouTuber allows the character to be voiced by others for fan projects, but not their real voice). The identity graph can link the real person and their character, but also treat them as separate entities for consent purposes. It can also accommodate collective personas (imagine rights around a fictional character’s appearance that an actor portrayed – though that gets into copyright territory). The key is flexibility and specificity: who exactly is being cloned, and did that entity consent?
By having this infrastructure, we also solve the issue of differentiating lookalikes and soundalikes. If two unrelated people happen to resemble each other closely, the graph treats them as distinct nodes. A content creator attempting to use Person B’s image to simulate Person A would still end up flagging Person A in the identifiability check, because the graph can store reference biometric data (with appropriate privacy safeguards) to recognize that the likeness matches Person A. Modern face recognition and voice matching, if used ethically, could assist here: for example, an AI system generating a video could internally cross-check, “The output I’m producing has a 98% similarity to Person A’s known face; therefore I must treat it as Person A’s likeness and require Person A’s permission.” This might sound complex, but it’s analogous to how content-ID systems detect copyrighted music or video automatically. Instead of copyrighted material, they’d be detecting identity matches. Indeed, companies like Clearview AI (controversially) scraped billions of online photos to create a face recognition database; such technology can identify faces with high accuracy. While we absolutely must handle this carefully to avoid privacy abuses (no one wants ubiquitous surveillance of faces), a controlled system where individuals opt in their data to an identity graph for the specific purpose of protecting their likeness could be very effective. For instance, an actor might register high-quality voice samples in a protected database so that any future audio creation tool can ping the database: “is this voice matching someone protected?” and get a yes/no. If yes (match found), the usage would be blocked or flagged for review unless consent is on record.
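The following sketch illustrates that kind of identifiability check, assuming a registry of reference embeddings that protected individuals have opted into. The cosine-similarity measure and the 0.92 threshold are placeholder assumptions; a real system would rely on dedicated face or voice embedding models and much stronger privacy controls:

```python
# Illustrative identifiability check: compare the embedding of generated output
# against reference embeddings that protected individuals have opted into a
# registry. The similarity measure and 0.92 threshold are placeholder
# assumptions; real systems would use dedicated face/voice embedding models.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_protected_identity(output_embedding: list[float],
                             registry: dict[str, list[float]],
                             threshold: float = 0.92):
    """Return (person_id, score) if the output resembles a registered identity
    closely enough to be treated as that person's likeness, else (None, score)."""
    best_id, best_score = None, 0.0
    for person_id, reference in registry.items():
        score = cosine_similarity(output_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= threshold:
        return best_id, best_score   # treat as this person's likeness: consent needed
    return None, best_score          # no protected match above threshold

# A generator would run this on its own output and, on a match, require a
# consent record, refuse generation, or watermark and flag the result.
```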
It’s worth noting that today much of the burden of policing deepfakes falls on individuals – you’d only know if your likeness was abused after the fact (if you stumble upon it or suffer harm and investigate). A well-implemented identity graph flips this dynamic: it proactively guards your persona by integrating with content creation pipelines and content platforms. Over time, this could become as standard as age verification or copyright checks in media systems. And just like age checks and content filters are imperfect but useful, an identity graph won’t catch everything but would significantly raise the barrier for abuse. It creates a structured way for permission to be managed in a world of billions of potential digital “faces.”
In summary, persona/identity graphs are the technological counterpart to Personal Content Rights. They ensure that fine-grained information about “who is who” and “who allows what” is available to enforce the rules. They would help prevent cases of mistaken or fraudulent consent and would make it easier to isolate violations (by clearly linking a deepfake back to the identity it’s impersonating). Developing such a graph system will require collaboration across industry, government, and perhaps neutral organizations – a topic we return to under enforcement and governance. But the sooner we lay the groundwork, the more scalable and automated our deepfake defenses can become.
Technological Solutions: Watermarking and Provenance
Laws alone won’t solve the deepfake problem; we need technical measures baked into our media ecosystem to identify and label AI-generated content. One cornerstone of our proposal is mandating watermarks and provenance metadata for any AI-generated likeness of a person. In practice, this means if an image, video, or audio has been created or altered by AI to depict someone, it should carry an unambiguous indicator of that fact. This could be a digital watermark hidden in the pixels or audio signal, and/or attached metadata (like a cryptographic signature) that can be verified. The goal is to make detection and verification of deepfakes straightforward, ideally instantaneous.
Some major tech companies and coalitions have already begun work on this problem. For example, the Coalition for Content Provenance and Authenticity (C2PA), which includes companies like Adobe, Microsoft and Intel, has developed a standard for attaching tamper-evident metadata to photos and videos. Using cryptographic public-key infrastructure (PKI), it can record who created or edited a piece of media and what was changed. This is essentially a “nutrition label” for digital content that can travel with the file. A verification tool can then inspect an image or video and tell you if it has a valid provenance trail and whether it’s been manipulated since creation. In Adobe’s implementation (Content Credentials), a creator can opt to include identifiers and actions in the asset’s metadata – for instance, “This image was generated by DALL-E on June 1, 2025 by user @Alice, using prompt X” along with a signature from Adobe’s servers. This is promising, but currently such systems are voluntary and can be stripped away (many social platforms don’t preserve metadata on upload). Our proposal is to mandate these kinds of content credentials for AI-synthesized human content, and ensure platforms honor them. If a platform were to deliberately remove or ignore the watermarking, it could face liability. In fact, emerging laws are heading this way: the EU AI Act is set to require that AI-generated content be clearly disclosed by the creators, and some U.S. states (like California and Texas) already require political deepfake ads to carry disclosures. Louisiana has even proposed that any deepfake of a political candidate must be clearly labeled as fake. We need to extend that principle broadly – not just for political media, but for all impersonations.
From a technology perspective, there are two complementary approaches: visible disclosure and hidden watermarking. Visible disclosure could be as simple as on-screen text (“Synthetic media” or a distinct logo in a corner of a video, an audible message in audio, etc.). Hidden watermarking involves embedding a signal in the content that isn’t obvious to a human but can be detected by software (for example, a specific pattern of pixels or audio frequencies that an algorithm can look for). The advantage of hidden watermarks is that they don’t disrupt the viewer’s experience but can be universally applied by AI generation tools. For instance, an AI voice generator might always introduce an inaudible ultrasonic pattern in the output. A verification app on your phone or browser extension could then scan any audio you play and alert if that pattern is present, i.e. “this speech is AI-synthesized.” OpenAI has researched such watermarks for GPT text outputs as well – inserting statistical quirks in word choices that are imperceptible in reading but detectable by a tool. The catch is that watermarks can sometimes be removed or defeated by adversaries (e.g. by re-encoding a video or adding noise to audio). Therefore, it helps to have legal deterrents too: making it illegal to strip or alter the watermark (similar to how removing DRM or copyright notices can be illegal). We’d treat watermark removal as akin to serial-number defacement on a product.
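As a toy illustration of the embed/detect principle (not a production scheme), the sketch below hides a fixed bit pattern in the least significant bits of audio samples. Real watermarks use spread-spectrum, frequency-domain, or statistical techniques precisely because naive marks like this are easy to destroy:

```python
# Toy illustration of a hidden watermark: a fixed bit pattern embedded in the
# least significant bits of 16-bit PCM audio samples. Real schemes are far more
# robust; this only demonstrates the embed/detect principle.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical "AI-generated" tag bits

def embed(samples: list[int]) -> list[int]:
    """Overwrite the LSB of each sample with the repeating watermark pattern."""
    return [(s & ~1) | WATERMARK[i % len(WATERMARK)]
            for i, s in enumerate(samples)]

def detect(samples: list[int]) -> bool:
    """Check whether the LSBs match the expected pattern closely enough."""
    bits = [s & 1 for s in samples]
    expected = [WATERMARK[i % len(WATERMARK)] for i in range(len(bits))]
    agreement = sum(b == e for b, e in zip(bits, expected)) / max(len(bits), 1)
    return agreement > 0.95   # tolerate some noise or re-encoding loss

audio = [1200, -340, 887, 15, -9021, 4410, 77, -5]   # stand-in PCM samples
print(detect(embed(audio)))   # True: watermark present
print(detect(audio))          # almost certainly False for natural audio
```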
In addition, provenance metadata can provide a more detailed chain-of-custody. Consider a system where every edit or generation step in a piece of media is logged. If I generate an AI image of Person X, that fact (with my identity, timestamp, tool used) is signed and attached. If someone else then crops that image or inserts it into a video, that step is also recorded. The result is a graph of edits that can be traced. This is analogous to version control in software (like Git), where you can see all commits and authors, or to blockchain transactions where each step is linked. In fact, one can imagine using blockchain or distributed ledger technology to store hashes of content at various stages to prove authenticity or synthetic origin. The ideal end-state is: whenever you encounter a piece of media purporting to show a real person, you (or your device) can query its provenance. If it says “captured by John’s iPhone camera on this date, unedited” and is signed by the device, you can have high confidence it’s real. If it says “created by AI tool Y on this date by user Z, using reference image Q,” you know it’s synthetic. And if it has no provenance data or broken signatures, that’s a red flag that it might have been tampered with. This doesn’t solve everything (e.g. a skilled forger might try to fake metadata too), but it makes authenticity verification much more robust at scale.
One concern raised about systems like C2PA is that they are currently quite heavy and complex. Some experts have critiqued C2PA as “overengineered”, with very ambitious goals, whereas its core function of signing content with PKI is relatively straightforward. There’s worry that the complexity could slow adoption or make it too dependent on certain companies (indeed, most images with C2PA info today are produced by Adobe’s own tools). To address this, our approach advocates for open-source and simplified solutions wherever possible. Perhaps a leaner standard, or an open reference implementation, can achieve the main goal (trusted provenance) without requiring every creator to use Adobe software. For example, a simple protocol could let any camera or AI app generate a signed JSON manifest of what it did, and attach it to the file. The key is ensuring interoperability and broad adoption. We likely need an ecosystem of free and open tools for applying and checking these authenticity tags – akin to how anyone can use SSL/TLS for website security thanks to open libraries and certificate authorities. Governments and international bodies might facilitate this by providing root certification services for content credentials or by endorsing a standard format. The end goal is a ubiquitous “truth layer” for digital media. Just as we have a transmission protocol for the internet (TCP/IP) that everyone uses, we could have a provenance protocol that all content publishers adhere to.
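A minimal version of such a signed manifest, in the spirit of (but far simpler than) C2PA, might look like the sketch below. An HMAC with a shared key stands in for the public-key signatures a real system would use, and all field names are assumptions; the parent hash field gestures at the Git-like edit chain described above:

```python
# Minimal sketch of a signed provenance manifest. HMAC with a shared key stands
# in for the public-key signatures a real system would use; all field names are
# assumptions, not the C2PA schema.
import hashlib, hmac, json

SIGNING_KEY = b"key-held-by-the-generating-tool"   # stand-in for a real PKI key

def content_hash(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def make_manifest(media_bytes: bytes, tool: str, action: str,
                  subject: str, parent_sha256: str | None = None) -> dict:
    claim = {
        "content_sha256": content_hash(media_bytes),
        "parent_sha256": parent_sha256,   # prior version's hash: a Git-like edit chain
        "tool": tool,                     # e.g. "voice-clone-app/2.1"
        "action": action,                 # e.g. "ai_generated" or "camera_capture"
        "depicted_subject": subject,      # identity-graph ID, or "none"
        "timestamp": "2025-06-15T12:00:00Z",
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == content_hash(media_bytes))

video = b"synthetic video bytes"
m = make_manifest(video, "voice-clone-app/2.1", "ai_generated", "person:jane-q-smith")
print(verify_manifest(video, m))           # True: intact provenance trail
print(verify_manifest(video + b"x", m))    # False: content altered after signing
```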
Crucially, the watermarking and metadata approach must be backed by enforcement. It should not be optional. Companies offering AI generation services should be required to build these features in. If a startup releases an app that lets users make talking deepfake videos, the law would mandate that the app automatically watermarks each video and attaches a provenance record. Non-compliance could result in heavy fines or even liability for any harm caused by unmarked fakes. This is analogous to safety regulations in other industries (for example, toy manufacturers must not use lead paint, and if they do, they face penalties even if a consumer misused the toy). The onus is on the creators of the AI tools and platforms hosting content to help uphold integrity. As mentioned, the EU’s forthcoming rules explicitly seek to require disclosure for AI-generated content, and even the U.S. FTC is exploring rules for companies that develop deepfake technologies. So the regulatory winds are blowing in this direction.
One might wonder: can’t bad actors just use underground tools that ignore these rules? Some will try, of course. But if mainstream tools – the ones integrated in our phones, social media, operating systems – follow the rules, any content created by rogue tools will stick out as suspicious (no watermark or an invalid signature). It’s similar to how spam email often fails authentication checks and gets flagged because reputable mail servers use standards like SPF/DKIM. Widespread adoption of provenance standards raises the baseline cost of making undetectable fakes. It also creates a legal hook: if someone knowingly stripped metadata or used a non-compliant generator to impersonate someone, that very act could be an additional offense. Over time, using unwatermarked deepfake tools might become analogous to using lock-picking tools – legal for research perhaps, but highly suspect if harm is caused.
An additional technology we should encourage is deepfake detection and authentication services. These are AI algorithms trained to recognize signs of manipulation in media (such as inconsistent lighting in a fake video frame, or odd spectral artifacts in cloned audio). Dozens of research groups and startups are working on detection tools, and they will play a vital role as a second line of defense. While watermarks are a “by design” solution, detection is an adversarial one – essentially a constant cat-and-mouse between fakers and detectors. Still, detection is improving; for instance, researchers have found that AI-generated faces have telltale quirks (like certain reflections in eyes) that can be spotted. In fact, a combination of detection AI and provenance metadata would be ideal. The metadata provides declared info, and the detection acts as validation or failsafe (it might flag “this video claims to be real, but we detect signs of AI – investigate further”). As deepfakes get more sophisticated, detection will need to incorporate many techniques, including possibly requiring hardware-level attestation (e.g., future cameras might sign images with a chip, so any image without that chip’s signature is presumed synthetic). Governments and institutions should fund R&D in this area heavily – it’s as crucial as antivirus or cybersecurity research. Indeed, Europe has started doing so: the EU’s Horizon program is funding projects like vera.ai to develop trustworthy AI verification tools, including deepfake detectors for audio, video, image, and text, all built as open-source algorithms. Such initiatives recognize that truth tech must grow alongside fake tech. There’s also a burgeoning industry of startups focusing on deepfake detection and content authentication. For example, Italian startup IdentifAI recently secured €2.2 million to develop AI that can distinguish AI-generated content from human-made content, with the mission to “ensure that emerging technologies serve the common good and do not become tools of destabilization”. This is a positive sign – and we believe governments should encourage more of these solutions through grants, prizes (like the FTC’s recent challenge on AI misuse), and procurement (using such tools in law enforcement and national security).
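The triage logic that combines the two signals could be as simple as the sketch below; the labels, thresholds, and detector score are hypothetical, and because detection is probabilistic it serves as a failsafe signal rather than a verdict:

```python
# Sketch of the triage logic a platform or verification app might apply when
# combining provenance metadata with a detector. Labels and the 0.8/0.5
# thresholds are hypothetical policy choices.

def triage(manifest_valid: bool, declared_synthetic: bool, detector_score: float) -> str:
    """detector_score: estimated probability that the media is AI-generated."""
    if manifest_valid and declared_synthetic:
        return "label as AI-generated"                 # honest, labeled synthetic media
    if manifest_valid and not declared_synthetic:
        if detector_score > 0.8:
            return "flag for review: claims to be real but shows signs of AI"
        return "treat as authentic"
    # No valid provenance at all: the content cannot vouch for itself.
    if detector_score > 0.5:
        return "block or label pending verification"
    return "treat as low trust: unverified origin"
```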
In summary, watermarking + provenance is a non-negotiable piece of the puzzle. It empowers everyone from social media companies to individual consumers to quickly assess the authenticity of media. When combined with strong personal content laws, it creates a scenario where violating those laws is both risky (due to legal penalty) and difficult to hide (due to technological tagging). Our hope is that this would dramatically curb the spread of harmful deepfakes: either they get labeled (so viewers know it’s fake and the damage is mitigated), or the fear of getting caught discourages their creation in the first place. This doesn’t mean no one will ever attempt a fake – just as laws against counterfeiting don’t stop all counterfeiters – but it shifts the environment. Today, deepfakes enjoy a Wild West of impunity and anonymity. In the future we envision, they would be rare, underground, and promptly exposed by the combined forces of law and tech.
Figure: A comparison of traditional parody versus AI-driven impersonation. At left, comedian Kate McKinnon impersonates U.S. Senator Elizabeth Warren on a TV show – an obvious parody protected as free speech. At right, deepfake technology has been used to superimpose Senator Warren’s actual face onto McKinnon’s body. The latter creates a far more deceptive impression of “Warren” appearing in that scene. Personal Content Rights would ensure such AI-generated lookalikes are clearly labeled and only made with authorization, preserving space for satire while preventing realistic imposters.
Enforcement and Global Adoption
Having strong laws and technology on paper means little without effective enforcement. One major challenge is that the internet is borderless – a deepfake created in one country can instantly proliferate worldwide. However, we can take inspiration from successful regulatory frameworks like the EU’s GDPR data privacy law. GDPR is only an EU law, but its stringent requirements have effectively become a global standard due to what’s known as the “Brussels effect.” Companies operating internationally found it impractical to maintain one set of rules for Europe and a lax set elsewhere, so many adopted GDPR-level practices globally. We anticipate a similar dynamic for Personal Content Rights: if a significant bloc of consumers (say, the EU or a coalition of countries) enacts these protections, major tech platforms and AI providers will choose to implement compliance universally rather than geo-fence different rules. In other words, the most stringent standard (that deepfakes require consent & labeling) could become the default everywhere. This halo effect already occurs in content moderation—platforms often prohibit certain harmful content globally because it’s illegal in one big market.
That said, we need not wait for every superpower or global treaty to agree. Smaller jurisdictions can lead the way. A single forward-thinking country, state, or even city could pass an ordinance establishing personal content rights for its citizens. While that law only directly applies within that territory, it can set an example and put pressure on companies. For instance, if the state of California (home to many tech companies) imposed such rules, platforms might apply them to all users to simplify operations (this mirrors how California’s auto emissions standards influenced all car manufacturing). Even if a small nation with less economic sway passed a law, it could still generate international discussion and serve as a proving ground. Sometimes, a moral stance by a principled nation or institution triggers a domino effect – consider how some countries banned microplastics or recognized digital rights early, pushing others to follow. In the realm of human rights, leadership often comes from those willing to act first, not necessarily the biggest players. We should encourage any willing government, or even regional bodies like the EU or AU, to pioneer these protections.
Enforcement will likely involve a mix of administrative regulation and private litigation. On the regulatory side, agencies (like data protection authorities under GDPR, or the FTC in the US) could be tasked with monitoring compliance of major AI providers and social media platforms. They should have the power to audit algorithms for watermarking compliance, investigate complaints of unlabeled or non-consensual deepfakes, and levy fines. The penalties must be hefty enough to hurt even large companies – GDPR’s model of fines up to 4% of global revenue is a good reference point. Additionally, individuals whose rights are violated should have a cause of action to sue the perpetrators (whether it’s the creator of the deepfake, the platform that enabled its spread, or both, depending on circumstances). This private right of action empowers victims to seek damages and injunctions quickly. For example, if someone finds a deepfake of themselves online, they could go to court to get it removed and claim compensation for any harm (mental distress, reputational damage, etc.). Courts may need to adapt procedures for quick relief – maybe something akin to copyright takedown injunctions or privacy injunctions that can be issued rapidly. In egregious cases (like deepfakes used for criminal fraud or election interference), criminal penalties might be warranted to underscore the severity. Some jurisdictions are considering making certain deepfake uses a crime (e.g., Texas penalizes deepfakes intended to injure candidates in elections).
A practical issue is jurisdiction and cooperation. If a fake is created in Country A but harms someone in Country B, how do we handle that? International law is always tricky in cyberspace, but instruments like the Budapest Convention on cybercrime show that cross-border cooperation is feasible when many countries agree on the harm. Ideally, a multinational agreement on AI persona rights could be developed – perhaps under the UN or Council of Europe – to harmonize definitions and enforcement. Even without a formal treaty, nations can assist each other: extraditing someone who created malicious deepfakes, or at least sharing information to trace the source of anonymous postings. Large platforms (like major social networks) can also play a quasi-enforcement role by detecting and blocking obviously violative content. Indeed, once watermark standards are in place, a platform like Facebook could automatically refuse to publish an AI-synthesized video of a person that lacks the proper consent token in its metadata. They could flag it and require the user to prove they have the right to post it (similar to how YouTube’s Content ID flags copyrighted music). Platforms will likely do this if the legal risk is high – they won’t want to be on the hook for distributing illegal deepfakes.
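A platform-side gate of that kind might look roughly like the sketch below, reusing the hypothetical manifest fields from the earlier provenance example; the consent-token mechanism is an assumption about how authorization could be represented:

```python
# Hedged sketch of an upload gate: before publishing media whose provenance
# manifest declares an AI-generated human likeness, require a consent token for
# the depicted identity. Manifest fields reuse the earlier hypothetical example.

def upload_gate(manifest: dict | None, valid_consent_tokens: set[str]) -> str:
    if manifest is None:
        return "hold for review: no provenance data"
    if manifest.get("action") != "ai_generated":
        return "publish"                              # not declared synthetic
    subject = manifest.get("depicted_subject", "none")
    if subject == "none":
        return "publish with AI label"                # synthetic, nobody depicted
    token = manifest.get("consent_token")
    if token in valid_consent_tokens:
        return "publish with AI label"                # authorized, labeled clone
    return "reject: no consent on record for " + subject

# Example: a deepfake of a real person with no consent token is refused.
print(upload_gate({"action": "ai_generated",
                   "depicted_subject": "person:jane-q-smith"}, set()))
# reject: no consent on record for person:jane-q-smith
```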
One positive side effect of requiring provenance data is that it will also help combat bogus claims of deepfakery. We must remember the “liar’s dividend” problem: when people can deny real events by falsely claiming “that video is a deepfake.” With robust authentication, honest individuals can prove their genuine content is real. For example, a politician caught in a genuine scandal might try to dismiss a real video as fake – but if that video has proper camera signatures and no sign of tampering, neutral experts can validate it. Thus, provenance cuts both ways: tagging the fakes and certifying the reals. Over time, we may cultivate a norm of distrusting any media that can’t be authenticated. Courts, for instance, might require that digital evidence have provenance metadata or else treat it as suspect. This could greatly reduce the impact of disinformation by shifting the burden: instead of “prove this video is fake,” it becomes “prove this video is real” by showing its authenticity trail. The technology to do this at scale is still developing, but if we mandate its use, the ecosystem will adapt. Just as HTTPS is now widespread for website authenticity, we could see most news videos, official communications, and so on carrying cryptographic authenticity seals. People will get used to looking for a little “verified content” checkmark (Mozilla Foundation has suggested something along these lines for synthetic media transparency).
Another critical component is independent oversight. Given the sensitivity – we are talking about potential government regulation of media production and powerful interests – it’s vital that enforcement not become politicized or overreaching. We suggest establishing independent bodies (similar to how many countries have independent election commissions or data protection authorities) specifically focused on AI content and persona rights. These bodies could handle complaints from the public, mediate disputes, and ensure that neither government nor corporations abuse the system to stifle legitimate speech. For example, an independent “AI Persona Rights Commission” could publish transparency reports on deepfake incidents, maintain the public identity graph, and coordinate with tech companies on improvements. Civil society organizations and academia should also be involved in watchdogging. Indeed, some NGOs might emerge that specialize in scanning the internet for deepfakes of vulnerable individuals (like activists or private citizens who are targeted) and helping them get those removed – a kind of digital rights advocacy. It’s important because not everyone will have the resources to monitor or legally pursue violations, so collective efforts and perhaps class-action mechanisms should be enabled.
The legal system will no doubt be tested by new scenarios, and it will need to evolve. Courts will have to interpret these personal content rights and balance them against free expression in edge cases. But we’ve navigated similar waters with copyright and privacy laws. Over time, a body of case law will clarify things like what counts as “consent,” how to measure “identifiability,” and what defenses may exist (for example, a narrow exception might be carved out for AI use in obvious satire that is nonetheless very realistic – though requiring prominent disclaimer). It will be a learning process, but starting now means we can shape the norms early. If we delay enforcement until deepfakes are overwhelming, it could be too late to pull back trust in digital media.
Finally, a note on market adjustments: If some major platform or AI provider stubbornly refuses to comply and blocks its service in regions with these laws (much like some websites blocked EU users after GDPR), this creates an opening for compliant alternatives. In the long run, ethical AI and content authenticity could become a selling point. For example, a social network might brag that it’s “deepfake-free” because it verifies all images, attracting users concerned about misinformation. Or AI companies might build tools that are compliant by design and market them to enterprises needing trustworthy outputs. We are already seeing a shift towards “responsible AI” in the corporate world – these regulations would accelerate that, rewarding companies that invest in safety and transparency. Governments can also incentivize local tech industries to fill any gap. If a big multinational pulls out because it doesn’t want to follow the rules, domestic companies (especially startups) might step up with home-grown solutions that do. This not only maintains services for users but also boosts the local tech economy – a strategic win-win that Europe, for instance, has noted in the context of being a “regulatory superpower” that can spur homegrown innovation.
In conclusion, while enforcement of Personal Content Rights is undoubtedly complex, it is feasible through a mix of global influence, smart regulation, technological aids, and community vigilance. The framework we propose actually leverages the strengths of both big institutions (for sweeping standards) and small actors (for innovation and early action). It also explicitly acknowledges that trust in media is a collective good – one that independent guardians and transparent processes must uphold, not just governments or corporations alone. By combining hard law with technical tools and aligning incentives for compliance, we can drastically curtail the wild-west of deepfakes and build a more secure digital public square.
Encouraging Innovation and Safeguarding Progress
An important consideration is ensuring that in our zeal to regulate misuses of AI, we don’t accidentally stifle beneficial innovation or entrench the positions of tech giants. Personal Content Rights should be crafted to protect individuals while also encouraging open innovation and competition. This means two things: carving out space for researchers and smaller developers to work on AI (under appropriate safeguards), and actively fostering a new industry of solutions (detection, verification, identity management) that can create jobs and technical leadership, especially in regions like Europe that value digital sovereignty.
First, consider open-source and academic developers. Some of the most significant advancements in AI have come from open communities and university labs, not just Big Tech. We do not want a regime where only large corporations (with big legal teams and resources) feel safe to develop generative AI. That could lead to monopolies or a chilling of research. Therefore, the law might provide safe harbors or allowances for bona fide research and development of AI models, as long as they are not being used to produce public deceptive impersonations. For example, training an AI on a celebrity’s voice without publishing any fake audio might be permissible for research purposes (similar to how you can study copyrighted text under fair use in research). What we outlaw is the deployment of the model to generate a fake song and release it commercially without consent. The difference is between creating capability and actually causing harm. One could imagine requiring researchers to watermark any outputs even in lab settings, but generally academia can be trusted with more leeway. Open-source model developers might need to include warnings or even technical constraints – for instance, an open source face-swap algorithm might ship with a default watermarking feature or require users to click “I agree not to misuse this.” While that alone isn’t enforcement, it shows intent to comply. Regulators should be careful not to impose liability on someone just for writing code that could be misused by others (much like how encryption tools or 3D printers are dual-use). The focus should remain on the end misuse: impersonating without consent.
There is a parallel here to how copyright law has an exemption for service providers (the “safe harbor” in the DMCA) if they promptly remove infringing content when notified. Similarly, maybe AI model repositories or open source communities could be given safe harbor as long as they cooperate in mitigation (for instance, if someone reports “this model is being used to deepfake me,” the repository might flag or remove it). The Tennessee Elvis Act’s approach of imposing liability on tool creators is quite broad – it could be interpreted to even target open source developers of voice cloning software. We should refine that: the liability for tool creators should perhaps apply only if they knowingly or intentionally design the tool for impersonation without safeguards. If an open source tool has clear consent requirements built in, its creator could be seen as acting in good faith. By clarifying this in law, we protect the open innovation ecosystem. After all, many beneficial uses of generative tech exist – from assistive voice for the disabled (cloning a user’s own voice) to artistic transformations – and we want small players to contribute ideas there.
Secondly, far from hindering innovation, strong persona rights and authenticity requirements can spark a new wave of ethical tech innovation. We already touched on startups focusing on detection and verification. Governments and investors should double down on this. Europe, in particular, has an opportunity to lead given its regulatory head start and values-driven market. Reports have noted that trends in GenAI are moving toward more deterministic, transparent, and decentralized systems – areas where Europe’s strengths in open-source and privacy can shine. By funding projects in content authenticity, digital identity, and privacy-preserving AI, Europe (and any region) not only protects its citizens but can export these solutions globally. For instance, European researchers building a multilingual deepfake detector could become the go-to service for media companies worldwide. The market for trust-tech may well explode in coming years – imagine every news organization, court system, and social media needing tools to verify content. That’s a massive demand for skilled companies to meet. According to one analysis, over 50 companies offering deepfake detection have already emerged, forming an “emerging cluster” of innovation. This is just the beginning.
Public funding mechanisms like the EU’s Horizon Europe (which funded vera.ai) and national innovation grants should allocate resources specifically to Personal Content Rights infrastructure. This could include grants for: building the open identity graph, developing robust watermarking algorithms that are hard to remove, creating user-friendly verification apps (perhaps a browser plugin that shows “green/red” on images indicating authentic vs AI), and exploring new approaches like watermarking at the hardware level (e.g., maybe camera sensors of the future will embed fingerprints in genuine footage). It could also fund social and legal research into best practices for enforcement and international cooperation. In essence, treat this as a moonshot similar to cybersecurity or space tech – a comprehensive program where academia, startups, and industry collaborate. The return on investment is not just reduced misinformation, but also leadership in a critical field of AI governance.
We should also consider the role of smaller enterprises and individual creators. While big companies might initially complain about regulations, compliance can actually spur creative solutions that benefit them too (e.g., Adobe jumping on content credentials could give it a competitive edge as a trusted platform). But we must ensure smaller companies are not left behind. One way is to emphasize open standards: if watermarking requires a certain protocol, it should be freely implementable, not a proprietary tech only available to those who license it. The history of the internet shows that open protocols (like HTTP, SSL, etc.) enable broad participation, whereas closed ones can create barriers. Regulators can encourage this by favoring or even requiring open standards for any mandated technology. For example, if a law says “AI content must be labeled,” it could reference a standard like C2PA or any other open standard that achieves the same goal. This leaves room for an open-source alternative to Adobe’s approach to flourish if it’s lighter weight or more accessible.
Moreover, any certification or compliance regime (like a seal of authenticity or a registry of authorized AI content producers) should have low-cost paths for small entities. Perhaps governments can subsidize the issuance of signing certificates for individual creators or open-source projects, so that cost isn’t a barrier to participation in the provenance system. Think of it like providing free SSL certificates (Let’s Encrypt did that for websites, greatly expanding adoption). An analogous “Let’s Authenticate” project for content credentials could be transformative. The involvement of independent organizations is key here – maybe a nonprofit coalition can run the identity graph and certificate authority as a public service, funded by grants and donations, ensuring neutrality and accessibility.
Lastly, by explicitly encouraging new companies that facilitate detection and enforcement, the law can create a virtuous cycle. For instance, if penalties for violations are significant, demand for insurance and compliance tools will rise – companies might pay for services that scan the web for unauthorized use of their brand ambassador’s likeness, etc. That demand fuels more startups in RegTech and AI auditing. The European Commission has highlighted the concept of “trustworthy AI” and could set up accelerators or sandboxes for companies in this space (somewhat like how FinTech sandboxes work for finance compliance tech). In my earlier writings, I argued that Europe’s strategic opportunity in GenAI lies in aligning with its values – supporting tech that is human-centric, transparent, and secure. Nurturing an ecosystem around Personal Content Rights is a prime example. It aligns perfectly with European values of privacy, dignity, and the precautionary principle, and could become an export (both in terms of policy and products) to the rest of the world.
To conclude this section: We affirm that protecting people from deepfake abuse does not mean hampering technological progress. On the contrary, it sets guardrails that enable sustainable progress. Just as environmental regulations spurred the clean tech industry, and safety standards spurred innovation in auto engineering, so too can AI persona regulations spur a wave of “clean AI” innovation. By safeguarding the core values of privacy, consent, and truth, we create a healthier environment for AI to evolve – one where creators, users, and those represented in content all have their rights respected. Open-source communities and nimble startups are indispensable allies in this mission, and they should be empowered, not hindered. If we get this right, the outcome will be a richer, more trustworthy digital media landscape that still buzzes with creative energy and technological marvels – but guided by respect for the individual at its heart.
Conclusion
Deepfakes and AI-cloned voices embody the double-edged nature of technology: they hold exciting creative potential, yet pose grave risks to individual rights and societal trust. In this white paper, we have made the case that the unfettered creation of AI-generated impersonations is simply unacceptable in a civil society. It must become illegal – not just unethical – to forge someone’s likeness without permission. Personal Content Rights would establish this principle unequivocally, giving every person a shield of control over their own image and voice in the digital realm. This is a natural evolution of existing rights to privacy, publicity, and identity, updated for the AI era.
We began by illustrating the scale of harm already in evidence: from financially devastating scams to the proliferation of non-consensual explicit material, real people are being hurt by fake likenesses. Without intervention, these harms will multiply and the very notion of “seeing is believing” may crumble in our information ecosystem. But we do not have to accept that dystopian outcome. With sensible legislation, robust technology, and international collaboration, we can ensure AI works for us, not against our autonomy. The ideas detailed here – outlawing unauthorized deepfakes, creating granular identity graphs, mandating watermarks and provenance, enforcing via strong penalties, and catalyzing innovation in authenticity tech – form a holistic solution. Each element supports the others: legal deterrence encourages use of watermarks; watermarks make enforcement easier; identity graphs prevent loopholes; innovation makes compliance cheaper and more effective.
There will be challenges along the way. We must carefully balance free expression with protection from deception, and avoid misuse of these laws to censor legitimate content. We must build technical standards that are secure yet practical, ensuring they are adopted universally rather than sidestepped. And we must bring the global community on board, respecting cultural differences (for instance, notions of parody or public figure rights vary by country) while asserting core principles that transcend borders – namely, that human identity is not fair game for exploitation. These efforts will require ongoing dialogue among lawmakers, technologists, civil liberties groups, and the public. We should expect refinement: perhaps pilot programs in certain jurisdictions, feedback from industry on feasibility, and tuning of what constitutes exempt fair use. But none of these challenges are insurmountable, especially when weighed against the stakes of inaction.
It’s worth recalling that major advances in rights often start with bold, even idealistic steps. The GDPR was born from a bold assertion that privacy is a fundamental right deserving strong safeguards – and despite many skeptics, it has reshaped how the world treats data. In a similar spirit, declaring personal content rights would be a bold move asserting that individuals do not surrender their identity just because technology can copy it. It sends a message that human dignity persists as a priority in the face of AI. Future generations will no doubt inhabit a world even more blended with virtual representations and avatars; establishing these rights now lays a foundation of respect and agency that will carry forward.
In closing, we urge policymakers, industry leaders, and citizens to treat this issue with the urgency it deserves. The genie of generative AI is out of the bottle – we cannot undo the ability to create flawless deepfakes. But we can and must civilize its use. By making the malicious use of that power taboo and illegal, by bolstering our media with transparency tools, and by reinforcing the notion that consent isn’t optional, we can enjoy the creative wonders of AI without living in fear of its dark side. It’s often said that with great power comes great responsibility; Personal Content Rights are about codifying that responsibility when it comes to the power of impersonating a human. Let’s build a future where creativity flourishes but each person’s voice and face remain truly their own – armed with choice, protected by law, and secured by technology.