Digital Twins Demand a New Social Contract
Mark Fenwick, Paul Jurcys / Nov 18, 2025
Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0
In an age of personal AI, digital twins have begun to take shape. By this, we mean digital duplicates of living or deceased people, constructed from personal data and trained on existing writings, public interviews, and presentations, that aim to reproduce an individual’s knowledge, style, and behavior.
One striking example is “LuFlot,” a philosophical agent built on the corpus of Prof. Luciano Floridi, a leading thinker in the philosophy of information and digital ethics. LuFlot is not a sci-fi concept, but an existing text-based conversational tool that enables his students and followers to engage with his ideas through AI-mediated dialogue, conceptual debate, and inquiry-based learning. It extends his intellectual presence.
Other public thinkers and figures are also experimenting with tools that enable digital versions of themselves. These AI-powered agents can “revive” deceased celebrities, provide access to knowledge, teach, advise, and even negotiate on behalf of their human “principal.”
The proliferation of digital duplicates raises pressing questions. Who controls the data that animates these digital twins? What rights do individuals retain when their synthetic persona is replicated in this way? What obligations fall on developers who create and deploy them? Such ethical discussion is indispensable, but not sufficient.
In this commentary, we propose a framework that integrates the ethical conditions proposed by John Danaher and Sven Nyholm with enforceable legal rights and robust data architectures. All three elements are needed if these systems are to be trustworthy and aligned with human dignity.
Introducing the MVPP framework
In a 2024 article in “AI and Ethics,” John Danaher and Sven Nyholm introduced the “Minimally Viable Permissibility Principle” (MVPP), which proposes five conditions for ethical permissibility: consent, minimal positive value, transparency, harm mitigation, and contextual integrity.
These conditions serve, in effect, as a baseline checklist for digital duplicates. Consent means that duplication cannot occur without authorization. Minimal positive value requires that duplicates serve a constructive purpose, not merely trivial or harmful ends. Transparency ensures that people know when they are interacting with a digital duplicate rather than the real person. Harm mitigation obliges creators to anticipate and prevent foreseeable harms, such as reputational damage or deception. Contextual integrity ensures that digital duplicates appear only in appropriate settings, preserving authenticity and avoiding substitution for human presence.
Critics have identified both strengths and limitations in Danaher and Nyholm’s approach. For example, Miyahara and Shimizu (2025) argue that the MVPP overlooks the problem of functional scarcity: the erosion of value when rare talents are cloned. Kozlovski and Makhortykh warn about “grief bots” and “interactive personality constructs of the dead,” which test our intuitions about respect, fungibility, and mourning. Meanwhile, Singh et al. caution that technological mediation risks de-skilling core human capacities such as phronesis (practical wisdom) and meta-cognition.
Crucially, MVPP is not an endorsement of digital duplication; it simply establishes ethical boundaries. If the conditions are met, duplication may be ethically permissible, though not necessarily desirable or wise. If not, duplication should be avoided. In this sense, the MVPP functions as a moral safety net: a set of minimum requirements designed to prevent the most obvious abuses.
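To make this conjunctive structure concrete, consider the toy sketch below (our illustration, not anything proposed by Danaher and Nyholm). It records each of the five conditions as a human judgment; failing any one of them blocks permissibility.

```python
from dataclasses import dataclass


@dataclass
class MVPPCheck:
    """Toy encoding of the five MVPP conditions. Each flag records a
    human judgment; none of these can be decided by software alone."""
    consent: bool
    minimal_positive_value: bool
    transparency: bool
    harm_mitigation: bool
    contextual_integrity: bool

    def permissible(self) -> bool:
        # The MVPP is conjunctive: every condition must hold, and even
        # then duplication is merely permissible, not desirable or wise.
        return all((self.consent, self.minimal_positive_value,
                    self.transparency, self.harm_mitigation,
                    self.contextual_integrity))
```

Writing the principle out this way underscores that it is a floor, not a ceiling: even a duplicate that passes all five checks has only cleared the minimum bar.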
Legal principles for digital duplicates
However, the MVPP is minimalist by design. It does not address deeper questions of data ownership, architectural safeguards, or enforceability. We argue that a new social contract for digital duplicates is necessary, one grounded in three essential legal principles.
1. A human-centric approach to personal data
Personal data is not simply raw material; it is a constitutive part of human identity. To treat it otherwise is to misunderstand its role in animating digital twins. A human-centric model would make individuals, not enterprises, the primary stewards of “their” data.
In practice, individuals should have the ability—and easy-to-use tools (e.g., personal data vaults)—to control the data used to power their digital doubles. This approach recognizes personal data as an extension of the self, not an asset to be used at the discretion of third parties.
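To illustrate what such tools might look like, here is a minimal sketch of a personal data vault. All class and method names are hypothetical, and a real system would add encryption, authentication, and audit logging.

```python
from dataclasses import dataclass


@dataclass
class AccessGrant:
    """A revocable, purpose-bound permission issued by the data subject."""
    grantee: str          # e.g., a twin developer's identifier
    categories: set[str]  # e.g., {"essays", "interviews"}
    purpose: str          # e.g., "train-digital-twin"
    revoked: bool = False


class PersonalDataVault:
    """The individual, not the enterprise, holds the data and issues
    (or revokes) grants before anything leaves the vault."""

    def __init__(self, owner: str):
        self.owner = owner
        self._records: dict[str, list[str]] = {}  # category -> documents
        self._grants: list[AccessGrant] = []

    def deposit(self, category: str, document: str) -> None:
        self._records.setdefault(category, []).append(document)

    def grant(self, grantee: str, categories: set[str],
              purpose: str) -> AccessGrant:
        g = AccessGrant(grantee, categories, purpose)
        self._grants.append(g)
        return g

    def revoke(self, grant: AccessGrant) -> None:
        grant.revoked = True  # future reads under this grant now fail

    def read(self, grantee: str, category: str, purpose: str) -> list[str]:
        # Data is released only under a live grant that matches the
        # requester, the data category, and the declared purpose.
        for g in self._grants:
            if (not g.revoked and g.grantee == grantee
                    and category in g.categories and g.purpose == purpose):
                return list(self._records.get(category, []))
        raise PermissionError(
            f"no active grant for {grantee}/{category}/{purpose}")
```

The design choice that matters is the direction of control: data leaves the vault only under a live, purpose-bound grant issued by its owner, and revocation cuts off all future access.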
2. Private-by-default data architecture
This principle follows from the first. The current enterprise-centric data paradigm assumes access to user data unless the user opts out. Most digital services collect data by default, leaving individuals to fight for control over their information and privacy.
A legal framework for digital duplicates should instead adopt a private-by-default model where data is inaccessible unless the user explicitly opts in. Technically, this could involve architectures such as encrypted personal data enclaves, on-device computation, and federated learning, in which data remains local.
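The sketch below illustrates the federated pattern in deliberately simplified form: each participant computes an update on-device and shares only that update, never the underlying texts. The “training” rule is a stand-in for a real learning step, and the function names are ours.

```python
# Simplified federated round: raw personal data never leaves the
# participant's device; only derived updates reach the coordinator.

def local_update(weights: list[float],
                 private_texts: list[str]) -> list[float]:
    """Runs on-device: derives an update from data that stays local."""
    signal = sum(len(t) for t in private_texts) / max(len(private_texts), 1)
    return [w + 0.01 * (signal % 1.0) for w in weights]  # stand-in rule

def federated_round(weights: list[float],
                    participants: list[list[str]]) -> list[float]:
    """Runs at the coordinator: sees only updates, never the raw texts."""
    updates = [local_update(weights, texts) for texts in participants]
    return [sum(column) / len(updates) for column in zip(*updates)]

# Two participants contribute updates without exposing their data.
new_weights = federated_round([0.0, 0.0], [["my diary entry"], ["my email"]])
```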
Legally, the private-by-default approach aligns with the data minimization principle in the European Union’s General Data Protection Regulation (GDPR) and with emerging provisions in the EU’s AI Act (Recital 69). Our normative claim is simple: individuals should not have to struggle for privacy; systems should assume it.
3. The principle of data dominion
Individuals should have dominion over the data that constitutes their digital identity. They should have enforceable rights to exclude others, authorize uses, and benefit from the value their data generates.
Digital twins intensify ownership questions: If a corporation creates an AI replica of you, who owns it? Under a data dominion model, the answer is clear: you do. The twin is an extension of you, so you should retain ultimate rights over its use, transfer, or deletion. This approach empowers individuals, counters the concentration of power in the hands of large platforms, and offers a foundation for a more democratic, just, and inclusive data ecosystem.
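A minimal sketch, with hypothetical names throughout, of what dominion could look like in code: lifecycle operations on the twin succeed only when invoked by the principal it replicates, never by the developer alone.

```python
class DigitalTwin:
    """Illustrative model of data dominion: control over the twin rests
    with the principal it replicates, not the developer who built it."""

    def __init__(self, principal: str, developer: str):
        self.principal = principal
        self.developer = developer
        self.active = True

    def _require_principal(self, actor: str) -> None:
        if actor != self.principal:
            raise PermissionError(f"{actor} lacks dominion over this twin")

    def deploy(self, actor: str, context: str) -> None:
        self._require_principal(actor)  # developer alone cannot deploy
        print(f"Twin of {self.principal} authorized for: {context}")

    def delete(self, actor: str) -> None:
        self._require_principal(actor)  # the right to erasure stays with you
        self.active = False


twin = DigitalTwin(principal="alice", developer="acme-ai")
twin.deploy("alice", "university seminar")  # permitted
# twin.deploy("acme-ai", "ad campaign")     # would raise PermissionError
```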
These three principles move beyond ethics and into law and the kind of technological architecture that best advances a new social contract that protects and benefits the public. While the MVPP framework governs the ethical permissibility of digital duplicates, the human-centric, private-by-default, and data-dominion principles govern control of the underlying data ecosystem and orient it toward individuals rather than enterprises. Ethics and law can converge in this way to provide a more comprehensive and robust framework.
The future: A new world of digital identities
Just as everyone today has an email account, in the near future, most people may have (multiple) personal AI twins or digital duplicates acting on their behalf. The next step involves integrating ethical precedents like those provided by MVPP with the legal principles outlined above to create a new social contract for a world of personal digital identities.
Such a framework must recognize that digital twins are here to stay, but only on terms that preserve human dignity and autonomy. It should treat unauthorized duplication as a form of identity theft and require developers to implement consent and transparency mechanisms and to design systems with privacy as the default. It should also affirm individual sovereignty over the data that powers personal AI.
If implemented, these ethical and legal principles could make personal AI twins tools for human flourishing rather than for surveillance and social control. They could extend intellectual presence across space and time, open new pathways to knowledge, create markets for new digital services, and support personal empowerment. If ignored, however, digital twins risk becoming new instruments of exploitation that further erode human autonomy.
The conversation initiated by Danaher and Nyholm provides a crucial ethical baseline. Adding a legal and architectural perspective ensures the discussion moves beyond theory to enforcement and practical impact. In doing so, we can establish a new relationship with technology, one that respects the primacy of the individual in an age of AI.