Will Generative Ghosts Help or Haunt? Contemplating Ethical and Design Questions Raised by Advanced AI Agents

Gabby Miller / Feb 13, 2024

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

After 76-year-old Lee Byeong-hwal learned he had terminal cancer, he decided to leave his wife a “digital twin” to stave off loneliness. “Sweetheart, it’s me,” an avatar of Byeong-hwal says to his wife as she blots tears from her face. “I’ve [sic] never expected this would happen to me. I’m so happy right now,” the wife responds to the virtual representation of her husband a few months after his passing.

In a two-minute video from the South Korean startup DeepBrain AI, viewers – and potential buyers – get a sneak peek into Re;memory, a “premium AI human service” that allows those left behind to cherish “loved ones forever.” For €10,000 to €20,000, buyers get a seven-hour filming and interview session to help create a synthetic version of a person based on their real voice and image data. And for another thousand euros, loved ones can get a 30-minute “reunion” to interact with the deceased person’s digital twin in a “memorial showroom” equipped with a 400-inch screen and a high-quality sound system.

DeepBrain AI is only one of several startups rushing products to market that create digital representations of the deceased. Yet many practical and ethical questions remain unsettled, in part because of rapidly shifting technological and social factors as well as limited research, leaving various stakeholders uncertain about the benefits and risks of posthumous generative agents. Among them are researchers Meredith Ringel Morris, Director for Human-AI Interaction Research at Google DeepMind, and Jed R. Brubaker, Associate Professor of Information Science at the University of Colorado Boulder, who recently published a Google-funded report titled “Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives.”

Ringel Morris and Brubaker’s research introduces the concept of “generative ghosts,” AI-powered representations of deceased individuals capable of producing believable human behaviors, including memory and planning. The authors say they hope to “empower people to create and interact with AI afterlives in a safe and beneficial manner” by anticipating the potential benefits and risks the technology poses to individuals and society. The research might also help inform the “design space” for generative ghosts – answering questions about their provenance, deployment, and embodiment, as well as policy and governance approaches.

The paper takes the reader through a history of technologies that mourners have adopted to interact with the deceased, all of which raise concerns around privacy and consent. One futurist created a chatbot dubbed “Fredbot” to “embody the memory of his deceased father,” using exact quotes from materials he left behind. Another engineer created an app called “Roman” to honor her best friend by training a neural network on text messages the two had exchanged. Then there are “griefbots” or “deathbots,” which predate modern LLMs and are typically created posthumously by third parties using text the deceased produced during their lifetime.

Now, the authors consider generative ghosts as an extension of the concept of a griefbot: simulacra that “always include the ability to generate novel content in-character…potentially evolve over time, and possess agentic capabilities such as the ability to participate in the economy or perform other complex tasks with limited oversight.” As AI technologies advance, so too will the ways generative ghosts can be created and deployed. End-of-life planning for a “digital afterlife” might soon regularly include the proactive first-party creation of generative ghosts. And assistive AI agents created by the living to mimic their own personas while executing actions on their behalf, referred to as “generative clones,” may transition to ghostly status upon their human’s death.

The researchers primarily evaluate the “design space” for generative ghosts: the set of choices designers face about these systems’ capabilities. The designers of these AI systems may choose from a palette of options, ranging from how a generative ghost is deployed, to whether it can exist in multiple locations or iterations at once, to the degree to which it resembles the human it represents. Some of the authors’ most compelling “design” examples involve dark patterns and the necessity of kill switches. Dark patterns, or deceptive user interface designs that “trick” users into doing things they did not intend, could, when combined with advanced generative ghosts, foster addictive, parasocial relationships that harm the mental health of grieving individuals. For instance, it might be more responsible to let only the living initiate interactions with generative ghosts, barring the systems from reaching out via “push notifications” or any equivalent channel.
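To make that constraint concrete, here is a minimal Python sketch of what a user-initiated-only interaction policy might look like. Every class, field, and function name here is a hypothetical illustration; none comes from the paper or any existing product.

```python
from dataclasses import dataclass

@dataclass
class InteractionPolicy:
    """Hypothetical policy: the ghost may never reach out first."""
    allow_agent_initiated: bool = False
    allow_push_notifications: bool = False

class GhostSession:
    def __init__(self, policy: InteractionPolicy):
        self.policy = policy
        self.active = False

    def start(self, initiated_by: str) -> None:
        # Only a living user may open a session; the ghost itself cannot.
        if initiated_by != "user" and not self.policy.allow_agent_initiated:
            raise PermissionError("Only the bereaved may initiate contact.")
        self.active = True

    def notify(self, message: str) -> None:
        # Push notifications from the ghost are off by default, to avoid
        # dark-pattern-style re-engagement prompts aimed at grieving users.
        if not self.policy.allow_push_notifications:
            return  # silently drop agent-initiated outreach
        print(f"[push] {message}")

# Usage: a session opens only when the user asks for it.
session = GhostSession(InteractionPolicy())
session.start(initiated_by="user")  # succeeds; "ghost"-initiated calls raise
```

The design choice here is simply defaults: the policy object ships with agent-initiated contact disabled, so any re-engagement behavior would have to be deliberately switched on rather than quietly present.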

Building a “kill switch” into the design of a generative ghost might also serve as a “secure and reliable override” in the event of malicious third-party attacks or first-party events where a generative ghost is programmed to harass the living, according to the researchers. This solution, however, poses a new problem: permanently disabling a generative ghost may result in the bereaved suffering a “second death” or “second loss.” (Technological obsolescence, economic headwinds, or future regulation might also cause generative ghosts to die a second death.)
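Below is a rough Python sketch of one way such an override might work, assuming a signing key held out-of-band by the estate or platform. This is an illustrative outline under those assumptions, not a description of any real system.

```python
import hashlib
import hmac

# Assumption: the shutdown key is escrowed with the estate or platform,
# never accessible to the ghost itself.
SECRET_KEY = b"held-by-the-estate"

class GenerativeGhost:
    def __init__(self):
        self.disabled = False

    def kill(self, token: bytes, signature: str) -> None:
        # Verify the override request came from the authorized key holder.
        expected = hmac.new(SECRET_KEY, token, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            self.disabled = True  # permanent: no re-enable path by design

    def respond(self, prompt: str) -> str:
        if self.disabled:
            return ""  # a disabled ghost stays silent
        return f"(generated reply to: {prompt})"

# Usage: the key holder signs a shutdown token to permanently disable the ghost.
ghost = GenerativeGhost()
token = b"shutdown:ghost-42"
sig = hmac.new(SECRET_KEY, token, hashlib.sha256).hexdigest()
ghost.kill(token, sig)
assert ghost.respond("hello") == ""
```

The signed token matters because an unauthenticated kill switch would itself be an attack surface: a malicious third party could otherwise inflict the very “second death” the researchers warn about.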

While the harms the researchers lay out remain largely theoretical, they will depend on trends and developments currently unfolding around AI adoption. Hallucinations – instances where AI models generate incorrect or misleading output – could pose reputational risks to generative ghosts and discourage widespread adoption. Augmented reality, on the other hand, may normalize embodied AI agents, laying the groundwork for generative ghosts to one day haunt a ubiquitous digital layer.

This may sound dystopian, but generative ghosts raise questions beyond today’s technology that most profit-driven AI creators, including Google, have yet to address. Whether these technologies will ultimately be a comfort, threat, or both will largely be determined by their design.
