The Ethics of Deadbots

Look at that image. Isn’t it lovely? A grandmother tightly hugging her grandchild. It becomes chilling once you realize it is a visualization of some of the ethical issues raised by the digital afterlife industry.

AI is difficult enough to govern for the living, but governance becomes far more fraught in the digital afterlife industry. AI that promises to let users hold text and voice conversations with lost loved ones risks causing psychological harm and even digitally “haunting” those left behind if design safety standards are not put in place, according to research from the University of Cambridge.

Chatbots known as “deadbots” or “griefbots” can simulate the language patterns and personality of the dead using the digital footprints they leave behind, and such bots are already on the market.

AI ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence laid out three design scenarios for platforms that could emerge as part of the developing “digital afterlife industry,” showing some of the possible consequences of careless design. Their work is published in the journal Philosophy & Technology. It highlights the potential for companies to use deadbots to advertise products to users in the manner of a departed loved one, or to distress children by insisting a dead parent is still “with you.” Chatbots could also be used to spam surviving family and friends with unsolicited notifications, reminders, and updates about the services they provide, in effect leaving them “stalked by the dead.”

Platforms offering this kind of service already exist: “Project December” started out harnessing GPT models before developing its own systems, and apps such as “HereAfter” offer similar recreations. “MaNana,” by contrast, is one of the paper’s hypothetical scenarios: a conversational AI service that lets users create a deadbot simulating their deceased grandmother without the consent of the “data donor,” the grandparent herself.

In another of the paper’s scenarios, a terminally ill parent leaves a deadbot to help their young child grieve. The researchers suggest that although the deadbot may initially help as a therapeutic aid, the AI could begin to generate confusing responses as it adapts to the child’s needs, for example by depicting an impending in-person encounter. They recommend age restrictions for deadbots and meaningful transparency, so that users consistently know they are interacting with an AI. The researchers also call for design teams to prioritize opt-out protocols that allow users to terminate their relationships with deadbots in ways that provide emotional closure.
