In the 1990 fantasy drama “Truly, Madly, Deeply,” the main character, Nina, grieves the death of her boyfriend, Jamie. Jamie returns as a ghost to help her process her loss, but his reappearance forces her to question her memory of him and to accept that he may not have been as perfect as she remembered. In 2023, a new wave of AI-based “grief tech” offers, in various forms, the chance to spend time with loved ones after their death. The technology raises ethical concerns, though, along with questions about who owns these recreations and what they do to the people left behind.
While generative AI tools like ChatGPT and Midjourney dominate the conversation around AI, the larger ethical questions about the technology’s role in grief and mourning are going largely unexamined. One company, Deepbrain AI, already offers a service called Re;memory, which lets users create an avatar of themselves that, for a substantial fee, their family can later visit at an offsite facility. For now, Re;memory’s avatars have only one “mood” and don’t replicate the subject’s personality in any depth. Another service, HereAfter AI, builds audio chatbots that friends and family can talk to, hearing verbal answers and stories from the subject’s past. These chatbots are convincing, however, only until they meet a query they don’t understand.
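HereAfter AI doesn’t publish its architecture, but the failure mode is easy to see in a retrieval-style design, where recorded answers are matched against incoming questions and anything unfamiliar falls through to a canned response. Here is a minimal sketch in Python; the prompts, answers, and threshold are all invented for illustration:

```python
# Hypothetical sketch of a retrieval-style memorial chatbot, not
# HereAfter AI's actual design. Recorded answers are matched against
# incoming questions; anything below the similarity threshold gets a
# canned fallback, which is where the illusion breaks.
from difflib import SequenceMatcher

RECORDED_ANSWERS = {
    "where did you grow up": "I grew up in a small town outside Chicago.",
    "how did you meet mom": "We met at a friend's wedding in 1982.",
    "what was your first job": "I delivered newspapers before school.",
}

FALLBACK = "I'm sorry, I don't have a story about that."


def answer(question: str, threshold: float = 0.6) -> str:
    """Return the best-matching recorded answer, or the fallback line
    when no recorded prompt clears the similarity threshold."""
    q = question.lower().strip(" ?!.")
    best_score, best_reply = 0.0, FALLBACK
    for prompt, reply in RECORDED_ANSWERS.items():
        score = SequenceMatcher(None, q, prompt).ratio()
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply if best_score >= threshold else FALLBACK


print(answer("Where did you grow up?"))    # convincing: a recorded story
print(answer("What do you think of AI?"))  # unfamiliar: the canned line
```

Anything outside the recorded corpus drops straight to the fallback, which is exactly the point at which these chatbots stop being convincing.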
The realism of these avatars isn’t the primary concern; AI is improving constantly. The more pressing questions concern ownership, data security, and the effect on the people who grieve. Joanna Bryson, a professor of Ethics and Technology, compares the current wave of grief tech to the memorialization of friends on Facebook. Communicating with deceased friends through social media already carries a real emotional charge, she notes, and she worries that time spent with AI avatars could tip into unhealthy obsession. Bryson also suggests the technology could be put to unintended uses: someone might come to prefer an AI synthesis of their best friend and a celebrity over the actual, living friend.
Creating AI versions of living friends would seem to require their participation and consent, but the sheer amount of publicly accessible data could make that optional in the near future. Microsoft’s VALL-E can already clone a voice from just three seconds of source material. Recreating a person in AI also raises the problem of private information: anyone with access to the digital version of a deceased individual could abuse it, or the avatar itself could unintentionally disclose sensitive information its subject never meant to share.
Data protection is another significant concern. Even with the consent of the person being recreated, there is no guarantee that someone else won’t gain access to the digital version and misuse it. Current laws on digital likenesses focus mainly on living individuals, leaving the misuse of digital versions of the dead in a legal gray area. Transparency and accountability are crucial to protecting this data, but no storage method eliminates the risk entirely.
Despite the concerns over privacy, and over the cost and accuracy of today’s avatars, their development and use are inevitable. That doesn’t mean the future is doomed. Transparency, and the ability to verify what a digital avatar is and where it came from, can help mitigate the risks, and a company like Apple, known for its privacy-focused approach, could develop solutions that put data protection first. There will be immediate problems and challenges, but they are solvable ones, and responsible use of AI in creating digital avatars is within reach.
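Neither Apple nor today’s grief-tech vendors have announced such a scheme, but one concrete form verification could take is a signed manifest: the vendor cryptographically signs a record of what an avatar was built from, so relatives or auditors can later confirm that record hasn’t been altered. A minimal sketch in Python using the third-party cryptography package follows; the key handling, manifest fields, and workflow are all hypothetical:

```python
# Hypothetical sketch: a vendor signs a digital avatar's data manifest
# so that anyone holding the published public key can verify it later.
# This is an illustration, not any vendor's actual scheme.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor keeps the private key; the public half is published.
vendor_key = Ed25519PrivateKey.generate()
public_key = vendor_key.public_key()

# A manifest describing exactly what the avatar was built from.
manifest = {
    "subject": "Jane Doe (consent recorded 2023-01-15)",
    "sources": ["interview_audio.flac", "family_photos.zip"],
    "model_version": "avatar-v1",
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = vendor_key.sign(payload)

# Later, a relative or auditor checks that the manifest is untouched.
try:
    public_key.verify(signature, payload)
    print("Manifest verified; the avatar matches its declared sources.")
except InvalidSignature:
    print("Manifest has been altered since signing.")
```

A scheme like this wouldn’t stop data theft, but it would make tampering detectable, which is the kind of accountability the legal gray areas above currently lack.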
The rise of AI-based “grief tech” offers the chance to spend time with loved ones after their death, but it brings with it ethical questions about the ownership of these avatars, the security of their data, and their effect on the grieving process. Addressing those concerns, and establishing transparent and accountable practices, is essential to the responsible use of AI in creating digital avatars.