By: Ashwini Nagappan
If given the opportunity to talk to a deceased loved one, would you take it?
Technology has advanced from Amazon’s Alexa and Apple’s Siri to personalized “chatbots,” digital reflections of one’s self. Hossein Rahnama, a visiting scholar at MIT’s Media Lab, remarked that creating such a chatbot requires about one zettabyte (a trillion gigabytes) of personal data, an amount millennials will have accumulated by approximately 2070. After death, then, people could have a digital version of themselves still engaging with the living. As the digital afterlife materializes, our concepts of bereavement and death need refining.
People may turn to a chatbot to talk to a dead “person” rather than mourning, which blurs the distinction between life and death. Instead of both the human organism and the “person” dying together, the human organism dies while the “person,” here defined as the bearer of consciousness, stays alive. Machine learning allows the chatbot to predict how the dead individual would have reacted to current events by analyzing the opinions they expressed while alive. While this artificial intelligence (AI) stems from a real person, it is not the same as the physical being that once existed.
The article references an episode of the TV show Black Mirror that captures an android’s inability to reproduce the habits and mannerisms essential to its believability. A concern worth pondering is whether we are the same via technology as we are face-to-face. Rahnama’s AI program gathers information from Facebook posts, tweets, Snapchat messages, texts, and more, but many people use these social media platforms to put their ideal selves forward. The chatbot may therefore yield a virtual interface that is a fabricated version of our actual selves.
Additionally, we must consider whether this is an invasion of the dead’s privacy. Perhaps the AI program is acceptable if a person consents to the endeavor before death. If family members employ this technology without the consent of the dead individual, however, they infringe on her or his rights. Those zettabytes of data should only be given to people one trusts, because they contain valuable personal information, much as a doctor owes a continued duty of confidentiality after a patient’s death.

According to thanatologist and grief counselor Andrea Warnick, once a chatbot is created, it can be used in therapy to mourn the dead by having conversations about them, with them. It is similar to slowly weaning someone off cigarettes rather than going cold turkey; it gives people the chance to move forward at their own pace.
Still, using a chatbot for therapeutic purposes has different implications than using it to replace a loved one. We should be apprehensive about this technology because there is great value in tangible human connections that can evolve. Further, the chatbot can only reflect on what is happening in the world; it does not participate in or contribute to society.
Digital afterlife technology makes immortality appear desirable. It offers hope that death is not the end of one’s “existence” by challenging the limits of humankind. But bereavement is not so insufferable that we must create abridged, artificial versions of ourselves. In the end, there may be a better use for AI than a chatbot that interacts with your loved ones as you after you are dead.