An Open Letter To Grieftech Investors

Executive Summary:

Grief is an inescapable part of loss. Grieftech describes an emergent class of digital products that seek to preserve a person's essence after death through the extraction of human stories, which can then be interactively recalled through artificial intelligence chatbot experiences. Grieftech positions itself as remembrance reinvented, but it raises deep ethical concerns: the unknown psychological impact of synthetically prolonged sorrow, changes in the ways we relate to the dead, legacy data privacy, and the commercial rights to likeness left to survivors. As custodians of remembrance beyond what's stored on a product's servers, it is essential that we build the deeply human dimensions of responsibility, care and accountability into the decision-making processes that shape not just these products' development, but also their distribution. Grieftech developers bear the same ethical responsibilities to their future users as they do to their current content creators.

Grieftech platforms must be held accountable for their own endurance, ensuring that existential risks such as large language model collapse, algorithmic bias and commercial failure are actively mitigated. These are products explicitly intended for an unknown future, and they bear a deeply human custodial responsibility to operate in ways that sustain them into that future. Grieftech runs on the extraction of the oil of human essence: the story. As investors, you control the means by which many of these personal experiences reach an audience in the future. In your hands you hold both great power and great responsibility. Exercise it.

Dear Investors,

Over the past year, many of us have engaged in an artificial intelligence arms race to integrate generative tools into our products, in commercial and brand-driven efforts to appear innovative. Those choosing not to participate, or proceeding with caution, risk being left behind as generative tools vacuum up audience and attention. Prompting has entered our vocabulary, and it's getting harder for any of us to determine what's real. But what is real for many is the fear of the future. Of what these generative tools are going to do to us as citizens. Many have been vocal about job loss and human disintermediation. Some are calling for a developmental pause. Legislation is already years behind. Our stewardship of the present poses very real risks to the future.

Artificial intelligence is reshaping social norms, questions of identity and citizenship, privacy, and the boundaries we establish with others. As it reshapes who we are as individuals, and who we are to each other, it is even reframing our understanding of the space between life and death. In the emerging field of grieftech, our capacity to digitize the storytelling of human experience, reducing it to a series of prompted responses stored on a remote server for future recall, raises deep ethical questions of faith, prolonged bereavement and the masking of pain as a fundamental part of human experience. It surfaces deeply problematic, culturally nuanced questions about the difference between can and should, especially when implementing these technologies carries the often unintended consequence of reinforcing bias, discrimination, exclusion and hierarchy.

This letter is a plea to ensure the endurance of your users’ futures through responsible custodianship of the present.

Grief is a critical part of authentic human experience. But learning to live with digital facsimiles that mask sorrow arrests the opportunity to go deeper into our relationship with what we believe happens next. It arrests a pain necessary to embracing the reality of a loved one lost, and this rich relational aspect of human experience cannot, and should not, be reduced to a set of recalled responses. Pain isn't good in itself, but it is a necessary part of human experience, because ultimately pain comes from love. If we fill our time with what is really just a coping mechanism, we lose the opportunity to move on. Humans are deeply communal beings. We crave interaction with others, and the friendships we carry through life are more than an exchange of information. Too often in digital spaces of innovation we have grown accustomed to numbing our pain with scrolling serotonin rather than embracing it as part of what it means to be alive. These are all-too-human problems that should not be masked by synthetic digital solutions, however immediately intoxicating those solutions may appear.

We are not advocating for a world where technological innovation does not seek to help humans through deeply emotional experiences. We believe technology has much to offer grief counseling and our relationship with those no longer with us. But we are advocating for greater responsibility towards the necessary emotional work that such products can mask or prolong. We seek kindness and care, not faster, optimized empathy. We hold that the pain of loss comes from embracing a slower feeling of love; that humans are not reducible to exchanges of information or the digitization of their stories; and that facsimiles of loved ones bear a deep responsibility, as in the medical field, to do no harm.

Our research, across four papers, makes the case for this responsibility along four dimensions: citizenship, extraction, obligation and consequence. Grieftech experiences transcend our historical understanding of what it means to be alive. They preserve human experience for interactive recall by those left behind at some point in the future. Neither those creating the content nor those distributing it are the end users.

The ethical issues surfaced by fear of the future are nothing new, especially where dystopian ideas of artificial intelligence accelerating beyond our control are concerned. Discomfort with the very idea of digital memory and preservation beyond death taps into the real human problems we already have with our own mortality. Resurfaced memories, feelings of nostalgia, and the ability to digitally recall the events of our lives and the people we love are powerful motivators of engagement and attention. The means by which we create meaning from our individual experiences of the world surface questions of what's possible, what's culturally defined as ethical, and ultimately what's legal. Individuals may lack the language to articulate this concern, but we, as both users and creators, still possess the agency and responsibility to ask whether the exchange is equitable. All of this collides inside grieftech experiences, which, at least for now, are often constructed from a Western, individualistic and affluent perspective.

In the United States, existing legislation establishes that an individual's privacy has financial value and is ultimately negotiable. So while privacy is for sale, it also comes bundled with large ethical questions of ownership. Who owns an individual's data, likeness, engagement history or network? Does your product bequeath the eternal use of another's likeness to their descendants, or to others? Existing legal protections already do little to prevent the abuse of populations we agree should be protected, especially when those populations cannot represent themselves. We often willingly say yes to digital identity harvesting because we value the exchange, at least as we understand it, even when we lack the language to express our consent. But even when someone has given up their privacy in advance, and that consent is present, within grieftech the person who has died is not the end user. Guidance proposed by the IEEE recommends that individuals have the right to access services allowing them to create a trusted identity to control the safe, specific, and finite exchange of their data. But it omits consideration of posthumous use, and of who may consent to it, whether surviving loved ones or third-party providers.

What are those creating these experiences obligated to do? The synthetic feeling of keeping someone alive during a period of grief may have the noble aspiration of easing the pain of loss, but it also carries the ethical risk of prolonging grief itself. Grieftech runs on the personal disclosure of intimate life stories and thoughts, preserved for survivors, and it is deeply empathic in substance and intention. Grieftech aspires to endure. Therein lies the obligation and responsibility of custodians of remembrance, well beyond the recurring monthly subscription payment. The legacy of the deceased and the voices of those who survive must be prioritized in matters of ethical conflict and permissions negotiation. The voice of the end user, in this case an end user in perpetuity, is what must be protected, and that protection must be built into these experiences as a matter of critical importance.

With any digital product explicitly intended for long-term use, the future is often unclear, and the propensity for unintended consequences high. Those consequences may include the psychological impact of synthetically prolonged sorrow, changes in the ways we relate to the dead, legacy data privacy and the rights to likeness left to survivors, and the right to commercialize the stories we leave behind for others. Responsible AI must be deeply ingrained not only in these products themselves, but also in the economic mechanics that continue to fund them. Sustainable processes, much like the existing frameworks for accessibility or translation that investors will already be familiar with, need to be wrapped around the development process itself. Ethically responsible governance of the distribution and responsiveness of artificial intelligence services is deeply shaped by the all-too-human problems of unintended consequence, fear of the future, bias, and flawed, culturally nuanced decision-making.

So what happens next? Armed with these ethical insights, grieftech platforms have a deep responsibility to endure. As digital custodians of human lives, they must ensure that emergent existential risks such as large language model collapse are held at a distance. Use this research to guide your decision-making and to shape your roadmaps. These are products intended for the future, not the present, and they bear a deeply human responsibility to operate in ways that sustain them into that future. As investors, you control the means by which many of these experiences get developed. In your hands you hold both great power and great responsibility. Exercise it.

In hope,
Matt Shadbolt, The University of Pennsylvania
