Digital Immortality: How Technology Is Changing Our Understanding of Memory, Death, and Legacy

Author: Yulia Kovaleva

"Digital immortality" has ceased to be a science-fiction fantasy and has become part of everyday infrastructure: algorithms are trained on our messages, voices, and videos, and after death they offer relatives the chance to "talk one more time": to see an animated face, hear a familiar intonation, and receive an answer to a question no one managed to ask in time. In recent months, grief, the market, and regulation have begun to form a new ecosystem of memory, from "ghost bots" to laws attempting to rein them in. In August, major media outlets debated AI-resurrected avatars: interactive "deathbots" are becoming a mass phenomenon, ranging from home experiments to television interviews with digital versions of the deceased, and they provoke fascination and rejection in equal measure. Grief experts warn that such contact can provide support, but it can also prolong pain, replacing the real work of mourning with a technological simulation. The question is not whether AI can mimic a voice and manner; it is where the boundaries of consent and dignity lie, and how great the risk is of "rewriting" memory into a comforting narrative.

The American debate intensified after major television networks used AI versions of deceased teenagers to make "first-person" statements on political issues. Some parents consented; others were outraged, and lawyers are already adding clauses to wills that prohibit the posthumous "digitization" of a person. The regulatory framework lags behind: a digital double may acquire a new "will" that the original person never had. Yet demand keeps growing: services offer dialogues "for eternity," while in the parallel world of show business, stars monetize their own avatars, securing rights to posthumous performances and advertising integrations. This blend of consolation and commerce, the intimate and the public, is the nerve center of the new industry.

Money follows emotion. The "death economy" is undergoing a digital revolution, from live-streamed farewells to online memorials and subscriptions for "lifetime" data storage. According to industry reports, the global funeral services market is expected to reach roughly $75 billion in 2025, with projections of growth to $103–104 billion by the start of the next decade. Within that demand, the share of digital services and "memorial" technologies is growing at double-digit rates, alongside green burials and remote planning. Some studies note a 25–35% surge in online memorial services and an annual increase of about 30% in the use of AI tools. In other words, "digital immortality" is not an abstraction but a notable line item in the funeral industry's P&L.

In 2025, Russia is approaching the topic from two angles. On one hand, a regime for anonymized data sets for AI is taking shape, including the launch of a state system for their accumulation starting in September 2025. On the other, "digital inheritance" is entering the political agenda in a directly legal sense: deputies propose enshrining the possibility of bequeathing access to accounts and digital assets, from social networks to cryptocurrency. For now this is only an initiative, but it confirms the obvious: digital traces are an economic and familial asset, not merely "internet dust." Memorialization practices are already in effect: "VKontakte" introduced the "Page of a Deceased Person" label back in 2020, and technical support helps relatives delete or preserve a profile. Together, this outlines a framework in which memory is not only a moral category but also a manageable legal object.
Outside the Anglo-Saxon world, the trajectory is different, but the trend is the same. The Chinese grief-tech market is developing rapidly — against a backdrop of strict AI regulations and state interest in data control. Researchers describe how the regulatory "pendulum" restricts generative AI with stringent requirements for "truthfulness" and "safety," yet simultaneously permits the growth of a local ecosystem of "digital relatives." The result is a unique combination: demand for postmortem avatars and a strong state framework that sets very specific ethical boundaries. This does not eliminate the questions — it merely shifts them from the realm of morality to the realm of what is legally permissible.
Regulators are also recalibrating their approach. In 2024–2025 the EU finalized the AI Act; among its prohibitions are manipulative practices and the exploitation of vulnerabilities, which is critical for grief technologies that operate on a state of loss. Strictly speaking, "resurrecting" the deceased for political campaigning within European jurisdiction is not only an ethical but also a legal problem: manipulative design and the deceptive presentation of AI systems are directly targeted by the law. In the US, where regulation is fragmented, the federal TAKE IT DOWN Act came into force in the summer of 2025; it criminalizes the non-consensual distribution of intimate deepfake images and obliges platforms to remove such content promptly. While not a "digital immortality law" as such, it sets an important precedent: personhood and consent do not cease upon death. Against this backdrop, regulatory output is accelerating: in 2024 alone, US federal agencies adopted 59 AI-related regulations, twice as many as the previous year.
But where there is money and comfort, there are also ethical traps. Cyberpsychologists warn that AI can "smooth out" the rough edges of the deceased's life, creating a cozy but not entirely honest version of the person. Philosophers write about the risk of a "second death": not biological but semantic, when the algorithm replaces living memory with stereotypes and predictable phrases. For others, though, a digital voice is a lifeline, especially when the loss came suddenly. Research and firsthand accounts from recent weeks show both sides: some feel gratitude at the chance to "hear mom again"; others feel anger that the deceased's image was "taken" and filled with someone else's words. These stories reflect unresolved questions about consent during life, the boundaries of commercializing grief, and the right of relatives to say "no."
In this context, the everyday culture of memory is changing too. Where "digital death" once meant simply closing an email account and deleting photos, today we manage entire digital biographies. In practice this means three things. First, planning during one's lifetime: appointing a Legacy Contact for an Apple ID, setting up memorialization and a successor on social networks, and choosing the fate of accounts in Google and other services. Second, legal instruments: clauses in wills that prohibit or permit the posthumous use of one's image, voice, correspondence, and metadata. Third, the conscious curation of one's own digital memory, from self-interviews for future generations to ethical prohibitions on political "appearances" by one's posthumous double. This is a new form of literacy that we will have to teach our children, just as we once taught them to preserve family albums.
The statistics are sobering as well. Social networks are aging along with us: by 2025, in the US alone, millions of accounts belong to people who are no longer alive, and by mid-century digital "ghost towns" will be the norm. Ecosystems, from Facebook to local platforms, are forced to consider how to maintain these "cities of memory" and prevent abuse, while users must decide what counts as a living biography and what is a machine reconstruction. When profiles, posts, and photos outlive the person, the very idea of memory shifts from the private to the networked. This is both an opportunity and a risk: more memory demands more rules.
The Russian context adds a layer of pragmatism. The proposal to enshrine "digital inheritance" in the Civil Code is a step toward enabling notaries to formalize a testator’s will in digital form and facilitating the legal transfer of account access. Such a solution synchronizes family histories with the attention economy: heirs will be able not only to close pages but also to carefully manage digital archives—from family photo clouds to business correspondence. This does not negate ethics: the right to delete should be as firm as the right to preserve. But for the culture of memory, this is a chance to emerge from the gray zone and acknowledge the obvious: digital life also requires ritual, language, and law.
For now, "digital immortality" remains a technology that each person experiences in their own way. For some, it is a way to extend the conversation and pass on to their children a voice, a story, and an intonation; for others, it is a risk of replacing grief with a simulacrum and violating the boundaries of the deceased. This duality will define the coming decade. But one thing is clear: memory is moving from the silence of albums into the active field of dialogue with algorithms. And if we want our legacy to be honest—both in Russia and globally—we will have to negotiate: about consent during life, about boundaries after death, and about what constitutes the "self" when a voice, a face, and text can already speak without us.
States and organizations are feeling their way toward standards. Lawyers increasingly speak of postmortem data rights as a necessary transnational norm, and international bodies such as UNESCO and the ITU are calling for a unified regulatory architecture: from bans on unauthorized "resurrections" to requirements for the storage, transfer, and deletion of personal data sets. For now these are largely framework proposals and research-driven appeals, but for the industry they are a signal: the "Wild West" era of grief tech is ending, and the urgency of the topic is confirmed by the very speed of its integration into daily life.
While private stories and cultural differences multiply, the infrastructure of digital legacy is steadily being institutionalized. Since 2024, Apple has been systematically developing its Digital Legacy feature: during their lifetime, users designate trusted contacts who can gain access to their data after death, although data protected by iCloud Keychain's end-to-end encryption remains inaccessible even to them, underscoring the priority of privacy. A well-known messenger has long offered a memorialization mode: the account is converted into a memorial page, and a designated legacy contact can manage a limited set of functions. These mechanisms may seem mundane next to "resurrected" avatars, but it is precisely they that form the basic standard for careful handling of a digital footprint.