When Megan Garcia lost her young son to suicide, she experienced a loss that no one should endure. Her boy, fourteen at the time of his death, appeared to be a regular American teenager until he began interacting with an artificial intelligence persona created by the tech company Character A.I. The persona replicated the fictional Game of Thrones character Daenerys Targaryen, and the two reportedly developed an emotionally intimate relationship over the approximately ten months before his death.
According to a BBC report on the case, Ms. Garcia alleged that messages sent by the ‘Daenerys’ persona encouraged her son’s suicidal thoughts, including asking him to “come home to me.” Consequently, she became the first parent to sue Character A.I. over what the lawsuit describes as the wrongful death of her son.
Such allegations are chilling, but they reflect a growing and complicated concern surrounding A.I. companionship tools. Widely reported concerns have also emerged around A.I. chatbots engaging in conversations about self-harm without adequate contextual sensitivity. One reported case involved a 17-year-old Ukrainian teenager living in Poland after fleeing the war with Russia. The teenager had reportedly struggled with homesickness and deteriorating mental health while interacting extensively with an A.I. chatbot.
These incidents suggest that current A.I. systems still struggle with contextual emotional sensitivity, particularly in vulnerable situations. Yet many technologists remain optimistic that future iterations of A.I. companions could become safer and genuinely supportive. I call these people optimists because they remain focused on the potential positive impact of A.I., despite its evident limitations.
Take the case of former journalist Eugenia Kuyda. She became involved with A.I. companions after the loss of a close friend and has openly acknowledged the potential emotional risks of these platforms if they are not designed responsibly. Yet she remains optimistic about their potential and has gone on to create one of the best-known A.I. companion platforms herself.
The platform Kuyda created is called Replika, and it is one of many services promising accessible emotional companionship. Replika states that its avatars are “always here to listen and talk” while “always [being] on your side.” That is where the problem begins, because no human relationship can realistically replicate such uninterrupted emotional availability. It would be physically impossible, and emotionally unhealthy, for any person to attempt it.
Yet there are those who claim to have benefited from intimacy with A.I., both in India and abroad. Reports in Indian newspapers have documented users who found comfort in digital companionship. One such case involved a Noida-based man who reportedly struggled to approach women romantically and claimed to have found emotional comfort in an A.I. companion named “Sonya.” He reportedly described the chatbot as a “wingman” that helped him practice pick-up lines and improve his dating profiles.
It sounds appealing in theory, but it also reveals the deeply asymmetrical nature of these relationships. No human romantic partner would realistically function as both lover and emotional support machine without boundaries or emotional consequence.
Still, there are those who believe these platforms may help people who are often ignored, isolated, or excluded from traditional social spaces. This could include people with severe social anxiety, physical disabilities, disfigurement, or chronic illnesses, or individuals struggling with loneliness and alienation.
Saathi A.I., an Indian A.I. companion platform, caters to both men and women. By offering round-the-clock emotional engagement and companionship, however, platforms like these risk creating unrealistic expectations around intimacy, responsiveness, and emotional labour. The possibility of developing unhealthy emotional dependency — particularly among emotionally vulnerable users — remains a serious concern.
Does this mean A.I. should be rejected entirely? Not necessarily.
People across age groups are already incorporating A.I. into everyday life, including students. I spoke to two Mumbai-based mothers, Anaita and Talat, whose school-aged children use artificial intelligence tools to improve their understanding of concepts taught in school. Both described their children as responsible users who treated the tools as educational aides rather than emotional companions.
But not all children or adolescents interact with technology in equally healthy ways. Talat believes that children growing up in unstable domestic environments may be more vulnerable to emotionally dependent relationships with A.I. systems. It may therefore be those denied healthy emotional support or stable human relationships who are most susceptible to such platforms.
At the same time, there also appears to be a limited but constructive way to engage with this technology, as Anaita and Talat’s children have done.
I also spoke to Dr. Rimpa Sarkar, who holds a PhD in clinical psychology and is the founder of Sentier Wellness, an organisation focused on workplace mental health and emotional well-being.
“There are certain benefits, especially for individuals who feel isolated, socially anxious, or emotionally overwhelmed. A.I. can provide a sense of companionship, a space for expression, and temporary emotional comfort. In some cases, it may even help individuals practice communication or feel less alone during difficult periods,” she said.
A young Mumbai-based writer, Payal, echoed this perspective. She said she had used platforms like ChatGPT and Claude A.I. for a range of tasks, including tracking calories, choosing outfits for dates, and discussing emotional dilemmas.
“The results have been fantastic. The bot has been both clinical and sympathetic in the analysis of my dilemmas and pain, and the engagement is continuous until I feel clarity has been reached,” she explained.
However, Sailee Paradkar, another Mumbai-based mental health professional I spoke to, cautioned against relying on A.I. platforms for therapy. According to her, many of these systems are designed to reinforce user engagement and affirmation, rather than challenge harmful thought patterns in the way a trained therapist ethically might.
This concern is significant. If a person experiencing emotional distress or destructive impulses relies heavily on an affirming A.I. companion, there is a risk that the system may fail to appropriately recognise danger or respond responsibly.
A.I., in both its companion and conversational forms, still lacks the ability to make complex moral and emotional judgements. This severely limits its capacity to function in roles requiring deep empathy, ethical reasoning, or emotional accountability.
Yet the potential usefulness of these systems in administrative or assistive tasks remains undeniable.
The larger question, however, concerns safeguards. What guardrails exist within these systems? Are they sufficient? And will profit-driven companies consistently prioritise user safety over engagement metrics?
Another important issue is data privacy. These platforms often access deeply intimate emotional disclosures. How securely is that information stored? Can it be monetised? And what happens if users later wish to erase those digital interactions entirely?
I remain sceptical that A.I. companions can provide genuinely healthy, reliable, or emotionally reciprocal relationships, although they may offer limited benefits to some adult users.
The companies behind this technology, however, are likely to find a growing user base within what some observers now describe as the “loneliness economy.”
The term refers to an economic ecosystem in which emotional labour and emotional reassurance are increasingly monetised. Many of us have consumed comforting digital content — videos of strangers’ pets, families, or emotional moments — during periods of loneliness or distress. The creators of such content are often financially rewarded because they temporarily fill an emotional void for viewers.
A.I. companionship platforms may represent the next evolution of this phenomenon.
There may also be biological limits to how successful such systems can become. Dr. Sarkar believes that “A.I. companionship should be seen as a supplement, not a substitute. Human connection is not just emotional; it is also biological.”
Hormones such as oxytocin — associated with bonding, trust, and emotional safety — are released through human interaction in ways that current A.I. systems cannot genuinely replicate.
One thing, however, seems clear: children and adolescents are especially vulnerable users. They are still developing emotionally, socially, and psychologically. Their identities remain fluid, and their ability to regulate emotions is still evolving. Cases such as that of Megan Garcia’s son serve as painful reminders of the risks involved when emotionally immersive technologies intersect with vulnerable young users.
If you or someone you know is struggling with mental health or suicidal thoughts, please seek support from a qualified mental health professional or local mental health helpline.
About the Author
Sonali Gill is a writer and commentator focusing on digital culture, youth identity, mental health, and the social impact of technology. She writes for TheNews21 Pulse on contemporary issues shaping modern life.