
You’re Not Just Using AI. You’re Trusting It. Science Says That’s Complicated

Millions of people now interact with AI every day. Research suggests they may be doing far more than asking questions. They may be forming something that resembles a relationship.

Mallari Shroff

AI Policy & Social Data Science Researcher | Dublin, Ireland 

Editor’s Note

This article is part of the Policy Analysis series of TheNews21, examining how emerging technologies are reshaping governance, behaviour, and society.

TECHNOLOGY & SOCIETY

Millions of people now talk to AI every day, and research suggests many are doing far more than asking questions: they are forming something that looks a lot like a relationship. The framework scientists have used for three decades to predict technology adoption was never designed for that.

By Mallari Shroff

A Framework Built for a Different World

When researchers first began studying whether people would adopt new technology, the systems in question were word processors and database tools. The question was a practical one: would employees bother learning them?

That was 1989. Fred Davis at the University of Michigan built what became known as the Technology Acceptance Model, or TAM. It rested on two predictors: whether the technology seemed useful to a person’s work, and whether it seemed easy to learn. The model was measurable, elegant, and for decades, it held up. It became one of the most cited frameworks in social science research.

The framework that shaped three decades of technology adoption research began with a straightforward observation about task-oriented software. Davis (1989) found that workers adopted systems when they believed those systems would improve their job performance and when the learning curve felt manageable. In 2003, Venkatesh and colleagues expanded TAM into the Unified Theory of Acceptance and Use of Technology, known as UTAUT. That update added social influence, meaning pressure from colleagues and managers, and organisational support, meaning the infrastructure and training available to users, as further predictors.

Together, these models explained adoption across a wide range of workplace systems and generated hundreds of empirical studies. They gave designers, managers, and policymakers a workable answer to a persistent question: will people use this system, and if not, why not?
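To make the shape of these models concrete, the two-predictor core of TAM can be sketched as a simple logistic model. This is an illustration only: the function name, scales, weights, and intercept below are hypothetical, not estimates from Davis (1989), though the weighting reflects his finding that perceived usefulness was the stronger of the two predictors.

```python
# Illustrative sketch only. TAM is normally estimated from survey data with
# regression or structural equation modelling; this toy logistic model just
# shows the shape of the prediction. Weights and intercept are invented.
import math

def tam_adoption_probability(perceived_usefulness, perceived_ease_of_use,
                             w_pu=1.2, w_peou=0.8, intercept=-7.0):
    """Predicted probability of adoption from TAM's two predictors.

    Both predictors are scored on a 1-7 survey scale. The default weights
    make perceived usefulness the stronger predictor, reflecting Davis's
    finding, but the numbers themselves are hypothetical.
    """
    score = intercept + w_pu * perceived_usefulness + w_peou * perceived_ease_of_use
    return 1.0 / (1.0 + math.exp(-score))  # logistic link: squashes score to (0, 1)

# A system that seems both useful and easy to learn is predicted to be
# adopted; one that seems neither is not.
print(round(tam_adoption_probability(6, 6), 2))  # high, close to 1
print(round(tam_adoption_probability(2, 2), 2))  # low, close to 0
```

UTAUT's additional predictors, social influence and organisational support, would enter the same way: as further weighted terms in the score. What the sketch makes visible is what the article goes on to argue: nothing in this structure has anywhere to put emotional attachment.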

Today, hundreds of millions of people talk to generative AI systems that write their emails, answer medical questions, and in some cases become the first voice they hear in the morning. Researchers studying human-AI interaction have begun to question whether TAM captures what is actually driving the adoption of these systems; among them are Ella Glikson and Anita Williams Woolley, whose comprehensive 2020 review appeared in the Academy of Management Annals.

Based on the available evidence, the answer appears to be: not entirely.

WHY THE MODEL FALLS SHORT

What TAM and UTAUT do measure: Whether a system seems useful and easy to use. Whether colleagues use it. Whether organisational support is in place. These remain relevant predictors for task-oriented software.

What they do not measure: Whether users perceive the AI as human-like. Whether they form emotional attachments to it. Whether they trust it in an affective rather than a rational sense. Glikson and Woolley (2020) identify these psychological factors as significant drivers of how people actually relate to and rely on AI systems.

Why it matters: Design decisions and regulatory frameworks built on adoption models that miss these variables may systematically overlook the mechanisms most likely to produce over-reliance, misplaced trust, and harm.

When the Tool Starts to Feel Like a Friend

The psychological story at the centre of this debate predates AI by decades. In 1956, sociologists Donald Horton and Richard Wohl published a paper in the journal Psychiatry on something they had observed in television audiences. Viewers, they found, acted as though they knew on-screen personalities personally. They felt genuine affection for talk show hosts and news anchors. When a favourite performer left the air, some experienced real distress. Horton and Wohl called these para-social relationships: one-sided bonds that followed the same emotional logic as face-to-face relationships, even though the performer had no idea the viewer existed.

The technology they were describing was a television set. Interaction ran in one direction only. Now consider what changes when that one-sidedness is removed. Generative AI systems do respond to the individual user. They adapt their language and retain context within a conversation. That shift, researchers argue, may produce what they describe as para-social bonding with the AI itself: a form of one-sided emotional attachment in which the user develops a sense of familiarity and investment, begins to feel understood by the system, and extends to it the kind of trust more commonly reserved for people. Horton and Wohl identified this dynamic in audiences watching a screen. The concern now is that it may be unfolding in users talking to a chatbot.

The Science of Social Responses to Machines

The experimental evidence supporting this concern is substantial. In the 1990s, Stanford researcher Clifford Nass and colleague Youngme Moon ran a series of studies that produced results most people find counterintuitive. They put participants in front of computers and observed what happened. People were polite to the machines. They applied gender stereotypes to computer voices. They felt more connected to systems that appeared to share their personality. They behaved, in short, as though the computers were social actors, and they did this even after being told explicitly that they were not.

Nass and Moon published their findings in the Journal of Social Issues in 2000. They attributed the behaviour to mindlessness, a term borrowed from psychologist Ellen Langer: the activation of social scripts so deeply ingrained that they fire automatically, below the level of conscious thought. A person can know perfectly well that a computer is not human and still respond to it as one.

Those experiments involved relatively simple text-based interfaces from the 1990s. Generative AI is a different category of system entirely. Where an early interface might trigger a brief social response through a personalised greeting or a conversational prompt, a system that sustains extended, contextually adapted, emotionally attuned dialogue engages users in something that more closely resembles an ongoing relationship. The mechanism Nass and Moon identified is the same. The conditions for activating it are far stronger.

This connects directly to a phenomenon that psychologists call anthropomorphism. Epley, Waytz and Cacioppo, writing in Psychological Review in 2007, define it as the tendency to attribute human characteristics, motivations, intentions, or emotions to non-human agents. It is not a sign of confusion or cognitive failure. It is a deeply rooted tendency that activates automatically when an agent behaves in ways that resemble human interaction. The more conversational, responsive, and linguistically fluent an AI system is, the more readily users are likely to anthropomorphise it, and the more that tendency shapes how much they trust it.

Two Kinds of Trust, and Why the Difference Matters

In 2020, Glikson and Woolley reviewed 20 years of empirical research on human trust in AI. Their review, published in the Academy of Management Annals, found that trust in AI systems tends to develop along two distinct pathways, and that these pathways are driven by different factors.

Cognitive trust is built through evidence. A user observes a system performing correctly over time, develops a sense of its reliability, and calibrates their reliance on that basis. This is rational, evidence-based, and relatively stable. It is also, broadly speaking, the kind of trust that TAM and UTAUT implicitly assume.

Emotional trust works differently. Glikson and Woolley found that it is driven primarily by anthropomorphism: the perception that the AI has human-like qualities. It does not require a track record of accurate performance. It builds through interaction that feels warm, responsive, and socially engaged. A user can develop strong emotional trust in an AI system before they have any real sense of what the system can and cannot do.

The practical gap this creates is significant. TAM and UTAUT measure cognitive predictors. They have no construct for emotional trust and no variable for anthropomorphism. If emotional trust is a meaningful driver of how people relate to and rely on generative AI, then the dominant adoption model is missing one of the factors that matters most.

When Trust Goes Wrong

The concern here is not that people trust AI. Trust is a practical necessity. The concern is miscalibration: a mismatch between how much a user relies on a system and what the system can actually do.

Over-trust occurs when users rely on a system beyond its real capability. Under-trust occurs when a reliable system is dismissed or underused. Lee and See (2004) describe the design goal as appropriate reliance: trust calibrated to actual performance.

Over-trust is the more dangerous failure, because its costs tend to surface only after the system has already been wrong.
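The distinction can be made concrete as a calibration check: compare how heavily a user relies on a system with how often the system is actually right. A minimal sketch, in which the function name, 0-1 scales, and tolerance threshold are all hypothetical:

```python
# Illustrative sketch only: trust calibration as the gap between a user's
# reliance on a system and the system's actual reliability. The function
# name, scales, and tolerance threshold are hypothetical.

def calibration(user_trust, system_reliability, tolerance=0.1):
    """Classify trust as calibrated, over-trust, or under-trust.

    user_trust: how heavily the user relies on the system (0-1).
    system_reliability: how often the system is actually right (0-1).
    """
    gap = user_trust - system_reliability
    if gap > tolerance:
        return "over-trust"    # reliance exceeds capability: the dangerous case
    if gap < -tolerance:
        return "under-trust"   # a reliable system goes unused
    return "calibrated"

print(calibration(0.9, 0.6))  # over-trust
print(calibration(0.3, 0.8))  # under-trust
```

Emotional trust matters here precisely because it can raise the first input without any corresponding change in the second, widening the over-trust gap.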

Design Choices Are Not Neutral

The features that make AI systems feel responsive and relatable are not neutral design flourishes. Human-like language, sustained conversational context, and an attentive tone activate the psychological mechanisms described above: anthropomorphism, para-social bonding, and emotional trust.

Under those conditions, users begin to experience something closer to a relationship than a tool interaction.

THE THREE MISSING VARIABLES

Perceived anthropomorphism: the degree to which users attribute human qualities, intentions, or emotions to the system.

Para-social bonding tendency: the user's propensity to form a one-sided emotional attachment to the system.

Emotional trust: trust built through warm, responsive interaction rather than demonstrated performance.

Glikson and Woolley (2020) identify these as central to understanding how AI is actually adopted and relied upon.

What Comes Next

The frameworks used to understand technology adoption were built for a world where technology behaved like a tool.

That world has changed.

People are not just using AI.

They are relating to it.

Until the science catches up, the risks of those relationships will remain largely unmeasured.

Sources & Academic References

Davis, F.D. (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 13(3), 319-340

Venkatesh, V., Morris, M.G., Davis, G.B. & Davis, F.D. (2003) User acceptance of information technology: Toward a unified view. MIS Quarterly. 27(3), 425-478

Mayer, R.C., Davis, J.H. & Schoorman, F.D. (1995) An integrative model of organizational trust. Academy of Management Review. 20(3), 709-734

Lee, J.D. & See, K.A. (2004) Trust in automation: Designing for appropriate reliance. Human Factors. 46(1), 50-80

Horton, D. & Wohl, R.R. (1956) Mass communication and para-social interaction. Psychiatry. 19(3), 215-229

Nass, C. & Moon, Y. (2000) Machines and mindlessness: Social responses to computers. Journal of Social Issues. 56(1), 81-103

Epley, N., Waytz, A. & Cacioppo, J.T. (2007) On seeing human: A three-factor theory of anthropomorphism. Psychological Review. 114(4), 864-886

Glikson, E. & Woolley, A.W. (2020) Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals. 14(2), 627-660

Hancock, J.T., Naaman, M. & Levy, K. (2020) AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication. 25(1), 89-100

Sundar, S.S. (2020) Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication. 25(1), 74-88


Mallari Shroff
Mallari Shroff is a researcher and analyst specialising in social data science, AI policy, and computational social science. She holds an MSc in Social Data Science from University College Dublin and a BA in Psychology. She has contributed to EU-funded research on artificial intelligence in education through the Horizon Europe GenAI4ED project at Trilateral Research. Her work examines how emerging technologies shape policy, education, and society.
