India is emerging as one of the world’s largest users of generative AI systems, even as it lacks a dedicated regulatory framework. What happens when adoption runs ahead of governance?
Mallari Shroff
AI Policy & Social Data Science Researcher | Dublin, Ireland
Editor’s Note
This article is Part II of TheNews21’s Policy Analysis series on artificial intelligence. In this piece, researcher Mallari Shroff examines how rapid AI adoption in India is unfolding in the absence of a clear regulatory framework, and what that means for trust, governance, and digital sovereignty.
India presents a case that sits uncomfortably at the intersection of scale and uncertainty. Industry reports consistently identify the country as home to one of the largest and fastest-growing user bases for generative AI systems. Millions of users are interacting with conversational models in professional, educational, and personal contexts, often integrating these systems into everyday decision-making. Yet this rapid adoption is unfolding in the absence of a dedicated legal framework governing artificial intelligence.
This gap is not merely a question of policy timing. It raises a more fundamental issue about the relationship between use and oversight. In most areas of technology governance, regulatory structures evolve alongside or shortly after adoption, shaping how systems are deployed and what constraints are placed on them. In the case of generative AI in India, usage is scaling first, while governance remains diffuse, distributed across existing legal frameworks that were not designed for systems that simulate human interaction.
The result is that users are developing patterns of reliance on AI systems without a clear understanding of how those systems are designed, what limitations they carry, or whose interests they ultimately serve. This becomes particularly significant when viewed through the lens of trust. As earlier research has shown, trust in AI does not develop solely through evidence of performance. It is also shaped by perception, interaction, and the degree to which systems appear socially responsive. In a context where regulation is limited, these psychological dynamics become even more consequential.
India’s technology ecosystem adds another layer to this dynamic. The country’s software and services sector has historically been positioned as a global backend for technology development, but the current wave of AI adoption is largely dependent on models developed by foreign firms. The systems being used at scale are not domestically controlled, and the underlying architectures, training data, and optimisation processes remain opaque to most users and institutions within the country. This raises questions that go beyond individual use and move into the domain of digital sovereignty.
Dependence on external AI systems creates an asymmetry of knowledge and control. Users interact with systems that are locally embedded but globally governed. Decisions about how these systems behave, what safeguards are built into them, and how they evolve over time are made outside the regulatory and institutional structures of the country in which they are widely used. In such a scenario, trust becomes not only a psychological phenomenon but also a geopolitical one.
At the same time, the social context in which AI is being adopted in India is distinct. Patterns of technology use are shaped by linguistic diversity, varying levels of digital literacy, and a wide range of socioeconomic conditions. In many cases, AI systems are being used as substitutes for access to expertise, whether in education, health, or professional decision-making. This expands the scope of reliance beyond convenience into areas where the consequences of error are more significant.
The behavioural dynamics identified in research on human–AI interaction become particularly relevant here. If users are inclined to anthropomorphise AI systems and develop emotional trust based on perceived responsiveness, then the absence of regulatory clarity increases the risk of miscalibrated reliance. A system that appears confident and conversational may be treated as authoritative, even when its outputs are probabilistic, incomplete, or contextually inappropriate.
This is not an abstract concern. In environments where institutional support structures are uneven, individuals may turn to AI systems as primary sources of guidance. The distinction between assistance and authority can blur, especially when the system presents itself in a manner that resembles human interaction. The psychological mechanisms that drive trust do not require formal validation. They operate through experience, repetition, and perceived familiarity.
The governance gap also has implications for accountability. When an AI system produces an incorrect or harmful output, the pathways for redress are unclear. Existing legal frameworks, including the Digital Personal Data Protection Act and the intermediary liability provisions of the Information Technology Act, do not fully address the specific challenges posed by generative AI. Whether responsibility lies with developers, deployers, or users remains unresolved in practical terms.
This uncertainty extends to institutional adoption. Organisations integrating AI into workflows must make decisions about how much to rely on these systems without clear regulatory guidance. The calibration of trust becomes an internal responsibility, shaped by organisational culture rather than external standards. In such conditions, the risk of both over-reliance and under-reliance increases.
What emerges is a layered challenge. At one level, there is the need for regulatory clarity that addresses the specific characteristics of generative AI systems. At another, there is the need to understand how users actually interact with these systems, and how trust is formed in practice. These are not separate questions. Governance frameworks that do not account for behavioural dynamics risk addressing the wrong problem.
India’s position, therefore, is not simply that of a late regulator or an early adopter. It is a case in which the scale of use, the structure of dependence, and the psychology of interaction converge. The absence of a dedicated AI law is not, in itself, unusual. What is unusual is the extent to which systems that simulate human interaction are being integrated into daily life without a corresponding framework to evaluate how they are trusted and relied upon.
The question that follows is not whether India will regulate AI, but how it will do so. If regulation focuses narrowly on technical safety and compliance, it may replicate the limitations already visible in existing models of technology governance. If, instead, it incorporates an understanding of how trust is formed, how users interpret AI behaviour, and how reliance develops over time, it may be better positioned to address the realities of adoption.
Until that shift occurs, the gap between use and understanding will persist. Users will continue to interact with systems that feel increasingly human, while the frameworks designed to govern those interactions remain rooted in a different conception of technology.
India is not just adopting AI at scale. It is doing so in a way that brings questions of trust, control, and accountability into sharper focus. How those questions are answered will shape not only the trajectory of AI adoption in the country, but also the terms on which users come to rely on systems that are, at once, powerful and imperfect.