Nobody Built This For You
And you’re too busy celebrating adoption numbers to notice.
There’s a stat that gets thrown around in every India-AI deck, every ecosystem report, every breathless LinkedIn post by a founder who just discovered GPT wrappers, every AI Summit recap: India is one of the fastest-growing markets for AI adoption in the world. Usage is up. Downloads are up. Enterprise interest is up. The curve is hockey-sticking, and everyone’s invited to the party.
Unfortunately, we’re the guests, not the hosts. And we’re so thrilled to be invited that we haven’t stopped to ask who’s cooking, what’s in the food, or whether the house we’re dancing in could collapse on us.
Let me give you one number that tells the whole story. India now accounts for roughly 19% of the global user base for leading AI apps, more than the United States. Our AI app downloads grew 207% year-on-year in 2025, the highest growth rate of any major economy on the planet. And our share of global AI app revenue? One percent.
India’s relationship with AI right now is the relationship of the world’s most enthusiastic consumer with a product that was never designed for them. The tools we’re adopting were built by American companies, trained on American data, designed for American use cases, and optimised for American users. That they work at all in India is a testament to the brute-force generality of large language models. That we’ve mistaken “works at all” for “works well” is a testament to how badly we want to be part of this moment.
The Wrapper Economy
Let’s start with what India is actually doing with AI.
The honest answer, if you look past the press releases, is: we’re integrating. We’re plugging APIs into existing products. We’re building thin layers on top of foundation models that someone else trained, with data that someone else collected, to solve problems that someone else defined. We’re a wrapper economy, and we’ve gotten very good at pretending that wrapping is building.
This isn’t a dig at the startups. I get it. When OpenAI gives you an API that can do in seconds what would have taken your team months to build, the rational move is to use it. The entire Indian IT services industry was built on this exact logic: take what the West builds, implement it cheaper and faster, capture the arbitrage. It worked for decades. It made Bangalore what it is.
When you’re an IT services company, you understand the technology. You can see the code. You can modify it, adapt it, learn from it and eventually build your own version. The knowledge transfer, however slow and imperfect, happens. You start as an implementer and you can, if you’re ambitious, become a builder. This was the promise of Indian IT, and to some real extent, it delivered.
With AI, the dynamics are inverted. When you wrap an API, you don’t own the model. You don’t own the training data. You don’t own the research that produced the capability. You don’t understand (and I mean this literally, because nobody fully does) the failure modes, the biases, the blind spots. You can’t peek under the hood. You can’t reverse-engineer your way to understanding. The API is a black box by design, and your access to it is conditional on someone else’s pricing decisions, someone else’s terms of service, someone else’s strategic priorities.
You’re building a business on a foundation you can’t inspect, can’t modify, and can’t replicate if the API pricing changes tomorrow or the provider decides your use case is no longer supported. We saw early versions of this when OpenAI changed its pricing and a dozen Indian startups had to frantically rework their unit economics overnight. That was a preview.
And yet, this is what we’re celebrating. X number of AI startups in India. Y amount of funding. Z percent growth in adoption. The numbers are real. The progress is an illusion.
People will say: This is how it always starts. India started as an outsourcing destination and built Infosys, TCS, Wipro. We started by implementing someone else’s technology and built world-class capability. Give it time. The wrappers of today will become the platforms of tomorrow.
Maybe. But I’d note two things. First, the knowledge transfer dynamic is fundamentally different this time (more on that in a moment). And second, of the more than 10,000 AI companies worldwide, only about 15 are working on foundational models. India’s Sarvam AI was selected in 2025 to build the country’s first sovereign LLM under the IndiaAI Mission. That’s one company, selected by the government, in a country of 1.4 billion people. It’s a start. It’s nowhere near enough.
The IT services path worked because the knowledge was transferable. You could learn Java by writing Java. You could understand enterprise architecture by building enterprise systems. The learning was embedded in the doing. With AI, the doing, wrapping an API, teaches you almost nothing about how the underlying model works. You’re not learning by building; you’re learning by consuming. The skill you’re developing is integration, not invention. And integration is a commodity.
I spend a lot of time looking at early-stage startups, and I’ll be honest, the Indian AI startup ecosystem right now feels like a gold rush where everyone’s selling shovels they rented from someone else. The pitches are confident. The TAM slides are enormous. The underlying innovation is, in most cases, a prompt and a prayer. There are exceptions, of course. Companies doing genuinely interesting work on Indian language processing, on context-specific applications, on infrastructure that matters. But they’re outnumbered ten to one by wrappers wearing the costume of invention.
Building on top of someone else’s intelligence is not the same as building intelligence. And the longer we confuse the two, the deeper the dependency gets.
Someone will argue that wrappers are a feature, not a bug: that distribution comes first, and distribution creates data, and data eventually becomes models. In theory, yes. In practice, distribution doesn’t automatically translate into usable training data, especially in India, where consent is vague, privacy norms are uneven, data is fragmented across languages and formats, and the incentives push companies to collect what’s easy, not what’s representative or safe.
Worse, without Indian benchmarks and evaluation frameworks, more usage just means more unmeasured failure at scale. And even if distribution does generate valuable data, the value doesn’t magically stay here. Unless we own the critical infrastructure (models, compute, datasets, evaluation, deployment rails), we’re still feeding someone else’s flywheel.
The investment numbers make this painfully clear. India’s share of global private AI funding stood at less than one percent in 2024. The companies building the models we depend on, OpenAI, Anthropic, Google DeepMind, raised over $86 billion in 2025 alone. India’s entire AI startup ecosystem raised $1.34 billion. We’re not in the same conversation. We’re not even in the same building.
The Context Problem
Every major foundation model in production today was trained primarily on English-language data, primarily from Western contexts, primarily reflecting Western assumptions about how the world works. It’s just what happens when the people building the models live in San Francisco, and the training data comes from the internet as they experience it.
So when these models get deployed in India, and they are getting deployed at scale, in healthcare, in education, in financial services, in agriculture, they carry those assumptions with them.
A medical AI trained overwhelmingly on Western clinical data doesn’t know how diseases present differently in South Asian bodies. It doesn’t account for the fact that an Indian patient’s relationship with a doctor, with disclosure, with treatment compliance, with the role of family in medical decisions, is shaped by a completely different set of cultural forces. It doesn’t know that a prescription that makes sense in a country with robust insurance might be financially catastrophic for someone paying out of pocket. It doesn’t understand that when a woman in rural India describes her symptoms, she may be using language that doesn’t map to the clinical terminology the model was trained on, not because she’s imprecise but because her framework for understanding her own body was built in a different knowledge tradition.
An education AI trained on American pedagogical frameworks doesn’t understand that a kid in Tier 2 India isn’t just operating in a different language, they’re operating in a different epistemological universe. The way they learn, the way they’ve been taught to learn, the role of rote memory, the relationship with authority and questioning, none of this is in the training data. When an AI tutor is “patient” and “encouraging” in the way a California edtech company imagines patience and encouragement, it may be completely misreading the dynamic of what a 14-year-old in Lucknow actually needs.
A financial AI that’s been trained on Western market structures doesn’t understand the informal economy, the role of gold as savings, the way credit works in a joint family system, the trust dynamics that make certain financial products land and others fail. It can give you technically correct advice that is practically useless for the person receiving it.
And yet we’re deploying these tools. Fast. At scale. Because the adoption numbers are exciting and the alternative, building context-aware AI from the ground up, is slow, expensive, and doesn’t fit on a pitch deck.
The AI we’re consuming was trained on a world that doesn’t look like ours, and we’re pretending the gap doesn’t matter because the outputs look smart. They’re fluent. They’re confident. They’re convincingly wrong in ways that are very hard to detect unless you know what to look for.
And most consumers don’t know what to look for. Which brings us to the scariest part.
The Literacy Gap
This is perhaps the thing that worries me most.
India leapfrogged into the smartphone era. Hundreds of millions of people went from no internet to a supercomputer in their pocket in the span of a decade. The result was extraordinary: UPI, Aadhaar, the digital public goods revolution. But it also created a massive literacy gap. People who could use the tools couldn’t evaluate the tools. They could tap and swipe and transact but couldn’t necessarily tell the difference between a legitimate app and a scam, between actual information and misinformation, between a product that served them and a product that extracted from them.
The consequences of that gap are still playing out in misinformation epidemics on WhatsApp, in digital fraud, and in the exploitation of data from populations who never meaningfully consented to its collection.
We’re about to replay this exact dynamic with AI, but at a higher level of abstraction and with higher stakes.
When ChatGPT or Gemini or any of the AI tools people are increasingly relying on gives you an answer, do you know what it’s drawing from? Do you know what it can’t access? Do you know the difference between a confident hallucination and a genuine insight? Do you know when it’s reflecting actual knowledge versus pattern-matching its way to something that sounds right?
Most people don’t. And honestly, most people shouldn’t have to! In an ideal world, the products would be designed to make these limitations transparent. But they’re not. They’re designed to seem authoritative, because authority drives engagement, and engagement drives adoption, and adoption is the metric everyone’s chasing.
In India, this gap is compounded by language, by uneven digital literacy, and by a speed of adoption that outpaces any kind of consumer education. There is no catching up at this pace. We don’t have the regulatory infrastructure. We don’t have the media literacy ecosystem. We don’t even have the vocabulary in most Indian languages to talk about what AI is doing and where it falls short.
I think about this with women especially. The women I write about, the women who read this newsletter: ambitious, navigating complex lives, juggling seventeen roles with inadequate support. AI is being marketed to them as the great equaliser. The assistant they never had. The tool that will finally reduce the mental load. And it can do some of that, genuinely. But it can also do something more insidious: it can become another authority that they defer to without questioning, another voice that sounds confident and is sometimes catastrophically wrong, another system that extracts their attention and data while promising empowerment.
I’ve been thinking about this a lot through the lens of what I call #SheAIT: the question of what it would look like if AI tools were actually designed to reduce women’s mental load rather than just increase their productivity. Because those are different things. Productivity tools assume you know what needs to be done and just need help doing it faster. Mental load is the invisible work of figuring out what needs to be done in the first place: the tracking, the anticipating, the remembering, the emotional labour that never shows up in a task list. Most AI tools are adding to the pile of things women need to manage, not reducing it. And because these tools weren’t designed with this distinction in mind, because, unsurprisingly, the teams building them don’t experience mental load the way women do, the gap persists even as the marketing promises otherwise.
I’ve watched women in my own circles use AI to draft difficult emails, plan meals, research symptoms, navigate bureaucratic processes — and I see how quickly the relationship tips from use to dependence. How the question stops being “is this right for me” and becomes “the AI said.” How a tool that was supposed to save time becomes another thing to manage, another output to quality-check, another source of information to integrate into an already overcrowded decision-making process.
We have adoption numbers. We have pitch decks. We have ministers announcing AI missions. What we don’t have is a population that can critically evaluate the AI tools it’s increasingly dependent on. And the people building those tools have zero incentive to close that gap, because an uncritical consumer base is, from a business perspective, ideal.
What We’re Not Building
So what should India be building?
Not more wrappers. Not more “AI-powered” apps that are really just a text box connected to GPT-4 with a Hinglish prompt. Not more products that treat India as a market to be captured rather than a context to be understood.
India should be building the things that are genuinely hard and genuinely ours.
Language models that aren’t just translated English but actually understand how Indian languages work: the code-switching, the register shifts, the way meaning is constructed differently across languages, the fact that a single conversation in any Indian city might move through three languages in as many sentences and the AI needs to keep up without flattening that richness into English-with-extra-steps.
Evaluation frameworks that can test AI performance against Indian realities, not just Western benchmarks. If your AI health tool gets a 95% accuracy score on an American test set and you’re deploying it in Maharashtra, that number is meaningless. Where are the Indian benchmarks? Who is building them? Who is funding them? This is the unsexy, infrastructure-level work that no one wants to do because it doesn’t produce a demo you can show at a conference. But without it, we’re flying blind and deploying powerful tools and measuring their performance against someone else’s reality.
Consumer education infrastructure, not as a nice-to-have, not as a CSR checkbox, but as a core component of any AI deployment at scale. If you’re putting AI in front of a hundred million Indians, you owe them more than a terms-of-service page in English. What would it look like if every AI product deployed in India was required to include a plain-language explanation of what it can and can’t do, in the language its users actually speak? What if AI literacy was part of the school curriculum the way basic computer literacy was a generation ago? These aren’t radical ideas. They’re obvious ones that nobody’s implementing because they don’t show up on a growth chart.
Regulatory thinking that goes beyond “copy the EU” and actually grapples with what AI governance looks like in a country of 1.4 billion people at wildly different levels of digital sophistication. The EU’s AI Act is thoughtful, but it’s designed for a fundamentally different context with higher baseline digital literacy, stronger institutional infrastructure, smaller and more homogeneous populations. India’s AI governance needs to be Indian.
And most importantly, India should be building a culture of critical consumption. Not anti-AI, because I’m not a doomer and I think these tools are genuinely powerful. But critically pro-AI. A culture that asks: who built this, what data did they use, what does it get wrong about my context, and what am I not seeing?
We don’t have that culture yet. What we have is a culture of dazzled adoption, where the speed of uptake is treated as proof of readiness.
It’s not. Speed of adoption without depth of understanding is just a faster way to become dependent.
NASSCOM’s own AI Adoption Index tells the story in miniature: India’s enterprise AI maturity score moved from 2.45 in 2022 to 2.47 in 2024. Two years of breathless AI hype, and the needle barely twitched. Eighty-seven percent of Indian enterprises remain stuck in the middle stages of AI maturity. We adopted the tools. We didn’t develop the capability.
The Agentic Turn
I think there’s actually cause for optimism, but only if we’re willing to think differently about what “using AI” means.
There’s a term floating around tech circles right now: “agentic AI.” In the technical sense, it refers to AI systems that can take actions autonomously. Not just answer questions but actually do things. Book a flight. Write and execute code. Manage a workflow end-to-end. It’s one of the buzzwords of 2025, and like all buzzwords, it’s being used to sell a lot of things that don’t deserve the label.
But I want to reclaim “agentic,” not as a description of what the AI does, but as a description of what the user does.
Because right now, most people interact with AI passively. They type a question, they get an answer, they accept it. The model is the agent; the user is the audience. This is how these tools are designed to feel: effortless, frictionless, magic. You don’t need to understand how it works. You just need to ask.
American users of AI chatbot apps spend 21% more time per session and log 17% more sessions per week than Indian users. We download more but engage less deeply. We’re drive-by consumers.
This is exactly the wrong posture for a country consuming AI tools that weren’t built for its context.
Being “agentic” as a user means something fundamentally different. It means treating AI not as an oracle but as a tool, one that requires skill to use well, judgment to use wisely, and critical distance to use safely. It means learning to prompt with precision, to interrogate outputs, to understand enough about how these systems work to know when they’re likely to fail you. It means being the driver, not the passenger.
The difference between someone who uses Google and someone who is good at using Google is vast. The first person types a vague question and accepts the top result. The second understands how search works. They know about operators, they know how to evaluate sources, they know what Google is optimising for and how that shapes what they see. They know that the first result isn’t necessarily the best result. They know how to dig. Same tool, completely different relationship with it.
AI requires this same shift, but at a much higher level of sophistication. Because the failure modes are less visible. A bad Google result is obviously bad because the link is broken, the source is sketchy, the information doesn’t match. A bad AI output is often beautifully articulated, supremely confident, and subtly, devastatingly wrong. It’s the intern who writes a perfect memo with the wrong numbers. It reads well. It sounds authoritative. And if you don’t independently verify, you’ll make decisions based on fiction.
What Agentic Actually Looks Like
At the individual level, agentic AI use looks like understanding the toolset, not in a “learn to code” way, but in a “learn to drive” way. You don’t need to be a mechanic to drive a car, but you do need to know what the brakes do, when the fuel is low, and what that weird noise means. With AI, this means understanding the basics: what a language model is, what it can and can’t do, why it hallucinates, what “context window” means and why it matters for your output. This is not arcane technical knowledge. This is basic literacy for the century we live in.
It means treating prompting as a skill. The gap between a naive prompt and a skilled prompt is the gap between a useless output and a genuinely useful one. This isn’t about memorising tricks. It’s about understanding that you’re not having a conversation, even if it feels like one. You’re giving instructions to a pattern-matching engine, and the quality of your instructions determines the quality of the output. Specifying context. Giving examples. Telling the model what not to do. Iterating instead of accepting the first draft.
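The gap between a naive instruction and a skilled one is easiest to see side by side. Here is a minimal sketch; the tenant, the leaking tap, and every detail in it are invented purely for illustration, and the point is the structure (context, constraints, an example of tone, an explicit "don't"), not the specifics.

```python
# Illustrative only: two ways of asking an AI for the same email.
# All names and details below are invented for this example.

naive_prompt = "Write an email asking my landlord to fix the tap."

structured_prompt = """You are drafting an email for a tenant in Pune, India.

Context: the kitchen tap has leaked for two weeks; one reminder was
already sent on the 3rd; the tone should be firm but polite, in the
register of Indian professional English, not American casual.

Constraints:
- Keep it under 120 words.
- Do not threaten legal action.
- End by proposing a specific date for the repair visit.

Example of the desired tone: "I would appreciate it if this could be
addressed by Friday, as the leak is worsening."

Draft the email."""

# The structured version gives the model context, boundaries, a tonal
# example, and an explicit prohibition. It is instructions to a
# pattern-matching engine, not a conversation.
```

Same request, but the second version decides in advance what "good" looks like, instead of leaving that judgment to a model trained on someone else's defaults.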
It means making critical evaluation a habit. Every AI output should be treated as a first draft written by a confident intern who may or may not know what they’re talking about. Some of it will be brilliant. Some of it will be subtly wrong in ways that are hard to catch. Your job is to know the difference, or at least to know when you can’t tell the difference, which is when you need to verify through other means.
And maybe most importantly, it means knowing when not to use it. There are things AI is extraordinary at, and there are things it shouldn’t be trusted with. Knowing the boundary and being willing to do the slower, harder, human thing when the stakes are high enough is a form of mastery, not Luddism. The most agentic thing you can do sometimes is close the tab.
For an Indian user, being agentic also means actively compensating for the model’s contextual blind spots. It means knowing that when an AI gives you dietary advice, it’s probably thinking in terms of Western nutrition science and Western grocery stores. It means knowing that when it drafts a business email, it’s defaulting to American professional norms that might land differently in your context. It means developing the habit of asking: What is this tool assuming about me that isn’t true? That’s a higher bar than what’s expected of an American user interacting with a tool designed for their context. It’s also a more powerful skill to develop, because it generalises. Once you learn to question one tool’s assumptions, you can question anything.
At the ecosystem level, agentic looks like Indian developers and entrepreneurs treating Indian complexity not as a bug to be patched but as the design constraint that drives innovation. The messiness of Indian languages, markets, and infrastructure isn’t a problem to be solved with a better wrapper. It’s the problem space that could produce genuinely novel AI approaches if we stopped trying to fit ourselves into Silicon Valley’s frameworks.
It looks like consumer education as infrastructure. Schools, media, government, basically everyone who touches the public, treating AI literacy not as a tech issue but as a civic one. The same way we (eventually, imperfectly) built public understanding of how to use the internet, how to evaluate news sources, how to protect personal data. Except faster, because the stakes are higher and the adoption curve is steeper.
And it looks like demanding more from builders. Both foreign and domestic. If you’re deploying AI in Indian healthcare, prove it works on Indian patients. If you’re building an education tool for Indian students, show me the evaluation framework that tests for Indian educational contexts. If you’re wrapping an API and calling it an Indian AI product, be honest about what you’ve actually built and what you’re renting. And if you’re an investor funding Indian AI, and I say this as someone who has spent years in the early-stage ecosystem, stop rewarding adoption metrics and start asking about contextual accuracy. How well does this thing actually work for the people it claims to serve?
The House
I know how this piece sounds. It sounds like I’m saying India is behind, India is failing, India needs to wake up. And in some ways, yes, that’s exactly what I’m saying.
But I’m saying it because the opposite narrative, the one that dominates right now, that India is booming, India is adopting, India is the next great AI market, is more dangerous than pessimism. It’s the sound of an entire country consuming a technology it doesn’t understand, built by people who don’t understand it, and calling the whole farce innovation.
The stakes aren’t abstract. They’re in the healthcare recommendations that don’t account for Indian bodies. The financial advice that doesn’t account for Indian markets. The educational tools that don’t account for Indian classrooms. The government deployments that move fast because the technology is exciting and move wrong because the evaluation was absent. They’re in the small business owner in Surat who trusts an AI-generated contract because it looks professional. They’re in the student in Patna who submits AI-generated research and doesn’t know the citations are fabricated. They’re in the farmer in Telangana who follows AI-generated crop advice that was optimised for Iowa.
Every one of these is a trust problem. And trust, once broken at scale, is very hard to rebuild.
I don’t think this is settled yet. India has the talent, the scale, and frankly the necessity to become genuinely agentic with AI. We need to be not just using it, but shaping it. The developers are here. The problems worth solving are here. The users who could become the world’s most sophisticated AI consumers are here, because the gap between what AI offers by default and what they actually need is so wide that closing it requires a level of critical intelligence that Silicon Valley’s core users have never been forced to develop.
I keep coming back to UPI (and yes, I know, every Indian tech essay is legally required to mention UPI). UPI worked because India didn’t try to copy Visa’s homework. It looked at the problem (how do you move money in a country where most people don’t have credit cards?) and built something genuinely new. Something that was native to the context, not imported and adapted. The result wasn’t just a better payment system for India; it was a payment system the world now wants to learn from.
That’s what agentic looks like at the national level. Not “Indian AI” as a brand exercise. Not “Made in India” as a label slapped on a wrapper. But genuine innovation born from the specific, messy, complicated reality of what India actually needs from this technology, which is not the same as what America needs, or Europe needs, or China needs.
But none of that happens automatically. It happens when we stop confusing adoption with understanding, when we stop celebrating wrappers as innovation, and when we start treating AI literacy not as a nice-to-have but as essential infrastructure for a country that’s betting its future on a technology it hasn’t yet learned to question.
The party is great. The music is loud. The adoption numbers are fantastic.
But somebody should check who built the house.
If this resonated, share it with someone who needs to hear it, especially if they’re building in AI, deploying AI, or investing in AI in India. The conversation needs to shift from “how fast are we adopting” to “how well do we understand what we’re adopting.” That shift starts with us.