AI Isn’t the Danger, We Are: Why India Must Forge Its Own Path in the AI Revolution

As New Delhi hosts the AI Impact Summit, conversations about artificial intelligence are intensifying. We are repeatedly told that AI is on the brink of surpassing human intelligence, that so-called “superintelligence” is just around the corner, and that it will soon transform every dimension of society.

The pace of progress in large language models (LLMs) and generative systems is impressive. AI can draft essays, translate languages, write computer code, diagnose diseases and even compose music. It can clone voices and impersonate people. On specific tasks, machines can outperform individual humans in speed and scale.

But as a trio of experts argue in a sweeping critique, the key question is not whether AI can outperform a person on a narrow benchmark. It is whether AI is approaching human-level intelligence. And that depends on what we mean by intelligence in the first place.

The Flawed Comparison

The comparison often being made today rests on a flawed assumption: that intelligence is primarily an individual cognitive capacity—something that can be measured by performance on tests or tasks. If a machine can pass an exam or produce a convincing argument, it is said to be as intelligent as a human.

This framing mirrors traditional IQ-style testing, which has long been criticised for cultural bias and for rewarding familiarity and training rather than deeper understanding. When we reduce intelligence to a set of measurable outputs, it becomes easier for machines to appear comparable.

But human intelligence is not simply individual brilliance. It is social.

Intelligence Is Collective

Every major human achievement—scientific breakthroughs, artistic revolutions, technological innovation—is the product of collective processes. No scientist works in isolation. Discoveries rely on shared methods, peer review institutions, and generations of accumulated knowledge. Oral culture rests on rich storytelling traditions, knowledge transfer, and group cohesion. Language itself is a collective achievement shaped over thousands of years.

Research on “collective intelligence” consistently shows that diverse groups can outperform even their most capable individual members when communication and cooperation are effective. Our intelligence is distributed across families, communities, institutions, and cultures. It is cumulative and collaborative.

AI systems do not participate in this collective dimension. They do not cooperate with one another in shared social worlds, negotiate meaning, form relationships or assume responsibility. They generate responses based on patterns in data, without awareness, intention or accountability.

Intelligence Is Embodied

Human intelligence is also embodied. From infancy, we learn through touch, movement, imitation, and shared attention. Developmental psychology shows that abstract reasoning grows out of physical and social experience. Our emotions, bodily sensations and cultural environments shape how we think.

AI has no such grounding. LLMs have no physical bodies and belong to no social groupings. They learn statistical associations from vast collections of text. They do not “understand” in the way humans do; they calculate probabilities. They do not experience fear, joy, empathy or doubt. They do not navigate social norms through lived participation in communities.

This limitation becomes particularly evident in ethical contexts. Humans reason morally within shared systems of values shaped by history, culture and social interaction. Machines, by contrast, simulate responses based on patterns in training data. They do not possess moral agency.

The Data Problem

There is another important constraint that receives less attention: the data on which AI systems are trained represents only a narrow slice of humanity.

Although more than 7,000 languages are spoken worldwide, most online content exists in a handful of dominant languages. Estimates suggest that around 80% of internet content is produced in just ten languages. Entire cultures, oral traditions, and knowledge systems remain underrepresented or absent in machine-readable form.

When AI models are trained on this limited corpus, they inevitably reflect the assumptions, values, and biases of a relatively small segment of the global population. By contrast, human intelligence is shaped by the lived experiences of eight billion people across diverse environments, traditions, and social systems.

AI does not have direct access to this richness. It cannot independently explore new cultural worlds or generate genuinely novel forms of collective meaning. It depends on human-produced data, which is incomplete and uneven.

The Limits of Scaling

There are also practical limits to scaling. Large models improve by ingesting high-quality human-generated text. But this resource is finite. Researchers have warned that we are approaching the limits of available training data.

One proposed solution is to train models on content generated by other AI systems. Yet this risks creating feedback loops in which errors and simplifications are amplified. Instead of learning from the world, systems learn from distorted reflections of themselves—an echo chamber rather than an expansion of understanding.

The Real Risks

None of this means AI is unimportant. On the contrary, it is already transforming industries, education, governance, and research. Used well, AI can increase efficiency, expand access to information and support human decision-making. For a country like India, with its vast population and rapid digitalisation, AI offers significant opportunities.

But usefulness is not the same as human-level intelligence.

The real risk is not that machines will suddenly out-think humanity. It is that the AI hype distracts us from urgent issues: bias in automated systems, concentration of power, labour displacement, regulation, and the need for inclusive technological governance.

The US Fantasy and the Chinese Reality

The buzz at the summit is all about US hyperscalers declaring that artificial general intelligence (AGI) is around the corner, and that a few more trillions of borrowed money will allow capital to eliminate labour forever, reducing the power of knowledge workers to nothing and creating a paradise of infinite profit and total unemployment.

Nothing about this fantasy is true, except the borrowed trillions that inflate their bubble. The kind of software presently called AI, more technically referred to as machine learning, will never lead to their desired destination. But by making unscrupulous use of vast troves of human behaviour collected by social media and equally lawless appropriation of copyrighted cultural material, the platforms have acquired the ability to make low-quality imitations of human language without human thought.

The resulting fake humanness, tricked out with unvarying conversational fluency, produces the chatbot, a parlour trick that its proprietors want us to believe is bottled magic, as all pretenders always do.

Chinese industry will be showcasing the use of similar working parts to achieve a quite different objective. Pursuing not AGI but immediate product utility, Chinese software developers are more fully committed to open-source models and distribution, making and sharing software that reduces the power and hardware requirements necessary to manufacture intelligent products for daily life.

Less apparent is the social ambition behind this, which is to perfect the relationship between technology and the social control intended to make Chinese Communist Party authoritarianism absolute and permanent.

India’s Moment

This can be the Indian moment. Because the capitalists’ paradise where human workers have been replaced by white man’s robots and chatbot therapists makes no economic or social sense in the real human world.

India is not the US. Nor does it need toasters that learn how you like your toast and inform the Communist Party what you say over breakfast. India needs fully open, user-enabling AI applications that serve humans rather than replace them. It needs software tools that make farmers and artisans more productive, help students learn, and help doctors care for their patients. It needs software whose entire lifecycle, from training to model refinement to deployment, requires only inexpensive, accessible hardware and is orders of magnitude less power-hungry.

All of this Indians can make, are making, and will be offering not only for domestic use but to Africa and the rest of humanity’s billions in decades to come.

Conclusion: The Human Approach

At this AI Summit, the Indian approach to humanity’s transformative technology should become, indeed, the human approach. It is time for this century’s equivalent of Atoms for Peace, which pivoted nuclear energy away from Cold War tensions and towards peaceful, civilian use. This time too, what is achieved will matter to everyone. Better get it right.

Q&A: Unpacking the Critique of AI Hype

Q1: What is the central flaw in comparing AI to human intelligence?

The comparison rests on a flawed assumption that intelligence is primarily an individual cognitive capacity measurable by performance on tests. This mirrors IQ-style testing, which has long been criticised for cultural bias. Human intelligence is fundamentally social and collective—it emerges from cooperation, shared institutions, and generations of accumulated knowledge. AI systems do not participate in this collective dimension; they generate responses based on patterns without awareness, intention, or accountability.

Q2: Why is human intelligence described as “embodied”?

Humans learn from infancy through touch, movement, imitation, and shared attention. Abstract reasoning grows out of physical and social experience. Our emotions, bodily sensations, and cultural environments shape how we think. AI has no such grounding—it learns statistical associations from text without experiencing fear, joy, empathy, or doubt. It cannot navigate social norms through lived participation in communities, which becomes particularly evident in ethical contexts where humans reason within shared systems of values.

Q3: What are the limitations of AI training data?

Although more than 7,000 languages are spoken worldwide, around 80% of internet content is produced in just ten dominant languages. Entire cultures, oral traditions, and knowledge systems remain underrepresented. Models trained on this limited corpus inevitably reflect the assumptions, values, and biases of a small segment of the global population. AI cannot independently explore new cultural worlds or generate genuinely novel forms of collective meaning—it depends on incomplete and uneven human-produced data.

Q4: How do the US and Chinese approaches to AI differ according to the analysis?

US hyperscalers pursue the fantasy of AGI, driven by borrowed trillions and the dream of eliminating labour. Their chatbots produce “fake humanness”—a parlour trick masking the reality of low-quality imitations without thought. Chinese industry pursues immediate product utility through open-source models, reducing power and hardware requirements for intelligent products, but with the social ambition of perfecting technology for authoritarian control. Neither approach serves genuine human needs.

Q5: What should India’s AI strategy be?

India should reject both the US fantasy and the Chinese model. It needs fully open, user-enabling AI applications that serve humans rather than replace them—tools that make farmers and artisans more productive, help students learn, help doctors care. This requires software whose lifecycle demands inexpensive, accessible hardware and minimal power. Indians can make this, are making it, and can offer it not only domestically but to Africa and the rest of the world. This is India’s moment to forge a truly human approach to AI.
