Author: Nimisha Chadha

Published on Feb 06, 2026

Innovation, Frugality and Governance: Scaling Trustworthy Health AI in India

India frames AI as a healthcare ‘force multiplier’ that can close care gaps through frugal, application-led tools, yet uneven outcomes, bias, and weak oversight underscore the need for stronger governance and representative data.

In recent years, artificial intelligence (AI) has scaled rapidly and begun to influence life-and-death decisions in healthcare, from disease detection to outbreak containment. For a country like India, grappling with a high disease burden, uneven access to care, chronic shortages of medical professionals, and constrained public health capacity, AI is often positioned as a ‘force multiplier’. The Economic Survey 2025–26 argues that the country’s comparative advantage lies in bottom-up, application-led and ‘frugal’ AI for sectors like healthcare.

Yet, the real-world applications of AI in healthcare have had mixed results. While some applications substantially improved health outcomes, others failed, underperforming for marginalised groups in low-resource settings. The challenge is no longer technical feasibility, but building an ecosystem conducive to innovation and large-scale adoption, while being anchored in safeguards for equity, safety, and accountability.

AI in Healthcare: Achievements and Lessons 

The real-world utility of AI in health is best evidenced in early warning systems and specialised diagnostics. BlueDot, a Toronto-based platform, identified the COVID-19 outbreak nine days before the World Health Organization (WHO) issued an alert by scanning news articles, health reports, and airline data daily in over 65 languages.

Domestically, MadhuNETrAI, validated on over 3,000 retinal images for detecting diabetic retinopathy, has benefitted around 7,100 patients. Similarly, Qure.ai’s tuberculosis (TB) screening tool helped increase detection while saving costs, and is now deployed nationally to support diagnosis in resource-limited settings. Indian startups such as NIRAMAI (AI-based breast-cancer screening used in over 30 cities) and Wadhwani AI (an AI solution for TB treatment adherence and case management across 157 government facilities in Haryana) illustrate how home-grown health-AI innovations are being integrated into public-health programmes, aligned with the Economic Survey 2025–26’s emphasis on application-focused, frugal AI for local challenges.

However, there have also been failures globally, with potentially dangerous consequences: AI chatbots have given misleading, incorrect, and dangerous medical advice, and several studies have documented inherent biases in AI models. A 2024 study analysing international medical imaging datasets found that disease-classification AI models encode sensitive demographic characteristics and perform systematically worse for some racialised groups. Another 2024 study across seven Asian and African nations, including India, showed that AI tools examining chest X-rays for TB perform unevenly across settings and patient groups. Although several commercial tools met the WHO’s minimum accuracy target on average, their performance declined among women, people living with HIV, and individuals with a history of TB, and varied substantially across countries. Importantly, research suggests that only a small fraction of clinical AI tools include a post-deployment surveillance plan, leaving these failures unmonitored as systems scale.

These cases demonstrate that AI’s effectiveness depends on how representative its training data is and how well it is aligned with local deployment contexts. Empirical work and recent public debates raise similar concerns for India, warning that AI systems developed without representative data risk reproducing entrenched hierarchies of caste, religion, gender, and socio-economic status, strengthening the case for India to build its own AI ecosystem. While AI can help democratise healthcare access and reduce costs, scaling it safely requires agile governance frameworks to ensure that innovation remains both inclusive and sustainable.
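
To make the bias finding concrete, below is a minimal sketch of the kind of stratified audit such studies perform, evaluating a screening model per demographic group rather than on the pooled population. The data format and group labels are hypothetical; the thresholds stand in for the WHO triage targets of roughly 90 percent sensitivity and 70 percent specificity cited above.

```python
from collections import defaultdict

# Stand-ins for WHO-style triage targets (assumed values for illustration).
MIN_SENSITIVITY, MIN_SPECIFICITY = 0.90, 0.70

def stratified_audit(records):
    """records: iterable of (group, y_true, y_pred) tuples with binary labels."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true and y_pred:
            c["tp"] += 1   # disease present, correctly flagged
        elif y_true:
            c["fn"] += 1   # disease present, missed
        elif y_pred:
            c["fp"] += 1   # disease absent, wrongly flagged
        else:
            c["tn"] += 1   # disease absent, correctly cleared
    for group, c in sorted(counts.items()):
        sens = c["tp"] / max(c["tp"] + c["fn"], 1)
        spec = c["tn"] / max(c["tn"] + c["fp"], 1)
        ok = sens >= MIN_SENSITIVITY and spec >= MIN_SPECIFICITY
        print(f"{group:10s} sensitivity={sens:.2f} specificity={spec:.2f}"
              + ("" if ok else "  <-- below target"))

# Example: a tool can look fine on the pooled average yet fail one group.
stratified_audit([
    ("men", 1, 1), ("men", 1, 1), ("men", 0, 0),
    ("women", 1, 0), ("women", 1, 1), ("women", 0, 0),
])
```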

Governance and Institutional Landscape of Health AI in India 

The Government’s IndiaAI Mission sets the foundation for developing AI capabilities, with seven core pillars spanning compute infrastructure, innovation, access to quality datasets, application development, skilling, startup financing, and safe and trusted AI. With a budget of INR 10,371.92 crore over five years, it focuses on several critical sectors, including healthcare, and the Economic Survey 2025–26 explicitly emphasises application-focused AI tailored for low-resource environments.

This is supported by the Ayushman Bharat Digital Mission (ABDM), which aims to build the backbone of India’s integrated digital health infrastructure through interoperable health data exchange, longitudinal electronic health records, and unique digital health identifiers; its allocation of INR 350 crore under the Union Budget 2026-27 signals continued investment. In parallel, eSanjeevani, the national telemedicine service, is evolving to integrate AI-based capabilities that improve decision-making and user experience, illustrating how digital health platforms can embed AI within public health infrastructure.

At the firm level, a growing network of incubators and accelerators is supporting health-tech and health-AI startups. Innovation programmes such as the Startup India Seed Fund Scheme, which channels funding through incubators like the STPI MedTech Centre of Excellence in Lucknow, and the healthcare incubation programme at IIM Bangalore’s NSRCEL provide funding, mentorship, and market-entry support for health-tech innovators, helping them progress from early-stage ideation and proof-of-concept towards market readiness. However, links between these innovation supports and formal health-technology regulation are still evolving.

On the regulatory front, AI operates within sectoral frameworks such as the AI Governance Guidelines, the Information Technology Act 2000, the Digital Personal Data Protection (DPDP) Act 2023, and the Information Security Policy for Healthcare. The Central Drugs Standard Control Organisation’s (CDSCO) October 2025 draft Guidance on Medical Device Software addresses the ambiguity surrounding AI-driven software by distinguishing between Software in a Medical Device (SiMD) and Software as a Medical Device (SaMD), mapping both onto the risk-based classes of the existing medical device rules, and aligning with international standards. It also introduces an Algorithm Change Protocol for AI and machine learning (ML)-based software, requiring developers to pre-specify how models will be updated and validated post-deployment, managing black-box risks while still allowing improvements.
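
In machine-readable form, a pre-specified change protocol of this kind might look like the sketch below. The field names, model name, and thresholds are hypothetical illustrations, not CDSCO’s actual template; the point is that the permitted change envelope and the revalidation rule are fixed before deployment.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmChangeProtocol:
    """Hypothetical structure for a pre-specified change protocol."""
    model_name: str
    # Changes permitted post-deployment without a fresh regulatory submission.
    permitted_changes: list = field(default_factory=list)
    # Aspects that may never change under this protocol.
    locked: list = field(default_factory=list)
    # Validation every update must pass before release.
    revalidation: dict = field(default_factory=dict)

acp = AlgorithmChangeProtocol(
    model_name="tb_cxr_triage_v2",  # hypothetical model identifier
    permitted_changes=["retrain_on_new_data", "decision_threshold_tuning"],
    locked=["intended_use", "input_modality", "target_population"],
    revalidation={"min_sensitivity": 0.90, "min_specificity": 0.70,
                  "holdout": "frozen_multi_site_test_set"},
)
print(acp)
```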

Further, the Indian Council of Medical Research’s (ICMR) Ethical Guidelines for Application of AI in Biomedical Research and Healthcare articulate principles such as accountability, human oversight, equity, data privacy, and inclusiveness, but they remain non-binding and are confined to research and institutional review settings rather than carrying enforceable regulatory authority.

India’s international commitments, meanwhile, reflect a growing recognition of the need for global cooperation in governing health AI systems. In the 2023 G20 New Delhi Leaders’ Declaration, nations reaffirmed their commitment to a pro-innovation approach to AI while protecting human rights and safety, a commitment reflected in Indian policy. In 2025, India joined the HealthAI Global Regulatory Network, which aims to improve safety and accelerate responsible innovation through shared learning, joint standards, and early warnings of emerging risks.

Nevertheless, a 2024 NASSCOM report finds that most sectors, including healthcare, are at early stages of AI maturity. Although many healthcare firms have conducted proof-of-concept (PoC) projects, only a fraction have moved into full production, indicating challenges with scaling innovations. Regulation of AI in healthcare is similarly nascent: weak post-market surveillance of medical devices and ambiguity in liability allocation persist. An Indian doctrinal study from 2025 on AI-based diagnostics finds that existing laws do not specify how responsibility should be allocated among physicians, hospitals, developers, and data fiduciaries when AI-assisted diagnostic errors occur, creating an accountability gap.

Moreover, laws such as the DPDP Act 2023 focus on data privacy rather than addressing the unique harms posed by algorithmic errors, such as bias. Research on AI in Indian healthcare highlights structural obstacles, including limited availability of unbiased datasets, insufficient public awareness of data privacy and consent, and an absence of comprehensive legal and ethical standards tailored to health AI. Now, the challenge lies in maturing this framework so that the rapid pace of innovation does not outstrip India’s capacity to protect its most vulnerable patients.

Scaling Trustworthy AI in Healthcare 

To address the existing ambiguity around accountability, a graded liability and accountability matrix should be established. While current frameworks like the IndiaAI Mission and the ICMR ethical guidelines provide foundational principles, the transition to population-scale clinical deployment requires legal certainty for both developers and healthcare providers. By distinguishing between technical algorithmic errors and clinical misjudgement, India can create a safe harbour that encourages startups to innovate while ensuring that hospitals have clear protocols for attributing responsibility when diagnostic errors occur. This would foster innovation without compromising patient safety.

To mitigate the risk of failures, regulatory authorities like the CDSCO should move toward a model of continuous life-cycle governance. This involves making post-market surveillance mandatory for high-risk AI tools, so that their performance is monitored in real-world settings rather than only in controlled pilots. A centralised registry could record system failures and unexpected outcomes, enabling real-world monitoring of AI performance across diverse demographic groups and ensuring that algorithmic drift is identified and corrected before it causes systemic harm as tools scale from urban centres to rural primary health centres.
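
As a minimal sketch of the drift check such a registry could run, the snippet below compares each group’s field sensitivity over a recent reporting window against its validation-time baseline and flags slippage. The group labels, baseline figures, and tolerance are assumptions for illustration; a real registry would set them per risk class.

```python
# Validation-time baselines per group (hypothetical figures).
BASELINE = {"urban_female": 0.91, "urban_male": 0.92,
            "rural_female": 0.90, "rural_male": 0.91}
DRIFT_TOLERANCE = 0.05  # assumed allowable drop before an alert fires

def drift_alerts(recent):
    """recent: {group: sensitivity measured over the latest reporting window}."""
    alerts = []
    for group, baseline in BASELINE.items():
        current = recent.get(group)  # groups with no recent data are skipped
        if current is not None and baseline - current > DRIFT_TOLERANCE:
            alerts.append((group, baseline, current))
    return alerts

# Example: field performance has slipped for rural women but not urban women.
for group, base, cur in drift_alerts({"urban_female": 0.90, "rural_female": 0.82}):
    print(f"ALERT {group}: sensitivity fell from {base:.2f} to {cur:.2f}")
```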

Finally, the promise of equitable healthcare can only be realised if the underlying data reflects India’s vast genetic and socio-economic diversity. Leveraging the digital backbone of the ABDM, the government should foster the creation of representative, high-quality, unbiased, and accessible datasets aligned with the IndiaAI Mission to boost local innovation. By ensuring that AI tools are not only technically validated but also contextually representative, India can transform its unique health challenges into a global model for trustworthy and inclusive digital health transformation.


Nimisha Chadha is a Research Assistant with the Centre for New Economic Diplomacy at the Observer Research Foundation.

The views expressed above belong to the author(s).
