India AI Mission: Catalysing Responsible, Inclusive Growth
Approved on 7 March 2024 with an outlay of INR 10,371.92 crore, the India AI Mission aims to democratise compute, improve data quality, develop indigenous capabilities, attract talent, finance startups, enable industry collaboration, and embed ethics across seven pillars: Compute Capacity, Application Development, FutureSkills, Safe & Trusted AI, Innovation Centre, Datasets Platform, and Startup Financing. The Safe & Trusted AI pillar has already funded projects on machine unlearning, synthetic data, bias mitigation, privacy-enhancing strategies, explainability, governance and testing, ethical certification, and auditing. It has also launched a second Expression of Interest (EoI) covering watermarking and labelling, ethical frameworks, AI risk management, stress testing, and deepfake detection. This ecosystem approach places trust and safety at the core while accelerating innovation across the public and private sectors.
Principles for India-Specific AI Governance
Aligned with OECD, NITI Aayog, and NASSCOM guidance, the report anchors India's AI governance in eight principles: transparency (including meaningful explainability and identification of AI systems); accountability (clear allocation of responsibility and effective redress); safety and robustness (resilience and ongoing monitoring); privacy and security (compliance with the Digital Personal Data Protection Act (DPDPA) and security-by-design); fairness and non-discrimination; a human-centred "do no harm" approach with human oversight; inclusive and sustainable innovation; and digital-by-design governance that leverages technology to make compliance, monitoring, and enforcement scalable and rights-respecting.
From Principles to Practice: Lifecycle, Ecosystem, and Techno-Legal Approaches
The report operationalises these principles through a lifecycle lens (development → deployment → diffusion); an ecosystem view of actors (data principals/providers, developers/model builders, deployers/app builders, distributors, and end-users) with clarified roles and liabilities; and a techno-legal strategy that uses RegTech (e.g., provenance artefacts, traceability, smart contracting, and automated compliance) to create liability chains and scalable oversight. It emphasises periodic reviews of such tools for accuracy, fairness, security, and rights impacts, preserving flexibility for innovation alongside accountability.
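To make the techno-legal idea concrete, below is a minimal sketch, in Python, of how hash-chained provenance artefacts could link actors across the lifecycle into a tamper-evident liability chain. The record schema and actor labels are assumptions for illustration; the report does not prescribe a specific format.

```python
# Minimal sketch (assumed schema): hash-chained provenance records that
# link actors across the AI lifecycle into a tamper-evident chain.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    actor: str          # e.g. "data-provider", "model-builder", "deployer"
    action: str         # what the actor did: training, fine-tuning, deployment
    artefact_hash: str  # hash of the dataset/model/app this step produced
    prev_hash: str      # digest of the previous record, forming the chain
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Deterministic SHA-256 over the record's serialised contents."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_record(chain, actor, action, artefact_hash):
    """Append a record linked to the current tail of the chain."""
    prev = chain[-1].digest() if chain else "genesis"
    chain.append(ProvenanceRecord(actor, action, artefact_hash, prev))

def verify_chain(chain) -> bool:
    """True iff no intermediate record has been altered."""
    return all(
        curr.prev_hash == prev.digest()
        for prev, curr in zip(chain, chain[1:])
    )

chain = []
append_record(chain, "data-provider", "dataset-release", "sha256:aaa...")
append_record(chain, "model-builder", "fine-tuning", "sha256:bbb...")
append_record(chain, "deployer", "app-deployment", "sha256:ccc...")
print(verify_chain(chain))  # True; editing any record breaks every later link
```

Because each record commits to the digest of its predecessor, altering any upstream entry invalidates every downstream link, which is what makes such a chain useful for tracing responsibility across actors.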
Gap Analysis: Where Laws Exist and Capabilities Must Catch Up
Existing frameworks broadly address AI-related risks, but capabilities and clarity must catch up. Deepfakes and malicious content are covered under the IT Act and Rules, IPC/BNS, POCSO, the JJ Act, and copyright law, yet require stronger provenance/watermarking and rapid takedown processes. Cybersecurity is governed by the IT Act, CERT-In/NCIIPC, the DPDPA, and sectoral norms (RBI, SEBI, IRDAI, DoT), but needs "secure-by-design" AI guidance and greater enforcement capacity. Intellectual property law demands clarity on lawful training on copyrighted data and on authorship of AI-assisted outputs. Bias and discrimination require practical tools and transparency to detect and mitigate "black-box" harms. Finally, regulators need better traceability of data, models, and actors, and insight into the contractual allocation of risk, while antitrust oversight should monitor market concentration and algorithmic collusion.
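For intuition on the watermarking techniques referenced above, here is a toy least-significant-bit (LSB) watermark in Python. It is purely illustrative: production provenance and watermarking schemes must survive compression, cropping, and adversarial edits, which naive LSB embedding does not, and none of the names here come from the report.

```python
# Toy LSB watermark (illustrative only; not robust to compression or edits).
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least-significant bit of the first len(bits) pixels."""
    flat = image.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the least-significant bit of the first n_bits pixels."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # fake image
payload = rng.integers(0, 2, size=128, dtype=np.uint8)        # 128-bit mark

marked = embed_watermark(image, payload)
recovered = extract_watermark(marked, payload.size)
print(np.array_equal(payload, recovered))  # True
```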
A Coordinated Governance Architecture: Inter-Ministerial Group and Technical Secretariat
The report proposes an Inter-Ministerial AI Governance Group led by the Principal Scientific Adviser, bringing together MeitY, NITI Aayog, BIS, TEC, sectoral regulators (e.g., RBI, SEBI, IRDAI, TRAI, ICMR), and external experts to coordinate a whole-of-government roadmap, harmonise terminology and risk inventories, issue joint guidance, catalyse self-regulation, and stimulate the creation of Indian-context datasets for fairness evaluation. Supported by a MeitY-hosted Technical Secretariat staffed by deputed officers and external experts, this architecture would conduct horizon scanning, stakeholder mapping, and cross-domain risk assessment, and develop common metrics and frameworks (e.g., system/model cards, data provenance, security baselines, transparency reporting, and evaluation datasets), while co-examining solutions with industry and identifying gaps that need legal or capacity fixes, all without creating a new statutory body at this stage.
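As one illustration of what a common, machine-readable disclosure format might look like, the sketch below shows a minimal model card in Python. Every field name and value is a hypothetical placeholder; the actual schema would be defined by the Secretariat with industry and regulators.

```python
# Minimal machine-readable model card (all fields/values hypothetical).
import json

model_card = {
    "model_name": "example-hindi-asr",              # hypothetical system
    "developer": "Example Labs",                    # hypothetical developer
    "intended_use": "Hindi speech-to-text for customer support",
    "out_of_scope_uses": ["biometric identification", "surveillance"],
    "training_data_provenance": "sha256:aaa...",    # ties into provenance chain
    "evaluation": {
        "dataset": "indian-context-eval-v1",        # hypothetical eval set
        "word_error_rate": 0.12,
        "fairness_gap_across_dialects": 0.03,
    },
    "security_baseline": "secure-by-design checklist v1",
    "transparency_contact": "ai-governance@example.in",
}

# A shared, machine-readable schema lets regulators and deployers compare
# disclosures across models without bespoke parsing.
print(json.dumps(model_card, indent=2))
```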
Transparency and Risk Mitigation in Practice: Incidents, Voluntary Commitments, and Tech Safeguards
To build evidence and drive improvements, the Secretariat should stand up a confidential AI incident database (broader in scope than cybersecurity) focused on learning and harm mitigation: reporting would start with the public sector, encourage voluntary private-sector inputs, and could potentially be operated by CERT-In. In parallel, industry and government should adopt voluntary commitments for high-capability or widely deployed systems, including purpose disclosures, transparency reports, internal and external red-teaming, data quality and robustness testing, third-party peer review, conformity to Responsible AI principles, and strong security and continuity practices. The government should also evaluate watermarking, platform labelling, content provenance chains, and fact-checking; engage globally on standards; run nationwide awareness programmes; and promote Indian-context datasets and domain-specific risk assessment protocols to measure and mitigate bias.
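A minimal sketch of what a structured, learning-oriented incident report could look like follows, assuming hypothetical field names not drawn from the report.

```python
# Minimal learning-oriented AI incident report (assumed field names).
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIIncidentReport:
    sector: str                    # e.g. "finance", "health", "education"
    system_type: str               # e.g. "credit-scoring model", "LLM chatbot"
    harm_category: str             # e.g. "bias", "misinformation", "safety"
    severity: str                  # e.g. "low" / "medium" / "high"
    description: str               # what happened, with no identifying details
    mitigation_taken: Optional[str] = None
    reporter_channel: str = "public-sector"  # private reporting is voluntary

def anonymised_summary(report: AIIncidentReport) -> str:
    """Serialise for aggregate trend analysis; the database is confidential,
    so identifying details are excluded at the schema level by design."""
    return json.dumps(asdict(report), indent=2)

example = AIIncidentReport(
    sector="finance",
    system_type="credit-scoring model",
    harm_category="bias",
    severity="medium",
    description="Higher rejection rates observed for one demographic group.",
    mitigation_taken="Model retrained after a fairness audit.",
)
print(anonymised_summary(example))
```

Structured, anonymised reports of this kind would let the Secretariat spot cross-sector harm patterns without exposing the reporting organisation, which is what makes confidential, learning-focused reporting viable.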
Modernising the Legal Backbone: Inputs to the Digital India Act and the Path Ahead
A dedicated subgroup should feed actionable inputs into the proposed Digital India Act to strengthen harmonised legal tools, regulatory and technical capacity, and "digital-by-design" grievance redress and adjudication (e.g., online dispute resolution, expanded and specialised Grievance Appellate Committees and Adjudicating Officers, inclusion of external experts, and reduced forum overlaps), while maintaining a technology-agnostic, harm-focused approach and clarifying areas such as safe harbour and IP in an AI context. The overarching philosophy is harm minimisation through proportionate, activity-based regulation that can evolve into combined approaches as needed, leveraging RegTech and meaningful industry self-governance to keep rules lightweight yet effective, protect rights, and ensure India's AI ecosystem remains trustworthy, inclusive, and globally competitive.
Conclusion
India now has a clear, pragmatic roadmap to build Safe & Trusted AI, anchored in harm minimisation, a whole-of-government coordination model, and a digital-by-design, techno-legal approach that scales compliance without stifling innovation. By strengthening the application of existing laws, clarifying grey areas (notably IP and AI-driven bias), deploying practical safeguards (watermarking, provenance, labelling), and fostering meaningful industry self-governance through voluntary transparency and safety commitments, the framework balances agility with accountability. The near-term priorities are standing up the Inter-Ministerial Governance Group and the MeitY Technical Secretariat, launching the AI incident database, co-developing standards and datasets tailored to Indian contexts, and feeding actionable inputs into the Digital India Act; these steps will translate principles into practice. With active participation from government, industry, academia, and civil society during public consultation and implementation, India can deliver globally competitive AI that is trustworthy, inclusive, and rights-respecting, driving innovation while protecting citizens and societal integrity.
Summarized by: Sriman Mishra
Source: https://indiaai.s3.ap-south-1.amazonaws.com/docs/subcommittee-report-dec26.pdf
