Overview

Artificial Intelligence has moved from research labs to everyday life -- powering search engines, medical diagnostics, autonomous vehicles, and judicial decision-support tools. Yet the rapid deployment of AI raises profound governance questions: Who is accountable when an algorithm discriminates? How should governments regulate systems that evolve faster than legislation? Can AI-generated deepfakes undermine democracies? For UPSC, AI governance spans GS3 (Science & Technology, Economic Development) and GS4 (Ethics) -- questions test understanding of regulatory frameworks, India's policy approach, ethical implications, and global comparisons.

This chapter goes deep into AI governance, ethics, and India's policy architecture -- distinct from the broad overview of emerging technologies covered in Chapter 5.


AI Landscape — Key Concepts

Types of AI

| Type | Description | Current Status |
| --- | --- | --- |
| Narrow AI (ANI) | Designed for a specific task -- image recognition, language translation, chess | The only type that exists today; all deployed AI systems are narrow AI |
| General AI (AGI) | Hypothetical AI with human-level cognitive abilities across all domains -- reasoning, learning, creativity | Does not exist; remains a research aspiration; timelines debated (decades to never) |
| Super AI (ASI) | Hypothetical AI surpassing human intelligence in every domain | Purely theoretical; raises existential risk debates (Bostrom, Russell) |

Key AI Technologies

| Technology | What It Does |
| --- | --- |
| Machine Learning (ML) | Algorithms that learn patterns from data without being explicitly programmed; includes supervised, unsupervised, and reinforcement learning |
| Deep Learning | Subset of ML using artificial neural networks with multiple layers; powers image recognition, speech processing, and language models |
| Generative AI | AI that creates new content -- text (ChatGPT, Gemini), images (DALL-E, Midjourney), code, music -- based on patterns in training data |
| Natural Language Processing (NLP) | Enables machines to understand, interpret, and generate human language |
| Computer Vision | Enables machines to interpret visual information from images and videos |
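The ML row above can be made concrete with a short sketch: a classifier that is never given explicit rules, but infers a decision from labelled examples. This is an illustrative toy only (a one-nearest-neighbour classifier in plain Python); the dataset and function names are invented for this example, not drawn from the chapter.

```python
# Toy illustration of supervised machine learning: the model "learns"
# from labelled training examples rather than hand-written rules.

def nearest_neighbour_predict(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of ((feature1, feature2), label) pairs.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    closest = min(train, key=lambda pair: distance(pair[0], query))
    return closest[1]

# Hypothetical labelled data: (hours of study, hours of sleep) -> outcome.
training_data = [
    ((8, 7), "pass"),
    ((7, 8), "pass"),
    ((2, 4), "fail"),
    ((1, 6), "fail"),
]

print(nearest_neighbour_predict(training_data, (6, 7)))  # -> pass
print(nearest_neighbour_predict(training_data, (2, 5)))  # -> fail
```

The same pattern -- generalising from examples -- underlies deep learning, only with millions of parameters instead of a distance lookup, which is also why biased examples produce biased predictions.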

The AI Governance Challenge

Why AI Needs Governance

| Challenge | Detail |
| --- | --- |
| Algorithmic bias | AI systems trained on biased data reproduce and amplify societal inequalities -- in hiring, lending, criminal justice, and healthcare; Amazon's AI recruiting tool (scrapped 2018) penalised resumes containing the word "women's" |
| Transparency / black box problem | Deep learning models often cannot explain their decision-making process; a doctor or judge cannot understand why the AI reached a particular conclusion |
| Accountability gap | When an AI system causes harm (misdiagnosis, wrongful denial of loan, autonomous vehicle accident), legal liability is unclear -- is it the developer, deployer, or user? |
| Privacy | AI systems require massive datasets, often including personal data; facial recognition, surveillance, and behavioural profiling raise fundamental privacy concerns |
| Deepfakes | AI-generated synthetic media (video, audio, images) can impersonate real people, spread disinformation, manipulate elections, and enable fraud |
| Job displacement | Automation threatens jobs across sectors -- manufacturing, customer service, data entry, content creation; McKinsey estimates 400-800 million workers globally could be displaced by 2030 |
| Autonomous weapons | Lethal Autonomous Weapons Systems (LAWS) that can select and engage targets without human intervention raise fundamental ethical and legal questions |
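Algorithmic bias, unlike many governance problems, can be measured. One common check is the "disparate impact" ratio: the selection rate of a disadvantaged group divided by that of the advantaged group. The sketch below uses invented audit data; the 0.8 ("four-fifths") threshold comes from US employment-law practice and is cited here only as a reference point, not as part of any Indian framework.

```python
# Illustrative bias audit: compare selection rates across two groups.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'hire') outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Selection rate of group_a relative to group_b.

    Values below ~0.8 are often treated as a red flag for adverse impact.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model outputs: 1 = selected, 0 = rejected.
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 60% selected

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # -> 0.33, well below the 0.8 reference threshold
```

Mandatory audits of this kind for high-risk systems are exactly what the EU AI Act requires and what India's voluntary framework currently leaves to deployers.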

For Mains: AI governance is not just a technology question -- it is a governance, ethics, and rights question. The challenge is to regulate AI without stifling innovation. India's approach of "light-touch regulation" contrasts with the EU's comprehensive legislation. Discuss the merits and risks of each approach.


Global AI Regulatory Approaches

EU AI Act, 2024

| Feature | Detail |
| --- | --- |
| Adopted | June 2024; entered into force 1 August 2024; full applicability by 2 August 2026 |
| Approach | Risk-based classification -- the first comprehensive AI-specific legislation globally |
| Unacceptable risk (banned) | Social scoring by governments, real-time remote biometric identification in public spaces (with limited exceptions), manipulation of vulnerable groups, emotion recognition in workplaces/schools |
| High risk (regulated) | AI in critical infrastructure, education, employment, law enforcement, migration, justice; must meet transparency, data governance, human oversight, and accuracy requirements |
| Limited risk (transparency) | Chatbots, deepfakes -- users must be informed they are interacting with AI or viewing AI-generated content |
| Minimal risk (unregulated) | AI-enabled video games, spam filters -- the majority of current AI applications |
| Penalties | Up to EUR 35 million or 7% of global annual turnover for violations |
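The logic of the risk-based approach can be sketched as a simple lookup, purely for illustration: the entries below paraphrase the examples in the table and are not the Act's legal definitions; real classification under the Act turns on detailed legal criteria.

```python
# Illustrative only: the EU AI Act's four risk tiers as a lookup table.
# Use cases mirror the examples above; this is not the Act's legal text.

RISK_TIERS = {
    "social scoring by governments": "unacceptable (banned)",
    "emotion recognition in workplaces": "unacceptable (banned)",
    "AI in employment/recruitment": "high (regulated)",
    "AI in law enforcement": "high (regulated)",
    "chatbot": "limited (transparency duties)",
    "deepfake generator": "limited (transparency duties)",
    "spam filter": "minimal (unregulated)",
    "AI-enabled video game": "minimal (unregulated)",
}

def classify(use_case):
    """Look up the risk tier for a known use case; unknown cases need review."""
    return RISK_TIERS.get(use_case, "unknown -- needs legal assessment")

print(classify("spam filter"))                # -> minimal (unregulated)
print(classify("AI in employment/recruitment"))  # -> high (regulated)
```

The design point for exams: obligations scale with risk, so most everyday AI faces no new duties while a narrow band of uses is banned outright.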

US Approach

| Feature | Detail |
| --- | --- |
| Executive Order 14110 | Signed by President Biden on 30 October 2023 -- the most comprehensive US government AI governance action; required safety testing, red-teaming, and reporting for powerful AI models |
| Status | Rescinded by President Trump on 20 January 2025; replaced with an Executive Order emphasising deregulation and US AI leadership |
| Approach | Sectoral regulation rather than a single comprehensive law; agencies like the FDA, FTC, and EEOC apply existing frameworks to AI within their domains |
| Blueprint for an AI Bill of Rights | Released 2022 -- non-binding principles: safe systems, algorithmic discrimination protection, data privacy, notice and explanation, human alternatives |

China's AI Regulations

| Regulation | Year | Key Provisions |
| --- | --- | --- |
| Administrative Provisions on Deep Synthesis | January 2023 | Regulates deepfakes and synthetic content; requires labelling and traceability |
| Interim Measures for Generative AI Services | August 2023 | First binding regulation for generative AI globally; requires security assessments, algorithm filing with the Cyberspace Administration of China (CAC), content moderation, and adherence to "socialist core values" |
| AI Content Labelling Measures | September 2025 | Mandatory "Generated by AI" labels on all AI-generated content |
| Approach | -- | Technology-specific regulations rather than a single comprehensive law; prioritises state control over content and data |

For Prelims: EU AI Act = risk-based framework, 4 categories (unacceptable/high/limited/minimal risk), entered into force August 2024. US Biden EO 14110 on AI safety was rescinded by Trump in January 2025. China was the first country with binding generative AI regulations (August 2023).


India's AI Policy Framework

IndiaAI Mission

| Feature | Detail |
| --- | --- |
| Approved | 7 March 2024 by the Union Cabinet |
| Outlay | Rs 10,372 crore over 5 years |
| Implementing body | IndiaAI Independent Business Division (IBD) under Digital India Corporation (DIC), Ministry of Electronics and IT (MeitY) |
| Compute infrastructure | Rs 4,563 crore for scalable AI computing; 10,000+ GPU capacity for startups, researchers, and government |
| Innovation Centre | Rs 1,971 crore for the IndiaAI Innovation Centre (IAIC) -- R&D hub for foundational AI models |
| Startup financing | Rs 1,943 crore for AI startup ecosystem development |
| FutureSkills | Rs 883 crore for AI talent development and skilling programmes |
| Datasets platform | IndiaAI Datasets Platform to unify non-personal government datasets for AI training |
| AI safety | IndiaAI Safe AI pillar -- guidelines for responsible AI deployment |

NITI Aayog — Responsible AI for All (#AIForAll)

| Document | Date | Key Content |
| --- | --- | --- |
| National Strategy for AI | June 2018 | Identified 5 focus sectors: healthcare, agriculture, education, smart cities, smart mobility |
| Responsible AI Part 1: Principles | February 2021 | 7 principles derived from Constitutional values: (1) inclusive growth, non-discrimination and equity; (2) safety and reliability; (3) privacy and data protection; (4) transparency and explainability; (5) accountability and auditability; (6) human oversight; (7) social and environmental well-being |
| Responsible AI Part 2: Operationalisation | August 2021 | Framework for implementing the 7 principles in practice -- sector-specific guidance |

India's Global AI Engagement

| Initiative | Detail |
| --- | --- |
| GPAI (Global Partnership on AI) | India is a founding member (June 2020); elected Council Chair in November 2022, serving as lead chair through 2023; hosted the GPAI Summit in New Delhi, December 2023 |
| AI Safety Summit | India participated in the Bletchley Park AI Safety Summit (November 2023, UK) and the Seoul AI Safety Summit (2024) |
| Approach | India favours innovation-friendly, risk-proportionate regulation rather than prescriptive legislation; no dedicated AI law as of March 2026; regulation through existing frameworks (IT Act, DPDP Act 2023, sector-specific rules) |

For Mains: India's approach to AI regulation is distinct from the EU (comprehensive law) and the US (sectoral regulation). India relies on existing legal frameworks, voluntary guidelines (NITI Aayog), and mission-mode programmes (IndiaAI Mission). Critically evaluate whether this "light-touch" approach is adequate given AI's rapid proliferation and the risks of algorithmic bias, deepfakes, and job displacement in a country with India's demographic profile.


AI in Indian Governance

Key Applications

| Application | System / Initiative | Detail |
| --- | --- | --- |
| Judiciary | SUPACE (Supreme Court Portal for Assistance in Courts Efficiency) | Launched April 2021; AI-assisted tool for case management -- extracts facts, chronology, and precedents from case files; does NOT take decisions; assists judges and researchers |
| Agriculture | AI-powered crop advisory | ICAR and state governments deploying AI for pest prediction, yield estimation, and soil analysis |
| Healthcare | eSanjeevani CDSS | AI-based Clinical Decision Support System integrated into India's telemedicine platform; covers 300 symptoms with branching logic |
| Tax administration | Project Insight | AI-based data analytics for identifying tax evasion patterns |
| Smart cities | ICCC (Integrated Command & Control Centres) | AI-powered surveillance, traffic management, and civic service delivery in 100 smart cities |
| e-Courts Phase III | AI and blockchain | Rs 53.57 crore allocated for AI/ML in the judicial domain under eCourts Phase III (2023--2027) |

Deepfakes — Regulation and Risks

The Deepfake Challenge

| Aspect | Detail |
| --- | --- |
| What | AI-generated synthetic media -- realistic but fake videos, audio, and images of real people |
| Technology | Generative Adversarial Networks (GANs) and diffusion models enable increasingly convincing deepfakes |
| Risks | Election manipulation, non-consensual intimate imagery, financial fraud (CEO voice cloning), erosion of trust in authentic media |
| Scale | India ranks among the top 6 most deepfake-susceptible nations; incidents involving political leaders and celebrities have surged |

India's Regulatory Response

| Measure | Detail |
| --- | --- |
| IT Rules Amendment (2025) | Amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021; came into force 15 November 2025; defines "synthetically generated information" (SGI); mandates 3-hour takedown for flagged deepfake content (down from 36 hours) |
| MeitY advisories | Multiple advisories to social media intermediaries reminding them of due diligence obligations regarding AI-generated content |
| Section 66D, IT Act | Punishment for cheating by personation using a computer resource -- applicable to deepfake-based fraud |
| Existing criminal law | IPC/BNS provisions on defamation, obscenity, and impersonation apply to harmful deepfakes |

Autonomous Weapons — The LAWS Debate

| Aspect | Detail |
| --- | --- |
| What are LAWS | Lethal Autonomous Weapons Systems -- weapons that can select and engage targets without meaningful human control |
| UN process | Group of Governmental Experts (GGE) under the Convention on Certain Conventional Weapons (CCW) has been discussing LAWS since 2014 |
| UNGA Resolution (December 2024) | Adopted with 166 votes in favour, 3 against (Belarus, DPRK, Russia), 15 abstentions; called for urgent action towards a binding instrument |
| Key positions | Prohibitionists: ban all LAWS (Campaign to Stop Killer Robots); Regulators: new treaty with prohibitions + restrictions; Traditionalists: existing IHL is sufficient |
| Timeline | UN Secretary-General and ICRC call for treaty negotiations to conclude by end of 2026 |
| India's position | India participates in CCW GGE discussions; supports human control over the use of force; has not committed to a binding ban |

For Mains: "The development of Lethal Autonomous Weapons Systems raises fundamental questions about the ethics of delegating life-and-death decisions to machines." Discuss India's position and the prospects for an international treaty.


AI and Intellectual Property

| Issue | Detail |
| --- | --- |
| AI-generated works | Can AI-generated art, music, or text be copyrighted? Most jurisdictions (including India) require a human author; the Copyright Act, 1957 protects works by "authors" -- and AI is not an "author" |
| AI and patents | DABUS (an AI system) was denied patent inventorship by courts in the US, UK, and Australia; only natural persons can be "inventors" under most patent laws |
| Training data | AI models trained on copyrighted material raise infringement questions; lawsuits are ongoing globally (e.g. New York Times v. OpenAI) |
| India's position | No specific legislation; the existing IP framework applies; a Parliamentary Standing Committee has recommended a review of IP laws in the context of AI |

AI Ethics — Key Philosophical Frameworks

| Framework | Core Idea | Application to AI |
| --- | --- | --- |
| Utilitarianism | Maximise overall well-being; actions are judged by their outcomes | AI should be deployed where it maximises net benefit to society; but whose well-being counts, and who decides? |
| Deontological (Kantian) | Actions must respect universal moral rules and human dignity regardless of consequences | AI must never treat humans merely as means to an end; informed consent, transparency, and respect for autonomy are non-negotiable |
| Virtue ethics | Focus on the character and intentions of the moral agent | AI developers and deployers must cultivate responsibility, honesty, and fairness; "ethical AI" requires ethical humans |
| Rights-based | Certain fundamental rights (privacy, non-discrimination, due process) cannot be violated even for the greater good | AI must not infringe on fundamental rights regardless of efficiency gains; basis of the EU AI Act's approach |
| Justice as fairness (Rawlsian) | Inequalities are acceptable only if they benefit the least advantaged members of society | AI systems must be evaluated by their impact on the most vulnerable; bias that disproportionately harms marginalised groups is unjust |

For Mains (GS4 Ethics): AI ethics is not merely a technical problem -- it raises foundational questions about moral agency, accountability, justice, and what it means to be human. The "trolley problem" in autonomous vehicles (whom should the car save in an unavoidable accident?) illustrates how AI forces us to make explicit the moral choices that humans make implicitly every day.


AI and Data Protection — The DPDP Act Connection

| Feature | Detail |
| --- | --- |
| Digital Personal Data Protection Act, 2023 | India's first comprehensive data protection law; governs collection, processing, and storage of personal data |
| Relevance to AI | AI systems depend on massive datasets, often containing personal data; the DPDP Act imposes consent requirements, purpose limitation, and data minimisation -- directly affecting AI training and deployment |
| Data Principal rights | Right to access, correction, erasure of personal data; right to grievance redressal -- AI systems must respect these rights |
| Automated decision-making | The DPDP Act does not explicitly address algorithmic decision-making rights (unlike EU GDPR Article 22); this is a gap in India's framework |
| Cross-border data | Data can be transferred to countries not on the government's restricted list; enables AI model training on global cloud infrastructure |
| AI-specific gap | No right to explanation for AI-driven decisions; no mandatory algorithmic impact assessment; these may need to be addressed through sectoral regulations |

AI, Jobs, and the Future of Work

| Aspect | Detail |
| --- | --- |
| Displacement risk | Routine cognitive tasks (data entry, bookkeeping, basic coding, customer service) are most vulnerable; Goldman Sachs (2023) estimates generative AI could expose the equivalent of ~300 million full-time jobs globally to automation |
| Augmentation | AI augments professionals -- doctors (diagnostics), lawyers (research), teachers (personalised learning) -- rather than fully replacing them |
| India's challenge | India's demographic dividend depends on job creation; if AI automates service sector jobs (IT, BPO) before manufacturing absorbs surplus labour, the employment challenge intensifies |
| Policy responses | Reskilling programmes (IndiaAI FutureSkills), social safety nets, education reform to emphasise creativity, critical thinking, and human skills that AI cannot replicate |

Comparison of Global AI Governance Models

| Parameter | EU | USA | China | India |
| --- | --- | --- | --- | --- |
| Approach | Comprehensive, risk-based legislation | Sectoral regulation (no single AI law) | Technology-specific regulations; state-directed | Light-touch; existing frameworks + voluntary guidelines |
| Key instrument | EU AI Act (2024) | Executive Orders (rescinded); sector-specific rules | Generative AI Measures (2023); Deep Synthesis Provisions (2023) | IndiaAI Mission (2024); NITI Aayog principles (2021); IT Act/Rules; DPDP Act 2023 |
| Enforcement | Strong -- up to EUR 35 million / 7% turnover | Varies by sector; FTC, FDA enforcement | CAC enforcement; algorithm filing; content control | Through existing regulators; no AI-specific enforcement body |
| Innovation stance | Regulation-first; may slow innovation | Pro-innovation (post-2025); minimal regulation | State-guided innovation; control over content | Innovation-first; regulation later |
| Bias/fairness | Mandatory bias audits for high-risk AI | No binding requirement | Limited provisions | Voluntary (NITI Aayog principles) |
| Transparency | Mandatory for high-risk and limited-risk AI | Varies | Mandatory labelling for AI-generated content | Emerging (IT Rules Amendment on deepfakes) |

UPSC Relevance

Prelims Focus Areas

  • IndiaAI Mission: approved March 2024; Rs 10,372 crore; 5 years; MeitY
  • EU AI Act: risk-based framework; 4 categories; entered into force August 2024
  • NITI Aayog Responsible AI: 7 principles (February 2021); #AIForAll
  • GPAI: India founding member (2020); Council Chair (2022); hosted Summit December 2023
  • SUPACE: launched April 2021; AI-assisted tool for Supreme Court case management
  • Deepfake regulation: IT Rules Amendment 2025; 3-hour takedown; SGI definition
  • LAWS: UNGA resolution December 2024 -- 166 in favour

Mains Focus Areas

  • AI governance models -- EU (comprehensive), US (sectoral), China (state-directed), India (light-touch)
  • Algorithmic bias and discrimination -- implications for social justice and constitutional values
  • Deepfakes and the integrity of democratic processes
  • AI in governance -- potential and limitations (SUPACE, eSanjeevani CDSS, smart cities)
  • Job displacement vs augmentation -- India's demographic dividend at risk?
  • Autonomous weapons -- ethics of delegating lethal force to machines
  • Balancing innovation with regulation -- India's approach

Vocabulary

Algorithmic Bias

  • Pronunciation: /ˌalgəˈrɪðmɪk ˈbaɪəs/
  • Definition: Systematic and repeatable errors in an AI system's outputs that create unfair outcomes for particular groups, arising from biased training data, flawed model design, or unrepresentative datasets -- resulting in discrimination in areas such as hiring, lending, criminal sentencing, and healthcare.
  • Origin: "Algorithm" derives from the Latinised name (Algoritmi) of the 9th-century mathematician al-Khwarizmi; "bias" from Old French biais ("oblique, slant"); the concept gained prominence in the 2010s as AI systems were deployed at scale in high-stakes decision-making, exposing how historical inequalities embedded in training data are reproduced and amplified by machine learning models.

Deepfake

  • Pronunciation: /ˈdiːpfeɪk/
  • Definition: Synthetic media -- typically video, audio, or images -- created using deep learning techniques (especially Generative Adversarial Networks and diffusion models) that realistically depict a person saying or doing something they never actually said or did, posing risks to democratic integrity, personal reputation, and information ecosystems.
  • Origin: A portmanteau of deep learning + fake; coined in 2017 by a Reddit user who used AI to superimpose celebrity faces onto videos; the technology has since advanced rapidly, making detection increasingly difficult and regulatory intervention urgent.

Key Terms

IndiaAI Mission

  • Pronunciation: /ˈɪndiə eɪˈaɪ ˈmɪʃən/
  • Definition: India's flagship national programme for artificial intelligence, approved by the Union Cabinet on 7 March 2024 with an outlay of Rs 10,372 crore over 5 years, comprising seven pillars -- compute infrastructure (10,000+ GPUs), innovation centre, datasets platform, application development, FutureSkills (talent development), startup financing, and safe AI -- implemented by the IndiaAI Independent Business Division under Digital India Corporation, MeitY.
  • Context: India's approach to AI differs from the EU's regulation-first model; the IndiaAI Mission prioritises building foundational infrastructure (compute, data, talent) to enable India's AI ecosystem, while governance relies on existing legal frameworks and voluntary principles rather than a dedicated AI law.
  • UPSC Relevance: GS3 (Science & Technology, Economic Development). Prelims: budget (Rs 10,372 crore), approval date (March 2024), implementing body (MeitY/DIC). Mains: evaluate India's AI strategy in the context of global competition, the need for responsible AI, and the challenge of ensuring AI benefits reach all sections of society (#AIForAll).

EU AI Act

  • Pronunciation: /ˌiː ˈjuː eɪˈaɪ ækt/
  • Definition: The European Union's Artificial Intelligence Act, adopted in June 2024 and entered into force on 1 August 2024, establishing the world's first comprehensive, legally binding regulatory framework for AI based on a risk classification system -- prohibiting unacceptable-risk AI practices, imposing strict obligations on high-risk AI systems, requiring transparency for limited-risk systems, and leaving minimal-risk AI unregulated.
  • Context: The EU AI Act serves as a global benchmark for AI regulation, similar to how GDPR set the standard for data protection; it has extraterritorial application -- any AI system affecting EU citizens must comply, regardless of where the developer is based; penalties reach up to EUR 35 million or 7% of global turnover.
  • UPSC Relevance: GS3 (Science & Technology). Prelims: risk-based framework, 4 categories, entry into force (August 2024). Mains: compare India's light-touch approach with the EU's comprehensive regulation; discuss whether prescriptive AI legislation would help or hinder India's AI ambitions.

Sources: pib.gov.in (IndiaAI Mission, March 2024), NITI Aayog (Responsible AI #AIForAll, February 2021; August 2021), indiaai.gov.in, European Commission (EU AI Act, 2024), White House Archives (Executive Order 14110, October 2023), Cyberspace Administration of China (Generative AI Interim Measures, 2023), MeitY (IT Rules Amendment 2025 on deepfakes), Supreme Court of India (SUPACE), UNODA (LAWS GGE), UNGA Resolution 78/241 (December 2024)