AI Ethics
Subject: Science and Technology and Ethics
Context:
IndiaAI has invited expressions of interest (EOI) for
collaborative proposals to undertake ‘safe & trusted’ AI
projects, including establishing ethical AI frameworks,
building AI risk assessment and management tools, and
developing deepfake detection tools.
AI Ethics:
❖ Definition: AI ethics is the multidisciplinary study of
moral principles applied to AI systems, focusing on maximizing AI’s beneficial impact while minimizing
risks and adverse outcomes.
❖ Increasing adoption of AI in governance, medicine, education, finance, and defense raises critical ethical issues.
Key Ethical Principles for AI Development (From NITI Aayog’s Responsible AI Approach)
❖ Safety and Reliability: Ensuring AI systems perform safely in real-world conditions.
❖ Equality and Inclusivity: Preventing discrimination and ensuring equitable access to AI benefits.
❖ Privacy and Security: Safeguarding user data and protecting against misuse.
❖ Transparency: Making AI systems understandable and their decisions explainable.
❖ Accountability: Clearly defining responsibility for AI outcomes.
❖ Promotion of Human Values: Reinforcing ethical and positive societal norms.
Purpose of IndiaAI's proposal to call for ethical AI:
❖ To advance the responsible adoption of AI by creating guardrails for ethical use under the Safe & Trusted AI pillar of the IndiaAI Mission.
❖ To promote transparency, accountability, fairness, and societal awareness in AI applications.
Importance of AI Ethics
❖ Ensuring Fairness and Reducing Bias: AI systems can inherit biases from training data, leading to discriminatory practices (a short illustrative check for such bias appears after this list).
➢ A Capgemini survey revealed that 85% of organizations encountered ethical concerns in their AI implementations.
❖ Safeguarding Privacy and Data Protection: AI relies on large
datasets, raising concerns about misuse of personal information.
➢ Recently, ANI sued OpenAI for unauthorized use of copyrighted
news articles.
❖ Promoting Transparency and Accountability: Transparent AI ensures
public trust and helps assign responsibility for decisions.
➢ The "black box problem" in deep learning models makes AI
decision-making opaque, hindering accountability.
❖ Preventing Security Risks and Misuse: Adversarial attacks on AI
systems can lead to catastrophic outcomes in critical domains.
➢ AI-enabled cyberattacks and disinformation, such as deepfakes, are rising; e.g., the fake video of Ukrainian President Zelenskyy appearing to surrender during the Russia-Ukraine conflict.
❖ Fostering Inclusive Growth: Ethical AI ensures benefits reach all sections of society, avoiding digital divides.
➢ AI solutions like NIRAMAI (early breast cancer detection) must address inclusivity to ensure adoption
in rural areas.
➢ The "AI for All" strategy by India aims to democratize AI benefits across underserved communities.
❖ Building Public Trust in AI Systems: Ethical AI usage fosters trust, ensuring widespread adoption and
acceptance.
➢ Ethical guidelines for facial recognition by Microsoft and Google improved user trust.
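To make the fairness and bias point above concrete, the following is a minimal, illustrative Python sketch of a demographic parity (selection-rate) check. The applicant scores, group labels, and the 0.5 decision threshold are invented for this example and are not drawn from any system mentioned in these notes.
```python
# Illustrative only: a toy check for one common fairness metric,
# the demographic parity (selection-rate) gap between two groups.
# All records and the 0.5 threshold are made-up values.

from collections import defaultdict

# Hypothetical model outputs: (group label, model score for a positive decision)
scored_applicants = [
    ("group_a", 0.82), ("group_a", 0.41), ("group_a", 0.67), ("group_a", 0.73),
    ("group_b", 0.35), ("group_b", 0.58), ("group_b", 0.29), ("group_b", 0.44),
]

THRESHOLD = 0.5  # assumed decision cut-off

def selection_rates(records, threshold):
    """Fraction of each group receiving the positive decision."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score >= threshold:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(scored_applicants, THRESHOLD)
gap = max(rates.values()) - min(rates.values())

print("Selection rate by group:", rates)
print("Demographic parity gap:", round(gap, 2))
# A large gap flags possible bias inherited from training data and
# prompts a closer audit before deployment.
```
A large gap between group selection rates is a signal to audit the training data and the model before deployment, in line with NITI Aayog’s equality and inclusivity principle.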
AI ethics in other countries
Germany: The Institute for Ethics in Artificial Intelligence
❖ Human-centric engineering and a focus on the cultural and social anchoring of rapid advances in AI,
covering disciplines including philosophy, ethics, sociology, and political science.
United States: The AI Now Institute
❖ The social implications of AI, especially in the areas of rights and liberties, labour and automation, bias and inclusion, and safety and critical infrastructure.
United Kingdom: The Institute for Ethical AI & Machine Learning
❖ The Institute aims to empower everyone, from individuals to entire nations, to develop AI based on principles for responsible machine learning.
Belgium: AI4People
❖ The social impacts of AI, and the founding principles, policies, and practices upon which to build a 'good
AI society'.
Challenges for India in Ethical Use of AI
❖ Data Privacy and Protection: India’s Digital Personal Data Protection Act, 2023 has been enacted but is yet to be fully operationalised, making it difficult to regulate how AI systems collect and use data.
➢ The absence of a fully enforced privacy regime on the lines of the EU’s GDPR raises concerns about misuse of personal data by AI applications.
❖ Bias and Discrimination in AI Models: Training datasets often reflect social, cultural, and systemic biases,
leading to discriminatory AI decisions.
➢ A NITI Aayog report highlights the need for inclusive datasets that reflect India’s linguistic and cultural
diversity.
❖ Digital Divide: Unequal access to AI technologies and infrastructure limits ethical and inclusive AI adoption
in rural and underserved areas.
➢ AI applications like e-health solutions remain inaccessible to marginalized communities due to lack of
internet penetration.
❖ Dual-Use Nature of Technology: The dual-use nature of AI poses challenges for India, enabling both
beneficial applications (e.g., healthcare, security) and harmful misuse (e.g., surveillance, deepfakes),
requiring robust regulations, ethical oversight, and public awareness.
❖ Lack of AI Regulations and Standards: India lacks specific regulations to govern AI development and
deployment, leaving ethical concerns unaddressed.
➢ The Bureau of Indian Standards (BIS) recently initiated work on AI standards, but progress is slow.
❖ AI Governance: India lacks a centralized regulatory framework to govern the ethical use of AI across
sectors. Overlapping responsibilities between MeitY, NITI Aayog, and other bodies create ambiguities in AI
governance.
➢ A dedicated AI ethics and governance body, similar to Singapore's Model AI Governance Framework,
has not yet been established in India.
❖ Job Displacement and Economic Inequality: Rapid automation through AI could lead to large-scale
unemployment, especially in sectors like manufacturing and services.
➢ AI-powered chatbots and automation in customer service roles threaten millions of jobs in India’s BPO
sector.
➢ The McKinsey Global Institute estimates that up to 800 million jobs could be displaced globally by automation by 2030.
❖ Lack of Skilled Workforce: India faces a shortage of skilled professionals trained in AI ethics, governance,
and technical domains.
➢ Ethical AI design requires cross-disciplinary expertise in technology, law, and social sciences, which
is currently lacking.
➢ NASSCOM reported in 2021 that India needs 2 million AI professionals, but the current talent pool is
much smaller.
❖ Use of AI in Surveillance: AI-powered surveillance systems risk violating privacy and civil liberties if
misused.
➢ The deployment of facial recognition systems in public spaces has raised concerns about mass
surveillance without adequate oversight.
➢ A 2021 report by Freedom House classified India as "partly free" in terms of digital rights, highlighting
risks of AI misuse.
❖ International Human Rights Matters: The use of AI in areas like surveillance, predictive policing, and
border control risks violating international human rights standards.
➢ India is a signatory to the Universal Declaration of Human Rights, requiring adherence to principles
that protect individual freedoms and rights.
❖ Cross-Border Data Flow and Sovereignty: Dependence on foreign AI technologies and data storage raises
issues of data sovereignty and security.
➢ India banned TikTok in 2020 over data sovereignty and security concerns, alleging unauthorized access
to Indian user data by its China-based parent company, ByteDance.
❖ Public Awareness and Trust: Limited public understanding of AI ethics reduces trust and adoption.
➢ Misuse of deepfakes and AI-generated misinformation has increased skepticism about AI technologies.
➢ A Pew Research Center survey found that 60% of Indians are concerned about AI replacing jobs and
privacy violations.
RBI sets up 8-member panel to develop framework for ethical use of AI
❖ Objective:
➢ The Reserve Bank has constituted an eight-member committee to develop a Framework for
Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the Financial Sector.
➢ The committee will be headed by Dr. Pushpak Bhattacharyya, who serves as a Professor in the
Department of Computer Science and Engineering at IIT Bombay.