EAI Unit 2

Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purposes of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community. If you are not the addressee, you should not disseminate,
distribute or copy it through e-mail. Please notify the sender immediately by
e-mail if you have received this document by mistake and delete it from your
system. If you are not the intended recipient, you are notified that disclosing,
copying, distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
AD8013

ETHICS OF ARTIFICIAL INTELLIGENCE

BATCH / YEAR / SEMESTER: 2020-24 / IV / 07


1. CONTENTS

Sl. No.  Topic
1.       Contents
2.       Course Objectives
3.       Pre-Requisites (Course Name with Code)
4.       Syllabus (with Subject Code, Name, LTPC details)
5.       Course Outcomes (6)
6.       CO-PO/PSO Mapping
7.       Lecture Plan (S.No., Topic, No. of Periods, Proposed Date, Actual Lecture Date, pertaining CO, Taxonomy Level, Mode of Delivery)
8.       Activity Based Learning
9.       Lecture Notes (with links to videos, e-book references, PPTs, quizzes and other learning materials)
10.      Assignments (for higher-level learning and evaluation; examples: case study, comprehensive design, etc.)
11.      Part A Q & A (with K Level and CO)
12.      Part B Questions (with K Level and CO)
13.      Supportive Online Certification Courses (NPTEL, Swayam, Coursera, Udemy, etc.)
14.      Real-Time Applications in Day-to-Day Life and Industry
15.      Content Beyond Syllabus
16.      Assessment Schedule (Proposed Date & Actual Date)
17.      Prescribed Text Books & Reference Books
18.      Mini Project
2. Course Objectives

To understand the need for ensuring ethics in AI

To understand ethical issues in the development of AI agents

To apply ethical considerations in different AI applications

To evaluate the relation of ethics with nature

To overcome the risks to human rights and other fundamental values
3. Prerequisites

SUBJECT CODE: AD8013


SUBJECT NAME: ETHICS OF ARTIFICIAL INTELLIGENCE
4. Syllabus

AD8013  ETHICS OF ARTIFICIAL INTELLIGENCE        L T P C: 3 0 0 3

OBJECTIVES:
1: To understand the need for ensuring ethics in AI
2: To understand ethical issues in the development of AI agents
3: To apply ethical considerations in different AI applications
4: To evaluate the relation of ethics with nature
5: To overcome the risks to human rights and other fundamental values
UNIT I INTRODUCTION TO ETHICS OF AI 9
Role of Artificial Intelligence in Human Life, Understanding Ethics, Why
Ethics in AI?, Ethical Considerations of AI, Current Initiatives in AI and
Ethics, Ethical Issues in our Relationship with Artificial Entities

UNIT II FRAMEWORK AND MODELS 9


AI Governance by Human-right centered design, Normative models, Role
of professional norms, Teaching Machines to be Moral

UNIT III CONCEPTS AND ISSUES 9


Accountability in Computer Systems, Transparency, Responsibility and AI.
Race and Gender, AI as a moral right-holder
UNIT IV PERSPECTIVES AND APPROACHES 9
Perspectives on Ethics of AI, Integrating ethical values and economic value,
Automating origination, AI a Binary approach, Machine learning values,
Artificial Moral Agents

UNIT V CASES AND APPLICATIONS 9

Ethics of Artificial Intelligence in Transport, Ethical AI in Military, Biomedical
Research, Patient Care, Public Health, Robot Teaching, Pedagogy, Policy,
Smart City Ethics.
OUTCOMES:
At the end of this course, the students will be able to:
CO1: Understand the ethical issues in the development of AI agents
CO2: Learn the ethical considerations of AI with perspectives on ethical values
CO3: Apply ethical policies in AI-based applications and robot development
CO4: Implement AI concepts in societal problems by adapting legal concepts and securing fundamental rights
CO5: Overcome the evil genesis in the concepts of AI
REFERENCES:

1. Paula Boddington, “Towards a Code of Ethics for Artificial Intelligence”, Springer, 2017
2. Markus D. Dubber, Frank Pasquale, Sunit Das, “The Oxford Handbook of Ethics of
AI”, Oxford University Press Edited book, 2020
3. S. Matthew Liao, “Ethics of Artificial Intelligence”, Oxford University Press Edited
Book, 2020
4. N. Bostrom and E. Yudkowsky. “The ethics of artificial intelligence”. In W. M. Ramsey
and K. Frankish, editors, The Cambridge Handbook of Artificial Intelligence, pages
316–334. Cambridge University Press, Cambridge, 2014.
5. Wallach, W., & Allen, C., “Moral Machines: Teaching Robots Right from Wrong”, Oxford
University Press, 2008.
5. Course Outcomes

CO#   COs                                                                         K Level
CO1   Understand the ethical issues in the development of AI agents              K2
CO2   Learn the ethical considerations of AI with perspectives on ethical values K1
CO3   Apply ethical policies in AI-based applications and robot development      K3
CO4   Implement AI concepts in societal problems by adapting legal concepts
      and securing fundamental rights                                            K3
CO5   Overcome the evil genesis in the concepts of AI                            K4

Knowledge Level Description

K6 Evaluation

K5 Synthesis

K4 Analysis

K3 Application

K2 Comprehension

K1 Knowledge
6. CO – PO/PSO Mapping Matrix

CO#  Att.  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12  PSO1 PSO2 PSO3
CO1   1     3   3   3   1   1   1   -   -   2   1    2    1     -    -    -
CO2   2     3   2   3   1   2   1   -   -   2   1    2    1     -    -    -
CO3   3     3   3   3   1   2   1   -   1   2   1    2    1     -    -    -
CO4   4     3   3   3   1   2   1   -   -   2   1    2    1     -    -    -
CO5   5     3   3   3   1   2   1   -   -   2   1    2    1     -    -    -
7. LECTURE PLAN: UNIT II
FRAMEWORK AND MODELS

S.No  Proposed Date  Actual Date  Topic                                          CO   Highest Cognitive Level  Mode of Delivery  Delivery Resources
1     19.8.2023      19.8.2023    AI Governance by Human-right centered design   CO2  K6                       MD1, MD4 & MD5    T3
2     21.8.2023      21.8.2023    Normative models                               CO2  K6                       MD1, MD4 & MD5    T3
3     22.8.2023      22.8.2023    Role of professional norms                     CO2  K6                       MD1, MD4 & MD5    T3
4     23.8.2023      23.8.2023    Teaching Machines to be Moral                  CO2  K6                       MD1, MD4 & MD5    T3

(For each lecture, the LU outcome matches the topic.)

• ASSESSMENT COMPONENTS             MODE OF DELIVERY

• AC 1. Unit Test                   MD 1. Oral Presentation
• AC 2. Assignment                  MD 2. Tutorial
• AC 3. Course Seminar              MD 3. Seminar
• AC 4. Course Quiz                 MD 4. Hands On
• AC 5. Case Study                  MD 5. Videos
• AC 6. Record Work                 MD 6. Field Visit
• AC 7. Lab / Mini Project
• AC 8. Lab Model Exam
• AC 9. Project Review
8. Activity Based Learning: UNIT II
Lecture Notes – Unit 2
FRAMEWORK AND MODELS

Sl. No.  Contents
1        AI Governance by Human-right centered design
2        Normative models
3        Role of professional norms
4        Teaching Machines to be Moral

9. LECTURE NOTES
2.1 AI Governance by Human-right centered design:
AI Governance:
Artificial intelligence governance is the legal framework for ensuring that AI
and machine learning technologies are researched and developed with the goal of
helping humanity adopt and use these systems in ethical and responsible ways. As
the existing body of international norms designed to allow every human being a
life of liberty and dignity, human rights ought to be the foundation of AI
governance.
Four Components of Strong AI Governance:
Risk management: AI governance and responsible use ensures effective risk
management strategies, such as selecting appropriate training data sets,
implementing cybersecurity measures, and addressing potential biases or errors in
AI models.
Stakeholder involvement: Engaging stakeholders such as CEOs, data privacy
officers and users is vital for governing AI effectively. Stakeholders contribute to
decision-making, provide oversight, and ensure AI technologies are developed
and used responsibly over the course of their lifecycle.
Decision-making and Explainability: AI systems must be designed to make
fair and unbiased decisions. Explainability, or the ability to understand the reasons
behind AI outcomes, is important for building trust and accountability.
Regulatory compliance: Organizations must adhere to data privacy
requirements, accuracy standards and storage restrictions to safeguard sensitive
information. AI regulation helps protect user data and ensure responsible AI use.
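The "Decision-making and Explainability" component above can be made concrete in code. The sketch below shows a decision procedure that records which rule fired for each outcome, so the result can be traced afterwards; the rules, thresholds, and field names are illustrative assumptions, not a standard API.

```python
# A minimal sketch of an explainable decision: each rule that fires is
# recorded alongside the outcome, so the result can be audited later.
# The rules, thresholds, and field names are illustrative assumptions.

def score_with_reasons(applicant):
    """Return a decision together with the reasons that produced it."""
    score, reasons = 0, []
    if applicant.get("income", 0) >= 30000:
        score += 1
        reasons.append("income above threshold")
    if applicant.get("defaults", 0) == 0:
        score += 1
        reasons.append("no prior defaults")
    decision = "approve" if score >= 2 else "refer to human review"
    return decision, reasons

decision, reasons = score_with_reasons({"income": 40000, "defaults": 0})
print(decision)  # approve
print(reasons)   # ['income above threshold', 'no prior defaults']
```

Returning the reasons with every decision gives users the "ability to understand the reasons behind AI outcomes" that the component describes, and gives auditors a concrete artefact to inspect.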
Human rights are central to what it means to be human. They were drafted and
agreed internationally, with worldwide popular support, to define freedoms and
entitlements that would allow every human being to live a life of liberty and dignity.
Those fundamental human rights have been interpreted and developed over decades
to delineate the parameters of fairness, equality and liberty for every individual.
Now, artificial intelligence (AI) is redefining what it means to be human. Its systems
and processes have the potential to alter the human experience fundamentally. AI will
affect not only public policy areas such as road safety and healthcare, but also human
autonomy, relationships and dignity. It will affect lifestyles and professions, as well as
the future course of human development and the nature and scale of conflicts. It will
change the relationships between communities and those between the individual, the
state and corporations.
AI offers tremendous benefits for all societies but also presents risks. These risks
include a further division between the privileged and the unprivileged, and the
erosion of individual freedoms through ubiquitous surveillance.
AI governance by human-rights centred design refers to the approach of developing,
deploying, and regulating artificial intelligence (AI) technologies in a way that prioritizes
and upholds fundamental human rights. This approach recognizes the potential benefits
of AI while also addressing the ethical and social challenges it presents.
Here are some key principles and considerations for AI governance through a human-
rights centred design:
1. Respect for Human Rights: The design, development, and deployment of AI
systems should align with internationally recognized human rights standards, such as
the Universal Declaration of Human Rights and other relevant treaties. AI should not
infringe upon rights like privacy, freedom of expression, non-discrimination, and due
process.
2.Transparency and Accountability: AI systems should be transparent and
understandable. Users should have clear insights into how AI decisions are made, and
there should be mechanisms in place to hold both developers and users accountable for
AI-generated outcomes.
3. Inclusive Development: The design and development of AI systems should
involve diverse and representative stakeholders, including marginalized communities
and individuals who may be affected by the technology. This helps prevent biased or
discriminatory outcomes and ensures that AI benefits all members of society.
4. Data Privacy and Security: AI applications must respect individuals' privacy and
ensure the security of their data. This involves implementing robust data protection
measures, obtaining informed consent for data usage, and minimizing the risks of
unauthorized access or data breaches.
5. Non-Discrimination: AI systems should not perpetuate or amplify biases or
discrimination based on factors such as race, gender, religion, sexual orientation, or
socioeconomic status. Bias detection and mitigation techniques should be integrated
into AI development processes.
6. Human Oversight and Control: While AI can enhance decision-making, ultimate
control and responsibility should remain with humans. There should be mechanisms for
human intervention, appeal, and oversight when AI-generated decisions have significant
impacts.
7. Benefit and Risk Assessment: Prior to deployment, AI systems should undergo
comprehensive assessments of potential benefits and risks, including their impact on
human rights. Mitigation strategies should be developed to address any identified risks.
8. Regulation and Policy: Governments, industry, and civil society should collaborate
to establish clear regulatory frameworks and policies that ensure the responsible use of
AI technology while safeguarding human rights.
9. Education and Awareness: Promote public understanding of AI, its capabilities,
and its potential impact on human rights. This empowers individuals to make informed
decisions about AI use and encourages public discourse on AI ethics and governance.
10. Continuous Monitoring and Adaptation: AI governance should be an ongoing
process, adapting to technological advancements and changing social contexts. Regular
reviews and updates of AI systems and regulations are necessary to ensure alignment
with human rights principles.
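Principle 5 (Non-Discrimination) above can be checked with simple audits. The sketch below computes a demographic parity gap, the difference in approval rates between groups, over hypothetical decision records; the group labels and data are assumptions for illustration, and real audits would use established fairness toolkits and several complementary metrics.

```python
# A minimal demographic parity audit over (group, approved) records.
# The group labels and records are hypothetical; production audits
# would combine several fairness metrics, not just this one.

def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
print(round(parity_gap(records), 3))  # 0.333: group A approved at 2/3, group B at 1/3
```

A gap near zero suggests similar treatment across groups; a large gap is a signal to investigate the model and its training data, as the principle requires.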
By adopting a human-rights centred design approach to AI governance, societies
can harness the benefits of AI technology while safeguarding the rights and well-being
of individuals and communities. This approach requires collaboration among
governments, industry, academia, and civil society to create a responsible and ethical AI
ecosystem.
2.2 Normative Modes/Models:
Normative modes of AI refer to established frameworks, codes, and standards that provide
guidance and regulation for the development, deployment, and use of artificial intelligence
(AI) technologies. These models provide a foundation for creating AI systems that align with
societal values, respect human rights, and promote the well-being of individuals and
communities. Normative models help guide the behavior of AI developers, researchers,
policymakers, and other stakeholders to ensure that AI is used in a responsible and ethical
manner.
Here are some key normative modes of AI, including codes of ethics and standards:
1. Ethical Guidelines and Codes of Conduct:
• IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems:
The IEEE has developed a series of documents, including "Ethically Aligned
Design" and "P7000 - Model Process for Addressing Ethical Concerns During
System Design," which provide guidance on ethical considerations in AI design
and development.
• ACM Code of Ethics and Professional Conduct: The Association for
Computing Machinery (ACM) has established a code of ethics that outlines
principles and guidelines for ethical behavior in computing, including AI.
• Partnership on AI: An organization that brings together stakeholders from
academia, industry, and civil society to develop and promote best practices for AI
that prioritize ethical considerations.
2. Technical Standards and Guidelines:
• ISO/IEC JTC 1/SC 42: This international standardization committee focuses on
the standardization of AI and covers topics such as AI terminology,
trustworthiness, and data governance.
• NIST Framework for AI Trustworthiness: The U.S. National Institute of
Standards and Technology (NIST) has developed a framework that outlines
principles and best practices for building trustworthy AI systems.
• EU Ethics Guidelines for Trustworthy AI: The European Commission has
published guidelines for trustworthy AI, emphasizing human agency, transparency,
accountability, and societal well-being.
3. AI Regulatory Frameworks:
• General Data Protection Regulation (GDPR): While not specific to AI,
GDPR sets rules for data protection and privacy that impact AI applications
involving personal data.
• AI Act (EU Proposal): The European Union has proposed regulations for
AI systems that pose high risks, aiming to ensure safety, transparency, and
accountability in their deployment.
4. Domain-Specific Standards:
• Healthcare: Organizations like the American Medical Association (AMA)
provide guidelines for the ethical use of AI in healthcare, emphasizing
patient safety, data privacy, and informed consent.
• Automotive: ISO 21448, also known as "Road vehicles -- Safety of the
intended functionality," provides guidelines for the safe deployment of AI
systems in vehicles.
5. Professional Organizations' Codes:
• Many professional organizations related to specific industries have
developed codes of ethics that address the use of AI technologies within
their respective domains. For example, medical associations, legal
organizations, and financial institutions may have guidelines for the
responsible use of AI.
6. Local and National Initiatives:
• Many countries and regions are developing their own AI strategies, policies,
and regulatory frameworks that include ethical considerations and guidelines
for AI development and deployment.
These normative modes provide a foundation for AI practitioners, researchers,
policymakers, and organizations to ensure that AI technologies are developed and used
in ways that align with ethical principles, legal requirements, and societal values. They
help create a framework for responsible and trustworthy AI innovation and contribute
to building public trust in AI technologies.
2.3 The Role of Professional Norms in the Governance of Artificial Intelligence:
Professional norms play a crucial role in the governance of artificial intelligence (AI) by
providing a framework of ethical principles, standards, and guidelines that guide the
behavior and practices of individuals and organizations involved in the development,
deployment, and use of AI technologies. These norms contribute to the responsible and
ethical development of AI, promote accountability, and help address the societal,
ethical, and legal challenges associated with AI.

Here's how professional norms contribute to the governance of AI:


1. Ethical Alignment: Professional norms ensure that AI practitioners align their work
with ethical considerations and values. By adhering to these norms, AI professionals
contribute to the development of AI technologies that respect human rights, promote
fairness, and avoid harmful consequences.
2. Accountability and Responsibility: Professional norms hold AI practitioners
accountable for the outcomes of their work. Practitioners are expected to take
responsibility for addressing biases, errors, and unintended consequences in AI
systems, fostering a culture of accountability in the AI community.
3. Transparency and Trust: Norms around transparency require AI professionals to
provide clear and understandable explanations of AI technologies to users,
stakeholders, and the public. This transparency fosters trust and helps mitigate
concerns about the opacity of AI decision-making.
4. Bias Mitigation and Fairness: Professional norms guide AI practitioners in
addressing biases and ensuring fairness in AI algorithms and systems. By
incorporating techniques for bias detection and mitigation, AI professionals work to
minimize discriminatory outcomes.
5. Data Privacy and Security: Norms related to data privacy ensure that AI
practitioners handle data responsibly, protecting individuals' personal information and
complying with relevant data protection regulations.
6. Human-Centered Design: AI professionals are encouraged to prioritize human
well-being and values in the design and deployment of AI technologies. This focus on
human-centered design ensures that AI systems align with the needs and aspirations of
individuals and communities.

7. Interdisciplinary Collaboration: Professional norms encourage interdisciplinary


collaboration among AI practitioners, ethicists, legal experts, policymakers, and other
stakeholders. This collaboration helps address the multifaceted ethical, legal, and social
dimensions of AI governance.

8. Continuous Learning and Improvement: AI practitioners are expected to engage


in continuous learning and professional development to stay updated on the latest
advancements, ethical considerations, and best practices in AI.

9. Public Engagement and Education: Norms promote engagement with the public
and stakeholders, enabling open discussions about AI technologies' benefits, risks, and
ethical implications. Public input informs AI development and governance.

10. Regulatory Compliance: Professional norms often align with regulatory


requirements and industry standards. Adhering to these norms helps AI practitioners
comply with legal frameworks and ensures that AI technologies meet ethical and legal
expectations.

11. Global Consistency: Professional norms provide a common set of guidelines that
can foster global consistency in AI governance. They contribute to harmonized practices
and standards across different regions and industries.

12. Adaptation and Evolution: Professional norms are not static; they evolve to
address emerging challenges and advancements in AI. This adaptability ensures that AI
governance remains relevant and effective over time.

By guiding the behavior and practices of AI practitioners, professional norms contribute to


a comprehensive and holistic approach to AI governance. They promote the responsible
and ethical development of AI technologies, ensuring that they benefit society while
minimizing risks and potential harms.
2.4 Teaching machines to be moral:

Teaching machines to be moral is a complex and multidisciplinary challenge that


involves combining ethical principles, technical expertise, and societal considerations. While
machines may not possess consciousness or emotions, they can be programmed to follow
ethical guidelines and make decisions that align with human values.

Here are some steps and considerations for teaching machines to be moral:
1. Define Ethical Principles: Begin by defining a clear set of ethical principles or
guidelines that you want the machines to follow. These principles should reflect
universally accepted moral values and standards.
2. Collect Ethical Data: Develop a comprehensive dataset that captures various ethical
scenarios, dilemmas, and decisions. This dataset should cover a wide range of
situations to help machines understand and apply ethical principles in different
contexts.
3. Machine Learning and AI Algorithms:
• Use machine learning techniques to train models to recognize ethical
considerations in data. For example, you can train models to identify biased
content, offensive language, or potential harm.
• Develop algorithms that can reason about ethical dilemmas and make decisions
based on predefined ethical principles.
4. Ethical Constraints: Program machines with explicit constraints that prevent them
from taking actions that violate ethical norms. These constraints can act as safeguards
to ensure that machines do not engage in harmful or immoral behaviors.
5. Explainability and Transparency: Design AI systems to provide explanations for
their decisions. Transparency helps users understand why a particular decision was
made and promotes accountability.
6. Human Oversight and Intervention:
• Implement mechanisms for human oversight and intervention in critical
situations. Humans can review and override machine decisions to ensure that
they align with moral values.
• Develop protocols for handling scenarios where machines encounter ethical
dilemmas without clear answers.
7. Continuous Learning and Adaptation:
• Allow AI systems to learn from their interactions and outcomes. Machines can
refine their ethical decision-making capabilities over time based on feedback
and experiences.
• Regularly update the models and algorithms to address emerging ethical
challenges and changes in societal values.

8. Collaboration Across Disciplines: Collaboration between ethicists, AI researchers,


psychologists, sociologists, and other experts is essential. Ethicists can provide guidance
on ethical principles, while AI experts contribute technical knowledge.

9. Public Input and Engagement: Involve the public, stakeholders, and affected
communities in discussions about the ethical behavior of AI systems. Public input helps
shape the development and governance of AI technologies.

10. Regulation and Standards: Governments and regulatory bodies can establish
standards and guidelines for ethical AI development. Compliance with these regulations
can ensure that AI systems adhere to minimum ethical requirements.

11. Global Considerations: Take into account cultural, societal, and regional variations
in ethical values when teaching machines to be moral. Different societies may have
distinct perspectives on morality.
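Steps 4 and 6 above (explicit ethical constraints, and human oversight and intervention) can be sketched as a small decision pipeline: hard constraints block forbidden actions outright, and high-risk cases are routed to a human reviewer. The action names, risk scores, and threshold here are illustrative assumptions, not a standard interface.

```python
# A minimal sketch of steps 4 and 6: hard ethical constraints plus a
# human-in-the-loop escalation path. Action names, risk scores, and
# the threshold are illustrative assumptions only.

FORBIDDEN = {"disclose_personal_data", "discriminate", "deceive_user"}
RISK_THRESHOLD = 0.8  # above this, a human must review the decision

def govern(action, risk, human_review):
    """Apply hard constraints first, then escalate risky cases to a human."""
    if action in FORBIDDEN:
        return f"blocked: {action}"      # constraint violated, never executed
    if risk >= RISK_THRESHOLD:
        return human_review(action)      # human can confirm or override
    return f"auto-approved: {action}"

print(govern("recommend_article", 0.1, lambda a: "reviewed"))
print(govern("disclose_personal_data", 0.1, lambda a: "reviewed"))
print(govern("flag_account", 0.95, lambda a: f"human-reviewed: {a}"))
```

The ordering matters: constraints act as unconditional safeguards, while the risk threshold decides when ultimate control passes back to a human, as step 6 requires.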

Teaching machines to be moral is an ongoing and evolving process that


requires interdisciplinary collaboration, continuous improvement, and a deep
understanding of both ethical principles and AI technologies. It's important to strike a
balance between technical feasibility, ethical considerations, and real-world application to
ensure that AI systems contribute positively to society.
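As a toy illustration of step 3 above (training models to recognize ethical considerations such as harmful content), the sketch below uses a keyword screen in place of a learned classifier. Real systems train on labelled data; the word list here is purely an assumption for illustration.

```python
# A toy stand-in for a harmful-content classifier: a keyword screen.
# Real systems learn this from labelled data; the word list is an
# illustrative assumption only.

HARM_TERMS = {"attack", "steal", "threaten"}

def flag_text(text):
    """Return True if the text contains any term on the harm list."""
    words = set(text.lower().split())
    return bool(words & HARM_TERMS)

print(flag_text("please threaten the user"))  # True
print(flag_text("please greet the user"))     # False
```

Even this trivial screen shows the pattern the step describes: a detector whose output feeds the ethical-constraint layer that decides whether an action proceeds.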
10. Assignments: UNIT II
(CO2, K6)

Online certification Course to be completed mandatorily.

Certification Course Name:


Ethics of Artificial Intelligence

Link to access the course:


https://www.coursera.org/learn/ethics-of-artificial-intelligence?action=enroll#modules
11. PART A - QUESTION AND ANSWERS
1. Define AI Governance.
Artificial intelligence governance is the legal framework for ensuring that AI and machine
learning technologies are researched and developed with the goal of helping humanity adopt
and use these systems in ethical and responsible ways. As the existing body of international
norms designed to allow every human being a life of liberty and dignity, human rights
ought to be the foundation of AI governance.
2. List 4 components of Strong AI Governance framework.
Risk management: AI governance and responsible use ensures effective risk
management strategies, such as selecting appropriate training data sets, implementing
cybersecurity measures, and addressing potential biases or errors in AI models.
Stakeholder involvement: Engaging stakeholders such as CEOs, data privacy officers
and users is vital for governing AI effectively. Stakeholders contribute to decision-
making, provide oversight, and ensure AI technologies are developed and used responsibly
over the course of their lifecycle.
Decision-making and Explainability: AI systems must be designed to make fair and
unbiased decisions. Explainability, or the ability to understand the reasons behind AI
outcomes, is important for building trust and accountability.
Regulatory compliance: Organizations must adhere to data privacy requirements,
accuracy standards and storage restrictions to safeguard sensitive information. AI
regulation helps protect user data and ensure responsible AI use.
3. What is the Role of AI in good Governance?
AI automation can help streamline administrative processes in government agencies, such as
processing applications for permits or licenses, managing records, and handling citizen
inquiries. By automating these processes, governments can improve efficiency, reduce errors,
and free up staff time for higher-value tasks.
4. Define Normative Models/Modes:
Normative modes of AI refer to established frameworks, codes, and standards that provide
guidance and regulation for the development, deployment, and use of artificial intelligence
(AI) technologies. These models provide a foundation for creating AI systems that align with
societal values, respect human rights, and promote the well-being of individuals and
communities. Normative models help guide the behavior of AI developers, researchers,
policymakers, and other stakeholders to ensure that AI is used in a responsible and ethical
manner.
5. What are the AI regulatory frameworks of Normative Models?
General Data Protection Regulation (GDPR): While not specific to AI, GDPR sets rules for
data protection and privacy that impact AI applications involving personal data.
AI Act (EU Proposal): The European Union has proposed regulations for AI systems that
pose high risks, aiming to ensure safety, transparency, and accountability in their deployment.
6. What are the Ethical Guidelines and Codes of Conduct on Normative modes of
AI?
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE
has developed a series of documents, including "Ethically Aligned Design" and "P7000 -
Model Process for Addressing Ethical Concerns During System Design," which provide
guidance on ethical considerations in AI design and development.
ACM Code of Ethics and Professional Conduct: The Association for Computing
Machinery (ACM) has established a code of ethics that outlines principles and guidelines for
ethical behavior in computing, including AI.
Partnership on AI: An organization that brings together stakeholders from academia,
industry, and civil society to develop and promote best practices for AI that prioritize ethical
considerations.
7. What are the Domain Specific Standards of Normative modes of AI?
Healthcare: Organizations like the American Medical Association (AMA) provide guidelines
for the ethical use of AI in healthcare, emphasizing patient safety, data privacy, and informed
consent.
Automotive: ISO 21448, also known as "Road vehicles -- Safety of the intended
functionality," provides guidelines for the safe deployment of AI systems in vehicles.
8. What are the roles of professional norms in AI governance?
Professional norms play a crucial role in the governance of artificial intelligence (AI)
by providing a framework of ethical principles, standards, and guidelines that guide the
behavior and practices of individuals and organizations involved in the development,
deployment, and use of AI technologies. These norms contribute to the responsible and
ethical development of AI, promote accountability, and help address the societal, ethical, and
legal challenges associated with AI.
9. What are 3 main concerns about the ethics of AI?
Lack of transparency of AI tools: AI decisions are not always intelligible to humans.
Surveillance practices for data gathering and privacy of court users.
Fairness and risk for Human Rights and other fundamental values.
10.What is Teaching machines to be moral?
Teaching machines to be moral is a complex and multidisciplinary challenge that
involves combining ethical principles, technical expertise, and societal considerations.
While machines may not possess consciousness or emotions, they can be programmed
to follow ethical guidelines and make decisions that align with human values.
11. List down the steps to teach machines to be moral.
❖ Define Ethical Principles
❖ Collect Ethical Data
❖ Machine Learning and Model Training
❖ Ethical Constraints
❖ Explainability and Transparency
❖ Human Oversight and Intervention
❖ Collaboration Across Disciplines
❖ Public Input and Engagement
❖ Follow Regulation and Standards
12. How professional norms contribute to the governance of AI?
❖ Ethical Alignment
❖ Accountability and Responsibility
❖ Transparency and Trust
❖ Bias Mitigation and Fairness
❖ Data Privacy and Security
❖ Human-Centered Design
13. Define the term “Collaboration across discipline”
Collaboration between ethicists, AI researchers, psychologists, sociologists, and other
experts is essential. Ethicists can provide guidance on ethical principles, while AI experts
contribute technical knowledge.
14. What are Global and National Initiatives?
Global and national initiatives on AI are efforts by governments, international organizations,
research institutions, and industry groups to advance the development, regulation, and
responsible use of artificial intelligence technologies. These initiatives aim to address
various aspects of AI, including research, innovation, ethics, policy, and governance.
15. List any 2 Global Initiatives.
Global Forum on AI for Humanity: Organized by the French government, this forum
brings together stakeholders to discuss the societal impacts of AI and develop
recommendations for global governance.
World Economic Forum's Centre for the Fourth Industrial Revolution: A platform
that engages governments, businesses, and civil society to shape policies and governance
frameworks for emerging technologies, including AI.
16. List any 2 National Initiatives.
National AI Strategies: Many countries have developed their own national strategies to
promote AI research, innovation, and adoption. Examples include the U.S. National AI
Initiative, Canada's Pan-Canadian AI Strategy, and Germany's AI Strategy.
AI Ethics Initiatives: Some countries are actively working on AI ethics guidelines and
frameworks. For instance, the European Union's Ethics Guidelines for Trustworthy AI and the
UAE's AI Ethics Guidelines are examples of national efforts in this direction.
17. What are Ethical Principles of Normative modes of AI?
Normative modes of AI emphasize the importance of adhering to ethical principles in the
design and operation of AI systems. These principles include fairness, transparency,
accountability, privacy, and human autonomy. AI developers and practitioners are expected to
ensure that their systems do not perpetuate biases, are open about their decision-making
processes, and can be held accountable for their actions.
18. Sketch the components of an AI governance framework.
12. Part-B Questions : UNIT IV

Q.No.  Question                                                            CO    K Level
1      Explain in detail AI governance by human-rights-centered design.    CO2   K2
2      Explain the normative modes/models of AI.                           CO2   K2
3      Discuss in detail the role of professional norms in the
       governance of artificial intelligence.                              CO2   K2
4      Explain in detail how to teach machines to be moral.                CO2   K2
13. Supportive Online Certification Courses

UNIT IV

Sl.No  Course                              Platform
1      Ethics of Artificial Intelligence   Coursera
14. REAL TIME APPLICATIONS : UNIT I
AI Application in E-Commerce
1. Personalized Shopping
Artificial Intelligence technology is used to create recommendation engines through
which you can engage better with your customers. These recommendations are made
in accordance with their browsing history, preference, and interests. It helps in
improving your relationship with your customers and their loyalty towards your brand.
2. AI-Powered Assistants
Virtual shopping assistants and chatbots help improve the user experience while
shopping online. Natural Language Processing is used to make the conversation sound
as human and personal as possible. Moreover, these assistants can have real-time
engagement with your customers. On amazon.com, for instance, a growing share of
customer-service interactions is already handled by chatbots.
3. Fraud Prevention
Credit card fraud and fake reviews are two of the most significant issues that e-commerce
companies deal with. By analyzing usage patterns, AI can help reduce
the possibility of credit card fraud taking place. Many customers prefer to buy a product
or service based on customer reviews. AI can help identify and handle fake reviews.
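The "usage patterns" idea can be sketched as simple anomaly detection on a customer's transaction history: a purchase far outside the customer's typical spending is flagged for review. This is a toy illustration with an invented threshold and data, not a production fraud model:

```python
import statistics

def flag_anomaly(amounts, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates more than z_threshold
    standard deviations from the customer's purchase history."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    z = abs(new_amount - mean) / stdev
    return z > z_threshold

history = [25.0, 40.0, 30.0, 35.0, 28.0]  # typical purchase amounts
print(flag_anomaly(history, 32.0))    # ordinary amount -> False
print(flag_anomaly(history, 950.0))   # far outside the pattern -> True
```

Real systems combine many such signals (location, device, merchant, timing) in learned models, but each signal follows this same "deviation from the customer's own pattern" logic.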
15. Content Beyond Syllabus
Ethical AI in the Government & Private Sector
Both government and private sector organizations have important roles to play in promoting
ethical AI.
Government organizations have a responsibility to establish regulations and guidelines for
the development and use of AI, in order to protect citizens’ rights and ensure that AI is used
responsibly and ethically. This can include measures to protect citizens’ privacy, prevent
discrimination, and ensure that AI systems are transparent and accountable. Government
organizations can also invest in research and development to support the development of
ethical AI and can provide funding and resources for the training and education of AI
professionals.
Private sector organizations, on the other hand, have a responsibility to ensure that their
own AI systems and practices are in compliance with relevant regulations and guidelines.
They should establish internal review processes to ensure that their AI systems are aligned
with human values and should be transparent about the data they are collecting and how it
is being used. Private sector organizations should also invest in building a culture of ethics
within the company and provide their employees with training and education on AI ethics.
In addition, both government and private sector organizations can work together to promote
ethical AI by collaborating on research and development, sharing best practices, and
participating in industry-wide initiatives and standards-setting bodies.
It’s important to note that promoting ethical AI is a shared responsibility and requires a
collaborative effort between the government, the private sector, and society at large.
16. Assessment Schedule

Tentative schedule for the assessments during the 2022-2023 odd semester:

S.No  Name of the Assessment   Start Date  End Date  Portion
1     Unit Test I                                    Unit I
2     IAT 1                                          Unit I & II
3     Unit Test II                                   Unit III
4     IAT II                                         Unit III & IV
5     Revision I                                     Unit V, I & II
6     Revision II                                    Unit III & IV
7     Model                                          All 5 Units
17. Prescribed Text & Reference Books

Sl.No  Book Name & Author                                                     Book
1      Paula Boddington, "Towards a Code of Ethics for Artificial             Reference PDF
       Intelligence", Springer, 2017
2      Markus D. Dubber, Frank Pasquale, Sunit Das, "The Oxford Handbook      Book
       of Ethics of AI", Oxford University Press, 2020
3      S. Matthew Liao, "Ethics of Artificial Intelligence", Oxford           Book
       University Press, 2020
4      N. Bostrom and E. Yudkowsky, "The Ethics of Artificial                 Reference PDF
       Intelligence", in W. M. Ramsey and K. Frankish (eds.), The
       Cambridge Handbook of Artificial Intelligence, pages 316-334,
       Cambridge University Press, Cambridge, 2014
5      Wallach, W., & Allen, C., "Moral Machines: Teaching Robots Right       Book
       from Wrong", Oxford University Press, 2008
18. Mini Project Suggestions

Develop any one of the following web applications:

1. Stock Prediction
2. House Security
3. Loan Eligibility Prediction
4. Resume Parser
5. Consumer Sentiment Analysis
Thank you