Project 2 Process Assignment - Riya Thomas
Template
Name - Riya Thomas
Student ID - 301321511
UNIT 6
1. Title of the source:
“AI can be a force for good or ill in society, so everyone must shape it, not just the ‘tech
guys’” by Afua Bruce
2. Summary of Source (100 words)
In “AI can be a force for good or ill in society, so everyone must shape it, not just the
‘tech guys’”, Afua Bruce (2023) argues that society, not only designers, must be involved in
shaping AI, a technology that brings both benefits and harms. Designers have a great deal of
say in how AI tools work, and corporate executives, government agencies, and non-profit
groups can use their influence to decide when and how AI systems are used. However, in
order for AI to have a positive impact on society, it is critical to prevent over-reliance on
the technology by establishing a balanced approach.
3. Discuss some of the strengths and flaws found in your source (100 words)
Bruce (2023) argues that AI is a tool created by humans and that communities, not just the
“tech guys”, should therefore be involved in shaping it. One strength of the article is its use
of concrete examples, such as ChatGPT declining to answer questions of opinion in order to
avoid controversy. It also acknowledges the benefits of AI for social good, such as generating
headshots, planning vacation agendas, creating work presentations, and even writing new code.
However, the article also has some flaws, such as being too optimistic about the ability of
communities to influence AI development and deployment. It also does not address the ethical
and social implications of AI replacing human tasks and roles, such as the impact on
employment, privacy, and human dignity. The article could have been more balanced by
exploring these issues and offering some solutions.
For Arguments:
Strengths: HOW is the argument logical, fair, and reasonable? Why might some reasonable
people agree with the author?
The author’s argument is logical, fair, and reasonable. They provide clear examples of how AI
can be a force for good or ill in society, depending on how it is designed and used. They
acknowledge the complexity and diversity of AI issues and do not oversimplify or exaggerate
the benefits or risks of AI. They also recognize the need for open and honest processes to shape
AI development and deployment, involving different stakeholders and communities. The
argument is based on the author’s expertise and experience in the field of AI, as well as credible
sources and research.
Flaws: What are the arguments' problems, flaws, or gaps? Why do you think people may
disagree with the author? Think critically. What new questions and areas for development does
the argument raise?
The author’s argument may be biased, unrealistic, and incomplete. It reflects the author’s
perspective as an AI developer and leader, who may have an interest in promoting AI
innovation and avoiding strict regulation. The writer assumes that AI can be easily shaped by
everyone, not just the ‘tech guys’, without explaining how to overcome the existing power
imbalances, knowledge gaps, or resource constraints that may limit the participation and
influence of diverse actors in AI governance. Moreover, the argument does not address the
potential conflicts or trade-offs that may arise between different values, interests, or goals
when shaping AI. The author does not provide concrete solutions or recommendations for how
to achieve responsible and beneficial AI for society, only some general principles and practices.
OR
For Narratives:
Strengths: What makes the narrative seem truthful, important, and relevant? Why should the
narrative matter to other people today?
Flaws: What voice or perspective does the narrative allow us to hear that is perhaps often
hidden? Moreover, what do you think are some other possible perspectives on the same
issue/problem raised in the narrative? In other words, how would the same
experience/issue/problem look from a different perspective?
Unit 7
Academic integrity
1. Discuss two problems resulting from using ChatGPT and other AI tools to write your research
essay for COMM 171. (100 words)
Using artificial intelligence (AI) tools such as ChatGPT can interfere with one's ability to think
critically and to do research. Learners may come to rely on automatic recommendations and
stop making the effort to build their own knowledge, which is exactly what COMM 171
requires. Furthermore, students might submit work they believe they wrote entirely on their
own when it is in fact built on ideas created by others. Failing to recognize or clearly identify
the sources of knowledge used by the AI, where applicable, will result in academic dishonesty
and penalties.
2. How can students use AI tools such as ChatGPT in a positive way to learn, in a manner that does
not breach the academic integrity policies at the college? (100 words)
Students can use AI tools like ChatGPT to learn in a positive way while not violating academic
integrity policies. ChatGPT can be used to provide ideas and suggestions for writing tasks.
Students can use the generated content to begin their own research and writing. Another option
is to use ChatGPT to proofread their work for grammatical and spelling mistakes. It should be
noted, however, that students should not use ChatGPT to construct complete assignments or to
plagiarize text. Instructors should establish clear rules for using AI tools in their classes and
involve students in open discussions regarding AI apps and academic honesty.
Newspaper
- Source title and author: How AI can be a force for good in Africa – if it’s developed
responsibly by Abeba Birhane and Fred Cummins.
- Relevance: This article discusses the challenges and opportunities of developing and
deploying AI in Africa, a continent that is often marginalized and exploited by global tech
companies. The authors argue that AI can be a force for good in Africa if it is designed
with the participation and empowerment of local communities, respecting their values,
cultures, and needs. They also highlight the importance of ethical frameworks and
regulations to ensure that AI does not harm human rights, dignity, and autonomy.
Magazine
- Source title and author: The Ethical Dilemmas of Artificial Intelligence by David Berreby
- Relevance: This article explores the ethical dilemmas that arise from the use of AI in
various domains, such as health care, education, warfare, and social media. The author
examines how AI can affect human decision-making, agency, privacy, and accountability.
He also suggests some possible solutions to address the ethical issues, such as creating
codes of conduct, establishing oversight bodies, and educating the public.
Videos
- Source title and author: AI for Good or Evil? by Stuart Russell
- Relevance: This video is a TED talk by Stuart Russell, a leading AI researcher and
professor at UC Berkeley. He explains why AI is not a neutral technology, but a powerful
tool that can be used for good or evil depending on its goals and values. He warns about
the risks of creating superintelligent AI that may not align with human interests or
values. He proposes a new approach to AI design that ensures that AI is beneficial for
humanity and respects human preferences.
Unit 8
Annotated Bibliography
Write the annotated bibliography for three of your best research sources. One of the sources
must be peer reviewed. Each annotation can be 2-5 sentences. The first few sentences can
summarize the article, and the last few sentences should illustrate how you will use and
evaluate your source. i.e.) What part of the response section will the research go in? What
framework will it support?
Source 1
APA entry: Birhane, A. (2022). Automating ambiguity: Challenges and pitfalls of artificial
intelligence (Doctoral dissertation). University College Dublin, Ireland.
https://browse.arxiv.org/pdf/2206.04179.pdf
The source is a PhD dissertation by Abeba Birhane that examines the challenges and problems
of AI from various perspectives. The thesis argues that AI is a human tool that reflects human
values and biases, and urges communities to be involved in shaping AI. Applied to the article
by Afua Bruce, it supports the idea that AI is not neutral or objective, but rather reflects the
values and biases of its creators and users. The source also provides examples of how AI can
be designed to avoid bias and controversy. The source is significant because it is a
comprehensive and interdisciplinary analysis of AI that contributes to the current debates and
discussions on the topic. It is relevant and credible because it is based on evidence and
examples from various sources and written by an expert in the field of AI.
Source 2
APA entry: Berreby, D. (2019). The ethical dilemmas of artificial intelligence. National
Geographic. https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-
_Chapter-5.pdf
The source is an article by David Berreby that explores the ethical dilemmas of artificial
intelligence (AI) from various angles, such as the impact of AI on human dignity, privacy, justice,
and morality. It discusses the challenges of regulating and governing AI, as well as the potential
risks and benefits of AI for humanity. It can be applied to the article by Afua Bruce because it
provides a broader and deeper perspective on the ethical issues of AI that the article touches
upon. The article illustrates real-world examples of how AI can be used for good or evil, such as
facial recognition, autonomous weapons, and social media manipulation. It is significant because
it provides a comprehensive and accessible analysis of AI ethics that raises awareness and
stimulates discussion on the topic. It also offers some suggestions and solutions for addressing
the ethical dilemmas of AI, such as developing ethical principles and standards, creating
oversight mechanisms and accountability systems, and fostering public engagement and
education. The source is relevant and credible because it is based on evidence and examples
from various sources, such as academic research, news articles, and expert interviews.
Source 3
APA entry: Coeckelbergh, M. (2020). AI ethics. MIT Press.
https://direct.mit.edu/books/book/4612/AI-Ethics
The source is a book by Mark Coeckelbergh, a philosopher of technology, that explores the
ethical issues of AI and proposes a new framework for thinking about the human implications of
AI. The book addresses questions such as the impact of AI on human dignity, autonomy,
privacy, justice, and morality, and the challenges of regulating and governing AI. Compared
with the article by Afua Bruce, it shares the main idea that AI is a tool created by humans and
therefore subject to human beliefs and restrictions. It also provides examples of how AI can be
designed to avoid bias and controversy. It is significant because it is a comprehensive and
original analysis of AI ethics that offers a new way of thinking about the human implications of
AI. The source also offers some suggestions and solutions for addressing the ethical dilemmas
of AI. The resource is relevant and credible because it is based on evidence and examples from
various sources and written by an expert in the field of AI.