Arg Essay Simple Outline Dang Hoang Anh
Part I. Introduction
Thesis statement: Although some might argue that making rules and regulations for AI research can slow down the advancement of technology, most people agree that the process of developing artificial intelligence must be strictly controlled and monitored.
Opposition Argument #1: AI is a creation of humans; therefore, experts will always have ways to keep it safe.
Support: "A super-intelligent machine that controls the world sounds like science fiction" (Cebrian, 2021). The scenario in which AI takes over the world seems unrealistic.
Rebuttal #1: Although AI is a human creation, a superintelligence cannot be contained, which leads to a global existential risk.
Support: There's no way to contain such an algorithm without building a sort of 'containment algorithm' that simulates the dangerous algorithm's behavior and blocks it from doing anything harmful. But because the containment algorithm would need to be at least as powerful as the first algorithm, the scientists declared the problem is impossible to solve (Alfonseca et al., 2021).
Opposition Argument #2: At present, AI is still at an early stage and does not seem intelligent enough to pose any real dangers.
Support: Apple's chatbot "Siri" failed to interpret human languages (Bishop, 2021).
Rebuttal #2: Although some common AI systems today are not very intelligent, experts are concerned that human-level AI is likely to be achieved in the future.
Support: While confident that the creation of human-level artificial intelligence is inevitable, barring a global catastrophe, Bostrom (author of the book Superintelligence: Paths, Dangers, Strategies) acknowledges that it is difficult to judge how long it will take to develop this technology (Thorn, 2015).
Support: . . . Bostrom calls the "speed of takeoff", i.e., the speed at which the development of human-level artificial intelligence would lead to the development of 'extreme' superintelligence . . . a slow takeoff would, presumably, give the human beings involved in the development of an extreme superintelligence the opportunity to influence the goals and character of the superintelligent being, or to avert the process altogether. So, other things being equal, we ought to pursue AI research in a way that tends to a slow takeoff (Thorn, 2015).
Opposition Argument #3: Researchers should not be worried about any ethical problems, as AI ethical principles are meaningless.
Support: AI guidelines and codes of ethics . . . are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. (Munn, 2022)
Rebuttal #3: Ethics should be heavily considered in AI research and development, as neglecting this aspect can lead to the creation of unethical AI.
Support: AI needs to be developed in a human-centric and trustworthy fashion, for AI that benefits the common good (Berendt, 2019).
Support: For a good rating of an AI system, all ethical principles are more or less equally important. Hence, developers and organizations should not neglect some ethical principles while emphasizing others. (Kieslich et al., 2022)
References
Alfonseca, M., Cebrian, M., Fernandez Anta, A., Coviello, L., Abeliuk, A., & Rahwan, I. (2021). Superintelligence Cannot be Contained: Lessons from Computability Theory. Journal of Artificial Intelligence Research, 70, 65–76.
Berendt, B. (2019). AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10(1), 44–65. https://doi.org/10.1515/pjbr-2019-0004
Bishop, J. M. (2021). Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It. Frontiers in Psychology, 11. https://doi.org/10.3389/fpsyg.2020.513474
Kieslich, K., Keller, B., & Starke, C. (2022). Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data & Society, 9(1).
Müller, V. C., & Cannon, M. (2021). Existential risk from AI and orthogonality: Can we have it both ways? Ratio, 35(1), 25–36. https://doi.org/10.1111/rati.12320
Thorn, P. D. (2015). Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. Minds and Machines, 25(3), 285–289. https://doi.org/10.1007/s11023-015-9377-7