Final Proposal
Table of Contents
I. Introduction
II. Assignment
Assignment 1
Assignment 2
In Class Exercises
1. User Behavior and Data Privacy in Social Media Platforms
2. Correlation Analysis (Group Exercise)
3. Postmodern Research (Group Exercise)
III. Research Proposal
1. Research Question
2. Literature Review
3. Methods
4. Ethical Issues and Research Problems
5. Conclusions
Bibliography
I. Introduction
Throughout this course, I had the opportunity to acquire new skills and knowledge by
completing various projects and practical exercises. This course portfolio serves as a
comprehensive summary of my learning journey, from the initial goals and key lessons to
the working process and final results of the exercises and projects. By documenting each
step of my learning experience, I am able to reflect on what I have accomplished and
assess the development of my skills.
Each section of this portfolio not only reflects the principles and technologies I applied
during my research but also highlights the challenges I encountered and how I overcame
them. I hope that this portfolio will serve as a valuable reference for future projects, as
well as a clear demonstration of my abilities and progress in this field.
II. Assignment
Assignment 1
In Assignment 1, I had the opportunity to delve into how artificial intelligence (AI) is
rapidly integrating into various industries, bringing positive impacts to society, such as
increased productivity and improved efficiency. However, alongside these benefits, risks
are also emerging, and security vulnerabilities are becoming increasingly diverse and
complex in parallel with the development of AI. The purpose of this paper is to identify
security vulnerabilities associated with AI and propose measures to mitigate risks when
they occur, as well as reduce the likelihood of their occurrence. This research stems from
the growing application of artificial intelligence in critical fields such as banking,
healthcare, and automated systems, where security vulnerabilities can lead to severe
consequences for individuals and society. To ensure that the implementation of AI
systems in daily life does not compromise individuals' privacy and security, it is essential
to understand how to protect AI systems from potential vulnerabilities.
LITERATURE REVIEW
Security Challenges in Artificial Intelligence (AI)
Introduction
Artificial intelligence (AI) is currently rapidly integrating into many businesses, bringing
positive things to society such as increased productivity and improved efficiency.
However, alongside the benefits, risks are also gradually emerging, and security
vulnerabilities are becoming increasingly diverse and complex in parallel with the
development of AI. The purpose of this research paper is to identify security
vulnerabilities related to AI and propose measures to mitigate risks when they occur and
reduce the likelihood of their occurrence. This research paper stems from the increasing
application of artificial intelligence in critical fields such as banking, healthcare, and
automated systems, where security vulnerabilities can lead to serious consequences for
individuals and society. To ensure that the application of artificial intelligence systems in
daily life does not affect individuals' privacy and security, it is essential to know how to protect
AI systems from vulnerabilities.
Mitigating Security Risks in AI
Bonawitz et al. (2017) focus on protocols for secure multi-party computation (SMPC),
presenting a practical secure-aggregation system that lets multiple parties jointly compute
over their data while keeping each party's individual inputs private. Acar et al. (2018)
survey homomorphic encryption, which performs operations directly over encrypted data,
ensuring the safety of critical information during processing. Lastly, Brundage et al. (2020)
argue that as modern AI systems are deployed, the security of the whole system must be
considered, and they outline a comprehensive methodology of mechanisms for supporting
verifiable claims about AI development.
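To make the SMPC idea concrete, below is a minimal sketch of additive secret sharing, the arithmetic building block underlying secure-aggregation protocols like the one Bonawitz et al. describe. It is an illustrative toy under stated assumptions, not the full protocol: dropout handling and pairwise masking are omitted, and the prime modulus is an arbitrary choice.

```python
import secrets

PRIME = 2**61 - 1  # a large prime field; illustrative choice

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def aggregate(all_shares: list[list[int]]) -> int:
    """Server side: sum each share column, then combine the column sums.

    The server only ever sees sums of random shares, never a raw value.
    """
    column_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(column_sums) % PRIME

# Three parties hold private values (toy stand-ins for model updates).
private_values = [42, 7, 99]
all_shares = [share(v, n_parties=3) for v in private_values]
total = aggregate(all_shares)
assert total == sum(private_values) % PRIME
print(total)  # 148: the sum is recovered without revealing any single value
```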
Conclusion
AI is capable of driving a great deal of innovation; however, it also poses new security
risks. As highlighted by Zohrabi, some of the security risks of AI include adversarial
attacks, privacy violations, and model infringement. Their negative impact is being
mitigated by researchers through adversarial training, secure multi-party computation,
and homomorphic encryption. The advancement of AI across industries compels
developers and lawmakers to strengthen the security protocols built into these systems.
To this end, the study will focus on enhancing AI security alongside its application.
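To make the adversarial-training idea concrete, here is a minimal sketch in the spirit of Goodfellow et al. (2015): the Fast Gradient Sign Method (FGSM) perturbs each input in the direction that most increases the loss, and the model is then trained on a mix of clean and perturbed examples. The logistic-regression model, the synthetic data, and every hyperparameter here are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """FGSM: move each input a step of size eps in the sign of the
    logistic-loss gradient with respect to the input."""
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w  # dLoss/dx
    return X + eps * np.sign(grad_x)

# Synthetic two-class data (illustrative stand-in for real features).
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0.0] * 100 + [1.0] * 100)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2
for _ in range(200):
    X_adv = fgsm(X, y, w, b, eps)      # craft worst-case inputs
    X_mix = np.vstack([X, X_adv])      # adversarial training: clean + adversarial
    y_mix = np.concatenate([y, y])
    p = sigmoid(X_mix @ w + b)
    w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)
    b -= lr * float(np.mean(p - y_mix))

robust_acc = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
print(f"accuracy on FGSM-perturbed inputs: {robust_acc:.2f}")
```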
References
Acar, A., Aksu, H., Uluagac, A.S., & Conti, M. (2018). A survey on homomorphic
encryption schemes: theory and implementation. ACM Computing Surveys (CSUR),
51(4), 1-35.
Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., & Patel, S. (2017). Practical
secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017
ACM SIGSAC Conference on Computer and Communications Security (pp. 1175–1191).
ACM.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., & Garfinkel, B. (2020).
Toward trustworthy AI development: Mechanisms for supporting verifiable claims.
arXiv preprint arXiv:2004.07213.
Goodfellow, I., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial
examples. In International Conference on Learning Representations (ICLR).
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep
learning models resistant to adversarial attacks. In International Conference on Learning
Representations (ICLR).
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., & Swami, A. (2016).
The limitations of deep learning in adversarial settings. In IEEE European Symposium on
Security and Privacy (EuroS&P) (pp. 372-387). IEEE.
Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership inference
attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium
on Security and Privacy (pp. 3-18). IEEE.
Tramèr, F., Zhang, F., Juels, A., Reiter, M.K., & Ristenpart, T. (2016). Stealing machine
learning models via prediction APIs. In USENIX Security Symposium (pp. 601-618).
Veale, M., Binns, R., & Van Kleek, M. (2018). Fairness and accountability design needs
algorithmic support in high-stakes public sector decision-making. In Proceedings of the
2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). ACM.
Assignment 2
RESEARCH QUESTIONS
The poster "Security Challenges in Artificial Intelligence (AI)" focuses on identifying
vulnerabilities associated with AI and presenting strategies to mitigate these risks when
they arise, while also aiming to reduce their likelihood proactively. This approach
emphasizes the importance of safeguarding AI systems to support their secure and
responsible deployment.
The research questions focus on two issues in particular:
The privacy risks of "membership inference attacks," where attackers can deduce whether
specific data was used in model training (illustrated in the sketch below).
Whether encryption methods can ensure secure data processing without compromising
sensitive information.
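To illustrate the membership-inference threat, here is a minimal confidence-thresholding sketch in the spirit of Shokri et al. (2017). The confidence distributions are simulated with invented Beta parameters purely for demonstration; a real attack would query an actual trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model confidences: overfit models are typically more
# confident on training members than on unseen points, which is the
# signal a membership-inference attacker exploits.
member_conf = rng.beta(8, 2, 1000)      # simulated training-set confidences
nonmember_conf = rng.beta(4, 3, 1000)   # simulated unseen-data confidences

def infer_membership(confidence, threshold=0.75):
    """Guess 'member' whenever the model's top-class confidence
    exceeds a threshold chosen by the attacker."""
    return confidence > threshold

# Attack accuracy: fraction of correct member / non-member guesses.
guesses = np.concatenate([
    infer_membership(member_conf),        # correct when True
    ~infer_membership(nonmember_conf),    # correct when the attack says False
])
print(f"attack accuracy: {guesses.mean():.2f}")  # noticeably above 0.5 chance
```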
METHODOLOGY
Survey
Method: Conducted online using Google Forms with 100 participants.
Questions:
How would you rate your familiarity with AI technology?
What aspect of AI security concerns you the most?
Which of the following do you think could improve AI security?
Interview
Participants: 10 teenagers
Questions:
In your opinion, what are the most significant risks AI poses to security
today?
Do you feel current regulations adequately address AI security and privacy
concerns? What changes, if any, would you like to see in this area?
CHALLENGES
Adversarial Vulnerability
Privacy Concerns
Model Infringement
CONCLUSION
The goal of this research is to tackle AI security challenges, including adversarial
attacks and privacy risks. By employing methods such as adversarial training and
encryption, it aims to strengthen security protocols, promoting safer AI
applications.
In Class Exercises
1. User Behavior and Data Privacy in Social Media Platforms
Introduction
The "User Behavior and Data Privacy in Social Media Platforms" examines the
relationship between user behavior on social media and the handling of personal data,
focusing on how user actions influence privacy settings, data collection, and security
practices. In the age of pervasive social media usage, where vast amounts of personal
information are shared daily, understanding how user behavior impacts data privacy is
critical to protecting individual rights and improving privacy regulations.
Research Questions
Similar to studies on consumer behavior in digital platforms, this document might
explore key questions regarding user behavior and data privacy, such as:
"How do user behaviors on social media platforms impact data privacy?"
"What privacy risks arise from user-generated content and social media
interactions?"
"How can social media platforms enhance their privacy policies to protect user
data?"
These questions aim to explore the interplay between user behavior and the privacy risks
inherent in social media, with a focus on improving data protection mechanisms.
Characteristics of Related Research Papers
Supporting research related to this topic might include studies on:
User behavior on social media platforms, including how frequently users update
privacy settings or share personal information.
The impact of platform policies and settings on user privacy, including case
studies of data breaches or misuse of personal information.
The role of user education and platform transparency in promoting better privacy
practices and protecting sensitive data.
These topics are central to research in digital privacy and online behavior and are often
explored in academic journals related to information privacy, social media studies, and
human-computer interaction.
Article Citation Style
APA style is commonly used for academic research in the social sciences. Example
references might include:
Peterson, L., & Marshall, S. (2017). The impact of user behavior on data privacy
in social media. Journal of Information Privacy, 22(4), 250-265.
Harper, C., & Jenkins, R. (2019). Enhancing privacy policies in social media
platforms. Social Media Studies Review, 11(3), 112-128.
2. Correlation Analysis (Group Exercise)
Applications of Correlation Analysis: Real-world uses in fields like finance
(e.g., analyzing stock prices) and healthcare (e.g., examining relationships between
BMI and cholesterol levels). Such applications leverage correlations to extract
insights from variable relationships, as demonstrated in epidemiology and
econometrics.
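As a small worked example of the technique, the sketch below computes the Pearson correlation coefficient for a hypothetical BMI-versus-cholesterol sample; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical paired observations (invented for illustration).
bmi = np.array([19.5, 22.1, 24.8, 27.3, 30.0, 33.6])
cholesterol = np.array([165, 180, 195, 210, 228, 241])  # mg/dL

# Pearson r: covariance of the pair divided by the product of
# the two standard deviations; np.corrcoef returns the 2x2 matrix.
r = np.corrcoef(bmi, cholesterol)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to +1: strong positive association
```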
3. Postmodern Research (Group Exercise)
Introduction
The "Postmodern Research" document presents postmodern research as an approach
within the social sciences and humanities that emphasizes diversity, subjectivity, and the
examination of power dynamics. This perspective challenges fixed viewpoints and
welcomes multiple interpretations, frequently employing qualitative methods to
investigate intricate social issues. Postmodern research is especially useful for exploring
topics where context, individual experience, and cultural variation play a crucial role.
Research Questions
Like analyses in other research fields, this document likely addresses questions such as:
"In what ways does postmodern research challenge conventional methodologies in
the social sciences and humanities?"
"What significance do subjectivity and personal experience hold in postmodern
research?"
"How can power structures and hierarchies be examined through a postmodern
framework?"
These questions highlight postmodern research's emphasis on exploring diverse
perspectives and questioning singular truths, making it ideal for investigating complex
social and cultural issues.
Characteristics of Related Research Papers
Supporting research in postmodern studies often focuses on:
Diversity and Subjectivity in Research Methods: This includes techniques such
as in-depth interviews, observations, and document analysis that capture personal
experiences and cultural perspectives.
Interdisciplinary Approaches: Complex social issues are explored by integrating
insights from fields like sociology, psychology, and cultural studies, offering a
holistic view.
Critiques of Power Structures: Research in this area examines how societal
hierarchies and power relations shape knowledge, behavior, and identity within
different communities.
These themes are commonly explored in academic literature within postmodern studies,
often featured in journals and texts across the humanities and social sciences.
III. Research Proposal
1. Research Question
What are the security challenges in artificial intelligence (AI)?
How can the potential risks be mitigated when deploying AI in critical systems?
Why is this issue important?
1.1 Rationale and Motive
Artificial intelligence (AI) is currently rapidly integrating into many businesses, bringing
positive things to society such as increased productivity and improved efficiency.
However, alongside the benefits, risks are also gradually emerging, and security
vulnerabilities are becoming increasingly diverse and complex in parallel with the
development of AI. AI systems can become targets for hackers, which can affect the
safety and stability of society.
This research question is crucial because it will help AI researchers and engineers
understand the security concerns surrounding AI and put effective safeguards in place,
supporting the responsible deployment of AI for the good of society.
2. Literature Review
In the last few years, artificial intelligence (AI) has grown rapidly and reached an
advanced technological level in many sectors, including but not limited to healthcare,
finance, transportation, and defense. As the technology has evolved, however, so too
have the security challenges related to AI. Recent studies have noted that AI suffers
not only from conventional security risks but also from challenges unique to this
technology. Here, we present some of the more important work related to security
threats in artificial intelligence.
3. Methods
3.1 Participants
This study aims to collect views regarding the security concerns embedded in artificial
intelligence (AI). Data will be obtained primarily from professional AI developers,
researchers, and cybersecurity practitioners. Respondents will be selected at random,
with the aim of recruiting one hundred (100) individuals who have previously worked
on AI systems or addressed AI security concerns. The sample will comprise individuals
aged between 25 and 50 who are employed across several industries, including healthcare,
banking and finance, government, and information technology. Because opinions will be
gathered from experts in different fields of practice, the study should provide a broader
understanding of AI security problems.
3.2 Instrument
For this research, we will employ an online questionnaire administered through Google
Forms, chosen for its effectiveness and ease of use. The survey will include dichotomous,
multiple-choice, and scaled questions. This format keeps completion straightforward and
reduces the time each respondent needs to finish the survey. The questions will be directed
towards four pertinent issues:
Types of security challenges encountered in AI systems (e.g., adversarial attacks, data
privacy, model vulnerabilities)
Measures taken to address security issues in AI systems
Perceived effectiveness of AI security solutions (e.g., encryption, adversarial training)
Concerns about the future development of AI security
The survey will include 10 questions: 2 dichotomous questions, 4 multiple-choice
questions, and 4 scaled questions to assess the frequency and effectiveness of various
security measures. This combination will allow for both broad and detailed insights into
AI security challenges. The scaled questions will use a Likert scale to measure the degree
of concern or effectiveness of specific security practices. The survey design ensures that
the questions are straightforward and unambiguous, minimizing respondent confusion.
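As a sketch of how the Google Forms export could be summarized, the snippet below tabulates hypothetical Likert responses with pandas; the column names and values are placeholders, not the actual survey items.

```python
import pandas as pd

# Hypothetical export of the Google Forms responses; columns are
# placeholders, not the real survey wording.
df = pd.DataFrame({
    "role": ["AI developer", "researcher", "security practitioner",
             "AI developer", "researcher"],
    "concern_adversarial": [4, 5, 5, 3, 4],       # 1 = not concerned, 5 = very
    "effectiveness_encryption": [4, 3, 5, 4, 2],  # 1 = ineffective, 5 = effective
})

likert_cols = ["concern_adversarial", "effectiveness_encryption"]

# Frequency table per Likert item (how many respondents chose each level).
for col in likert_cols:
    print(df[col].value_counts().sort_index(), "\n")

# Mean and standard deviation summarize central tendency per item.
print(df[likert_cols].agg(["mean", "std"]).round(2))

# Break the scores down by professional role.
print(df.groupby("role")[likert_cols].mean().round(2))
```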
3.3 Procedure
The survey will be disseminated online to professionals working in the AI and
cybersecurity domains through social networking sites such as LinkedIn and Twitter
and through professional societies. A random sample of 100 participants will be invited
to take the survey, which will be announced on 15 October 2024. They will consist of AI
developers, researchers, and specialized cybersecurity professionals across different
industries. Participants will be sent instructions asking them to complete the survey
between 15 and 25 October 2024.
Responses will be collected under a guarantee of confidentiality using Google Forms,
which supports proper and timely synthesis of the data. The data will be analyzed by
studying the participants' responses in order to answer the questions about the security
issues AI systems experience. Moreover, Google Forms' built-in summaries will supply
the statistical overview needed to present the data in an orderly manner.
4. Ethical Issues and Research Problems
When studying security issues related to artificial intelligence, ethical and practical
problems may come into play, especially those regarding the privacy of the participants,
the security of data and source concealment, and potential threats stemming from
misinterpretation or misapplication of the study results. In this regard, the most
prominent ethical principles and their solutions are listed and highlighted below:
4.1 Privacy and Confidentiality
The most important ethical issue in this study concerns the privacy of the information
collected from respondents. Because the survey is administered online, there is a risk of
revealing or mishandling personal or sensitive details. The following measures guard
against this risk (a small illustrative sketch follows the list):
Anonymous Participation: The survey will be conducted without collecting or
soliciting any identifiable information from respondents. It will be made clear to
participants that they are simply sharing their opinions and cannot be identified
based on what they say; no identifying details are needed.
Secure Data Storage: Survey responses will be collected and stored in Google Forms,
which balances usability with strict data-compliance policies. All data will be
accessible only to our research team, and all analysis will be performed in aggregate
so that no individual's responses can be singled out.
Data Handling Protocols: Any data presented in another form, such as findings shown
in visual representations, will first be anonymized for confidentiality and security.
Any materials we cite for reporting or publishing will likewise be edited to protect
participants' identities.
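The data-handling measures above could look roughly like the following pandas sketch, in which direct identifiers are dropped and only aggregates are reported; the column names and values are invented placeholders.

```python
import pandas as pd

# Hypothetical raw export; columns are placeholders, not real data.
raw = pd.DataFrame({
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "industry": ["banking", "healthcare", "banking"],
    "concern_level": [4, 5, 3],
})

# Drop direct identifiers before any analysis or sharing.
anonymized = raw.drop(columns=["email"])

# Report only aggregates, never individual rows, so no single
# respondent's answer can be recovered from published figures.
report = anonymized.groupby("industry")["concern_level"].agg(["count", "mean"])
print(report)
```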
4.2 Informed Consent
It is critical that participants are fully informed about the purpose of the research and how
their data will be used. They must understand that participation is voluntary and that they
can withdraw at any time without facing any negative consequences. To ensure informed
consent:
Clear Consent Statement: An introductory section of the survey will explain
the purpose of the research, how the data will be used, and the voluntary nature of
participation. A consent checkbox will be required before participants can begin
the survey to ensure they have understood and agreed to the terms.
Withdrawal Option: Participants will be informed that they can exit the survey
at any point before submission without consequences. Additionally, they can
request to have their data removed from the study if they decide to withdraw after
submitting their responses.
4.3 Avoiding Harm
Given that the research involves AI security, a sensitive topic, it is important to ensure
that no harm is caused to participants, organizations, or the AI systems discussed.
Potential risks include:
Emotional Discomfort: Some participants may feel uncomfortable discussing
vulnerabilities in AI systems, especially if they are working on developing or
implementing AI technologies that might have security flaws. To avoid this, the
survey will be structured in a way that focuses on general trends and experiences
rather than sensitive, confidential details about specific projects or systems.
Misinterpretation of Findings: The research aims to identify security
challenges in AI, and there is a risk that the findings could be misinterpreted or
misused. To address this:
The research will aim to provide clear, well-contextualized findings.
Any recommendations or conclusions drawn from the research will be
carefully presented to ensure they are actionable and relevant to the
intended audience, such as AI developers and cybersecurity professionals.
Results will be framed in the context of broader AI security research,
avoiding direct implications or conclusions that could harm specific AI
projects or organizations.
5. Conclusions
The goal of this research proposal is to increase the attention given to security in the
artificial intelligence (AI) domain by demonstrating how security-related activities
can be incorporated into the AI lifecycle. The study seeks to demonstrate the significance
of proactive security through risk-based touch-point vulnerability assessment, threat
modeling, and the application of advanced security tools such as AI-based threat
deterrence, encryption, and monitoring systems. Together, these methodologies aim to
provide a holistic risk-management framework for consistently addressing the security
challenges associated with AI model development, deployment, and operation.
The research questions in the study are directed towards an overall AI security strategy
composed of three approaches, sketched in the example that follows. Risk-based
touch-point vulnerability assessments identify the risks that warrant the greatest
attention, helping to bring order to the chaos. Threat modeling defines the weaknesses
in security systems and the specific areas where attacks are most likely to occur, so
that preventative measures can be implemented. Furthermore, the use of such tools
focuses on hardening security checkpoints, thereby reducing human error.
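As an illustration of the risk-based prioritization described above, the toy sketch below scores hypothetical touch points by likelihood times impact and ranks them for review; the touch points and scores are invented for demonstration.

```python
# Toy risk-based prioritization: score each touch point by
# likelihood x impact (both on a 1-5 scale) and review the
# highest-scoring ones first. All values are illustrative.
touch_points = [
    {"name": "model training pipeline", "likelihood": 4, "impact": 5},
    {"name": "prediction API",          "likelihood": 5, "impact": 4},
    {"name": "data ingestion",          "likelihood": 3, "impact": 5},
    {"name": "monitoring dashboard",    "likelihood": 2, "impact": 2},
]

for tp in touch_points:
    tp["risk"] = tp["likelihood"] * tp["impact"]

# Highest risk first: these touch points get attention before the rest.
for tp in sorted(touch_points, key=lambda tp: tp["risk"], reverse=True):
    print(f'{tp["risk"]:>2}  {tp["name"]}')
```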
In conclusion, these are the research gaps this study seeks to address by integrating
these techniques and exploring different strategies for meeting the security challenges
inherent in AI systems, while maintaining the balance that exists between AI, security,
and the specifics of the environment.
Bibliography
Acar, A., Aksu, H., Uluagac, A.S., & Conti, M. (2018). A survey on homomorphic
encryption schemes: theory and implementation. ACM Computing Surveys (CSUR),
51(4), 1-35.
Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., & Patel, S. (2017). Practical
secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017
ACM SIGSAC Conference on Computer and Communications Security (pp. 1175–1191).
ACM.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., & Garfinkel, B. (2020).
Toward trustworthy AI development: Mechanisms for supporting verifiable claims.
arXiv preprint arXiv:2004.07213.
Goodfellow, I., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial
examples. In International Conference on Learning Representations (ICLR).
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep
learning models resistant to adversarial attacks. In International Conference on Learning
Representations (ICLR).
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., & Swami, A. (2016).
The limitations of deep learning in adversarial settings. In IEEE European Symposium on
Security and Privacy (EuroS&P) (pp. 372-387). IEEE.
Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership inference
attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium
on Security and Privacy (pp. 3-18). IEEE.
Tramèr, F., Zhang, F., Juels, A., Reiter, M.K., & Ristenpart, T. (2016). Stealing machine
learning models via prediction APIs. In USENIX Security Symposium (pp. 601-618).
Veale, M., Binns, R., & Van Kleek, M. (2018). Fairness and accountability design needs
algorithmic support in high-stakes public sector decision-making. In Proceedings of the
2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). ACM.