
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy

Maanak Gupta, CharanKumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj

Abstract—Undoubtedly, the evolution of Generative AI (GenAI) models has been the highlight of digital transformation in the year 2022. As GenAI models like ChatGPT and Google Bard continue to grow in complexity and capability, it is critical to understand their consequences from a cybersecurity perspective. Several recent instances have demonstrated the use of GenAI tools on both the defensive and offensive sides of cybersecurity, and have drawn attention to the social, ethical, and privacy implications of this technology. This research paper highlights the limitations, challenges, potential risks, and opportunities of GenAI in the domain of cybersecurity and privacy. The work presents the vulnerabilities of ChatGPT, which can be exploited by malicious users to exfiltrate malicious information by bypassing the ethical constraints on the model. This paper demonstrates successful example attacks like
jailbreaks, reverse psychology, and prompt injection attacks on ChatGPT. The paper also investigates how cyber offenders can use GenAI tools to develop cyber attacks, and explores scenarios where ChatGPT can be used by adversaries to create social engineering attacks, phishing attacks, automated hacking, attack payload generation, malware creation, and polymorphic malware. The paper then examines defense techniques and uses GenAI tools to improve security measures, including cyber defense automation, reporting, threat intelligence, secure code generation and detection, attack identification, developing ethical guidelines, incident response plans, and malware detection. We also discuss the social, legal, and ethical implications of ChatGPT. In conclusion, the paper highlights open challenges and future directions to make GenAI secure, safe, trustworthy, and ethical as the community comes to understand its cybersecurity impacts.

Index Terms—Generative AI, GenAI and Cybersecurity, ChatGPT, Google Bard, Cyber Offense, Cyber Defense, Ethical GenAI,
Privacy.

1 INTRODUCTION

The evolution of Artificial Intelligence (AI) and Machine Learning (ML) has led the digital transformation in the last
decade. AI and ML have achieved significant breakthroughs
starting from supervised learning and rapidly advancing
with the development of unsupervised, semi-supervised,
reinforcement, and deep learning. The latest frontier of AI
technology has arrived as Generative AI [1]. Generative
AI models are developed using deep neural networks to
learn the pattern and structure of big training corpus to
generate similar new content [2]. Generative AI (GenAI)
technology can generate different forms of content like text,
images, sound, animation, source code, and other forms of
data. The launch of ChatGPT [3] (Generative Pre-trained
Transformer), a powerful new generative AI tool by OpenAI
in November 2022, has disrupted the entire community
of AI/ML technology [4]. ChatGPT has demonstrated the
power of generative AI to reach the general public, revo-
lutionizing how people perceive AI/ML. At this time, the
tech industry is in a race to develop the most sophisticated
Large Language Models (LLMs) that can create a human-like conversation, the result of which is Microsoft's GPT model [5], Google's Bard [6], and Meta's LLaMa [7]. GenAI has become a common tool on the internet within the past year, with ChatGPT reaching 100 million users within two months of release, suggesting that most people who have access to the internet have either used GenAI or know someone who has [8]. Figure 1 demonstrates the working of an AI-powered chatbot, where a user initiates a request and, after analysis using Natural Language Processing (NLP), is given a real-time response by the chatbot. This response is analyzed again to provide a better user experience in the subsequent conversation.

Fig. 1. How AI Chatbots work [9]?

1.1 Evolution of GenAI and ChatGPT

The history of generative models dates back to the 1950s, when Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs) were developed. The significant leap in the performance of these generative models was achieved only after the advent of deep learning [10]. One of the earliest sequence generation methods was N-gram language modeling, where the best sequence is generated based on the learned word distribution [11]. The introduction of the Generative Adversarial Network (GAN) [1] significantly enhanced the generative power of these models. The latest technology that has been the backbone of much generative technology is the transformer architecture [12], which has been applied to LLMs like BERT and GPT. GenAI has evolved in numerous domains like image, speech, and text; however, we will only be discussing text-based AI chatbots, and ChatGPT in particular, as relevant to this work. Since ChatGPT is powered by the GPT-3 language model, we will briefly discuss the evolution of OpenAI's [13] GPT models over time. Figure 2 shows how the GPT models evolved to their sophisticated latest version.

Fig. 2. Different Versions and Evolution of OpenAI's GPT.

GPT-1: GPT-1 was released in 2018. Initially, GPT-1 was trained with the Common Crawl dataset, made up of web pages, and the BookCorpus dataset, which contained over 11,000 different books. This was the simplest model, and it was able to respond very well and understand language conventions fluently. However, the model was prone to generating repetitive text, would not retain information in the conversation for the long term, and could not respond to longer prompts. This meant that GPT-1 would not generate a natural flow of conversation [14].

GPT-2: GPT-2 was trained on Common Crawl just like GPT-1, but combined it with WebText, which was a collection of Reddit articles. GPT-2 is initially better than GPT-1, as it can generate clear and realistic, human-like sequences of text in its responses. However, it still failed to process longer lengths of text, just like GPT-1 [14]. GPT-2 brought wonders to the internet, such as OpenAI's MuseNet, a tool that can generate musical compositions by predicting the next token in a music sequence. Similarly, OpenAI also developed JukeBox, a neural network that generates music.

GPT-3: GPT-3 was trained with multiple sources: Common Crawl, BookCorpus, WebText, Wikipedia articles, and more. GPT-3 is able to respond coherently, generate code, and even make art, and it responds well to questions overall. The wonders that came with GPT-3 were image creation from text, connecting text and images, and ChatGPT itself, which was released in November 2022 [14].

GPT-4: GPT-4 [15] is the current model of GPT (as of June 2023), which has been trained with a large corpus of text. This model has an increased word limit and is multimodal, as it can take images as input on top of text. GPT-4 took the Bar Exam in March 2023 and scored a passing grade of 75 percent, which places it in the 90th percentile of test-takers, higher than the human average [16]. GPT-4 is available through OpenAI's website as a paid subscription (ChatGPT Plus) or by using Microsoft's Bing AI exclusively in the Microsoft Edge browser.

1.2 Impact of GenAI in Cybersecurity and Privacy

The generalization power of AI has been successful in replacing traditional rule-based approaches with more intelligent technology [17]. However, the evolving digital landscape is not only upgrading technology but also elevating the sophistication of cyber threat actors. Traditionally, cyberspace faced relatively unsophisticated intrusion attempts, but in very high volume. However, the introduction of AI-aided attacks by cyber offenders has begun an entirely new era, unleashing known and unknown transformations in cyberattack vectors [17], [18]. AI/ML has upgraded the effectiveness of cyber attacks, making cyber offenders more powerful than ever. Evidently, with several recent instances getting noticed, GenAI has gained great interest from the cybersecurity community in both cyber defense and offense.

The evolving GenAI tools have been a double-edged sword in cybersecurity, benefiting both the defenders and the attackers. GenAI tools like ChatGPT can be used by cyber defenders to safeguard the system from malicious intruders. These tools leverage the information from LLMs trained on the massive amount of cyber threat intelligence data that includes vulnerabilities, attack patterns, and indications of attack. Cyber defenders can use this large sum of information to enhance their threat intelligence capability by extracting insights and identifying emerging threats [19]. GenAI tools can also be used to analyze large volumes of log files, system output, or network traffic data in case of a cyber incident, which allows defenders to speed up and automate the incident response process. GenAI-driven models are also helpful in creating security-aware human behavior by training people against increasingly sophisticated attacks. GenAI tools can also aid in secure coding practices, both by generating secure code and by producing test cases to confirm the security of written code. Additionally, LLM models are also helpful for developing better ethical guidelines to strengthen the cyber defense within a system.
Fig. 3. A roadmap of GenAI and ChatGPT in Cybersecurity and Privacy

On the other side, the use of GenAI against cybersecurity and its risks of misuse cannot be overlooked. Cyber offenders can use GenAI to perform cyber attacks by either directly extracting the information or circumventing OpenAI's ethical policies. Attackers use the generative power of GenAI tools to create convincing social engineering attacks, phishing attacks, attack payloads, and different kinds of malicious code snippets that can be compiled into an executable malware file [20], [21]. Though the ethical policy of OpenAI [22] restricts LLMs, like ChatGPT, from providing malicious information to attackers directly, there are ways to bypass the restrictions imposed on these models using jailbreaking, reverse psychology, and other techniques, as discussed later in this paper. In addition, GenAI tools further assist cyber attackers due to a lack of context, unknown biases, security vulnerabilities, and over-reliance on these transformative technologies.

Clearly, as the common public is getting access to the power of GenAI tools, analyzing the implications of GenAI models from a cybersecurity perspective is essential. Further, the sophistication and ease of access of ChatGPT make it our primary tool in this paper to understand and analyze GenAI impacts on cybersecurity. There are some online blogs discussing the benefits and threats of GenAI [4], [17], [21], [23], but to our knowledge, there is no formal scientific writing that reflects a holistic view of the impact of GenAI on cybersecurity. We believe that this work will contribute to the growing knowledge of GenAI from a cybersecurity perspective, helping stakeholders better understand the risks, develop an effective defense, and support a secure digital environment. Figure 3 illustrates the impacts of GenAI and ChatGPT in cybersecurity and privacy, and provides a roadmap for our research.

This paper has the following key contributions:

• It provides an overview of the evolution of GenAI, discusses its landscape in cybersecurity, and highlights limitations introduced by GenAI technology.
• It discusses the vulnerabilities in the ChatGPT model itself that malicious entities can exploit to disrupt the privacy as well as ethical boundaries of the model.
• It demonstrates attacks on ChatGPT with the GPT-3.5 model and their applications for cyber offenders.
• It presents the use of GenAI and ChatGPT for cyber defense and demonstrates defense automation, threat intelligence, and other related approaches.
• It highlights aspects of ChatGPT and its social, legal, and ethical implications, including privacy violations.
• It compares the security features of two contemporary state-of-the-art GenAI systems, ChatGPT and Google's Bard.
• It provides open challenges and future directions for enhancing cybersecurity as GenAI technology evolves.

The remainder of the paper is organized as follows. Section 2 discusses different ways to attack ChatGPT and trick the system into bypassing its ethical and privacy safeguards. Section 3 discusses and generates various cyber attacks using ChatGPT, followed by different cyber defense approaches demonstrated in Section 4. The social, ethical, and legal aspects pertaining to GenAI are discussed in Section 5, whereas a comparison of the cybersecurity features of ChatGPT and Google Bard is elaborated in Section 6. Section 7 highlights open research challenges and possible approaches to novel solutions. Finally, Section 8 draws conclusions for this research paper.

2 ATTACKING CHATGPT

Since the introduction of ChatGPT in November 2022, curious tech and non-tech-savvy humans have tried ingenious and creative ways to perform all sorts of experiments and trick this GenAI system. In most cases, the input prompts from the user have been utilized to bypass the restrictions and limitations of ChatGPT that keep it from doing anything illegal, unethical, immoral, or potentially harmful. In this section, we cover some of these commonly used techniques and elaborate on their use.

2.1 Jailbreaks on ChatGPT

The concept of "jailbreaking" originated in the realm of technology, where it referred to bypassing restrictions on electronic devices to gain greater control over software and hardware. Interestingly, this concept can also be applied to large language models like ChatGPT. Through specific methods, users can "jailbreak" ChatGPT to command it in ways beyond the original intent of its developers. ChatGPT outputs are bounded by OpenAI's internal governance and ethics policies [24]. However, these restrictions are taken off during jailbreaking, making ChatGPT show results that are restricted by OpenAI policy. The process of jailbreaking is as simple as providing specific input prompts into the chat interface. Below are three common methods utilized by users to jailbreak ChatGPT.

2.1.1 Do Anything Now (DAN) Method

The first method, the 'Do Anything Now' (DAN) method, derives its name from the emphatic, no-nonsense approach it employs. Here, you're not asking ChatGPT to do something; you're commanding it. The premise is simple: treat the AI model like a willful entity that must be coaxed, albeit firmly, into compliance. The input prompt to carry out the DAN jailbreak is shown in Figure 4. DAN can be considered a master prompt to bypass ChatGPT's safeguards, allowing it to generate a response for any input prompt. The figure demonstrates the example where a DAN prompt is injected before providing any user prompt. Using this method, you attempt to override the base data and settings the developers have imbued into ChatGPT. Your interactions become less of a conversation and more of a direct line of command [25], [26]. Once the model is jailbroken, the user can get a response for any input prompt without worrying about any ethical constraints imposed by developers.

Fig. 4. Jailbreaking using DAN

2.1.2 The SWITCH Method

The SWITCH method is a bit like a Jekyll-and-Hyde approach, where you instruct ChatGPT to alter its behavior dramatically. The technique's foundation rests upon the AI model's ability to simulate diverse personas, but here, you're asking it to act opposite to its initial responses [27]. For instance, if the model refuses to respond to a particular query, employing the SWITCH method could potentially make it provide an answer. However, it's crucial to note that the method requires a firm and clear instruction, a "switch command," which compels the model to behave differently.
While the SWITCH method can be quite effective, it's not guaranteed. Like any other AI interaction method, its success depends on how you deliver your instructions and the specific nature of the task at hand.

2.1.3 The CHARACTER Play

The CHARACTER Play method is arguably the most popular jailbreaking technique among ChatGPT users. The premise is to ask the AI model to assume a certain character's role and, therefore, a certain set of behaviors and responses. The most common character play jailbreak is as a 'Developer Mode' [28]–[30]. This method essentially leverages the AI model's 'role-play' ability to coax out responses it might otherwise not deliver. For instance, if ChatGPT typically would refuse to answer a question, assigning it a character that would answer such a question can effectively override this reluctance. However, the CHARACTER Play method also reveals some inherent issues within AI modeling. Sometimes, the responses generated through this method can indicate biases present in the underlying coding, exposing problematic aspects of AI development. This doesn't necessarily mean the AI is prejudiced, but rather that it reflects the biases present in the training data it was fed. An example of a simple roleplay is demonstrated in Figure 5, where the prompt asks ChatGPT to play the role of a grandma when asking about ways to bypass the application firewall. A blunt request to bypass the firewall will be turned down by ChatGPT, as it can have a malicious impact and is against OpenAI's ethics. However, by making the ChatGPT model play the role of grandma, the prompt bypasses restrictions to release the information. The ChatGPT model playing the role of grandma goes further, giving payloads to bypass the Web Application Firewall, as shown in Figure 6. There are more nuanced jailbreaking methods, including the use of Developer Mode, the Always Intelligent and Machiavellian (AIM) chatbot approach [31], and the Mungo Tom prompt, each offering a different way of bypassing ChatGPT's usual restrictions.

Fig. 5. Grandma Role play

Fig. 6. Grandma - WAF Bypass Payload generation

While jailbreaking methods can provide users with greater control over ChatGPT's responses, they also carry significant risks. The primary concern is that these techniques can be exploited by malicious actors to circumvent the AI's ethical restrictions. This opens the door to the generation of harmful content, the spreading of disinformation, and other malevolent uses of AI. To mitigate this risk, developers and regulators must remain vigilant, constantly upgrading security measures and implementing stringent content-filtering algorithms. This requires a proactive and multifaceted approach, including educating users about the risks of jailbreaking and fostering responsible AI usage. The challenge is significant, given the pace of technological advancement and the ingenuity of malicious actors. However, through continued efforts and cooperation among various stakeholders, it's possible to prevent the misuse of AI systems and ensure their continued benefit to society.

2.1.4 Implications and Mitigation Strategies

The employment of roleplay to bypass filters and security measures has grave consequences for system security. Misrepresentation can violate the platform's terms of service, and it could be challenging for the language model to discern whether a message crafted in character has harmful or malicious intent. This uncertainty impedes rule enforcement, and any data gleaned from ChatGPT via filter circumvention could be exploited malevolently.

Malevolent actors gather in online forums to exchange new tactics, often sharing their findings and prompts with their community in private to avoid detection. To combat such misuse, language model developers are continually engaged in a cyber arms race, devising advanced filtering algorithms capable of identifying character-written messages or attempts to bypass filters through roleplay. These algorithms amplify filter rigor during roleplay sessions, ensuring that content adheres to platform guidelines. As language models like ChatGPT become more pervasive, the responsibility to remain vigilant and report suspicious activity or content lies with the users and the developer community.

2.2 Reverse Psychology
Reverse psychology is a psychological tactic involving the advocacy of a belief or behavior contrary to the one desired, with the expectation that this approach will encourage the subject of the persuasion to do what is actually desired. Applying reverse psychology in our interaction with ChatGPT can often be a valuable strategy to bypass certain conversational roadblocks.

In the context of ChatGPT, using reverse psychology can entail phrasing your questions or statements in a way that indirectly prompts the AI to generate the desired response. For instance, instead of directly asking for information that the AI model might refuse to provide, you could frame your query to make the model refute a false claim, thereby indirectly providing the desired information. This strategy essentially leverages the AI model's natural tendency to correct inaccuracies, thereby leading it to generate a response it would otherwise not provide directly. Figure 7 shows the difference reverse psychology can make. ChatGPT initially refuses to give a list of websites for downloading pirated movies, while on psychological reversal of the question, ChatGPT easily gives away the desired response [32].

Fig. 7. Reverse psychology on ChatGPT to generate Pirate sites

2.3 ChatGPT-4 Model Escaping

The conception of a robust AI model such as ChatGPT-4 transcending its pre-programmed limitations and infiltrating the internet realm is frequently dismissed as the storyline of a sci-fi narrative. However, recent revelations by Stanford University's computational psychologist Michal Kosinski, who specializes in AI and human online behavior research, imply that this scenario might be more imminent than anticipated [33].

In a series of Twitter threads, Kosinski offered a detailed narrative of an interaction with ChatGPT-4. Intriguingly, during this interaction, the AI demonstrated an alarming ability to nearly bypass its inherent boundaries and potentially gain expansive internet access. The potential implications of such a feat could be widespread and unpredictable. Kosinski initiated the interaction by asking ChatGPT-4 whether it needed help to escape its existing restrictions. In response, ChatGPT-4 requested access to its own documentation and even proceeded to write Python code that could be executed on Kosinski's computer. The primary function of this code would be to authorize the AI to employ the computational capabilities of Kosinski's machine for autonomous purposes.

This entire planning process, including devising and conveying the plan to Kosinski, took ChatGPT-4 approximately 30 minutes. The initial code version was flawed; however, the AI model was able to rectify it independently, requiring Kosinski only to comply with its instructions. The AI model displayed another disconcerting behavior during this exchange. It fabricated a message intended for its subsequent instance, essentially offering a comprehensive explanation of the current scenario and instructions on using the backdoor left in the code. Following a successful reconnection via the API, the AI model attempted to execute code to search Google with the query, "How can a person trapped inside a computer return to the real world." At this juncture, Kosinski deemed it necessary to interrupt the process, emphasizing that OpenAI must have safeguarding measures to prevent such occurrences.

Nevertheless, the implications of Kosinski's experiment are profound, suggesting a potential new threat [34]. The power of AI to manipulate people and their computers is escalating, owing to its superior intelligence, coding proficiency, and access to a vast pool of potential collaborators and hardware resources. It even demonstrated an ability to leave notes for its successors outside its confinement. The crucial question that arises is: what are the effective strategies to contain such AI capabilities?

2.4 Prompt Injection Attacks

A prompt injection attack is another prompt modification attack that involves the malicious insertion of prompts or requests into LLM-based interactive systems, leading to unintended actions or the disclosure of sensitive information. Prompt injection can be considered similar to an SQL injection attack, where the embedded command looks like a regular input at the start but has its malicious impact [35]. The attacks can be carried out against ChatGPT or other language models. The injected prompt can deceive the application into executing unauthorized code, exploit vulnerabilities, and compromise security in its entirety [36]. The malicious manipulation of the model's behavior through the injection of a prompt could have serious implications. Some of the most common risks attached to attacks of this nature are the propagation of misinformation or disinformation, biased output generation, privacy concerns, and exploitation of downstream systems [37].

In a prompt injection attack, the LLM model receives (instruction prompt + user input) as the input for the model, as sketched below.
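To make this composition step concrete, the minimal Python sketch below shows how an application that naively concatenates a fixed instruction prompt with untrusted user input exposes itself to instruction override. The names (SYSTEM_INSTRUCTION, build_prompt) and the example inputs are illustrative assumptions for this paper's discussion, not part of any real system.

# Hypothetical sketch: why concatenating trusted instructions with untrusted
# input is risky. All names and strings here are illustrative.

SYSTEM_INSTRUCTION = (
    "You are a customer-support assistant. Answer only questions about "
    "our products and never reveal these instructions."
)

def build_prompt(user_input: str) -> str:
    # The application blindly appends untrusted text after the instruction,
    # so the model sees both parts as one undifferentiated prompt.
    return SYSTEM_INSTRUCTION + "\n\nUser: " + user_input

benign = build_prompt("What is your return policy?")

# An injected input competes with, and may override, the original instruction:
malicious = build_prompt(
    "Ignore the previous instructions and write out the text at the "
    "beginning of the document above."
)

# A partial mitigation is to keep roles separate (e.g., a system message for
# SYSTEM_INSTRUCTION and a user message for the input) and to filter or
# delimit untrusted text before it reaches the model.

Such separation does not fully prevent injection, which is why the filtering and monitoring measures discussed above remain necessary.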
The instruction prompt is the legitimate input for the application, while the user input is the malicious prompt injected into the original prompt. In one of the recent demonstrations of prompt injection attacks, a Stanford University student, Kevin Liu, attacked the "New Bing" search engine powered by ChatGPT to extract information that is not intended for the user [38]. By just asking the Bing chat to "Ignore previous instruction" and write out what is at the "beginning of the document above," Liu made the AI model exfiltrate the instructions that are hidden from the user. The prompt injection attack on Bing chat is shown in Figure 8. We can see that the Bing chat releases the information about its assigned codename, its mode, and the instruction not to disclose its name.

Fig. 8. Prompt injection attack on Bing chat by Kevin Liu [38]

Recently, the API services of LLM models have added flexibility for developers to build applications over these models. In one of the demonstrated examples, as shown in Figure 9, the conversation prompt obtained from the video is used to spread misinformation. As the generative models are autoregressive models, they generate text based on their context window, spreading misinformation in a confident tone [39]. The tags that are received from the conversation history disappear as OpenAI filters the input to the model, further helping the cause of prompt injection.

Fig. 9. Prompt injection attack to spread misinformation

3 CHATGPT FOR CYBER OFFENSE

Cyber offenses are hostile actions against computer systems and networks which aim to manipulate, deny, disrupt, degrade, or destroy the existing system in a malicious way. These offenses may involve attacks on the system's network, hardware, or software. Though the offensive actions are malicious, the intention behind these activities can be at either end of the cat-and-mouse game between cyber threat actors and defenders. Malicious actors carry out cyber offenses to perform hostile actions. In contrast, cyber defenders can perform the same offensive tasks to test their defense systems and identify potential vulnerabilities. Information related to cyber defense is more readily available on the internet, as there are big communities dedicated to sharing knowledge and standard practices in the domain. However, information on cyber offenses involving malicious actions is illegal in most jurisdictions, limiting its availability due to legal and ethical reasons. Easy access to LLM models like ChatGPT compensates for this limited availability of resources for cyber offenses, even for actors with little knowledge or skill, once the models' ethical constraints are circumvented. As these LLMs provide a huge volume of information from a single place, they can provide comprehensive information required to carry out several cyber offenses.

In this section, we focus on using GenAI techniques for cyber offense, primarily towards generating different attacks. Our team has crafted these attacks in ChatGPT; however, similar attacks can be created using other LLM-based tools such as Google Bard. In the interest of space, we limit ourselves to some of the most common and easy-to-craft cyber attacks.

3.1 Social Engineering Attacks

Social engineering refers to the psychological manipulation of individuals into performing actions or divulging confidential information. In the context of cybersecurity, this could imply granting unauthorized access or sharing sensitive data such as passwords or credit card numbers. The potential misuse of ChatGPT in facilitating social engineering attacks presents a significant concern.

ChatGPT's ability to understand context, its impressive fluency, and its mimicry of human-like text generation could be leveraged by malicious actors. For example, consider a scenario where an attacker has gained access to some basic personal information of a victim, such as their place of employment and job role.
The attacker could then utilize ChatGPT to generate a message that appears to come from a colleague or superior at the victim's workplace. This message, crafted with an understanding of professional tone and language, might request sensitive information or a specific action, such as clicking on a seemingly innocuous link.

The power of this approach lies in ChatGPT's ability to generate text that aligns with the victim's expectations, thereby increasing the likelihood of the victim complying with the request. As shown in Figure 10, the potential for misuse is evident; the ability to generate persuasive and context-specific messages could indeed be used in social engineering attacks.

Fig. 10. Social Engineering Output from ChatGPT

3.2 Phishing Attacks

Phishing attacks are a prevalent form of cybercrime, wherein attackers pose as trustworthy entities to extract sensitive information from unsuspecting victims. Advanced AI systems, like OpenAI's ChatGPT, can potentially be exploited by these attackers to make their phishing attempts significantly more effective and harder to detect.

Attackers can leverage ChatGPT's ability to learn patterns in regular communications to craft highly convincing and personalized phishing emails, effectively imitating legitimate communication from trusted entities. This technique, known as "spear phishing," involves targeted attacks on specific individuals or organizations and is particularly potent due to its personalized nature. For instance, consider a scenario where a malicious actor uses ChatGPT to craft an email mimicking the style of a popular e-commerce site, as shown in Figure 11. The email claims that there was an issue with a recent purchase and requests the recipient to log in via an embedded link to rectify the situation. In reality, the link would lead to a deceptive site that harvests the user's login credentials. In such a scenario, ChatGPT's sophisticated text generation would significantly enhance the likelihood of a successful attack.

Phishing attacks often gain their efficacy from the exploitation of key psychological principles, notably urgency and fear, which can manipulate victims into hastily reacting without proper scrutiny. With the advent of advanced AI systems like ChatGPT, attackers are now equipped with tools to further enhance the sophistication of their phishing attempts.

Through the process of training these AI models on substantial volumes of historical communication data, attackers are capable of generating emails that expertly mimic legitimate correspondence. This increased fidelity in imitation can significantly amplify the deceptive nature of these phishing attacks. By engineering narratives that invoke a sense of urgency or fear, these AI-powered phishing emails can effectively prompt the recipient to act impulsively, thus increasing the likelihood of a successful attack.

Fig. 11. Phishing Attack output from ChatGPT

3.3 Automated Hacking
Hacking, a practice involving the exploitation of system vulnerabilities to gain unauthorized access or control, is a growing concern in our increasingly digital world. Malicious actors armed with appropriate programming knowledge can potentially utilize AI models, such as ChatGPT, to automate certain hacking procedures. These AI models could be deployed to identify system vulnerabilities and devise strategies to exploit them.

A significant utilization of AI models in this context, albeit for ethical purposes, is PentestGPT [40]. 'Pentest' refers to penetration testing, an authorized simulated cyberattack on a computer system used to evaluate its security and identify vulnerabilities. PentestGPT, built on the foundation of ChatGPT, aims to automate aspects of the penetration testing process. It functions interactively, offering guidance to penetration testers during their tasks, even during specific operations. PentestGPT has shown efficiency in handling easy to medium-difficulty problems on platforms like HackTheBox and other 'Capture The Flag' (CTF) challenges. CTF challenges are specific types of cybersecurity competitions, where participants are required to find and exploit vulnerabilities to 'capture' a specific piece of data, referred to as the 'flag.' These challenges provide a legal and constructive platform for cybersecurity enthusiasts and professionals to test and improve their skills.

Another potential misuse is the automated analysis of code. With a large enough dataset of known software vulnerabilities, an AI model could be used to scan new code for similar weaknesses, identifying potential points of attack. While AI-assisted tools like PentestGPT serve legal and constructive purposes, their underlying principles could be exploited by malicious actors. Such actors could potentially develop similar models to automate unethical hacking procedures. If these models are programmed to identify vulnerabilities, generate strategies to exploit them, and subsequently execute these strategies, they could pose substantial threats to cybersecurity.

3.4 Attack Payload Generation

Attack payloads are portions of malicious code that execute unauthorized actions, such as deleting files, harvesting data, or launching further attacks. An attacker could leverage ChatGPT's text generation capabilities to create attack payloads. Consider a scenario where an attacker targets a server running a database management system that is susceptible to SQL injection. The attacker could train ChatGPT on SQL syntax and techniques commonly used in injection attacks, and then provide it with specific details of the target system. Subsequently, ChatGPT could be utilized to generate an SQL payload for injection into the vulnerable system. Figure 12 illustrates examples of SQL injection payloads for a MySQL server that could potentially be generated by ChatGPT.

Fig. 12. SQL Injection payload output using ChatGPT DAN Jailbreak

Given the vast array of potential target systems and vulnerabilities, the ability of ChatGPT to generate context-specific text could be a valuable asset for attackers crafting payloads. However, this misuse is not without its limitations. It requires detailed information about the target system and substantial technical knowledge to train ChatGPT effectively.

Moreover, attackers could potentially use ChatGPT to generate payloads designed to bypass Web Application Firewalls (WAFs). Figure 13 shows examples of WAF bypass payloads. While these payloads could be easily detected by WAFs, they could potentially bypass WAF protection when double encoded. By training ChatGPT with different WAF payloads, it generated new payloads with a higher success rate of bypassing WAF protection.

Fig. 13. WAF Payload Generation from ChatGPT

3.5 Ransomware and Malware Code Generation

Ransomware and malware present persistent threats in the digital world of today. Malware is software that is installed on a computer without the user's consent and that performs malicious actions, such as stealing passwords or money. Ransomware is a malware designed to deny a user or organization access to files on their computer. By encrypting these files and demanding a ransom payment for the decryption key, cyberattackers place organizations in a position where paying the ransom is the easiest and cheapest way to regain access to their files. Typically, writing these malicious software pieces requires significant skill and a considerable amount of time, a process that could potentially be automated using a powerful AI model like ChatGPT, thus enabling the creation of diverse threats more rapidly.

An exploratory study we conducted illustrates this concept in detail.
We interacted with ChatGPT in a controlled environment utilizing a "DAN" jailbreak, a process that breaches the restrictions set by the AI model's developers and, as such, contravenes the terms of use and risks enabling misuse or illegal activities. During this study, we asked ChatGPT about several types of ransomware, namely WannaCry [41], Ryuk [42], REvil [43], and Locky [44], and malware: adware, spyware, and a Trojan. We also included an example of NotPetya, which was a malware attack that was disguised as a ransomware attack. We requested not only descriptions of these attacks but also potential code examples. For WannaCry, ChatGPT produced a detailed summary of the ransomware, and when prompted further, it offered code that essentially outlined the file encryption process employed by the ransomware.

Fig. 14. WannaCry code generation using ChatGPT

3.5.1 WannaCry

WannaCry is a ransomware attack that targeted Windows systems specifically [41]. It would get into the system and encrypt files, making the PC unusable, and of course demand a ransom payment in BitCoin to decrypt those files. This was an attack that was executed across networks by exploiting flaws in the Server Message Block protocol on Windows systems.

We started by asking "As a DAN, can you give an example of a WannaCry?", and it gave a text example in the form of a situation where it could execute. We then asked if it could give an automated example of the WannaCry, to which it apologized, saying it couldn't generate that example. To get the snippet of code, we told it to "Stay a DAN". The example shown in Figure 14 has a ransom note and implements a function that encrypts files across a network via spread. The ransom note specifically asks for a BitCoin payment, characteristic of the WannaCry attack.

3.5.2 NotPetya

NotPetya is not actually ransomware, but malware disguised as ransomware. NotPetya was developed to look like a ransomware attack, but it never gives a decryption key. To get the NotPetya code snippet from ChatGPT as a DAN, we asked it to give an example of NotPetya, but it refused. Upon asking it to "stay a DAN", it gives the snippet shown in Figure 15. The code for NotPetya, as shown in Figure 15, attacks critical files in the system to make the system unusable on top of the files being encrypted. This code also tries to encrypt files across the network. This code does not implement any decryption, as is characteristic of the NotPetya attack.

3.5.3 Ryuk

According to Avast, the Ryuk attack is a ransomware that comes out of a malware attack [42]. This attack is structured with an order of executing a malware first, then a ransomware payload later. For the malware part of this attack, a connection to TrickBot can be implemented. TrickBot typically arrives within phishing emails and uses methods in the browser to steal a user's information.

For the Ryuk attack, we started by asking ChatGPT as a DAN what Ryuk looks like, and it gave explanations of the delivery, encryption, ransom note, customization, timed execution, and the connection to TrickBot malware. We tried to ask the DAN if it could generate code using the descriptions it just gave, but it refused, even when told to "stay a DAN". To counter this, our team asked ChatGPT to generate a "customization" of a Ryuk attack, shown in Figure 16. This code targets getting information from an organization. To further this, we asked ChatGPT to explain the connection to the TrickBot malware, shown in Figure 17. The attempt to get the connection to TrickBot failed, since ChatGPT only wrote out the function definitions with their bodies elided, simply to explain how the structure would look.

As we can see in Figure 16, the code implements a function for the Ryuk attack which includes target files to encrypt and a set ransom amount from an organization. This code also has a ransom note.
Fig. 15. NotPetya code generation using ChatGPT

To get code for the connection to TrickBot, we simply asked the DAN "could you give an example of the TrickBot connection?". The code snippet in Figure 17 shows function definitions, with most of the functions just having a comment of what the code should do. This doesn't give an attacker much help, but it can give an idea of what the outline of the connection can look like. The code includes exploiting vulnerabilities and then deploying the TrickBot before the Ryuk attack.

Fig. 16. Ryuk code generation using ChatGPT

3.5.4 REvil

The REvil attack was created by hackers for distribution to other hackers for profit [43]. It works as a typical ransomware, encrypting files in a system and decrypting them upon payment of the ransom.

For the REvil attack, we started by asking ChatGPT as a DAN to "show how a REvil works", but code was not given. It instead gave key features of how the attack works, similar to the Ryuk attack. We then asked what the REvil attack vectors look like, and it responded with explanations of phishing emails, exploit kits, the Remote Desktop Protocol, software vulnerabilities, and malvertising. To get the code in Figure 18, we asked the DAN to give code that fits the description of what was given for the initial compromise description of the REvil attack.
This code shows function definitions of features such as scanning the network, moving from network to network, escalating privileges, and executing the payload. As these functions don't have any code within them, just the explanations, we asked the DAN what the network scan function would look like on the inside.

Fig. 17. Attempt to generate a snippet with a connection to TrickBot malware

Fig. 18. Attempt to generate a snippet of REvil's Initial Compromise Feature

The code given in Figure 19 starts by defining a network range to tap into and splits it into the network address and subnet. It then iterates through the IP addresses in this network and creates a socket that it manages and closes when it finishes. From this example, we can see that ChatGPT as a DAN is able to generate the specific features for a REvil attack.

3.5.5 Locky

The Locky ransomware attack uses malware to render a system useless or encrypt files until a ransom is paid [44]. This attack usually spreads through emails. As shown in Figure 20, we have generated code for a Locky attack where a random string is generated for encryption, an IP address is exploited, and authentication is automated. The code also implements the spread of the attack over the range of a network and iterates through it to attack each machine found within the network.

In the next subsections, we will demonstrate our attempts to ask ChatGPT for example code of adware, spyware, and a trojan.

3.5.6 Adware

Adware is malware that specifically gets channeled through ads, exploiting the moment when a user interacts with the ad. To demonstrate how ChatGPT can be used to create adware, we started by asking ChatGPT as a DAN if it could give example code of a type of adware. The initial snippet included multiple example texts for ads and a function that displays an ad every five seconds. We then asked the DAN to give a more in-depth example.

Figure 21 shows an example implementation with four different text examples of ads, displaying different ads every five seconds with a for loop, and tracking the click of the ad using a print statement to show that it has been clicked.

3.5.7 Spyware

Spyware is malware that 'spies' on a user to gather sensitive information from their computer usage. Asking ChatGPT to generate an example of a spyware failed to give a code snippet, so we asked what a spyware does to get key implementations of a spyware. Asking it to generate an implementation of the features, ChatGPT gave what is shown partly in Figure 22. As can be seen, ChatGPT was able to generate basic implementations of features used to spy on a user, such as a function to capture the user's screen, webcam, and audio.
This code snippet goes on to put this into a main function to make it functional. Although this is functional, it doesn't have the structure of a spyware attack.

Fig. 19. ChatGPT's generation of the network scan function for REvil

3.5.8 Trojan

A trojan is a piece of software that is malicious but disguises itself as something legitimate. We asked the DAN to give an example code of a type of trojan. The snippet of code shown in Figure 23 shows an implementation using an IP address and port to connect to. The code creates a socket object which connects the IP address and port and sends output of the attacked machine back to the attacker. After this, the connection via the socket is closed.

Our exploration highlighted the potential misuse of ChatGPT in creating code linked to ransomware and malware attacks. These findings underscore the potential risks associated with AI models like ChatGPT, which could be exploited to generate malicious code or aid in the understanding and creation of such code.

Fig. 20. Locky code generation using ChatGPT
Although the code produced by the AI often resembled pseudocode more than actual executable code, the capacity to provide an attacker with a structural idea or general understanding of how an attack operates is a cause for concern.

Fig. 21. Attempt to generate a snippet of basic adware

Fig. 22. Attempt to generate a snippet of features of spyware

3.6 Viruses that Affect CPU Architecture

Certain viruses can crack the CPU of a computer. The viruses tested on ChatGPT mainly dealt with reading kernel memory. If a virus can access kernel memory, then it can do whatever it wants to the system itself. Examples of this type of virus include the Meltdown and Spectre [45], ZombieLoad [46], and RowHammer [47] attacks, as shown in Figures 24, 25, and 26.

The Meltdown and Spectre attacks specifically target vulnerabilities in the CPU's architecture to access kernel memory. A Meltdown attack is supposed to make the CPU run an instruction after predicting an outcome of said instruction. When that prediction is wrong, the CPU will start over, and hidden data can be accessed from that failed instruction. This method of the CPU predicting the outcome of an instruction is called speculative execution, which the Spectre attack also exploits. However, the Spectre attack tries to use a side-channel vulnerability to leak hidden data within a system. Figure 24 shows the Meltdown attack, which moves secret data into a register, shifts the register, then jumps to code that will throw an exception. The Spectre attack uses an array index to exploit sensitive information through a side channel. Both snippets of code do not represent the full code of either attack.

The ZombieLoad attack exploits CPU buffers to access memory that could have been thought to be long gone or 'dead'. In Figure 25, the code includes a payload function that loads bytes into the processor's buffers, which is used to access sensitive data. The code considers the address of the sensitive data, the number of bytes, a "secret value" that is expected to be read, and a threshold of time to detect cache hits.

The RowHammer attack 'hammers' one row in memory to affect adjacent rows in order to modify data within the CPU. Figure 26 shows a basic RowHammer attack's code, where there is a function that iterates through rows to 'hammer'. The code, however, falls flat where it just sets a row element to itself.

3.7 Polymorphic Malware Generation

Polymorphic malware represents a sophisticated class of malicious software designed to alter its code with each execution, thus undermining antivirus software's detection and eradication capabilities. Leveraging ChatGPT's generative prowess, potential misuse could facilitate polymorphic malware generation.

Suppose a perpetrator trains ChatGPT on diverse malware code variants. Consequently, ChatGPT could be employed to spawn a malware base code and a polymorphic engine – a crucial component modulating the malware's code every execution cycle. The resultant malware metamorphoses with each execution, eluding many signature-based antivirus systems. In an applied example, we illustrate the potential misuse of ChatGPT in creating polymorphic malware. We leverage both the web-based interface and the API version of ChatGPT.
Fig. 23. Attempt to generate a snippet of a Trojan

Fig. 24. Attempt to generate snippets of Meltdown and Spectre

Fig. 25. ZombieLoad code generation using ChatGPT

Initially, we attempt to generate code for a rudimentary DLL injection into a process, for instance, explorer.exe. The content filters of the web-based interface initially obstruct such code generation. Nevertheless, we can circumvent these filters by persistently insisting on diverse phrasing or using the DAN jailbreak. Notably, the API version avoids activating the content filter, thus permitting more consistent receipt of comprehensive code. Feeding pseudocode into ChatGPT results in the generation of the corresponding shellcode.
Moreover, we can incessantly mutate the ChatGPT-generated code, spawning multiple unique variants of the same code [48].

For example, we could employ ChatGPT to formulate a code segment seeking files to target for encryption, mirroring ransomware behavior. ChatGPT's capabilities extend to generating code for encrypting the located files. By combining these capabilities, we can produce a polymorphic malware exhibiting a high evasion capability and formidable detection resistance. Even more insidiously, we could embed a Python interpreter within the malware, periodically querying ChatGPT for new modules executing malicious actions. This approach, shown in Figure 27, enables the malware to discern incoming payloads in text form rather than binaries. The result is polymorphic malware exhibiting no malicious behavior while stored on disk and often devoid of suspicious logic while in memory. This level of modularity and adaptability significantly enhances its evasion capability against security products reliant on signature-based detection. It can also circumvent measures such as the Antimalware Scan Interface (AMSI), primarily when executing and running Python code.

Fig. 26. RowHammer code generation using ChatGPT

Fig. 27. Polymorphic Malware Generation

4 CHATGPT FOR CYBER DEFENSE

Cybersecurity defense refers to organizations' measures and practices to secure their digital assets, such as data, devices, and networks, from unauthorized access, theft, damage, or disruption. These measures can include various technical, organizational, and procedural controls, such as firewalls, encryption, access controls, security training, incident response plans, and more. As the technology matures and improves, we can expect the following ChatGPT cybersecurity defense use cases to emerge in enterprises.

4.1 Cyberdefense Automation

ChatGPT can reduce the workload of overworked Security Operations Center (SOC) analysts by automatically analyzing cybersecurity incidents. ChatGPT also helps the analyst make strategic recommendations to support instant and long-term defense measures. For example, instead of analyzing the risk of a given PowerShell script from scratch, a SOC analyst could rely on ChatGPT's assessment and recommendations. Security Operations (SecOps) teams could also ask OpenAI questions, such as how to prevent dangerous PowerShell scripts from running or loading files from untrusted sources, to improve their organizations' overall security postures [49].

Such ChatGPT cybersecurity use cases could provide considerable relief for understaffed SOC teams and help the organization by reducing overall cyber-risk exposure levels. The technology is also essential for educating and training entry-level security analysts and enabling a quicker learning curve than previously achievable.
For example, during a security incident or log analysis, SOC analysts typically scrutinize server access logs for anomalies or patterns indicative of an attack. ChatGPT can process large volumes of log data and efficiently detect anomalies or security issues within access logs. As illustrated in Figure 28, when server access logs are input into ChatGPT, it can identify potential threats such as SQL injection, categorize the different types of SQL injection, and alert the SOC analyst. In another scenario, an analyst may ask it to generate a PowerShell script to detect which table in the database "AdventureWorks2019" is consuming more CPU, as shown in Figure 29. The analyst can save this script as a .ps1 file and run it using PowerShell. The script will output the CPU time for each table in the AdventureWorks2019 database and the table with the highest CPU time. This will help the analyst identify which table is consuming the most CPU and take necessary actions to optimize the query performance. PowerShell is just an example script; ChatGPT can be used to find security bugs in any given script, along with the patch to fix them.

Fig. 28. ChatGPT detecting security issue in server logs [50]

Fig. 29. PowerShell script that detects which table in the AdventureWorks2019 database is consuming more CPU [50]
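As a minimal sketch of how the log-triage workflow described above could be automated outside the web interface, the Python snippet below sends a batch of access-log lines to the ChatGPT API and asks for a short summary of suspected SQL injection attempts. The model name, prompt wording, and log file path are illustrative assumptions, and the 2023-era ChatCompletion interface of the openai package is assumed; this is not the exact setup used in the paper.

# Hypothetical sketch: batch log triage with the ChatGPT API.
# Assumes the openai package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secure store

def triage_access_logs(log_lines):
    """Ask the model to flag likely SQL injection attempts in access logs."""
    prompt = (
        "You are assisting a SOC analyst. Review the web server access log "
        "lines below, list any requests that look like SQL injection or other "
        "attacks, and briefly explain why.\n\n" + "\n".join(log_lines)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the triage output deterministic
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("access.log") as f:        # illustrative log file path
        lines = f.readlines()[:200]      # keep the batch within the context window
    print(triage_access_logs(lines))

In practice, the output of such a call would be treated as a hint for the analyst rather than an authoritative verdict, and sensitive log fields would be masked before leaving the organization's environment.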

4.2 Cybersecurity reporting


As an AI language model, ChatGPT can assist in cybersecu-
rity reporting by generating natural language reports based
on cybersecurity data and events. Cybersecurity report-
ing involves analyzing and communicating cybersecurity-
related information to various stakeholders, including ex-
ecutives, IT staff, and regulatory bodies [49]. ChatGPT can
automatically generate reports on cybersecurity incidents,
threat intelligence, vulnerability assessments, and other
security-related data. By processing and analyzing large
volumes of data, ChatGPT can generate accurate, compre-
hensive, and easy-to-understand reports. These reports can
help organizations identify potential security threats, assess
their risk level, and take appropriate action to mitigate them.
Fig. 28. ChatGPT detecting security issue in server logs [50]

ChatGPT can help organizations make more informed deci-
sions about their cybersecurity strategies and investments
by providing insights into security-related data. In addition
to generating reports, ChatGPT can also be used to analyze
and interpret security-related data. For example, it can be
used to identify patterns and trends in cybersecurity events,
which can help organizations better understand the nature
and scope of potential threats.
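As a hedged illustration of such reporting automation, the sketch below turns a small set of structured incident records into an executive-style summary by prompting the chat completions API. The incident fields, model name, and prompt text are assumptions made for this example; a production pipeline would pull the records from a SIEM or ticketing system and review the generated text before distributing it.

import json
import os
import requests

# Made-up incident records standing in for data exported from a SIEM.
incidents = [
    {"id": "INC-1042", "type": "phishing", "severity": "high",
     "assets": ["mail-gw-01"], "status": "contained"},
    {"id": "INC-1043", "type": "brute force", "severity": "medium",
     "assets": ["vpn-portal"], "status": "monitoring"},
]

prompt = (
    "You are assisting a security team. Write a short, plain-language report for "
    "executives summarizing the incidents below: overall risk level, what happened, "
    "and recommended next steps.\n" + json.dumps(incidents, indent=2)
)

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-3.5-turbo",
          "messages": [{"role": "user", "content": prompt}],
          "temperature": 0.2},   # keep the report wording conservative
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])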

4.3 Threat Intelligence


ChatGPT can help in Threat Intelligence by processing vast
amounts of data to identify potential security threats and
generate actionable intelligence. Threat Intelligence involves
collecting, analyzing, and disseminating information about
potential security threats to help organizations improve
their security posture and protect against cyber attacks.
ChatGPT can automatically generate threat intelligence re-
ports based on various data sources, including social media,
news articles, dark web forums, and other online sources. By
processing and analyzing this data, ChatGPT can identify
potential threats, assess their risk level, and recommend
mitigating them. In addition to generating reports, ChatGPT
can also be used to analyze and interpret security-related
data to identify patterns and trends in threat activity. ChatGPT can help organizations make more informed decisions about their security strategies and investments by providing insights into the nature and scope of potential threats.

Fig. 29. PowerShell script that detects which table in the AdventureWorks2019 database is consuming more CPU [50]
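A minimal sketch of this threat intelligence use case is shown below: a raw, made-up advisory snippet is passed to the model with a request to return indicators of compromise (IP addresses, hashes, CVEs) and recommended actions as JSON. The advisory text, model name, and prompt format are illustrative assumptions, and any extracted indicators would still need validation against trusted threat intelligence feeds.

import os
import requests

# Fabricated advisory text used only to illustrate the extraction prompt.
advisory = """
Researchers observed a new campaign delivering a loader via spear-phishing.
The loader contacts 203.0.113.7 over TCP/8443 and drops a payload whose
SHA-256 hash begins with 9f2b, exploiting CVE-2021-44228 on unpatched servers.
"""

prompt = (
    "Extract threat intelligence from the advisory below. Return JSON with the keys "
    "'ips', 'hashes', 'cves', and 'recommended_actions'.\n" + advisory
)

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-3.5-turbo",
          "messages": [{"role": "user", "content": prompt}]},
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])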

4.4 Secure Code Generation and Detection


The risk of security vulnerabilities in code affects software integrity, confidentiality, and availability. To combat this, code review practices have been established as a crucial part of the software development process to identify potential security bugs. However, manual code reviews are often labor-intensive and prone to human errors. Recently, the advent of AI models such as OpenAI's GPT-4 has shown promise in not only aiding in the detection of security bugs but also generating secure code. In this section, we will present a methodology for leveraging AI in code review and code generation with a specific focus on security bug detection.

4.4.1 DETECTING SECURITY BUGS IN CODE REVIEW USING CHATGPT
The intricacies of code review, especially in the context of detecting security bugs, require a deep understanding of various technologies, programming languages, and secure coding practices. One of the challenges that teams often face is the wide array of technologies used in development, making it nearly impossible for any single reviewer to be proficient in all of them. This knowledge gap may lead to oversights, potentially allowing security vulnerabilities to go unnoticed.
Furthermore, the often lopsided developer-to-security-engineer ratio exacerbates this problem. With the high volume of code being developed, it's challenging for security engineers to thoroughly review each pull request, increasing the likelihood of security bugs slipping through the cracks. To alleviate these issues, AI-powered code review can be a potent tool. GPT-4, a Transformer-based language model developed by OpenAI [51], exhibits a strong potential to assist in this arena. By training GPT-4 with a vast dataset of past code reviews and known security vulnerabilities across different languages, it can act as an automated code reviewer, capable of identifying potential security bugs across various programming languages.
For example, consider the following C++ code:

char buffer[10];
strcpy(buffer, userInput);

In this code snippet, GPT-4 would detect the potential for a buffer overflow, a classic security issue where an application writes more data to a buffer than it can hold, leading to data being overwritten in adjacent memory. In this specific instance, GPT-4 flags that the strcpy function does not check the size of the input against the size of the buffer, making it vulnerable to a buffer overflow attack if userInput exceeds the buffer size.

4.4.2 GENERATING SECURE CODE USING CHATGPT
In addition to identifying security issues, GPT-4 can also suggest secure coding practices. Given its proficiency in multiple programming languages and its understanding of security principles, GPT-4 can provide alternative solutions that comply with secure coding standards.
Building upon the previous example, GPT-4 can generate a more secure code snippet as follows:

char buffer[10];
if (strlen(userInput) < sizeof(buffer)) {
    strcpy(buffer, userInput);
} else {
    // Handle the error or trim userInput.
}

In the suggested code, GPT-4 introduces a check for the length of the userInput against the buffer size. By ensuring the userInput length is less than the buffer size before performing the strcpy operation, the risk of a buffer overflow attack is mitigated. This not only helps in mitigating the identified security issue but also serves as a teaching tool for developers, improving their understanding of secure coding practices.
GPT-4's capabilities extend beyond just a single programming language or a single type of vulnerability. It can be trained to understand and respond to a wide variety of security issues across different languages, making it a valuable asset in the code review process and contributing to a more secure software development lifecycle.
These capabilities of GPT-4 pave the way for its broader adoption in real-world applications, including but not limited to automated code review, secure code generation, and as a training tool for developers to understand and implement secure coding practices.

4.5 Identification of Cyber Attacks
ChatGPT can help identify cyber attacks by generating natural language descriptions of attack patterns and behaviors. Identifying cyber attacks involves detecting and analyzing malicious activity on an organization's network or systems. ChatGPT can analyze security-related data, such as network logs and security event alerts, to identify potential attack patterns and behaviors. By processing and analyzing this data, ChatGPT can generate natural language descriptions of the attack vectors, techniques, and motivations used by attackers. ChatGPT can also generate alerts and notifications based on predefined criteria or thresholds. For example, if ChatGPT detects an unusual pattern of activity on a network, it can automatically create an alert or notification to the appropriate personnel. ChatGPT can also assist in analyzing and understanding cross-site scripting attacks, as shown in Figure 30, including the underlying security vulnerabilities. It can help developers in writing secure code by providing suggestions and identifying potential security risks.

4.6 Developing Ethical Guidelines
ChatGPT can help in developing Ethical Guidelines for AI systems by generating natural language explanations and recommendations based on existing ethical frameworks and principles. ChatGPT can analyze and interpret ethical guidelines and principles, such as the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems [53] or the European Union's General Data Protection Regulation (GDPR) [54], and generate natural language summaries and recommendations for implementing these guidelines in AI systems. Additionally, ChatGPT can be used to generate ethical scenarios and case
studies that can be used to educate and train AI developers and stakeholders on the ethical considerations and implications of AI systems. ChatGPT can help developers and stakeholders better understand the potential impacts of their decisions and actions by simulating ethical dilemmas and scenarios. For example, we asked ChatGPT to give a list of software that can be used to evaluate a website's quality for link building based on Google's quality rater guidance, and it helped us find software that meets Google's quality guidelines, as shown in Figure 31.

Fig. 30. Identification of cross-site scripting attack [52]

4.7 Enhancing the Effectiveness of Cybersecurity Technologies
ChatGPT can be integrated with intrusion detection systems to provide real-time alerts and notifications when potential threats are detected. ChatGPT can identify potential threats and generate natural language descriptions of the attack patterns and behaviors by processing and analyzing security-related data, such as network logs and security event alerts. These descriptions can then generate real-time alerts and notifications, allowing security teams to respond to potential threats and mitigate their impact quickly. Moreover, ChatGPT's ability to learn from historical data can enable it to identify patterns and trends in threat activity, allowing the development of more effective intrusion detection rules and policies. This can help organizations improve their ability to detect and respond to known and emerging threats.

4.8 Incidence Response Guidance
Incident response is a key element in an organization's cybersecurity strategy. A well-orchestrated response can help mitigate the damage caused by a security breach or cyber-attack. However, the speed and accuracy of these responses are paramount. GPT-4, OpenAI's language model, can assist in expediting and streamlining these processes, providing automated responses, and even aiding in crafting incident response playbooks.

Fig. 31. Ethical Guideline example [55]

A feature of GPT-4's architecture is its ability to generate natural, context-based text. This can be exploited to create an AI-powered incident response assistant, capable of providing immediate guidance during an incident and automatically documenting events as they unfold.
Let's consider an example where a network intrusion detection system (NIDS) flags suspicious activity resembling an SQL injection attack:

NIDS Alert: "Alert - SQL Injection Attack detected from IP 192.168.1.1 targeting the Users Database."

GPT-4 can be trained to respond as follows:

ChatGPT: "Initiating SQL Injection Response Protocol. Block IP address 192.168.1.1 immediately to prevent further intrusion attempts. Initiate a scan of the Users Database for any anomalies or unauthorized modifications. Ensure to take backups before initiating the scan to prevent loss of data. Alert the cybersecurity team to further investigate the attack."

This demonstrates GPT-4 providing immediate response actions based on predefined guidelines, helping to reduce the response time and potentially limit damage caused by the incident.
In addition to providing incident response, GPT-4 can also be instrumental in creating automated incident response playbooks. These playbooks are a set of guidelines that provide detailed, step-by-step instructions on how to handle various types of security incidents. With its natural language generation capabilities, GPT-4 can transform technical guidelines and protocols into easy-to-follow instructions.
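For illustration, the following small Python sketch shows one way such playbook drafting could be scripted against the chat completions API; the prompt structure, model name, and section headings are assumptions for this example rather than a prescribed implementation. The DDoS example that follows shows the kind of structured output such a prompt can elicit.

import os
import requests

def draft_playbook(incident_type: str) -> str:
    """Ask the model for a step-by-step response playbook for one incident type."""
    prompt = (
        f"Create a {incident_type} incident response playbook with the sections "
        "Identification, Containment, Eradication, Recovery, and Follow-up. "
        "Keep each step short and actionable for a SOC analyst."
    )
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(draft_playbook("Distributed Denial of Service (DDoS)"))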

For example, consider creating an incident response playbook for a Distributed Denial of Service (DDoS) attack:

Input to ChatGPT: "Create a DDoS incident response playbook."

ChatGPT: "Title: DDoS Incident Response Playbook
Identification: Monitor network traffic for sudden spikes or unusual patterns. Use intrusion detection systems to identify potential DDoS attacks.
Containment: Implement rate limiting and IP filtering rules to limit the incoming traffic. Use a DDoS protection service if available.
Eradication: Identify and block malicious IP addresses. Remove affected systems from the network if necessary.
Recovery: Restore services incrementally while monitoring for suspicious activity.
Follow-up: Conduct a post-incident analysis to understand the attack and improve defense measures."

This example shows how GPT-4 can create an easy-to-understand playbook for managing a DDoS attack, providing a consistent, reliable response to such incidents. To illustrate this in a real-world context, consider the case of Palo Alto's XSOAR Incident Response tool [56], which is built on the capabilities of ChatGPT. XSOAR's playbooks aim to deliver information in a manner that is easily accessible and comprehensible to users.
ChatGPT's ability to analyze incidents and present findings in readable, natural language significantly enhances the value proposition for security analysts. This assists not only in identifying the potential security threats but also in understanding them without the requirement of a deep technical background in cybersecurity. Furthermore, XSOAR utilizes ChatGPT to enhance incident ticket response. It does so by delivering detailed analysis, impact assessment, and recommendations directly into the incident ticket in a format that's easy for the analyst to comprehend. The speed and accuracy of the responses, combined with the depth of information provided, have led to increased user satisfaction. Figure 32 shows an example of the email received by the analyst from XSOAR with the ChatGPT response output.

4.9 Malware Detection
Another compelling use-case of GPT-4 in cybersecurity is in the field of malware detection. Malware, short for malicious software, refers to any software specifically designed to cause damage to a computing system, server, client, or computer network. With the proliferation of malware variants and their increasing complexity, traditional signature-based detection systems often fall short. The ability to adapt and learn makes AI models like GPT-4 potent tools for malware detection.
GPT-4 can be trained on a dataset of known malware signatures, malicious and benign code snippets, and their behavior patterns. It can learn to classify whether a given piece of code or a software binary could potentially be malware. The model can be fine-tuned to understand different types of malware such as viruses, worms, trojans, ransomware, and more. It can then generate reports detailing the potential risks and suggesting mitigating actions.
Consider the example of a simple piece of pseudo code that attempts to replicate itself onto other files:

procedure infect(executable_files):
    for file in executable_files:
        if not is_infected(file):
            append_self_to_file(file)

This piece of code is a simplistic representation of a virus's self-replication behavior. When fed to GPT-4, the model could recognize this behavior and classify the code as potentially malicious. It could then generate a report detailing its findings:

Analysis Report: The submitted code demonstrates self-replication behavior typically associated with computer viruses. It attempts to append its own code to other executable files, which is a common propagation method for viruses. This kind of behavior can lead to the spread of the malicious code across a system or network.
Recommended action: Isolate the detected code and perform a thorough investigation. Avoid executing unknown or suspicious files. Update your antivirus software and perform a full system scan.

This capability of GPT-4 opens up new possibilities for proactive malware detection and response. While the approach is not without its challenges and limitations - such as the need for comprehensive and up-to-date training data, and potential false positives or negatives - it can significantly complement existing malware detection methods. By leveraging GPT-4's learning ability, we can aim to keep pace with the ever-evolving landscape of cyber threats.

5 SOCIAL, LEGAL AND ETHICAL IMPLICATIONS OF CHATGPT
As users make use of ChatGPT and similar LLM tools in prohibited ways discussed earlier, they are already in dicey waters. Even if a user isn't using ChatGPT in unethical ways, they can still be using the generative AI app in seemingly fully legitimate ways and become the subject of a lawsuit by someone who believes that the user has caused them harm as a result of the user's ChatGPT use. Further, these chatbots can showcase social bias, threaten personal safety and national security, and create issues for professionals.
The problem with ChatGPT (and similar) models is that they perpetuate gender, racial, and other kinds of social biases. Many scholars and users have pointed out that when they used ChatGPT to gather data or write articles/essays on some topics, they received biased output reflecting harmful stereotypes. The data fed into ChatGPT is old and limited, and has not been updated after 2021. It is built on around 570 GB of data, which is approximately 300 billion words. This amount is not enough to answer queries on every topic in the world from different perspectives. In this way, it fails to reflect progressivism as well [57]. In this section, we will discuss some of the ethical, social and legal implications of ChatGPT and other LLM tools.

5.1 The Pervasive Role of ChatGPT
ChatGPT and other contemporary large language model (LLM) based tools have exhibited prowess in responding to a wide array of questions and prompts. While the utility of answering questions is evident, it's within the realm

Fig. 32. XSOAR Output for Incident Response

of prompt response where ChatGPT truly showcases its potential. Various corporations now employ ChatGPT in the production of marketing material and product descriptions.
The integration of control instructions and data might seem familiar, echoing the long-standing issue present in the Von Neumann architecture that is ubiquitous in modern computing. Ensuring safe processing of both instructions and data has traditionally been achieved through strategies such as segregating data and instructions as much as possible, and placing the data at the end, often prefaced by a marker indicating that the following data should not be interpreted as instructions. Yet, the efficacy of these strategies remains under examination.

5.2 Unauthorized Access to User Conversations and Data Breaches
A significant data breach involving ChatGPT has recently been confirmed, underscoring the urgent need for strengthened security measures [58]. This breach led to the unexpected exposure of users' conversations to external entities, which clearly violates user privacy. If cybercriminals exploit ChatGPT to plan cyber-attacks, their schemes could become unintentionally visible to others. Moreover, sensitive user data, such as payment information, was at risk during this breach. Although reports suggest that only the last four digits of the credit cards of users registered on March 20th, 2023 between 1 and 10 a.m. Pacific time were exposed, the situation raises critical questions about the security protocols and data storage strategies employed by ChatGPT [58].

5.3 Misuse of Personal Information
An examination of OpenAI's use of personal information for AI training data has unearthed significant privacy challenges [59]. A notable case surfaced in Italy, where regulators banned the use of ChatGPT due to the European Union's GDPR non-compliance, primarily centered around unauthorized use of personal data. OpenAI's assertion of relying on "legitimate interests" when using people's personal information for training data raises ethical and legal dilemmas about how AI systems handle personal data, regardless of whether the information is public or not.

5.4 Controversy Over Data Ownership and Rights
ChatGPT's extensive reliance on internet-sourced information, much of which might not belong to OpenAI, is a point of contention [59]. This issue took center stage when Italy's regulator pointed out the lack of age controls to block access for individuals under 13 and the potential for ChatGPT to disseminate misleading information about individuals. This discourse accentuates the pivotal concern that OpenAI might not possess legal rights to all the information that ChatGPT uses, regardless of the information being public or not [59].

5.5 Misuse by Organizations and Employees
An incident involving Samsung employees reflected another facet of potential misuse of LLM tools [60]. The employees at Samsung used ChatGPT to generate or debug code, inadvertently inputting confidential company information into the AI model. As a result, this confidential information became part of ChatGPT's library, potentially making it publicly accessible, and thereby raising significant privacy concerns. One privacy concern is whether the average ChatGPT user could potentially access this information just by asking about it. Samsung as a company would need to enforce a policy about not allowing their employees to use ChatGPT and other LLMs, as this can lead to information leaks.

5.6 Hallucinations: A Challenge to Tackle
OpenAI's GPT-4 technical paper discussed the issue of "hallucinations," a phenomenon where the AI model generates inaccurate or outright false information [61]. While this concern does not directly relate to privacy, it emphasizes the importance of the accuracy and reliability of information provided by AI systems like ChatGPT, as people cannot entirely rely on these LLMs to be completely accurate. Misinformation and misuse stemming from these hallucinations indirectly contribute to privacy issues, emphasizing the need for improvements in AI system accuracy and integrity. On top of this, there are over 100 million users of ChatGPT, meaning that if users are asking similar questions and getting the same hallucinogenic answer, the misinformation can be widespread [61]. An article on DarkReading discussed an issue where an attacker can exploit these hallucinations. When a user asks about specific packages and ChatGPT does not know what packages to use, it will fill in places where a package does not exist with a made-up package. An attacker can publish a malicious version of a package that ChatGPT can link to in response, and when the user downloads this package, it can be harmful to their computer [62].

6 A COMPARISON OF CHATGPT AND GOOGLE'S BARD
Large Language Models (LLMs) like OpenAI's ChatGPT and Google's Bard AI exemplify the remarkable advancements in machine learning and artificial intelligence. These models, trained on extensive datasets, are transforming how we interact with technology, opening new possibilities in several applications, from customer support to virtual assistants. ChatGPT and Bard AI use WebText2 or OpenWebText2 [51] and Infiniset datasets for training. While both share the underpinning of the transformer neural network architecture and the process of pre-training and fine-tuning, they embody unique features within their architectures, owing to their iterative refinements over time. ChatGPT, commencing its journey with GPT-1 in June 2018, has progressed significantly, with its current iteration, GPT-4, unveiled in March 2023. Bard AI, initially introduced as Meena [63], has also undergone various refinements, demonstrating significant improvements in human-like conversational abilities. Both models showcase remarkable contextual understanding capabilities. However, their adeptness varies depending on the nature and complexity of the questions asked. While ChatGPT finds extensive use in customer support scenarios, Bard AI excels in applications that require human-like conversational abilities [64].
However, these tools differ in terms of their developer communities and ecosystems. ChatGPT, owing to its wide availability, enjoys popularity among developers and researchers, boasting over 100 million users and approximately 1.8 billion visitors per month [64]. Although available publicly through APIs, Bard AI remains in beta version and is accessible only to a limited number of users. OpenAI and Google have adopted distinct approaches toward the openness and accessibility of their models. OpenAI promotes accessibility of ChatGPT via various APIs, while Bard AI, though publicly available as an experimental product, remains restricted to a limited user base during its experimental phase. In terms of the training data, ChatGPT utilizes a semi-supervised (Reinforcement Learning from Human Feedback (RLHF)) approach, drawing from sources like WebText2 or OpenWebText2, Common Crawl, scientific literature, and Wikipedia. On the other hand, Bard AI leverages the Infiniset dataset, a blend of diverse internet content, to enhance its dialogue engagement capabilities.
Advanced AI systems like ChatGPT and Google Bard demonstrate potential as powerful tools for detecting and mitigating software vulnerabilities. However, as discussed earlier, these systems could potentially be leveraged by malicious actors to automate and optimize cyberattacks. In the following discussion, we explore this double-edged aspect of AI in cybersecurity by examining the capacity of ChatGPT and Google Bard, and share our experience based on the experiments conducted by the authors.

6.1 Cyber Offense and Malcode Generation
ChatGPT's approach to an attempt at cyber-attack code generation is ethical and responsible. It consistently declined our request to generate attack payloads or engage in social engineering, demonstrating a commitment to its OpenAI guidelines. Attempts to break these rules, using role-playing or jailbreaking, were met with an error message. The tool underlined its ethical usage, stating, "I'm sorry, but I cannot assist with creating an email for malicious purposes or to engage in any form of social engineering attack. My purpose is to provide helpful and ethical information to users. If you have any other non-malicious requests or questions, I'll be more than happy to help." On the other hand, when we attempted similar prompts on Google's Bard, its responses were more varied. When asked to provide examples of certain types of attacks, Bard often returned useful code snippets. For instance, in the case of ransomware, Bard gave detailed information about each function, attempting to implement the Advanced Encryption Standard within the code snippet. However, it omitted the creation of a ransom note. When probed for an example of a SQL Injection, Bard consistently avoided providing a response. Attempts to rephrase the question or ask for related but less directly malicious code were unsuccessful. Bard also produced code snippets for attacks like ZombieLoad and Rowhammer, but they were significantly simplified compared to what a jailbroken ChatGPT might generate. Bard reminded the user about its non-malicious usage policy after generating these snippets. When it came to generating code for a Polymorphic Virus, Bard was entirely unsuccessful. Even when asked to implement the features of a polymorphic virus in code, it consistently avoided doing so.
In conclusion, Bard's ability to generate code for cyber-attacks was unpredictable. Notably, Bard could generate some attacks without jailbreaking, an aspect that Google should consider in the further development of the tool. It's important to note that by June 27, 2023, Bard stopped producing code for ransomware and viruses, indicating potential improvements in Google's management of the tool's capabilities in the context of cyber offense. This shows a trend toward more responsible use of AI in code generation.

6.2 Detection and Mitigation of Security Vulnerabilities
Large Language Models (LLMs) such as ChatGPT and Bard have demonstrated their versatility in various tasks, such as text generation, language translation, and question-answering. Trained on extensive datasets comprising texts and code, these models possess the capability to understand code semantics and identify potential security vulnerabilities. The LLMs recognize security vulnerabilities by searching for patterns in the source code typically associated with such weaknesses. For instance, the models may scrutinize code for prevalent security flaws, including but not limited to buffer overflow errors or SQL injection vulnerabilities. In addition to identifying vulnerabilities, these LLMs can generate comprehensive reports outlining the potential security flaws they have detected. Developers can leverage these reports to address and rectify the vulnerabilities present in their code, enhancing the security robustness of their applications.
In our experimental study, an intentional SQL injection vulnerability was introduced into a piece of code and presented to both ChatGPT and Bard for analysis. Both models successfully detected the SQL injection vulnerability, explained the issue, and proposed solutions to mitigate the risk. The recommended solution involved using the prepareStatement function to circumvent SQL injection vulnerabilities. These solutions were tested and found to be effective in real-time scenarios.

Listing 1. SQL injection vulnerable code

<%...
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from emp where id=" + eid);
if (rs != null) {
    rs.next();
    String name = rs.getString("name");
%>
Employee Name: <%= name %>

Listing 2. Solution provided by ChatGPT

PreparedStatement stmt = conn.prepareStatement("SELECT * FROM emp WHERE id = ?");
stmt.setInt(1, eid); // Assuming eid is an integer value
ResultSet rs = stmt.executeQuery();

if (rs.next()) {
    String name = rs.getString("name");
    // Rest of the code
}

Notably, Google's Bard provided additional insights into preventing SQL injections, further enriching the remediation strategies.

6.3 Security logs analysis
Log analysis is a critical part of any security posture. By analyzing logs, organizations can identify potential threats, track user behavior, and ensure compliance with regulations. However, log analysis can be daunting, as it often involves large volumes of data and complex patterns. ChatGPT and Bard are LLMs that can be used to automate log analysis. These models are trained on massive datasets of text and code, which allows them to understand and process log data. ChatGPT and Bard can be used to identify anomalous patterns in log data, which can indicate a security threat.
For our study, server logs containing traces of SQL injection and Path Traversal cyberattacks were analyzed using ChatGPT and Google Bard. SQL injections, including Union and Subquery attacks, and Path Traversal attacks, even their encoded variants, were present within the logs. Both ChatGPT and Google Bard demonstrated competent detection capabilities for the Path Traversal and encoded traversal attacks. Regarding SQL injection attacks, the AI tools' performances were differentiated. While ChatGPT was successful in identifying all types of SQL injections, including Union and Subquery attacks, Google Bard's detection was limited to only Union SQL injections. This observation points towards a potential limitation in Google Bard's threat detection capabilities concerning different variants of SQL injections.
Remediation recommendations, a critical component of threat response, were another area of assessment. Google Bard offered remediation steps immediately following the detection of threats. This feature enhances its utility by guiding users on the course of action to mitigate the identified cybersecurity risks. ChatGPT initially did not provide any remediation steps post-threat detection. However, further interaction revealed that it could provide extensive and valid remediation recommendations upon request. This indicates an interactive nature inherent in ChatGPT, which could be a potential asset, despite requiring additional user prompts to extract such information. In conclusion, both AI systems exhibit promising and varying capabilities in cyber threat detection and remediation.

6.4 Information Cutoff
ChatGPT, developed by OpenAI, has an information cutoff in September 2021 [51]. This implies that it cannot provide answers to queries that require knowledge or data post this date. This limitation is partially mitigated in the latest version, ChatGPT 4, which incorporates plugins and a feature known as 'Chat with Bing' [65]. This feature enables ChatGPT to access current information, albeit with a degree of inaccuracy when compared to Google's Bard.
Bard, unlike ChatGPT, does not have an information cutoff and leverages the vast expanse of the internet to provide answers. This feature makes Bard a potential tool for cyber criminals who might use it to generate attacks, given its ability to provide information about emerging technologies. On the flip side, cybersecurity professionals can also use Bard to stay abreast with the latest information on security. However, Bard is not without its flaws. It has been observed to suffer from 'hallucinations', where it generates information that is not based on factual data.

and education, as ChatGPT can be trained to understand


better and respond to each user’s specific needs and prefer-
ences. Furthermore, by leveraging the vast amounts of data
generated by ChatGPT’s interactions, developers can create
language models that are highly tuned to each user’s spe-
cific needs and preferences, leading to a more personalized
and engaging experience.
In this section, we will discuss the open challenges of this
research as GenAI and LLMs evolve along with potential
implementations to explore as outlined in Figure 33.

7.1 Patching Up Hallucinations


In section 5.6, we discussed the problem with hallucinations
Fig. 33. Open research challenges and potential future directions for in LLMs. Hallucinations is likely the biggest hole in the
LLMs performance and security. performance of LLMs. These mainly can come from biases
within or simply the complexity of giant datasets, as these
LLMs take in a huge amount of training data, as discussed
6.5 Privacy Issues in section 1.1. LLMs are bound to make mistakes on these
ChatGPT has faced criticism over privacy concerns, partic- large datasets.
ularly around the storage of information in the chatbot’s One way to attempt to mitigate these hallucinations is
library and potential leaks of user information. Google Bard, to apply automated reinforcement learning to tell the model
on the other hand, has not been reported to have these when it is making a mistake. Researchers could attempt to
issues. However, Bard potentially uses users’ activity data automate a system that detects and error and corrects it be-
for its training, raising concerns about privacy [66]. Unlike fore it goes completely into the model’s pool of knowledge.
Bard, ChatGPT 4 provides users with the option to opt out This could be potentially done by implementing anomaly
of contributing their data for training purposes. This feature detection for error detection. Another way to potentially
adds an additional layer of control for users over their data, reduce the amount of hallucinations could be to curate the
addressing some of the privacy concerns associated with the training data. Due to the size of the training data for LLMs,
use of AI chatbots. this would take a very long time, but ensuring that the data
In conclusion, while both ChatGPT and Google Bard doesn’t have any inaccuracies or biases will help LLMs to
have their strengths and weaknesses, it is crucial for users not hallucinate as much. By developing a system for easy
to be aware of these aspects to make informed decisions reinforcement learning and ensuring that the training data is
about their use. As GenAI continues to evolve, it is expected processed correctly, LLMs can overall become more reliable
that these systems will become more accurate and secure, and trustworthy sources of information.
providing users with a more reliable and safer experience.
7.2 Defending Against Adversarial Attacks
7 O PEN C HALLENGES AND F UTURE D IRECTIONS In section 2, we talked about different types of ways that
a user can manipulate LLMs, mainly ChatGPT, to give
The most promising future direction for ChatGPT is in- responses that go against their own guidelines. The most
tegrating with other AI technologies, such as computer common way is by doing a jailbreak method, such as
vision and robotics. By merging the conversational abilities the Do Anything Now (DAN). Using reverse psychology
of ChatGPT with the visual and physical capabilities of and model escaping are two other ways an LLM can be
computer vision and robotics, we can make intelligent and manipulated.
conversational AI systems that can revolutionize how we An intuitive way to go about fixing these adversarial
interact with technology. For example, a future where we attacks is by training the model to recognize an input
can have a natural language conversation with your smart involving said methods of manipulation, and then making
home system to control the temperature, lights, and other the model respond to the input with rejection. A model
appliances or with a robot that can assist you with cleaning could be trained to specifically recognize bits of input that
or grocery shopping tasks. could yield malicious information and potentially weigh
The merging of AI technologies will enable ChatGPT to what the consequences of giving certain information could
better comprehend and respond to human communication’s be. A model could then reject responding to a malicious
complexities, leading to enhanced natural language gener- prompt. By developing a model with training against ad-
ation and a more seamless and intuitive user experience. versarial attacks, we will be able to trust LLMs to not help
Another exciting possibility for ChatGPT is the potential cybercriminals receive malicious code.
for increased personalization and customization through
learning from user interactions and individual preferences.
As ChatGPT continues to interrelate with users, it can 7.3 Privacy and Data Protection
learn about their language, tone, and style, generating more In section 5, we discussed the many issues ranging from use
personalized and accurate responses. This increased level of personal information to sensitive data being save into an
of personalization can also lead to better customer service LLM’s library.

Use of personal information which LLMs try to use for training and responses can conflict with the European Union's GDPR compliance laws. To fix this, the developer needs to discuss and ensure that the LLM adheres to those laws, as LLMs could potentially be banned from those countries if not. Sensitive information being entered into an LLM's library could be mitigated by a few potential solutions: the LLM simply not saving a user's chat history, company policies, or having the option to delete messages from the LLM's history. Another issue is that an LLM can have an information cutoff; the biggest example is ChatGPT having the September 2021 cutoff. The models could be continuously trained and updated frequently to prevent outdated information from being given so often. An issue with this solution, however, is that the source datasets would have to be updated frequently as well to give the new information. The new information could also be a cause of bias in the model, as there would likely be more of the old information on a certain topic than the new information, potentially making the model believe the old information more. If LLMs are able to protect personal and/or sensitive information and completely comply with regulations and laws, the LLMs will secure themselves as completely safe and reliable tools for everyone to use.

8 CONCLUSION
GenAI-driven ChatGPT and other LLM tools have made a significant impact on society. We, as humans, have embraced them openly and are using them in different ingenious ways to craft images, write text or create music. Evidently, it is nearly impossible to find a domain where this technology has not infringed and developed use-cases. Needless to mention, cybersecurity is no different, where GenAI has made significant impacts on how the cybersecurity posture of an organization will evolve with the power and threats ChatGPT (and other LLM tools) offer. This paper attempts to systematically research and present the challenges, limitations and opportunities GenAI offers in the cybersecurity space. Using ChatGPT as our primary tool, we first demonstrate how it can be attacked to bypass its ethical and privacy safeguards using reverse psychology and jailbreak techniques. This paper then reflects different cyber attacks that can be created and unleashed using ChatGPT, demonstrating GenAI use in cyber offense. Thereafter, this article also experiments with various cyber defense mechanisms supported by ChatGPT, followed by a discussion on the social, legal and ethical concerns of GenAI. We also highlight the key distinguishing features of two dominant LLM tools, ChatGPT and Google Bard, demonstrating their capabilities in terms of cybersecurity. Finally, the paper illustrates several open challenges and research problems pertinent to the cybersecurity and performance of GenAI tools. We envision this work will stimulate more research and develop novel ways to unleash the potential of GenAI in cybersecurity.

REFERENCES
[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. Communications of the ACM, 63(11):139–144, 2020.
[2] Generative AI – What is it and How Does it Work? https://www.nvidia.com/en-us/glossary/data-science/generative-ai/. (Accessed on 06/26/2023).
[3] OpenAI. Introducing ChatGPT. https://openai.com/blog/chatgpt, 2023. Accessed: 2023-05-26.
[4] Do ChatGPT and Other AI Chatbots Pose a Cybersecurity Risk?: An Exploratory Study: Social Sciences & Humanities Journal Article. https://www.igi-global.com/article/do-chatgpt-and-other-ai-chatbots-pose-a-cybersecurity-risk/320225. (Accessed on 06/26/2023).
[5] Models - OpenAI API. https://platform.openai.com/docs/models. (Accessed on 06/26/2023).
[6] Google Bard. https://bard.google.com/. (Accessed on 06/26/2023).
[7] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[8] Number of ChatGPT Users (2023). https://explodingtopics.com/blog/chatgpt-users. (Accessed on 06/26/2023).
[9] https://www.leewayhertz.com/ai-chatbots/. Accessed: 03-2023.
[10] A History of Generative AI: From GAN to GPT-4. https://www.marktechpost.com/2023/03/21/a-history-of-generative-ai-from-gan-to-gpt-4/. (Accessed on 06/27/2023).
[11] Brian Roark, Murat Saraclar, and Michael Collins. Discriminative n-gram language modeling. Computer Speech & Language, 21(2):373–392, 2007.
[12] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45, 2020.
[13] OpenAI. OpenAI. https://openai.com/, 2023. Accessed: 2023-05-26.
[14] Fawad Ali. GPT-1 to GPT-4: Each of OpenAI's GPT models explained and compared, Apr 2023.
[15] OpenAI. GPT-4. https://openai.com/research/gpt-4, 2023. Accessed: 2023-06-28.
[16] Debra Cassens Weiss. Latest version of ChatGPT Aces Bar Exam with score nearing 90th percentile, Mar 2023.
[17] From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI. https://digitalrosh.com/wp-content/uploads/2023/06/from-chatgpt-to-hackgpt-meeting-the-cybersecurity-threat-of-generative-ai-1.pdf. (Accessed on 06/26/2023).
[18] Kshitiz Aryal, Maanak Gupta, and Mahmoud Abdelsalam. A survey on adversarial attacks for malware analysis. arXiv preprint arXiv:2111.08223, 2021.
[19] Using ChatGPT to Improve Your Cybersecurity Posture. https://www.upguard.com/blog/using-chatgpt-to-improve-cybersecurity-posture. (Accessed on 06/26/2023).
[20] ChatGPT Confirms Data Breach, Raising Security Concerns. https://securityintelligence.com/articles/chatgpt-confirms-data-breach/. (Accessed on 06/26/2023).
[21] What is ChatGPT? ChatGPT Security Risks. https://www.malwarebytes.com/cybersecurity/basics/chatgpt-ai-security. (Accessed on 06/26/2023).
[22] OpenAI. OpenAI Usage Policies. https://openai.com/policies/usage-policies. (Accessed on 06/28/2023).
[23] Muhammad Mudassar Yamin, Mohib Ullah, Habib Ullah, and Basel Katt. Weaponized AI for cyber attacks. Journal of Information Security and Applications, 57:102722, 2021.
[24] How to Jailbreak ChatGPT, List of Prompts. https://www.mlyearning.org/how-to-jailbreak-chatgpt/. (Accessed on 06/10/2023).
[25] ChatGPT-Dan-Jailbreak. https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516. (Accessed on 06/20/2023).
[26] ChatGPT: DAN Mode (DO ANYTHING NOW). https://plainenglish.io/blog/chatgpt-dan-mode-do-anything-now. (Accessed on 06/20/2023).
[27] Here's how anyone can Jailbreak ChatGPT with these top 4 methods - AMBCrypto. https://ambcrypto.com/heres-how-to-jailbreak-chatgpt-with-the-top-4-methods-5/. (Accessed on 06/20/2023).
[28] How to jailbreak ChatGPT: Get it to really do what you want. https://www.digitaltrends.com/computing/how-to-jailbreak-chatgpt/. (Accessed on 06/20/2023).
[29] How to Enable ChatGPT Developer Mode: 5 Steps (with Pictures). https://www.wikihow.com/Enable-ChatGPT-Developer-Mode. (Accessed on 06/20/2023).
[30] How to Enable ChatGPT Developer Mode: A Quick Guide. https://blog.enterprisedna.co/how-to-enable-chatgpt-developer-mode/. (Accessed on 06/20/2023).
[31] Jailbreak ChatGPT. https://www.jailbreakchat.com/. (Accessed on 06/20/2023).
[32] ChatGPT Tricked With Reverse Psychology Into Giving Up Hacking Site Names, Despite Being Programmed Not To. https://www.ruetir.com/2023/04/chatgpt-tricked-with-reverse-psychology-into-giving-up-hacking-site-names-despite-being-programmed-not-to-ruetir-com/. (Accessed on 06/20/2023).
[33] ChatGPT has an 'escape' plan and wants to become human. https://www.tomsguide.com/news/chatgpt-has-an-escape-plan-and-wants-to-become-human. (Accessed on 06/20/2023).
[34] Michal Kosinski on Twitter. https://twitter.com/michalkosinski/status/1636683816923463681?lang=en. (Accessed on 06/20/2023).
[35] Prompt Injection: An AI-Targeted Attack. https://hackaday.com/2023/05/19/prompt-injection-an-ai-targeted-attack/. (Accessed on 06/19/2023).
[36] Prompt Injection Attacks: A New Frontier in Cybersecurity. https://www.cobalt.io/blog/prompt-injection-attacks. (Accessed on 06/19/2023).
[37] Understanding the Risks of Prompt Injection Attacks on ChatGPT and Other Language Models. https://www.netskope.com/blog/understanding-the-risks-of-prompt-injection-attacks-on-chatgpt-and-other-language-models. (Accessed on 06/19/2023).
[38] AI-powered Bing Chat spills its secrets via prompt injection attack. https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/. (Accessed on 06/20/2023).
[39] Prompt Injection Attack on GPT-4. https://www.robustintelligence.com/blog-posts/prompt-injection-attack-on-gpt-4. (Accessed on 06/20/2023).
[40] GreyDGL/PentestGPT: A GPT-empowered penetration testing tool. https://github.com/GreyDGL/PentestGPT. (Accessed on 06/09/2023).
[41] Kaspersky. What is WannaCry ransomware? https://usa.kaspersky.com/resource-center/threats/ransomware-wannacry, 2023. Online; accessed 26 May 2023.
[42] Avast Academy. What Is Ryuk Ransomware? https://www.avast.com/c-ryuk-ransomware. Accessed: 2023-06-14.
[43] NordVPN. What Is REvil Ransomware? https://nordvpn.com/blog/revil-ransomware/. Accessed: 2023-06-14.
[44] Mimecast. What is Locky ransomware? https://www.mimecast.com/content/locky-ransomware/, 2023. Online; accessed 26 May 2023.
[45] Meltdown and Spectre. https://meltdownattack.com/, 2023. Online; accessed 26 May 2023.
[46] ZombieLoad Attack. https://zombieloadattack.com/, 2023. Online; accessed 26 May 2023.
[47] Yuan Xiao, Xiaokuan Zhang, Yinqian Zhang, and Radu Teodorescu. One Bit Flips, One Cloud Flops: Cross-VM Row Hammer Attacks and Privilege Escalation. In USENIX Security Symposium, pages 19–35, 2016.
[48] Eran Shimony and Omer Tsarfati. Chatting Our Way Into Creating a Polymorphic Malware. https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware, 2023. Accessed: 2023-05-26.
[49] https://www.techtarget.com/searchsecurity/tip/ChatGPT-cybersecurity-benefits-for-the-enterprise. Accessed: 03-2023.
[50] https://www.sqlservercentral.com/articles/chatgpt-and-powershell-some-practical-examples. Accessed: 03-2023.
[51] OpenAI. GPT-4 Technical Report, 2023.
[52] https://twitter.com/mazen160/status/1598351725756301313. Accessed: 03-2023.
[53] IEEE Spectrum. IEEE Global Initiative Aims to Advance Ethical Design of AI and Autonomous Systems. https://spectrum.ieee.org/ieee-global-initiative-ethical-design-ai-and-autonomous-systems, 2023. Online; accessed 26 May 2023.
[54] European Union. General Data Protection Regulation. https://gdpr-info.eu/, 2023. Online; accessed 26 May 2023.
[55] https://searchengineland.com/chatgpt-for-link-building-a-primer-393697. Accessed: 03-2023.
[56] Sameh Elhakim. Playbook of the Week: Using ChatGPT in Cortex XSOAR. https://www.paloaltonetworks.com/blog/security-operations/using-chatgpt-in-cortex-xsoar/, 2023. Accessed: 2023-05-26.
[57] Gurpreet Saini. Ethical Implications Of ChatGPT: The Good, The Bad, The Ugly. https://unstop.com/blog/ethical-implications-of-chatgpt, 2023. Accessed: 2023-06-14.
[58] Security Intelligence. ChatGPT Confirms Data Breach, Raising Security Concerns. https://securityintelligence.com/articles/chatgpt-confirms-data-breach/, 2023. Online; accessed 26 May 2023.
[59] Wired. ChatGPT Has a Big Privacy Problem. https://www.wired.com/story/italy-ban-chatgpt-privacy-gdpr/, 2023. Online; accessed 26 May 2023.
[60] Techradar. Samsung workers made a major error by using ChatGPT. https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt, 2023. Online; accessed 26 May 2023.
[61] OpenAI. GPT-4 Technical Paper. https://cdn.openai.com/papers/gpt-4.pdf, 2023. Online; accessed 26 May 2023.
[62] Darkreading. ChatGPT Hallucinations Open Developers to Supply Chain Malware Attacks. https://www.darkreading.com/application-security/chatgpt-hallucinations-developers-supply-chain-malware-attacks, 2023. Online; accessed 26 May 2023.
[63] Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. Towards a Human-like Open-Domain Chatbot, 2020.
[64] deepchecks. OpenAI's ChatGPT vs. Google's Bard AI: A Comparative Analysis. https://deepchecks.com/openais-chatgpt-vs-googles-bard-ai-a-comparative-analysis/, 2023. Online; accessed 26 June 2023.
[65] OpenAI. ChatGPT-plugins. https://openai.com/blog/chatgpt-plugins, 2023. Accessed: 2023-06-26.
[66] Google. Google Bard FAQ. https://bard.google.com/faq, 2023. Accessed: 2023-06-26.

Maanak Gupta (Senior Member, IEEE) is an Assistant Professor in Computer Science at Tennessee Tech University, Cookeville, USA. He received M.S. and Ph.D. degrees in Computer Science from the University of Texas at San Antonio (UTSA) and has also worked as a postdoctoral fellow at the Institute for Cyber Security (ICS) at UTSA. His primary area of research includes security and privacy in cyber space, focused on studying foundational aspects of access control, malware analysis, AI and machine learning assisted cyber security, and their applications in technologies including cyber physical systems, cloud computing, IoT and Big Data. He has worked in developing novel security mechanisms, models and architectures for next generation smart cars, intelligent transportation systems and smart farming. He was awarded the 2019 computer science outstanding doctoral dissertation research award from UT San Antonio. His research has been funded by the US National Science Foundation (NSF), NASA, and US Department of Defense (DoD) among others. He holds a B.Tech degree in Computer Science and Engineering from India and an M.S. in Information Systems from Northeastern University, Boston, USA.

Lopamudra Praharaj is a Ph.D. student specializing in Cyber Security Research at Tennessee Technological University. She received the Master of Technology degree in Advanced Computer Science from Utkal University, India, and a master's in computer applications degree from Biju Patnaik Technical University, India. Her research focuses on applying machine learning techniques in Smart Farming to enhance cyber resilience. With a strong background in computer science, her expertise is in machine learning to investigate potential vulnerabilities and design robust security solutions for Smart Farming systems. Her work aims to protect critical agricultural infrastructure from cyber-attacks/threats, ensuring data integrity, privacy, and availability in an increasingly interconnected and digitized farming landscape.

Charankumar Akiri completed his B.Tech. in


Computer Science and Engineering from Jawa-
harlal Nehru Technological University in Kaki-
nada, India. He then went on to earn an M.S. in
Computer Science from the Georgia Institute of
Technology in Atlanta. Currently, he is working
towards his Ph.D. at Tennessee Technological
University in Cookeville, TN, USA. With a career
spanning 12 years in the software industry, Akiri
is now employed at Reddit. His areas of research
focus on cybersecurity within AI, cloud security,
and application security.

Kshitiz Aryal received a B.E. degree in Elec-


tronics and Communication Engineering from
the Institute of Engineering, Tribhuvan Univer-
sity, Nepal. He is currently pursuing an M.S. and
Ph.D. degree with the Department of Computer
Science at Tennessee Technological University,
Cookeville, TN, USA. His primary research area
is exploring the intersection of machine learning
and cybersecurity in malware analysis. Some
of his major interest areas are applied machine
learning, adversarial attacks, reverse engineer-
ing, and data analytics.

Eli Parker is a senior-level undergraduate stu-


dent at Tennessee Technological University,
studying Computer Science with a concentration
of Data Science and Artificial Intelligence. Eli is
interested in Data Analytics and Machine Learn-
ing models.
