Generative AI and ChatGPT Enterprise Risks
Generative AI and ChatGPT Enterprise Risks
Generative AI
A C I S O’ S G U I D E
and ChatGPT
Enterprise Risks
Enabling businesses to make the
Generative AI leap by assessing
risks and opportunities, as well
as policy development
DISCLAIMER: These materials are provided for convenience only and may not be relied upon for any purpose. The
contents of this document are not to be construed as legal or business advice; please consult your own attorney
or business advisor for any such legal and business advice. The contributions of any of the authors, reviewers,
or any other person involved in the production of this document do not in any way represent their employers.
This document is released under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
WRITTEN BY
CONTRIBUTORS
Many Team8 CISO Village members, and others from the wider community,
assisted in the writing, reviewing, and editing of this document. These are the ones
who could share their names publicly: Aaron Dubin, Adam Shostack, Alyssa
Miller, Amir Zilberstein, Ann Johnson, Aryeh Goretsky, Avner Langut, Avi Ben-
Menahem, Brian Barrios, Chenxi Wang, Dave Ruedger, Dikla Saad Ramot, Doron
Shikmoni, Gidi Farkash, Imri Goldberg, Jeffrey DiMuro, Larry Seltzer, Liran
Grinberg, ADM Michael S. Rogers USN (ret), Michal Kamensky, Nadav Zafrir,
Nate Lee, Oren Gur, Reet Kaur, Roy Heldshtein, Sara Lazarus, Ric Longenecker,
Susanne Senoff, Tomer Gershoni
Executive Summary
and Key Takeaways
It supports decision making regarding the implementation, integration, and use of GenAI, in order to write
organizational policies that allow the safe and secure use of this novel technology.
It can be read in written order, to gain an understanding of GenAI threats and policy considerations, as
a framework to discuss that understanding with organizational stakeholders, and finally apply them in
organizational GenAI security policies.
Alternatively, to achieve specific objectives, the sections can be read out of order in the way that is most
effective for the reader to achieve their objectives.
To understand generative AI security implications and build threat matrices: Read The Changing
Threat Landscape, New Enterprise Risks Introduced by GenAI, and Engaging engineering teams and
Threat Modeling.
To write a Generative AI / ChatGPT security policy and make risk decisions: Read Enterprise GenAI and
ChatGPT Policy Considerations, Making risk decisions, and Sample GenAI and ChatGPT Policy Template.
Let's Talk
You're invited to reach out and discuss this and other topics with Team8. Feedback, open discussion,
and briefing requests are welcome.
You are also invited to work with us on future CISO Village collaborative projects.
Legal Risks
Reporting for Critical
Infrastructure Act of 2022 for Critical Infrastructure
and Liabilities Act of 2022"
A handbook on what CISOs
need to know before a chat
with the General Counsel
Conclusion 25
Appendix 1 - Open Source / On-Premise Alternatives 26
Purpose 27
Background 27
Enterprise Risks 27
Corporate Policy 27
Acceptable Use Policy 28
GenAI and ChatGPT implementation and integration guidelines 28
However, as with any technology, the use of GenAI also poses a range of security risks,threats, and
impacts that organizations must consider carefully.
Key questions CISOs are asking: Who is using the technology in my organization, and for what purpose?
How can I protect enterprise information (data) when employees are interacting with GenAI? How can I
manage the security risks of the underlying technology? How do I balance the security tradeoffs with the
value the technology offers?
This document provides information on risks and suggested best practices that security teams and CISOs
can leverage within their own organizations, and serves as a call to action for the community to evangelize
and further engage on the topic.
• We will explore the potential risks and threats associated with using GenAI in an enterprise setting,
focusing on technical aspects, and some of the legal and regulatory risks that stem from these in turn .
We will further discuss threat modeling, engaging engineering teams, and on-premise or open source
alternatives.
• We will then provide actionable recommendations for how enterprises can take a comprehensive
approach that includes developing an organizational policy and action plans, along with a sample policy,
so that they can ensure that they are using GenAI in a secure and safe way.
Aside from bringing the enterprise up to standard on GenAI usage, developing written policies, and
implementing security controls, we have a unique opportunity as security leaders to promote and enable
the business through safe and secure technology innovation.
Given the productivity boost that GenAI provides for all enterprises, organizations need a measured
business-enabling alternative to rejecting the use of the technology altogether. Organizations that have
initially implemented such restrictions, have more recently been reconsidering their positions and lifting
wholesale bans.
1
Legal and regulatory content is meant to flag issues in preparation for discussions with other enterprise stakeholders, and is not
intended to replace legal advice where necessary.
In the context of the developing EU AI act, some have described CISOs as “Ambassadors of Trust.” The
CISO organization’s potential is now recognized as a catalyst for developing a wider approach to risk that
includes additional emphasis on data collection, usage, governance, and infrastructures.
These are areas of responsibility where CISOs, who often do not hold the full—or any—authority, do have
technical risk literacy, and can provide substantive support to the enterprise mission. They may also be
asked to take some level of responsibility.
As senior executives with an understanding of both the business and technology, CISOs are well
positioned to help organizations navigate the risks and opportunities engendered by this new technology
When new technologies are introduced to the world, security and privacy are often an afterthought,
whether due to business incentives, budgetary and resource concerns, or simply the sheer force of
innovation.
Indeed, this has happened before with social media, the cloud, smartphones, and many other technologies
where security and privacy have been introduced very late in the adoption cycle, while uncontrolled
usage skyrocketed.
As a collective, the CISO community has the power to shape the best practices and influence the future
development of these tools—today, with the safety and privacy of individuals, as well as our organizations’,
in mind.
As an example of its rapid adoption curve, ChatGPT reached 100 million users in two months, orders of
magnitude faster than any previous technology.
• Intuitive interaction and novel content production model – A new approach in AI combined with an
interactive chat system, delivering “polished” results. End users have the ability to live-edit the results in a
chat interface to enhance future accuracy much more easily than in earlier platforms.
• Accessible to all – Many of the GenAI technologies are free or low-cost, open to the public, and accessible
to anyone with an Internet connection.
• Ease of Use – Many of the GenAI technologies are designed to be easy for people of all positions and
roles, using natural language as though conversing with another person.
• Speed and Agility – The system can produce information, source code, and data faster than previously
possible through manual searching, queries, and indexing. It can synthesize millions of pages of information
into a single paragraph.
• Integration with Third Party Applications – Many current everyday applications, such as the Microsoft
Office 365 suite of tools and browser plugins, point to a reality that GenAI will become ubiquitous in our
everyday lives.
To provide an overview of these risks, we created the reference table below, arranged and prioritized by
risk level, current as of April 2023 (Disclaimer: Every organization is unique, these are offered as guidelines).
Enterprises can take proactive measures to minimize the potential negative impact of GenAI usage and
ensure that they are leveraging this powerful tool in a secure, compliant, and responsible manner.
For example, organizations could decide that while data is being sent to a third-party GenAI SaaS platform,
opting out of the user prompt information being used to train future models, and accepting the data
retention policy of 30 days (as is in the case of OpenAI, for both), is secure enough for their needs and
meets their risk appetite.
Others may decide to declare a Risk Exception and accept the risk, or decide to explore an on-premise
alternative.
Due to hype in the media, the authors and contributors want to specifically address data exposure risks
associated with GenAI, and ChatGPT specifically, on making user input and proprietary data available to
unintended audiences, such as competitors. As of this writing, Large Language Models (LLMs) cannot
update themselves in real-time and therefore cannot return one’s inputs to another’s response,
effectively debunking this concern.
However, this is not necessarily true for the training of future versions of these models. GenAI
technologies may use submitted content to improve their algorithms and models in the future. We touch
on this risk and its likelihood, as well as specifically OpenAI’s position on leveraging user input for model
training (opt-out from such usage, data retention limitations, challenges with biased models on user input,
etc.) in the table below, and later in this document.
High
LEGAL CONSIDERATION consult an attorney
• While data sent to GenAI technologies such as ChatGPT has been effectively
entrusted to a third-party SaaS, i.e. OpenAI (see also the “Third-Party Risk
and Data Security” section of this table), it is not currently incorporated
into the LLM in real-time, and thus won’t be seen by other users. GenAI
platforms may choose to use user input to train future models, but that
doesn’t seem to be the case now. See more on this and why we make this
differentiation here.
• Specifically in the case of OpenAI, according to the company documentation,
content submitted to their API is not retained for more than 30 days, and
it is opt-out by default, while ChatGPT is opt-in by default and opt-out with
fee-based accounts. Nevertheless, information submitted is always subject
to storage and processing risk. But, this may affect the risk appetite for
different organizations.
2
These estimated risk levels are offered as guidelines based on active threats as seen in open source intelligence, and consensus
among the CISOs who wrote and reviewed this document, as of April 2023. We recommend our readers to conduct their own
estimates and risk analysis to account for their unique enterprise risks and the specific technologies used. Doubly so when
attorneys need to be consulted
AI Behavioral Actors may use models or cause models to be used in ways which will
Vulnerabilities expose confidential information about the model or cause the model to
take actions which are against its design objectives.
(e.g. Prompt
Injection) • For example, using maliciously crafted inputs, attackers can bypass expected
AI behavior or make AI systems perform unexpected jobs. This is sometimes
Estimated Risk Level2 known as “jailbreaking” and might be possible to perform in GenAI systems
to adversely impact other organizations and stakeholders to encounter and
High receive maliciously crafted results without their knowledge.
• On the user side, for example, third party applications leveraging a GenAI
API, if compromised, could potentially provide access to email and the web
browser, and allow an attacker to take actions on behalf of a user
• One common attack currently seen in the wild is where a customer support
chatbot is targeted with injection attacks, and unauthorized access to
enterprise systems could potentially be achieved by an attacker.
Threat Actor Threat actors use GenAI for malicious purposes, increasing the frequency
Evolution of their attacks and the complexity level some are currently capable of, e.g.
phishing attacks, fraud, social engineering, and other possible malicious use
Estimated Risk Level2 such as with writing malware, although that remains a limited capability at
Medium this stage.
3
The legal and regulatory landscape is as of yet in the early stages of discovery, and the topics mentioned are far from decided,
and our writing can not be taken as legal advice. You should conduct your own research and consult with your own attorney
• Some GenAI models have been reported to have used content created
by others, which they were trained on originally (in the referenced case,
code, although it could be text, an image, etc.) instead of content uniquely
generated by the model, raising risks of intellectual property infringement
or even plagiarism. In addition, the same content could potentially be
generated for multiple parties.
• Using the output of GenAI could risk potential claims of copyright
infringement, due to the training of some GenAI models on copyrighted
content, without sufficient permission from dataset owners.
• Currently, there is a lack of definitive case law in this area to provide clear
guidance for legal policies. Policies should be developed according to
current intellectual property principles.
Trust and There are substantial reputational risks stemming from GenAI producing
Reputation erroneous, harmful, biased, or embarrassing output, as well as resulting safety
considerations such as in doxxing, or hate speech.
Estimated Risk Level2
• The current generation of GenAI models have been observed to output
Medium incorrect, inaccurate, wrong and misleading information.
• Incorporation of AI outputs in organizational work products, communication
or research without vetting for accuracy may lead to publication of incorrect
statements and information.
LEGAL CONSIDERATION consult an attorney
Software • Internal and third-party applications using GenAI must be up-to-date and
Security protected by proper controls against classic software vulnerabilities and
ways they interact with evolving AI vulnerabilities.
Vulnerabilities
• Any GenAI system is exposed to the same risks as any traditional software
Estimated Risk Level 2
systems. Additionally, software vulnerabilities may interact with AI vulnerabilities
to create additional risk.
Low
• For example, a vulnerability in a front-end may allow prompt injection on
a back-end model. Alternatively in situations where model output is used
programmatically, attackers may try to affect the model output to attack
downstream systems (e.g., causing a model to output text which causes an
SQL injection when added to an SQL query).
The scope of a policy for AI/ML can cover several types of technologies, and we need to understand
which ones are being addressed when writing our policies for a more accurate risk mitigation model:
Most AI/ML-related risks should be covered by existing enterprise policies. While every enterprise
has unique requirements, these should be incorporated appropriately into its existing internal regulatory
and policy body of work.
For example, policies for writing business emails, sharing data with third parties, or using third-party
code projects, should already be well-established. These governing documents should be reviewed and
updated for GenAI-specific risks.
It is incumbent on the CISO organization to educate our user base on the risks and how they are already
governed by existing policies. Awareness campaigns around the subject should be considered.
The advent of GenAI and ChatGPT also introduces the need for new policies and controls, especially
in the contexts where GenAI technology has had a significant impact on user and system behavior.
To further illustrate the last point: Users may have used cloud services for spell-checking in the past,
which could have exposed sensitive data to a third party. But, they have not changed their entire content
production workflow or provided raw data to be used for automated text generation and analysis work
such as a presentation created with GenAI from uploaded documents.
Further, the incorporation of ChatGPT and other GenAI systems into third-party applications, from the
Microsoft Office 365 suite of tools to browser plugins, is fast becoming ubiquitous and contributes to the
rapid expansion of the risk surface.
Another area where a well-formulated policy can provide direction is in making the choice to use a large
platform, one that supports many use cases, in contrast to selecting smaller use-case specific tools (for
example platforms like copy.ai or jasper.ai used for copyrighting).
Implications for advocating one direction over another are similar to our current monoculture tradeoffs,
e.g. using the same operating system everywhere for ease-of-use, maintenance, reduced attack surface,
and patching, where the same operating system presents a homogeneous and potentially vulnerable
infrastructure across-the-board.
The 2020 Gartner State of AI Cyber Risk Management Study showed a sizable disconnect between Chief
Information Security Officers’ (CISOs’) view of AI//ML risk and their AI/ML team’s outlook. The October
2022 Gartner article Quick Answer: How Can Executive Leaders Manage AI Trust, Risk and Security?
argues that misalignment on what risks may exist or occur demands the creation and agreement on
policies across all roles and usage.
Further, not all GenAI risks are limited to the realm of cyber security, and thus mitigating potential risks
requires a comprehensive enterprise approach. As we discussed above, risks such as in privacy and data
protection, intellectual property exposure, sector-specific regulation, and AI ethics should be considered
as part of a holistic risk management strategy.
Relevant enterprise functions, such as, but not limited to, the Chief Data Officer, Chief Information Officer,
Data Protection Officer, Chief Risk Officer, and General Counsel, should be involved in the formulation of
the strategy. The CISO organization can play an important role in coordinating a streamlined approach, and
together, the organization can develop policy recommendations regarding risk level and recommended
measures for board-level approval.
Considerations workflow
GenAI policies should be designed to provide clarity and direction on the the following topics,
at a minimum:
• What are the requirements for the enterprise’s use of GenAI, and how do these correlate with the risks,
threats, and impacts described above? Are there any gaps?
• What are the risks and controls specific to the enterprise that have been identified and are applicable to
the usage of GenAI?
• How do the GenAI security, privacy, data retention, and other policies and terms of service affect the
enterprise and its choices around usage of GenAI?
• Are these customizable for users or customers?
• What enterprise business, application, and infrastructure dependencies could be impacted by the use
of GenAI?
• Who can use GenAI, for what purposes, and under what circumstances?
• What integrations does the application utilizing GenAI provide access to, when considering the GenAI
implementation?
› For example, does a customer support chatbot have access to all user data and is able to offer
compensation on missed deliveries, service outages, and the like?
› Alternatively, how could the implementation affect enterprise systems on the back-end?
• Does the organization prefer to use specific tools per use case, or one multi-purpose platform?
• When a technology is chosen, does it access a larger platform for its operation? Is it a private instance of that
platform (such as is reported to be the case with Office 365 and ChatGPT), or does it operate independently?
Some of the above questions and guardrails stem from the lack of ability to fully understand how AI/
ML is constructed or operated, at times even by the developers themselves. The below are lower-
level considerations that can, as the need arises depending on the organization’s risk appetite, also be
addressed in policy:
ChatGPT 3.5 was released publicly for the purposes of testing these guardrails. Over time, these have improved
significantly as the general public found various adversarial examples (also known as “jailbreaking”) that broke
through the guardrails. The core ChatGPT model was trained on data dating back to September 2021.
That core model has not been retrained, and as such, we should be able to quickly dispel claims that ChatGPT
is regurgitating user prompt data. As noted elsewhere in the document, this does not preclude the risk of such
training happening in future versions of the model, opt-out policy aside.
Informed risk decisions can be reached by providing clarity and direction to the enterprise on the considerations
listed above. Depending on its risk appetite, an enterprise could allow employees to share some enterprise
data with a third party GenAI platform.
For example, assuming the enterprise considers allowing using tools developed and hosted by OpenAI, it
should consider enforcing certain administrative guardrails, or principles:
Selecting to opt-out of user prompt information being used to train future models, as it currently an option
under OpenAI’s policy;
Accepting the data retention policy of 30 days, which is OpenAI’s current policy;
Requiring users to follow the Acceptable Use Policy, and to undergo risk-awareness training;
Then, other guardrails could be introduced as overlaying technical controls. It isn’t clear to what degree these,
and others, are possible at this stage:
Restricting the use of GenAI generated code to be limited to open source and software packages with
permitting licenses;
Reviewing all images and graphics that were generated from GenAI queries for copyright and trademark
infringement.
Another risk treatment approach may be to declare a permanent Risk Exception and allow employees to use a
GenAI service as-is, or a temporary Risk Acceptance with a view to re-evaluate the decision at some later time.
Yet another way to manage the GenAI risks is to host the technology on-premises, where the enterprise
security team has full control over hardening and privacy configurations. This solution is further discussed
in this appendix.
By fostering a threat-centric mindset, we can proactively address key security, privacy, and safety
concerns, while still harnessing the capabilities offered by these tools. We explore the risks discussed
from a technical standpoint, and introduce a taxonomy for further discussion.
In the Threat Modeling Manifesto, a set of authors use the Four Question Framework to frame threat
modeling work:
A small sketch of the system can illustrate the focus of an analysis. For example, there are threats to the
AI/ML tool, and threats to the users of the tool (or further parties). One example is prompt injection threats
which are overall threats to the GenA system(s).
As can be seen from the above, there are many ways to address the question of what can go wrong. One
additional example to understand the range of potential attacks is the Berryville Institute for Machine
Learning’s simple taxonomy used below. Here is that taxonomy with some minor adaptations and
examples for clarity.
Training Training Data Derivation: Deriving Data Poisoning: Introducing bias, false
Data training data from the outputs (a.k.a. information.
“Model inversion” again).
AI Model Model Theft: Opening the black box. Backdoor Models: Adding covert
functionality which attackers can
activate to cause the model to behave in
ways it has not been designed to.
Examining GenAI through the lens of this threat model helps to dispel some of the concerns about the
leakage of an organization’s confidential data through the usage of GenAI. Some of these concerns are
suspect and others are clear, so the threat model helps us to see the true nature and differences among the
issues. There are three different aspects of the threat model in consideration.
The primary concern that most organizations have is the loss of intellectual property through User Input
Extraction. As we expanded on above, the concern that the prompt inputs are used to train future models
can be managed through the provider’s opt-out process. Specifically for ChatGPT, OpenAI does not retain
API-based inputs, but does reserve the right to use non-API-based inputs unless one chooses to opt-out.
A second concern arises from the belief that the user-provided prompt inputs will be incorporated into
the core model such that one person’s inputs (which may include intellectual property or other non-public
information) might become part of another person’s outputs, thereby resulting in the loss of intellectual
property to not just the GenAI provider, but also potentially all the users of that GenAI provider.
GenAI providers that take this approach risk two major problems. First being concerns of the users who
do not wish for their intellectual property to propagate further than needed. More importantly, the second
problem of incorporating prompt inputs directly into the training of the core model is that it gives attackers
a direct path to conduct Data Poisoning attacks (Training Data Manipulation).
The third concern arises from the jailbreaks that violate guardrails established by the GenAI provider. For
ChatGPT specifically, the user-provided inputs are being actively used to train and fine-tune this part of the
product to improve the guardrails and the overall user experience. It is computationally expensive to recreate
the core model, and as such, the actual core model does not change with every new release of ChatGPT and
has not changed since September 2021.
CONCLUSIONS
To enable GenAI technology adoption, organizations need to quickly and diligently identify and understand
all the potential risks, threats, and impacts that may occur in their businesses. Reducing the pitfalls and
unacceptable use of GenAI will require formulating risk mitigations expressed in clear policy statements,
which can then be feasibly implemented within the context of the enterprise.
Although a threat model analysis may alleviate and dispel some concerns around the purported instances
of intellectual property leakage or theft, there are still legitimate reasons to be concerned about the loss
of User Input Confidentiality through the use of commercial services that operate as a SaaS offering. To
address this concern, organizations may be interested in examining an on-premise alternative to the AI
tools that are delivered through the cloud, and thus subject to potential interception by other parties.
The table below lists several open source alternatives that enable organizations to examine and instantiate
an on-premise version that can be considered and used by organizations to address intellectual property
concerns. Note that open source options have the potential for Backdoor Models (AI Model Manipulation)
in accordance with the threat model described above.
Audio
• Airgram, Descript, Otter • Whisper
Transcription
and Analysis • Chorus, Gong, Revenue.io
Image
• Hugging Face, Midjourney • Stable Diffusion
and Video
Creation • Runway • ModelScope
Purpose
[Organization Name] recognizes the potential benefits and risks associated with the use of Generative
AI (we will refer to various Generative AI technologies and Large Language Models, or LLMs, collectively
as “GenAI”). This policy outlines our commitment to responsible implementation of this technology to
ensure that its use is consistent with our values and mission, business standards, security policies, and
that the associated risks are appropriately managed.
Background
Recent GenAI innovations offer a multitude of business benefits which enterprises are actively exploring,
and which the industry is reporting many employees are already leveraging.
Specifically, ChatGPT has caused a surge of interest on the topic. It is a chatbot developed by OpenAI
that uses a machine-learning technique with natural spoken language for informational inquiries and
responding back with human-like generated responses.
GenAI technologies introduce risks which the enterprise should be aware of, and prepare for.
For example, potential intellectual property exposure, the third party risks associated with using a GenAI
platform, the data set leveraged to train the machine learning model, which has the potential to produce
flawed or inaccurate responses, and other risks as described below.
Further, GenAI can be used to help attackers become more sophisticated, which affects our security
program building. From a broader perspective, there are additional legal, regulatory, and privacy risks that
should be considered.
This policy will provide guidance on practices the organization must or should adhere to, from writing an
acceptable use policy, to developing user education and awareness campaigns.
Enterprise Risks
For the purposes of this section, review the risks table above, and identify which elements are relevant
to your own organization, risk, and technology choices.
Corporate Policy
This policy provides guidelines for the use of GenAI at [Organization Name] and is intended to promote
responsible and ethical use of this technology.
Violations of GenAI usage policies may result in disciplinary action, up to and including termination of
employment.
• Employees must not disclose confidential or proprietary information to a GenAI technology, directly or
through a third party application, unless through following the guidelines of the policy.
• Employees must use GenAI in a respectful and professional manner, refraining from using profanity,
discriminatory language, or any other form of communication that could be perceived as offensive.
• Employees must comply with all relevant laws and regulations, including those related to data privacy
and information security, according to our internal policy [policy name, link to the policy].
• Employees should report any concerns or incidents related to the use of GenAI to their supervisor or
the appropriate department.
1. Use GenAI technology in a responsible manner that aligns with our mission and values:
a. Ensure that any use of GenAI technology complies with applicable laws and regulations
b. Conduct appropriate risk assessments to identify and manage potential risks associated with the use
of GenAI technology
c. Consider the potential impact of GenAI on stakeholders, including customers, employees, and partners
d. Prepare awareness campaigns for employees and others leveraging the technology.
e. Beyond awareness of the risks, and guardrails for engagement withGenAI technology, employees
should also be reminded that:
i. It is easy to forget the answers from the other side are not coming from a human.
ii. What they input could potentially be reused by the GenAI technology in the future, when interacting
with someone else.
f. Prepare an elevation and escalation path for employees to make contact, or report in, both violations
of the policy, as well as in case suspect results are returned from the GenAI technology
2. Identify any technology, infrastructure, or business processes and systems reliant on, making use of,
or that have dependencies to or with GenAI technologies, that need to be evaluated and validated for all
GenAI integrations with the enterprise:
As part of an RFI or risk assessment process, examine the privacy-by-design and security architecture
considerations built into the system or model. There are significant differences between the various
systems that require specific analysis for each.
A draft list of security controls and mitigation strategies has been initially drafted, but it is a topic that is
still being investigated and will be the subject of an upcoming Team8 CISO Village collaborative project,
planned to be released as a paper in August 2023. It is included as a starting point for the community to
research and consider. The August 2023 publication will be shared from the working group to provide
recommended controls to consider for the growing list of risks being discovered across the spectrum of
areas related to GenAI, and as new controls are introduced into the industry, also taking into account new
ISO, NIST, and IEEE initiatives. Once again, user education and awareness training would be a good fit for
most of the below, so will be left out of the table.
Risk Control
Privacy and • Legal disclaimer in privacy policies that mention AI is used in products or processes
Confidentiality • Interactive and explicit end user opt-out when using services that have embedded
GenAI
Enterprise, SaaS, and • Filters, masks or scrubs sensitive content between organization APIs and chatbot AI
Third-party Security services
• Secure Enterprise browser
Threat Actor Evolution • Adjustment of social engineering training to consider targeted and high quality
phishing and similar attacks
Copyright and • Favor solutions trained on curated or licensed content, including the use of internally
Ownership trained systems using the OpenAI API
• Detection of intellectual property misuse, or plagiarism (GenAI for cases where content
has been copied instead of generated)
• Trademark detection
Insecure Code • Create a GenAI DMZ/staging ground to observe applications using AI/ML-generated
Generation code
• Code review should include AI/ML-generated code, possibly marked as such
Bias and • Currently out of scope of this document, as it is a more generic AI/ML issue
Discrimination
Trust and Reputation • Consider GenAI data use in enterprise system dependencies
• Add AI content to review processes
• Prompt filtering
• Inclusion of a safety system on top of the AI app to filter and monitor responses.
Software Security • Model interactions with other systems should be analyzed to identify potential
Vulnerabilities interactions
• Use model output filtering to identify problematic outputs
While CISOs are researching GenAI, and ChatGPT specifically, enterprises and employees are already using
them. The table below lists some of the primary use cases for GenAI in the enterprise that CISOs should
consider when developing policies on the subject and to assist with ideation on other possible uses in your
organization.
Experimenting with Enterprise analysts can use GenAI to experiment with training models and improve
training models the accuracy of language processing. This can help develop more sophisticated
conversational agents and chatbots to handle complex customer queries and
interactions.
Research and Enterprise product and development teams can use GenAI for research and
development to enhance development to enhance current product offerings or software. GenAI can help
develop and improve natural language processing algorithms, which can be later
current product offerings used in various applications such as voice recognition, sentiment analysis, and
or internally developed chatbots.
software
Use cases to enhance GenAI can be used by programmers in the software development lifecycle to improve
the quality of code code quality. By training the model on large datasets of code and programming
languages, GenAI can identify potential known bugs (including security defects),
provide suggestions for code optimization, and improve the overall efficiency of the
development process.
Data analysis Analysis of large amounts of data, such as customer feedback, to identify trends and
insights.
Customer service Automation of customer service and support tasks, such as answering common
and support questions, assisting with product or service inquiries, and solving common problems.
Content creation Generating content for marketing and advertising purposes, such as blog posts,
social media posts, and email marketing campaigns, at least as a first draft.
Generating Intellectual Property content, such as graphics and dialogs for relevant
companies (e.g. game companies)
Innovation Generating ideas and insights to assist in the development of new products and
services, as well as the improvement of existing ones.
Incidents
› ChatGPT Risks and the Need for Corporate Policies [National Law Review]
› Gartner: Quick Answer: How Can Executive Leaders Manage AI Trust, Risk and Security?
› ChatGPT Risks and the Need for Corporate Policies
› AI Policy Observatory [OECD]
› Advancing accountability in AI [OECD]
Books
› Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems
and What To Do About Them
OpenAI documentation
› OpenAI security page
› OpenAI security portal
› OpenAI’s Bug Bounty Program
› API data usage policies
› How your data is used to improve model performance
› Data usage for consumer services FAQ