AI Governance Playbook:
Innovation Meets Security
for Responsible AI Adoption
How to Adopt AI and Multi-LLM Strategies
in a Secure, Governable Way
KongHQ.com
1 © Kong Inc. AI Governance Playbook: Innovation Meets Security for Responsible AI Adoption
Executive summary
The breakneck pace of AI innovation and the potential risks
associated with improper usage necessitate a well-defined, robust
governance playbook for responsible adoption.
Embrace AI as a transformative force
Organizations that recognize AI’s transformative potential early on and embrace it strategically
will gain a significant competitive advantage.
Table of contents
Executive summary
Introduction
The state of AI in the enterprise
• Why AI adoption is taking off now
• How organizations can embrace AI
• The relationship between AI and APIs
AI adoption challenges
• Determining best use cases
• Multi-LLM adoption
• Cost optimization
• Fine-tuning AI models
• The need for observability
The AI governance playbook
• Benefits of an AI playbook
• Balance innovation and control with an internal service
Introducing the AI gateway pattern
• A free, open source AI gateway
Conclusion
Introduction
AI is top of mind for nearly every executive in the world. Across
industries, organizations are scrambling to adopt and integrate
generative AI and large language models (LLMs) into their products
and services.
But in the race to innovate and keep ahead of the competition, how can organizations ensure
concerns around AI governance and security aren’t left in the rearview mirror?
In this eBook, we’ll cover the keys to properly implementing AI in your organization and
considerations to keep in mind when developing your organization’s AI governance playbook.
The state of AI in the
enterprise
The word “disruptive” may be overused to the point of losing all
meaning, but it would be short-sighted to consider AI anything less than
radically disruptive.
But there’s not much that can be said about the disruptive impact of AI that you haven’t seen before in countless articles, online ramblings, and hyperbolic TV interviews from Big Tech leaders — some comparing AI to fire or electricity in terms of how important they believe it will be to the future of humanity.

So let’s shift the focus from what’s down the road to the reality of right now and how generative AI and LLMs (e.g., OpenAI’s ChatGPT, Google’s Bard, Meta’s Llama) can be used. And, crucially for businesses, we’ll talk about how they can be used securely by developers and organizations around the world and across industries.

More recent AI innovations are enabled by advances in hardware. We now have enough computing power to create compelling training models. We can also point to a specific technical innovation: transformers, a deep learning architecture that came out of Google.

Consider an application like AI art generator Midjourney, where you enter a prompt to generate an image. The inference cost to generate an image may be something like 1/100th of a penny — and it only takes a moment. In contrast, hiring a designer to make the image might cost you $100 an hour, and it would probably take longer. Compared side by side, we’re looking at a difference of orders of magnitude.

Another reason that AI has rarely been embraced outside of the realm of big enterprise players is the economics of AI.
AI economics have also faced historical challenges because AI tackled
problems that humans excel at — and where errors meant significant
consequences, demanding extensive investment.
So while we’re “there” with things like rendering art or summarizing documents, the economics still aren’t great yet for other AI areas.

Self-driving cars are a technology with investments totaling around $75 billion, and still, the unit economics aren’t superior to those of humans. It’s simply not yet competitive with an offering like Lyft or Uber. Why? Because despite what one might think when driving during rush hour, humans are actually pretty good at driving — thanks to evolution and our ancestors developing the abilities needed to navigate their environments, source food, and avoid predators.

However, there are other surprising areas where AI is already proving itself a powerful tool and ally. The current wave of generative AI is great at tackling some things that humans are not so good at. Applications of AI like serving as a creative companion, a coding partner, or a brainstorming buddy are carving out new use cases, improving the economics of AI, and pushing computers (and businesses) into entirely new directions.
As more businesses come to embrace AI, we’re likely to see missteps similar to those made in the past around other unprecedented innovations. Think back to mobile or the cloud: not so long ago, there were Fortune 500 companies that wouldn’t allow developers to use AWS — or employees to use mobile devices for work purposes.

With new and disruptive technologies, it takes time for organizations to catch up. However, the speed with which organizations are adopting AI outpaces these previous breakthroughs, which means organizations need to move faster than ever to keep on top of the wave of AI innovation.

The key is to understand the adoption cycle: users will use these tools, so rather than attempting (and failing) to ban AI technologies and LLMs, enterprises must figure out how to address the challenges around them — like figuring out what this means for data governance or regulations.

One potential misstep would be to internalize this in central IT and have an AI strategy that’s disconnected from the actual use cases, because these are secular, organic movements. We don’t want to be like the companies in the 1990s that banned internet browsers and e-mail in the workplace. Innovative organizations need to move fast to catch up with the adoption and figure out intelligent ways to incorporate it.
The relationship between AI and APIs
APIs (application programming interfaces) are a set of functions and
procedures that act as a bridge between applications by dictating how
services interact.
APIs are essential for AI. They act as hands, eyes, and ears for AI. As the usage of AI increases,
so will the number of APIs that enable them.
The secular tailwinds of APIs are being strengthened by the adoption of AI. Though estimates vary on the exact percentage, we know that the majority of internet traffic today is API traffic. Now that we have a new category with AI traffic, how much more will API traffic grow in the coming years?

APIs will play a crucial role in facilitating communication between humans and AI systems, as well as AI systems and other connected tools. The future will see a surge in AI-powered agents and applications interacting primarily through APIs, to a degree where we can assume the users of the future will be AIs, and they’ll need APIs to accomplish their tasks.

Let’s assume there are 4 billion people online today. With AI adoption growing as it is, let’s assume there may soon be 100 individual AIs per person online. That’s 400 billion AIs. Each of these AIs will be like a user interacting online — but not using a keyboard, mouse, and monitor. These AIs will interact with APIs. This is why an API strategy is needed: in the future, the consumer of your website or product will almost certainly be AI, not a human.
AI adoption challenges
There are essentially three things we’re currently doing with AI:
We use AI. We train AI. We have AI interact with APIs to perform
operations — making AI itself the client making API calls to access
data or trigger capabilities.
In any of these cases, the operation requires an API. That’s why controlling the usage of AI at the
interface level is an API problem.
Today, we have data that developers use to build applications. Developers build services on top of this data and build APIs to turn the data and services into a platform that others can use. AI introduces a new dimension by bringing intelligence to the platform to build on top of the APIs and services you’ve already created by making them more accessible. For example, retailers might bring more advanced fraud prevention systems into their e-commerce platforms or smarter chatbots empowered with access to a customer’s order history.

At the end of the day, AI or any tool you implement in your business should serve a specific purpose. Therefore, an easy way to consider AI adoption in your organization is to think about the end use case — or the customer experience that you hope to build. From there you can work backwards to determine if intelligence can help you deliver a better experience. This gives a much more pragmatic view of AI adoption in the organization, and from this vantage point we can figure out how to implement it.
Multi-LLM adoption
For organizations looking to leverage AI, there are many LLMs to
choose from. While ChatGPT may be the LLM that first captured the
spotlight, there’s a great deal of competition.
In addition, we’re now seeing that open source models are getting as “good” as the cloud-hosted
models we’re most familiar with. The competition and options we’re seeing are only going to
benefit us as the early adopters of AI.
The most used LLM providers via Kong’s AI Gateway, March 2024
Organizations are likely to start with cloud-hosted LLMs. And cloud-hosted models are a fantastic
starting point, but in the long run, they can become expensive. This is why many organizations
begin deploying self-hosted models and allow for orchestration between cloud-hosted and local
models to cut costs, reduce latency, and retain control over potentially sensitive data.
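The orchestration between self-hosted and cloud-hosted models can be sketched in a few lines of Python. This is an illustrative sketch, not any vendor's implementation: `flaky_local` and `cloud` are hypothetical stand-ins for a self-hosted model and a cloud-hosted provider.

```python
from typing import Callable

def route_completion(prompt: str,
                     call_local: Callable[[str], str],
                     call_cloud: Callable[[str], str]) -> tuple[str, str]:
    """Prefer the cheaper, lower-latency self-hosted model; fall back to
    the cloud-hosted model if the local one is unavailable."""
    try:
        return "local", call_local(prompt)
    except Exception:
        # The cloud-hosted model acts as the highly available fallback.
        return "cloud", call_cloud(prompt)

# Hypothetical stand-ins for real model backends.
def flaky_local(prompt: str) -> str:
    raise ConnectionError("self-hosted model is down")

def cloud(prompt: str) -> str:
    return f"cloud answer to: {prompt}"

backend, answer = route_completion("Summarize Q3 results", flaky_local, cloud)
print(backend)  # prints: cloud
```

In practice the routing decision could also weigh cost, latency, or data sensitivity rather than simple availability.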
As a bonus, implementing orchestration across multiple LLMs may also offer potential
guardrails to minimize hallucinations and bias and improve the quality of our AI queries.
Open source models are rapidly improving and even surpassing some private models.
Source: ARK Invest
Cost optimization
Early AI adopters are already thinking about many of these problems and looking for emerging best practices. They already see the need for cost optimization, as running millions of queries on a cloud-hosted model can get pricey. Many of these organizations seek to deploy their own models, which is one of the reasons open source models are taking off.
For organizations looking to reduce the total cost of ownership or speed up return on investment, using less AI isn’t the answer. We need to find a way to use AI tools as needed while finding other ways to simplify costs.

In organizations running self-hosted models, we often see orchestration between self-hosted and cloud-hosted models. Many opt to fine-tune their self-hosted LLM and make it highly available while using cloud-hosted models as a fallback.
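To see why cost optimization matters, a back-of-the-envelope estimate helps. The prices and volumes below are invented for illustration; real provider pricing varies by model and changes often.

```python
def monthly_llm_cost(queries_per_day: int,
                     tokens_per_query: int,
                     price_per_1k_tokens: float,
                     days: int = 30) -> float:
    """Back-of-the-envelope monthly spend for a cloud-hosted LLM."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * price_per_1k_tokens

# Invented example: 1M queries/day at ~1,500 tokens each, priced at a
# hypothetical $0.002 per 1K tokens.
cost = monthly_llm_cost(1_000_000, 1_500, 0.002)
print(f"${cost:,.0f}/month")  # prints: $90,000/month
```

Run against your own volumes, this kind of estimate is often what motivates moving high-volume workloads onto self-hosted models.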
Fine-tuning AI models
Organizations typically start with more general, pre-trained models that have broad knowledge but lack the specific domain expertise that may be needed for our use cases.
This is a fine way to start, but we’re not going to get far with this approach. Eventually, we’ll have
to think about how we’re going to fine-tune our models to fully realize the potential benefits of
AI tools.
Extracting the right data and feeding it into LLMs is a significant hurdle. The data may be spread across the organization — in databases, documents, or other unstructured sources — and accessing and consolidating this data can be difficult.

Additionally, there are often concerns around data privacy, security, and compliance that need to be addressed. The database, compliance, and other teams may be hesitant to allow the free use of sensitive data for model fine-tuning.

How are we planning to fine-tune our models? Well, this is the million (or even billion) dollar question. It’s possibly the biggest challenge around implementing intelligence in the organization.

There’s no one-size-fits-all solution. Whether we use cloud or self-hosted models, whether we have an AI service that the platform team is managing or not, whether we have all of that infrastructure in place — how do we extract our data and make sure that AI can build on top of our specific and unique data?

This intelligence differs between organizations. This is truly the competitive advantage that organizations have: using the data and services they’ve built over time. How do we train our models on top of this data? This is a trickier problem to solve. Organizations are experimenting with various approaches, like dumping entire databases into the models or mirroring internal API traffic. However, each approach comes with its own set of challenges and considerations.

Ultimately, fine-tuning AI models requires a well-thought-out playbook that involves multiple stakeholders across the organization. It’s not just a technical problem; it’s a people and process challenge. Creating a playbook is crucial, but there’s no standard solution — each organization will need to develop its own approach based on its data, infrastructure, and governance requirements.
The need for observability
While the excitement around AI is palpable, organizations must resist the temptation to adopt AI haphazardly. We have to focus on observability. There’s no security if we’re blind to our traffic. There are many things an organization would benefit from knowing:

• What models are developers using?
• What prompts are developers using?
• What’s the semantic meaning of prompts being used (for further analysis on those prompts)?
• How many tokens are being consumed?
• What’s the cost of the AI over a period of time?

There’s much that goes into this underlying infrastructure, and we can’t ask our developers to build all of this from scratch every time they want to use AI. The implementation of a playbook has to include the creation of AI as an internal platform service, and this internal platform service should perform all these different types of operations.
The AI governance playbook
How can your organization ensure the adoption of AI doesn’t become
a Wild West? As the adoption of AI increases in the organization,
developers need to rapidly iterate and build new capabilities without
having to manage the specific cross-cutting concerns around AI usage.
We have to enforce a playbook our developers must follow to consume
AI, train AI, or have AI interact with other APIs.
The idea of implementing a playbook is no different than how many organizations implement any
other technology they may be using (or considering using) today.
However, this step can be easy to overlook, given that the window to respond feels so much shorter than with other breakthrough technologies. Generative AI and LLMs seemed to come into public consciousness lightning-fast, and savvy organizations have sought to move with equal speed. But that doesn’t mean taking shortcuts and sacrificing the business-critical areas of security and governance for speed.
Many areas need to be addressed if we hope to ensure responsible AI usage in our products, including:

1. AI and data security: We must prevent customer and sensitive data from being fed into AI/LLMs, which could cause privacy leaks, security escalations, and potential data breaches.
2. AI governance: We need to be able to manage how applications and teams are consuming AI across all providers and models with a single control plane to manage, secure, and observe AI traffic being generated by the organization. Without a control plane for AI that gives this level of visibility and control, the organization is blind to how teams are adopting AI in their products.
3. Multi-AI adoption: We should be able to leverage the best AI for the job and lower the time it takes to integrate different LLMs and models. Generalist LLMs and specialized LLMs are going to be adopted to cater to different tasks, and developers may adopt different cloud-hosted or open source models based on their requirements or to reduce costs, with OSS models rapidly catching up in performance and intelligence.
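The first area, keeping sensitive data out of prompts, can be sketched as a scrubbing step applied before any prompt leaves the organization. The patterns below are illustrative only; production PII detection needs far broader coverage than two regexes.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely-sensitive values with placeholders before the
    prompt is forwarded to an external LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"<{label}-redacted>", prompt)
    return prompt

print(scrub_prompt("Email jane.doe@example.com about card 4111 1111 1111 1111"))
```

Enforcing this centrally, rather than in each application, is exactly the kind of cross-cutting concern the playbook is meant to cover.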
Benefits of an AI playbook
Organizations that implement their own AI playbook are going to gain a competitive advantage. AI can widen the gap between organizations in the same industry, potentially changing companies’ fortunes — in both senses of the word.
Those who move with speed and strategy in mind will create vastly superior customer
experiences and expand their businesses faster than the competition.
With an AI adoption playbook in place, organizations can ensure they’re not slowing down developers who want to leverage AI and LLMs. They can create better digital experiences while implementing standards to minimize risks around customer data, security, and compliance when it comes to sharing potentially sensitive information (that could be leaked elsewhere) with LLMs and AI tools.

Of course, developers aren’t the only people using AI tools. You may have people in sales using Claude to write prospecting emails or someone in recruiting using ChatGPT to help with LinkedIn outreach. The good news is the use cases aren’t that different from those of developers and knowledge workers. If you have, say, an internal chatbot that you want to allow employees to use, it’s going to be an application that leverages the same AI infrastructure that you’re building for your other AI use cases.

However your people interact with AI tools, to adopt AI responsibly you need security, you need to validate prompts (or at least decide how pre-formed those prompts should be), and you need observability. By building an internal service, you can address these concerns for all of your people.

This service can enforce security, governance, and observability for all the AI traffic that every team is generating.

This internal service should allow developers to consume AI across one or more LLMs. Why more than one LLM? We must plan for multiple LLMs — cloud or self-hosted — in our organizations because it’s an inevitability. Developers may use one specialized LLM that’s fine-tuned for a specific use case but want to use a more generalist LLM for other operations. Different teams may find different LLMs are better suited for their different tasks.
Next, the platform service should allow the organization to see the
operations people are asking AI technologies to perform.
Typically, this involves implementing some type of advanced prompt engineering that allows us to
create a playbook determining the prompts we want people to be able to use — and which ones
we want to forbid.
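One way to sketch such a prompt playbook: applications pick an approved template by name and supply only the fill-in values, so free-form prompts never reach the model. The template names and fields here are invented for illustration.

```python
import string

# Centrally approved templates; applications may only fill in named values.
APPROVED_TEMPLATES = {
    "summarize-ticket": string.Template(
        "Summarize the following support ticket in three bullet points:\n$ticket"
    ),
    "translate-response": string.Template(
        "Translate this API response into $language:\n$payload"
    ),
}

def build_prompt(template_name: str, **values: str) -> str:
    """Render an approved template; reject anything not on the list."""
    if template_name not in APPROVED_TEMPLATES:
        raise ValueError(f"prompt template {template_name!r} is not approved")
    # substitute() raises KeyError if a required named value is missing.
    return APPROVED_TEMPLATES[template_name].substitute(values)

print(build_prompt("summarize-ticket", ticket="Login fails with error 500."))
```

Because the template catalog lives in one place, adding or retiring an approved prompt becomes a governance decision rather than a code change in every application.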
Introducing the AI gateway
pattern
To accelerate the adoption of AI in the organization with the right
level of observability, security, and governance, organizations have
started adopting an AI gateway to provide a distributed AI egress to
any LLM and model that developers may want to use.
An AI gateway gives us a centralized place to manage AI consumption across every team,
whether the AI models are hosted in the cloud or self-hosted.
The AI gateway operates like a traditional API gateway, but instead of acting as a reverse proxy
for exposing our internal APIs to other clients, it’s being deployed as an egress proxy for AI traffic
generated by our applications. That traffic is directed either inside or outside the organization
depending on where the backend AI models are being hosted — in the cloud or self-hosted.
Without an AI gateway, we risk introducing complexity,
fragmentation, security blindspots, shadow IT, and overall lower
efficiency and higher costs.
In the same way that we adopt an API gateway to cater to API cross-cutting concerns, we can
adopt an AI gateway to do the same for all AI consumption.
To simplify the architecture, the AI gateway should be managed by the platform team and offered as an internal core service to every application that needs to use it. By having the platform team work in partnership with the developers and the security team, we can ensure that we provide an internal service that is compliant and that developers can use responsibly.

This approach provides us with a unified control plane to manage all AI consumption generated by the organization and therefore gives us the opportunity to quickly implement security policies, observability collection, and developer pipelines for automating the onboarding of different teams whenever they need to access AI in their applications.
The AI gateway becomes a core platform service that every team can use.
An AI gateway will support multiple AI backends (e.g., OpenAI, Mistral, Llama, Anthropic) but still
provide one API interface that developers can use to access any AI model they need. We can now
manage the security credentials of any AI backend from one place so that our applications don’t
need to be updated whenever we rotate or revoke a credential to a third-party AI.
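The credential flow can be sketched as follows. The gateway is the only component holding provider keys, so rotating a credential never touches application code. The header names and key store are illustrative assumptions, not Kong's actual internals.

```python
# Only the gateway knows provider credentials; the values are placeholders.
PROVIDER_KEYS = {"openai": "sk-placeholder-1", "anthropic": "sk-placeholder-2"}

def gateway_forward(request: dict) -> dict:
    """Attach the right provider credential on egress. Rotating a key means
    updating PROVIDER_KEYS in one place; client applications never change."""
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = f"Bearer {PROVIDER_KEYS[request['provider']]}"
    return {**request, "headers": headers}

# The application sends no secrets, only the unified gateway interface.
outbound = gateway_forward({"provider": "openai", "body": {"prompt": "hi"}})
print(outbound["headers"]["Authorization"])  # prints: Bearer sk-placeholder-1
```

The same injection point is where a real gateway would also apply rate limits, logging, and prompt policies.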
In addition, it can implement prompt security, validation, and template
generation so that the prompts themselves can be managed from one
control plane and changed without having to update the client applications.
Prompts are at the core of what we ask AI to do, and being able to control what prompts our
applications are allowed to generate is essential for responsible and compliant adoption of AI.
We wouldn’t want developers to build an AI integration around restricted topics (political, for example), or to mistakenly set the wrong context in the prompts, which could later be exploited by a malicious user.
Because different teams will most likely use different AI services, the AI gateway can offer a standardized interface to consume multiple models, simplifying the implementation of AI across different models and even switching between them.

The AI gateway could also implement security overlays like AuthN/Z, rate-limiting, and full API lifecycle governance to further manage how AI is being accessed internally by the teams. At the end of the day, AI traffic is API traffic.

AI observability can be managed from one place and even be sent to third-party log/metrics collectors. And since it’s being configured in one place, we can easily capture the entirety of AI traffic being generated to further ensure data is compliant and that there are no anomalies in the usage.

Last but not least, AI models can be expensive to run. The AI gateway can be leveraged to allow the organization to learn from its AI usage to implement cost-reduction initiatives and optimizations.
Kong Gateway — the world’s most adopted open source API gateway — has been enhanced with free, open source plugins that turn every Kong Gateway deployment into an AI gateway. Because it’s built on top of Kong Gateway, it’s possible to orchestrate AI flows in the cloud or on self-hosted LLMs with the best performance and the lowest latency, which are critical in AI-based applications.
With Kong’s AI Gateway, you can:

• Build multi-LLM integrations — The “ai-proxy” plugin allows you to consume multiple LLM implementations — in the cloud or self-hosted — with the same API interface. It ships with native support for OpenAI, Azure AI, Cohere, Anthropic, Mistral, and LLaMA. And because we standardize how they can be used, you can also easily swap between LLMs at the “flip of a switch” without having to change your application code — great for using multiple specialized models in your applications and for prototyping.
• Manage AI credentials centrally — With “ai-proxy” you can also store all AI credentials, including tokens and API keys, in Kong Gateway without having to store them in your applications, so you can easily update and rotate them on the fly and in one centralized place without having to update your code.
• Collect L7 AI metrics — Using the “ai-proxy” plugin you can now collect L7 analytics — like the number of request and response tokens or the LLM providers and models used — into any third party like Datadog, New Relic, or any logging plugin that Kong Gateway already supports (such as TCP, Syslog, and Prometheus). By doing this, you not only simplify monitoring of AI traffic by the applications, but you also get insights into the most common LLM technologies that developers are using in the organization. The L7 AI observability metrics are in addition to all other request and response metrics already collected by Kong Gateway.
• No-code AI integrations — You can leverage the benefits of AI without having to write a single line of code in your applications by using the “ai-request-transformer” and “ai-response-transformer” plugins, which intercept every API request and response and augment it with any AI prompt that you have configured. For example, you can translate an existing API response on the fly for internationalization without actually having to change the API or your client applications. You can enrich, transform, and convert all existing API traffic without lifting a finger, and more. You can even enrich an AI request directed to a specific LLM provider with another LLM provider that will instantly update the request before sending it to the final destination.
• Advanced AI prompt engineering — AI Gateway offers plugins fully focused on advanced prompt engineering to fundamentally simplify and improve how you’re using AI in your applications. “ai-prompt-template” gives an easy way to create prompt templates that can be managed centrally in Kong Gateway and used on the fly by applications by only sending the named values to use in your templates. This way you can update your prompts at a later time without having to update your applications, or even enforce a compliance process for adding a new approved template to the system.
• Decorate your AI prompts — Most AI prompts that you generate also set a context for what the AI should or should not do and specify rules for interpreting requests and responses. Instead of setting up the context every time, you can centrally configure it using the “ai-prompt-decorator” plugin, which will prepend or append your context on the fly on every AI request. This is also useful to ensure compliance in the organization by instructing AI to never discuss — for example — restricted topics.
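The prompt-decorator behavior can be sketched as a simple wrapper that applies centrally managed context to every request. This is an illustration of the idea, not the plugin's implementation; the context strings are invented.

```python
# Centrally managed context, applied to every outgoing AI request.
PREPEND = "You are a support assistant. Never discuss restricted topics."
APPEND = "Answer concisely."

def decorate_prompt(user_prompt: str,
                    prepend: str = PREPEND,
                    append: str = APPEND) -> str:
    """Wrap the application's prompt with organization-wide context."""
    return f"{prepend}\n\n{user_prompt}\n\n{append}"

decorated = decorate_prompt("How do I reset my password?")
print(decorated)
```

Because the wrapping happens at the gateway, updating the organization-wide context takes effect for every application at once.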
• AI prompt firewall — This capability is oriented toward teams and organizations that want to ensure that prompts are approved and that someone doesn’t mistakenly use the wrong prompts in their applications. With the “ai-prompt-guard” plugin, you can set a list of rules to deny or allow free-form prompts that are being generated by the applications and that are received by Kong Gateway before being sent to LLM providers.
• Create an AI egress with hundreds of features — By leveraging these new AI capabilities in Kong Gateway, you can centralize how you manage, secure, and observe all your AI traffic from one place. You can also leverage official and community Kong Gateway plugins to further secure how your AI egress should be accessed by other developers and add other features for your unique use cases.

Every AI egress in Kong Gateway is a service like any other, so all Kong Gateway features and plugins are available out of the box. This makes Kong’s AI Gateway the most capable in the entire AI ecosystem. Kong Konnect’s developer portal and service catalog are also supported.
With Kong AI Gateway, we can address cross-cutting capabilities that all teams would otherwise need to build themselves.
Conclusion
The speed at which both agile startups and colossal enterprises
are adopting AI is staggering. Some may move slowly and with
caution while others are racing headfirst into the unknown seeking
competitive advantage at nearly any cost.
AI and APIs are unlocking new possibilities and reshaping industries across the board. While challenges will arise, the potential for creative innovation and scientific discovery is boundless. As we navigate the evolving landscape of AI and APIs, businesses must adapt, harness AI’s creative potential, and use APIs as a bridge to this exciting new world of possibilities.

Whatever your organization’s stance, if taking advantage of AI is important to your business, then building a playbook for responsible AI adoption is essential.
Konghq.com
Kong Inc.
[email protected]