What We Learned From A Year of Building With LLMs (Part I)
By Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, Jason Liu and Shreya Shankar
To hear directly from the authors on this topic, sign up for the upcoming virtual
event on June 20th, and learn more from the Generative AI Success Stories
Superstream on June 12th.
Part II of this series can be found here and part III is forthcoming. Stay tuned.
It’s an exciting time to build with large language models (LLMs). Over the past
year, LLMs have become “good enough” for real-world applications. The pace
of improvements in LLMs, coupled with a parade of demos on social media, will
fuel an estimated $200B investment in AI by 2025. LLMs are also broadly
accessible, allowing everyone, not just ML engineers and scientists, to build
intelligence into their products. While the barrier to entry for building AI
products has been lowered, creating products that are effective beyond a demo remains a
deceptively difficult endeavor.
We’ve identified some crucial, yet often neglected, lessons and methodologies
informed by machine learning that are essential for developing products based
on LLMs. Awareness of these concepts can give you a competitive advantage
against most others in the field without requiring ML expertise! Over the past
year, the six of us have been building real-world applications on top of LLMs.
We realized that there was a need to distill these lessons in one place for the
benefit of the community.
We come from a variety of backgrounds and serve in different roles, but we’ve
all experienced firsthand the challenges that come with using this new
technology. Two of us are independent consultants who’ve helped numerous
clients take LLM projects from initial concept to successful product, seeing the
patterns determining success or failure. One of us is a researcher studying how
ML/AI teams work and how to improve their workflows. Two of us are leaders
on applied AI teams: one at a tech giant and one at a startup. Finally, one of us
has taught deep learning to thousands and now works on making AI tooling and
infrastructure easier to use. Despite our different experiences, we were struck
by the consistent themes in the lessons we’ve learned, and we’re surprised that
these insights aren’t more widely discussed.
This work is organized into three sections: tactical, operational, and strategic.
This is the first of three pieces. It dives into the tactical nuts and bolts of
working with LLMs. We share best practices and common pitfalls around
prompting, setting up retrieval-augmented generation, applying flow
engineering, and evaluation and monitoring. Whether you’re a practitioner
building with LLMs or a hacker working on weekend projects, this section was
written for you. Look out for the operational and strategic sections in the
coming weeks.
Tactical
In this section, we share best practices for the core components of the
emerging LLM stack: prompting tips to improve quality and reliability,
evaluation strategies to assess output, retrieval-augmented generation ideas to
improve grounding, and more. We also explore how to design human-in-the-
loop workflows. While the technology is still rapidly developing, we hope these
lessons, the by-product of countless experiments we’ve collectively run, will
stand the test of time and help you build and ship robust LLM applications.
Prompting
The idea of in-context learning via n-shot prompts is to provide the LLM with a
few examples that demonstrate the task and align outputs to our expectations.
A few tips:
You don’t necessarily need to provide the full input-output pairs. In many
cases, examples of desired outputs are sufficient.
If you are using an LLM that supports tool use, your n-shot examples should
also use the tools you want the agent to use.
Chain-of-thought prompting gives the LLM room to “think” by asking it to explain its reasoning before returning the final answer. For example, when asking an LLM to summarize a meeting transcript, we can be explicit about the steps:
First, list the key decisions, follow-up items, and associated owners in a
sketchpad.
Then, check that the details in the sketchpad are factually consistent with
the transcript.
Finally, synthesize the key points into a concise summary.
Recently, some doubt has been cast on whether this technique is as powerful as
believed. Additionally, there’s significant debate about exactly what happens
during inference when chain-of-thought is used. Regardless, this technique is
one to experiment with when possible.
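As a minimal sketch of the chain-of-thought prompt above (assuming an OpenAI-style chat messages format; the transcript value is a placeholder), the instructions might be laid out like this:
</> python
# A minimal sketch of a chain-of-thought prompt for the meeting transcript
# summarizer described above. The `transcript` value is a placeholder.
transcript = "..."  # the raw meeting transcript
messages = [
    {
        "role": "system",
        "content": (
            "You summarize meeting transcripts.\n"
            "First, list the key decisions, follow-up items, and associated "
            "owners in a <sketchpad>.\n"
            "Then, check that the details in the sketchpad are factually "
            "consistent with the transcript.\n"
            "Finally, synthesize the key points into a concise <summary>."
        ),
    },
    {"role": "user", "content": f"<transcript>{transcript}</transcript>"},
]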
Structured input and output help models better understand the input as well as
return output that can reliably integrate with downstream systems. Adding
serialization formatting to your inputs can help provide more clues to the
model as to the relationships between tokens in the context, additional
metadata to specific tokens (like types), or relate the request to similar
examples in the model’s training data.
As an example, many questions on the internet about writing SQL begin by
specifying the SQL schema. Thus, you may expect that effective prompting for
Text-to-SQL should include structured schema definitions; indeed.
When using structured input, be aware that each LLM family has its own
preferences. Claude prefers XML while GPT favors Markdown and JSON. With
XML, you can even pre-fill Claude’s responses by providing a response tag like
so.
</> python
messages=[
{
"role": "user",
"content": """Extract the <name>, <size>, <price>, and <color>
from this product description into your <response>.
<description>The SmartHome Mini
is a compact smart home assistant
available in black or white for only $49.99.
At just 5 inches wide, it lets you control
lights, thermostats, and other connected
devices via voice or app—no matter where you
place it in your home. This affordable little hub
brings convenient hands-free control to your
smart devices.
</description>"""
},
{
"role": "assistant",
"content": "<response><name>"
}
]
Have small prompts that do one thing, and only one thing, well
Just like how we strive (read: struggle) to keep our systems and code simple,
so should we for our prompts. Instead of having a single, catch-all prompt for
the meeting transcript summarizer, we can break it into steps to:
Extract key decisions, action items, and owners into structured format
Check extracted details against the original transcription for consistency
Generate a concise summary from the structured details
As a result, we’ve split our single prompt into multiple prompts that are each
simple, focused, and easy to understand. And by breaking them up, we can now
iterate and eval each prompt individually.
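As an illustrative sketch (the prompt wording and the llm helper are our own placeholders, not the exact prompts we use), the decomposed summarizer might look like this:
</> python
# A hypothetical sketch of the decomposed meeting-transcript summarizer.
# Prompts here are illustrative, not the exact ones we use.

def llm(prompt: str) -> str:
    """Placeholder: call your chat-completion API of choice here."""
    raise NotImplementedError

def extract(transcript: str) -> str:
    return llm(
        "Extract the key decisions, action items, and owners from this "
        f"transcript as a JSON list:\n{transcript}"
    )

def check(extracted: str, transcript: str) -> str:
    return llm(
        "Check that each item below is consistent with the transcript; drop "
        f"anything unsupported.\nItems: {extracted}\nTranscript: {transcript}"
    )

def summarize(verified: str) -> str:
    return llm(f"Write a concise summary of these verified items:\n{verified}")

# Each step can now be evaluated and iterated on independently:
# summary = summarize(check(extract(transcript), transcript))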
Rethink, and challenge your assumptions about how much context you actually
need to send to the agent. Be like Michelangelo: do not build up your context
sculpture; chisel away the superfluous material until the sculpture is revealed.
RAG is a popular way to collate all of the potentially relevant blocks of marble,
but what are you doing to extract what’s necessary?
We’ve found that taking the final prompt sent to the model (with all of the
context construction, meta-prompting, and RAG results), putting it on a
blank page, and just reading it really helps you rethink your context. We have
found redundancy, self-contradictory language, and poor formatting using this
method.
The other key optimization is the structure of your context. Your bag-of-docs
representation isn’t helpful for humans; don’t assume it’s any good for agents.
Think carefully about how you structure your context to underscore the
relationships between parts of it, and make extraction as simple as possible.
Information Retrieval/RAG
The quality of your RAG’s output depends on the quality of the retrieved documents, which in turn can be considered along a few factors. The first and most obvious metric is relevance. This is typically quantified via
ranking metrics such as Mean Reciprocal Rank (MRR) or Normalized Discounted
Cumulative Gain (NDCG). MRR evaluates how well a system places the first
relevant result in a ranked list while NDCG considers the relevance of all the
results and their positions. They measure how good the system is at ranking
relevant documents higher and irrelevant documents lower. For example, if
we’re retrieving user reviews to generate movie review summaries, we’ll
want to rank reviews for the specific movie higher while excluding reviews for
other movies.
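As a minimal sketch (binary relevance labels and no external libraries; real evaluations typically use graded relevance and an IR metrics library), MRR and NDCG can be computed like this:
</> python
import math

# Minimal sketches of MRR and NDCG over ranked lists with binary relevance
# labels (1 = relevant, 0 = not relevant).

def mrr(ranked_lists):
    """Mean of 1/rank of the first relevant result across queries."""
    total = 0.0
    for labels in ranked_lists:
        for i, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / i
                break
    return total / len(ranked_lists)

def ndcg(labels, k=10):
    """Discounted cumulative gain at k, normalized by the ideal ordering."""
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(labels[:k], start=1))
    ideal = sorted(labels, reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))  # 0.75
print(ndcg([0, 1, 1, 0]))           # ~0.69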
Like traditional recommendation systems, the rank of retrieved items will have
a significant impact on how the LLM performs on downstream tasks. To
measure the impact, run a RAG-based task but with the retrieved items
shuffled—how does the RAG output perform?
Finally, consider the level of detail provided in the document. Imagine we’re
building a RAG system to generate SQL queries from natural language. We
could simply provide table schemas with column names as context. But, what if
we include column descriptions and some representative values? The additional
detail could help the LLM better understand the semantics of the table and
thus generate more correct SQL.
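For example (the table and columns below are purely hypothetical), richer schema context might look like this:
</> python
# A purely hypothetical schema snippet: column descriptions and representative
# values are included alongside the bare column names as Text-to-SQL context.
schema_context = """
Table: orders
  order_id   INTEGER  -- unique order identifier, e.g., 10023
  status     TEXT     -- one of 'pending', 'shipped', 'delivered', 'returned'
  total_usd  REAL     -- order total in US dollars, e.g., 49.99
  created_at TEXT     -- ISO 8601 timestamp, e.g., '2024-05-01T13:45:00Z'
"""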
Given how prevalent the embedding-based RAG demo is, it’s easy to forget or
overlook the decades of research and solutions in information retrieval.
Nonetheless, while embeddings are undoubtedly a powerful tool, they are not
the be all and end all. First, while they excel at capturing high-level semantic
similarity, they may struggle with more specific, keyword-based queries, like
when users search for names (e.g., Ilya), acronyms (e.g., RAG), or IDs (e.g.,
claude-3-sonnet). Keyword-based search, such as BM25, is explicitly designed
for this. And after years of keyword-based search, users have likely taken it for
granted and may get frustrated if the document they expect to retrieve isn’t
being returned.
Vector embeddings do not magically solve search. In fact, the heavy lifting is in
the step before you re-rank with semantic similarity search. Making a genuine
improvement over BM25 or full-text search is hard.
We’ve been communicating this to our customers and partners for months now.
Nearest Neighbor Search with naive embeddings yields very noisy results and
you’re likely better off starting with a keyword-based approach.
In most cases, a hybrid will work best: keyword matching for the obvious
matches, and embeddings for synonyms, hypernyms, and spelling errors, as
well as multimodality (e.g., images and text). Shortwave shared how they built
their RAG pipeline, including query rewriting, keyword + embedding retrieval,
and ranking.
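A minimal sketch of combining the two via reciprocal rank fusion (the keyword and embedding rankings below are placeholder lists of document IDs returned by your own retrievers):
</> python
# A minimal sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# `bm25_ranking` and `embedding_ranking` are placeholder ranked lists of
# document IDs from your keyword and embedding retrievers.

def reciprocal_rank_fusion(rankings, k=60, top_n=10):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents ranked highly by either retriever accumulate score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

bm25_ranking = ["doc3", "doc1", "doc7"]       # keyword matches
embedding_ranking = ["doc1", "doc5", "doc3"]  # semantic matches
print(reciprocal_rank_fusion([bm25_ranking, embedding_ranking]))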
Both RAG and fine-tuning can be used to incorporate new information into
LLMs and increase performance on specific tasks. Thus, which should we try
first?
Recent research suggests that RAG may have an edge. One study compared
RAG against unsupervised fine-tuning (a.k.a. continued pre-training),
evaluating both on a subset of MMLU and current events. They found that RAG
consistently outperformed fine-tuning for knowledge encountered during
training as well as entirely new knowledge. In another paper, they compared
RAG against supervised fine-tuning on an agricultural dataset. Similarly, the
performance boost from RAG was greater than fine-tuning, especially for GPT-
4 (see Table 20 of the paper).
With Gemini 1.5 providing context windows of up to 10M tokens in size, some
have begun to question the future of RAG.
While it’s true that long contexts will be a game-changer for use cases such as
analyzing multiple documents or chatting with PDFs, the rumors of RAG’s
demise are greatly exaggerated.
First, even with a context window of 10M tokens, we’d still need a way to
select information to feed into the model. Second, beyond the narrow needle-
in-a-haystack eval, we’ve yet to see convincing data that models can
effectively reason over such a large context. Thus, without good retrieval (and
ranking), we risk overwhelming the model with distractors, or may even fill the
context window with completely irrelevant information.
Finally, there’s cost. The Transformer’s inference cost scales quadratically (or
linearly in both space and time) with context length. Just because there exists a
model that could read your organization’s entire Google Drive contents before
answering each question doesn’t mean that’s a good idea. Consider an analogy
to how we use RAM: we still read and write from disk, even though there exist
compute instances with RAM running into the tens of terabytes.
So don’t throw your RAGs in the trash just yet. This pattern will remain useful
even as context windows grow in size.
Prompting an LLM is just the beginning. To get the most juice out of them, we
need to think beyond a single prompt and embrace workflows. For example,
how could we split a single complex task into multiple simpler tasks? When is
finetuning or caching helpful for increasing performance and reducing
latency/cost? In this section, we share proven strategies and real-world
examples to help you optimize and build reliable LLM workflows.
Small tasks with clear objectives make for the best agent or flow prompts. It’s
not required that every agent prompt requests structured output, but
structured outputs help a lot to interface with whatever system is orchestrating
the agent’s interactions with the environment.
The most successful agent builders may be those with strong experience
managing junior engineers because the process of generating plans is similar to
how we instruct and manage juniors. We give juniors clear goals and concrete
plans, instead of vague open-ended directions, and we should do the same for
our agents too.
In the end, the key to reliable, working agents will likely be found in adopting
more structured, deterministic approaches, as well as collecting data to refine
prompts and finetune models. Without this, we’ll build agents that may work
exceptionally well some of the time, but on average, disappoint users which
leads to poor retention.
Suppose your task requires diversity in an LLM’s output. Maybe you’re writing
an LLM pipeline to suggest products to buy from your catalog given a list of
products the user bought previously. When running your prompt multiple
times, you might notice that the resulting recommendations are too similar—so
you might increase the temperature parameter in your LLM requests. However, even at higher temperatures, the outputs may not be as diverse as you hoped.
In other words, increasing temperature does not guarantee that the LLM will
sample outputs from the probability distribution you expect (e.g., uniform
random). Nonetheless, we have other tricks to increase output diversity. The
simplest way is to adjust elements within the prompt. For example, if the
prompt template includes a list of items, such as historical purchases, shuffling
the order of these items each time they’re inserted into the prompt can make a
significant difference.
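A minimal sketch of this trick (the purchases and prompt wording are placeholders):
</> python
import random

# A minimal sketch: shuffle the historical purchases each time they're
# inserted into the prompt to encourage more varied recommendations.
historical_purchases = ["running shoes", "yoga mat", "water bottle", "headphones"]

def build_prompt(purchases):
    shuffled = random.sample(purchases, k=len(purchases))  # non-destructive shuffle
    return (
        "The user previously bought: " + ", ".join(shuffled) + ".\n"
        "Suggest three products from our catalog they might enjoy next."
    )

print(build_prompt(historical_purchases))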
Caching is underrated.
Caching saves cost and eliminates generation latency by removing the need to
recompute responses for the same input. Furthermore, if a response has
previously been guardrailed, we can serve these vetted responses and reduce
the risk of serving harmful or inappropriate content.
One straightforward approach to caching is to use unique IDs for the items
being processed, such as if we’re summarizing new articles or product reviews.
When a request comes in, we can check to see if a summary already exists in
the cache. If so, we can return it immediately; if not, we generate, guardrail,
and serve it, and then store it in the cache for future requests.
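A minimal sketch of ID-keyed caching (the generation and guardrail functions are placeholders for your own pipeline steps):
</> python
# A minimal sketch of caching keyed on a unique item ID. The generation and
# guardrail functions are placeholders for your own pipeline steps.
cache = {}

def generate_summary(text: str) -> str:
    """Placeholder: call your LLM to summarize `text`."""
    raise NotImplementedError

def passes_guardrails(summary: str) -> bool:
    """Placeholder: run your safety/quality checks."""
    raise NotImplementedError

def get_summary(item_id: str, text: str) -> str:
    if item_id in cache:              # cache hit: serve the vetted response
        return cache[item_id]
    summary = generate_summary(text)  # cache miss: generate
    if passes_guardrails(summary):    # only cache responses that pass guardrails
        cache[item_id] = summary
    return summary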
For more open-ended queries, we can borrow techniques from the field of
search, which also leverages caching for open-ended inputs. Features like
autocomplete and spelling correction also help normalize user input and thus
increase the cache hit rate.
When to fine-tune
We may have some tasks where even the most cleverly designed prompts fall
short. For example, even after significant prompt engineering, our system may
still be a ways from returning reliable, high-quality output. If so, then it may be
necessary to finetune a model for your specific task.
Successful examples include:
Honeycomb’s Natural Language Query Assistant: Initially, the “programming
manual” was provided in the prompt together with n-shot examples for in-
context learning. While this worked decently, fine-tuning the model led to
better output on the syntax and rules of the domain-specific language.
ReChat’s Lucy: The LLM needed to generate responses in a very specific
format that combined structured and unstructured data for the frontend to
render correctly. Fine-tuning was essential to get it to work consistently.
Evaluating LLMs can be a minefield. The inputs and the outputs of LLMs are
arbitrary text, and the tasks we set them to are varied. Nonetheless, rigorous
and thoughtful evals are critical—it’s no coincidence that technical leaders at
OpenAI work on evaluation and give feedback on individual evals.
Create unit tests (i.e., assertions) consisting of samples of inputs and outputs
from production, with expectations for outputs based on at least three criteria.
While three criteria might seem arbitrary, it’s a practical number to start with;
fewer might indicate that your task isn’t sufficiently defined or is too open-
ended, like a general-purpose chatbot. These unit tests, or assertions, should
be triggered by any changes to the pipeline, whether it’s editing a prompt,
adding new context via RAG, or other modifications. This write-up has an
example of an assertion-based test for an actual use case.
As an example, if the user asks for a new function named foo, then after
executing the agent’s generated code, foo should be callable! One challenge in
execution-evaluation is that the agent’s code frequently leaves the runtime in
a slightly different form than the target code. It can be effective to “relax”
assertions to the weakest assumptions that any viable answer would satisfy.
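A minimal sketch of such an assertion (here exec runs the generated code in a scratch namespace; in practice, run untrusted code in a sandbox):
</> python
# A minimal sketch of an execution-evaluation assertion: run the generated
# code in a scratch namespace and check the weakest property any viable
# answer should satisfy (here, that a callable named `foo` exists).
generated_code = """
def foo(x):
    return x * 2
"""  # placeholder for the agent's actual output

namespace = {}
exec(generated_code, namespace)  # use a proper sandbox for untrusted code

assert "foo" in namespace and callable(namespace["foo"]), "foo should be callable"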
Finally, using your product as intended for customers (i.e., “dogfooding”) can
provide insight into failure modes on real-world data. This approach not only
helps identify potential weaknesses, but also provides a useful source of
production samples that can be converted into evals.
When using an LLM to evaluate another model’s output (LLM-as-judge), a few practices help:
Use pairwise comparisons: Instead of asking the LLM to score a single output
on a Likert scale, present it with two options and ask it to select the better
one. This tends to lead to more stable results.
Control for position bias: The order of options presented can bias the LLM’s
decision. To mitigate this, do each pairwise comparison twice, swapping the
order of pairs each time. Just be sure to attribute wins to the right option
after swapping (see the sketch after this list).
Allow for ties: In some cases, both options may be equally good. Thus, allow
the LLM to declare a tie so it doesn’t have to arbitrarily pick a winner.
Use Chain-of-Thought: Asking the LLM to explain its decision before giving a
final preference can increase eval reliability. As a bonus, this allows you to
use a weaker but faster LLM and still achieve similar results. Because
frequently this part of the pipeline is in batch mode, the extra latency from
CoT isn’t a problem.
Control for response length: LLMs tend to bias toward longer responses. To
mitigate this, ensure response pairs are similar in length.
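A minimal sketch of a pairwise, position-debiased comparison (the judge function is a placeholder for a call to your evaluator LLM that returns “A”, “B”, or “tie”):
</> python
# A minimal sketch of pairwise LLM-as-judge evaluation that controls for
# position bias by judging each pair twice with the order swapped.

def judge(question: str, first: str, second: str) -> str:
    """Placeholder: ask your evaluator LLM (ideally with chain-of-thought)
    which response better answers `question`; return "A", "B", or "tie"."""
    raise NotImplementedError

def compare(question: str, response_a: str, response_b: str) -> str:
    verdict_1 = judge(question, response_a, response_b)  # A shown first
    verdict_2 = judge(question, response_b, response_a)  # order swapped

    # Map the swapped verdict back to the original labels before comparing.
    verdict_2 = {"A": "B", "B": "A", "tie": "tie"}[verdict_2]

    if verdict_1 == verdict_2:
        return verdict_1  # consistent winner (or a consistent tie)
    return "tie"          # the judge disagreed across orders; call it a tie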
We like to use the following “intern test” when evaluating generations: If you
took the exact input to the language model, including the context, and gave it
to an average college student in the relevant major as a task, could they
succeed? How long would it take?
If the answer is no because the LLM lacks the required knowledge, consider
ways to enrich the context.
If the answer is no and we simply can’t improve the context to fix it, then we
may have hit a task that’s too hard for contemporary LLMs.
If the answer is yes, but it would take a while, we can try to reduce the
complexity of the task. Is it decomposable? Are there aspects of the task that
can be made more templatized?
If the answer is yes and they would get it quickly, then it’s time to dig into the
data. What’s the model doing wrong? Can we find a pattern of failures? Try
asking the model to explain itself before or after it responds, to help you build
a theory of mind.
When a measure becomes a target, it ceases to be a good measure.
— Goodhart’s Law
While some models achieve near-perfect recall, it’s questionable whether
needle-in-a-haystack (NIAH) evals truly reflect the reasoning and recall abilities needed in real-world
applications. Consider a more practical scenario: Given the transcript of an
hour-long meeting, can the LLM summarize the key decisions and next steps, as
well as correctly attribute each item to the relevant person? This task is more
realistic, going beyond rote memorization and also considering the ability to
parse complex discussions, identify relevant information, and synthesize
summaries.
This could also apply to other evals and use cases, such as summarization.
An emphasis on factual consistency could lead to summaries that are less
specific (and thus less likely to be factually inconsistent) and possibly less
relevant. Conversely, an emphasis on writing style and eloquence could lead to
more flowery, marketing-type language that could introduce factual
inconsistencies.
A key challenge when working with LLMs is that they’ll often generate output
even when they shouldn’t. This can lead to harmless but nonsensical responses,
or more egregious defects like toxicity or dangerous content. For example,
when asked to extract specific attributes or metadata from a document, an LLM
may confidently return values even when those values don’t actually exist.
Alternatively, the model may respond in a language other than English because
we provided non-English documents in the context.
While we can try to prompt the LLM to return a “not applicable” or “unknown”
response, it’s not foolproof. Even when the log probabilities are available,
they’re a poor indicator of output quality. While log probs indicate the
likelihood of a token appearing in the output, they don’t necessarily reflect the
correctness of the generated text. On the contrary, for instruction-tuned
models that are trained to respond to queries and generate coherent responses,
log probabilities may not be well-calibrated. Thus, while a high log probability
may indicate that the output is fluent and coherent, it doesn’t mean it’s
accurate or relevant.
A corollary here is that LLMs may fail to produce outputs when they are
expected to. This can happen for various reasons, from straightforward issues
like long tail latencies from API providers to more complex ones such as
outputs being blocked by content moderation filters. As such, it’s important to
consistently log inputs and (potentially a lack of) outputs for debugging and
monitoring.
Unlike content safety or PII defects, which have a lot of attention and thus
seldom occur, factual inconsistencies are stubbornly persistent and more
challenging to detect. They’re more common, occurring at a baseline rate of
5–10%, and from what we’ve learned from LLM providers, it can be challenging to
get below 2%, even on simple tasks such as summarization.
Eugene Yan designs, builds, and operates machine learning systems that serve
customers at scale. He’s currently a Senior Applied Scientist at Amazon where
he builds RecSys serving millions of customers worldwide (RecSys 2022 keynote)
and applies LLMs to serve customers better (AI Eng Summit 2023 keynote).
Previously, he led machine learning at Lazada (acquired by Alibaba) and a
Healthtech Series A. He writes & speaks about ML, RecSys, LLMs, and
engineering at eugeneyan.com and ApplyingML.com.
Bryan Bischof is the Head of AI at Hex, where he leads the team of engineers
building Magic—the data science and analytics copilot. Bryan has worked all
over the data stack leading teams in analytics, machine learning engineering,
data platform engineering, and AI engineering. He started the data team at
Blue Bottle Coffee, led several projects at Stitch Fix, and built the data teams
at Weights and Biases. Bryan previously co-authored the book Building
Production Recommendation Systems with O’Reilly, and teaches Data Science
and Analytics in the graduate school at Rutgers. His Ph.D. is in pure
mathematics.
Contact Us
We would love to hear your thoughts on this post. You can contact us at
[email protected]. Many of us are open to various forms of consulting
and advisory. If appropriate, we will route you to the relevant expert(s) upon contact.
Acknowledgements
This series started as a conversation in a group chat, where Bryan quipped that
he was inspired to write “A Year of AI Engineering.” Then, ✨magic✨
happened in the group chat, and we were all inspired to chip in and share what
we’ve learned so far.
The authors would like to thank Eugene for leading the bulk of the document
integration and overall structure, in addition to a large proportion of the
lessons, as well as for primary editing responsibilities and document
direction. The authors would like to thank Bryan for the spark that led to this
writeup, restructuring the write-up into tactical, operational, and strategic
sections and their intros, and for pushing us to think bigger on how we could
reach and help the community. The authors would like to thank Charles for his
deep dives on cost and LLMOps, as well as weaving the lessons to make them
more coherent and tighter—you have him to thank for this being 30 instead of
40 pages! The authors appreciate Hamel and Jason for their insights from
advising clients and being on the front lines, for their broad generalizable
learnings from clients, and for deep knowledge of tools. And finally, thank you
Shreya for reminding us of the importance of evals and rigorous production
practices and for bringing her research and original results to this piece.
Finally, the authors would like to thank all the teams who so generously shared
their challenges and lessons in their own write-ups, which we’ve referenced
throughout this series, along with the AI communities for their vibrant
participation and engagement with this group.