The Prompt Engineering Playbook For Programmers
AI pair programmers are powerful but not magical – they have no prior
knowledge of your specific project or intent beyond what you tell them or
include as context. The more information you provide, the better the
output. We’ll distill key prompt patterns, repeatable frameworks, and
memorable examples that have resonated with developers. You’ll see
side-by-side comparisons of good vs. bad prompts with actual AI
responses, along with commentary to understand why one succeeds
where the other falters. Here’s a cheat sheet to get started:
• Provide rich context. Always assume the AI knows nothing about your
project beyond what you provide. Include relevant details such as the
programming language, framework, and libraries, as well as the specific
function or snippet in question. If there’s an error, provide the exact error
message and describe what the code is supposed to do. Specificity and
context make the difference between vague suggestions and precise,
actionable solutions. In practice, this means your prompt might include a
brief setup like: “I have a Node.js function using Express and Mongoose
that should fetch a user by ID, but it throws a TypeError. Here’s the code
and error…”. The more setup you give, the less the AI has to guess (a
minimal sketch of the kind of snippet and error you might paste follows
this list).
• Break the work into smaller prompts. Tackling one step at a time not
only keeps the AI’s responses focused and manageable, but also
mirrors how a human would incrementally build a solution.
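For illustration, the snippet and error you might paste alongside that
Express/Mongoose prompt could look like this (the route and User model
are hypothetical stand-ins, not code from any particular project):

// Hypothetical Express route using a Mongoose User model
app.get('/users/:id', async (req, res) => {
  const user = await User.findById(req.params.id);
  // Throws when no user matches the id, because findById resolves to null:
  // TypeError: Cannot read properties of null (reading 'id')
  res.json({ id: user.id, name: user.name });
});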
Let’s start with a common debugging case: you have code that’s
misbehaving, and you want the AI to help figure out why.
// Example usage:
const result = mapUsersById([{ id: 1, name: "Alice" }]);
This code is supposed to iterate through the users array and build an
object where keys are user IDs. However, there is a bug: the loop uses i
<= users.length instead of i < users.length. This will cause users[i] to be
undefined on the last iteration, likely leading to a runtime error (e.g.,
attempting to access property id of undefined). Let’s see how an AI
assistant might respond to two different prompts about this issue.
With a vague prompt, say just “Why isn’t my function working?” with little
else, the assistant can only guess:

It’s hard to say without more information. Are you sure the function is
being called with a proper array of users? Perhaps the issue is that users
is empty or not an array, causing the loop not to execute properly. It could
also be that there’s an error when setting the property on userMap.
Make sure that each user has an id property. If users is undefined or null,
that would also cause it to not work.
✅ Enhanced Prompt:
“I have a JavaScript function mapUsersById that should convert an array of user objects into a map (object) keyed by user
ID. However, it throws an error when I run it. For example, when I pass [
{id: 1, name: "Alice"} ], I get TypeError: Cannot read property 'id' of
undefined. Here is the function code:
function mapUsersById(users) {
const userMap = {};
for (let i = 0; i <= users.length; i++) {
const user = users[i];
userMap[user.id] = user;
}
return userMap;
}
It should return { "1": {id: 1, name: "Alice"} }. What is the bug and how can
I fix it?”
The error Cannot read property 'id' of undefined suggests that at some
point, user is undefined, and then you try to access user.id. Looking at
your loop, the issue is with the loop bounds. You are using i <=
users.length. If users.length is 1 (for one user in the array), the loop runs
with i = 0 and i = 1. When i is 1, users[1] is undefined (since the only
index in the array is 0). Thus user becomes undefined on that iteration,
and user.id throws a TypeError. The fix is to change the loop condition to
i < users.length so that i goes from 0 to users.length-1. For example:
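for (let i = 0; i < users.length; i++) { // corrected bound: < instead of <=
  const user = users[i];
  userMap[user.id] = user;
}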
This way, i will be 0 for the first (and only) element, and then the loop
stops. With that change, your function will no longer hit an undefined
user and should work as expected, returning {"1": {id: 1, name: "Alice"}}
for the given input.
Analysis: This response directly pinpoints the bug (using <= in the loop)
and explains why it causes user to be undefined. It then provides the
correct code solution (i < users.length) and confirms the expected
outcome. The improved prompt succeeded because it followed our
principles: it specified the language (JavaScript), described the function’s
purpose, included the exact error message and a sample input, and
even provided the code snippet in question. Notice how the AI used the
error message as a clue to focus on the loop bounds – a targeted prompt
enabled the AI to engage in true problem-solving, effectively simulating
how a human debugger would think: “where could undefined come
from? likely from the loop indexing”. This is a concrete demonstration of
the benefit of detailed prompts.
• Ask for potential causes. If you’re truly stumped, you can broaden the
question slightly: “What are some possible reasons for a TypeError:
cannot read property 'foo' of undefined in this code?” along with the code.
The model might list a few scenarios (e.g. the object wasn’t initialized, a
race condition, wrong variable scoping, etc.). This can give you angles to
investigate that you hadn’t considered. It’s like brainstorming with a
colleague.
• “Ask the Rubber Duck” – i.e., explain your code to the AI. This may
sound counterintuitive (why explain to the assistant?), but the act of
writing an explanation can clarify your own understanding, and you can
then have the AI verify or critique it. For example: “I will explain what this
function is doing: [your explanation]. Given that, is my reasoning correct
and does it reveal where the bug is?” The AI might catch a flaw in your
explanation that points to the actual bug. This technique leverages the AI
as an active rubber duck that not only listens but responds.
• Have the AI create test cases. You can ask: “Can you provide a couple of
test cases (inputs) that might break this function?” The assistant might
come up with edge cases you didn’t think of (empty array, extremely
large numbers, null values, etc.). This is useful both for debugging and
for generating tests for future robustness.
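For instance, asked what inputs might break the mapUsersById function
above, the assistant could suggest cases like these (an illustrative sketch
using plain Node.js assertions, not taken from any particular response):

const assert = require('assert');

// Empty array: should return an empty object rather than throw
assert.deepStrictEqual(mapUsersById([]), {});

// A user without an id: the key becomes undefined, which is probably unintended
console.log(mapUsersById([{ name: 'Bob' }]));

// null input: currently throws; the caller may want a guard
try {
  mapUsersById(null);
} catch (e) {
  console.log('Throws on null input:', e.message);
}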
Suppose we have a function that makes two database calls and does
some processing. It works, but it’s not pretty – there’s duplicated code
and it’s hard to read. We’d like to refactor it for clarity and efficiency.
Here’s the shape of the original code (a representative sketch using a
hypothetical db.query helper):
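async function getUsersWithOrders() {
  let users;
  try {
    users = await db.query('SELECT * FROM users');   // first fetch
  } catch (err) {
    throw new Error('Database error');               // generic error message
  }
  let orders;
  try {
    orders = await db.query('SELECT * FROM orders'); // duplicated fetch logic
  } catch (err) {
    throw new Error('Database error');               // same generic message
  }
  // Attach each user's orders by re-scanning the full orders list every time
  return users.map(user => ({
    ...user,
    orders: orders.filter(order => order.userId === user.id),
  }));
}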
Issues in this code: We see duplicated fetch logic for users and orders
(could be refactored into a helper or loop), lack of error detail (just a
generic error message), and it could perhaps run the fetches in parallel
to improve speed. Let’s see two prompting approaches:
The risk with a generic “Refactor this” prompt is that the AI might make
changes that alter behavior in unintended ways. Here, combining error
handling might be acceptable or not, depending on requirements. The
assistant also made a structural change (using a dictionary to group
orders) – that’s fine and probably for the better, but if we had specific
constraints (like wanting to stream results or keep memory usage low),
we didn’t inform it. So while the result looks good, it was somewhat lucky
that the AI’s own interpretation of “refactor” aligned with what we wanted.
This illustrates that an underspecified prompt can sometimes get good
results on simple cases, but for complex code, it might go off in an
undesired direction or miss certain improvements you care about.
With a more specific prompt that spelled out the goals (run the two
fetches in parallel, keep a separate error message for each, and clean up
the duplication), the assistant returned a refactor whose order-grouping
step looked like this:

const ordersByUser = orders.reduce((map, order) => {
  if (!map[order.userId]) map[order.userId] = [];
  map[order.userId].push(order);
  return map;
}, {});
Explanation of changes:
• Parallel fetching: I used Promise.all to fetch users and orders at the same
time (point 1).
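The parallel-fetch part of such a refactor would look roughly like this (a
sketch that keeps a distinct error message per fetch, again assuming the
hypothetical db.query helper):

const [users, orders] = await Promise.all([
  db.query('SELECT * FROM users').catch(() => {
    throw new Error('Failed to fetch users');
  }),
  db.query('SELECT * FROM orders').catch(() => {
    throw new Error('Failed to fetch orders');
  }),
]);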
Analysis: This response directly addressed all the specified goals. The
code is refactored to be cleaner and faster, and it maintained separate
error messages as requested. The AI’s explanation confirms each point was addressed.
This example demonstrates a key lesson: when you know what you
want improved, spell it out. AI is good at following instructions, but it
won’t read your mind. A broad “make this better” might work for simple
things, but for non-trivial code, you’ll get the best results by enumerating
what “better” means to you. This aligns with community insights that
clear, structured prompts yield significantly improved results.
• Refactor in steps: If the code is very large or you have a long list of
changes, you can tackle them one at a time. For example, first ask the AI
to “refactor for readability” (focus on renaming, splitting functions), then
later “optimize the algorithm in this function.” This prevents
overwhelming the model with too many instructions at once and lets you
verify each change stepwise.
• Ask for alternative approaches: Maybe the AI’s first refactor works but
you’re curious about a different angle. You can ask, “Can you refactor it in
another way, perhaps using functional programming style (e.g. array
methods instead of loops)?” or “How about using recursion here instead
of iterative approach, just to compare?” This way, you can evaluate
different solutions. It’s like brainstorming multiple refactoring options with
a colleague.
• Validation and testing: After any AI-generated refactor, always run your
tests or try the code with sample inputs. AI might inadvertently introduce
subtle bugs, especially if the prompt didn’t specify an important
constraint. For example, in our refactor, if the original code intentionally
separated fetch errors for logging but we didn’t mention logging, the
combined error might be less useful. It’s our job to catch that in review.
The AI can help by writing tests too – you could ask “Generate a few unit
tests for the refactored function” to ensure it behaves the same as before
on expected inputs.
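For illustration, such a generated test might look like this (a minimal
sketch assuming Jest and a groupOrdersByUser helper extracted during
the refactor; both are hypothetical here):

test('orders are grouped under the correct user id', () => {
  const orders = [
    { id: 1, userId: 1 },
    { id: 2, userId: 2 },
    { id: 3, userId: 1 },
  ];
  const grouped = groupOrdersByUser(orders); // hypothetical helper from the refactor
  expect(grouped[1]).toHaveLength(2);
  expect(grouped[2]).toHaveLength(1);
});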
✅ Enhanced Prompt:
useEffect(() => {
  fetchUser(userId).then(setUser).finally(() => setLoading(false));
}, [userId, setUser, setLoading]); // Problem is here
Expected behavior: Should fetch user data once when userId changes
Actual behavior: Component re-renders infinitely
Error in console: "Warning: Maximum update depth exceeded"
What's causing this infinite loop and how do I fix the dependency array?
✅ Enhanced Prompt:
I'm building a Next.js 14 e-commerce app and need to design the state
management architecture. Here are my requirements:
Components:
Technical constraints:
Should I use:
2. React Query/TanStack Query for server state + Zustand for client state
Why this works: Real-world scenario with specific tech stack, clear
requirements, and asks for architectural guidance with implementation
details.
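If the assistant recommends option 2, the resulting split might look
roughly like this (an illustrative sketch assuming TanStack Query and
Zustand; the endpoint and cart store shape are hypothetical):

import { useQuery } from '@tanstack/react-query';
import { create } from 'zustand';

// Server state: products fetched and cached with TanStack Query
function useProducts() {
  return useQuery({
    queryKey: ['products'],
    queryFn: () => fetch('/api/products').then((res) => res.json()),
  });
}

// Client state: a small Zustand store for the cart
const useCartStore = create((set) => ({
  items: [],
  addItem: (item) => set((state) => ({ items: [...state.items, item] })),
}));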
One of the most exciting uses of AI code assistants is to help you write
new code from scratch or integrate a new feature into an existing
codebase. This could range from generating a boilerplate for a React
component to writing a new API endpoint in an Express app. The
challenge here is often that these tasks are open-ended – there are
many ways to implement a feature. Prompt engineering for code
generation is about guiding the AI to produce code that fits your needs.
One effective pattern is to describe the feature and ask for a plan before
any code, for example: “I want to add a search box that filters my React
product list. What steps are involved?”
The AI might give you a step-by-step plan: “1. Add an input field for the
search query. 2. Add state to hold the query. 3. Filter the products list
based on the query. 4. Ensure it’s case-insensitive, etc.” Once you have
this plan (which you can refine with the AI’s help), you can tackle each
bullet with focused prompts.
The AI will see the patterns (maybe you use certain libraries or style
conventions) and apply them. Having relevant files open or referencing
them in your prompt provides context that leads to more project-specific
suggestions.
• If starting something new but you have a preferred approach, you can
also mention that: “I’d like to implement this using functional
programming style (no external state, using array methods).” Or, “Ensure
to follow the MVC pattern and put logic in the controller, not the view.”
These are the kind of details a senior engineer might remind a junior
about, and here you are the senior telling the AI.
5. When the result isn’t what you want, rewrite the prompt with
more detail or constraints. It’s common that the first attempt at
generating a new feature doesn’t nail it. Maybe the code runs but is not
idiomatic, or it missed a requirement. Instead of getting frustrated, treat
the AI like a junior dev who gave a first draft – now you need to give
feedback. For example, “The solution works but I’d prefer if you used the
built-in array filter method instead of a for loop.” Or, “Can you refactor the
generated component to use React Hooks for state instead of a class
component? Our codebase is all functional components.” You can also
add new constraints: “Also, ensure the function runs in O(n) time or
better, because n could be large.” This iterative prompting is powerful. A
real-world scenario: one developer asked an LLM to generate code to
draw an ice cream cone using a JS canvas library, but it kept giving
irrelevant output until they refined the prompt with more specifics and
context. The lesson is: don’t give up after one try. Figure out what was
lacking or misunderstood in the prompt and clarify it. This is the essence
of prompt engineering – each tweak can guide the model closer to what
you envision.
A detailed prompt like that is long, but it’s specific. Let’s assume we give
it to an LLM in a chat; it might return something like:
function ProductList() {
  const [products, setProducts] = useState([]);
  const [search, setSearch] = useState('');
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    // fetchProducts() stands in for the app's real data-fetching call
    fetchProducts()
      .then(setProducts)
      .catch(setError)
      .finally(() => setLoading(false));
  }, []);

  const handleSearchChange = (e) => setSearch(e.target.value);

  const filteredProducts = products.filter(product =>
    product.name.toLowerCase().includes(search.toLowerCase())
  );

  if (loading) return <p>Loading products…</p>;
  if (error) return <p>Failed to load products.</p>;

  return (
    <div className="product-list">
      <input
        type="text"
        placeholder="Search products"
        value={search}
        onChange={handleSearchChange}
      />
      <ul>
        {filteredProducts.map(prod => (
          <li key={prod.id}>{prod.name}</li>
        ))}
      </ul>
    </div>
  );
}
• Suppose the above code came back, but our project uses a context or a
global store for data fetching instead of fetching inside the component.
This is something we didn’t specify. The AI made a choice to fetch inside
the component, so the follow-up is simply to ask it to pull the data from
our context or store instead.
• If the AI makes a subtle mistake (say it forgot to make the search filter
case-insensitive), we just point that out: “Make the search case-
insensitive.” It will adjust the filter to use lowercase comparison (which in
our pseudo-output it already did, but if not it would fix it).
• Let the AI scaffold, then you fill in specifics: Sometimes it’s useful to have
the AI generate a rough structure, then you tweak it. For example,
“Generate the skeleton of a Node.js Express route for user registration
with validation and error handling.” It might produce a generic route with
placeholders. You can then fill in the actual validation rules or database
calls which are specific to your app. The AI saves you from writing
boilerplate, and you handle the custom logic where it’s sensitive (a
sketch of such a skeleton appears after this list).
• Ask for edge case handling: When generating a feature, you might
prompt the AI to think of edge cases: “What edge cases should we
consider for this feature (and can you handle them in the code)?” For
instance, in the search example, an edge case might be “what if the
products haven’t loaded yet when the user types?” (though our code
handles that via loading state) or “what if two products have the same
name” (not a big issue but maybe mention it). The AI could mention
things like empty result handling, very large lists (maybe needing
debounce for search input), etc. This is a way to leverage the AI’s training
on common pitfalls.
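To make the scaffolding idea concrete, here is the kind of skeleton the
registration-route prompt above might produce (an illustrative sketch;
the validation and persistence placeholders are ours, not from any real
response):

const express = require('express');
const router = express.Router();

router.post('/register', async (req, res) => {
  try {
    const { email, password } = req.body;
    // Placeholder validation: replace with your app's real rules
    if (!email || !password) {
      return res.status(400).json({ error: 'Email and password are required' });
    }
    // TODO: hash the password and save the user via your database layer
    return res.status(201).json({ message: 'User registered' });
  } catch (err) {
    return res.status(500).json({ error: 'Registration failed' });
  }
});

module.exports = router;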
• Write the spec as a comment first. For example:

/**
 * Returns the nth Fibonacci number.
 * @param {number} n - The position in the Fibonacci sequence (0-indexed).
 * @returns {number} The nth Fibonacci number.
 *
 * Example: fibonacci(5) -> 5 (sequence: 0,1,1,2,3,5,…)
 */
function fibonacci(n) {
  // ... implementation
}
• If you write the above comment and function signature, an LLM might fill
in the implementation correctly because the comment describes exactly
what to do and even gives an example. This technique ensures you
clarify the feature in words first (which is a good practice generally), and
then the AI uses that as the spec to write the code.
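Given that comment as the spec, a plausible completion looks like this (an
illustrative sketch; an LLM might equally produce a recursive or memoized
version):

function fibonacci(n) {
  if (n < 2) return n; // fibonacci(0) = 0, fibonacci(1) = 1
  let prev = 0;
  let curr = 1;
  for (let i = 2; i <= n; i++) {
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr; // e.g. fibonacci(5) === 5
}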
Not all prompts are created equal. By now, we’ve seen numerous
examples of prompts that work well and a few that don’t.
Here are some frequent prompt failures and how to fix them:
• Anti-Pattern: The Vague Prompt. This is the classic “It doesn’t work,
please fix it” or “Write something that does X” without enough detail. We
saw an example of this when the question “Why isn’t my function
working?” got a useless answer. Vague prompts force the AI to guess
the context and often result in generic advice or irrelevant code. The fix is
straightforward: add context and specifics. If you find yourself asking a
question and the answer feels like a Magic 8-ball response (“Have you
tried checking X?”), stop and reframe your query with more details (error
messages, code excerpt, expected vs actual outcome, etc.). A good
practice is to read your prompt and ask, “Could this question apply to
dozens of different scenarios?” If yes, it’s too vague. Make it so specific
that it could only apply to your scenario.
Breaking an overloaded, multi-part request into smaller, focused prompts
gives it direction.
• We saw the power of iterating with the AI, whether it’s stepping through a
function’s logic line by line, or refining a solution through multiple prompts
(like turning a recursive solution into an iterative one, then improving
variable names). Patience and iteration turn the AI into a true pair
programmer rather than a one-shot code generator.
• Along the way, we identified pitfalls to avoid: keeping prompts neither too
vague nor too overloaded, always specifying our intent and constraints,
and being ready to adjust when the AI’s output isn’t on target. We cited
concrete examples of bad prompts and saw how minor changes (like
including an error message or expected output) can dramatically
improve the outcome.
As you incorporate these techniques into your workflow, you’ll likely find
that working with AI becomes more intuitive. You’ll develop a feel for
what phrasing gets the best results and how to guide the model when it
goes off course. Remember that the AI is a product of its training data – it
has seen many examples of code and problem-solving, but it’s you who
provides direction on which of those examples are relevant now. In
essence, you set the context, and the AI follows through.
With clear context, explicit intent, and iterative refinement of both your
prompts and the AI’s output, you can turn these code-focused AI tools into true
extensions of your development workflow. The end result is not only that
you code faster, but often you pick up new insights and patterns along
the way (as the AI explains things or suggests alternatives), leveling up
your own skillset.
Further reading: