
The Prompt Engineering Playbook for Programmers
Addy Osmani
addyo.substack.com

Developers are increasingly relying on AI coding assistants to accelerate
our daily workflows. These tools can autocomplete functions, suggest
bug fixes, and even generate entire modules or MVPs. Yet, as many of
us have learned, the quality of the AI’s output depends largely on the
quality of the prompt you provide. In other words, prompt engineering
has become an essential skill. A poorly phrased request can yield
irrelevant or generic answers, while a well-crafted prompt can produce
thoughtful, accurate, and even creative code solutions. This write-up
takes a practical look at how to systematically craft effective prompts for
common development tasks.

AI pair programmers are powerful but not magical – they have no prior
knowledge of your specific project or intent beyond what you tell them or
include as context. The more information you provide, the better the
output. We’ll distill key prompt patterns, repeatable frameworks, and
memorable examples that have resonated with developers. You’ll see
side-by-side comparisons of good vs. bad prompts with actual AI
responses, along with commentary to understand why one succeeds
where the other falters. Here’s a cheat sheet to get started:


Prompting an AI coding tool is somewhat like communicating with a very


literal, sometimes knowledgeable collaborator. To get useful results, you
need to set the stage clearly and guide the AI on what you want and how
you want it.

Below are foundational principles that underpin all examples in this


playbook:


• Provide rich context. Always assume the AI knows nothing about your
project beyond what you provide. Include relevant details such as the
programming language, framework, and libraries, as well as the specific
function or snippet in question. If there’s an error, provide the exact error
message and describe what the code is supposed to do. Specificity and
context make the difference between vague suggestions and precise,
actionable solutions. In practice, this means your prompt might include a
brief setup like: “I have a Node.js function using Express and Mongoose
that should fetch a user by ID, but it throws a TypeError. Here’s the code
and error…”. The more setup you give, the less the AI has to guess.

• Be specific about your goal or question. Vague queries lead to vague


answers. Instead of asking something like “Why isn’t my code working?”,
pinpoint what insight you need. For example: “This JavaScript function is
returning undefined instead of the expected result. Given the code
below, can you help identify why and how to fix it?” is far more likely to
yield a helpful answer. One prompt formula for debugging is: “It’s
expected to do [expected behavior] but instead it’s doing [current
behavior] when given [example input]. Where is the bug?” Similarly, if
you want an optimization, ask for a specific kind of optimization (e.g.
“How can I improve the runtime performance of this sorting function for
10k items?”). Specificity guides the AI’s focus.

• Break down complex tasks. When implementing a new feature or


tackling a multi-step problem, don’t feed the entire problem in one
gigantic prompt. It’s often more effective to split the work into smaller
chunks and iterate. For instance, “First, generate a React component
skeleton for a product list page. Next, we’ll add state management. Then,
we’ll integrate the API call.” Each prompt builds on the previous. It’s often
not advised to ask for a whole large feature in one go; instead, start with
a high-level goal and then iteratively ask for each piece. This approach
not only keeps the AI’s responses focused and manageable, but also
mirrors how a human would incrementally build a solution.

• Include examples of inputs/outputs or expected behavior. If you can


illustrate what you want with an example, do it. For example, “Given the
array [3,1,4], this function should return [1,3,4].” Providing a concrete
example in the prompt helps the AI understand your intent and reduces
ambiguity. It’s akin to giving a junior developer a quick test case – it
clarifies the requirements. In prompt engineering terms, this is
sometimes called “few-shot prompting,” where you show the AI a
pattern to follow. Even one example of correct behavior can guide the
model’s response significantly.

• Leverage roles or personas. A powerful technique popularized in


many viral prompt examples is to ask the AI to “act as” a certain persona
or role. This can influence the style and depth of the answer. For
instance, “Act as a senior React developer and review my code for
potential bugs” or “You are a JavaScript performance expert. Optimize
the following function.” By setting a role, you prime the assistant to adopt
the relevant tone – whether it’s being a strict code reviewer, a helpful
teacher for a junior dev, or a security analyst looking for vulnerabilities.
Community-shared prompts have shown success with this method,
such as “Act as a JavaScript error handler and debug this function for
me. The data isn’t rendering properly from the API call.” In our own
usage, we must still provide the code and problem details, but the role-
play prompt can yield more structured and expert-level guidance.


• Iterate and refine the conversation. Prompt engineering is an


interactive process, not a one-shot deal. Developers often need to
review the AI’s first answer and then ask follow-up questions or make
corrections. If the solution isn’t quite right, you might say, “That solution
uses recursion, but I’d prefer an iterative approach – can you try again
without recursion?” Or, “Great, now can you improve the variable names
and add comments?” The AI remembers the context in a chat session,
so you can progressively steer it to the desired outcome. The key is to
view the AI as a partner you can coach – progress over perfection on
the first try.

• Maintain code clarity and consistency. This last principle is a bit


indirect but very important for tools that work on your code context. Write
clean, well-structured code and comments, even before the AI comes
into play. Meaningful function and variable names, consistent formatting,
and docstrings not only make your code easier to understand for
humans, but also give the AI stronger clues about what you’re doing. If
you show a consistent pattern or style, the AI will continue it. Treat these
tools as extremely attentive junior developers – they take every cue from
your code and comments.
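
As a quick illustration of the kind of cues that help, compare a well-documented helper like the sketch below (the function and all its names are invented for this example, not taken from any particular project):

/**
 * Returns only the orders placed within the last `days` days.
 * @param {Array<{createdAt: string}>} orders
 * @param {number} days
 * @returns {Array<{createdAt: string}>}
 */
function filterRecentOrders(orders, days) {
  const cutoffMs = Date.now() - days * 24 * 60 * 60 * 1000;
  return orders.filter(order => new Date(order.createdAt).getTime() >= cutoffMs);
}

A descriptive name plus a docstring like this gives the assistant far more to go on than a function called process(data) with no comments.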

With these foundational principles in mind, let’s dive into specific
scenarios. We’ll start with debugging, perhaps the most immediate use-
case: you have code that’s misbehaving, and you want the AI to help
figure out why.

Debugging is a natural fit for an AI assistant. It’s like having a rubber-


duck that not only listens, but actually talks back with suggestions.
However, success largely depends on how you present the problem to
the AI. Here’s how to systematically prompt for help in finding and fixing
bugs:

1. Clearly describe the problem and symptoms. Begin your prompt


by describing what is going wrong and what the code is supposed to do.
Always include the exact error message or incorrect behavior. For
example, instead of just saying “My code doesn’t work,” you might
prompt: “I have a function in JavaScript that should calculate the sum of
an array of numbers, but it’s returning NaN (Not a Number) instead of the
actual sum. Here is the code: [include code]. It should output a number
(the sum) for an array of numbers like [1,2,3], but I’m getting NaN. What
could be the cause of this bug?” This prompt specifies the language, the
intended behavior, the observed wrong output, and provides the code
context – all crucial information. Providing a structured context (code +
error + expected outcome + what you’ve tried) gives the AI a solid
starting point. By contrast, a generic question like “Why isn’t my function
working?” yields meager results – the model can only offer the most
general guesses without context.
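
For concreteness, here is a minimal sketch of the kind of buggy code such a prompt might include (purely illustrative; your actual bug may look different):

function sumArray(numbers) {
  let total; // bug: never initialized to 0, so undefined + number yields NaN
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}

console.log(sumArray([1, 2, 3])); // NaN instead of 6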

2. Use a step-by-step or line-by-line approach for tricky bugs. For


more complex logic bugs (where no obvious error message is thrown but
the output is wrong), you can prompt the AI to walk through the code’s
execution. For instance: “Walk through this function line by line and track
the value of total at each step. It’s not accumulating correctly – where
does the logic go wrong?” This is an example of a rubber duck
debugging prompt – you’re essentially asking the AI to simulate the
debugging process a human might do with prints or a debugger. Such
prompts often reveal subtle issues like variables not resetting or incorrect
conditional logic, because the AI will spell out the state at each step. If
you suspect a certain part of the code, you can zoom in: “Explain what
the filter call is doing here, and if it might be excluding more items than it
should.” Engaging the AI in an explanatory role can surface the bug in
the process of explanation.

3. Provide minimal reproducible examples when possible.


Sometimes your actual codebase is large, but the bug can be
demonstrated in a small snippet. If you can extract or simplify the code
that still reproduces the issue, do so and feed that to the AI. This not only
makes it easier for the AI to focus, but also forces you to clarify the
problem (often a useful exercise in itself). For example, if you’re getting a
TypeError in a deeply nested function call, try to reproduce it with a few
lines that you can share. Aim to isolate the bug with the minimum code,
make an assumption about what’s wrong, test it, and iterate. You can
involve the AI in this by saying: “Here’s a pared-down example that still
triggers the error [include snippet]. Why does this error occur?” By
simplifying, you remove noise and help the AI pinpoint the issue. (This
technique mirrors the advice of many senior engineers: if you can’t
immediately find a bug, simplify the problem space. The AI can assist in
that analysis if you present a smaller case to it.)

4. Ask focused questions and follow-ups. After providing context, it’s


often effective to directly ask what you need, for example: “What might
be causing this issue, and how can I fix it?” This invites the AI to both
diagnose and propose a solution. If the AI’s first answer is unclear or
partially helpful, don’t hesitate to ask a follow-up. You could say, “That
explanation makes sense. Can you show me how to fix the code?
Please provide the corrected code.” In a chat setting, the AI has the
conversation history, so it can directly output the modified code. If you’re
using an inline tool like Copilot in VS Code or Cursor without a chat, you
might instead write a comment above the code like // BUG: returns NaN,
fix this function and see how it autocompletes – but in general, the
interactive chat yields more thorough explanations. Another follow-up
pattern: if the AI gives a fix but you don’t understand why, ask “Can you
explain why that change solves the problem?” This way you learn for
next time, and you double-check that the AI’s reasoning is sound.

Now, let’s illustrate these debugging prompt principles with a concrete


example, showing a poor prompt vs. improved prompt and the
difference in AI responses:

Imagine we have a simple Node.js function meant to convert a list of user


objects to a lookup map by user ID. However, it’s throwing an error.
Here’s the buggy code:

// Buggy function: converts array of users to a map by ID
function mapUsersById(users) {
  const userMap = {};
  for (let i = 0; i <= users.length; i++) {
    // Note: using <= will go out of bounds on the last iteration
    const user = users[i];
    userMap[user.id] = user;
  }
  return userMap;
}

// Example usage:
const result = mapUsersById([{ id: 1, name: "Alice" }]);

This code is supposed to iterate through the users array and build an
object where keys are user IDs. However, there is a bug: the loop uses i
<= users.length instead of i < users.length. This will cause users[i] to be
undefined on the last iteration, likely leading to a runtime error (e.g.,
attempting to access property id of undefined). Let’s see how an AI
assistant might respond to two different prompts about this issue.

❌ Poor Prompt: “Why isn’t my mapUsersById function working?”

AI’s response to the poor prompt (simulated):

It’s hard to say without more information. Are you sure the function is
being called with a proper array of users? Perhaps the issue is that users
is empty or not an array, causing the loop not to execute properly. It could
also be that there’s an error when setting the property on userMap.
Make sure that each user has an id property. If users is undefined or null,
that would also cause it to not work.

Analysis: This answer is quite vague. The prompt gave no details


except the function name, so the AI is guessing at generic causes
(“maybe the array is empty or not an array”, “make sure each user has
an id”). It did not identify the actual bug. This is the kind of response you
get when the question is underspecified – the model can only regurgitate
common debugging tips. As a developer, this isn’t very helpful because it
doesn’t address the real mistake. The lack of context and specificity in
the prompt yields a generic answer (essentially the AI equivalent of
“have you tried turning it off and on again?”). This aligns with the
community advice that vague prompts lead to irrelevant or generic
answers.

✅ Improved Prompt: “I have a JavaScript function mapUsersById that
should convert an array of user objects into a map (object) keyed by user
ID. However, it throws an error when I run it. For example, when I pass [
{id: 1, name: "Alice"} ], I get TypeError: Cannot read property 'id' of
undefined. Here is the function code:

function mapUsersById(users) {
const userMap = {};
for (let i = 0; i <= users.length; i++) {
const user = users[i];
userMap[user.id] = user;
}
return userMap;
}

It should return { "1": {id: 1, name: "Alice"} }. What is the bug and how can
I fix it?”

AI’s response to the improved prompt (simulated):

The error Cannot read property 'id' of undefined suggests that at some
point, user is undefined, and then you try to access user.id. Looking at
your loop, the issue is with the loop bounds. You are using i <=
users.length. If users.length is 1 (for one user in the array), the loop runs
with i = 0 and i = 1. When i is 1, users[1] is undefined (since the only
index in the array is 0). Thus user becomes undefined on that iteration,
and user.id throws a TypeError. The fix is to change the loop condition to
i < users.length so that i goes from 0 to users.length-1. For example:

for (let i = 0; i < users.length; i++) {
  const user = users[i];
  userMap[user.id] = user;
}

This way, i will be 0 for the first (and only) element, and then the loop
stops. With that change, your function will no longer hit an undefined
user and should work as expected, returning {"1": {id: 1, name: "Alice"}}
for the given input.

Analysis: This response directly pinpoints the bug (using <= in the loop)
and explains why it causes user to be undefined. It then provides the
correct code solution (i < users.length) and confirms the expected
outcome. The improved prompt succeeded because it followed our
principles: it specified the language (JavaScript), described the function’s
purpose, included the exact error message and a sample input, and
even provided the code snippet in question. Notice how the AI used the
error message as a clue to focus on the loop bounds – a targeted prompt
enabled the AI to engage in true problem-solving, effectively simulating
how a human debugger would think: “where could undefined come
from? likely from the loop indexing”. This is a concrete demonstration of
the benefit of detailed prompts.

Additional Debugging Tactics: Beyond identifying obvious bugs, you


can use prompt engineering for deeper debugging assistance:

• Ask for potential causes. If you’re truly stumped, you can broaden the
question slightly: “What are some possible reasons for a TypeError:
cannot read property 'foo' of undefined in this code?” along with the code.
The model might list a few scenarios (e.g. the object wasn’t initialized, a
race condition, wrong variable scoping, etc.). This can give you angles to
investigate that you hadn’t considered. It’s like brainstorming with a
colleague.

• “Ask the Rubber Duck” – i.e., explain your code to the AI. This may
sound counterintuitive (why explain to the assistant?), but the act of
writing an explanation can clarify your own understanding, and you can
then have the AI verify or critique it. For example: “I will explain what this
function is doing: [your explanation]. Given that, is my reasoning correct
and does it reveal where the bug is?” The AI might catch a flaw in your
explanation that points to the actual bug. This technique leverages the AI
as an active rubber duck that not only listens but responds.

• Have the AI create test cases. You can ask: “Can you provide a couple of
test cases (inputs) that might break this function?” The assistant might
come up with edge cases you didn’t think of (empty array, extremely
large numbers, null values, etc.). This is useful both for debugging and
for generating tests for future robustness (a sketch of such test cases follows this list).

• Role-play a code reviewer. As an alternative to a direct “debug this”


prompt, you can say: “Act as a code reviewer. Here’s a snippet that isn’t
working as expected. Review it and point out any mistakes or bad
practices that could be causing issues: [code]”. This sets the AI into a
critical mode. Many developers find that phrasing the request as a code
review yields a very thorough analysis, because the model will comment
on each part of the code (and often, in doing so, it spots the bug). In fact,
one prompt engineering tip is to explicitly request the AI to behave like a
meticulous reviewer. This can surface not only the bug at hand but also
other issues (e.g. potential null checks missing) which might be useful.
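
To make the test-case suggestion above concrete, here is a hedged sketch of the checks an assistant might propose for the earlier mapUsersById example (assuming the corrected i < users.length version):

console.assert(JSON.stringify(mapUsersById([])) === '{}'); // empty array
console.assert(mapUsersById([{ id: 1, name: 'Alice' }])['1'].name === 'Alice'); // single user
const withDuplicates = mapUsersById([{ id: 2, name: 'A' }, { id: 2, name: 'B' }]);
console.assert(withDuplicates['2'].name === 'B'); // duplicate IDs: the last entry wins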

In summary, when debugging with an AI assistant, detail and direction


are your friends. Provide the scenario, the symptoms, and then ask
pointed questions. The difference between a flailing “it doesn’t work,
help!” prompt and a surgical debugging prompt is night and day, as we
saw above. Next, we’ll move on to another major use case: refactoring
and improving existing code.

Refactoring code – making it cleaner, faster, or more idiomatic without


changing its functionality – is an area where AI assistants can shine.
They’ve been trained on vast amounts of code, which includes many
examples of well-structured, optimized solutions. However, to tap into
that knowledge effectively, your prompt must clarify what “better”
means for your situation. Here’s how to prompt for refactoring tasks:

1. State your refactoring goals explicitly. “Refactor this code” on its


own is too open-ended. Do you want to improve readability? Reduce
complexity? Optimize performance? Use a different paradigm or library?
The AI needs a target. A good prompt frames the task, for example:
“Refactor the following function to improve its readability and
maintainability (reduce repetition, use clearer variable names).” Or
“Optimize this algorithm for speed – it’s too slow on large inputs.” By
stating specific goals, you help the model decide which transformations
to apply. For instance, telling it you care about performance might lead it
to use a more efficient sorting algorithm or caching, whereas focusing on
readability might lead it to break a function into smaller ones or add
comments. If you have multiple goals, list them out. A prompt template
from the Strapi guide suggests even enumerating issues: “Issues I’d like
to address: 1) [performance issue], 2) [code duplication], 3) [outdated
API usage].” This way, the AI knows exactly what to fix. Remember, it
will not inherently know what you consider a problem in the code – you
must tell it.

2. Provide the necessary code context. When refactoring, you’ll


typically include the code snippet that needs improvement in the prompt.
It’s important to include the full function or section that you want to be
refactored, and sometimes a bit of surrounding context if relevant (like
the function’s usage or related code, which could affect how you
refactor). Also mention the language and framework, because
“idiomatic” code varies between, say, idiomatic Node.js vs. idiomatic
Deno, or React class components vs. functional components. For
example: “I have a React component written as a class. Please refactor it
to a functional component using Hooks.” The AI will then apply the typical
steps (using useState, useEffect, etc.). If you just said “refactor this React
component” without clarifying the style, the AI might not know you
specifically wanted Hooks.

• Include version or environment details if relevant. For instance, “This


is a Node.js v14 codebase” or “We’re using ES6 modules”. This can
influence whether the AI uses certain syntax (like import/export vs.
require), which is part of a correct refactoring. If you want to ensure it
doesn’t introduce something incompatible, mention your constraints.

3. Encourage explanations along with the code. A great way to learn


from an AI-led refactor (and to verify its correctness) is to ask for an
explanation of the changes. For example: “Please suggest a refactored
version of the code, and explain the improvements you made.” This was
even built into the prompt template we referenced: “…suggest refactored
code with explanations for your changes.” When the AI provides an
explanation, you can assess if it understood the code and met your
objectives. The explanation might say: “I combined two similar loops into
one to reduce duplication, and I used a dictionary for faster lookups,” etc.
If something sounds off in the explanation, that’s a red flag to examine
the code carefully. In short, use the AI’s ability to explain as a safeguard –
it’s like having the AI perform a code review on its own refactor.

4. Use role-play to set a high standard. As mentioned earlier, asking


the AI to act as a code reviewer or senior engineer can be very effective.
For refactoring, you might say: “Act as a seasoned TypeScript expert
and refactor this code to align with best practices and modern
standards.” This often yields not just superficial changes, but more
insightful improvements because the AI tries to live up to the “expert”
persona. A popular example from a prompt guide is having the AI role-
play a mentor: “Act like an experienced Python developer mentoring a
junior. Provide explanations and write docstrings. Rewrite the code to
optimize it.” The result in that case was that the AI used a more efficient
data structure (set to remove duplicates) and provided a one-line
solution for a function that originally used a loop. The role-play helped it
not only refactor but also explain why the new approach is better (in that
case, using a set is a well-known optimization for uniqueness).

Now, let’s walk through an example of refactoring to see how a prompt


can influence the outcome. We will use a scenario in JavaScript
(Node.js) where we have some less-than-ideal code and we want it
improved.

Suppose we have a function that makes two database calls and does
some processing. It works, but it’s not pretty – there’s duplicated code
and it’s hard to read. We’d like to refactor it for clarity and efficiency.
Here’s the original code:

// Original function: Fetches two lists and processes them (needs refactoring)
async function getCombinedData(apiClient) {
  // Fetch list of users
  const usersResponse = await apiClient.fetch('/users');
  if (!usersResponse.ok) {
    throw new Error('Failed to fetch users');
  }
  const users = await usersResponse.json();
  // Fetch list of orders
  const ordersResponse = await apiClient.fetch('/orders');
  if (!ordersResponse.ok) {
    throw new Error('Failed to fetch orders');
  }
  const orders = await ordersResponse.json();
  // Combine data (match user to orders)
  const result = [];
  for (let user of users) {
    const userOrders = orders.filter(o => o.userId === user.id);
    result.push({ user, orders: userOrders });
  }
  return result;
}

Issues in this code: We see duplicated fetch logic for users and orders
(could be refactored into a helper or loop), lack of error detail (just a
generic error message), and it could perhaps run the fetches in parallel
to improve speed. Let’s see two prompting approaches:

❌ Prompt 1 (Underspecified): “Refactor the above getCombinedData


function.”

AI’s response to Prompt 1:

Sure. Here’s a refactored version of getCombinedData:

async function getCombinedData(apiClient) {
  // Fetch users and orders in parallel
  const [usersResponse, ordersResponse] = await Promise.all([
    apiClient.fetch('/users'),
    apiClient.fetch('/orders')
  ]);
  if (!usersResponse.ok || !ordersResponse.ok) {
    throw new Error('Failed to fetch data');
  }
  const [users, orders] = await Promise.all([
    usersResponse.json(),
    ordersResponse.json()
  ]);
  // Map orders by userId for efficient lookup
  const ordersByUser = {};
  for (const order of orders) {
    if (!ordersByUser[order.userId]) {
      ordersByUser[order.userId] = [];
    }
    ordersByUser[order.userId].push(order);
  }
  // Combine data
  return users.map(user => ({
    user,
    orders: ordersByUser[user.id] || []
  }));
}

I fetched users and orders in parallel to improve performance. I also
consolidated error handling and used a map (ordersByUser) to group
orders by user, which makes the combination step cleaner and
potentially faster.

Analysis: This is actually a pretty good refactor, hitting several


improvements: parallel fetching, grouping orders by user for efficiency,
cleaner combination using .map and defaulting to an empty array. The AI
also explained what it did. So why do we label this prompt as
underspecified or “poor”? In this case, the AI happened to make good
choices, but it assumed a lot because the prompt was open-ended. It
assumed that performance should be improved (parallel requests, which
is a smart change but what if parallel wasn’t safe here?), and it changed
error handling to a generic message “Failed to fetch data,” which in
combining the errors lost the specificity of which call failed.

The risk with a generic “Refactor this” prompt is that the AI might make
changes that alter behavior in unintended ways. Here, combining error
handling might be acceptable or not, depending on requirements. The
assistant also made a structural change (using a dictionary to group
orders) – that’s fine and probably for the better, but if we had specific
constraints (like wanting to stream results or keep memory usage low),
we didn’t inform it. So while the result looks good, it was somewhat lucky
that the AI’s own interpretation of “refactor” aligned with what we wanted.
This illustrates that an underspecified prompt can sometimes get good
results on simple cases, but for complex code, it might go off in an
undesired direction or miss certain improvements you care about.

✅ Prompt 2 (Goal-Oriented): “Refactor the above getCombinedData


function to eliminate duplicate code and improve performance.
Specifically: (1) Avoid repeating the fetch logic for users and orders –
maybe use a helper or fetch them together. (2) Fetch both lists in parallel
if possible. (3) Keep the error handling for each fetch (we want to know
which call failed). (4) Improve the combination of data, possibly by using
a more efficient structure for lookup instead of a nested loop. Provide the
refactored code with comments explaining the changes.”

AI’s response to Prompt 2:

Sure! Here’s an improved version of getCombinedData with the
requested refactoring:

async function getCombinedData(apiClient) {
  // 1. Fetch users and orders in parallel to improve performance
  const [usersResponse, ordersResponse] = await Promise.all([
    apiClient.fetch('/users'),
    apiClient.fetch('/orders')
  ]);

  // 2. Handle errors separately to know which call failed
  if (!usersResponse.ok) {
    throw new Error('Failed to fetch users');
  }
  if (!ordersResponse.ok) {
    throw new Error('Failed to fetch orders');
  }

  // Parse JSON responses in parallel
  const [users, orders] = await Promise.all([
    usersResponse.json(),
    ordersResponse.json()
  ]);

  // 3. Use a lookup map for orders by userId for efficient matching
  const ordersByUser = orders.reduce((map, order) => {
    if (!map[order.userId]) {
      map[order.userId] = [];
    }
    map[order.userId].push(order);
    return map;
  }, {});

  // 4. Combine data using the lookup map
  return users.map(user => ({
    user,
    orders: ordersByUser[user.id] || [] // empty array if no orders for user
  }));
}

Explanation of changes:

• Parallel fetching: I used Promise.all to fetch users and orders at the same
time (point 1).

• Preserved individual error handling: I kept separate checks for


usersResponse.ok and ordersResponse.ok so that any error clearly
indicates which fetch failed (point 2).

• Parallel JSON parsing: Also used Promise.all for response.json() calls,


which is a minor improvement.

• Efficient combination: Instead of a nested loop, I created an


ordersByUser map (point 3). This groups orders by userId in a single
pass. Then I map over users to build the result, pulling each user’s
orders from the map in O(1) time (point 4). This is more efficient and
concise.

Analysis: This response directly addressed all the specified goals. The
code is refactored to be cleaner and faster, and it maintained separate
error messages as requested. The AI’s explanation confirms each point
we listed, which shows it carefully followed the prompt instructions. This
is a great outcome because we, as the prompter, defined what “refactor”
meant in this context. By doing so, we guided the AI to produce a solution
that matches our needs with minimal back-and-forth. If the AI had
overlooked one of the points (say it still merged the error handling), we
could easily prompt again: “Looks good, but please ensure the error
messages remain distinct for users vs orders.” – however, in this case it
wasn’t needed because our prompt was thorough.

This example demonstrates a key lesson: when you know what you
want improved, spell it out. AI is good at following instructions, but it
won’t read your mind. A broad “make this better” might work for simple
things, but for non-trivial code, you’ll get the best results by enumerating
what “better” means to you. This aligns with community insights that
clear, structured prompts yield significantly improved results.

Additional Refactoring Tips:

• Refactor in steps: If the code is very large or you have a long list of
changes, you can tackle them one at a time. For example, first ask the AI
to “refactor for readability” (focus on renaming, splitting functions), then
later “optimize the algorithm in this function.” This prevents
overwhelming the model with too many instructions at once and lets you
verify each change stepwise.

• Ask for alternative approaches: Maybe the AI’s first refactor works but
you’re curious about a different angle. You can ask, “Can you refactor it in
another way, perhaps using functional programming style (e.g. array
methods instead of loops)?” or “How about using recursion here instead
of iterative approach, just to compare?” This way, you can evaluate
different solutions. It’s like brainstorming multiple refactoring options with
a colleague.


• Combine refactoring with explanation to learn patterns: We touched on


this, but it’s worth emphasizing – use the AI as a learning tool. If it
refactors code in a clever way, study the output and explanation. You
might discover a new API or technique (like using reduce to build a map)
that you hadn’t used before. This is one reason to ask for explanations: it
turns an answer into a mini-tutorial, reinforcing your understanding of
best practices.

• Validation and testing: After any AI-generated refactor, always run your
tests or try the code with sample inputs. AI might inadvertently introduce
subtle bugs, especially if the prompt didn’t specify an important
constraint. For example, in our refactor, if the original code intentionally
separated fetch errors for logging but we didn’t mention logging, the
combined error might be less useful. It’s our job to catch that in review.
The AI can help by writing tests too – you could ask “Generate a few unit
tests for the refactored function” to ensure it behaves the same as before
on expected inputs.
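
As one hedged example, a minimal behavioral check for the refactored getCombinedData might look like the sketch below (the mock apiClient shape and the sample data are assumptions made for illustration):

const mockApiClient = {
  fetch: async (path) => ({
    ok: true,
    json: async () =>
      path === '/users'
        ? [{ id: 1, name: 'Alice' }]
        : [{ id: 101, userId: 1, total: 20 }],
  }),
};

getCombinedData(mockApiClient).then(result => {
  console.assert(result.length === 1);           // one user in, one combined record out
  console.assert(result[0].orders.length === 1); // Alice's single order was matched
});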

At this point, we’ve covered debugging and refactoring – improving


existing code. The next logical step is to use AI assistance for
implementing new features or generating new code. We’ll explore how
to prompt for that scenario effectively.

❌ Poor Prompt: "My useEffect isn't working right"

✅ Enhanced Prompt:

I have a React component that fetches user data, but it's causing infinite re-renders. Here's my code:

const UserProfile = ({ userId }) => {
  const [user, setUser] = useState(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetchUser(userId).then(setUser).finally(() => setLoading(false));
  }, [userId, setUser, setLoading]); // Problem is here

  return loading ? <div>Loading...</div> : <div>{user?.name}</div>;
};

Expected behavior: Should fetch user data once when userId changes
Actual behavior: Component re-renders infinitely
Error in console: "Warning: Maximum update depth exceeded"

What's causing this infinite loop and how do I fix the dependency array?

Why this works: Provides exact code, error message, expected vs


actual behavior, and focuses on a specific React pattern that's commonly
misunderstood.
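
One hedged sketch of the kind of fix an assistant might propose for this prompt – trimming the dependency array down to userId and guarding against updates after unmount (the details are illustrative, not a canonical answer):

useEffect(() => {
  let cancelled = false;
  setLoading(true);
  fetchUser(userId)
    .then(user => { if (!cancelled) setUser(user); })
    .finally(() => { if (!cancelled) setLoading(false); });
  return () => { cancelled = true; }; // avoid setting state after unmount
}, [userId]); // re-run only when userId actually changes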

❌ Poor Prompt: "Build the state management for my Next.js ecommerce app”

✅ Enhanced Prompt:

I'm building a Next.js 14 e-commerce app and need to design the state
management architecture. Here are my requirements:

Components:

• Product listing page (needs: products[], filters, pagination)

• Shopping cart (needs: cart items, totals, shipping info)


• User auth (needs: user profile, auth status, preferences)

• Real-time notifications (needs: toast messages, error states)

Technical constraints:

• Next.js 14 with App Router and Server Components

• TypeScript strict mode

• Server-side data fetching for SEO

• Client-side interactivity for cart/user actions

• State should persist across navigation

Should I use:

1. Zustand stores for each domain (cart, auth, notifications)

2. React Query/TanStack Query for server state + Zustand for client state

3. A single Zustand store with slices

Please provide a recommended architecture with code examples


showing how to structure stores and integrate with Next.js App Router
patterns.

Why this works: Real-world scenario with specific tech stack, clear
requirements, and asks for architectural guidance with implementation
details.
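
As a hedged illustration of option 1 from that prompt, a single-domain Zustand store might be sketched roughly like this (the field names and store shape are assumptions, not a recommended architecture):

import { create } from 'zustand';

// Hypothetical cart store for the cart domain
export const useCartStore = create((set, get) => ({
  items: [],
  addItem: (item) => set((state) => ({ items: [...state.items, item] })),
  removeItem: (id) => set((state) => ({ items: state.items.filter(i => i.id !== id) })),
  cartTotal: () => get().items.reduce((sum, i) => sum + i.price * i.quantity, 0),
}));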

One of the most exciting uses of AI code assistants is to help you write
new code from scratch or integrate a new feature into an existing
codebase. This could range from generating a boilerplate for a React
component to writing a new API endpoint in an Express app. The
challenge here is often that these tasks are open-ended – there are
many ways to implement a feature. Prompt engineering for code
generation is about guiding the AI to produce code that fits your needs
and style. Here are strategies to do that:

1. Start with high-level instructions, then drill down. Begin by


outlining what you want to build in plain language, possibly breaking it
into smaller tasks (similar to our advice on breaking down complex tasks
earlier). For example, say you want to add a search bar feature to an
existing web app. You might first prompt: “Outline a plan to add a search
feature that filters a list of products by name in my React app. The
products are fetched from an API.”

The AI might give you a step-by-step plan: “1. Add an input field for the
search query. 2. Add state to hold the query. 3. Filter the products list
based on the query. 4. Ensure it’s case-insensitive, etc.” Once you have
this plan (which you can refine with the AI’s help), you can tackle each
bullet with focused prompts.

For instance: “Okay, implement step 1: create a SearchBar component


with an input that updates a searchQuery state.” After that, “Implement
step 3: given the searchQuery and an array of products, filter the
products (case-insensitive match on name).” By dividing the feature, you
ensure each prompt is specific and the responses are manageable. This
also mirrors iterative development – you can test each piece as it’s built.
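
For instance, the step 1 prompt might yield something like this small sketch (the component and prop names are illustrative, not prescribed here):

import React from 'react';

// Controlled search input; the parent owns the searchQuery state
function SearchBar({ searchQuery, onSearchChange }) {
  return (
    <input
      type="text"
      placeholder="Search products by name"
      value={searchQuery}
      onChange={(e) => onSearchChange(e.target.value)}
    />
  );
}

export default SearchBar;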

2. Provide relevant context or reference code. If you’re adding a


feature to an existing project, it helps tremendously to show the AI how
similar things are done in that project. For example, if you already have a
component that is similar to what you want, you can say: “Here is an
existing UserList component (code…). Now create a ProductList
component that is similar but includes a search bar.”

The AI will see the patterns (maybe you use certain libraries or style
conventions) and apply them. Having relevant files open or referencing
them in your prompt provides context that leads to more project-specific
and consistent code suggestions. Another trick: if your project uses a
particular coding style or architecture (say Redux for state or a certain
CSS framework), mention that. “We use Redux for state management –
integrate the search state into Redux store.”

A well-trained model will then generate code consistent with Redux


patterns, etc. Essentially, you are teaching the AI about your project’s
environment so it can tailor the output. Some assistants can even use
your entire repository as context to draw from; if using those, ensure you
point it to similar modules or documentation in your repo.

• If starting something new but you have a preferred approach, you can
also mention that: “I’d like to implement this using functional
programming style (no external state, using array methods).” Or, “Ensure
to follow the MVC pattern and put logic in the controller, not the view.”
These are the kind of details a senior engineer might remind a junior
about, and here you are the senior telling the AI.

3. Use comments and TODOs as inline prompts. When working


directly in an IDE with Copilot, one effective workflow is writing a
comment that describes the next chunk of code you need, then letting
the AI autocomplete it. For example, in a Node.js backend, you might
write: // TODO: Validate the request payload (ensure name and email
are provided) and then start the next line. Copilot often picks up on the
intent and generates a block of code performing that validation. This
works because your comment is effectively a natural language prompt.
However, be prepared to edit the generated code if the AI misinterprets –
as always, verify its correctness.
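
As a hedged illustration, the kind of completion such a comment might elicit could look roughly like this (the Express handler shape and field names are assumptions, not actual Copilot output):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/users', (req, res) => {
  // TODO: Validate the request payload (ensure name and email are provided)
  const { name, email } = req.body || {};
  if (!name || !email) {
    return res.status(400).json({ error: 'name and email are required' });
  }
  // ...continue handling the request with a valid payload...
  res.status(201).json({ name, email });
});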

4. Provide examples of expected input/output or usage. Similar to


what we discussed before, if you’re asking the AI to implement a new
function, include a quick example of how it will be used or a simple test
case. For instance: “Implement a function formatPrice(amount) in
JavaScript that takes a number (like 2.5) and returns a string formatted in
USD (like $2.50). For example, formatPrice(2.5) should return '$2.50'.”

By giving that example, you constrain the AI to produce a function


consistent with it. Without the example, the AI might assume some other
formatting or currency. The difference could be subtle but important.
Another example in a web context: “Implement an Express middleware
that logs requests. For instance, a GET request to /users should log
‘GET /users’ to the console.” This makes it clear what the output should
look like. Including expected behavior in the prompt acts as a test the AI
will try to satisfy.
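
A minimal sketch of an implementation consistent with that example (one reasonable approach among several, using the built-in Intl API):

function formatPrice(amount) {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
  }).format(amount);
}

console.log(formatPrice(2.5)); // "$2.50"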

5. When the result isn’t what you want, rewrite the prompt with
more detail or constraints. It’s common that the first attempt at
generating a new feature doesn’t nail it. Maybe the code runs but is not
idiomatic, or it missed a requirement. Instead of getting frustrated, treat
the AI like a junior dev who gave a first draft – now you need to give
feedback. For example, “The solution works but I’d prefer if you used the
built-in array filter method instead of a for loop.” Or, “Can you refactor the
generated component to use React Hooks for state instead of a class
component? Our codebase is all functional components.” You can also
add new constraints: “Also, ensure the function runs in O(n) time or
better, because n could be large.” This iterative prompting is powerful. A
real-world scenario: one developer asked an LLM to generate code to
draw an ice cream cone using a JS canvas library, but it kept giving
irrelevant output until they refined the prompt with more specifics and
context. The lesson is, don’t give up after one try. Figure out what was
lacking or misunderstood in the prompt and clarify it. This is the essence
of prompt engineering – each tweak can guide the model closer to what
you envision.


Let’s illustrate feature implementation prompting with an example


scenario:

Suppose we have a React application and we want to add a new


component: a product list with a search filter and a sort dropdown. This is
a fairly typical feature in a web app. We’ll focus on prompting for the
search functionality within this component for brevity.

Initial Prompt (Feature request): “Create a React functional


component called ProductList that displays a list of products and
includes a text input to filter the products by name. The component
should: fetch an array of products from /api/products (assume it returns
JSON array of {id, name, ...}), store them in state, and allow the user to
type in a search box to filter the displayed products by name (case-
insensitive match). Display the filtered list in a simple <ul> with product
names. Include basic error handling (e.g. show an error message if the
API call fails) and a loading state while fetching.”

That’s a long prompt but it’s specific. Let’s assume we give this to an LLM
in a chat.

AI’s response (summary of what it might produce): The AI would likely


output a ProductList component something like:

import React, { useState, useEffect } from 'react';

function ProductList() {
  const [products, setProducts] = useState([]);
  const [search, setSearch] = useState('');
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    async function fetchProducts() {
      try {
        const res = await fetch('/api/products');
        if (!res.ok) throw new Error('Network response was not ok');
        const data = await res.json();
        setProducts(data);
      } catch (err) {
        console.error('Failed to fetch products:', err);
        setError('Failed to load products.');
      } finally {
        setLoading(false);
      }
    }
    fetchProducts();
  }, []);

  const handleSearchChange = (e) => {
    setSearch(e.target.value);
  };

  // Filter products by search query (case-insensitive)
  const filteredProducts = products.filter(product =>
    product.name.toLowerCase().includes(search.toLowerCase())
  );

  if (loading) return <div>Loading products...</div>;
  if (error) return <div>{error}</div>;

  return (
    <div className="product-list">
      <input
        type="text"
        placeholder="Search products"
        value={search}
        onChange={handleSearchChange}
      />
      <ul>
        {filteredProducts.map(prod => (
          <li key={prod.id}>{prod.name}</li>
        ))}
      </ul>
    </div>
  );
}

export default ProductList;

(Plus possibly some comments in the code explaining each part, if we


asked for them.)

Now, let’s analyze this result and consider prompt refinements:

• Suppose the above code came back, but our project uses a context or a
global store for data fetching instead of fetching inside the component.
This is something we didn’t specify. The AI made a choice to fetch in
useEffect. If we prefer using a custom hook or context, we should have
hinted at that. So we could refine our prompt: “Actually, in our app we use
a custom hook useProducts() that already handles fetching. Please
refactor the component to use useProducts hook instead of directly
calling fetch.” The AI would then adjust, perhaps assuming useProducts
returns { products, loading, error } and simplify the component
accordingly.

• Another refinement: maybe we realize we also want a sort dropdown


(which we didn’t mention initially). We can now extend the conversation:
“Great, now add a dropdown to sort the products by name (A-Z or Z-A).
The dropdown should let the user choose ascending or descending, and
the list should sort accordingly in addition to the filtering.” Because the AI
has the context of the existing code, it can insert a sort state and adjust
the rendering. We provided a clear new requirement, and it will attempt
to fulfill it, likely by adding something like:

const [sortOrder, setSortOrder] = useState('asc');


// ... a select input for sortOrder ...
// and sort the filteredProducts before rendering:
const sortedProducts =
[...filteredProducts].sort((a, b) => {
if (sortOrder === 'asc') return
a.name.localeCompare(b.name);
else return b.name.localeCompare(a.name);
});

• (plus the dropdown UI).

By iterating like this, feature by feature, we simulate a development cycle


with the AI. This is far more effective than trying to prompt for the entire,
complex component with all features in one go initially. It reduces
mistakes and allows mid-course corrections as requirements become
clearer.

• If the AI makes a subtle mistake (say it forgot to make the search filter
case-insensitive), we just point that out: “Make the search case-
insensitive.” It will adjust the filter to use lowercase comparison (which in
our pseudo-output it already did, but if not it would fix it).

This example shows that implementing features with AI is all about


incremental development and prompt refinement. A Twitter thread
might exclaim how someone built a small app by continually prompting
an LLM for each part – that’s essentially the approach: build, review,
refine, extend. Each prompt is like a commit in your development
process.

Additional tips for feature implementation:

• Let the AI scaffold, then you fill in specifics: Sometimes it’s useful to have
the AI generate a rough structure, then you tweak it. For example,
“Generate the skeleton of a Node.js Express route for user registration
with validation and error handling.” It might produce a generic route with
placeholders. You can then fill in the actual validation rules or database
calls which are specific to your app. The AI saves you from writing
boilerplate, and you handle the custom logic if it’s sensitive.

• Ask for edge case handling: When generating a feature, you might
prompt the AI to think of edge cases: “What edge cases should we
consider for this feature (and can you handle them in the code)?” For
instance, in the search example, an edge case might be “what if the
products haven’t loaded yet when the user types?” (though our code
handles that via loading state) or “what if two products have the same
name” (not a big issue but maybe mention it). The AI could mention
things like empty result handling, very large lists (maybe needing
debounce for search input), etc. This is a way to leverage the AI’s training
on common pitfalls.

• Documentation-driven development: A nifty approach some have taken


is writing a docstring or usage example first and having the AI implement
the function to match. For example:

/**
* Returns the nth Fibonacci number.
* @param {number} n - The position in Fibonacci
sequence (0-indexed).
* @returns {number} The nth Fibonacci number.
*
* Example: fibonacci(5) -> 5 (sequence:
0,1,1,2,3,5,…)
*/
function fibonacci(n) {
// ... implementation
}

• If you write the above comment and function signature, an LLM might fill
in the implementation correctly because the comment describes exactly
what to do and even gives an example. This technique ensures you
clarify the feature in words first (which is a good practice generally), and
then the AI uses that as the spec to write the code.
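
Given that docstring as the spec, one plausible completion is sketched below (an iterative version; the model could just as well choose recursion or memoization):

function fibonacci(n) {
  let prev = 0; // fibonacci(0)
  let curr = 1; // fibonacci(1)
  for (let i = 0; i < n; i++) {
    [prev, curr] = [curr, prev + curr];
  }
  return prev;
}

console.log(fibonacci(5)); // 5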

Having covered prompting strategies for debugging, refactoring, and


new code generation, let’s turn our attention to some common pitfalls
and anti-patterns in prompt engineering for coding. Understanding
these will help you avoid wasting time on unproductive interactions and
quickly adjust when the AI isn’t giving you what you need.

Not all prompts are created equal. By now, we’ve seen numerous
examples of effective prompts, but it’s equally instructive to recognize
anti-patterns – common mistakes that lead to poor AI responses.

Here are some frequent prompt failures and how to fix them:

• Anti-Pattern: The Vague Prompt. This is the classic “It doesn’t work,
please fix it” or “Write something that does X” without enough detail. We
saw an example of this when the question “Why isn’t my function
working?” got a useless answer. Vague prompts force the AI to guess
the context and often result in generic advice or irrelevant code. The fix is
straightforward: add context and specifics. If you find yourself asking a
question and the answer feels like a Magic 8-ball response (“Have you
tried checking X?”), stop and reframe your query with more details (error
messages, code excerpt, expected vs actual outcome, etc.). A good
practice is to read your prompt and ask, “Could this question apply to
dozens of different scenarios?” If yes, it’s too vague. Make it so specific
that it could only apply to your scenario.

• Anti-Pattern: The Overloaded Prompt. This is the opposite issue: asking the AI to do too many things at once. For instance, “Generate a
complete Node.js app with authentication, a front-end in React, and
deployment scripts.” Or even on a smaller scale, “Fix these 5 bugs and
also add these 3 features in one go.” The AI might attempt it, but you’ll
likely get a jumbled or incomplete result, or it might ignore some parts of
the request. Even if it addresses everything, the response will be long
and harder to verify. The remedy is to split the tasks. Prioritize: do one
thing at a time, as we emphasized earlier. This makes it easier to catch
mistakes and ensures the model stays focused. If you catch yourself
writing a paragraph with multiple “and” in the instructions, consider
breaking it into separate prompts or sequential steps.

• Anti-Pattern: Missing the Question. Sometimes users will present a lot of information but never clearly ask a question or specify what they
need. For example, dumping a large code snippet and just saying
“Here’s my code.” This can confuse the AI – it doesn’t know what you
want. Always include a clear ask, such as “Identify any bugs in the above
code”, “Explain what this code does”, or “Complete the TODOs in the
code”. A prompt should have a purpose. If you just provide text without a
question or instruction, the AI might make incorrect assumptions (like
summarizing the code instead of fixing it, etc.). Make sure the AI knows
why you showed it some code. Even a simple addition like, “What’s
wrong with this code?” or “Please continue implementing this function.”
gives it direction.

• Anti-Pattern: Vague Success Criteria. This is a subtle one – sometimes you might ask for an optimization or improvement, but you
don’t define what success looks like. For example, “Make this function
faster.” Faster by what metric? If the AI doesn’t know your performance
constraints, it might micro-optimize something that doesn’t matter or use
an approach that’s theoretically faster but practically negligible. Or “make
this code cleaner” – “cleaner” is subjective. We dealt with this by explicitly
stating goals like “reduce duplication” or “improve variable names” etc.
The fix: quantify or qualify the improvement. E.g., “optimize this function to run in linear time (current version is quadratic)” or “refactor this to remove global variables and use a class instead.” (A brief sketch of that kind of quadratic-to-linear rewrite appears after this list.) Basically, be explicit
about what problem you’re solving with the refactor or feature. If you
leave it too open, the AI might solve a different problem than the one you
care about.

• Anti-Pattern: Ignoring AI’s Clarification or Output. Sometimes the AI might respond with a clarifying question or an assumption. For instance:
“Are you using React class components or functional components?” or “I
assume the input is a string – please confirm.” If you ignore these and
just reiterate your request, you’re missing an opportunity to improve the
prompt. The AI is signaling that it needs more info. Always answer its
questions or refine your prompt to include those details. Additionally, if
the AI’s output is clearly off (like it misunderstood the question), don’t just
retry the same prompt verbatim. Take a moment to adjust your wording.
Maybe your prompt had an ambiguous phrase or omitted something
essential. Treat it like a conversation – if a human misunderstood, you’d
explain differently; do the same for the AI.

• Anti-Pattern: Varying Style or Inconsistency. If you keep changing how you ask or mixing different formats in one go, the model can get confused. For example, switching between first-person and third-person in instructions, or mixing pseudocode with actual code in a confusing
way. Try to maintain a consistent style within a single prompt. If you
provide examples, ensure they are clearly delineated (use Markdown
triple backticks for code, quotes for input/output examples, etc.).
Consistency helps the model parse your intent correctly. Also, if you
have a preferred style (say, ES6 vs ES5 syntax), consistently mention it,
otherwise the model might suggest one way in one prompt and another
way later.

• Anti-Pattern: Vague references like “above code”. When using chat, if you say “the above function” or “the previous output”, be sure the
reference is clear. If the conversation is long and you say “refactor the
above code”, the AI might lose track or pick the wrong code snippet to
refactor. It’s safer to either quote the code again or specifically name the
function you want refactored. Models have a limited attention window,
and although many LLMs can refer to prior parts of the conversation,
giving it explicit context again can help avoid confusion. This is especially
true if some time (or several messages) passed since the code was
shown.
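To make the “vague success criteria” point above concrete, here is the kind of rewrite that a prompt like “optimize this to run in linear time (current version is quadratic)” is asking for (an illustrative sketch, not tied to any particular codebase):

// Quadratic: compares every pair of items - O(n^2).
function hasDuplicateSlow(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}

// Linear: a Set answers "seen before?" in constant time - O(n).
function hasDuplicateFast(items) {
  const seen = new Set();
  for (const item of items) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}

Naming the target (“linear time”, “no nested loops”) gives the model a concrete bar to clear, instead of leaving “faster” open to interpretation.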

Finally, here’s a tactical approach to rewriting prompts when things go wrong:

• Identify what was missing or incorrect in the AI’s response. Did it solve a different problem? Did it produce an error or a solution that
doesn’t fit? For example, maybe you asked for a solution in TypeScript
but it gave plain JavaScript. Or it wrote a recursive solution when you
explicitly wanted iterative. Pinpoint the discrepancy.

• Add or emphasize that requirement in a new prompt. You might say, “The solution should be in TypeScript, not JavaScript. Please include type annotations.” Or, “I mentioned I wanted an iterative solution – please avoid recursion and use a loop instead.” Sometimes it helps to literally
use phrases like “Note:” or “Important:” in your prompt to highlight key
constraints (the model doesn’t have emotions, but it does weigh certain
phrasing as indicating importance). For instance: “Important: Do not
use any external libraries for this.” or “Note: The code must run in the
browser, so no Node-specific APIs.”.

• Break down the request further if needed. If the AI repeatedly fails on a complex request, try asking for a smaller piece first. Or ask a question
that might enlighten the situation: “Do you understand what I mean by
X?” The model might then paraphrase what it thinks you mean, and you
can correct it if it’s wrong. This is meta-prompting – discussing the
prompt itself – and can sometimes resolve misunderstandings.

• Consider starting fresh if the thread is stuck. Sometimes after multiple tries, the conversation may reach a confused state. It can help to
start a new session (or clear the chat history for a moment) and prompt
from scratch with a more refined ask that you’ve formulated based on
previous failures. The model doesn’t mind repetition, and a fresh context
can eliminate any accumulated confusion from prior messages.

By being aware of these anti-patterns and their solutions, you’ll become much faster at adjusting your prompts on the fly. Prompt engineering for
developers is very much an iterative, feedback-driven process (as any
programming task is!). The good news is, you now have a lot of patterns
and examples in your toolkit to draw from.

Prompt engineering is a bit of an art and a bit of a science – and as we’ve seen, it’s quickly becoming a must-have skill for developers working with
AI code assistants. By crafting clear, context-rich prompts, you
essentially teach the AI what you need, just as you would onboard a human team member or explain a problem to a peer. Throughout this article, we explored how to systematically approach prompts for
debugging, refactoring, and feature implementation:

• We learned to feed the AI the same information you’d give a colleague when asking for help: what the code is supposed to do, how it’s misbehaving, relevant code snippets, and so on – thereby getting much more targeted help.

• We saw the power of iterating with the AI, whether it’s stepping through a
function’s logic line by line, or refining a solution through multiple prompts
(like turning a recursive solution into an iterative one, then improving
variable names). Patience and iteration turn the AI into a true pair
programmer rather than a one-shot code generator.

• We utilized role-playing and personas to up-level the responses – treating the AI as a code reviewer, a mentor, or an expert in a certain stack. This often produces more rigorous and explanation-rich outputs,
which not only solve the problem but educate us in the process.

• For refactoring and optimization, we emphasized defining what “good” looks like (be it faster, cleaner, more idiomatic, etc.), and the AI showed
that it can apply known best practices when guided (like parallelizing
calls, removing duplication, handling errors properly). It’s like having
access to the collective wisdom of countless code reviewers – but you
have to ask the right questions to tap into it.

• We also demonstrated building new features step by step with AI assistance, showing that even complex tasks can be decomposed and
tackled one prompt at a time. The AI can scaffold boilerplate, suggest
implementations, and even highlight edge cases if prompted – acting as
a knowledgeable co-developer who’s always available.

• Along the way, we identified pitfalls to avoid: keeping prompts neither too
vague nor too overloaded, always specifying our intent and constraints,
and being ready to adjust when the AI’s output isn’t on target. We cited
concrete examples of bad prompts and saw how minor changes (like
including an error message or expected output) can dramatically
improve the outcome.

As you incorporate these techniques into your workflow, you’ll likely find
that working with AI becomes more intuitive. You’ll develop a feel for
what phrasing gets the best results and how to guide the model when it
goes off course. Remember that the AI is a product of its training data – it
has seen many examples of code and problem-solving, but it’s you who
provides direction on which of those examples are relevant now. In
essence, you set the context, and the AI follows through.

It’s also worth noting that prompt engineering is an evolving practice. The community of developers is constantly discovering new
tricks – a clever one-liner prompt or a structured template can suddenly
go viral on social media because it unlocks a capability people didn’t
realize was there. Stay tuned to those discussions (on Hacker News,
Twitter, etc.) because they can inspire your own techniques. But also,
don’t be afraid to experiment yourself. Treat the AI as a flexible tool – if
you have an idea (“what if I ask it to draw an ASCII diagram of my
architecture?”), just try it. You might be surprised at the results, and if it
fails, no harm done – you’ve learned something about the model’s limits
or needs.

In summary, prompt engineering empowers developers to get more out of AI assistants. It’s the difference between a frustrating
experience (“this tool is useless, it gave me nonsense”) and a productive
one (“this feels like pair programming with an expert who writes
boilerplate for me”). By applying the playbook of strategies we’ve
covered – from providing exhaustive context to nudging the AI’s style
and thinking – you can turn these code-focused AI tools into true
extensions of your development workflow. The end result is not only that
you code faster, but often you pick up new insights and patterns along
the way (as the AI explains things or suggests alternatives), leveling up
your own skillset.

As a final takeaway, remember that prompting is an iterative dialogue. Approach it with the same clarity, patience, and thoroughness you’d use
when communicating with another engineer. Do that, and you’ll find that
AI assistants can significantly amplify your abilities – helping you debug
quicker, refactor smarter, and implement features with greater ease.

Happy prompting, and happy coding!

Further reading:

• How to write better prompts for GitHub Copilot. GitHub Blog

• ChatGPT Prompt Engineering for Developers: 13 Best Examples

• Using ChatGPT for Efficient Debugging

• Prompt Engineering for Lazy Programmers: Getting Exactly the Code You Want

• Best practices for prompting GitHub Copilot in VS Code

• ChatGPT: A new-age Debugger, 10 Prompts

• ChatGPT Prompts for Code Review and Debugging