
The Nuances of Prompt Engineering for Large Language Models: From Fundamentals to Advanced Applications


The ability to effectively communicate with large language models (LLMs) is paramount to harnessing their vast
potential across diverse applications. Understanding the subtle yet critical distinctions between various prompting
techniques, coupled with an awareness of best practices, allows users to elicit more accurate, relevant, and
insightful responses. This report delves into the spectrum of prompting methodologies, from foundational
approaches like zero-shot, one-shot, and few-shot prompting, to more advanced techniques designed for complex
reasoning and task execution. Furthermore, it examines the crucial role of prompt length and context, the
determination of optimal example counts, a comparative analysis of specific LLM performance, and common
challenges encountered in prompt engineering.

Demystifying the Fundamentals: Zero-Shot, One-Shot, and Few-Shot Prompting

At the core of interacting with LLMs lies the concept of "shots," which refers to the number of examples provided within the prompt to guide the model's output. Zero-shot prompting represents the simplest form, where the model is given a direct instruction to perform a task without any accompanying examples [1]. This technique relies entirely on the LLM's pre-trained knowledge to infer the desired response [1]. For instance, a prompt like "Classify the sentiment of the following text as positive, negative, or neutral: I think the vacation was okay" exemplifies zero-shot prompting, as no prior examples of text and their corresponding sentiments are provided [2]. This approach can be effective for tasks that the model has likely encountered frequently during its training, such as basic classification, question answering, or text summarization [1]. Its simplicity and ease of use make it a valuable starting point, especially when relevant data for examples is scarce [1].

One-shot prompting builds upon zero-shot by including a single example within the prompt before presenting the new task [2]. This single demonstration helps to clarify the expected format, tone, or style of the desired output, often leading to improved model performance [2]. Consider the sentiment classification task again: "Classify the sentiment of the following text as positive, negative, or neutral. Text: The product is terrible. Sentiment: Negative. Text: I think the vacation was okay. Sentiment:" Here, the model is shown one example of a text and its sentiment before being asked to classify a new text [2]. This technique can be particularly useful for tasks that require more specific guidance or when zero-shot prompting yields ambiguous results [2].

Few-shot prompting involves providing the LLM with multiple examples (typically two to ten) within the prompt to illustrate the task [2]. These examples allow the model to recognize patterns and generalize them to new, similar tasks, often resulting in higher accuracy and consistency, especially for more complex tasks [2]. For example, in a few-shot prompt for sentiment classification, several examples of text with their corresponding sentiments (positive or negative) might be provided before the new text requiring classification [9]. This method, known as in-context learning, enables the AI to learn directly from the examples embedded in the prompt, rather than solely relying on its pre-trained knowledge [2]. Few-shot prompting is beneficial for tasks with varied inputs, those requiring precise formatting, or those demanding a higher degree of accuracy, such as generating structured outputs or handling nuanced classifications [2].

| Technique | Definition | Number of Examples | Typical Use Cases | Advantages | Limitations |
|---|---|---|---|---|---|
| Zero-Shot | Prompting without providing any examples. | 0 | Simple tasks, general queries, common classifications | Simplicity, ease of use, no additional data required | May not work well for complex or nuanced tasks; results can be unpredictable |
| One-Shot | Prompting with a single example before the new task. | 1 | Tasks needing specific guidance, basic classification | Clarifies expectations, improves performance over zero-shot | May struggle with complex tasks; limited scope |
| Few-Shot | Prompting with two or more examples to guide the model. | 2+ | Complex tasks, varied inputs, precise formatting | Helps recognize patterns, improved accuracy and consistency | Requires careful selection of examples; diminishing returns with too many |
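
To make the distinction concrete, the following is a minimal sketch of how these three prompt styles might be assembled for the sentiment-classification task above. The helper and variable names are illustrative, and `complete(prompt)` is only a placeholder for whichever LLM API is actually being called.

```python
TASK = "Classify the sentiment of the following text as positive, negative, or neutral."

def build_prompt(new_text: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt with zero or more labelled examples followed by the new text."""
    lines = [TASK]
    for text, sentiment in examples:  # each example demonstrates the expected format
        lines.append(f"Text: {text}\nSentiment: {sentiment}")
    lines.append(f"Text: {new_text}\nSentiment:")
    return "\n\n".join(lines)

zero_shot = build_prompt("I think the vacation was okay", examples=[])
one_shot = build_prompt(
    "I think the vacation was okay",
    examples=[("The product is terrible.", "Negative")],
)
few_shot = build_prompt(
    "I think the vacation was okay",
    examples=[
        ("The product is terrible.", "Negative"),
        ("This is the best purchase I have made all year.", "Positive"),
        ("The package arrived on Tuesday.", "Neutral"),
    ],
)
# answer = complete(few_shot)  # `complete` stands in for the actual LLM call
```

With the same helper, moving from zero-shot to few-shot is simply a matter of supplying more labelled pairs, which keeps the example format consistent across all shots.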

The Art of Crafting Powerful Prompts: Best Practices and Guidelines

Effective prompt engineering is crucial for maximizing the performance of LLMs, and several best practices can significantly enhance the quality and relevance of the generated outputs. One fundamental principle is to be as clear and specific as possible in the prompt [13]. Ambiguous or vague instructions can lead to irrelevant or generic responses [13]. Providing detailed context and specifying the desired format, output length, level of detail, and even tone and style can guide the model towards the intended outcome [13]. For instance, instead of a broad prompt like "Summarize this article," a more specific prompt would be "Summarize this article in three bullet points, focusing on the main arguments and conclusions, and adopt a formal tone" [13].

Incorporating examples, as discussed in the context of one-shot and few-shot prompting, is another powerful technique [13]. Examples demonstrate the expected output format and content, setting a clear precedent for the model to follow [12]. Furthermore, providing the model with relevant data within the prompt can significantly improve the accuracy and insightfulness of the response [13]. When supplying data, it is beneficial to provide context and, where possible, cite the source to enhance credibility [13].

Instead of instructing the model on what not to do, it is generally more effective to provide positive instructions on what to do instead [13]. This approach reduces ambiguity and guides the model towards the desired behavior more directly [13]. For complex tasks, breaking them down into simpler, more manageable subtasks can improve the clarity of the prompt and the quality of the resulting output [13]. Experimentation and iterative refinement are also key aspects of successful prompt engineering [14]. Users should try different variations of prompts, analyze the model's responses, and adjust their instructions accordingly to achieve the desired results [14].
A practical workflow often involves starting with zero-shot prompting to assess the model's inherent capabilities [15]. If the results are not satisfactory, one can then progress to one-shot or few-shot prompting by adding relevant examples [15]. Understanding the specific model's strengths and limitations is also crucial for crafting effective prompts [13]. Some advanced techniques, such as using "perspective prompts" to explore different viewpoints or "leading words" for code generation, can further enhance the interaction with LLMs [14]. The emphasis on clarity and specificity across various sources highlights a fundamental principle: well-defined prompts minimize the model's need to interpret intent, leading to more predictable and relevant outputs. The recommendation to begin with zero-shot prompting and incrementally add examples offers an efficient strategy, allowing users to leverage the simplest approach first before increasing the complexity of the prompt.
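
The zero-shot-first workflow described above can be expressed as a simple escalation loop. The sketch below is illustrative only: it reuses the hypothetical `build_prompt` helper from the earlier example, takes the LLM call as a plain callable rather than assuming any particular API, and uses a label whitelist as a stand-in for whatever acceptance check suits the task.

```python
from typing import Callable

VALID_LABELS = {"Positive", "Negative", "Neutral"}

def is_acceptable(answer: str) -> bool:
    """Task-specific check; here, the reply must be exactly one of the allowed labels."""
    return answer.strip() in VALID_LABELS

def classify_with_escalation(
    llm: Callable[[str], str],
    new_text: str,
    examples: list[tuple[str, str]],
) -> str:
    """Try zero-shot first, then add examples only if the cheaper prompt fails."""
    answer = ""
    for k in (0, 1, len(examples)):  # zero-shot, one-shot, then the full few-shot set
        answer = llm(build_prompt(new_text, examples[:k]))
        if is_acceptable(answer):
            return answer
    return answer  # still ambiguous; flag for human review upstream
```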

The Impact of Prompt Length and Context on LLM Performance


The length of the prompt and the amount of context provided play a significant role in shaping the output quality and coherence of LLMs [22]. Short prompts may lack the necessary context, resulting in incomplete or overly generic responses [22]. Conversely, while long prompts can provide a rich understanding of the topic, they can also overwhelm the model, potentially leading to a phenomenon known as the "lost in the middle" effect, where the model struggles to retain and process information located in the middle of a lengthy input [17]. LLMs operate with fixed token limits, and exceeding these limits can lead to truncation or suboptimal performance [24].

Medium-length prompts often strike a better balance, providing sufficient context for the model to understand the task and generate more detailed and relevant information without overwhelming its processing capabilities [22]. The optimal prompt length is not a one-size-fits-all answer and depends on the complexity of the task, the specific model being used, and the desired level of detail in the output [22]. As a general guideline, aiming for concise yet informative prompts, typically ranging from 50 to 200 words, can be effective [22].

Context length, also referred to as the context window, defines the maximum number of tokens an LLM can process in a single input [25]. A longer context length generally allows the model to produce higher quality outputs by enabling it to consider more information, but it might also impact the speed of processing [25]. Insufficient context can lead to misinterpretations and inaccurate responses, as the model may not have enough information to understand the nuances of the request [7]. Conversely, excessive context can cause the model to lose focus on critical information, particularly if that information is located in the middle of the input [24]. The "lost in the middle" effect suggests that the placement of key instructions and information within the prompt matters, so that they are not overlooked.

Strategies for effectively managing context length include focusing on the most relevant information, breaking down complex tasks into smaller steps, and structuring prompts in a way that encodes essential details for consistency [18]. Techniques like Retrieval-Augmented Generation (RAG) can also be employed to incorporate external knowledge into the prompt within the context window, enhancing the model's understanding and the relevance of its responses [11]. The interplay between prompt length and context length underscores the need to find a balance between providing adequate information and avoiding information overload to optimize LLM performance and output quality.
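
One practical way to respect a token budget while keeping instructions away from the middle of the prompt is to pack retrieved passages greedily until the budget is exhausted. The sketch below illustrates that idea under stated assumptions rather than prescribing a method: it uses the open-source tiktoken tokenizer as a stand-in for whichever tokenizer the target model actually uses, and the budget value is arbitrary.

```python
import tiktoken  # OpenAI's open-source tokenizer; a proxy for the target model's own tokenizer

ENC = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(ENC.encode(text))

def assemble_prompt(question: str, passages: list[str], budget: int = 3000) -> str:
    """Pack retrieved passages, most relevant first, until the token budget is hit.
    Instructions sit at the start and end to avoid the 'lost in the middle' effect."""
    header = "Answer the question using only the context below.\n\nContext:"
    footer = f"\nQuestion: {question}\nAnswer concisely, citing the context."
    used = count_tokens(header) + count_tokens(footer)
    kept = []
    for passage in passages:  # passages assumed pre-sorted by relevance
        cost = count_tokens(passage)
        if used + cost > budget:
            break  # drop the remainder rather than truncating mid-passage
        kept.append(passage)
        used += cost
    return "\n\n".join([header, *kept, footer])
```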

Determining the Optimal Number of Examples ("Shots") for Desired Outcomes

The number of examples, or "shots," required in a prompt is contingent upon the specific goals of the interaction and the complexity of the task at hand. While providing examples can significantly enhance the model's ability to understand and perform a task, research indicates that there are diminishing returns after a certain point, typically around two to three examples for many tasks [12]. Including an excessive number of examples can consume more tokens without a corresponding increase in the quality of the output [12]. A practical range to consider is two to five examples, generally not exceeding eight [12].

For straightforward tasks, such as basic sentiment analysis, a single prompt, potentially even a zero-shot or one-shot prompt, might be sufficient [12]. However, for more intricate or nuanced tasks, few-shot prompting, with its provision of multiple examples, often proves beneficial [2]. It is crucial to ensure that the examples provided are directly relevant to the task and cover a diverse range of potential inputs and desired outputs [10]. Maintaining a consistent format across all examples helps the model recognize the underlying patterns [8]. Furthermore, incorporating both positive and negative examples can provide the model with a more comprehensive understanding of the task by illustrating what constitutes both desired and undesired outputs [12]. The order in which examples are presented might also influence the model's output, with some strategies suggesting placing the most representative or best example last [12].
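
These guidelines can be folded into a small formatting helper. The function below is purely illustrative: the cap of five shots and the "best example last" placement mirror the heuristics above, but both are tunable choices rather than fixed rules.

```python
def format_shots(examples: list[dict], best_id: int, max_shots: int = 5) -> str:
    """examples: [{"text": ..., "label": ...}, ...]; best_id: index of the most representative one.
    Caps the shot count (diminishing returns) and keeps one shared format for every example."""
    best = examples[best_id]
    others = [ex for i, ex in enumerate(examples) if i != best_id]
    chosen = others[: max_shots - 1] + [best]  # most representative example goes last
    return "\n\n".join(f"Text: {ex['text']}\nSentiment: {ex['label']}" for ex in chosen)
```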

Interestingly, with the advent of LLMs boasting larger context windows, the concept of "many-shot" learning has emerged, involving the inclusion of hundreds or even thousands of examples within the prompt [28]. This approach has shown potential for significant performance gains, particularly on complex reasoning tasks, and might even rival traditional fine-tuning in certain scenarios [28]. Ultimately, determining the optimal number of shots often requires experimentation [8]. The principle of diminishing returns in few-shot prompting suggests that focusing on the quality and diversity of examples is more effective than simply increasing their quantity. The emergence of many-shot learning, however, indicates a potential shift in strategies as context windows expand, allowing for more extensive in-context guidance.

Exploring Advanced Prompting Techniques Beyond the Basics


Beyond the foundational prompting techniques, a plethora of advanced methodologies exist to further refine interactions with LLMs and tackle more complex challenges. Chain-of-Thought (CoT) prompting encourages the model to break down intricate tasks into a logical sequence of intermediate reasoning steps [13]. This technique enhances the model's reasoning capabilities and makes its thought process more transparent [31]. CoT prompting can be combined with few-shot prompting by providing examples of step-by-step reasoning [32]. A simpler variant, zero-shot CoT, involves appending a phrase like "Let's think step by step" to the original prompt to elicit reasoning [15].
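
As a minimal illustration of zero-shot CoT, the reasoning trigger is simply appended to an otherwise unchanged prompt; the question below is an invented example.

```python
# Zero-shot Chain-of-Thought: the same question, with and without the reasoning trigger.
question = "A shop sells pencils at 3 for $1. How much do 18 pencils cost?"

direct_prompt = question
cot_prompt = question + "\n\nLet's think step by step."
# The CoT variant nudges the model to write out the intermediate steps
# (18 / 3 = 6 groups, 6 x $1 = $6) before stating the final answer.
```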

Other advanced techniques include Tree-of-Thought prompting, which encourages the model to explore multiple reasoning paths [31]; Role Prompting, where the model is instructed to adopt a specific role or persona [15]; and Retrieval-Augmented Generation (RAG), which augments the LLM's knowledge by retrieving relevant information from an external knowledge source and incorporating it into the prompt [11]. Techniques like Instruction Tuning and Reinforcement Learning from Human Feedback (RLHF) are used to further refine model performance and align it with human preferences [1].

Beyond these, a range of more specialized techniques can be employed, such as using a pseudocode-like syntax for clearer instructions [33]; recursive prompts that feed the output of one prompt back as input to the next [33]; multi-entrant prompts designed to handle various input types [33]; prompts that split outputs to elicit multifaceted responses [33]; counterfactual prompting to explore hypothetical scenarios [33]; and prompt chaining to create a sequence of related prompts [33]. Techniques like Self-Consistency Prompting, Reflection Prompting, Progressive Prompting, Clarification Prompting, Error-guided Prompting, Hypothetical Prompting, and Meta-prompting offer further control over the model's reasoning and output generation [31]. Finally, Prompt Compression aims to shorten prompts without losing crucial information, while General Knowledge Prompting and ReAct Prompting enhance the model's understanding of the world and its ability to interact with external tools [35]. The sheer breadth of these advanced prompting techniques underscores the ongoing innovation in this field, moving beyond basic instructions to strategically guide the model's internal processes for complex tasks. Techniques like Chain-of-Thought and Retrieval-Augmented Generation are particularly noteworthy for their effectiveness in enhancing reasoning and leveraging external knowledge, respectively.
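
As one small illustration of prompt chaining, the sketch below feeds a draft summary back into a second, fact-checking prompt. The function name is hypothetical and the LLM call is passed in as a plain callable rather than being tied to any particular API.

```python
from typing import Callable

def chained_summary(llm: Callable[[str], str], article: str) -> str:
    """Step 1: draft a summary. Step 2: feed the draft back for a grounded revision."""
    draft = llm(f"Summarize the following article in three bullet points:\n\n{article}")
    revised = llm(
        "Check the summary below against the article. Rewrite any bullet point that makes "
        f"claims the article does not support.\n\nArticle:\n{article}\n\nSummary:\n{draft}"
    )
    return revised
```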

A Comparative Look: Prompting Claude 3.5 and 3.7 for Optimal Results, Especially in Code Generation

Claude 3.7 Sonnet represents a significant advancement over its predecessor, Claude 3.5, offering enhanced speed, intelligence, and capabilities, particularly in code generation [36]. A key innovation in Claude 3.7 is its "hybrid reasoning" system, which allows it to seamlessly switch between providing quick responses and engaging in deep, structured thinking as needed [36]. Users can even control the amount of "thinking time" the model dedicates to a task, allowing for a trade-off between speed and accuracy [36].
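
As an illustration of controlling that thinking budget programmatically, the sketch below uses the Anthropic Python SDK's messages API with its thinking parameter. The model identifier, token numbers, and prompt text are placeholders and should be checked against Anthropic's current documentation.

```python
import anthropic  # assumes the official Anthropic Python SDK and an ANTHROPIC_API_KEY env var

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative model name
    max_tokens=4096,                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},  # caps the "thinking time"
    messages=[{
        "role": "user",
        "content": "Refactor the function below to remove duplication, using as few lines as possible: ...",
    }],
)

# The reply interleaves 'thinking' blocks with the final 'text' blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```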

In the realm of coding, Claude 3.7 boasts more powerful abilities through "Claude Code," a feature that enables developers to perform tasks such as editing files, running tests, debugging code, and pushing changes to GitHub directly within the model's environment [36]. This enhanced capability allows Claude 3.7 to handle full-stack development with a reduced number of errors [36]. Benchmark data from SWE-bench Verified, which evaluates AI models on software engineering tasks, demonstrates a clear advantage for Claude 3.7, showing a substantial increase in accuracy compared to Claude 3.5 [37]. Furthermore, Claude 3.7 exhibits improved performance in agentic tool use, indicating its enhanced ability to interact with external tools and systems [37].

Despite these advancements, some early developer feedback suggests that Claude 3.7 might exhibit a tendency to "overengineer" solutions, sometimes adding unnecessary complexity to the generated code [40]. To mitigate this, users have found that including explicit instructions such as "Use as few lines of code as possible" in the prompt can yield better, more concise results [40]. For certain everyday coding tasks, some developers still prefer Claude 3.5 for its more consistent output and potentially less demanding prompt engineering requirements [40]. Recommendations for using Claude 3.7 often involve starting with clear and concise instructions and iteratively refining the prompts based on the model's output [40].

Anthropic, the developer of Claude, highlights Claude 3.7's superior instruction following, tool selection, error correction, and advanced reasoning capabilities [39]. Early testing on platforms like GitHub Spark has shown Claude 3.7 to be more successful in generating higher quality applications from natural language descriptions and producing passing code, especially when its extended thinking mode is enabled [39]. Overall, Claude 3.7 Sonnet appears to be a significant step forward in reasoning and coding capabilities compared to 3.5, with the hybrid reasoning feature offering greater control. However, users might need to adapt their prompting strategies to manage the newer model's potential for complexity in code generation.

| Feature | Claude 3.5 | Claude 3.7 | Key Improvement in 3.7 |
|---|---|---|---|
| Reasoning | One-speed reasoning | Hybrid reasoning (quick and deep thinking modes) | Ability to switch between reasoning speeds for better accuracy |
| Coding Capabilities | Good at coding | Claude Code for editing, debugging, testing, and GitHub integration | Integrated coding tools and enhanced functionality |
| Accuracy (SWE-bench Verified) | 49.0% | 62.3% (70.3% with custom scaffold) | Significant improvement in software engineering accuracy |
| Agentic Tool Use | Good | Improved performance in retail and airline-related tasks | Enhanced ability to interact with external tools |
| Instruction Following | Could struggle with complex prompts | Understands and follows instructions better, making fewer mistakes | Superior ability to adhere to complex instructions |
| Extended Thinking Mode | Not available | Users can control thinking time for better accuracy | Allows for optimization of speed and accuracy |

Navigating the Challenges: Common Pitfalls to Avoid in Prompt Engineering

Despite the power and versatility of LLMs, several common pitfalls can hinder effective prompt engineering and lead to suboptimal results. One prevalent issue is the use of ambiguous or vague prompts, which often result in irrelevant or misleading answers [18]. Overloading a prompt with too many questions or topics can also overwhelm the model, leading to incomplete or superficial responses [19].

A critical aspect of prompt engineering is understanding and acknowledging the inherent limitations of LLMs [13]. These limitations include the tendency to "hallucinate" or make up information, limited reasoning skills (particularly in areas like mathematics), limited long-term memory across interactions, and knowledge cut-offs based on their training data [18]. Expecting the model to possess expertise beyond its training data or failing to verify the information it provides are common mistakes [19]. The phenomenon of hallucinations underscores the importance of fact-checking any critical information generated by an LLM.

Over-reliance on default outputs without customizing prompts to specific needs, and ignoring the context in which the prompt is being used, can also lead to less effective results [19]. Furthermore, LLMs are susceptible to "prompt hacking," where malicious users can manipulate prompts to generate inappropriate or harmful content [20]. Practical considerations such as token limits, which can make managing context challenging, and the potential for inconsistent outputs further complicate prompt engineering [18]. Overly lengthy prompts can sometimes hinder accuracy, while failing to provide sufficient context can lead to vague or unhelpful responses [17]. Avoiding leading questions that imply a desired answer is also important for eliciting neutral and objective responses. The recurring issue of hallucinations highlights a fundamental limitation of LLMs, emphasizing the need for users to critically evaluate and verify the generated information. The challenges related to prompt length and context underscore the delicate balance required in providing enough information for the model to understand the task without overwhelming its processing capabilities.

Conclusion: Mastering Prompting for Enhanced LLM Interactions


In conclusion, effective prompt engineering is a multifaceted skill that is essential for unlocking the full potential of
large language models. Understanding the distinctions between zero-shot, one-shot, and few-shot prompting
provides a foundational framework for interacting with these models. Adhering to best practices, such as ensuring
clarity and specificity, providing relevant context and examples, and iteratively refining prompts, can significantly
enhance the quality of the generated outputs. Recognizing the impact of prompt length and context, and
determining the optimal number of examples for a given task, are crucial considerations for optimizing
performance. Exploring advanced prompting techniques opens up possibilities for tackling more complex
reasoning and generation challenges. While models like Claude 3.7 offer significant improvements, particularly in
areas like code generation, users must be mindful of potential nuances in prompting to leverage their capabilities
effectively. Finally, awareness of common pitfalls and limitations of LLMs, such as the risk of hallucinations and the
importance of context management, is paramount for responsible and effective utilization of these powerful tools.
Mastering the art of prompting is an ongoing process of learning and experimentation, but it is an increasingly vital
skill in a world where interaction with AI is becoming ever more prevalent.

References

1. What is zero-shot prompting? | IBM, https://www.ibm.com/think/topics/zero-shot-prompting
2. Shot-Based Prompting: Zero-Shot, One-Shot, and Few-Shot Prompting - Learn Prompting, https://learnprompting.org/docs/basics/few_shot
3. Zero-Shot Prompting - Prompt Engineering Guide, https://www.promptingguide.ai/techniques/zeroshot
4. What is Zero-Shot Prompting? Examples & Applications - Digital Adoption, https://www.digital-adoption.com/zero-shot-prompting/
5. What is One Shot Prompting? | IBM, https://www.ibm.com/think/topics/one-shot-prompting
6. What is One-Shot Prompting? Examples & Uses - Digital Adoption, https://www.digital-adoption.com/one-shot-prompting/
7. A Guide to Zero-Shot, One-Shot, & Few-Shot AI Prompting | The GoSearch Blog, https://www.gosearch.ai/blog/zero-shot-one-shot-few-shot-ai-prompting/
8. Include few-shot examples | Generative AI - Google Cloud, https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/few-shot-examples
9. Zero-Shot vs. Few-Shot Prompting: Key Differences - Shelf.io, https://shelf.io/blog/zero-shot-and-few-shot-prompting/
10. Few-Shot Prompting: Examples, Theory, Use Cases - DataCamp, https://www.datacamp.com/tutorial/few-shot-prompting
11. What is few shot prompting? - IBM, https://www.ibm.com/think/topics/few-shot-prompting
12. The Few Shot Prompting Guide - PromptHub, https://www.prompthub.us/blog/the-few-shot-prompting-guide
13. Prompt Engineering Best Practices: Tips, Tricks, and Tools | DigitalOcean, https://www.digitalocean.com/resources/articles/prompt-engineering-best-practices
14. Prompt Engineering for Large Language Models - Business Applications of Artificial Intelligence and Machine Learning - OPEN OCO, https://open.ocolearnok.org/aibusinessapplications/chapter/prompt-engineering-for-large-language-models/
15. LLM Prompt Engineering Techniques and Best Practices | by Ali Shafique | Medium, https://medium.com/@alishafique3/llm-prompt-engineering-techniques-and-best-practices-7cc0f46467e9
16. Best practices for prompt engineering with the OpenAI API, https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api
17. Ensuring Consistent LLM Outputs Using Structured Prompts - Ubiai, https://ubiai.tools/ensuring-consistent-llm-outputs-using-structured-prompts-2/
18. Common LLM Prompt Engineering Challenges and Solutions - Latitude, https://latitude-blog.ghost.io/blog/common-llm-prompt-engineering-challenges-and-solutions/
19. Common Mistakes in Writing Prompts for Language Models - Medium, https://medium.com/@promptengineering/common-mistakes-in-writing-prompts-for-language-models-a04dc662aed6
20. Limitations of LLMs: Bias, Hallucinations, and More - Learn Prompting, https://learnprompting.org/docs/basics/pitfalls
21. Pitfalls of LLMs - Learn Prompting, https://learnprompting.org/si/docs/basics/pitfalls
22. How does prompt length impact the output of an LLM? - Infermatic.ai, https://infermatic.ai/ask/?question=How%20does%20prompt%20length%20impact%20the%20output%20of%20an%20LLM?
23. The Impact of Prompt Length on LLM Performance: A Data-Driven Study - Media & Technology Group, LLC, https://mediatech.group/prompt-engineering/the-impact-of-prompt-length-on-llm-performance-a-data-driven-study/
24. How does the length of retrieved context fed into the prompt affect the LLM's performance and the risk of it ignoring some parts of the context? - Milvus, https://milvus.io/ai-quick-reference/how-does-the-length-of-retrieved-context-fed-into-the-prompt-affect-the-llms-performance-and-the-risk-of-it-ignoring-some-parts-of-the-context
25. The Crucial Role of Context Length in Large Language Models for Business Applications - Groq, https://groq.com/the-crucial-role-of-context-length-in-large-language-models-for-business-applications/
26. Zero-Shot vs Few-Shot prompting: A Guide with Examples - Vellum AI, https://www.vellum.ai/blog/zero-shot-vs-few-shot-prompting-a-guide-with-examples
27. Mastering Few-Shot Prompting: A Comprehensive Guide | by Software Guide - Medium, https://softwareguide.medium.com/mastering-few-shot-prompting-a-comprehensive-guide-6eda3761538c
28. Many-Shot In-Context Learning - arXiv, https://arxiv.org/pdf/2404.11018
29. Few-Shot Prompting - Prompt Engineering Guide, https://www.promptingguide.ai/techniques/fewshot
30. LLM Prompting: The Basic Techniques - Determined AI, https://www.determined.ai/blog/llm-prompting
31. 17 Prompting Techniques to Supercharge Your LLMs - Analytics Vidhya, https://www.analyticsvidhya.com/blog/2024/10/17-prompting-techniques-to-supercharge-your-llms/
32. Prompt Engineering Techniques: Top 5 for 2025 - K2view, https://www.k2view.com/blog/prompt-engineering-techniques/
33. Advanced Prompt-Engineering Techniques for Large Language Models - Medium, https://medium.com/@sschepis/advanced-prompt-engineering-techniques-for-large-language-models-5f34868c9026
34. Advanced Prompt Engineering Techniques - Mercity AI, https://www.mercity.ai/blog-post/advanced-prompt-engineering-techniques
35. Top 5 Advanced Prompt Engineering Techniques For Your LLM - PromptOpti, https://promptopti.com/5-advanced-prompt-engineering-techniques-for-llms/
36. Claude 3.7 Sonnet and Claude Code Update: Here's Everything You Need To Know - God of Prompt, https://www.godofprompt.ai/blog/claude-3-7-sonnet
37. Claude 3.7 Sonnet: Features, Access, Benchmarks & More - DataCamp, https://www.datacamp.com/blog/claude-3-7-sonnet
38. Claude 3.7 Sonnet and Claude Code - Anthropic, https://www.anthropic.com/news/claude-3-7-sonnet
39. Claude 3.7 Sonnet - Anthropic, https://www.anthropic.com/claude/sonnet
40. Claude 3.7 vs 3.5 Sonnet for Coding - Which One Should You Use? | 16x Prompt, https://prompt.16x.engineer/blog/claude-37-vs-35-sonnet-coding
