---
sidebar_position: 25
title: OpenAI Official Prompt Engineering Guide
description: This page summarizes the official prompt engineering guide released by OpenAI, focusing on six core principles.
---

# OpenAI Official Prompt Engineering Guide

Preface: On the 15th, OpenAI updated its official Prompt Engineering Guide. The guide highlights six key principles:

1. Write clear instructions
2. Provide reference text
3. Split complex tasks into simpler subtasks
4. Give the model time to "think"
5. Use external tools
6. Test changes systematically

These principles can be combined for greater effect. Following this framework can help you optimize the vast majority of your prompts.

## 1. Write Clear Instructions

The model can't read minds and can't guess your thoughts.

* If the model's output is too long, ask it to respond briefly.
* If the model's output is too simple, ask it to write at a more professional level.
* If you are not satisfied with the output format, show it the format you expect.

The less the model has to guess about what you want, the better your chances of getting the desired result. OpenAI offers six core tips.

### 1. Add Details to Your Question

Make sure your question includes all the important details and background information.

❌ Don't say: "Summarize the meeting notes."

✅ Instead say: "Please summarize the meeting notes in one paragraph. Then list all the speakers and their key points in a markdown list. Finally, if there are any, list the next steps or actions suggested by the speakers."

### 2. Ask the Model to Play a Specific Role

Explicitly telling the model to adopt a role activates its "role-playing" ability. Here is an improved example:

```
I want you to play the role of a novelist. You will come up with creative and engaging stories that can captivate readers for a long time. You can choose any genre, such as fantasy, romance, historical fiction, etc., but the goal is to write works with outstanding plots, compelling characters, and unexpected climaxes. My first request is: "I want to write a science fiction novel set in the future."
```

### 3. Use Delimiters to Clearly Separate Different Parts of the Input

Delimiters such as triple quotes, XML tags, or section titles effectively distinguish the different parts of the text to be processed. (Put simply, they let the model clearly tell your instructions apart from the text it should work on.) For example:

```
You will receive two articles on the same topic. First, summarize the main arguments of each article separately. Then, evaluate which article's arguments are more convincing and explain why.

"""Article content"""

"""Article content"""
```

Using blank lines and characters like `*` (commonly used in code to divide different sections) as delimiters is simple and effective.
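Below is a minimal sketch of this tactic in code, assuming the openai Python SDK (v1-style client); the model name and the article variables are placeholders, not something the guide prescribes:

```python
# Assemble a delimited prompt: instructions first, then each article
# wrapped in triple quotes so the model can tell them apart.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_1 = "First article text..."   # placeholder
article_2 = "Second article text..."  # placeholder

prompt = (
    "You will receive two articles on the same topic, each delimited by "
    "triple quotes. First, summarize the main arguments of each article "
    "separately. Then, evaluate which article's arguments are more "
    "convincing and explain why.\n\n"
    f'"""{article_1}"""\n\n'
    f'"""{article_2}"""'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```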
### 4. Clearly Specify the Steps Required to Complete the Task

For complex tasks, it is best to break them down into a series of explicit steps. Writing the steps out clearly helps the model follow the instructions. For example:

```
Please respond to the user's input by following these steps.

Step 1 - The user will provide you with text wrapped in triple quotes. Summarize this text in one sentence, prefixed with "Summary: ".

Step 2 - Translate the summary from Step 1 into Spanish, prefixed with "Translation: ".

"""Input text"""
```

### 5. Provide Examples as References

Few-shot technique: in some cases it is more intuitive to provide concrete examples, for instance when you want the model to learn a specific way of responding. For example:

```
A "whatpu" is a furry little animal native to Tanzania. An example of a sentence using the word whatpu: We saw these very cute whatpus on our trip to Africa.

To "farduddle" means to jump up and down quickly. An example of a sentence using this word: The children loved to farduddle on the playground.
```

### 6. Clearly Specify the Desired Output Length

```
Please summarize the text within the triple quotes in two paragraphs.

"""insert text here"""
```

## 2. Provide Reference Text

Language models may confidently fabricate false answers, especially when asked about obscure topics or for citations and URLs. Providing the model with reference text can reduce the amount of false information.

### 1. Use Reference Text to Construct Answers

For example:

```
When you are provided with specific articles and need to answer questions, base your answers on the content of those articles. If the answer is not contained in the articles, simply state "Unable to find the answer."

<insert articles, each delimited by triple quotes>

Question: <insert question>
```

:::takeaways
Since all models are limited by the size of the context window, we need a way to dynamically look up information relevant to the question being asked. Embeddings can be used for efficient knowledge retrieval (see Strategy 5.1 below).
:::

### 2. Instruct the Model to Answer with Citations from the Reference Text

If the input already contains the relevant knowledge, you can directly ask the model to cite the provided documents in its answers. Note that citations in the output can then be verified by string matching against the provided documents. For example:

```
You will be given a document delimited by triple quotes and a question. Your task is to answer the question using only the provided document and to cite the parts of the document used to answer the question. If the document does not contain the information needed to answer the question, simply write "Insufficient information." If an answer to the question is provided, it must be annotated with a citation. Use the following format to cite relevant passages ({"citation": ...}).

"""<insert document>"""

Question: <insert question>
```
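Since the guide points out that citations in this format can be verified by string matching, here is a rough sketch of such a check; the regular expression and the sample document and answer are invented for illustration:

```python
# Verify that every {"citation": "..."} emitted by the model occurs
# verbatim in the source document it was supposed to quote from.
import json
import re

def verify_citations(answer: str, document: str) -> list[tuple[str, bool]]:
    results = []
    for match in re.finditer(r'\{"citation":\s*(".*?")\}', answer):
        citation = json.loads(match.group(1))  # decode the JSON string literal
        results.append((citation, citation in document))
    return results

document = "Commodore released the Amiga 1000 in 1985; it was praised for its graphics."
answer = 'It launched in 1985 ({"citation": "released the Amiga 1000 in 1985"}).'
print(verify_citations(answer, document))
# [('released the Amiga 1000 in 1985', True)]
```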
## 3. Split Complex Tasks into Simpler Subtasks

Breaking a large, complex task down into smaller, simpler subtasks is an effective method, and it applies to large models as well: it helps them handle complex tasks more reliably, resulting in better performance.

### 1. Use Intent Classification to Identify the Most Relevant Instructions for a User Query

When you have many different tasks to handle, one approach is to first classify them into several categories, and then decide, for each category, which specific steps are needed to complete it. For example, you can define a few main task types and then a set of fixed steps for each type. The benefit is that you don't have to handle everything at once but can proceed step by step, which reduces the chance of mistakes. It also saves costs, since processing everything in one large prompt usually costs more than processing it in stages.

For example, for a customer service application, queries can be classified as follows:

```
You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in json format with the keys: primary and secondary.

Primary categories: Billing, Technical Support, Account Management, or General Inquiry.

Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for a charge
- Dispute a charge

Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates

Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security

General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human
```

Now, based on Step 1, the model knows that "I'm disconnected, what should I do?" falls under Technical Support / Troubleshooting, and we can continue with Step 2:

```
You will be provided with customer service inquiries that require troubleshooting in a technical support context. Help the user by:

- Asking them to check that all cables to and from the router are connected. Note that it is common for cables to come loose over time.
- If all cables are connected and the issue persists, asking them which router model they are using.
- Advising them how to restart their device:
  - If the model number is MTD-327J, tell them to push the red button and hold it for 5 seconds, then wait 5 minutes before testing the connection.
  - If the model number is MTD-327S, tell them to unplug and replug the device, then wait 5 minutes before testing the connection.
- If the customer's issue persists after restarting the device and waiting 5 minutes, connecting them to IT support by outputting {"IT support requested"}.
- If the user starts asking questions unrelated to this topic, confirming whether they would like to end the current troubleshooting chat, and classifying their request according to the classification scheme above.
```

### 2. For Applications Requiring Long Conversations, Summarize or Filter the Previous Dialogue

Since the model's context length is fixed, a conversation between the user and the assistant cannot continue indefinitely if the entire conversation is included in the context window.

One way to address this is to summarize the earlier turns of the conversation. Once the input reaches a certain length, this can trigger a query that summarizes part of the dialogue, and that summary can become part of the system message. Alternatively, the conversation can be summarized continuously in the background as it proceeds.

:::takeaways
Although this leans towards a developer scenario, ordinary users can also use prompts to actively summarize the conversation history. For example:

```
Your task is to summarize the message history of a conversation between an AI character and a human. The provided conversation comes from a fixed context window and may not be complete. Summarize what happened in the conversation from the AI's perspective (using the first person). The summary should be less than {WORD_LIMIT} words and must not exceed the word limit.
```

{WORD_LIMIT} is the desired output length.
:::

Another method is to dynamically select the parts of the conversation most relevant to the current question. For details, see the strategy "Use Embedding-Based Search for Efficient Knowledge Retrieval" below.
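To make the developer-side variant concrete, here is a sketch of a length-triggered compression step; it assumes the openai Python SDK, uses a placeholder model name, and approximates token counting with a word count (real code would use tiktoken):

```python
# Replace all but the most recent turns with a model-written summary
# once the conversation grows past a length threshold.
from openai import OpenAI

client = OpenAI()
MAX_WORDS = 2000  # arbitrary threshold for this sketch

def maybe_compress(history: list[dict]) -> list[dict]:
    total_words = sum(len(m["content"].split()) for m in history)
    if total_words <= MAX_WORDS:
        return history  # still fits comfortably; nothing to do

    older, recent = history[:-2], history[-2:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in older)
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Summarize this conversation from the AI's perspective, "
                       "in the first person, in fewer than 150 words:\n\n" + transcript,
        }],
    ).choices[0].message.content

    # Carry the summary forward as a system message plus the latest turns.
    return [{"role": "system", "content": f"Summary of earlier conversation: {summary}"}] + recent
```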
### 3. Summarize Long Documents Piecewise and Recursively Construct a Complete Summary

Since the model's context length is fixed, it cannot summarize a text longer than the context length minus the length of the generated summary in a single pass.

For example, to summarize a long book, we can use a series of queries to summarize each chapter separately. These partial summaries can then be concatenated and summarized again, producing a summary of summaries. This process can be repeated recursively until the entire book is summarized. If information from earlier chapters is needed to make sense of later parts, a useful technique is to attach a running summary of the preceding content when summarizing the current part. OpenAI has previously studied this approach to book summarization using a variant of GPT-3.

*Related research: "Summarizing books with human feedback" (OpenAI).*

## 4. Give the Model Time to "Think"

### 1. Guide the Model to Work Out Its Own Solution Before Rushing to a Conclusion

Sometimes we get better results by explicitly guiding the model to reason from first principles before reaching a conclusion. Suppose we want the model to evaluate a student's answer to a math problem. The most intuitive way is to ask directly whether the student's answer is correct:

```
Determine whether the student's solution is correct.

Problem: I'm building a solar power installation and I need help working out the financials.
- Land costs $100 per square foot.
- I can buy solar panels for $250 per square foot.
- I negotiated a maintenance contract that costs a flat $100,000 per year, plus an additional $10 per square foot.
What is the total cost for the first year of operations, as a function of the number of square feet?

Student's solution: Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000

Model output: The student's solution is correct.
```

However, the student's answer is actually incorrect! By guiding the model to work out its own answer first, it successfully identifies the problem:

```
First work out your own solution to the problem, then compare your solution to the student's solution and evaluate whether the student's solution is correct. Don't decide whether the student's solution is correct until you have done the problem yourself.

Problem: <same problem as above>
Student's solution: <same student's solution as above>

Model output:
Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 10x
Total cost: 100x + 250x + 100,000 + 10x = 360x + 100,000

The student's solution is incorrect: in the maintenance cost they used 100x instead of 10x, so the correct total cost for the first year is 360x + 100,000.
```
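As a quick sanity check on the arithmetic in this example (a throwaway snippet, not part of the guide):

```python
# First-year cost = land (100/sqft) + panels (250/sqft)
# + maintenance (100,000 flat + a per-square-foot rate).
def first_year_cost(x: int, maintenance_per_sqft: int) -> int:
    return 100 * x + 250 * x + (100_000 + maintenance_per_sqft * x)

x = 1_000  # any installation size, in square feet
print(first_year_cost(x, 10))   # correct rate:   360x + 100,000 -> 460000
print(first_year_cost(x, 100))  # student's rate: 450x + 100,000 -> 550000
```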
### 2. Hide the Model's Reasoning Process with an Inner Monologue or a Sequence of Queries

The previous strategy shows that the model sometimes needs to reason through a problem in detail before answering a specific question. In some applications, however, the reasoning process is not suitable for sharing with the user. In educational applications, for example, we may want to encourage students to think for themselves, and the model's reasoning process could inadvertently reveal the answer.

Inner monologue is an effective strategy for this situation. The idea is to instruct the model to put the parts of the output that should be hidden from the user into a structured format that is easy to parse. The output can then be parsed before being shown to the user, and only part of the parsed result is displayed. For example:

```
Follow these steps to answer the user's query.

Step 1 - First work out your own solution to the problem. Don't rely on the student's solution, since it may be incorrect. Enclose all your work for this step within triple quotes (""").

Step 2 - Compare your solution to the student's solution and evaluate whether the student's solution is correct. Enclose all your work for this step within triple quotes (""").

Step 3 - If the student made a mistake, determine what hint you could give the student without revealing the answer. Enclose all your work for this step within triple quotes (""").

Step 4 - If the student made a mistake, provide the hint from the previous step to the student (outside the triple quotes). Instead of writing "Step 4 - ...", write "Hint:".

Problem statement: <insert problem statement>
Student's solution: <insert student's solution>
```

Another way is to use a sequence of queries in which the results of all but the last query are hidden from the user. First, we have the model solve the problem on its own. Since this initial query does not need the student's answer, it can be omitted, which ensures the model's solution is not biased by the student's attempt:

```
Query 1 (output hidden from the user):
<insert problem statement>

Query 2 (output hidden from the user):
Compare your solution to the student's solution and evaluate whether the student's solution is correct.

Problem statement: """<insert problem statement>"""
Your solution: """<insert model-generated solution>"""
Student's solution: """<insert student's solution>"""

Query 3 (shown to the user):
You are a math tutor. If the student made an error, offer a hint without revealing the answer. If the student did not make an error, simply give them an encouraging comment.
```

### 3. Ask the Model Whether It Missed Anything in Earlier Passes

Suppose we are using the model to list excerpts from a source that are relevant to a particular question. After listing each excerpt, the model must decide whether to continue with the next one or stop. If the source document is long, the model may stop too early and miss relevant excerpts. In that case, asking follow-up queries to look for excerpts omitted in earlier passes usually yields better results. For example:

```
You will be given a document delimited by triple quotes. Your task is to select excerpts that pertain to the following question: "What significant paradigm shifts have occurred in the history of artificial intelligence?"

Ensure that excerpts contain all the context needed to interpret them; in other words, do not extract small snippets that are missing important context. Provide the output in JSON format as follows:

[{"excerpt": "..."},
 {"excerpt": "..."}]

"""<insert document>"""

[{"excerpt": "the model writes an excerpt here"},
 {"excerpt": "the model writes another excerpt here"}]

Are there more relevant excerpts? Take care not to repeat excerpts. Again, ensure that excerpts contain all the context needed to interpret them; in other words, do not extract small snippets that are missing important context.
```

## 5. Use External Tools

In short, the model can generate more accurate and up-to-date responses when external information is provided as part of its input (the GPT plugin system demonstrated the effectiveness of this strategy).

### 1. Use Embedding-Based Search for Efficient Knowledge Retrieval

If a user asks a question about a specific movie, adding high-quality information about the movie (e.g., actors, director) to the model's input can help. Embeddings can be used for efficient knowledge retrieval, so that relevant information is added to the model's input dynamically at runtime.

:::takeaways
Text embeddings are vectors that measure the relatedness of text strings. Related or similar strings are closer together in the embedding space than unrelated strings. This fact, combined with the existence of fast vector search algorithms, means embeddings can be used for efficient knowledge retrieval. Specifically, a text corpus can be split into chunks, and each chunk can be embedded and stored. A given query can then be embedded, and a vector search can find the embedded chunks of the corpus most relevant to the query (i.e., those closest in the embedding space). Practical implementation examples can be found in the OpenAI Cookbook.
:::
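Here is a minimal sketch of that retrieval loop, assuming the openai Python SDK and numpy; the embedding model name and the toy corpus are placeholders:

```python
# Embed a corpus once, then at query time embed the question and pick
# the chunk whose vector is closest (by cosine similarity).
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunks = [
    "The film was directed by Ridley Scott and released in 1982.",
    "Photosynthesis converts light into chemical energy.",
    "The movie stars Harrison Ford as Rick Deckard.",
]
chunk_vecs = embed(chunks)  # in practice, computed once and stored

query = "Who acts in the film?"
q = embed([query])[0]

sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
best = chunks[int(np.argmax(sims))]
print(best)  # the chunk to splice into the model's input at runtime
```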
### 2. Use Code Execution for Precise Calculations or External API Calls

We cannot expect a language model to perform arithmetic or long calculations accurately on its own. Where precise calculation is required, we can have the model write and run code instead of computing by itself. Specifically, we can instruct the model to put any code to be executed in a designated format, such as triple backticks. The generated code can then be extracted and executed, and if necessary, the output of the code execution engine (e.g., a Python interpreter) can be provided as input to the next query. For example:

````
You can write and execute code by wrapping it in triple backticks, e.g. ```code goes here```. Use this method when calculations are required.

Find all real-valued roots of the following polynomial: 3*x**5 - 5*x**4 - 3*x**3 - 7*x - 10.
````

### 3. Give the Model Access to Specific Functions

This is the recommended way to have OpenAI models call external functions, and it is aimed mostly at developers. In short, the Chat Completions API allows function descriptions to be passed in the request. The model can then generate function arguments that conform to those descriptions. The arguments are returned by the API in JSON format and can be used to execute the function calls. The output of the function calls can then be fed back into the model, closing the loop. From OpenAI's function-calling documentation:

> In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.
>
> The latest models (gpt-3.5-turbo-1106 and gpt-4-1106-preview) have been trained to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also come potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc.).
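The loop described above might look like the following sketch; the weather function, its JSON schema, and the model name are invented for illustration:

```python
# Describe a function to the model, let it emit JSON arguments, execute the
# call ourselves, then feed the result back to close the loop.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"Sunny and 22 degrees C in {city}"  # stub; a real version would call a weather API

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]  # the model chose to call our function
args = json.loads(call.function.arguments)    # JSON arguments generated by the model

messages.append(resp.choices[0].message)      # keep the assistant's tool-call turn
messages.append({"role": "tool", "tool_call_id": call.id, "content": get_weather(**args)})

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```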
## 6. Test Changes Systematically

### 1. Evaluate Model Outputs Against Gold-Standard Answers

Suppose we already know that the correct answer to a question should involve a specific set of known facts. We can then check whether the answer generated by the model includes the necessary facts.

:::takeaways
The main purpose is to help developers evaluate whether a prompt change has improved or degraded actual performance. The sample size is typically small, which makes it hard to tell whether a change is a genuine improvement or just random variation.
:::

The main idea is to "track the similarity between model-generated answers and gold-standard answers, and check whether the candidate answers contradict the gold-standard answers." For this part, reading the original guide is recommended: [prompt-engineering](https://platform.openai.com/docs/guides/prompt-engineering).
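A bare-bones version of such a check might look like this; the substring matching, the sample answer, and the required facts are illustrative stand-ins for a real evaluation harness:

```python
# Score an answer by the fraction of required gold-standard facts it mentions.
# Substring matching is crude; real evals often use model-graded or semantic comparisons.
def score_answer(answer: str, required_facts: list[str]) -> float:
    hits = sum(fact.lower() in answer.lower() for fact in required_facts)
    return hits / len(required_facts)

answer = "Armstrong walked on the Moon in 1969 during the Apollo 11 mission."
print(score_answer(answer, ["1969", "Apollo 11"]))                        # 1.0
print(score_answer(answer, ["1969", "Apollo 11", "Sea of Tranquility"]))  # ~0.67
```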

## References

* OpenAI Prompt Examples
* Translation by Baoyu