---
sidebar_position: 25
title: OpenAI Official Prompt Engineering Guide
description: This page summarizes the official prompt engineering guidelines released by OpenAI, focusing on six core principles.
---

# OpenAI Official Prompt Engineering Guide

Preface: On the 15th, OpenAI updated the official Prompt Engineering Guide. The guide covers six key principles:

1. Write clear instructions
2. Provide reference text
3. Split complex tasks into simpler subtasks
4. Give the model time to "think"
5. Use external tools
6. Test changes systematically

These principles can be combined for greater effect. Following this framework can handle 99% of prompt optimization.

## 1. Write Clear Instructions

The model can't read minds and can't guess your thoughts.

* If the model's output is too long, ask it to respond briefly.
* If the model's output is too simple, ask it to write at a more professional level.
* If you are not satisfied with the output format, show it the format you expect.

The less the model has to guess about what you want, the better your chance of getting the desired result. OpenAI provides six core tips.

### 1. Add Details to the Question

Ensure your question includes all important details and background information.

❌ Don't say: "Summarize the meeting notes."

✅ Instead say: "Please summarize the meeting notes in one paragraph. Then, list all the speakers and their key points in a markdown list. Finally, if any, list the next steps or suggested actions by the speakers."

### 2. Ask the Model to Play a Specific Role

Explicitly telling the model to play a role can activate its "role-playing" ability. Here is an improved example:

```
I want you to play the role of a novelist. You will come up with creative and engaging stories that can captivate readers for a long time. You can choose any genre, such as fantasy, romance, historical fiction, etc., but the goal is to write works with outstanding plots, compelling characters, and unexpected climaxes. My first request is, "I want to write a science fiction novel set in the future."
```

### 3. Use Delimiters to Clearly Separate Different Parts of the Input

Using triple quotes, XML tags, section titles, etc. as delimiters helps the model distinguish and process the different parts of the text. (In simple terms, it lets the model clearly separate your requirements from the text to be processed.)

For example:

```
You will receive two articles on the same topic. First, summarize the main arguments of each article separately. Then, evaluate which article's arguments are more convincing and explain why.

"""Article content"""

"""Article content"""
```

Using blank lines and delimiters such as `"""` (widely used in the coding field to divide different regions) is very effective and convenient.

### 4. Clearly Specify the Steps Required to Complete the Task

For complex tasks, it is best to break them down into a series of clear steps. Writing out the steps explicitly helps the model follow the instructions more effectively. For example:

```
Please respond to the user's input by following these steps.

Step 1 - The user will provide you with text wrapped in triple quotes. Summarize this text in one sentence, prefixed with "Summary: ".

Step 2 - Translate the summary from Step 1 into Spanish, prefixed with "Translation: ".

"""Input text"""
```

### 5. Provide Examples as References

Few-shot technique: in some cases, providing concrete examples is more intuitive than describing what you want, for instance when you want the model to learn a specific way of responding. For example:

```
A "whatpu" is a furry little animal native to Tanzania. An example sentence using the word whatpu:
We saw these very cute whatpus on our trip to Africa.

To "farduddle" means to jump up and down quickly. An example sentence using this word:
The children loved to farduddle on the playground.
```
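The few-shot pattern can also be expressed as demonstration turns in an API call. Below is a minimal sketch, assuming the `openai` Python SDK (v1+) and a placeholder model name; the demonstration content simply mirrors the whatpu/farduddle example above and is not prescribed by the guide.

```python
# Minimal sketch of the few-shot pattern via the openai Python SDK (v1+).
# The model name is a placeholder; the demonstration turns mirror the example above.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    {"role": "system", "content": "Answer in a consistent style."},
    # Demonstration: show the desired response style once.
    {"role": "user", "content": 'A "whatpu" is a furry little animal native to Tanzania. Write a sentence using the word whatpu.'},
    {"role": "assistant", "content": "We saw these very cute whatpus on our trip to Africa."},
    # Real query: the model imitates the demonstrated style.
    {"role": "user", "content": 'To "farduddle" means to jump up and down quickly. Write a sentence using the word farduddle.'},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```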
### 6. Clearly Specify the Desired Output Length

```
Please summarize the text within the triple quotes in two paragraphs.

"""Insert text here"""
```

## 2. Provide Reference Text

Language models may confidently fabricate false answers, especially when responding to obscure topics or when asked for citations and URLs. Providing GPT with reference text can reduce the occurrence of false information.

### 1. Use Reference Text to Construct Answers

For example:

```
When you are provided with specific articles and need to answer questions, please base your answers on the content of these articles. If the answer is not included in these articles, simply state "Unable to find the answer."

<Insert article content, with each article separated by triple quotes>

Question: <Insert question>
```

Takeaways: Since all models are limited by the context window size, we need a method to dynamically retrieve information related to the question being asked. Embeddings can be used to achieve efficient knowledge retrieval (see "Use External Tools" below).

### 2. Instruct the Model to Answer Using the Referenced Text

If the input already contains the relevant knowledge, you can directly ask the model to quote the provided documents when answering. Note that the quotes in the output can then be verified by string matching against the provided documents. For example:

```
You will receive a document marked with triple quotes and a question. Your task is to answer the question using only the provided document and to cite the parts of the document used to answer the question. If the document does not contain enough information to answer the question, simply write "Insufficient information." If an answer to the question is provided, it must be annotated with a citation. When citing relevant passages, use the following format: ({"citation": ...}).

"""<Insert document>"""

Question: <Insert question>
```

## 3. Split Complex Tasks into Simpler Subtasks

Breaking a large, complex task down into smaller, simpler subtasks is an effective method, and it applies to large models too: it helps them handle complex tasks more reliably, resulting in better performance.

### 1. Use Intent Classification to Determine the Most Relevant Instructions for a User Query

When you have many different kinds of tasks to handle, one approach is to first classify incoming queries into a few categories and then, for each category, define the specific steps needed to complete it. For example, you can define a few main task types and a fixed set of instructions for each type. The benefit is that the model does not have to handle everything at once but can proceed step by step, which reduces the chance of mistakes. It also saves cost, since handling everything in one large prompt is usually more expensive than handling it in stages (a code sketch of this routing pattern follows the prompts below).

For example, for a customer service application, queries can be classified as follows:

```
You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in json format with the keys: primary and secondary.

Primary categories: Billing, Technical Support, Account Management, or General Inquiry.

Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge

Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates

Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security

General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human
```

Now, based on Step 1, the model knows that "I'm disconnected, what should I do?" falls under Technical Support / Troubleshooting, and we can continue with Step 2, a prompt containing only the instructions for that category:

```
You will be provided with customer service inquiries that require troubleshooting in a technical support context. Help the user by:

- Asking them to check that all cables to and from the router are connected. Note that it is common for cables to come loose over time.
- If all cables are connected and the issue persists, asking them which router model they are using.
- Then advising them how to restart their device:
  - If the model number is MTD-327J, advise them to push the red button and hold it for 5 seconds, then wait 5 minutes before testing the connection.
  - If the model number is MTD-327S, advise them to unplug and replug the device, then wait 5 minutes before testing the connection.
- If the customer's issue persists after restarting the device and waiting 5 minutes, connect them to IT support by outputting {"IT support requested"}.
- If the user starts asking questions unrelated to this topic, confirm whether they would like to end the current chat about troubleshooting, and classify their request according to the classification scheme above.
```
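Below is a rough sketch of the two-step idea in code: classify the query first, then answer it with category-specific instructions. The model name, the JSON-mode `response_format`, and the `ROUTES` table are illustrative assumptions, not something prescribed by the guide.

```python
# Rough sketch: classify a customer query first, then route it to a
# category-specific instruction set. Category names follow the example above.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

CLASSIFY_SYSTEM = (
    "You will be provided with customer service queries. Classify each query "
    "into a primary and a secondary category. Respond in JSON with the keys "
    '"primary" and "secondary". Primary categories: Billing, Technical Support, '
    "Account Management, General Inquiry."
)

# Hypothetical per-category instructions; only one route is sketched here.
ROUTES = {
    "Technical Support": "Help the user troubleshoot their internet connection step by step.",
}

def handle(query: str) -> str:
    # Step 1: classification only (short, focused prompt).
    cls = client.chat.completions.create(
        model=MODEL,
        response_format={"type": "json_object"},
        messages=[{"role": "system", "content": CLASSIFY_SYSTEM},
                  {"role": "user", "content": query}],
    )
    category = json.loads(cls.choices[0].message.content)["primary"]

    # Step 2: answer using only the instructions relevant to that category.
    answer = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": ROUTES.get(category, "Answer helpfully.")},
                  {"role": "user", "content": query}],
    )
    return answer.choices[0].message.content

print(handle("I'm disconnected, what should I do?"))
```

In practice each category would carry its own instruction block, so no single prompt has to contain every rule at once.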
### 2. For Applications Requiring Long Conversations, Summarize or Filter the Previous Conversation

Since the model's context length is fixed, a conversation between the user and the assistant cannot continue indefinitely once the entire conversation history is included in the context window.

One way to address this is to summarize the earlier conversation. When the input reaches a certain length, this can trigger a query that summarizes part of the conversation, and the summary can become part of the system message. Alternatively, the previous conversation can be summarized continuously in the background throughout the conversation.

Takeaways: Although this leans towards a developer scenario, ordinary users can also use prompts to actively summarize the conversation history. For example:

```
Your task is to summarize the history of a conversation between an AI character and a human. The provided conversation comes from a fixed context window and may not be complete. Summarize what happened in the conversation from the AI's perspective (using the first person). The summary should be less than {WORD_LIMIT} words and must not exceed the word limit.
```

{WORD_LIMIT} is the desired output length.

Another method is to dynamically select the parts of the conversation most relevant to the current question. For details, see the strategy "Use Embedding-Based Search for Efficient Knowledge Retrieval" below.
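Here is a rough sketch of the "summarize when the history grows too long" idea for developers. The character threshold, the number of turns kept verbatim, the model name, and the summary prompt are all illustrative assumptions.

```python
# Rough sketch: when the chat history grows past a threshold, replace the older
# turns with a model-written summary. Threshold and prompt are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"          # placeholder model name
MAX_HISTORY_CHARS = 8000       # crude proxy for the context budget
KEEP_RECENT_TURNS = 4          # always keep the latest exchanges verbatim

def compact_history(history: list[dict]) -> list[dict]:
    total = sum(len(m["content"]) for m in history)
    if total <= MAX_HISTORY_CHARS:
        return history

    old, recent = history[:-KEEP_RECENT_TURNS], history[-KEEP_RECENT_TURNS:]
    transcript = "\n".join(f'{m["role"]}: {m["content"]}' for m in old)
    summary = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Summarize the following conversation from the AI's "
                       "perspective, in the first person, in under 150 words:\n\n"
                       + transcript,
        }],
    ).choices[0].message.content

    # The summary becomes a system note; the most recent turns stay verbatim.
    return [{"role": "system", "content": f"Summary of earlier conversation: {summary}"}] + recent
```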
### 3. Summarize Long Documents in Segments and Recursively Construct a Complete Summary

Since the model's context length is fixed, it cannot summarize, in a single pass, a text longer than the context length minus the length of the generated summary.

For example, to summarize a long book, we can use a series of queries to summarize each chapter separately. These partial summaries can then be concatenated and summarized again, forming a summary of summaries. This process can be applied recursively until the entire book is summarized. If information from earlier chapters is needed to understand later parts of the book, attaching a running summary of the preceding content when summarizing the current part is a useful technique. OpenAI has previously studied this approach to book summarization using a variant of GPT-3.

(Image: "Summarizing books with human feedback", OpenAI research)
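A minimal sketch of the recursive "summary of summaries" procedure described above; the character-based chunking, the prompts, and the model name are simplifying assumptions (a real implementation would chunk by tokens or by chapter).

```python
# Rough sketch of recursive summarization: summarize fixed-size chunks, then
# summarize the concatenated partial summaries until one summary remains.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"   # placeholder model name
CHUNK_CHARS = 6000      # crude character-based chunking for the sketch

def summarize(text: str) -> str:
    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": "Summarize the following text in one paragraph:\n\n" + text}],
    ).choices[0].message.content

def summarize_document(document: str) -> str:
    if len(document) <= CHUNK_CHARS:
        return summarize(document)
    chunks = [document[i:i + CHUNK_CHARS] for i in range(0, len(document), CHUNK_CHARS)]
    partial = "\n\n".join(summarize(chunk) for chunk in chunks)
    # Recurse on the concatenated partial summaries ("summary of summaries").
    return summarize_document(partial)
```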
## 4. Give the Model "Time" to Think

### 1. Guide the Model to Work Out Its Own Solution Before Rushing to a Conclusion

Sometimes we get better results by explicitly guiding the model to reason from first principles before reaching a conclusion. Suppose we want the model to evaluate a student's answer to a math problem. The most intuitive approach is to ask directly whether the student's answer is correct:

```
Determine whether the student's solution is correct.

Problem: I'm building a solar power installation and I need help working out the financials.
- Land costs $100 per square foot.
- I can buy solar panels for $250 per square foot.
- I negotiated a maintenance contract that will cost a flat $100,000 per year, plus $10 per square foot.
What is the total cost for the first year of operations, as a function of the number of square feet?

Student's solution: Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000

The student's solution is correct.
```

However, the student's answer is actually incorrect! By guiding the model to produce its own answer first, it can successfully identify the issue:

```
First, work out your own solution to the problem. Then compare your solution with the student's solution and evaluate whether the student's solution is correct. Don't decide whether the student's solution is correct until you have solved the problem yourself.

<same problem statement and student's solution as above>

Let x be the size of the installation in square feet.
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 10x
Total cost: 100x + 250x + 100,000 + 10x = 360x + 100,000

The student's solution is incorrect. In the maintenance cost they used 100x instead of 10x, so the correct total cost for the first year is 360x + 100,000.
```

### 2. Hide the Model's Reasoning Process Using an Inner Monologue or a Sequence of Queries

The previous strategy shows that the model sometimes needs to reason through a problem in detail before answering a specific question. However, in some applications the reasoning process is not suitable to share with users. In educational applications, for example, we may want to encourage students to think for themselves, and the model's reasoning process could inadvertently reveal the answer.

Inner monologue is an effective strategy for this situation. The idea is to instruct the model to put the parts of the output that should be hidden from the user into a structured format that is easy to parse. The output is then parsed before being shown to the user, and only part of it is displayed:

```
Follow these steps to answer the user's query.

Step 1 - First work out your own solution to the problem. Don't rely on the student's solution, since it may be incorrect. Enclose all your work for this step within triple quotes (""").

Step 2 - Compare your solution to the student's solution and evaluate whether the student's solution is correct. Enclose all your work for this step within triple quotes (""").

Step 3 - If the student made a mistake, determine what hint you could give the student without giving away the answer. Enclose all your work for this step within triple quotes (""").

Step 4 - If the student made a mistake, provide the hint from the previous step to the student (outside of triple quotes), writing "Hint:" instead of "Step 4 - ...".

Problem statement: <insert problem>
Student's solution: <insert student's solution>
```

Another way is to achieve this through a series of queries, where the results of all but the last query are hidden from the user. First, we have the model solve the problem on its own; since this initial query does not require the student's answer, it can be omitted, which ensures that the model's solution is not influenced by the student's attempt. Next, the model compares its own solution with the student's solution. Finally, using its own analysis, the model produces the user-facing reply, for example a hint delivered in the persona of a helpful tutor (a minimal sketch of this query sequence follows).
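A rough sketch of that query sequence, where only the final, user-facing reply is shown; the prompts, the model name, and the tutor persona are illustrative assumptions rather than wording from the guide.

```python
# Rough sketch of the "series of queries" variant: the first two calls stay
# internal, and only the final, student-facing message is returned.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    return client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content

def tutor_reply(problem: str, student_solution: str) -> str:
    # Query 1 (hidden): the model solves the problem on its own,
    # without seeing the student's answer.
    own_solution = ask(f"Solve this problem step by step:\n\n{problem}")

    # Query 2 (hidden): compare the two solutions.
    analysis = ask(
        f"Problem:\n{problem}\n\nYour solution:\n{own_solution}\n\n"
        f"Student's solution:\n{student_solution}\n\n"
        "Is the student's solution correct? Explain briefly."
    )

    # Query 3 (shown to the user): a hint that does not reveal the answer.
    return ask(
        "You are a friendly tutor. Based on this analysis, give the student a "
        "short hint if they made a mistake, without revealing the answer, or "
        f"an encouraging comment if they are correct:\n\n{analysis}"
    )
```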
### 3. Ask the Model Whether It Missed Anything

For example, when listing excerpts related to a specific question from a long source text, the model has to decide, after each excerpt, whether to continue or to stop. If the source text is long, the model may stop too early and miss relevant excerpts. Asking follow-up queries that search for previously omitted excerpts usually gives better results:

```
You will be provided with a document delimited by triple quotes. Your task is to select excerpts that are relevant to the following question. Make sure that each excerpt contains all the context needed to interpret it. Provide the output in JSON format as follows:

[{"excerpt": "insert the first relevant excerpt here"},
 {"excerpt": "insert the second relevant excerpt here"}]

"""<insert document>"""

Question: <insert question>
```

Follow-up query:

```
Are there more relevant excerpts? Take care not to repeat excerpts, and make sure each excerpt still contains all the context needed to interpret it.
```

## 5. Use External Tools

In short, the model can generate more accurate and timely responses by using external information provided as part of its input (the GPT plugin system demonstrated the effectiveness of this strategy).

### 1. Use Embedding-Based Search for Efficient Knowledge Retrieval

If a user asks a question about a specific movie, adding high-quality information about that movie (e.g., actors, director) to the model's input can be helpful. Embeddings can be used for efficient knowledge retrieval, so that relevant information is added to the model input dynamically at runtime.

Takeaways: Text embeddings are vectors that measure the relatedness of text strings. Related or similar strings are closer together in the embedding space than unrelated strings. This fact, combined with fast vector search algorithms, means embeddings can be used for efficient knowledge retrieval. Specifically, a text corpus can be split into chunks, and each chunk can be embedded and stored. A query can then be embedded, and a vector search can find the embedded chunks of the corpus most relevant to the query (i.e., closest to it in the embedding space). Practical implementation examples can be found in the OpenAI Cookbook.
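A minimal sketch of embedding-based retrieval with the `openai` SDK: embed the chunks once, embed the query at runtime, rank by cosine similarity, and prepend the top chunks to the prompt. The model names, example chunks, and plain-Python cosine ranking are illustrative; real systems typically use a vector store, as in the OpenAI Cookbook examples.

```python
# Minimal sketch: embed chunks once, embed the query at runtime, and add the
# most similar chunks to the prompt as reference text.
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # placeholder embedding model
CHAT_MODEL = "gpt-4o-mini"              # placeholder chat model

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

chunks = ["The film was directed by ...", "The lead actors are ...", "Box office figures ..."]
chunk_vectors = embed(chunks)           # in practice, computed once and stored

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embed([question])[0]
    ranked = sorted(zip(chunks, chunk_vectors),
                    key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
    context = "\n\n".join(c for c, _ in ranked[:top_k])
    prompt = f'Answer using only the reference text below.\n\n"""{context}"""\n\nQuestion: {question}'
    resp = client.chat.completions.create(model=CHAT_MODEL,
                                          messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content
```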
### 2. Use Code Execution for Precise Calculations or External API Calls

We can't expect a language model to reliably perform arithmetic or complex calculations on its own. When precise calculation is required, we can have the model write and run code instead of calculating by itself. Specifically, the model can be instructed to put the code it wants executed in a specific format, such as triple backticks. The generated code can then be extracted and executed, and if necessary the output of the execution engine (e.g., a Python interpreter) can be used as input for the next query. For example:

```
You can write and execute Python code by wrapping it in triple backticks, e.g. ```code goes here```. Use this method when you need to perform calculations.

Find all real-valued roots of the following polynomial: 3*x**5 - 5*x**4 - 3*x**3 - 7*x - 10.
```

### 3. Give the Model Access to Specific Functions

This is the recommended way to have OpenAI models perform external function calls, and it is aimed mostly at developers. In short, the Chat Completions API allows function descriptions to be passed in the request. The model can then generate function arguments that match those descriptions; the arguments are returned by the API in JSON format and can be used to execute the function calls. The results of the function calls can be fed back into the model, forming a closed loop. From the OpenAI function-calling documentation:

> In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.
>
> The latest models (gpt-3.5-turbo-1106 and gpt-4-1106-preview) have been trained to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also comes potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc.).
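A minimal sketch of this closed loop using the Chat Completions `tools` parameter: the model proposes a call, our code executes it, and the result is passed back for the final answer. The `get_weather` function, its schema, and the model name are hypothetical placeholders.

```python
# Minimal sketch of function calling: the model proposes arguments, our code
# runs the function, and the result is fed back for a final answer.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def get_weather(city: str) -> str:
    # Hypothetical local implementation; a real app would call a weather API.
    return f"Sunny, 22°C in {city}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
first = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]   # assumes the model chose to call the tool
args = json.loads(call.function.arguments)

# Run the function ourselves and feed the result back to the model.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": get_weather(**args)})

final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
print(final.choices[0].message.content)
```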
## 6. Evaluate Model Output Against a Standard Answer

Suppose we already know that the correct answer to a question should involve a specific set of facts. In that case, we can check whether the model-generated answer includes the necessary facts.

Takeaways: The main purpose is to help developers evaluate whether a prompt change has improved or degraded actual performance. The sample size is typically small, which makes it hard to tell whether a change is a genuine improvement or just random variation. The core idea is to track how similar the model-generated answers are to the standard answers, and to check whether the candidate answers contradict the standard answers. It is recommended to read this part in the original text: [prompt-engineering](https://platform.openai.com/docs/guides/prompt-engineering).

## References

* OpenAI Prompt Examples
* Translation by Baoyu