RAG Notes
Retrieval-augmented generation (RAG) combines large language models (LLMs) with retrieval from external knowledge sources.

Q.1 Explain the main parts of a RAG system and how they work.
Ans. A RAG (retrieval-augmented generation) system has two main components: the retriever and the generator. The retriever searches for and collects relevant information from external sources, like databases, documents, or websites. The generator, usually an advanced language model, uses this information to create clear and accurate text. The retriever makes sure the system gets the most up-to-date information, while the generator combines this with its own knowledge to produce better answers. Together, they provide more accurate responses than the generator could on its own.

Q.2 What are the main benefits of using RAG instead of just relying on an LLM's internal knowledge?
Ans. If you rely only on an LLM's built-in knowledge, the system is limited to what it was trained on, which could be outdated or lacking detail. RAG addresses this by retrieving relevant, up-to-date information from external sources at query time. This approach also reduces "hallucinations" (errors where the model makes up facts) because the answers are grounded in real data. RAG is especially helpful for specific fields like law, medicine, or tech, where up-to-date, specialized knowledge is needed.

Q.3 What types of external knowledge sources can RAG use?
Ans. RAG systems can gather information from both structured and unstructured external sources:
• Structured sources include databases, APIs, or knowledge graphs, where data is organized and easy to search.
• Unstructured sources consist of large collections of text, such as documents, websites, or archives, where the information needs to be processed using natural language understanding.
This flexibility allows RAG systems to be tailored to different fields, such as legal or medical use, by pulling from case law databases, research journals, or clinical trial data.

Q.4 Does prompt engineering matter in RAG?
Ans. Prompt engineering helps language models provide high-quality responses using the retrieved information. How you design a prompt can affect the relevance and clarity of the output.
• Specific system prompt templates help guide the model. For example, instead of a simple out-of-the-box system prompt like "Answer the question," you might use "Answer the question based only on the context provided." This gives the model explicit instructions to use only the provided context, which can reduce the probability of hallucinations.
• Few-shot prompting involves giving the model a few example responses before asking it to generate its own, so it knows the type of response you're looking for.
• Chain-of-thought prompting helps break down complex questions by encouraging the model to explain its reasoning step-by-step before answering.

Q.5 How does the retriever work in a RAG system? What are common retrieval methods?
Ans. In a RAG system, the retriever gathers relevant information from external sources for the generator to use. There are different ways to retrieve information.
One method is sparse retrieval, which matches keywords (e.g., TF-IDF or BM25). This is simple but may not capture the deeper meaning behind the words.
Another approach is dense retrieval, which uses neural embeddings to understand the meaning of documents and queries. Methods like BERT or Dense Passage Retrieval (DPR) represent documents as vectors in a shared space, making retrieval more accurate.
The choice between these methods can greatly affect how well the RAG system works.
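To make the sparse option from Q.5 concrete, here is a minimal sketch of keyword-based retrieval using TF-IDF and cosine similarity. It assumes scikit-learn is installed; the documents and query are toy examples, not part of the original notes.

```python
# Minimal sparse (keyword-based) retrieval sketch with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "RAG combines a retriever with a language model generator.",
    "BM25 and TF-IDF are sparse, keyword-based retrieval methods.",
    "Dense Passage Retrieval encodes queries and documents as vectors.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)   # one sparse vector per document

query = "keyword based retrieval"
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query vector.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```

A dense retriever would follow the same ranking pattern, but replace the TF-IDF vectors with neural embeddings of the query and documents.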
Q.6 What are the challenges of combining retrieved information with LLM generation?
Ans. Combining retrieved information with an LLM's generation presents some challenges. For instance, the retrieved data must be highly relevant to the query, as irrelevant data can confuse the model and reduce the quality of the response. Additionally, if the retrieved information conflicts with the model's internal knowledge, it can create confusing or inaccurate answers. As such, resolving these conflicts without confusing the user is crucial. Finally, the style and format of retrieved data may not always match the model's usual writing or formatting, making it hard for the model to integrate the information smoothly.

Q.7 What is the role of a vector database in RAG?
Ans. In a RAG system, a vector database helps manage and store dense embeddings of text. These embeddings are numerical representations that capture the meaning of words and phrases, created by models like BERT or OpenAI's embedding models. When a query is made, its embedding is compared to the stored ones in the database to find similar documents. This makes it faster and more accurate to retrieve the right information. The process helps the system quickly locate and pull up the most relevant information, improving both the speed and accuracy of retrieval.

Q.8 What are some common ways to evaluate RAG systems?
Ans. To evaluate a RAG system, you need to look at both the retrieval and the generation components.
• For the retriever, metrics like precision (how many retrieved documents are relevant) and recall (how many of the total relevant documents were found) can be used here.
• For the generator, metrics like BLEU and ROUGE can be used to compare the generated text to human-written examples to gauge quality.
• For downstream tasks like question answering, metrics like F1 score, precision, and recall can also be used to evaluate the overall RAG system.
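As a concrete illustration of the retrieval metrics in Q.8, here is a minimal sketch that computes precision, recall, and F1 for a single query. The document IDs and relevance labels are made-up examples.

```python
# Retrieval evaluation sketch: precision, recall, and F1 for one query.
def precision_recall_f1(retrieved_ids, relevant_ids):
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = len(retrieved & relevant)                  # relevant documents actually retrieved
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: the retriever returned 4 documents, 2 of which are actually relevant.
retrieved = ["doc1", "doc4", "doc7", "doc9"]
relevant = ["doc4", "doc7", "doc8"]
p, r, f1 = precision_recall_f1(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.50 recall=0.67 f1=0.57
```

In practice these scores are averaged over a set of evaluation queries, and generation quality is scored separately with metrics like BLEU or ROUGE.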
Q.9 How do you handle ambiguous or incomplete queries in a RAG system to ensure relevant results?
Ans. Handling ambiguous or incomplete queries in a RAG system requires strategies to ensure that relevant and accurate information is retrieved despite the lack of clarity in the user's input.
One approach is to implement query refinement techniques, where the system automatically suggests clarifications or reformulates the ambiguous query into a more precise one based on known patterns or previous interactions. This can involve asking follow-up questions or presenting the user with multiple options to narrow down their intent.
Another method is to retrieve a diverse set of documents that cover multiple possible interpretations of the query. By retrieving a range of results, the system ensures that even if the query is vague, some relevant information is likely to be covered.

Intermediate RAG Interview Questions

Q.10 How do you choose the right retriever for a RAG application?
Ans. Choosing the right retriever depends on the type of data you're working with, the nature of the queries, and how much computing power you have.
For complex queries that need a deep understanding of the meaning behind words, dense retrieval methods like BERT or DPR are better. These methods capture context and are ideal for tasks like customer support or research, where understanding the underlying meaning matters.
If the task is simpler and revolves around keyword matching, or if you have limited computational resources, sparse retrieval methods such as BM25 or TF-IDF might be more suitable. These methods are quicker and easier to set up but might not find documents that don't match exact keywords.
The main trade-off between dense and sparse retrieval methods is accuracy versus computational cost. Sometimes, combining both approaches in a hybrid retrieval system can help balance accuracy with computational efficiency. This way, you get the benefits of both dense and sparse methods depending on your needs.

Q.11 Describe what a hybrid search is.
Ans. Hybrid search combines the strengths of both dense and sparse retrieval methods. For instance, you can start with a sparse method like BM25 to quickly find documents based on keywords. Then, a dense method like BERT re-ranks those documents by understanding their context and meaning. This gives you the speed of sparse search with the accuracy of dense methods, which is great for complex queries and large datasets. (A short sketch of this two-stage approach follows Q.12 below.)

Q.12 Do you need a vector database to implement RAG? If not, what are the alternatives?
Ans. A vector database is great for managing dense embeddings, but it's not always necessary. Alternatives include:
• Traditional databases: If you're using sparse methods or structured data, regular relational or NoSQL databases can be enough. They work well for keyword searches. Databases like MongoDB or Elasticsearch are good for handling unstructured data and full-text searches, but they lack deep semantic search.
• Inverted indices: These map keywords to documents for fast searches, but they don't capture the meaning behind the words.
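Here is the hybrid search sketch referenced in Q.11: a sparse BM25 pass selects keyword-matched candidates, then a dense embedding model re-ranks them by meaning. It assumes the rank_bm25 and sentence-transformers packages are installed; the corpus, query, and model choice are illustrative, not taken from the original notes.

```python
# Hybrid search sketch: BM25 candidate retrieval followed by dense re-ranking.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = [
    "RAG pairs a retriever with a large language model.",
    "BM25 scores documents by keyword overlap with the query.",
    "Dense retrievers embed queries and documents into the same vector space.",
]

# Stage 1: sparse retrieval (fast keyword matching) to pick candidates.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
query = "how does dense retrieval work"
sparse_scores = bm25.get_scores(query.lower().split())
candidate_ids = np.argsort(sparse_scores)[::-1][:2]      # keep the top-2 candidates

# Stage 2: dense re-ranking of the candidates by embedding similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")           # illustrative model choice
query_emb = model.encode(query, normalize_embeddings=True)
cand_embs = model.encode([corpus[i] for i in candidate_ids], normalize_embeddings=True)
dense_scores = cand_embs @ query_emb                      # cosine similarity (vectors are normalized)

for idx, score in sorted(zip(candidate_ids, dense_scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {corpus[idx]}")
```

The sparse stage keeps the expensive embedding model off most of the corpus, which is the efficiency/accuracy balance described in Q.10 and Q.11.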