
Webinar Chainlit Copilot


Build your own Copilot

Instructor
Adi Gudimella
Senior AI Engineer
Microsoft
Bio

● Senior AI Engineer at Microsoft
● Over 10 years of experience in AI topics including LLMs, RL, DL, and DS
● Currently building LLM-powered applications
● Master’s in Computational Science and Engineering from Georgia Tech
● Bachelor’s in Civil Engineering from IIT Madras
Introduction
● Learn to build an AI-powered assistant using the Chainlit framework.
● Chainlit simplifies integrating large language models (LLMs) into
applications.
● Suitable for developers of all levels interested in conversational AI.
● Start by creating an app that connects to an LLM for generating
responses.
● Integrate a single data source, like a PDF, and use Retrieval-Augmented
Generation (RAG) techniques.
● Connect multiple data sources, such as PDF and CSV files, to enhance
functionality.
● Cover testing and evaluation strategies to ensure reliable AI performance.
Table of Contents
● Step 0: Echo Bot
● Step 1: No Tool LLM
○ Step 1.1: LLM connection without chat history
○ Step 1.2: Use streaming response
○ Step 1.3: Add chat history
○ Step 1 Detour: Refactor & ChatProfile
● Step 2: Assistant RAG
○ Step 2: Assistant integration
○ Step 2.1: Persist threads
○ Step 2.2: PDF QA over hardcoded file
■ Step 2.2.1: Add annotations support
○ Step 2.3: PDF QA over dynamically uploaded files
● Step 3: App connects to multiple data sources - pdf and csv files
● Step 4: Custom RAG
○ Step 4.1: Remove OpenAI FileSearch
○ Step 4.2: Simple PDF QA with RAG
● Step 5: Testing and Evaluation
○ Step 5.1: Simple integration tests
Step 0: Echo Bot
Local setup

● Clone the repo:
○ git clone https://github.com/AdityaGudimella/copilot.git
● Open the directory in the terminal:
○ cd copilot
● Install the dependencies in a Python 3.12 env:
○ pip install -e .
● Check out the appropriate tag:
○ git checkout tags/Step0
● Run the app:
○ chainlit run copilot/app.py
Local setup: Environment variables

● You need the following env vars set up:
○ COPILOT_OPENAI_API_KEY
■ Get it here
○ COPILOT_LLAMA_CLOUD_API_KEY
■ Instructions
○ COPILOT_PINECONE_API_KEY
■ Get it here
Demo: Echo Bot
Step 1: No Tool LLM
Step 1.1: LLM connection without chat history
Demo: LLM connection without chat history

● Check out the appropriate tag:
○ git checkout tags/Step1.1
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 1.2: Use streaming response

● Currently the entire answer is output at once
● Let’s change it so that each token is streamed to the UI as OpenAI generates it
Demo: Use streaming response

● Check out the appropriate tag:
○ git checkout tags/Step1.2
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 1.3: Add chat history

● Currently the copilot has no knowledge of the previous conversation
● Let’s change it so that the copilot remembers the conversation
Demo: Add chat history

● Check out the appropriate tag:
○ git checkout tags/Step1.3
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 1 Detour: Refactor and add ChatProfiles

● Clean up the code
● Allow users to choose the underlying OpenAI model
Demo: Refactor and ChatProfiles

● Check out the appropriate tag:
○ git checkout tags/Step1Detour
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 2: Assistant integration

● Use the OpenAI Assistants API to power our copilot
● Parts of an assistant:
○ Thread
○ Assistant
○ Run
Step 2.1: Persist threads

● Have the assistant remember conversations across app invocations
Step 2.2: PDF QA over hardcoded file

● We’re using OpenAI’s File Search tool, which comes with built-in RAG
Demo: PDF QA over hardcoded files

● Check out the appropriate tag:
○ git checkout tags/Step2.2
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 2.2.1: Add annotations support

● Show which file was used to answer our question.
Step 2.3: PDF QA over dynamically uploaded files

● Upload a file and ask a question about it.
Demo: PDF QA over dynamically uploaded files

● Check out the appropriate tag:
○ git checkout tags/Step2.3
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 3: App connects to multiple data sources

● Add support for answering questions over CSV files in addition to PDFs.
● Use the Assistant’s Function Calling API
○ Tool Call: a call to a function we defined for the OpenAI Assistant to use.
● Commit Diff
○ Add a function to do QA over CSV files (using llama-index)
○ Implement methods in the OpenAI Assistant EventHandler to handle tool calling.
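The Function Calling setup amounts to registering a JSON schema with the assistant so the model knows when to call our function. A sketch with illustrative names (the repo's actual function name and parameters may differ):

```python
# Tool definition passed as `tools=[csv_qa_tool]` when creating the assistant.
# The function name and parameters here are illustrative.
csv_qa_tool = {
    "type": "function",
    "function": {
        "name": "answer_csv_question",
        "description": "Answer a question about an uploaded CSV file.",
        "parameters": {
            "type": "object",
            "properties": {
                "question": {
                    "type": "string",
                    "description": "The user's question.",
                },
                "file_path": {
                    "type": "string",
                    "description": "Path to the CSV file.",
                },
            },
            "required": ["question", "file_path"],
        },
    },
}
```

When a run pauses with a `requires_action` status, the EventHandler executes the named function with the model-supplied arguments and submits the output back so the run can finish.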
Demo: CSV QA

● Check out the appropriate tag:
○ git checkout tags/Step3
● Run the app:
○ chainlit run copilot/app.py
● Required env var:
○ COPILOT_OPENAI_API_KEY
Step 4: Custom RAG

● Instead of using OpenAI’s File Search, we use llama-index for PDF QA
Step 4.1: Remove OpenAI File Search

● Commit Diff
Step 4.2: Simple PDF QA with RAG

● Use llama-index to perform PDF QA
Demo: LlamaIndex PDF QA

● Check out the appropriate tag:
○ git checkout tags/Step4.2
● Run the app:
○ chainlit run copilot/app.py
● Required env vars:
○ COPILOT_OPENAI_API_KEY
○ COPILOT_LLAMA_CLOUD_API_KEY
○ COPILOT_PINECONE_API_KEY
Step 5.1: Simple integration tests
● Commit Diff
● Advice:
○ Make heavy use of mocks
○ Instead of asserting against the response of the assistant, check that
the right tool is called
■ Then, test that tool separately
■ This will reduce the flakiness of the test
○ Have tests at two levels:
■ Check that the right tool is called at the assistant level
■ Check that the tool is giving the right response
○ For tool level tests:
■ Don’t check for exact responses
■ If possible, use an LLM to evaluate the response
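The "check that the right tool is called" advice can be sketched with stdlib mocks. Everything here is hypothetical: `answer_csv_question` and the routing stand-in are placeholders for the repo's real assistant and tool:

```python
from unittest.mock import MagicMock


def test_assistant_routes_to_csv_tool():
    # Mock the tool; the real one would run llama-index over the CSV
    csv_tool = MagicMock(return_value="The CSV has 42 rows.")

    def assistant(question: str, tools: dict) -> str:
        # Stand-in for the real routing; in the app, OpenAI picks the tool
        if "csv" in question.lower():
            return tools["answer_csv_question"](question)
        return "no tool needed"

    answer = assistant(
        "How many rows are in the csv?", {"answer_csv_question": csv_tool}
    )

    # Assert that the right tool was called, not on the model's exact wording
    csv_tool.assert_called_once()
    assert answer == "The CSV has 42 rows."


test_assistant_routes_to_csv_tool()
```

Because the assertion targets the tool call rather than generated text, the test stays stable even when the model's phrasing changes.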
Link to Code
Questions?
Thank You!
