
Ollama AI Chatbot

This document outlines the steps to set up an AI chatbot using Ollama and Streamlit, starting with the installation of Python and required libraries. It details the creation of a Streamlit application, including code for initializing the chatbot model, managing chat history, and enabling document uploads for retrieval. Finally, it explains how to run the application and interact with the AI chatbot and uploaded documents.


01: Setting up an AI chatbot with Ollama and Streamlit


Step 1: Install Python
 Download and install Python from the official website (python.org).

Step 2: Install Required Libraries


 Run the following command to install the necessary Python libraries:

pip install streamlit langchain-ollama langchain langchain-community faiss-cpu
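After installing, you can sanity-check that the libraries import cleanly. A minimal sketch (the helper name `check_imports` is ours, not part of any library):

```python
import importlib

def check_imports(modules):
    """Return a dict mapping each module name to True if it imports, else False."""
    results = {}
    for name in modules:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results

# Note: the faiss-cpu package installs a module named "faiss"
print(check_imports(["streamlit", "langchain", "langchain_ollama", "faiss"]))
```

Any `False` in the output means the corresponding pip package is missing from the active environment.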


Step 3: Set Up Ollama
 Download and install Ollama from the official website: Ollama.ai.
 Follow the installation instructions for your operating system.

 Download the Models:
Run the following commands to download the chat model and the embedding model used later for document retrieval:

ollama pull deepseek-r1
ollama pull nomic-embed-text


Verify the Ollama Installation
ollama serve  # keep the server running in the background
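To confirm the server is actually reachable, you can query Ollama's REST API: the `/api/tags` endpoint lists the locally installed models. This small sketch returns `None` when the server is not running (host and port are Ollama's defaults):

```python
import json
import urllib.request
import urllib.error

def list_ollama_models(host="http://localhost:11434"):
    """Return the list of locally installed model names,
    or None if the Ollama server is unreachable."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = list_ollama_models()
if models is None:
    print("Ollama server is not running -- start it with `ollama serve`.")
else:
    print("Installed models:", models)
```

If the server is running, you should see both deepseek-r1 and nomic-embed-text in the list.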
Step 4: Create the Streamlit Application
Create a Python File:
 Create a new file named ollama_chatbot.py.
Add the Code:
import streamlit as st
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.memory import ChatMessageHistory
from langchain.schema import HumanMessage, AIMessage, Document

# Initialize the chatbot model
llm = ChatOllama(model="deepseek-r1:latest")

# Set up Streamlit UI
st.title("Ollama AI Chatbot")
st.sidebar.header("Chat Settings")

# Initialize chat history
if "history" not in st.session_state:
    st.session_state.history = ChatMessageHistory()

# User input
user_input = st.text_input("Ask me anything:")
if st.button("Send") and user_input:
    # Add user message to history
    st.session_state.history.add_user_message(user_input)

    # Get AI response
    ai_response = llm.invoke(st.session_state.history.messages)

    # Add AI message to history
    st.session_state.history.add_ai_message(ai_response.content)

# Display chat history
for msg in st.session_state.history.messages:
    if isinstance(msg, HumanMessage):
        st.write(f"**You:** {msg.content}")
    elif isinstance(msg, AIMessage):
        st.write(f"**AI:** {msg.content}")

# Document Retrieval Setup
st.sidebar.subheader("Upload Document for Retrieval")
uploaded_file = st.sidebar.file_uploader("Upload a .txt file", type=["txt"])

@st.cache_resource
def load_and_embed_docs(file):
    """Loads and embeds the document using FAISS."""
    # Read the content of the uploaded file
    file_content = file.read().decode("utf-8")

    # Wrap the raw text in a Document directly (TextLoader expects a
    # file path, not a file-like object, so it cannot be used here)
    documents = [Document(page_content=file_content)]

    # Split document into chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=150,
                                                   chunk_overlap=20)
    texts = text_splitter.split_documents(documents)

    # Create FAISS vector database
    embeddings = OllamaEmbeddings(model="nomic-embed-text:latest")
    db = FAISS.from_documents(texts, embeddings)

    return db.as_retriever()

if uploaded_file:
    retriever = load_and_embed_docs(uploaded_file)

    # Retrieve relevant document section
    query = st.sidebar.text_input("Search in Document")
    if st.sidebar.button("Retrieve") and query:
        docs = retriever.invoke(query)
        st.sidebar.write("### Retrieved Text:")
        for doc in docs:
            st.sidebar.write(doc.page_content[:300])
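The RecursiveCharacterTextSplitter call above breaks the document into roughly 150-character chunks that share a 20-character overlap, so context is not lost at chunk boundaries. A simplified pure-Python sketch of that idea (fixed-size windowing, not the splitter's real recursive separator logic):

```python
def split_with_overlap(text, chunk_size=150, chunk_overlap=20):
    """Split text into fixed-size chunks where consecutive chunks share
    chunk_overlap characters (a simplified stand-in for LangChain's splitter)."""
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = split_with_overlap("a" * 400, chunk_size=150, chunk_overlap=20)
print(len(chunks), [len(c) for c in chunks])  # -> 3 [150, 150, 140]
```

The overlap means the last 20 characters of one chunk reappear at the start of the next, which helps retrieval find passages that straddle a boundary.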

Step 5: Run the Application

streamlit run ollama_chatbot.py

Step 6: Access the Chatbot

 The application will open in your default web browser.
 You can interact with the chatbot in the main window and upload documents in the sidebar.

Step 7: Test the Application
1. Chat with the AI:
 Type a question or message in the chat input box and click "Send".
 The AI will respond based on the Ollama model.
2. Upload a Document:
 In the sidebar, upload a .txt file.
 Use the "Search in Document" feature to retrieve relevant sections from the document.

This application allows you to chat with an AI model and retrieve information from uploaded documents.
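Under the hood, the retriever works by embedding the query and every chunk as vectors and returning the chunks whose vectors are closest to the query's. A toy illustration of that ranking using hand-made 3-dimensional vectors (real Ollama embeddings have hundreds of dimensions, and FAISS does the search far more efficiently):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": each chunk and the query mapped to a 3-d vector by hand
chunk_vectors = {
    "Ollama runs models locally": [0.9, 0.1, 0.0],
    "Streamlit builds web UIs": [0.1, 0.9, 0.0],
    "FAISS searches vectors fast": [0.0, 0.2, 0.9],
}
query_vector = [0.8, 0.2, 0.1]  # pretend embedding of "local model runner"

# Rank chunks by similarity to the query, best first
ranked = sorted(chunk_vectors,
                key=lambda c: cosine_similarity(chunk_vectors[c], query_vector),
                reverse=True)
print(ranked[0])  # the chunk about Ollama ranks highest
```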
Attached screenshot for your reference:
