In the previous post, I covered the installation and usage of Ollama and the basic functionalities of an LLM with interactions via the CLI. However, that left me wondering: is that all we can do with these language models?
Are these capabilities limited to the terminal? Can’t we use them in a Python environment? You are about to find the answers to these questions!
In this post, we will build a text summarizer using an LLM in a Python environment. Once we can set up the LLM in an interactive environment, we can extend its functionality and build an interface for this task using Gradio.
Before we get into the coding part, let us understand the libraries/APIs that help us achieve this task.
Recommended Read: Machine Learning Workflows using PyCaret
1. Which Libraries Are We Using?
In this section, we will look at which libraries are being used and why.
Ollama
Ollama is a tool, with an accompanying Python library, that supports running a wide variety of large language models both locally and in the cloud. In other words, we can say Ollama hosts many state-of-the-art language models that are open-sourced and free to use.
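As a quick refresher from the previous post, models are pulled and run through the Ollama CLI. Here is a minimal sketch, assuming we use the phi3 model throughout:
ollama pull phi3
ollama run phi3
The first command downloads the model to the local machine, and the second starts an interactive chat session in the terminal.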
Langchain Community
The LangChain framework is used to build, deploy, and manage LLM applications by chaining interoperable components. LangChain Community is a part of the parent framework that provides the integrations used to interact with large language models and third-party APIs.
Gradio
Gradio is a Python library specifically designed to build and share machine-learning applications. It provides a clean interface, and apps built with Gradio can be deployed on Hugging Face Spaces.
2. How to Develop a Text Summarizer?
A good practice when working on projects is to always create a separate virtual environment, so that the versions of the dependencies required for the task don’t conflict with the versions installed outside the environment.
For this post, I’m going to use Visual Studio Code on Windows. To create a virtual environment in VS Code (Windows), use the following command:
python -m venv <name_of_the_environment>
In this way, I have created a virtual environment called llmenv.
To activate the virtual environment, run this command:
.\llmenv\Scripts\Activate
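On macOS or Linux, the equivalent activation command is:
source llmenv/bin/activate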
Once the virtual environment is activated, we need to install the required libraries through the terminal.
pip install gradio
pip install langchain_community
Another thing to remember before we start is that the desired LLM must be running locally on the device through Ollama.
If you haven’t installed Ollama and run LLMs locally yet, please refer to the previous post [Link to the previous post].
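To make sure the model is ready before we write any Python, we can pull it and list the locally available models. A quick sketch, assuming phi3 is the model we want:
ollama pull phi3
ollama list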
Let us get started!
Since we are using Ollama services to run LLMs, we need to import the Ollama class from the langchain_community package.
from langchain_community.llms import Ollama
Next up, we will check if we can access the local LLM through the Python environment.
llm = Ollama(model="phi3")  # the model must already be available locally via Ollama
response = llm.invoke("Tell me about Barack Obama")
print(response)
Using the Ollama class, we have defined a handle to the large language model that is currently running locally. To test whether the setup was successful, let us give the LLM a prompt. When this cell is run, the LLM gives the following output.

This means the setup was successful!
Now, let us define a simple function that takes a text and passes it to the LLM, which returns a summary of the text.
def summarize_text(text):
    # Wrap the input text in a summarization prompt
    prompt = f"Summarize the following text:\n\n{text}\n\nSummary:"
    response = llm.invoke(prompt)
    return response
Let us call this function using a sample text.
text_to_summarize = """
The rise of artificial intelligence has led to significant advancements in various fields,
from healthcare to finance. AI systems are now capable of performing tasks that were once
thought to be the exclusive domain of humans. However, this progress also raises ethical
concerns, including the potential for job displacement, privacy issues, and the need for
regulatory oversight. As AI continues to evolve, it will be crucial to balance innovation
with ethical considerations to ensure that the technology benefits society as a whole.
"""
summary = summarize_text(text_to_summarize)
print(summary)

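If the summaries come out too long or too random, the generation behavior can be tuned when the model handle is created. This is an optional sketch rather than part of the core walkthrough; temperature and num_predict are parameters accepted by the Ollama class in langchain_community:
# Optional: a more deterministic, length-capped model handle
tuned_llm = Ollama(
    model="phi3",
    temperature=0.2,  # lower temperature gives more focused, less random output
    num_predict=128,  # cap the number of tokens generated
)
print(tuned_llm.invoke(f"Summarize the following text:\n\n{text_to_summarize}\n\nSummary:"))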
We have successfully completed the first step of building a text summarizer! Let us now create an interface for this task using Gradio.
Here is a simple example to understand the workflow of gradio.
import gradio as gr

def morning(name):
    return "Good Morning " + name + "!"

demo = gr.Interface(fn=morning, inputs="text", outputs="text")
demo.launch()
We have defined a function called morning which takes the user’s name and greets them with “Good Morning”. This function is passed as the fn parameter to Gradio’s Interface class, which also takes additional parameters like inputs (the user’s name) and outputs. This whole functionality is encapsulated in demo. When the demo is launched, we get the following output.

If we enter the name in the input field, the output field is populated by the message.

We are going to use gr.Interface and its launch method in the text summarizer application.
The text summarizer function (summarize_text) stays the same, and it is passed as a parameter to gr.Interface.
def summarize_text(text):
    # Wrap the input text in a summarization prompt
    prompt = f"Summarize the following text:\n\n{text}\n\nSummary:"
    response = llm.invoke(prompt)
    return response

# Use a name other than 'sum' so we don't shadow Python's built-in sum()
summarizer = gr.Interface(fn=summarize_text, inputs="text", outputs="text")
summarizer.launch(server_port=7862)
We can also specify a port explicitly, as shown above, in case Gradio’s default port (7860) is busy at the time.
In the output, we can see that the application is running at a localhost URL. If we click the URL, the same input-output interface opens up in a browser.
We can enter the text to be summarized in the input field.

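As an optional touch-up, not part of the original walkthrough, gr.Interface also accepts richer components and metadata; here is a sketch using gr.Textbox along with the title and description arguments:
# Optional: a slightly more polished interface
summarizer = gr.Interface(
    fn=summarize_text,
    inputs=gr.Textbox(lines=10, label="Text to summarize"),
    outputs=gr.Textbox(label="Summary"),
    title="Local LLM Text Summarizer",
    description="Summarize text with a model running locally via Ollama.",
)
summarizer.launch(server_port=7862)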
That is how we used these powerful libraries to build a text summarizer with a large language model running locally on the PC!
Summary
In this post, we have understood how to use a model running locally on the computer to summarize text input in a Python environment. There’s more to explore in large language models, like advanced supporting concepts such as Retrieval-Augmented Generation (RAG) and multimodal LLMs.
We are going to dive right into these concepts in the coming posts!