Major Project
titled
VIRTUAL ASSISTANT
Submitted in partial fulfilment of the requirements for the award of the degree of
Bachelor of Technology
In
Computer Science and Engineering
By
TANGUTURI YASWANTH (O190999)
AMPOLU PAVANI (O190278)
GUNTURU PRASANNA BABU (O190847)
PEPAKAYALA MEGHANA SRI NAGA SOWMYA (O191041)
SAKA MADHURIMA (O190862)
During
Fourth Year Semester-II
RAJIV GANDHI UNIVERSITY OF KNOWLEDGE TECHNOLOGIES-A.P.
ONGOLE CAMPUS
Kurnool Road, Ongole, Prakasam Dt., Andhra Pradesh - 523225
A.Y. 2024-25
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
This is to certify that the project report entitled “VIRTUAL ASSISTANT”
submitted by TANGUTURI YASWANTH (O190999), AMPOLU PAVANI (O190278),
GUNTURU PRASANNA BABU (O190847), PEPAKAYALA MEGHANA SRI NAGA
SOWMYA (O191041) and SAKA MADHURIMA (O190862) for the Major Project during
Fourth Year Semester-II, in partial fulfilment of the requirements for the award of the degree of
Bachelor of Technology in Computer Science and Engineering at Rajiv Gandhi University
of Knowledge Technologies - A.P., ONGOLE CAMPUS, is a record of bonafide work carried
out by them under my guidance and supervision during the academic year 2024-2025.
The results presented in this project have been verified and found to be excellent. The report
has not been submitted previously, in part or in full, to this or any other university or institution
for the award of any degree.
Date: /04/2025
Place: ONGOLE
APPROVAL SHEET
Examiner(s)
Supervisor(s)
Date : /04/2025
Place : ONGOLE
ACKNOWLEDGMENT
It is our privilege and pleasure to express a profound sense of respect, gratitude and
indebtedness to our guide Mr. B. SAMPATH BABU, Assistant Professor, Dept. of Computer
Science and Engineering, Rajiv Gandhi University of Knowledge Technologies-A.P., ONGOLE
CAMPUS, for his indefatigable inspiration, illuminating guidance, cogent discussion,
constructive criticisms and encouragement throughout this dissertation work.
We express our sincere gratitude to Mr. NANDI MALLIKARJUNA, Assistant Professor &
Head, Department of Computer Science and Engineering, Rajiv Gandhi University of Knowledge
Technologies-A.P., ONGOLE CAMPUS, for his suggestions, motivation and co-operation towards
the successful completion of the work.
We extend our sincere thanks to our Academic Dean Mr. MESSALA RUPAS KUMAR,
for his encouragement and constant help.
We convey our profound thanks and gratitude to Dr. BHASKAR PATEL, Director,
Rajiv Gandhi University of Knowledge Technologies-A.P., ONGOLE CAMPUS, for providing us
an excellent academic climate which made this endeavour possible.
We thank all our department teaching and non-teaching staff, who have helped directly and
indirectly to complete this project in time. We also extend our deep gratitude to our beloved
family members for their moral support, encouragement and financial support to carry out this
project. Finally, we express our heartfelt and deep sense of gratitude to all faculty members in
our division and to our friends for their helping hands, valuable support and encouragement
during the project work.
DECLARATION
We hereby declare that the project work entitled “VIRTUAL ASSISTANT” submitted to the
Rajiv Gandhi University of Knowledge Technologies-A.P., ONGOLE CAMPUS for the Major
Project during Fourth Year Semester-II, in partial fulfilment of the requirements for the award of
the degree of Bachelor of Technology (B.Tech) in Computer Science and Engineering, is a record
of original work done by us under the guidance of Mr. B. Sampath Babu, Assistant Professor, and
this project work has not been submitted to any other university for the award of any other degree
or diploma.
Date: /04/2025
Place: ONGOLE
ABSTRACT
Alice is a voice-enabled virtual assistant built using Python and Streamlit. The
assistant integrates natural language processing, speech recognition, and text-to-speech
technologies to deliver an interactive, user-friendly web-based interface. Alice can understand and
respond to both spoken and typed queries, providing functionalities such as weather updates, web
browsing assistance, real-time information retrieval, and entertainment through jokes. The
assistant leverages Google search scraping for weather data, and seamlessly connects users to
platforms like YouTube, Spotify, Google, and ChatGPT.
The speech recognition and synthesis modules are abstracted to handle voice input and output
independently of the main application. The speech recognition component uses the
SpeechRecognition library to capture audio input from the user via a microphone and convert it
into text. This abstraction allows the assistant to understand and process spoken commands, such
as asking for the time or searching the web. The text-to-speech module, powered by pyttsx3,
converts textual responses back into speech, allowing the assistant to audibly reply to the user’s
requests. These two components are designed to work together seamlessly, enabling the virtual
assistant to function with minimal user intervention.
Designed with a modular and intuitive layout, the application demonstrates how modern
Python libraries like speech_recognition, pyttsx3, and streamlit_option_menu can be combined to
create a responsive and intelligent personal assistant. This project showcases the potential of
accessible AI tools in enhancing digital user experiences.
In addition to its core features, Alice is built with scalability and flexibility in mind.
Developers can easily extend its capabilities by integrating third-party APIs for more accurate
weather, news, or media services. The modular design also allows for features like user
authentication, calendar scheduling, or smart home control to be added without disrupting the core
logic. By leveraging open-source libraries and maintaining clean abstractions, Alice provides a
strong foundation for building tailored AI assistants across a variety of use cases—from
productivity tools to accessible technology for users with special needs.
CONTENTS
S.NO TITLE
1 INTRODUCTION
1.1 MOTIVATION
2 LITERATURE SURVEY
3 ANALYSIS
3.3.1 PURPOSE
3.3.2 SCOPE
4 DESIGN
5 IMPLEMENTATION
5.1 MODULES
6 TEST CASES
7 SCREENSHOTS
8 CONCLUSION
9 FUTURE ENHANCEMENT
10 BIBLIOGRAPHY
1. INTRODUCTION
Alice is a modular, voice-enabled virtual assistant built with Python and Streamlit, offering hands-free
interaction for tasks like weather updates, web browsing, and entertainment. It combines speech
recognition, text-to-speech, and an intuitive interface to enhance accessibility and user experience.
1.1 MOTIVATION
The motivation for developing Alice is to address the need for flexible, intuitive, and
cross-platform virtual assistants that simplify daily tasks through natural, hands-free interaction.
Unlike many platform-restricted solutions, Alice offers a web-based, accessible experience powered
by lightweight Python tools. The project showcases how AI-driven assistants can enhance
productivity and user engagement while demonstrating the potential of scalable, Python-based digital
solutions.
This project addresses the complexity of managing everyday digital tasks by creating a
web-based virtual assistant that simplifies interactions through voice and text commands. Unlike
existing platform-restricted assistants, it offers hands-free operation and real-time responses for tasks
like opening websites, checking the time, and telling jokes. By integrating speech recognition and
text-to-speech, the assistant enhances accessibility, efficiency, and user control in a fragmented digital
environment.
1.3 OBJECTIVES
The objective of this project is to develop Alice, a voice- and text-enabled virtual assistant
that simplifies everyday tasks through an interactive Streamlit web interface. Using
SpeechRecognition and pyttsx3, Alice processes voice commands and provides audible responses for
tasks like opening websites, telling jokes, and reporting the time. It includes session-based
conversation history for dynamic interaction and showcases how Python libraries can be used to build
scalable, AI-powered applications. The project enhances productivity, supports hands-free use, and
serves as a proof of concept for accessible, adaptable virtual assistant solutions.
2. LITERATURE SURVEY
The development of virtual assistants has seen significant advancements over the past decade,
largely driven by improvements in natural language processing (NLP), speech recognition, and
artificial intelligence (AI). Pioneering systems such as Apple’s Siri, Google Assistant, Amazon’s
Alexa, and Microsoft’s Cortana have popularized the concept of hands-free interaction through
voice commands. These systems are typically embedded within their respective ecosystems,
offering powerful capabilities but often limited in cross-platform accessibility and customization.
Several academic projects and papers have explored modular virtual assistant architectures to
allow for easy feature extension and cross-platform deployment. These projects often highlight
the importance of event-driven design, session-based interaction history, and real-time response
generation for delivering engaging user experiences.
Alice builds upon these concepts by offering a web-based, cross-platform assistant that
leverages a combination of these open-source tools to perform essential digital tasks like opening
websites, reporting time, telling jokes, and more. Unlike traditional assistants tied to specific
devices, Alice emphasizes accessibility, ease of use, and flexibility, demonstrating a scalable
model for developing personalized and extensible voice-based applications.
3. ANALYSIS
This project presents an efficient voice recognition approach for a virtual assistant, addressing
limitations in existing solutions by using natural language processing to execute tasks via voice
commands, reducing reliance on input devices like keyboards. It leverages APIs, particularly Google
Speech API, to convert voice to text, which is then matched against a command database to trigger
appropriate responses. While the system achieves high accuracy, it faces drawbacks such as increased
task completion time and algorithmic complexity, making future modifications challenging.
The proposed Alice Virtual Assistant is a web-based, interactive system that simplifies user
interactions through voice and text commands. It uses Speech Recognition, Text-to-Speech, and Task
Execution to perform functions like opening websites and reporting time. Built with a responsive
Streamlit interface and session-based history, its modular, real-time design enhances productivity,
accessibility, and user experience in a lightweight environment.
3.3.1 PURPOSE
The purpose of this project is to create an interactive virtual assistant named Alice that can
respond to both voice and text inputs. It aims to simplify daily digital tasks like opening websites,
telling jokes, reporting the current time, and engaging in basic conversations. The assistant also uses
text-to-speech technology to deliver audible responses, enhancing user experience. Built using Python
and Streamlit, this project offers a simple, user-friendly, and accessible web-based interface. It
demonstrates the potential of combining AI with voice and web technologies to improve
human-computer interaction.
3.3.2 SCOPE
The Virtual Assistant Alice is designed to provide users with an interactive, AI-powered
assistant that responds to both voice and text commands. It can perform basic tasks like greeting
users, telling jokes, reporting the current time, opening websites, and speaking responses aloud. This
project focuses on creating a simple, web-based, voice-enabled assistant using Python and Streamlit,
laying the foundation for future smart assistant features.
4. DESIGN
The system design is modelled using the following UML diagrams:
● Class Diagram
● Object Diagram
● Component Diagram
● Profile Diagram
● Composite Structure Diagram
● Package Diagram
● Sequence Diagram
● Communication Diagram
● Activity Diagram
● Timing Diagram
● Use Case Diagram
● Deployment Diagram
CLASS DIAGRAM
The class diagram is the main building block of object-oriented modeling. It is used for general
conceptual modeling of the structure of the application, and for detailed modeling translating the
models into programming code. Class diagrams can also be used for data modeling.
OBJECT DIAGRAM
It describes the static structure of a system at a particular point in time. It can be used to test the
accuracy of class diagrams. It represents distinct instances of classes and the relationship between
them at a time.
COMPONENT DIAGRAM
Component diagrams are used in modeling the physical aspects of object-oriented systems that are
used for visualizing, specifying, and documenting component-based systems and also for constructing
executable systems through forward and reverse engineering. Component diagrams are essentially class
diagrams that focus on a system's components and are often used to model the static implementation
view of a system.
PROFILE DIAGRAM
Profile diagram, a kind of structural diagram in the Unified Modeling Language (UML), provides a
generic extension mechanism for customizing UML models for particular domains and platforms.
Extension mechanisms allow refining standard semantics in a strictly additive manner, preventing them
from contradicting standard semantics.
COMPOSITE STRUCTURE DIAGRAM
Composite Structure Diagram is one of the new artifacts added to UML 2.0. A composite structure
diagram is a UML structural diagram that contains classes, interfaces, packages, and their relationships,
and that provides a logical view of all, or part of a software system. It shows the internal structure
(including parts and connectors) of a structured classifier or collaboration.
PACKAGE DIAGRAM
Package diagrams are used, in part, to depict import and access dependencies between packages,
classes, components, and other named elements within your system. Each dependency is rendered as a
connecting line with an arrow representing the type of relationship between the two or more elements.
SEQUENCE DIAGRAM
A sequence diagram consists of a group of objects that are represented by lifelines, and the
messages that they exchange over time during the interaction. A sequence diagram shows the sequence
of messages passed between objects. Sequence diagrams can also show the control structures between
objects.
COMMUNICATION DIAGRAM
A Communication diagram models the interactions between objects or parts in terms of sequenced
messages. Communication diagrams represent a combination of information taken from Class, Sequence,
and Use Case Diagrams describing both the static structure and dynamic behaviour of a system.
ACTIVITY DIAGRAM
An activity diagram visually presents a series of actions or flow of control in a system similar to a
flowchart or a data flow diagram. Activity diagrams are often used in business process modeling. They
can also describe the steps in a use case diagram. The activities modelled can be sequential or concurrent.
TIMING DIAGRAM
A timing diagram includes timing data for at least one horizontal lifeline, with vertical messages
exchanged between states. Timing diagrams represent timing data for individual classifiers and interactions
of classifiers. You can use this diagram to provide a snapshot of timing data for a particular part of a system.
USE CASE DIAGRAM
Use-case diagrams describe the high-level functions and scope of a system. These diagrams also
identify the interactions between the system and its actors. The use cases and actors in use-case diagrams
describe what the system does and how the actors use it, but not how the system operates internally.
DEPLOYMENT DIAGRAM
A deployment diagram is a UML diagram type that shows the execution architecture of a system,
including nodes such as hardware or software execution environments, and the middleware
connecting them. Deployment diagrams are typically used to visualize the physical hardware and
software of a system.
5. IMPLEMENTATION
5.1 MODULES:
1. USER INTERFACE(UI) MODULE
2. SPEECH RECOGNITION MODULE
3. TEXT TO SPEECH MODULE
The Speech Recognition module also:
● Passes transcribed input to the main assistant logic
● Alerts users if speech input is not recognized or too noisy
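The speech recognition and text-to-speech modules described above can be sketched as two small helper functions. This is a minimal illustration based on the SpeechRecognition and pyttsx3 APIs named in the report; the function names and the use of the free Google Web Speech recognizer are assumptions, not the project's exact code.

```python
def speech_to_text(timeout=5):
    """Capture one utterance from the microphone and return it as text (None on failure)."""
    import speech_recognition as sr  # deferred import so the sketch loads without audio hardware
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)   # compensate for background noise
        audio = recognizer.listen(source, timeout=timeout)
    try:
        return recognizer.recognize_google(audio)     # free Google Web Speech API
    except (sr.UnknownValueError, sr.RequestError):   # unintelligible speech or network error
        return None

def text_to_speech(text):
    """Speak the given text aloud, blocking until playback finishes."""
    import pyttsx3  # deferred import for the same reason
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```

Keeping the two helpers independent of the Streamlit UI is what lets the rest of the application treat voice and typed input interchangeably.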
5.3 INTRODUCTION OF TECHNOLOGIES USED:
● Python: The core programming language used for building the assistant’s functionalities,
integrating different libraries, and managing the application flow.
● Streamlit : For building the web-based user interface (UI) for interaction.
● SpeechRecognition : To capture and convert voice input from the user into text.
● pyttsx3 : For converting text responses into spoken audio (Text-to-Speech).
● webbrowser : To open web pages like Google, YouTube, and Spotify directly from
commands.
● datetime : To fetch and display the current date and time.
● requests_html (HTMLSession) : For scraping live weather information from Google
search results.
● random : To randomly select jokes and responses for varied interactions.
● re (Regular Expressions) : For parsing user queries (like extracting the city name in weather
requests).
● streamlit_option_menu : To create a stylish and interactive sidebar menu for navigation.
● base64 : For encoding and handling images (if needed in future enhancements).
● Google Search (weather information)
Real-time weather details are fetched by scraping Google search results using
requests_html.
● Streamlit Web Application
The entire assistant runs as a web-based app, accessible from any browser, without requiring
local installation of heavy applications.
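The `re` bullet above (extracting the city name from a weather request) can be shown with the same pattern the sample code in Section 5.4 uses. This sketch is self-contained; the helper name `extract_city` is illustrative.

```python
import re

def extract_city(user_data):
    """Pull the city name out of a query like 'weather in London' (None if absent)."""
    match = re.search(r"weather in ([a-zA-Z\s]+)", user_data.lower())
    return match.group(1).strip() if match else None

print(extract_city("What's the weather in New Delhi"))  # -> new delhi
print(extract_city("tell me a joke"))                   # -> None
```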
5.4 SAMPLE CODE:
import streamlit as st
import os
import webbrowser
from datetime import datetime
import speech_recognition as sr
import pyttsx3
import re
import random
from streamlit_option_menu import option_menu
from requests_html import HTMLSession
import base64

    except:
        return None
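The `except: return None` fragment above is the tail of a `get_weather` helper whose body falls across a page break in the listing. A plausible reconstruction is sketched below; the Google weather-widget element IDs (`#wob_tm` for temperature, `#wob_dc` for the condition) are an assumption drawn from common requests_html scraping examples, not from the original code.

```python
def get_weather(city="Ongole"):
    """Scrape the Google weather widget for a city; returns a summary string or None."""
    try:
        from requests_html import HTMLSession  # deferred import; network access required
        session = HTMLSession()
        r = session.get(f"https://www.google.com/search?q=weather+{city.replace(' ', '+')}")
        temp = r.html.find("#wob_tm", first=True).text  # temperature (assumed widget id)
        desc = r.html.find("#wob_dc", first=True).text  # condition, e.g. "Sunny" (assumed id)
        return f"{temp}°C, {desc} in {city.title()}"
    except Exception:
        return None  # matches the fragment above: any failure yields None
```

Returning None on failure lets the caller fall back to a polite "couldn't fetch the weather" reply instead of crashing the assistant.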
        webbrowser.open("http://chatgpt.com")
        reply = "Opening ChatGPT for you!"
    elif "google" in user_data:
        webbrowser.open("https://google.com")
        reply = "Google is at your service!"
    elif "time now" in user_data:
        now = datetime.now()
        reply = f"The current time is {now.strftime('%H:%M:%S')}"
    elif "weather" in user_data:
        match = re.search(r"weather in ([a-zA-Z\s]+)", user_data)
        if match:
            city = match.group(1).strip()
            weather = get_weather(city)
        else:
            weather = get_weather()
        reply = f"{weather_icon(weather)} {weather}"
    elif "joke" in user_data:
        reply = random.choice(jokes)
    else:
        reply = "Hmm, I didn't catch that. Can you please repeat?"
    text_to_speech(reply)
    return reply
menu = ["Home", "Assistance", "About"]
with st.sidebar:
    choice = option_menu("Categories", menu,
                         icons=["house", "mic", "info-circle"],
                         menu_icon="cast", default_index=0)

### 🧠 Features:
- Speech & Text commands
- Weather forecast with icons
- Web browsing assistant
- Jokes & fun responses

with col1:
    st.subheader("✍️ Type a message")
    user_input = st.text_input("You:", placeholder="Type here and press enter...")
    if st.button("Send") and user_input:
        response = assistant_action(user_input)
        st.session_state.history.append((user_input, response))

with col2:
    st.subheader("🎤 Speak to Alice")
    if st.button("Speak Now"):
        spoken_text = speech_to_text()
        if spoken_text:
            st.success(f"You said: {spoken_text}")
            response = assistant_action(spoken_text)
            st.session_state.history.append((spoken_text, response))
        else:
            st.warning("Could not understand your speech. Try again!")

st.divider()
st.subheader("📝 Conversation History")
for user, bot in st.session_state.history:
    st.markdown(f"*You:* {user}")
    st.markdown(f"*Alice:* {bot}")
Created with ❤️ by Pavani. Alice is designed to make your
digital tasks easier & more enjoyable!
""")

st.subheader("🔗 Useful Links")
st.markdown("[🌍 Google](https://google.com)")
st.markdown("[🎵 Spotify](https://spotify.com)")
st.markdown("[📺 YouTube](https://youtube.com)")
6. TEST CASES
A test case is a defined format for software testing, used to check whether a particular application or
piece of software is working correctly. A test case consists of a set of conditions that need to be
checked to test an application or software.
Expected Result: The assistant opens Google in a new browser tab and confirms the action both
textually and audibly.
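Expected results like the one above can also be checked mechanically. A minimal sketch, assuming a dispatcher shaped like the report's `assistant_action`; side effects such as `webbrowser.open` and the spoken reply are deliberately omitted here so the checks can run headlessly.

```python
from datetime import datetime

def assistant_action(user_data):
    """Headless replica of the command dispatcher, for test purposes only."""
    user_data = user_data.lower()
    if "google" in user_data:
        return "Google is at your service!"  # browser call omitted in this sketch
    if "time now" in user_data:
        return f"The current time is {datetime.now().strftime('%H:%M:%S')}"
    return "Hmm, I didn't catch that. Can you please repeat?"

# Test-case checks in the spirit of Section 6
assert assistant_action("open google") == "Google is at your service!"
assert assistant_action("time now").startswith("The current time is")
assert assistant_action("gibberish") == "Hmm, I didn't catch that. Can you please repeat?"
```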
7. SCREENSHOTS
8. CONCLUSION
In conclusion, the Virtual Assistant Alice is a thoughtfully designed, interactive AI-based application
developed using Python and Streamlit. It brings together multiple technologies and libraries to deliver a
seamless user experience, making daily digital interactions simpler and more enjoyable. The assistant is
capable of understanding both text and voice commands, responding intelligently through text-to-speech
output, and performing various tasks like fetching weather information, telling jokes, displaying the current
time, and opening popular websites like YouTube, Google, and Spotify.
One of the key highlights of this project is its integration of speech recognition and text-to-speech
functionality, enabling hands-free, real-time interactions between the user and the system. Additionally, the
use of Streamlit makes the interface highly intuitive and web-accessible, without the need for complex
installations or configurations. By combining these technologies, the assistant transforms ordinary
command-based applications into a conversational and engaging experience.
The implementation of features like live weather updates with emojis, voice-based command
recognition, and web-based control demonstrates the flexibility and power of Python when integrated with
third-party libraries. The assistant’s ability to maintain a conversation history enhances usability by
allowing users to track their previous interactions in a clean and organized format.
This project not only showcases how AI tools and Python libraries can be integrated effectively but
also emphasizes the growing potential of virtual assistants in modern technology. The success of this
assistant paves the way for more advanced, feature-rich, and intelligent applications that can further
enhance human-computer interaction.
9. FUTURE ENHANCEMENT
Smart Web Search
Let users ask questions like "Who is Elon Musk?" or "Capital of France?" Use libraries like wikipedia or
Google Search API to fetch answers and read them out.
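One way this enhancement could look, assuming the third-party `wikipedia` package: its `summary` helper is real, but the wiring into Alice is a sketch (the function name is illustrative, and a live call needs network access).

```python
def answer_query(question):
    """Return a short, two-sentence answer for questions like 'Who is Elon Musk?'."""
    try:
        import wikipedia  # deferred import; requires the `wikipedia` package and network access
        return wikipedia.summary(question, sentences=2)
    except Exception:
        return "Sorry, I couldn't find an answer for that."
```

The returned string could be fed straight into the existing text-to-speech module so Alice reads the answer aloud.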
Translation Support
Integrate googletrans or a translation API so Alice can translate phrases to different languages.
Calendar Integration
Connect to Google Calendar to fetch upcoming events or create new ones via voice/text commands.
Fun Games
Simple voice/text games like "Guess the Number", "Trivia", or "Would You Rather".
Storyteller Mode
Ask Alice to narrate short stories or bedtime tales using text-to-speech.
10. BIBLIOGRAPHY
1. Abhay Dekate, Chaitanya Kulkarni, Rohan Killedar, “Study of Voice Controlled Personal Assistant
Device”, International Journal of Computer Trends and Technology (IJCTT), Volume 42, Number 1,
December 2016.
2. Deny Nancy, Sumithra Praveen, Anushria Sai, M. Ganga, R. S. Abisree, “Voice Assistant Application
for a College Website”, International Journal of Recent Technology and Engineering (IJRTE),
ISSN: 2277-3878, Volume 7, April 2019.
3. Deepak Shende, Ria Umahiya, Monika Raghorte, Aishwarya Bhisikar, Anup Bhange, “AI Based Voice
Assistant Using Python”, Journal of Emerging Technologies and Innovative Research (JETIR),
Volume 6, February 2019.
4. Dr. Kshama V. Kulhalli, Dr. Kotrappa Sirbi, Mr. Abhijit J. Patankar, “Personal Assistant with Voice
Recognition Intelligence”, International Journal of Engineering Research and Technology,
ISSN 0974-3154, Volume 10, Number 1, 2017.
5. Isha S. Dubey, Jyotsna S. Verma, Ms. Arundhati Mehendale, “An Assistive System for Visually
Impaired using Raspberry Pi”, International Journal of Engineering Research & Technology (IJERT),
Volume 8, May 2019.
6. Kishore Kumar R, Ms. J. Jayalakshmi, Karthik Prasanna, “A Python based Virtual Assistant using
Raspberry Pi for Home Automation”, International Journal of Electronics and Communication
Engineering (IJECE), Volume 5, July 2018.
-- o --