
A Major Project Report

titled

VIRTUAL ASSISTANT

Submitted in partial fulfilment of the requirements for the award of the degree of
Bachelor of Technology
in
Computer Science and Engineering
by

TANGUTURI YASWANTH (O190999)
AMPOLU PAVANI (O190278)
GUNTURU PRASANNA BABU (O190847)
PEPAKAYALA MEGHANA SRI NAGA SOWMYA (O191041)
SAKA MADHURIMA (O190862)

During
Fourth Year Semester-II

Under the guidance of

Mr. B. SAMPATH BABU
Assistant Professor (C)

Department of Computer Science and Engineering

RAJIV GANDHI UNIVERSITY OF KNOWLEDGE TECHNOLOGIES - A.P.
ONGOLE CAMPUS
Kurnool Road, Ongole, Prakasam Dt., Andhra Pradesh - 523225

A.Y. 2024-25

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING​

CERTIFICATE

This is to certify that the project report entitled “VIRTUAL ASSISTANT”
submitted by TANGUTURI YASWANTH (O190999), AMPOLU PAVANI (O190278),
GUNTURU PRASANNA BABU (O190847), PEPAKAYALA MEGHANA SRI NAGA
SOWMYA (O191041) and SAKA MADHURIMA (O190862) for the Major Project during
Fourth Year Semester-II, in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Computer Science and Engineering at Rajiv Gandhi University
of Knowledge Technologies - A.P., ONGOLE CAMPUS, is a record of bonafide work carried
out by them under my guidance and supervision during the academic year 2024-2025.
The results presented in this project have been verified and found to be excellent. The report
has not been submitted previously, in part or in full, to this or any other university or institution
for the award of any degree.


Date:    /04/2025
Place: ONGOLE



APPROVAL SHEET

This report entitled “VIRTUAL ASSISTANT” by TANGUTURI YASWANTH
(O190999), AMPOLU PAVANI (O190278), GUNTURU PRASANNA BABU (O190847),
PEPAKAYALA MEGHANA SRI NAGA SOWMYA (O191041) and SAKA MADHURIMA
(O190862), under the supervision of Mr. B. SAMPATH BABU, Assistant Professor, is approved for
the Major Project and for the degree of Bachelor of Technology in Computer Science and Engineering
at Rajiv Gandhi University of Knowledge Technologies - A.P., ONGOLE CAMPUS.

Examiner(s)

Supervisor(s)

Date : /04/2025

Place : ONGOLE

ACKNOWLEDGMENT

It is our privilege and pleasure to express a profound sense of respect, gratitude and
indebtedness to our guide Mr. B. SAMPATH BABU, Assistant Professor, Dept. of Computer
Science and Engineering, Rajiv Gandhi University of Knowledge Technologies - A.P., ONGOLE
CAMPUS, for his indefatigable inspiration, illuminating guidance, cogent discussion,
constructive criticism and encouragement throughout this dissertation work.

We express our sincere gratitude to Mr. NANDI MALLIKARJUNA, Assistant Professor &
Head, Department of Computer Science and Engineering, Rajiv Gandhi University of Knowledge
Technologies - A.P., ONGOLE CAMPUS, for his suggestions, motivation and co-operation towards the
successful completion of the work.

We extend our sincere thanks to our Academic Dean, Mr. MESSALA RUPAS KUMAR,
for his encouragement and constant help.

We convey our profound thanks and gratitude to Dr. BHASKAR PATEL, DIRECTOR,
Rajiv Gandhi University of Knowledge Technologies - A.P., ONGOLE CAMPUS, for providing us
an excellent academic climate which made this endeavour possible.

We thank all our department's teaching and non-teaching staff, who have helped directly and
indirectly to complete this project in time. Last but not least, we extend our deep gratitude to our
beloved family members for their moral encouragement and financial support in carrying out this
project. Finally, we express our heartfelt and deep sense of gratitude to all faculty members in our
division and to our friends for their helping hands, valuable support and encouragement during the
project work.

TANGUTURI YASWANTH (O190999)
AMPOLU PAVANI (O190278)
GUNTURU PRASANNA BABU (O190847)
PEPAKAYALA MEGHANA SRI NAGA SOWMYA (O191041)
SAKA MADHURIMA (O190862)

DECLARATION

We hereby declare that the project work entitled “VIRTUAL ASSISTANT”, submitted to
Rajiv Gandhi University of Knowledge Technologies - A.P., ONGOLE CAMPUS for the Major Project
during Fourth Year Semester-II and in partial fulfilment of the requirements for the award of the degree
of Bachelor of Technology (B.Tech) in Computer Science and Engineering, is a record of
original work done by us under the guidance of Mr. B. Sampath Babu, Assistant Professor, and this
project work has not been submitted to any other university for the award of any other degree or
diploma.

TANGUTURI YASWANTH (O190999) _____________________

AMPOLU PAVANI (O190278) _____________________

GUNTURU PRASANNA BABU (O190847) _____________________

PEPAKAYALA MEGHANA SRI NAGA SOWMYA (O191041) _____________________

SAKA MADHURIMA (O190862) _____________________

Date: /04/2025
Place: ONGOLE


ABSTRACT

Alice is a voice-enabled virtual assistant built using Python and Streamlit. The
assistant integrates natural language processing, speech recognition, and text-to-speech
technologies to deliver an interactive, user-friendly web-based interface. Alice can understand and
respond to both spoken and typed queries, providing functionalities such as weather updates, web
browsing assistance, real-time information retrieval, and entertainment through jokes. The
assistant leverages Google search scraping for weather data and seamlessly connects users to
platforms like YouTube, Spotify, Google, and ChatGPT.

The speech recognition and synthesis modules are abstracted to handle voice input and output
independently of the main application. The speech recognition component uses the
SpeechRecognition library to capture audio input from the user via a microphone and convert it
into text. This abstraction allows the assistant to understand and process spoken commands, such
as asking for the time or searching the web. The text-to-speech module, powered by pyttsx3,
converts textual responses back into speech, allowing the assistant to audibly reply to the user’s
requests. These two components are designed to work together seamlessly, enabling the virtual
assistant to function with minimal user intervention.

Designed with a modular and intuitive layout, the application demonstrates how modern
Python libraries like speech_recognition, pyttsx3, and streamlit_option_menu can be combined to
create a responsive and intelligent personal assistant. This project showcases the potential of
accessible AI tools in enhancing digital user experiences.

In addition to its core features, Alice is built with scalability and flexibility in mind.
Developers can easily extend its capabilities by integrating third-party APIs for more accurate
weather, news, or media services. The modular design also allows for features like user
authentication, calendar scheduling, or smart home control to be added without disrupting the core
logic. By leveraging open-source libraries and maintaining clean abstractions, Alice provides a
strong foundation for building tailored AI assistants across a variety of use cases—from
productivity tools to accessible technology for users with special needs.

CONTENTS

1   INTRODUCTION
    1.1   MOTIVATION
    1.2   PROBLEM DEFINITION
    1.3   OBJECTIVES OF THE PROJECT
2   LITERATURE SURVEY
3   ANALYSIS
    3.1   EXISTING SYSTEM
    3.2   PROPOSED SYSTEM
    3.3   SOFTWARE REQUIREMENTS SPECIFICATION
          3.3.1   PURPOSE
          3.3.2   SCOPE
          3.3.3   OVERALL DESCRIPTION
4   DESIGN
    4.1   UML DIAGRAMS
5   IMPLEMENTATION
    5.1   MODULES
    5.2   MODULE DESCRIPTION
    5.3   INTRODUCTION TO TECHNOLOGIES USED
    5.4   SAMPLE CODE
6   TEST CASES
7   SCREENSHOTS
8   CONCLUSION
9   FUTURE ENHANCEMENT
10  BIBLIOGRAPHY


1. INTRODUCTION
Alice is a modular, voice-enabled virtual assistant built with Python and Streamlit, offering hands-free
interaction for tasks like weather updates, web browsing, and entertainment. It combines speech
recognition, text-to-speech, and an intuitive interface to enhance accessibility and user experience.
1.1 MOTIVATION

​ The motivation for developing Alice is to address the need for flexible, intuitive, and
cross-platform virtual assistants that simplify daily tasks through natural, hands-free interaction.
Unlike many platform-restricted solutions, Alice offers a web-based, accessible experience powered
by lightweight Python tools. The project showcases how AI-driven assistants can enhance
productivity and user engagement while demonstrating the potential of scalable, Python-based digital
solutions.

1.2 PROBLEM DEFINITION

​ This project addresses the complexity of managing everyday digital tasks by creating a
web-based virtual assistant that simplifies interactions through voice and text commands. Unlike
existing platform-restricted assistants, it offers hands-free operation and real-time responses for tasks
like opening websites, checking the time, and telling jokes. By integrating speech recognition and
text-to-speech, the assistant enhances accessibility, efficiency, and user control in a fragmented digital
environment.

1.3 OBJECTIVES

​ The objective of this project is to develop Alice, a voice- and text-enabled virtual assistant
that simplifies everyday tasks through an interactive Streamlit web interface. Using
SpeechRecognition and pyttsx3, Alice processes voice commands and provides audible responses for
tasks like opening websites, telling jokes, and reporting the time. It includes session-based
conversation history for dynamic interaction and showcases how Python libraries can be used to build
scalable, AI-powered applications. The project enhances productivity, supports hands-free use, and
serves as a proof of concept for accessible, adaptable virtual assistant solutions.


2. LITERATURE SURVEY
The development of virtual assistants has seen significant advancements over the past decade,
largely driven by improvements in natural language processing (NLP), speech recognition, and
artificial intelligence (AI). Pioneering systems such as Apple’s Siri, Google Assistant, Amazon’s
Alexa, and Microsoft’s Cortana have popularized the concept of hands-free interaction through
voice commands. These systems are typically embedded within their respective ecosystems,
offering powerful capabilities but often limited in cross-platform accessibility and customization.

Recent research emphasizes the growing importance of accessible, lightweight, and
customizable virtual assistants, particularly for enhancing productivity and supporting users with
physical impairments. Studies have shown that voice interfaces significantly reduce cognitive
load and improve user satisfaction, especially when multitasking or performing routine tasks.
However, existing solutions tend to be either hardware-dependent or closed-source, limiting their
adaptability for educational, research, or lightweight deployment scenarios.

Open-source frameworks and libraries in Python have enabled developers to prototype
intelligent assistants with ease. Libraries such as SpeechRecognition for voice input, pyttsx3 for
text-to-speech synthesis, and Streamlit for building web applications have become popular tools
in academic and developer communities. Streamlit, in particular, allows rapid development of
interactive applications without requiring advanced frontend development skills, making it ideal
for AI-based projects.

Several academic projects and papers have explored modular virtual assistant architectures to
allow for easy feature extension and cross-platform deployment. These projects often highlight
the importance of event-driven design, session-based interaction history, and real-time response
generation for delivering engaging user experiences.

Alice builds upon these concepts by offering a web-based, cross-platform assistant that
leverages a combination of these open-source tools to perform essential digital tasks like opening
websites, reporting time, telling jokes, and more. Unlike traditional assistants tied to specific
devices, Alice emphasizes accessibility, ease of use, and flexibility, demonstrating a scalable
model for developing personalized and extensible voice-based applications.

3. ANALYSIS

3.1 EXISTING SYSTEM

The existing system presents an efficient voice recognition approach for a virtual assistant, addressing
limitations in existing solutions by using natural language processing to execute tasks via voice
commands, reducing reliance on input devices like keyboards. It leverages APIs, particularly Google
Speech API, to convert voice to text, which is then matched against a command database to trigger
appropriate responses. While the system achieves high accuracy, it faces drawbacks such as increased
task completion time and algorithmic complexity, making future modifications challenging.

3.2 PROPOSED SYSTEM

​ The proposed Alice Virtual Assistant is a web-based, interactive system that simplifies user
interactions through voice and text commands. It uses Speech Recognition, Text-to-Speech, and Task
Execution to perform functions like opening websites and reporting time. Built with a responsive
Streamlit interface and session-based history, its modular, real-time design enhances productivity,
accessibility, and user experience in a lightweight environment.

3.3 SOFTWARE REQUIREMENTS SPECIFICATION

SOFTWARE REQUIREMENTS
● pyttsx3 (Python Text-to-Speech)
● SpeechRecognition
● PyAudio

HARDWARE REQUIREMENTS
● AMD Ryzen 3 series processor
● 6 GB to 8 GB RAM
● 256 GB to 512 GB SSD/HDD
● Windows OS / Ubuntu OS

3.3.1 PURPOSE
​ The purpose of this project is to create an interactive virtual assistant named Alice that can
respond to both voice and text inputs. It aims to simplify daily digital tasks like opening websites,
telling jokes, reporting the current time, and engaging in basic conversations. The assistant also uses
text-to-speech technology to deliver audible responses, enhancing user experience. Built using Python
and Streamlit, this project offers a simple, user-friendly, and accessible web-based interface. It
demonstrates the potential of combining AI with voice and web technologies to improve
human-computer interaction.


3.3.2 SCOPE
​ The Virtual Assistant Alice is designed to provide users with an interactive, AI-powered
assistant that responds to both voice and text commands. It can perform basic tasks like greeting
users, telling jokes, reporting the current time, opening websites, and speaking responses aloud. This
project focuses on creating a simple, web-based, voice-enabled assistant using Python and Streamlit,
laying the foundation for future smart assistant features.

3.3.3 OVERALL DESCRIPTION


​ Alice is an AI-powered virtual assistant built with Python and Streamlit, designed to handle
both voice and text commands through a simple web interface. It responds to greetings, tells jokes,
opens websites like Google and YouTube, provides the current time, and maintains session-based
conversation history. Alice uses text-to-speech for spoken replies, enhancing hands-free interaction.
The interface includes a sidebar menu with sections for Home, Assistance, and About, offering users
a clear and accessible experience suitable for all skill levels.

4. DESIGN

4.1 UML DIAGRAM


​ ​ A UML Diagram is based on UML (Unified Modeling Language) with the purpose of
visually representing a system along with its main actors, roles, actions, artifacts or classes, in order to
better understand, alter, maintain or document information about the system. The UML diagrams are
divided into Structural and Behavioral UML Diagrams.

STRUCTURAL UML DIAGRAMS:


Structural diagrams depict the static view of a system's structure and are widely used in
documenting software architecture. There are seven structural UML diagrams.

They are:
●​ Class Diagram
●​ Object Diagram
●​ Component Diagram
●​ Composite Structure Diagram
●​ Deployment Diagram
●​ Package Diagram
●​ Profile Diagram

BEHAVIORAL UML DIAGRAMS:


Behavioral diagrams portray the dynamic view of a system, describing how the system functions.
There are seven behavioral UML diagrams.
They are:
● Use Case Diagram
● Sequence Diagram
● Activity Diagram
● State Machine Diagram
● Interaction Overview Diagram
● Communication Diagram
● Timing Diagram


CLASS DIAGRAM
​ The class diagram is the main building block of object-oriented modeling. It is used for general
conceptual modeling of the structure of the application, and for detailed modeling, translating the
models into programming code. Class diagrams can also be used for data modeling.

OBJECT DIAGRAM
​ It describes the static structure of a system at a particular point in time. It can be used to test the
accuracy of class diagrams. It represents distinct instances of classes and the relationship between
them at a time.

COMPONENT DIAGRAM
​ Component diagrams are used in modeling the physical aspects of object-oriented systems that are
used for visualizing, specifying, and documenting component-based systems and also for constructing
executable systems through forward and reverse engineering. Component diagrams are essentially class
diagrams that focus on a system's components and are often used to model the static view of a system.

PROFILE DIAGRAM
​ Profile diagram, a kind of structural diagram in the Unified Modeling Language (UML), provides a
generic extension mechanism for customizing UML models for particular domains and platforms.
Extension mechanisms allow refining standard semantics in a strictly additive manner, preventing them
from contradicting standard semantics.

COMPOSITE STRUCTURE DIAGRAM
​ Composite Structure Diagram is one of the new artifacts added to UML 2.0. A composite structure
diagram is a UML structural diagram that contains classes, interfaces, packages, and their relationships,
and that provides a logical view of all, or part of a software system. It shows the internal structure
(including parts and connectors) of a structured classifier or collaboration.

PACKAGE DIAGRAM
​ Package diagrams are used, in part, to depict import and access dependencies between packages,
classes, components, and other named elements within your system. Each dependency is rendered as a
connecting line with an arrow representing the type of relationship between two or more elements.


SEQUENCE DIAGRAM
​ A sequence diagram consists of a group of objects that are represented by lifelines, and the
messages that they exchange over time during the interaction. A sequence diagram shows the sequence
of messages passed between objects. Sequence diagrams can also show the control structures between
objects.


COMMUNICATION DIAGRAM
​ A Communication diagram models the interactions between objects or parts in terms of sequenced
messages. Communication diagrams represent a combination of information taken from Class, Sequence,
and Use Case Diagrams describing both the static structure and dynamic behaviour of a system.

ACTIVITY DIAGRAM
​ An activity diagram visually presents a series of actions or flow of control in a system similar to a
flowchart or a data flow diagram. Activity diagrams are often used in business process modeling. They
can also describe the steps in a use case diagram. Activities modelled can be sequential and concurrent.


TIMING DIAGRAM
​ A timing diagram includes timing data for at least one horizontal lifeline, with vertical messages
exchanged between states. Timing diagrams represent timing data for individual classifiers and interactions
of classifiers. You can use this diagram to provide a snapshot of timing data for a particular part of a system.

USE CASE DIAGRAM
​ Use-case diagrams describe the high-level functions and scope of a system. These diagrams also
identify the interactions between the system and its actors. The use cases and actors in use-case diagrams
describe what the system does and how the actors use it, but not how the system operates internally.

DEPLOYMENT DIAGRAM
​ A deployment diagram is a UML diagram type that shows the execution architecture of a system,
including nodes such as the hardware or software execution environments, and the middleware
connecting them. Deployment diagrams are typically used to visualize the physical hardware & the
software of a system.


5. IMPLEMENTATION

5.1 MODULES:
1. USER INTERFACE (UI) MODULE
2.​ SPEECH RECOGNITION MODULE
3.​ TEXT TO SPEECH MODULE

5.2 MODULE DESCRIPTION:

1. USER INTERFACE (UI) MODULE


Features:
●​ Web-based interface built with Streamlit
●​ Sidebar navigation menu (Home, Assistance, About) using streamlit_option_menu
●​ Text input box for typing commands
●​ Conversation history visible in-session
●​ Clean and minimal UI design for accessibility
Functionality:
●​ Displays text responses from the assistant in real-time
●​ Accepts and processes user commands via text
●​ Shows an interactive chat-style layout for input/output
●​ Allows users to switch views/pages using the sidebar
●​ Enhances user experience with a responsive, browser-friendly design
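A minimal sketch of how these UI pieces fit together; it assumes the same Streamlit and streamlit_option_menu APIs used in the sample code of Section 5.4, with a placeholder in place of the real assistant logic:

import streamlit as st
from streamlit_option_menu import option_menu

# Sidebar navigation across the three pages
with st.sidebar:
    page = option_menu("Categories", ["Home", "Assistance", "About"],
                       icons=["house", "mic", "info-circle"], default_index=0)

# Session-scoped history survives Streamlit's script reruns
if "history" not in st.session_state:
    st.session_state.history = []

if page == "Assistance":
    command = st.text_input("You:", placeholder="Type here and press enter...")
    if st.button("Send") and command:
        reply = f"You said: {command}"  # placeholder for assistant_action()
        st.session_state.history.append((command, reply))
    # Chat-style transcript of the current session
    for user, bot in st.session_state.history:
        st.markdown(f"*You:* {user}")
        st.markdown(f"*Alice:* {bot}")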

2.​ SPEECH RECOGNITION MODULE


Features:
●​ Captures voice input through the microphone
●​ Converts speech to text using Google Speech Recognition API
●​ Handles different accents, speech speeds, and background noise
●​ Built-in fallback for speech recognition errors
Functionality:
●​ Listens for spoken commands when activated
●​ Converts spoken phrases into text for processing

●​ Passes transcribed input to the main assistant logic
●​ Alerts users if speech input is not recognized or too noisy
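A minimal sketch of this flow with the error fallback made explicit; the 5-second timeout is an illustrative choice, and the exception types are those raised by the SpeechRecognition library:

import speech_recognition as sr

def listen_once():
    """Capture one utterance from the microphone and return its transcript, or None."""
    recognizer = sr.Recognizer()
    try:
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)  # compensate for background noise
            audio = recognizer.listen(source, timeout=5)
        return recognizer.recognize_google(audio)  # Google Speech Recognition API
    except sr.WaitTimeoutError:
        return None  # no speech began within the timeout
    except sr.UnknownValueError:
        return None  # audio was captured but could not be transcribed
    except sr.RequestError:
        return None  # recognition service unreachable (e.g., no internet)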

3.​ TEXT TO SPEECH MODULE


Features:
●​ Uses pyttsx3 for text-to-speech synthesis
●​ Works offline (no need for an internet connection)
●​ Supports voice customization (e.g., rate, volume, voice type)
●​ Synchronized with displayed output for clarity
Functionality:
●​ Converts Alice’s text responses into audible speech
●​ Reads out replies to user queries or commands
●​ Enhances engagement and usability, especially for visually impaired users
●​ Offers a hands-free, immersive experience
●​ Maintains consistency between spoken and visual responses
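The voice customization mentioned above can be sketched as follows; which voices are installed depends on the host operating system, so the voice index below is illustrative:

import pyttsx3

engine = pyttsx3.init()
engine.setProperty('rate', 150)     # speaking rate in words per minute
engine.setProperty('volume', 0.9)   # volume from 0.0 to 1.0
voices = engine.getProperty('voices')
if len(voices) > 1:
    engine.setProperty('voice', voices[1].id)  # switch to an alternative installed voice
engine.say("Hello! I am Alice.")
engine.runAndWait()  # blocks until the utterance finishes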


5.3 INTRODUCTION TO TECHNOLOGIES USED:

●​ Python: The core programming language used for building the assistant’s functionalities,
integrating different libraries, and managing the application flow.
●​ Streamlit : For building the web-based user interface (UI) for interaction.
●​ SpeechRecognition : To capture and convert voice input from the user into text.
●​ pyttsx3 : For converting text responses into spoken audio (Text-to-Speech).
●​ webbrowser : To open web pages like Google, YouTube, and Spotify directly from
commands.
●​ datetime : To fetch and display the current date and time.
●​ requests_html (HTMLSession) : For scraping live weather information from Google
search results.
●​ random : To randomly select jokes and responses for varied interactions.
●​ re (Regular Expressions) : For parsing user queries (like extracting the city name in weather
requests).
●​ streamlit_option_menu : To create a stylish and interactive sidebar menu for navigation.
●​ base64 : For encoding and handling images (if needed in future enhancements).
●​ Google Search (weather information) : Real-time weather details are fetched by scraping
Google search results using requests_html.
●​ Streamlit Web Application : The entire assistant runs as a web-based app, accessible from any
browser, without requiring local installation of heavy applications.
●​ OpenAI / ChatGPT API : Planned integration for advanced conversational AI (see Future
Enhancement).
●​ Google Calendar API : Planned integration for event scheduling and reminders (see Future
Enhancement).
●​ Wikipedia API : Planned integration for real-time search answers (see Future Enhancement).

5.4 SAMPLE CODE:

import streamlit as st
import os
import webbrowser
from datetime import datetime
import speech_recognition as sr
import pyttsx3
import re
import random
from streamlit_option_menu import option_menu
from requests_html import HTMLSession
import base64

# --- Text-to-Speech ---

def text_to_speech(text):
    engine = pyttsx3.init()
    engine.setProperty('rate', 150)
    engine.say(text)
    engine.runAndWait()

# --- Speech-to-Text ---

def speech_to_text():
    recognizer = sr.Recognizer()
    try:
        with sr.Microphone() as source:
            st.info("🎙️ Listening...")
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source, timeout=5)
            st.success("✅ Processing...")
            return recognizer.recognize_google(audio)
    except Exception:  # covers timeouts, unrecognized speech, and network errors
        return None

# --- Weather Helpers ---
# Minimal helpers for the weather feature, following the requests_html scraping
# approach described in Section 5.3. The "wob_tm"/"wob_dc" selectors target
# Google's weather widget and may need updating if Google changes its markup.

def get_weather(city="Ongole"):
    try:
        session = HTMLSession()
        r = session.get(f"https://www.google.com/search?q=weather+in+{city}")
        temp = r.html.find("span#wob_tm", first=True).text
        desc = r.html.find("span#wob_dc", first=True).text
        return f"It is {temp} degrees Celsius with {desc} in {city}."
    except Exception:
        return f"Sorry, I couldn't fetch the weather for {city}."

def weather_icon(weather):
    return "🌧️" if "rain" in weather.lower() else "☀️"

# --- Assistant Logic ---

def assistant_action(data):
    user_data = data.lower()
    jokes = [
        "Why don’t scientists trust atoms? Because they make up everything!",
        "I'm reading a book on anti-gravity. It's impossible to put down!"
    ]

    if "what is your name" in user_data:
        reply = "My name is Alice."
    elif "who are you" in user_data:
        reply = "I am your cheerful virtual assistant!"
    elif "hello" in user_data or "hi" in user_data:
        reply = "Hello there! 😊 How can I assist you today?"
    elif "good morning" in user_data:
        reply = "Good morning! Wishing you a productive day ahead!"
    elif "good afternoon" in user_data:
        reply = "Good afternoon! What can I help you with?"
    elif "shutdown" in user_data:
        reply = "Alright, shutting down. See you soon!"
    elif "play music" in user_data:
        webbrowser.open("https://spotify.com")
        reply = "Spotify is all set! Enjoy your tunes 🎶"
    elif "youtube" in user_data:
        webbrowser.open("https://youtube.com")
        reply = "YouTube is ready for you!"
    elif "chatgpt" in user_data:
        webbrowser.open("http://chatgpt.com")
        reply = "Opening ChatGPT for you!"
    elif "google" in user_data:
        webbrowser.open("https://google.com")
        reply = "Google is at your service!"
    elif "time now" in user_data:
        now = datetime.now()
        reply = f"The current time is {now.strftime('%H:%M:%S')}"
    elif "weather" in user_data:
        match = re.search(r"weather in ([a-zA-Z\s]+)", user_data)
        if match:
            city = match.group(1).strip()
            weather = get_weather(city)
        else:
            weather = get_weather()
        reply = f"{weather_icon(weather)} {weather}"
    elif "joke" in user_data:
        reply = random.choice(jokes)
    else:
        reply = "Hmm, I didn't catch that. Can you please repeat?"

    text_to_speech(reply)
    return reply

# --- Streamlit Setup ---

st.set_page_config(page_title="✨ Virtual Assistant ✨", layout="wide")
st.sidebar.image("https://media.giphy.com/media/3o7TKxohkk8v3dPsAw/giphy.gif", width=150)
st.sidebar.markdown(f"🕒 *{datetime.now().strftime('%A, %d %B %Y %H:%M:%S')}*")

# --- Menu ---

menu = ["Home", "Assistance", "About"]
with st.sidebar:
    choice = option_menu("Categories", menu, icons=["house", "mic", "info-circle"],
                         menu_icon="cast", default_index=0)

# --- Chat History ---

if 'history' not in st.session_state:
    st.session_state.history = []

# --- Pages ---

if choice == "Home":
    st.title("🤖 Welcome to Alice - Your Virtual Assistant")
    st.markdown("""
    ## 👋 Hello There!
    Meet *Alice*, your always-ready AI assistant. Ask her anything!

    ### 🧠 Features:
    - Speech & Text commands
    - Weather forecast with icons
    - Web browsing assistant
    - Jokes & fun responses

    👉 Head to the *Assistance* tab to start talking with Alice!
    """)

elif choice == "Assistance":
    st.title("🧠 Talk to Alice")
    col1, col2 = st.columns(2)

    with col1:
        st.subheader("✍️ Type a message")
        user_input = st.text_input("You:", placeholder="Type here and press enter...")
        if st.button("Send") and user_input:
            response = assistant_action(user_input)
            st.session_state.history.append((user_input, response))

    with col2:
        st.subheader("🎤 Speak to Alice")
        if st.button("Speak Now"):
            spoken_text = speech_to_text()
            if spoken_text:
                st.success(f"You said: {spoken_text}")
                response = assistant_action(spoken_text)
                st.session_state.history.append((spoken_text, response))
            else:
                st.warning("Could not understand your speech. Try again!")

    st.divider()
    st.subheader("📝 Conversation History")
    for user, bot in st.session_state.history:
        st.markdown(f"*You:* {user}")
        st.markdown(f"*Alice:* {bot}")

elif choice == "About":
    st.title("📘 About This App")
    st.markdown("""
    *Virtual Assistant Alice* is a fun, interactive AI helper built using:
    - 🐍 Python
    - 🎙 Speech Recognition
    - 🔊 Text-to-Speech
    - 🌐 Streamlit for Web UI

    Created with ❤️ by Pavani. Alice is designed to make your digital tasks easier & more enjoyable!
    """)

    st.subheader("🔗 Useful Links")
    st.markdown("[🌍 Google](https://google.com)")
    st.markdown("[🎵 Spotify](https://spotify.com)")
    st.markdown("[📺 YouTube](https://youtube.com)")
6. TEST CASES

A test case is a defined format for software testing, used to check whether a particular application or
software is working as intended. A test case consists of a set of conditions that need to be checked to
test an application or software.

1.​ Test Case ID: TC-VA-01


Title: Verify that the assistant responds correctly to a text command.
Description: Ensure that Alice processes a typed command and provides the correct response.
Preconditions: Assistant is running and the user interface is loaded.
Test Steps:
❖​ Navigate to the “Assistance” page.
❖​ Type “What is the time?” in the input box.
❖​ Click 'Send' or press 'Enter'.
Expected Result: The assistant responds with the current time, displays it on the screen, and (if TTS
is enabled) speaks it aloud.

2.​ Test Case ID: TC-VA-02


Title: Verify that the assistant accurately converts voice to text.
Description: Ensure that speech input is correctly recognized and converted.
Preconditions: Microphone access is granted and internet connection is available.
Test Steps:
❖​ Activate the voice input feature.
❖​ Speak the command: “Tell me a joke”.
❖​ Wait for the assistant to process the command.
Expected Result: The command is accurately transcribed and a joke is shown and spoken by the
assistant.

3.​ Test Case ID: TC-VA-03


Title: Verify that the assistant opens a website on voice command.
Description: Ensure that valid voice commands trigger correct browser actions.
Preconditions: Assistant is running and internet is available.
Test Steps:
❖​ Activate microphone.
❖​ Say: “Open Google”.

Expected Result: The assistant opens Google in a new browser tab and confirms the action both
textually and audibly.

4.​ Test Case ID: TC-VA-04


Title: Verify that the assistant handles unrecognized speech gracefully.
Description: Ensure that unclear voice input does not crash the system and provides user feedback.
Preconditions: Voice input is active.
Test Steps:
❖​ Speak a non-command gibberish phrase (e.g., “Music Play chey”).
❖​ Wait for the assistant to respond.
Expected Result: Assistant displays a message like “Sorry, I didn’t understand that” without crashing
or freezing.

5.​ Test Case ID: TC-VA-05


Title: Verify that TTS speaks the correct response.
Description: Ensure that the text-to-speech module accurately voices the assistant's replies.
Preconditions: Speakers are enabled and TTS is functioning.
Test Steps:
❖​ Enter a command like “Who are you?” via text input.
❖​ Wait for the assistant’s response.
Expected Result: Alice replies with a predefined introduction message and the response is audibly
played.
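The test cases above are written for manual execution. The text-command cases could also be automated with pytest along the following lines, assuming the sample code of Section 5.4 is saved as a module named app.py (a hypothetical name) and that text_to_speech is stubbed out so the tests run silently:

import app  # hypothetical module name for the script in Section 5.4

def test_time_command(monkeypatch):
    # TC-VA-01: a typed time query should mention the current time
    monkeypatch.setattr(app, "text_to_speech", lambda text: None)  # silence TTS
    reply = app.assistant_action("What is the time now?")
    assert "The current time is" in reply

def test_unrecognized_command(monkeypatch):
    # TC-VA-04: gibberish should yield the graceful fallback, not a crash
    monkeypatch.setattr(app, "text_to_speech", lambda text: None)
    reply = app.assistant_action("Music Play chey")
    assert "repeat" in reply.lower()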

7. SCREENSHOTS

8. CONCLUSION
In conclusion, the Virtual Assistant Alice is a thoughtfully designed, interactive AI-based application
developed using Python and Streamlit. It brings together multiple technologies and libraries to deliver a
seamless user experience, making daily digital interactions simpler and more enjoyable. The assistant is
capable of understanding both text and voice commands, responding intelligently through text-to-speech
output, and performing various tasks like fetching weather information, telling jokes, displaying the current
time, and opening popular websites like YouTube, Google, and Spotify.

One of the key highlights of this project is its integration of speech recognition and text-to-speech
functionality, enabling hands-free, real-time interactions between the user and the system. Additionally, the
use of Streamlit makes the interface highly intuitive and web-accessible, without the need for complex
installations or configurations. By combining these technologies, the assistant transforms ordinary
command-based applications into a conversational and engaging experience.

The implementation of features like live weather updates with emojis, voice-based command
recognition, and web-based control demonstrates the flexibility and power of Python when integrated with
third-party libraries. The assistant’s ability to maintain a conversation history enhances usability by
allowing users to track their previous interactions in a clean and organized format.

This project not only showcases how AI tools and Python libraries can be integrated effectively but
also emphasizes the growing potential of virtual assistants in modern technology. The success of this
assistant paves the way for more advanced, feature-rich, and intelligent applications that can further
enhance human-computer interaction.

9. FUTURE ENHANCEMENT
Smart Web Search
Let users ask questions like "Who is Elon Musk?" or "Capital of France?" Use libraries like wikipedia or
Google Search API to fetch answers and read them out.
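A minimal sketch of this enhancement using the open-source wikipedia library; the two-sentence summary length is an illustrative choice:

import wikipedia

def smart_search(query):
    """Return a short, speech-friendly answer for a factual question."""
    try:
        # First two sentences of the best-matching article
        return wikipedia.summary(query, sentences=2)
    except wikipedia.exceptions.DisambiguationError as e:
        return f"That could mean several things, for example {e.options[0]}."
    except wikipedia.exceptions.PageError:
        return "Sorry, I couldn't find anything on that."

# Example: text_to_speech(smart_search("Elon Musk"))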

Reminders & Notes


Add a feature to set simple reminders like "Remind me to drink water in 10 minutes."
Allow users to create, view, and delete notes (stored locally or in a database).
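The reminder part could be sketched with only the standard library, for example using threading.Timer; the speak argument below is assumed to be a function like the text_to_speech() helper from Section 5.4:

import threading

def set_reminder(message, minutes, speak):
    """Schedule a spoken reminder after the given number of minutes."""
    timer = threading.Timer(minutes * 60, speak, args=(f"Reminder: {message}",))
    timer.daemon = True  # do not block interpreter shutdown
    timer.start()
    return timer

# Example: set_reminder("drink water", 10, text_to_speech)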

Translation Support
Integrate googletrans or a translation API so Alice can translate phrases to different languages.

Calendar Integration
Connect to Google Calendar to fetch upcoming events or create new ones via voice/text commands.

Image & Media Display


Display images, memes, or gifs based on certain keywords like "Show me a cat meme".

Voice Reply Selection


Give users a choice between multiple voice options or accents using pyttsx3 voice properties.

Fun Games
Simple voice/text games like "Guess the Number", "Trivia", or "Would You Rather".

Storyteller Mode
Ask Alice to narrate short stories or bedtime tales using text-to-speech.

ChatGPT API Integration


Connect Alice to the real OpenAI GPT API for much deeper, conversational AI capabilities.
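A hedged sketch of that integration using the official openai Python package, assuming an API key is available in the OPENAI_API_KEY environment variable; the model name is illustrative:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_gpt(prompt):
    """Send the user's message to the OpenAI API and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are Alice, a cheerful virtual assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Example: text_to_speech(chat_with_gpt("Tell me about Streamlit"))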


10. BIBLIOGRAPHY
1. Abhay Dekate, Chaitanya Kulkarni, Rohan Killedar, “Study of Voice Controlled Personal Assistant
Device”, International Journal of Computer Trends and Technology (IJCTT), Volume 42, Number 1,
December 2016.

2. Deny Nancy, Sumithra Praveen, Anushria Sai, M. Ganga, R. S. Abisree, “Voice Assistant Application
for a College Website”, International Journal of Recent Technology and Engineering (IJRTE), ISSN
2277-3878, Volume 7, April 2019.

3. Deepak Shende, Ria Umahiya, Monika Raghorte, Aishwarya Bhisikar, Anup Bhange, “AI Based Voice
Assistant Using Python”, Journal of Emerging Technologies and Innovative Research (JETIR), Volume 6,
February 2019.

4. Dr. Kshama V. Kulhalli, Dr. Kotrappa Sirbi, Mr. Abhijit J. Patankar, “Personal Assistant with Voice
Recognition Intelligence”, International Journal of Engineering Research and Technology, ISSN
0974-3154, Volume 10, Number 1, 2017.

5. Isha S. Dubey, Jyotsna S. Verma, Ms. Arundhati Mehendale, “An Assistive System for Visually Impaired
using Raspberry Pi”, International Journal of Engineering Research & Technology (IJERT), Volume 8,
May 2019.

6. Kishore Kumar R, Ms. J. Jayalakshmi, Karthik Prasanna, “A Python based Virtual Assistant using
Raspberry Pi for Home Automation”, International Journal of Electronics and Communication
Engineering (IJECE), Volume 5, July 2018.

-- o --
