Jarvis Report Editing

Abstract
The project aims to develop a Python-based personal assistant. Jarvis draws its inspiration from virtual assistants like Cortana for Windows and Siri for iOS. It has been designed to provide a user-friendly interface for carrying out a variety of tasks through well-defined commands. Users can interact with the assistant either through voice commands or keyboard input.
As a personal assistant, Jarvis helps the end user with day-to-day activities such as general conversation, running search queries on Google, Bing, or Yahoo, searching for videos, retrieving images, reporting live weather conditions and word meanings, searching for medicine details, giving health recommendations based on symptoms, and reminding the user about scheduled events and tasks. User statements and commands are analyzed with the help of machine learning to produce an optimal response.
Acknowledgements
It has been a great pleasure and a rewarding experience for us to work on the project "JARVIS: The Personal Assistant". We wish to express our gratitude to the faculty of our institute, and especially to Prof. Borale S.A. (Project Guide), for help in the preparation of this report. The project would not have been completed successfully without their help and precious suggestions. Finally, we warmly thank all our colleagues, whose encouragement contributed greatly to making the project a success.
Contents

Certificate
Declaration
Abstract
Acknowledgements
Contents
List of Tables
Abbreviations

1 INTRODUCTION
1.1 Introduction
1.2 Objectives
1.3 Purpose, Scope, and Applicability
1.3.1 Purpose
1.3.2 Scope
1.3.3 Applicability
1.4 Achievements
1.5 Organisation of Report

2 LITERATURE REVIEW
2.1 Literature Survey
2.2 Literature Survey Comparative Study

3 SURVEY OF TECHNOLOGIES
3.1 Artificial Intelligence

5 SYSTEM DESIGN
5.1 Basic Modules
5.1.1 Open websites
5.1.2 Give information about someone
5.1.3 Get weather for a location
5.1.4 Tells about your upcoming events
5.1.5 Tells top headlines
5.2 User interface design

6 CONCLUSIONS
6.1 Conclusion
6.2 Limitations of the System
6.3 Future Scope of the Project

A Appendix A
A.1 System Code
A.1.1 main.py

Bibliography
List of Figures

5.1 Output 1
5.2 Output 2
5.3 Output 3
5.4 Output 4
5.5 Output 5
5.6 Output 6
5.7 Output 7
5.8 Output 8
5.9 Output 9
List of Tables

Abbreviations

AIML    Artificial Intelligence Markup Language
gTTS    Google Text to Speech
pyttsx  Python Text to Speech
AI      Artificial Intelligence
CHAPTER 1
INTRODUCTION
In recent times, virtual assistants have brought major changes to the way users interact with technology and to the overall user experience. We already use them for many tasks, such as switching lights on and off or playing music through streaming apps like Wynk Music and Spotify. This new method of interacting with technical devices makes spoken communication a new ally of the technology.
In earlier days, the term "virtual assistant" described professionals who provide ancillary services on the web. The job of a voice assistant is defined in three stages: text to speech, text to intention, and intention to action. Voice assistants will continue to be developed to improve their current range of capabilities. They should not be confused with virtual assistants, who are people working remotely and can therefore handle all kinds of tasks. Thanks to AI, voice assistants can anticipate the user's needs and take action on them. AI-based voice assistants can be useful in many fields such as IT helpdesks, home automation, HR-related tasks, and voice-based search; voice-based search in particular looks set to become the default for the next generation of users, who depend on voice assistants for almost every need. In this project we have built an AI-based voice assistant that can perform all of these tasks without inconvenience.
This is the first system proposed to intelligently speak to users in natural language. It is inspired by the character J.A.R.V.I.S. (voiced by Paul Bettany) in Marvel Studios' Iron Man, who served Tony Stark (Robert Downey Jr.) as his most intelligent virtual assistant, lending him help in every possible aspect such as technology, medicine, and social matters.
1.1 Overview
A "Jarvis AI voice assistant" project aims to develop a virtual assistant program, inspired
by the AI character "Jarvis" from the Iron Man movies, that utilizes speech recognition
and natural language processing to respond to voice commands and perform various tasks
like searching the web, setting reminders, playing music, controlling smart home devices,
sending emails, and more, essentially acting as a personal digital assistant that interacts
with the user through spoken language.
1.2 Motivation
The motivation behind developing a Jarvis AI voice assistant stems from the desire to
create an intelligent, voice-controlled system that enhances productivity, automates tasks,
and improves human-computer interaction. Inspired by sci-fi depictions like Iron Man’s
J.A.R.V.I.S., this project aims to integrate AI, machine learning, and natural language
processing to enable seamless voice communication and automation. It offers a unique
opportunity to explore cutting-edge technologies such as speech recognition, smart
home integration, and deep learning, making daily activities more efficient and
futuristic. Additionally, it serves as a hands-on learning experience in AI development,
automation, and cloud computing, strengthening technical skills while contributing to
open-source innovation. Beyond technical benefits, this project is a step toward building
personalized AI assistants that adapt to user needs, control IoT devices, and
revolutionize the way we interact with technology. Whether for personal use, career
growth, or research, developing a Jarvis AI assistant is an exciting challenge that brings
the vision of futuristic AI closer to reality.
1.3 Problem, Definition, and Objective
1.3.1 Problem Statement
In today’s fast-paced digital world, managing multiple tasks efficiently while interacting
with technology can be challenging. Users often struggle with time-consuming manual
operations, lack of hands-free interaction, and inefficient task automation. Existing
voice assistants offer basic functionalities but often lack personalization, advanced AI-
driven decision-making, and seamless smart home integration. There is a growing
need for a more intelligent, context-aware, and efficient voice assistant that can
perform complex tasks, automate daily routines, and improve human-computer
interaction.
1.3.2 Definition
1.3.3 Objective
The primary objective of the Jarvis AI Voice Assistant project is to develop an intelligent
and interactive voice assistant that enhances daily productivity and automates tasks using
AI-driven functionalities. Specific goals include:
Voice Recognition & NLP: Implement accurate speech-to-text and text-to-
speech capabilities for seamless interaction.
Task Automation: Enable smart scheduling, reminders, email management, and
task execution through voice commands.
Smart Home Integration: Control IoT devices, lights, fans, security systems, and
home appliances using voice instructions.
AI-Powered Decision Making: Use machine learning algorithms to understand
user preferences and optimize responses.
Multi-Platform Accessibility: Develop a system that works across PCs,
smartphones, and IoT devices for flexibility.
Context Awareness & Personalization: Enhance AI’s ability to adapt to user
habits, preferences, and contextual needs.
Security & Privacy: Ensure data encryption, user authentication, and privacy
protection to build a secure AI assistant.
ADVANTAGES:
• Time Efficiency: It enables quick access to information and services with voice-activated commands, enhancing productivity by reducing the time spent on manual tasks like web searches or scheduling.
• Personalization: The system greets users dynamically based on the time of day, creating a personalized interaction that enhances user engagement.
1.4.2 Limitations
1. Requirement Analysis
Identify key functionalities such as voice recognition, NLP, smart automation,
and IoT integration.
Define hardware and software requirements (e.g., Python, TensorFlow, Google
API, Raspberry Pi).
Determine user needs and security considerations for data privacy.
2. System Design & Architecture
Design a modular system with components for speech processing, NLP,
automation, and device control.
Use client-server architecture for cloud-based AI functionalities.
Develop a user-friendly interface for easy interaction and accessibility.
3. Speech Recognition & NLP Implementation
Use Google Speech-to-Text API, OpenAI Whisper, or CMU Sphinx for voice
recognition.
Integrate Natural Language Processing (NLTK, spaCy, GPT-based models) for
command understanding.
Implement text-to-speech (TTS) engines like Google TTS, pyttsx3, or Festival for AI responses (a minimal listen-and-respond sketch follows this list).
4. Task Automation & Smart Integration
Develop custom scripts for automating tasks like email, calendar, and system
control.
Use IoT protocols (MQTT, HTTP, Home Assistant API) for smart home
integration.
Implement database storage for user preferences and history tracking.
5. Machine Learning & AI Enhancement
Train AI models for personalized responses and contextual understanding.
Implement voice authentication for security using machine learning models.
Use reinforcement learning and neural networks to improve AI interactions over
time.
6. Testing & Debugging
Conduct unit testing for each module (speech processing, NLP, automation).
Perform integration testing to ensure seamless communication between
components.
Use real-world scenarios to improve accuracy and response time.
7. Deployment & Maintenance
Deploy the assistant on local machines, Raspberry Pi, or cloud platforms.
Continuously update AI models and fix bugs based on user feedback.
Ensure data security and regular performance optimization.
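As an illustration of step 3, the following is a minimal sketch of the listen-and-respond loop, assuming the SpeechRecognition and pyttsx3 libraries named above; the Google Web Speech backend and the "stop" exit phrase are arbitrary choices for this example, not fixed project decisions.

    import speech_recognition as sr
    import pyttsx3

    recognizer = sr.Recognizer()
    engine = pyttsx3.init()  # local text-to-speech engine

    def speak(text):
        # Speak a response aloud
        engine.say(text)
        engine.runAndWait()

    def listen():
        # Capture one utterance from the microphone and return it as text
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        try:
            # Google Web Speech API; requires an internet connection
            return recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            return ""  # speech was unintelligible

    while True:
        command = listen()
        if "stop" in command:
            speak("Goodbye")
            break
        elif command:
            speak("You said " + command)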
1.5.1 Problem-Solving
While developing the Jarvis AI Assistant, various challenges arise. Here’s how they can
be addressed:
1. Voice Recognition Accuracy Issues
Problem: Difficulty recognizing accents, background noise interference.
Solution:
Use deep learning-based models (Whisper, Vosk, DeepSpeech) for better accuracy.
Implement noise reduction algorithms and microphone calibration (see the calibration sketch after this list).
2. Misinterpretation of Commands (NLP Limitations)
Problem: The assistant may misunderstand complex or ambiguous commands.
Solution:
Use advanced NLP techniques like GPT, BERT, or Transformer models.
Implement context-awareness by storing previous interactions.
3. Smart Home Integration Issues
Problem: Compatibility issues with different IoT devices.
Solution:
Use standard communication protocols (MQTT, HTTP, IFTTT) for integration.
Develop custom APIs to connect with non-compatible devices.
4. Security & Privacy Concerns
Problem: Risk of unauthorized access or data breaches.
Solution:
Implement voice authentication and encrypted communication.
Use local processing instead of cloud storage for sensitive data.
5. Performance Optimization
Problem: High CPU/GPU usage due to AI processing.
Solution:
Optimize AI models using quantization and edge computing.
Use cloud-based processing for heavy computations.
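For the noise problems in item 1, the SpeechRecognition library's built-in ambient-noise calibration is one plausible starting point. A minimal sketch, assuming a standard microphone; the one-second sampling duration is illustrative:

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Sample one second of background noise and raise the energy
        # threshold accordingly, so ambient sound is not mistaken for speech.
        recognizer.adjust_for_ambient_noise(source, duration=1)
        recognizer.dynamic_energy_threshold = True  # keep adapting while listening
        audio = recognizer.listen(source)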
CHAPTER 2
LITERATURE REVIEW
LITERATURE SURVEY
Paper 1
Title: JARVIS: An Interpretation of AIML with Integration of gTTS and Python.
Authors: Ravivanshikumar Sangpal, Tanvee Gawand, Sahil Vaykar, and Neha Madhavi.
Publication Year: 2019
Description: This paper presents JARVIS, a virtual integrated voice assistant comprising gTTS, AIML (Artificial Intelligence Markup Language), and Python-based state-of-the-art technology in personalized assistant development. JARVIS combines the power of AIML with the industry-leading Google platform for text-to-speech conversion, using the male-pitch voice of the gTTS library, inspired by the Marvel world. It builds on Python's pyttsx engine, which operates alongside gTTS and AIML, facilitating considerably smooth dialogue between the assistant and the user. The feasible use of AIML and its dynamic fusion with platforms like Python (pyttsx) and gTTS (Google Text to Speech) results in a consistent and modular structure for JARVIS, offering widespread reusability and negligible maintenance.
Paper 2
Title: Artificial Intelligence Based Voice Assistant.
Authors: Subhash S, Prajwal N Srivatsa, Siddesh S, Ullas A.
Publication Year: 2020
Description: Voice control is a major growing feature that is changing the way people live. Voice assistants are commonly used in smartphones and laptops. AI-based voice assistants are systems that can recognize human speech and respond with synthesized voices. The assistant gathers audio from the microphone and converts it into text, which is then sent through gTTS (Google Text-to-Speech). The gTTS engine converts the text into an English-language audio file, which is played using the playsound package of the Python programming language.
Paper 3
Title: Darwin: Convolutional Neural Network Based Intelligent Health Assistant.
Authors: Siddhant Rai, Akshayanand Raut, Akash Savaliya, Dr. Radha Shankarmani.
Publication Year: 2018
Description: Healthcare is an essential factor for living a good life. Unfortunately, consulting a doctor for non-life-threatening problems can be difficult at times due to our busy lives. The aim of healthcare services is to make our lives easier and to improve quality of life. The concept of personal assistants or chatbots builds on the progressing maturity of areas such as Artificial Intelligence (AI) and Artificial Neural Networks (ANN). In this paper, the authors propose a healthcare assistant that allows users to check for symptoms of common diseases, suggests visiting a doctor if needed, recommends exercises, and tracks exercise/workout routines, along with providing a comprehensive exercise guide. The primary objective is to develop a system that utilizes AI and deep learning to help people with busy schedules easily keep a check on their health.
CHAPTER 3
SOFTWARE REQUIREMENTS SPECIFICATION
3.1 Assumptions and Dependencies
Assumptions
Several assumptions are made while developing the Jarvis AI Voice Assistant to ensure
smooth functionality and deployment:
User Has a Stable Internet Connection
The assistant relies on cloud-based services for AI processing, NLP, and real-time
updates (e.g., weather, news).
Some basic functionalities may work offline, but advanced features require an
internet connection.
Hardware Meets Minimum Requirements
The system assumes the user has a microphone and speaker for voice interaction.
For advanced AI processing, a powerful CPU/GPU or cloud computing support is
assumed.
User Speaks Clearly and Uses Supported Languages
The assistant assumes users will speak in a language it supports (e.g., English or Spanish).
Background noise is minimal, or noise cancellation techniques are in place.
Compatible Software Environment
The system runs on a Windows, Linux, or macOS environment with Python and
necessary libraries installed.
Required APIs (e.g., Google Speech-to-Text, OpenAI Whisper, Home Assistant
API) are accessible.
Users Will Follow Security Protocols
Users are expected to set up voice authentication or password protection for
secure access.
The assistant assumes the device is not shared with unauthorized users.
Smart Devices Use Standard Communication Protocols
IoT devices support MQTT, HTTP, Bluetooth, or Wi-Fi for integration.
The assistant assumes smart devices can be controlled via APIs or automation
frameworks.
Dependencies
1. Software Dependencies
Speech Recognition & NLP: Google Speech API, OpenAI Whisper, CMU
Sphinx, DeepSpeech.
AI & Machine Learning: TensorFlow, PyTorch, OpenAI API (GPT), NLTK,
spaCy.
Text-to-Speech (TTS): Google TTS, pyttsx3, Festival.
Smart Home Integration: Home Assistant API, MQTT broker, IFTTT (a publish sketch follows this list).
Cloud Services: OpenAI, Google Cloud, AWS for AI processing.
2. Hardware Dependencies
Microphone & Speakers: Required for voice input and output.
Processing Power: AI models require a high-performance CPU/GPU or cloud
computing.
IoT Devices: Smart home appliances (lights, security cameras, thermostats) need
to be compatible.
3. Internet & Network Dependencies
Cloud-Based AI Processing: Many AI functions (e.g., GPT-based NLP, voice
processing) need an active internet connection.
IoT Communication: Smart home automation depends on Wi-Fi, MQTT, or
Bluetooth connections.
4. API & Third-Party Service Dependencies
Reliance on third-party APIs (Google, OpenAI, AWS) means service disruptions
could impact functionality.
API pricing and limitations may restrict usage in free-tier accounts.
5. Security & Privacy Measures
Data encryption and authentication mechanisms are required to prevent
unauthorized access.
User data handling policies need to comply with privacy regulations like GDPR
or CCPA.
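As one concrete illustration of the MQTT dependency above, the sketch below publishes an on/off command for a smart light through a local broker, written against the paho-mqtt 1.x client API; the broker address and topic name are assumptions for the example, not project constants.

    import paho.mqtt.client as mqtt

    BROKER_HOST = "192.168.1.10"               # hypothetical local MQTT broker
    LIGHT_TOPIC = "home/livingroom/light/set"  # hypothetical command topic

    client = mqtt.Client()
    client.connect(BROKER_HOST, 1883, keepalive=60)

    # Publish a retained "ON" command so the light recovers its state
    # if it reconnects to the broker later.
    client.publish(LIGHT_TOPIC, payload="ON", qos=1, retain=True)
    client.disconnect()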
Functional requirements define the core functionalities that the Jarvis AI Voice Assistant
must support to ensure efficient operation and user satisfaction. These requirements are
categorized based on different modules of the system.
Speech-to-Text (STT) – Converts user voice input into text using Google Speech
API, OpenAI Whisper, or CMU Sphinx.
Text-to-Speech (TTS) – Generates human-like voice responses using pyttsx3,
Google TTS, or Festival.
Natural Language Understanding (NLU) – Understands user commands and
context using NLP techniques (spaCy, NLTK, GPT).
Multi-Language Support – Can process and respond in multiple languages for broader accessibility (see the sketch below).
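To make the multi-language requirement concrete: the SpeechRecognition library's recognize_google call accepts a BCP-47 language tag, so the same captured audio can be decoded against different languages. A minimal sketch; the language tags below are examples only:

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)

    # The same audio can be transcribed under different language settings.
    text_en = recognizer.recognize_google(audio, language="en-IN")  # Indian English
    text_hi = recognizer.recognize_google(audio, language="hi-IN")  # Hindi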
The AI assistant should process voice commands with a response time of no more
than 1 second for local operations and 3 seconds for cloud-dependent operations.
The system should support at least 100 concurrent users without significant
degradation in performance.
The voice recognition accuracy should maintain a success rate of 95% or higher
in quiet environments and 85% or higher in noisy environments.
The system should efficiently utilize CPU and memory resources, ensuring that it
does not exceed 30% CPU usage and 500MB RAM usage under normal load.
Reliability: The assistant should maintain 99.9% uptime and handle failures
gracefully with appropriate fallback mechanisms.
Scalability: The system should be capable of handling increased user demand
without significant performance degradation.
Usability: The voice assistant should have an intuitive interface with clear and
natural language interactions.
Maintainability: The codebase should be modular and well-documented to allow
for easy updates and bug fixes.
Portability: The assistant should be deployable across multiple platforms,
including Windows, macOS, Android, and iOS.
Extensibility: The system should allow third-party integrations via APIs to
enhance functionalities.
Energy Efficiency: The assistant should optimize power consumption, especially
on mobile devices, to ensure minimal battery drain.
The system should use a NoSQL database (e.g., MongoDB) for handling
unstructured and semi-structured data efficiently.
A relational database (e.g., PostgreSQL or MySQL) should be used for
structured data storage, such as user profiles and logs.
The database should support automatic backups and redundancy to ensure data
integrity and disaster recovery.
Data access should be optimized using indexed queries to enhance retrieval
performance.
The Jarvis Personal Voice Assistant was developed using Python 3.6+, chosen for its
extensive library support for machine learning and AI tasks. The project runs on
Windows 10 or higher, though adaptable to Linux with minor adjustments, and utilizes
PyCharm or Visual Studio Code as IDEs for effective debugging. Key libraries include
SpeechRecognition for voice-to-text conversion, Pyttsx3 for text-to-speech synthesis,
PyQt5 for GUI development, and PyAutoGUI for simulating keyboard and mouse
actions. OpenCV supports webcam access, while the Wikipedia API, Requests, and
PyWhatKit enhance Jarvis’s web-based functions, such as retrieving information,
accessing YouTube, and fetching news.
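As a hedged illustration of how two of these libraries combine in practice (the query strings are arbitrary examples, not commands taken from the project):

    import wikipedia
    import pywhatkit

    # Answer a spoken "who is ..." query with a short Wikipedia summary
    summary = wikipedia.summary("Alan Turing", sentences=2)
    print(summary)

    # Handle a "play ..." command by playing the first matching YouTube video
    pywhatkit.playonyt("lo-fi study music")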
External APIs like News API and OpenWeatherMap allow for real-time news and
weather updates, and IP geolocation via GeoJS or IPify provides location-based services.
Basic internet connectivity and updated audio drivers are essential to support voice
interaction, along with a web browser for external searches and multimedia content. This setup ensures Jarvis can assist users in an efficient, voice-driven manner.
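A minimal sketch of the OpenWeatherMap lookup, assuming an API key kept in an environment variable; the city and the variable name are illustrative:

    import os
    import requests

    API_KEY = os.environ["OPENWEATHER_API_KEY"]  # hypothetical environment variable
    CITY = "Pune"

    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": CITY, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()

    data = resp.json()
    print(CITY + ": " + str(data["main"]["temp"]) + " °C, "
          + data["weather"][0]["description"])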
The server hosting the AI assistant should have at least 16 GB RAM, 8-core
CPU, and 1TB SSD storage for optimal performance.
Edge devices (e.g., smart speakers or mobile devices) should have at least 4 GB
RAM and a quad-core processor to support local processing.
The system should support integration with external GPUs (NVIDIA CUDA
cores) for machine learning inference acceleration.
The hardware should be energy efficient, with power consumption optimized for
continuous operation.
The Agile Software Development Life Cycle (SDLC) model will be applied to
ensure iterative and incremental development.
Agile allows for continuous feedback and adaptability to changes based on user
requirements and testing outcomes.
Development will follow Scrum methodology, incorporating sprints, daily
stand-up meetings, and retrospectives to enhance collaboration and efficiency.
Regular prototype releases will ensure that user feedback is integrated into the
development process.
The system will undergo frequent testing, including unit testing, integration
testing, and user acceptance testing (UAT) to ensure robustness and
performance.
CHAPTER 4
SYSTEM DESIGN
CHAPTER 5
PROJECT PLAN
The project estimate has been determined through a combination of expert judgment,
historical data, and cost estimation techniques. Cost reconciliation involves comparing
different estimation methods to arrive at a balanced and realistic budget. The estimation
process ensures accuracy and accounts for potential contingencies.
Risk identification is the process of recognizing potential threats and opportunities that
could impact the project. Identified risks include:
Each identified risk is analyzed based on probability and impact. A risk assessment
matrix categorizes risks into high, medium, and low priority, enabling effective
mitigation strategies.
5.2.3 Overview of Risk Mitigation, Monitoring, Management
Speech Recognition Module: Converts spoken words into text using speech-to-
text algorithms.
Natural Language Processing (NLP) Module: Analyzes and understands user
commands using NLP techniques.
Command Processing Module: Interprets and processes user requests to execute tasks (a dispatch sketch follows this list).
Response Generation Module: Uses AI models to generate meaningful
responses.
Integration Module: Connects with external APIs (Google Search, Weather API,
Smart Home devices, etc.).
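To show how the command-processing and response-generation modules might hand off to each other, here is a hedged keyword-dispatch sketch; the handler names and keyword table are illustrative only, not the project's actual interfaces.

    import datetime

    def handle_time(_command):
        return datetime.datetime.now().strftime("It is %H:%M")

    def handle_greeting(_command):
        return "Hello! How can I help you?"

    # Keyword -> handler table consulted by the command-processing module
    DISPATCH = {
        "time": handle_time,
        "hello": handle_greeting,
    }

    def process(command):
        # Route a recognized command to the first matching handler
        for keyword, handler in DISPATCH.items():
            if keyword in command.lower():
                return handler(command)
        return "Sorry, I did not understand that."

    print(process("What's the time?"))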
6.2 Tools and Technologies Used
The following tools and technologies were utilized for building the Jarvis AI Voice
Assistant:
SOFTWARE TESTING
To ensure the reliability and efficiency of the Jarvis AI Voice Assistant, various types of testing were conducted:
Unit Testing
Each module (Speech Recognition, NLP, Task Execution) was tested independently (a unittest sketch follows this section).
Example: Checking if speech-to-text conversion returns accurate results.
Integration Testing
Verified seamless interaction between different components.
Example: Ensuring that recognized text is correctly processed by the NLP
module.
Functional Testing
Tested whether the assistant performs intended functions correctly.
Example: Opening applications, fetching weather updates, or performing web
searches.
Usability Testing
Evaluated the ease of use and user-friendliness.
Example: Checking response time and clarity of generated speech output.
Performance Testing
Assessed speed and accuracy under various conditions.
Example: Testing response time under noisy environments.
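As an illustration of the unit-testing stage, a short sketch using Python's built-in unittest module; parse_intent is a hypothetical stand-in for the project's real command parser, not its actual API.

    import unittest

    def parse_intent(command):
        # Hypothetical stand-in for the assistant's command parser
        command = command.lower()
        if "weather" in command:
            return "get_weather"
        if "joke" in command:
            return "tell_joke"
        return "unknown"

    class TestParseIntent(unittest.TestCase):
        def test_weather_command(self):
            self.assertEqual(parse_intent("What's the weather in Pune?"), "get_weather")

        def test_unknown_command(self):
            self.assertEqual(parse_intent("fly me to the moon"), "unknown")

    if __name__ == "__main__":
        unittest.main()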
CONCLUSIONS
6.1 Conclusion
Voice recognition isn't always perfect and may return inaccurate results.
Appendix A

A.1 System Code

A.1.1 main.py
from Jarvis import JarvisAssistant
import re
import os
import random
import pprint
import datetime
import requests
import sys
import urllib.parse
import pyjokes
import time
import pyautogui
import pywhatkit
import wolframalpha

# Note: "obj" (the JarvisAssistant instance) and "app_id" (the WolframAlpha
# key) are initialised in portions of main.py omitted from this excerpt.

# ================================ MEMORY ================================
EMAIL_DIC = {
    'myself': 'atharvaaingle@gmail.com',
    'my official email': 'atharvaaingle@gmail.com',
    'my second email': 'atharvaaingle@gmail.com',
    'my official mail': 'atharvaaingle@gmail.com',
    'my second mail': 'atharvaaingle@gmail.com'
}


def speak(text):
    # Voice output through the assistant's text-to-speech wrapper
    obj.tts(text)


def computational_intelligence(question):
    # Answer general-knowledge questions through the WolframAlpha API
    try:
        client = wolframalpha.Client(app_id)
        answer = client.query(question)
        answer = next(answer.results).text
        print(answer)
        return answer
    except Exception:
        speak("Sorry sir, I couldn't fetch your question's answer. Please try again")


class MainThread(QThread):
    def __init__(self):
        super(MainThread, self).__init__()

    # ... (the listing below is excerpted; elided code is marked with "...") ...

        while True:
            command = obj.mic_input()

            # Launching a desktop application
            if path is None:
                speak('Application path not found')
                print('Application path not found')
            else:
                speak('Launching: ' + app + ' for you sir!')
                obj.launch_any_app(path_of_app=path)

            # Reading out the top news headlines
            # ... if i == len(news_res) - 2: break ...
            print('These were the top headlines, Have a nice day Sir!!..')
            speak('These were the top headlines, Have a nice day Sir!!..')

            # Playing a requested video on YouTube
            elif 'youtube' in command:
                video = command.split(' ')[1]
                speak(f"Okay sir, playing {video} on youtube")
                pywhatkit.playonyt(video)

            # Sending an email to a contact stored in EMAIL_DIC
            else:
                print("I couldn't find the requested person's email in my "
                      "database. Please try again with a different name")
                speak("I couldn't find the requested person's email in my "
                      "database. Please try again with a different name")
            # ...
            except Exception:
                speak("Sorry sir. Couldn't send your mail. Please try again")

            # Telling a joke
            if 'joke' in command:
                joke = pyjokes.get_joke()
                print(joke)
                speak(joke)

            # Locating a place relative to the user's current position
            if city:
                res = (f"{place} is in {state} state and country {country}. "
                       f"It is {distance} km away from your current location")
                print(res)
                speak(res)
            else:
                res = (f"{state} is a state in {country}. It is {distance} km "
                       f"away from your current location")
                print(res)
                speak(res)
            # ...
            except Exception:
                res = ("Sorry sir, I couldn't get the co-ordinates of the "
                       "location you requested. Please try again")
                speak(res)

            # Displaying a screenshot
            # ...
            except IOError:
                speak("Sorry sir, I am unable to display the screenshot")


class Main(QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.startTask)
        self.ui.pushButton_2.clicked.connect(self.close)

    def showTime(self):
        # Update the on-screen date and time labels
        current_time = QTime.currentTime()
        current_date = QDate.currentDate()
        label_time = current_time.toString('hh:mm:ss')
        label_date = current_date.toString(Qt.ISODate)
        self.ui.textBrowser.setText(label_date)
        self.ui.textBrowser_2.setText(label_time)


app = QApplication(sys.argv)
jarvis = Main()
jarvis.show()
exit(app.exec_())
Bibliography

[2] Rai, S., Raut, A., Savaliya, A., and Shankarmani, R. (2018). Darwin: Convolutional neural network based intelligent health assistant. In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pages 1367–1371. IEEE.

[3] Sangpal, R., Gawand, T., Vaykar, S., and Madhavi, N. (2019). JARVIS: An interpretation of AIML with integration of gTTS and Python. In 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), volume 1, pages 486–489. IEEE.

[4] Subhash, S., Srivatsa, P. N., Siddesh, S., Ullas, A., and Santhosh, B. (2020). Artificial intelligence-based voice assistant. In 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), pages 593–596. IEEE.