
Abstract

The project aims to develop a personal assistant for Python-based systems. Jarvis draws
its inspiration from virtual assistants such as Cortana for Windows and Siri for iOS. It has
been designed to provide a user-friendly interface for carrying out a variety of tasks
through well-defined commands. Users can interact with the assistant either through voice
commands or keyboard input.
As a personal assistant, Jarvis helps the end user with day-to-day activities such as general
conversation, searching queries on Google, Bing, or Yahoo, searching for videos,
retrieving images, reporting live weather conditions, looking up word meanings and
medicine details, offering health recommendations based on symptoms, and reminding the
user about scheduled events and tasks. User statements and commands are analyzed with
the help of machine learning to produce an optimal response.
Acknowledgements

It has been a great pleasure and a valuable experience for us to work on the
project “JARVIS: The Personal Assistant”. We wish to express our gratitude
to those who helped us, in particular our project guide, Prof. Borale S.A.,
for guidance in preparing the manuscript of this report. The project would
not have been successful without their help and valuable suggestions.
Finally, we warmly thank all our colleagues whose encouragement contributed
to the success of this project.
Contents

Certificate
Project Report Approval
Declaration
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Abbreviations

1 INTRODUCTION
1.1 Introduction
1.2 Objectives
1.3 Purpose, Scope, and Applicability
1.3.1 Purpose
1.3.2 Scope
1.3.3 Applicability
1.4 Achievements
1.5 Organisation of Report

2 LITERATURE REVIEW
2.1 LITERATURE SURVEY
2.2 LITERATURE SURVEY COMPARATIVE STUDY

3 SURVEY OF TECHNOLOGIES
3.1 Artificial Intelligence

4 REQUIREMENTS AND ANALYSIS
4.1 Problem Definition
4.2 Requirements Specification
4.3 Planning and Scheduling
4.4 Software and Hardware Requirements
4.5 UML Diagram
4.6 Conceptual Models

5 SYSTEM DESIGN
5.1 Basic Modules
5.1.1 Open websites
5.1.2 Give information about someone
5.1.3 Get weather for a location
5.1.4 Tells about your upcoming events
5.1.5 Tells top headlines
5.2 User interface design

6 CONCLUSIONS
6.1 Conclusion
6.2 Limitations of the System
6.3 Future Scope of the Project

A Appendix A
A.1 System Code
A.1.1 main.py

Bibliography
List of Figures

4.1 UML Diagram 1
4.2 UML Diagram 2
4.3 Flow chart
5.1 Output 1
5.2 Output 2
5.3 Output 3
5.4 Output 4
5.5 Output 5
5.6 Output 6
5.7 Output 7
5.8 Output 8
5.9 Output 9
List of Tables

2.1 LITERATURE SURVEY COMPARATIVE STUDY

4.1 Planning and Scheduling


Abbreviations

AIML    Artificial Intelligence Markup Language
gTTS    Google Text-to-Speech
pyttsx  Python Text-to-Speech
AI      Artificial Intelligence
CHAPTER 1
INTRODUCTION

In recent times, virtual assistants have brought major changes in the way users interact
with technology and in the overall user experience. We already use them for many tasks,
such as switching lights on and off or playing music through streaming apps like Wynk
Music and Spotify. This new method of interacting with technical devices makes spoken
communication a powerful ally of the technology.

In earlier days, the term "virtual assistant" described professionals who provided
ancillary services over the web. The job of a voice assistant can be divided into three
stages: text to speech, text to intention, and intention to action. Voice assistants should
not be confused with virtual assistants, who are people working remotely and who can
therefore handle all kinds of tasks. AI-based voice assistants, by contrast, anticipate the
user's needs and take action on them. They are useful in many fields such as IT helpdesks,
home automation, HR-related tasks, and voice-based search; voice-based search in
particular is likely to become the default for the next generation of users, who
increasingly depend on voice assistants for their everyday needs. In this proposal we have
built an AI-based voice assistant that can perform all of these tasks without inconvenience.

This system is proposed to intelligently speak with users in natural language. It is
inspired by the character J.A.R.V.I.S. (voiced by Paul Bettany) in Marvel Studios'
Iron Man films, who served Tony Stark (Robert Downey Jr.) as a highly intelligent
fictional assistant, helping him in every possible aspect: technological, medical,
social, and beyond.
1.1 Overview

The Jarvis AI voice assistant project aims to develop a virtual assistant program,
inspired by the AI character "Jarvis" from the Iron Man movies, that uses speech
recognition and natural language processing to respond to voice commands. It performs
various tasks such as searching the web, setting reminders, playing music, controlling
smart home devices, and sending emails, essentially acting as a personal digital
assistant that interacts with the user through spoken language.

1.2 Motivation
The motivation behind developing a Jarvis AI voice assistant stems from the desire to
create an intelligent, voice-controlled system that enhances productivity, automates tasks,
and improves human-computer interaction. Inspired by sci-fi depictions like Iron Man’s
J.A.R.V.I.S., this project aims to integrate AI, machine learning, and natural language
processing to enable seamless voice communication and automation. It offers a unique
opportunity to explore cutting-edge technologies such as speech recognition, smart
home integration, and deep learning, making daily activities more efficient and
futuristic. Additionally, it serves as a hands-on learning experience in AI development,
automation, and cloud computing, strengthening technical skills while contributing to
open-source innovation. Beyond technical benefits, this project is a step toward building
personalized AI assistants that adapt to user needs, control IoT devices, and
revolutionize the way we interact with technology. Whether for personal use, career
growth, or research, developing a Jarvis AI assistant is an exciting challenge that brings
the vision of futuristic AI closer to reality.
1.3 Problem Statement, Definition, and Objective
1.3.1 Problem Statement

In today’s fast-paced digital world, managing multiple tasks efficiently while interacting
with technology can be challenging. Users often struggle with time-consuming manual
operations, lack of hands-free interaction, and inefficient task automation. Existing
voice assistants offer basic functionalities but often lack personalization, advanced AI-
driven decision-making, and seamless smart home integration. There is a growing
need for a more intelligent, context-aware, and efficient voice assistant that can
perform complex tasks, automate daily routines, and improve human-computer
interaction.

1.3.2 Definition

A Jarvis AI Voice Assistant is an artificial intelligence-powered virtual assistant


designed to understand and process voice commands, perform automated tasks, and
provide real-time assistance. It integrates natural language processing (NLP), speech
recognition, and machine learning to enable users to interact with their devices using
voice commands. The assistant can retrieve information, control smart devices,
schedule tasks, send notifications, and provide AI-driven insights. By leveraging AI
and automation, it aims to create a more interactive, personalized, and efficient user
experience.

1.3.3 Objective

The primary objective of the Jarvis AI Voice Assistant project is to develop an intelligent
and interactive voice assistant that enhances daily productivity and automates tasks using
AI-driven functionalities. Specific goals include:
• Voice Recognition & NLP: Implement accurate speech-to-text and text-to-speech capabilities for seamless interaction.
• Task Automation: Enable smart scheduling, reminders, email management, and task execution through voice commands.
• Smart Home Integration: Control IoT devices, lights, fans, security systems, and home appliances using voice instructions.
• AI-Powered Decision Making: Use machine learning algorithms to understand user preferences and optimize responses.
• Multi-Platform Accessibility: Develop a system that works across PCs, smartphones, and IoT devices for flexibility.
• Context Awareness & Personalization: Enhance the AI's ability to adapt to user habits, preferences, and contextual needs.
• Security & Privacy: Ensure data encryption, user authentication, and privacy protection to build a secure AI assistant.
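The voice-recognition and NLP goals above ultimately reduce to mapping a recognized utterance onto an intent that the assistant can act on. A minimal, hypothetical sketch of that mapping is shown below; the keyword table and function names are illustrative only (a full assistant would use an NLP model such as spaCy or GPT rather than regular expressions):

```python
import re

# Illustrative keyword-to-intent table; not taken from the actual JARVIS code.
COMMANDS = {
    r"\btime\b": "tell_time",
    r"\bweather\b": "get_weather",
    r"\bremind(er)?\b": "set_reminder",
    r"\blights?\b": "control_lights",
}

def parse_intent(utterance: str) -> str:
    """Return the first matching intent for a spoken command."""
    text = utterance.lower()
    for pattern, intent in COMMANDS.items():
        if re.search(pattern, text):
            return intent
    return "unknown"
```

For example, `parse_intent("What time is it?")` yields `tell_time`, while an unrecognized sentence falls through to `unknown`, which the assistant can answer with a clarifying question.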

1.4 Project Scope and Limitations

1.4.1 Project Scope


The proposed system, named JARVIS (Just A Rather Very Intelligent System), leverages
web technologies to create an intuitive virtual assistant. Built upon Speech Recognition
and Speech Synthesis APIs, JARVIS enables users to interact through voice commands,
initiating actions like opening websites (Google, YouTube, etc.), fetching information
from the web and Wikipedia, and providing real-time data such as current time and date.
The system's architecture includes event-driven programming for real-time command
processing and integration with external APIs for extended functionalities. JARVIS
prioritizes user experience with dynamic greetings based on the time of day and
responsive speech synthesis for clear, audible interactions. Future enhancements aim to
improve natural language understanding, expand task capabilities, and ensure robust
security and privacy measures for user data handling. This project showcases the
potential of web-based virtual assistants in enhancing productivity and accessibility
through innovative AI-driven solutions.
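The dynamic time-of-day greeting described above can be sketched in a few lines of Python using only the standard library; the function name and exact wording are illustrative, not from the project's code:

```python
from datetime import datetime

def greet(now=None):
    """Pick a greeting from the current hour, as JARVIS does on startup."""
    hour = (now or datetime.now()).hour
    if 5 <= hour < 12:
        return "Good morning!"
    if 12 <= hour < 17:
        return "Good afternoon!"
    if 17 <= hour < 21:
        return "Good evening!"
    return "Hello!"
```

Passing an explicit `datetime` makes the function easy to test; calling `greet()` with no argument uses the real clock.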

ADVANTAGES:

• User-Friendly Interaction: JARVIS offers a seamless user experience through voice
commands, making it accessible for users who prefer hands-free interaction.

• Time Efficiency: It enables quick access to information and services with voice-
activated commands, enhancing productivity by reducing the time spent on manual tasks
like web searches or scheduling.

• Multifunctionality: JARVIS can perform a variety of tasks such as opening specific
websites (Google, YouTube), fetching information from the web and Wikipedia, and
providing real-time updates like the current time and date.

• Accessibility: By leveraging Speech Recognition and Speech Synthesis APIs, JARVIS
accommodates users with different abilities, enabling inclusive access to digital services.

• Personalization: The system greets users dynamically based on the time of day,
creating a personalized interaction that enhances user engagement.

1. Voice Recognition & NLP:
• Implement speech-to-text and text-to-speech functionalities.
• Use Natural Language Processing (NLP) for understanding and responding to user queries.
2. Task Automation:
• Automate reminders, alarms, notes, email handling, and calendar scheduling.
• Fetch real-time information like weather updates, news, and traffic reports.
3. Smart Home & IoT Integration:
• Control lights, fans, AC, security cameras, and other smart home devices using voice commands.
• Enable automation sequences (e.g., "Good morning" mode to turn on lights and play news).
4. AI-Based Decision Making & Learning:
• Implement machine learning algorithms to personalize user experiences.
• Adapt responses based on user preferences and past interactions.
5. Multi-Platform Support:
• Develop the assistant for PC, mobile, Raspberry Pi, and IoT devices.
• Enable integration with cloud-based services like Google Assistant, Alexa, or ChatGPT APIs.
6. Security & Privacy:
• Implement user authentication and voice recognition for secure access.
• Encrypt sensitive user data to ensure privacy protection.
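An automation sequence such as the "Good morning" mode mentioned above is essentially an ordered list of device actions. The sketch below illustrates the idea with stub handlers standing in for real device or API calls; every name here is hypothetical, not from the report's implementation:

```python
def good_morning_routine(handlers=None):
    """Run each step of the routine; stubs stand in for real device calls."""
    steps = ["turn_on_lights", "play_news", "report_weather"]
    handlers = handlers or {}
    log = []
    for step in steps:
        # Fall back to a stub that just records the step as done.
        action = handlers.get(step, lambda s=step: f"{s}: done")
        log.append(action())
    return log
```

In a real assistant, each handler would call the corresponding smart-home or news API; keeping the routine as data (a list of step names) makes it easy to let users edit their own sequences.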

1.4.2 Limitations

1. Accuracy of Voice Recognition:
• May struggle with different accents, background noise, or unclear speech.
• Errors in speech-to-text conversion can impact task execution.
2. Limited Context Awareness:
• AI may misinterpret ambiguous commands due to limited contextual understanding.
• Difficulty in maintaining long-term conversational memory.
3. Dependence on Internet & Cloud Services:
• Features like real-time weather, news updates, and AI learning require an active internet connection.
• Offline functionality may be limited to basic commands and device control.
4. Hardware & Software Constraints:
• Performance depends on CPU, RAM, and microphone quality.
• Advanced AI features require powerful processors or cloud-based processing.
5. Security Risks:
• Potential risks of voice spoofing or unauthorized access if security is not properly implemented.
• User data privacy concerns if stored on cloud servers.
6. Integration Challenges:
• Compatibility issues with third-party smart home devices and applications.
• Requires custom APIs or middleware for seamless integration.

1.5 Methodologies and Problem-Solving

The development of the Jarvis AI Voice Assistant follows a structured approach,
integrating Artificial Intelligence (AI), Natural Language Processing (NLP), Speech
Recognition, and Automation. The methodology includes:

1. Requirement Analysis
• Identify key functionalities such as voice recognition, NLP, smart automation, and IoT integration.
• Define hardware and software requirements (e.g., Python, TensorFlow, Google API, Raspberry Pi).
• Determine user needs and security considerations for data privacy.
2. System Design & Architecture
• Design a modular system with components for speech processing, NLP, automation, and device control.
• Use client-server architecture for cloud-based AI functionalities.
• Develop a user-friendly interface for easy interaction and accessibility.
3. Speech Recognition & NLP Implementation
• Use Google Speech-to-Text API, OpenAI Whisper, or CMU Sphinx for voice recognition.
• Integrate Natural Language Processing (NLTK, spaCy, GPT-based models) for command understanding.
• Implement text-to-speech (TTS) engines like Google TTS, pyttsx3, or Festival for AI responses.
4. Task Automation & Smart Integration
• Develop custom scripts for automating tasks like email, calendar, and system control.
• Use IoT protocols (MQTT, HTTP, Home Assistant API) for smart home integration.
• Implement database storage for user preferences and history tracking.
5. Machine Learning & AI Enhancement
• Train AI models for personalized responses and contextual understanding.
• Implement voice authentication for security using machine learning models.
• Use reinforcement learning and neural networks to improve AI interactions over time.
6. Testing & Debugging
• Conduct unit testing for each module (speech processing, NLP, automation).
• Perform integration testing to ensure seamless communication between components.
• Use real-world scenarios to improve accuracy and response time.
7. Deployment & Maintenance
• Deploy the assistant on local machines, Raspberry Pi, or cloud platforms.
• Continuously update AI models and fix bugs based on user feedback.
• Ensure data security and regular performance optimization.
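The speech recognition and TTS step of the methodology can be sketched with the SpeechRecognition and pyttsx3 packages it names. This is a hedged illustration, not the project's actual main.py; the third-party imports are kept inside the functions so the pure text-normalization helper can be used and tested on its own:

```python
def clean_command(raw: str) -> str:
    """Normalize recognized speech (case, whitespace) before intent matching."""
    return " ".join(raw.lower().split())

def listen_once():
    """Capture one utterance; requires the third-party SpeechRecognition package."""
    import speech_recognition as sr
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    # Google's free web API; raises sr.UnknownValueError on unintelligible audio.
    return clean_command(recognizer.recognize_google(audio))

def speak(text):
    """Speak a reply aloud; requires the third-party pyttsx3 package."""
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```

A main loop would then alternate `listen_once()`, intent matching, and `speak()` until the user says a stop word.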
1.5.1 Problem-Solving
While developing the Jarvis AI Assistant, various challenges arise. Here is how they can
be addressed:

1. Voice Recognition Accuracy Issues
• Problem: Difficulty recognizing accents; background noise interference.
• Solution: Use deep learning-based models (Whisper, Vosk, DeepSpeech) for better accuracy; implement noise reduction algorithms and microphone calibration.
2. Misinterpretation of Commands (NLP Limitations)
• Problem: The assistant may misunderstand complex or ambiguous commands.
• Solution: Use advanced NLP techniques like GPT, BERT, or Transformer models; implement context awareness by storing previous interactions.
3. Smart Home Integration Issues
• Problem: Compatibility issues with different IoT devices.
• Solution: Use standard communication protocols (MQTT, HTTP, IFTTT) for integration; develop custom APIs to connect with non-compatible devices.
4. Security & Privacy Concerns
• Problem: Risk of unauthorized access or data breaches.
• Solution: Implement voice authentication and encrypted communication; use local processing instead of cloud storage for sensitive data.
5. Performance Optimization
• Problem: High CPU/GPU usage due to AI processing.
• Solution: Optimize AI models using quantization and edge computing; use cloud-based processing for heavy computations.
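The context-awareness fix for misinterpreted commands (storing previous interactions) can be as simple as a bounded conversation history. A sketch with illustrative class and method names:

```python
from collections import deque

class ContextMemory:
    """Keep the last few exchanges so follow-up commands can be resolved."""

    def __init__(self, maxlen=5):
        # deque with maxlen silently discards the oldest turn when full.
        self.turns = deque(maxlen=maxlen)

    def remember(self, user_text, reply):
        self.turns.append((user_text, reply))

    def last_user_text(self):
        return self.turns[-1][0] if self.turns else None
```

A follow-up such as "and tomorrow?" can then be expanded against `last_user_text()` before intent matching, giving the assistant a rudimentary short-term memory without any database.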
CHAPTER 2
LITERATURE REVIEW

LITERATURE SURVEY

Paper 1
Title: JARVIS: An interpretation of AIML with integration of gTTS and Python.
Authors: Ravivanshikumar Sangpal, Tanvee Gawand, Sahil Vaykar, and Neha Madhavi.
Publication Year: 2019
Description: This paper presents JARVIS, a virtual integrated voice assistant comprising
gTTS, AIML (Artificial Intelligence Markup Language), and Python-based state-of-the-art
technology in personalized assistant development. JARVIS combines the power of AIML
with the industry-leading Google platform for text-to-speech conversion, using the
male-pitch voice of the gTTS library, inspired by the Marvel world. The design builds on
Python's pyttsx, used alongside gTTS and AIML in adjacent phases, facilitating
considerably smooth dialogue between the assistant and its users. The feasible use of
AIML and its dynamic fusion with platforms like Python (pyttsx) and gTTS results in a
consistent and modular structure for JARVIS, offering widespread reusability and
negligible maintenance.

Paper 2
Title: Artificial Intelligence Based Voice Assistant.
Authors: Subhash S, Prajwal N Srivatsa, Siddesh S, Ullas A.
Publication Year: 2020
Description: Voice control is a rapidly growing feature that is changing the way people
live, and voice assistants are now common in smartphones and laptops. AI-based voice
assistants are systems that can recognize human speech and respond via integrated
voices. The assistant described here gathers audio from the microphone and converts it
into text, which is then sent through gTTS (Google Text-to-Speech). The gTTS engine
converts the text into an English-language audio file, which is then played using the
playsound package of the Python programming language.

Paper 3
Title: Darwin: Convolutional Neural Network based Intelligent Health Assistant.
Authors: Siddhant Rai, Akshayanand Raut, Akash Savaliya, Dr. Radha Shankarmani.
Publication Year: 2018
Description: Healthcare is an essential factor for living a good life. Unfortunately,
consulting a doctor for non-life-threatening problems can be difficult at times due to our
busy lives. The aim of healthcare services is to make our lives easier and to improve the
quality of life. The concept of personal assistants or chatbots builds on the progressing
maturity of areas such as Artificial Intelligence (AI) and Artificial Neural Networks
(ANN). In this paper, the authors propose a healthcare assistant that allows users to check
for symptoms of common diseases, suggests visiting a doctor if needed, recommends and
tracks exercise/workout routines, and provides a comprehensive exercise guide. The
primary objective is to develop a system that utilizes AI and deep learning to help people
with busy schedules easily keep a check on their health.
CHAPTER 3
SOFTWARE REQUIREMENTS SPECIFICATION
3.1 Assumptions and Dependencies
Assumptions
Several assumptions are made while developing the Jarvis AI Voice Assistant to ensure
smooth functionality and deployment:
User Has a Stable Internet Connection
• The assistant relies on cloud-based services for AI processing, NLP, and real-time updates (e.g., weather, news).
• Some basic functionalities may work offline, but advanced features require an internet connection.
Hardware Meets Minimum Requirements
• The system assumes the user has a microphone and speaker for voice interaction.
• For advanced AI processing, a powerful CPU/GPU or cloud computing support is assumed.
User Speaks Clearly and Uses Supported Languages
• The assistant assumes users will speak in a language it supports (e.g., English, Spanish, etc.).
• Background noise is minimal, or noise cancellation techniques are in place.
Compatible Software Environment
• The system runs on a Windows, Linux, or macOS environment with Python and the necessary libraries installed.
• Required APIs (e.g., Google Speech-to-Text, OpenAI Whisper, Home Assistant API) are accessible.
Users Will Follow Security Protocols
• Users are expected to set up voice authentication or password protection for secure access.
• The assistant assumes the device is not shared with unauthorized users.
Smart Devices Use Standard Communication Protocols
• IoT devices support MQTT, HTTP, Bluetooth, or Wi-Fi for integration.
• The assistant assumes smart devices can be controlled via APIs or automation frameworks.

Dependencies
1. Software Dependencies
• Speech Recognition & NLP: Google Speech API, OpenAI Whisper, CMU Sphinx, DeepSpeech.
• AI & Machine Learning: TensorFlow, PyTorch, OpenAI API (GPT), NLTK, spaCy.
• Text-to-Speech (TTS): Google TTS, pyttsx3, Festival.
• Smart Home Integration: Home Assistant API, MQTT broker, IFTTT.
• Cloud Services: OpenAI, Google Cloud, AWS for AI processing.
2. Hardware Dependencies
• Microphone & Speakers: Required for voice input and output.
• Processing Power: AI models require a high-performance CPU/GPU or cloud computing.
• IoT Devices: Smart home appliances (lights, security cameras, thermostats) need to be compatible.
3. Internet & Network Dependencies
• Cloud-Based AI Processing: Many AI functions (e.g., GPT-based NLP, voice processing) need an active internet connection.
• IoT Communication: Smart home automation depends on Wi-Fi, MQTT, or Bluetooth connections.
4. API & Third-Party Service Dependencies
• Reliance on third-party APIs (Google, OpenAI, AWS) means service disruptions could impact functionality.
• API pricing and limitations may restrict usage in free-tier accounts.
5. Security & Privacy Measures
• Data encryption and authentication mechanisms are required to prevent unauthorized access.
• User data handling policies need to comply with privacy regulations like GDPR or CCPA.

3.2 Functional Requirements

Functional requirements define the core functionalities that the Jarvis AI Voice Assistant
must support to ensure efficient operation and user satisfaction. These requirements are
categorized based on different modules of the system.

3.2.1 System Features

1. Voice Recognition & NLP (Natural Language Processing)
• Speech-to-Text (STT): Converts user voice input into text using Google Speech API, OpenAI Whisper, or CMU Sphinx.
• Text-to-Speech (TTS): Generates human-like voice responses using pyttsx3, Google TTS, or Festival.
• Natural Language Understanding (NLU): Understands user commands and context using NLP techniques (spaCy, NLTK, GPT).
• Multi-Language Support: Can process and respond in multiple languages for broader accessibility.

2. Task Automation & Productivity Tools
• Reminders & Alarms: Users can set timers, schedule tasks, and receive notifications.
• Calendar & Email Management: Adds events to Google Calendar and manages emails.
• System & File Management: Opens applications, organizes files, and executes system commands.
• Web Search & Information Retrieval: Provides news updates, weather forecasts, stock market reports, and Wikipedia searches.
• Custom Voice Commands: Users can program and customize commands for specific tasks.

3. Smart Home & IoT Integration
• Voice-Controlled Smart Devices: Supports turning on/off lights, adjusting thermostats, and managing security cameras via IoT protocols (MQTT, Home Assistant API, IFTTT).
• Automation Routines: Executes predefined scenarios, such as "Good Morning Mode" (turning on lights, playing music).
• Multi-Device Connectivity: Connects with various IoT devices, including smart speakers, TVs, and appliances.

4. AI-Based Learning & Adaptation
• User Preference Learning: Adapts to user behavior and provides personalized responses.
• Context Awareness: Maintains conversation history for better contextual replies.
• Machine Learning Algorithms: Continuously improves its performance based on past interactions.

5. Security & Privacy Features
• Voice Authentication: Grants access only to authorized users using voice recognition.
• Data Encryption: Ensures secure storage and communication of user data.
• Offline Mode Support: Allows execution of basic commands without an internet connection for privacy-focused users.

6. Multi-Platform & Cross-Device Compatibility
• Windows, Linux, macOS, and Raspberry Pi Support: Compatible with multiple operating systems.
• Cloud & Local Execution: Supports cloud-based AI processing and local execution for efficiency.
• Mobile App Support: Offers remote access via an Android/iOS app for notifications and controls.

7. User Interaction & Customization
• Conversational AI: Engages in interactive voice-based conversations.
• Graphical User Interface (GUI) Option: Provides a visual dashboard for manual controls.
• Customizable Personality & Voice: Users can modify the assistant's voice, tone, and behavior.
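The reminders-and-alarms feature above needs little more than a time-ordered queue. A minimal sketch using the standard library (class and method names are illustrative, not from the project's code):

```python
import heapq

class ReminderQueue:
    """Store (timestamp, message) pairs; pop the ones that are due."""

    def __init__(self):
        self._heap = []  # min-heap keyed on the reminder time

    def add(self, when, message):
        heapq.heappush(self._heap, (when, message))

    def due(self, now):
        """Return and remove every reminder scheduled at or before `now`."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready
```

A background loop in the assistant would poll `due(time.time())` every few seconds and speak each returned message.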

3.3 External Interface Requirements

1. User Interface (UI) Requirements
• The system should support voice-based interaction as the primary mode of communication.
• A Graphical User Interface (GUI) should be available for users who prefer manual interaction.
• The assistant should provide audio feedback for responses, confirmations, and errors.
• Users should be able to customize the assistant's voice, tone, and personality.
• The UI should be simple, intuitive, and accessible, with support for screen readers and voice commands.

2. Hardware Interface Requirements
• The system must support microphones and speakers for voice input and output.
• Compatibility with smart home devices like smart lights, fans, security cameras, and thermostats using IoT protocols (Wi-Fi, MQTT, Zigbee, Bluetooth).
• Must support external devices like webcams, keyboards, and smart displays for extended functionality.
• Should run on PCs (Windows, macOS, Linux), Raspberry Pi, and mobile devices with adequate processing power.

3. Software Interface Requirements
• Integration with speech recognition APIs like Google Speech-to-Text, OpenAI Whisper, or CMU Sphinx.
• Use of Text-to-Speech (TTS) engines like Google TTS, pyttsx3, or Festival for voice output.
• Compatibility with NLP frameworks like OpenAI GPT, NLTK, or spaCy for natural language understanding.
• Support for email, calendar, and task management APIs (e.g., Gmail API, Google Calendar API).
• Integration with home automation platforms like Home Assistant, IFTTT, or MQTT.

4. Communication & Network Requirements
• The assistant should work with a stable internet connection for cloud-based AI processing and real-time data retrieval.
• Offline mode support should be available for basic functionalities.
• IoT device communication should be supported via Wi-Fi, Bluetooth, or Zigbee.
• Encrypted cloud communication should be used to protect sensitive data.
• Secure API connections should be established using HTTPS and OAuth authentication.

3.4 Nonfunctional Requirements

3.4.1 Performance Requirements

 The AI assistant should process voice commands with a response time of no more
than 1 second for local operations and 3 seconds for cloud-dependent operations.
 The system should support at least 100 concurrent users without significant
degradation in performance.
 The voice recognition accuracy should maintain a success rate of 95% or higher
in quiet environments and 85% or higher in noisy environments.
 The system should efficiently utilize CPU and memory resources, ensuring that it
does not exceed 30% CPU usage and 500MB RAM usage under normal load.
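
One way to make the response-time budgets above verifiable is to wrap each handler in a timing check. The sketch below is illustrative only: the budget constants restate the requirement, and `handle_local_command` is a hypothetical stand-in for a real handler.

```python
import time
from functools import wraps

LOCAL_BUDGET_S = 1.0   # local operations: at most 1 second
CLOUD_BUDGET_S = 3.0   # cloud-dependent operations: at most 3 seconds

def within_budget(budget_s):
    """Measure a handler's latency and report whether it met its budget."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            return result, elapsed, elapsed <= budget_s
        return wrapper
    return decorator

@within_budget(LOCAL_BUDGET_S)
def handle_local_command(cmd):
    # stand-in for a fast, on-device handler
    return f"executed {cmd}"

result, elapsed, met_budget = handle_local_command("open notepad")
```

Logging `elapsed` for every command gives the data needed to check the 95th-percentile latency against these budgets during performance testing.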

3.4.2 Safety Requirements

 The AI assistant should not execute potentially harmful commands, such as deleting critical files or making unauthorized system changes.
 The system should include safeguards to prevent voice-activated purchases
without explicit user confirmation.
 The assistant must comply with data privacy regulations (e.g., GDPR, CCPA) to
ensure the safety of user data.
 In case of errors or misinterpretations, the assistant should request confirmation
before executing critical operations.

3.4.3 Security Requirements

 The AI assistant must use end-to-end encryption (AES-256) for transmitting sensitive data.
 User authentication should be required for accessing personalized or sensitive
information.
 The system should have a role-based access control (RBAC) mechanism to
restrict functionalities based on user permissions.
 The assistant should implement automatic session timeouts after a period of
inactivity to prevent unauthorized access.
 The AI should have protection against voice spoofing attacks through biometric
verification (e.g., voiceprint recognition).
 The system should log all voice commands and activities securely for audit and
troubleshooting purposes.
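
The automatic session-timeout requirement can be sketched with a simple inactivity tracker. The 5-minute window and the `Session` class below are assumptions for illustration; passing `now` explicitly just makes the logic testable without waiting in real time.

```python
import time

SESSION_TIMEOUT_S = 300  # assumed 5-minute inactivity window

class Session:
    """Tracks the last activity time and expires after inactivity."""
    def __init__(self, user, now=None):
        self.user = user
        self.last_active = now if now is not None else time.monotonic()

    def touch(self, now=None):
        """Record user activity, resetting the inactivity clock."""
        self.last_active = now if now is not None else time.monotonic()

    def is_expired(self, now=None):
        now = now if now is not None else time.monotonic()
        return (now - self.last_active) > SESSION_TIMEOUT_S

session = Session("authorized_user", now=0.0)
session.is_expired(now=100.0)   # within the window
session.is_expired(now=301.0)   # past the timeout
```

On expiry the assistant would require re-authentication before serving personalized or sensitive requests.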

3.4.4 Software Quality Attributes

 Reliability: The assistant should maintain 99.9% uptime and handle failures
gracefully with appropriate fallback mechanisms.
 Scalability: The system should be capable of handling increased user demand
without significant performance degradation.
 Usability: The voice assistant should have an intuitive interface with clear and
natural language interactions.
 Maintainability: The codebase should be modular and well-documented to allow
for easy updates and bug fixes.
 Portability: The assistant should be deployable across multiple platforms,
including Windows, macOS, Android, and iOS.
 Extensibility: The system should allow third-party integrations via APIs to
enhance functionalities.
 Energy Efficiency: The assistant should optimize power consumption, especially
on mobile devices, to ensure minimal battery drain.

3.5 System Requirements

3.5.1 Database Requirements

 The system should use a NoSQL database (e.g., MongoDB) for handling
unstructured and semi-structured data efficiently.
 A relational database (e.g., PostgreSQL or MySQL) should be used for
structured data storage, such as user profiles and logs.
 The database should support automatic backups and redundancy to ensure data
integrity and disaster recovery.
 Data access should be optimized using indexed queries to enhance retrieval
performance.
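
The indexed-query requirement can be demonstrated in miniature with SQLite, used here only as a stand-in for the PostgreSQL/MySQL store; the table and index names are hypothetical. `EXPLAIN QUERY PLAN` confirms that per-user lookups use the index rather than a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE command_log (user_id INTEGER, command TEXT, ts TEXT)")
# Composite index so per-user history lookups avoid scanning the whole table
conn.execute("CREATE INDEX idx_log_user_ts ON command_log (user_id, ts)")
conn.executemany(
    "INSERT INTO command_log VALUES (?, ?, ?)",
    [(1, "open chrome", "2024-01-01T10:00"), (2, "play music", "2024-01-01T10:05")],
)
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT command FROM command_log WHERE user_id = ?", (1,)
).fetchall()
print(plan)  # the plan should mention idx_log_user_ts
```

The same pattern (indexes on the columns used in WHERE clauses) applies to the production relational database.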

3.5.2 Software Requirements (Platform Choice)

The Jarvis Personal Voice Assistant was developed using Python 3.6+, chosen for its
extensive library support for machine learning and AI tasks. The project runs on
Windows 10 or higher, though adaptable to Linux with minor adjustments, and utilizes
PyCharm or Visual Studio Code as IDEs for effective debugging. Key libraries include
SpeechRecognition for voice-to-text conversion, Pyttsx3 for text-to-speech synthesis,
PyQt5 for GUI development, and PyAutoGUI for simulating keyboard and mouse
actions. OpenCV supports webcam access, while the Wikipedia API, Requests, and
PyWhatKit enhance Jarvis’s web-based functions, such as retrieving information,
accessing YouTube, and fetching news.
External APIs like News API and OpenWeatherMap allow for real-time news and
weather updates, and IP geolocation via GeoJS or IPify provides location-based services.
Basic internet connectivity and updated audio drivers are essential to support voice
interaction, along with a web browser for external searches and multimedia content. This setup enables Jarvis to assist users in an efficient, voice-driven manner.

3.5.3 Hardware Requirements

 The server hosting the AI assistant should have at least 16 GB RAM, 8-core
CPU, and 1TB SSD storage for optimal performance.
 Edge devices (e.g., smart speakers or mobile devices) should have at least 4 GB
RAM and a quad-core processor to support local processing.
 The system should support integration with external GPUs (NVIDIA CUDA
cores) for machine learning inference acceleration.
 The hardware should be energy efficient, with power consumption optimized for
continuous operation.

3.6 Analysis Models: SDLC Model to be Applied

 The Agile Software Development Life Cycle (SDLC) model will be applied to
ensure iterative and incremental development.
 Agile allows for continuous feedback and adaptability to changes based on user
requirements and testing outcomes.
 Development will follow Scrum methodology, incorporating sprints, daily
stand-up meetings, and retrospectives to enhance collaboration and efficiency.
 Regular prototype releases will ensure that user feedback is integrated into the
development process.
 The system will undergo frequent testing, including unit testing, integration
testing, and user acceptance testing (UAT) to ensure robustness and
performance.

CHAPTER 4
SYSTEM DESIGN

4.1 System Architecture

Figure 4.1.1: System Architecture

4.2 Data Flow Diagrams

Figure 4.2.1: DFD Level 0


Figure 4.2.2: DFD Level 1
Figure 4.2.3: DFD Level 2
4.3 Entity Relationship Diagrams

Figure 4.3.1: E-R Diagram


4.5 UML Diagrams

Figure 4.5.1: UML Diagram


4.6 Flow Chart
Figure 4.6.1: Flow Chart

CHAPTER 5
PROJECT PLAN

5.1 Project Estimate

5.1.1 Reconciled Estimates

The project estimate has been determined through a combination of expert judgment,
historical data, and cost estimation techniques. Cost reconciliation involves comparing
different estimation methods to arrive at a balanced and realistic budget. The estimation
process ensures accuracy and accounts for potential contingencies.

5.1.2 Project Resources


Resource planning involves allocating human, financial, and material resources required
to successfully execute the project. The following categories are considered:

 Human Resources: Team members, stakeholders, external consultants.
 Financial Resources: Budget allocation for development, testing, deployment.
 Material Resources: Hardware, software, tools, and infrastructure.

5.2 Risk Management

5.2.1 Risk Identification

Risk identification is the process of recognizing potential threats and opportunities that
could impact the project. Identified risks include:

 Technical Risks: Unforeseen software or hardware failures.
 Operational Risks: Resource constraints or skill gaps.
 Financial Risks: Budget overruns or funding issues.
 External Risks: Regulatory changes or market fluctuations.

5.2.2 Risk Analysis

Each identified risk is analyzed based on probability and impact. A risk assessment
matrix categorizes risks into high, medium, and low priority, enabling effective
mitigation strategies.
5.2.3 Overview of Risk Mitigation, Monitoring, Management

 Mitigation: Preventative measures to reduce risk impact.
 Monitoring: Continuous tracking of risks using dashboards and review meetings.
 Management: Developing contingency plans for risk response.

5.3 Project Schedule

5.3.3 Timeline Chart


CHAPTER 6
PROJECT IMPLEMENTATION

6.1 Overview of Project Modules


The Jarvis AI Voice Assistant is designed with multiple interconnected modules to enable
seamless interaction between users and the system. The key modules include:

 Speech Recognition Module: Converts spoken words into text using speech-to-
text algorithms.
 Natural Language Processing (NLP) Module: Analyzes and understands user
commands using NLP techniques.
 Command Processing Module: Interprets and processes user requests to execute
tasks.
 Response Generation Module: Uses AI models to generate meaningful
responses.
 Integration Module: Connects with external APIs (Google Search, Weather API,
Smart Home devices, etc.).
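
The way these modules hand data to one another can be sketched with stubs. Every function below is a placeholder standing in for the real module (the actual system calls speech-recognition, NLP, and API layers); only the pipeline shape is the point.

```python
def speech_to_text(audio):
    # stub: the real module calls a speech-to-text API
    return audio["transcript"]

def parse_intent(text):
    # stub NLP: a keyword lookup in place of a real parser
    if "weather" in text:
        return {"intent": "get_weather", "slots": {"city": text.split()[-1]}}
    return {"intent": "unknown", "slots": {}}

def execute(intent):
    # stub command processing / integration layer
    if intent["intent"] == "get_weather":
        return f"Fetching weather for {intent['slots']['city']}"
    return "Sorry, I did not understand"

def respond(result):
    return result  # stub: the real module synthesizes speech

reply = respond(execute(parse_intent(
    speech_to_text({"transcript": "what is the weather in pune"}))))
print(reply)  # -> Fetching weather for pune
```

Keeping each stage behind a small function boundary is what allows the modules to be tested independently, as described in Chapter 7.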
6.2 Tools and Technologies Used

The following tools and technologies were utilized for building the Jarvis AI Voice
Assistant:

 Programming Language: Python
 Speech Recognition: Google Speech Recognition API, CMU Sphinx
 Natural Language Processing: NLTK, spaCy, OpenAI GPT
 Machine Learning Frameworks: TensorFlow, PyTorch
 Text-to-Speech (TTS): pyttsx3, Google TTS
 Integration & Automation: Selenium (for web automation), APIs (for third-
party services)
CHAPTER 7

SOFTWARE TESTING

7.1 Types of Testing

To ensure the reliability and efficiency of the Jarvis AI Voice Assistant, various types of
testing were conducted:
 Unit Testing
Each module (Speech Recognition, NLP, Task Execution) was tested
independently.
Example: Checking if speech-to-text conversion returns accurate results.

 Integration Testing
Verified seamless interaction between different components.
Example: Ensuring that recognized text is correctly processed by the NLP
module.

 Functional Testing
Tested whether the assistant performs intended functions correctly.
Example: Opening applications, fetching weather updates, or performing web
searches.

 Usability Testing
Evaluated the ease of use and user-friendliness.
Example: Checking response time and clarity of generated speech output.

 Performance Testing
Assessed speed and accuracy under various conditions.
Example: Testing response time under noisy environments.

 Error Handling & Exception Testing
Verified how the system handles errors like unclear voice input or network
failures.
Example: Checking fallback responses when speech recognition fails.
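
A unit test from this phase might look like the following sketch. `normalize_command` is a hypothetical helper shown only to illustrate the testing pattern; it is not the project's actual function.

```python
import unittest

def normalize_command(text):
    """Unit under test: cleanup applied to recognized speech."""
    return " ".join(text.lower().split())

class TestNormalizeCommand(unittest.TestCase):
    def test_lowercases_and_trims(self):
        self.assertEqual(normalize_command("  Open   Chrome "), "open chrome")

    def test_empty_input(self):
        self.assertEqual(normalize_command(""), "")

suite = unittest.TestLoader().loadTestsFromTestCase(TestNormalizeCommand)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each module (speech recognition, NLP, task execution) was exercised with small, isolated tests of this kind before integration testing.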

7.2 Test Cases & Test Results


CHAPTER 8
CONCLUSIONS

8.1 Conclusion

Through this voice assistant, we have automated various services using a single-line
command. It eases most of the user's tasks, such as searching the web, retrieving
weather forecast details, vocabulary help, and medical-related queries. We aim to
make this project a complete server assistant and make it smart enough to act as a
replacement for general server administration.

8.2 Limitations of the System

1. The AI can work with only a limited number of languages, so it is not
language-diversity friendly; this makes it unusable for the large number of
people who do not know the languages it supports.

2. It fails to operate efficiently in a noisy environment; a silent and calm
environment is needed for JARVIS to work reliably.

3. Voice recognition isn't always perfect and may return inaccurate results.

8.3 Future Scope of the Project

We plan to integrate Jarvis with mobile using React Native, to provide a
synchronized experience between the two connected devices. Further, in the long
run, Jarvis is planned to feature auto deployment supporting Elastic Beanstalk,
backup files, and all operations which a general Server Administrator performs. The
functionality would be seamless enough to replace the Server Administrator with
Jarvis.
Appendix A

A.1 System Code

A.1.1 main.py

from Jarvis import JarvisAssistant
import re
import os
import random
import pprint
import datetime
import requests
import sys
import urllib.parse
import pyjokes
import time
import pyautogui
import pywhatkit
import wolframalpha

from PIL import Image

from PyQt5 import QtWidgets, QtCore, QtGui
from PyQt5.QtCore import QTimer, QTime, QDate, Qt
from PyQt5.QtGui import QMovie
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.uic import loadUiType

from Jarvis.features.gui import Ui_MainWindow
from Jarvis.config import config

obj = JarvisAssistant()

# ================================ MEMORY ================================

GREETINGS = ["hello jarvis", "jarvis", "wake up jarvis", "you there jarvis",
             "time to work jarvis", "hey jarvis", "ok jarvis", "are you there"]
GREETINGS_RES = ["always there for you sir", "i am ready sir",
                 "your wish my command", "how can i help you sir?",
                 "i am online and ready sir"]

EMAIL_DIC = {
    'myself': 'atharvaaingle@gmail.com',
    'my official email': 'atharvaaingle@gmail.com',
    'my second email': 'atharvaaingle@gmail.com',
    'my official mail': 'atharvaaingle@gmail.com',
    'my second mail': 'atharvaaingle@gmail.com'
}

CALENDAR_STRS = ["what do i have", "do i have plans", "am i busy"]
# ========================================================================


def speak(text):
    obj.tts(text)


app_id = config.wolframalpha_id


def computational_intelligence(question):
    try:
        client = wolframalpha.Client(app_id)
        answer = client.query(question)
        answer = next(answer.results).text
        print(answer)
        return answer
    except:
        speak("Sorry sir, I couldn't fetch your question's answer. Please try again")
        return None


def startup():
    # speak("Initializing Jarvis")
    # speak("Calibrating and examining all the core processors")
    # speak("Checking the internet connection")
    hour = int(datetime.datetime.now().hour)
    if 0 <= hour <= 12:
        print("Good Morning Sir")
        speak("Good Morning Sir")
    elif 12 < hour < 18:
        print("Good Afternoon Sir")
        speak("Good Afternoon Sir")
    else:
        print("Good Evening Sir")
        speak("Good Evening Sir")
    print("I am Jarvis")
    speak("I am Jarvis")
    c_time = obj.tell_time()
    print(f"Currently it is {c_time}")
    speak(f"Currently it is {c_time}")
    print("Please tell me how may I help you Sir")
    speak("Please tell me how may I help you Sir")


def wish():
    hour = int(datetime.datetime.now().hour)
    if 0 <= hour <= 12:
        speak("Good Morning")
    elif 12 < hour < 18:
        speak("Good afternoon")
    else:
        speak("Good evening")
    c_time = obj.tell_time()
    speak(f"Currently it is {c_time}")
    speak("I am Jarvis. Online and ready sir. Please tell me how may I help you")


class MainThread(QThread):
    def __init__(self):
        super(MainThread, self).__init__()

    def run(self):
        self.TaskExecution()

    def TaskExecution(self):
        startup()

        while True:
            command = obj.mic_input()

            if re.search('date', command):
                date = obj.tell_me_date()
                print(date)
                speak(date)

            elif "time" in command:
                time_c = obj.tell_time()
                print(time_c)
                speak(f"Sir the time is {time_c}")

            elif re.search('launch', command):
                dict_app = {
                    'chrome': 'C:/Program Files/Google/Chrome/Application/chrome'
                }
                app = command.split(' ', 1)[1]
                path = dict_app.get(app)

                if path is None:
                    speak('Application path not found')
                    print('Application path not found')
                else:
                    speak('Launching: ' + app + ' for you sir!')
                    obj.launch_any_app(path_of_app=path)

            elif command in GREETINGS:
                speak(random.choice(GREETINGS_RES))

            elif re.search('open', command):
                domain = command.split(' ')[-1]
                open_result = obj.website_opener(domain)
                speak(f'Alright sir !! Opening {domain}')
                print(open_result)

            elif re.search('weather', command):
                city = command.split(' ')[-1]
                weather_res = obj.weather(city=city)
                print(weather_res)
                speak(weather_res)

            elif re.search('tell me about', command):
                topic = command.split(' ')[-1]
                if topic:
                    wiki_res = obj.tell_me(topic)
                    print(wiki_res)
                    speak(wiki_res)
                else:
                    speak("Sorry sir. I couldn't load your query from my database. Please try again")

            elif "buzzing" in command or "news" in command or "headlines" in command:
                news_res = obj.news()
                print('Source: The Times Of India')
                speak('Source: The Times Of India')
                print('Todays Headlines are..')
                speak('Todays Headlines are..')
                for index, articles in enumerate(news_res):
                    pprint.pprint(articles['title'])
                    speak(articles['title'])
                    if index == len(news_res) - 2:
                        break
                print('These were the top headlines, Have a nice day Sir!!..')
                speak('These were the top headlines, Have a nice day Sir!!..')

            elif 'search google for' in command:
                obj.search_anything_google(command)

            elif "play music" in command or "hit some music" in command:
                music_dir = "F://Songs//Imagine_Dragons"
                songs = os.listdir(music_dir)
                for song in songs:
                    os.startfile(os.path.join(music_dir, song))

            elif 'youtube' in command:
                video = command.split(' ')[1]
                speak(f"Okay sir, playing {video} on youtube")
                pywhatkit.playonyt(video)

            elif "email" in command or "send email" in command:
                sender_email = config.email
                sender_password = config.email_password
                try:
                    speak("Whom do you want to email sir?")
                    recipient = obj.mic_input()
                    receiver_email = EMAIL_DIC.get(recipient)
                    if receiver_email:
                        speak("What is the subject sir?")
                        subject = obj.mic_input()
                        speak("What should I say?")
                        message = obj.mic_input()
                        msg = 'Subject: {}\n\n{}'.format(subject, message)
                        obj.send_mail(sender_email, sender_password, receiver_email, msg)
                        speak("Email has been successfully sent")
                        time.sleep(2)
                    else:
                        print("I couldn't find the requested person's email in my database. Please try again with a different name")
                        speak("I couldn't find the requested person's email in my database. Please try again with a different name")
                except:
                    speak("Sorry sir. Couldn't send your mail. Please try again")

            elif "calculate" in command:
                question = command
                answer = computational_intelligence(question)
                speak(answer)

            elif "what is" in command or "who is" in command:
                question = command
                answer = computational_intelligence(question)
                speak(answer)

            elif "calendar" in command or "do i have plans" in command or "am i busy" in command:
                obj.google_calendar_events(command)

            if "make a note" in command or "write this down" in command or "remember this" in command:
                speak("What would you like me to write down?")
                note_text = obj.mic_input()
                obj.take_note(note_text)
                speak("I've made a note of that")

            elif "close the note" in command or "close notepad" in command:
                speak("Okay sir, closing notepad")
                os.system("taskkill /f /im notepad.exe")

            if "joke" in command:
                joke = pyjokes.get_joke()
                print(joke)
                speak(joke)

            elif "system" in command:
                sys_info = obj.system_info()
                print(sys_info)
                speak(sys_info)

            elif "where is" in command:
                place = command.split('where is ', 1)[1]
                current_loc, target_loc, distance = obj.location(place)
                city = target_loc.get('city', '')
                state = target_loc.get('state', '')
                country = target_loc.get('country', '')
                time.sleep(1)
                try:
                    if city:
                        res = (f"{place} is in {state} state and country {country}. "
                               f"It is {distance} km away from your current location")
                        print(res)
                        speak(res)
                    else:
                        res = (f"{state} is a state in {country}. "
                               f"It is {distance} km away from your current location")
                        print(res)
                        speak(res)
                except:
                    res = "Sorry sir, I couldn't get the co-ordinates of the location you requested. Please try again"
                    speak(res)

            elif "ip address" in command:
                ip = requests.get('https://api.ipify.org').text
                print(ip)
                speak(f"Your ip address is {ip}")

            elif "switch the window" in command or "switch window" in command:
                speak("Okay sir, Switching the window")
                pyautogui.keyDown("alt")
                pyautogui.press("tab")
                time.sleep(1)
                pyautogui.keyUp("alt")

            elif "where i am" in command or "current location" in command or "where am i" in command:
                try:
                    city, state, country = obj.my_location()
                    print(city, state, country)
                    speak(f"You are currently in {city} city which is in {state} state and country {country}")
                except Exception:
                    speak("Sorry sir, I couldn't fetch your current location. Please try again")

            elif "take screenshot" in command or "take a screenshot" in command or "capture the screen" in command:
                speak("By what name do you want to save the screenshot?")
                name = obj.mic_input()
                speak("Alright sir, taking the screenshot")
                img = pyautogui.screenshot()
                name = f"{name}.png"
                img.save(name)
                speak("The screenshot has been successfully captured")

            elif "show me the screenshot" in command:
                try:
                    img = Image.open('D://JARVIS//JARVIS_2.0//' + name)
                    img.show()
                    speak("Here it is sir")
                    time.sleep(2)
                except IOError:
                    speak("Sorry sir, I am unable to display the screenshot")

            elif "hide all files" in command or "hide this folder" in command:
                os.system("attrib +h /s /d")
                speak("Sir, all the files in this folder are now hidden")

            elif "visible" in command or "make files visible" in command:
                os.system("attrib -h /s /d")
                speak("Sir, all the files in this folder are now visible to everyone. I hope you are taking this decision in your own peace")

            # if "calculate" in command or "what is" in command:
            #     query = command
            #     answer = computational_intelligence(query)
            #     speak(answer)

            elif "goodbye" in command or "offline" in command or "bye" in command:
                speak("Alright sir, going offline. It was nice working with you")
                sys.exit()


startExecution = MainThread()


class Main(QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.startTask)
        self.ui.pushButton_2.clicked.connect(self.close)

    def __del__(self):
        sys.stdout = sys.stdout

    def startTask(self):
        self.ui.movie = QtGui.QMovie("Jarvis/utils/images/live_wallpaper.gif")
        self.ui.label.setMovie(self.ui.movie)
        self.ui.movie.start()
        self.ui.movie = QtGui.QMovie("Jarvis/utils/images/initiating.gif")
        self.ui.label_2.setMovie(self.ui.movie)
        self.ui.movie.start()
        timer = QTimer(self)
        timer.timeout.connect(self.showTime)
        timer.start(1000)
        startExecution.start()

    def showTime(self):
        current_time = QTime.currentTime()
        current_date = QDate.currentDate()
        label_time = current_time.toString('hh:mm:ss')
        label_date = current_date.toString(Qt.ISODate)
        self.ui.textBrowser.setText(label_date)
        self.ui.textBrowser_2.setText(label_time)


app = QApplication(sys.argv)
jarvis = Main()
jarvis.show()
exit(app.exec_())
