Final Report: Eve Autonomous Robot
from
University of Wollongong
by
Muhammad Ameer Hamza Janjua
May 2023
ABSTRACT
Using computer vision technology, Eve records and analyses facial features
to distinguish between intruders and known individuals. Furthermore, it
incorporates speech recognition and natural language processing to provide
interactive responses. Additionally, Eve is trained to detect and categorize
various items with a remarkable accuracy rate of 99 percent. Our solution
aims to provide enhanced neighbourhood security at an affordable cost
accessible to the middle-class population, thereby contributing to global
innovation and progress.
ACKNOWLEDGEMENTS
Statement of Originality
I, Muhammad Ameer Hamza Janjua, declare that this thesis, submitted as
part of the requirements for the award of Master of Engineering, in the
School of Electrical, Computer and Telecommunications Engineering,
University of Wollongong, is wholly my own work unless otherwise
referenced or acknowledged. The document has not been submitted for
qualifications or assessment at any other academic institution.
Signature: Muhammad Ameer Hamza Janjua
Print Name: Muhammad Ameer Hamza Janjua
Student ID Number: 5085706
Date: 21st May 2023
Table of Contents
ABSTRACT
ACKNOWLEDGEMENTS
Statement of Originality
Chapter 1. Introduction
1.1 Overview
1.2 Problem Statement
1.3 Approach
1.4 Scope
1.5 Objectives
1.6 Deliverables
1.7 Overview of the Document
1.8 Document Conventions
1.9 Intended Audience
Chapter 2. Literature Review
2.1 Utilizing image processing to identify faces
2.2 Face tracking
Chapter 3. Software Requirement Specification
3.1. Introduction
3.1.1. Purpose
3.2.3. User Classes and Characteristics
3.2.4. Operating Environment
3.2.4.1. Hardware
3.2.4.2. Software
3.2.5. Design and Implementation Constraints
3.2.6. User Documentation
3.2.7. Assumptions and Dependencies
3.3. External Interface Requirements
3.3.1. User Interfaces
3.3.2. Hardware Interfaces
3.3.3. Software Interfaces
3.3.4. Communications Interfaces
3.4. System Features
3.4.1. Face Detection
3.4.2. Face Recognition and Tracking
3.4.4. Wifi & GSM Module
3.4.5. Object Detection
3.4.6. GUI (Desktop Application)
3.5. Other Non-functional Requirements
3.5.1. Safety Requirements
3.5.2. Security Requirements
3.5.3. Performance Requirements
3.5.4. Cross-platform Requirements
3.5.5. Software Quality Attributes
Chapter 4. Design and Development
4.1. Introduction
4.2. Purpose
4.3. Project Scope
4.4. System Architecture Description
4.4.1. Overview of Modules
4.5. Structure and Relationships
4.5.1. Block Diagram
4.5.4. Sequence Diagrams
4.5.5. Activity Diagram
4.5.6. State Transition Diagram
4.5.7. Class Diagram (UML)
4.6. Detailed Description of Components
4.6.1. Input Hardware
4.6.2. Application
4.6.3. Subsystem
4.6.4. Database
4.6.5. Output Hardware (Servos, Speakers)
4.7. Reuse and Relationships to Other Products
4.8. Design Decisions and Trade-offs
Chapter 5. Project Test and Evaluation
5.1. Introduction
5.2. Test Items
5.3. Features to Be Tested
5.4. Test Approach
5.5. Item Pass/Fail Criteria
5.6. Suspension Criteria and Resumption Requirements
5.7. Test Deliverables
5.7.1. Acquire Video
5.7.2. Face Detection
5.7.3. Face Recognition
5.7.4. Intruder Detection
5.7.5. Face Tracking
5.7.6. Notifications
5.7.7. Voice Recognition
5.7.8. Response Generation
5.7.9. Admin Login
5.7.10. Edit Dataset
5.7.11. Live Video
5.7.12. Object Detection
5.7.13. Hardware Integration
5.7.14. System Integration
5.8. Responsibilities, Staffing and Training Needs
5.8.1. Responsibilities
5.8.2. Staffing and Training Needs
5.9. Environmental Needs
5.10. Risks and Contingencies
5.10.1. Schedule Risks
5.10.2. Operational Risks
5.10.3. Technical Risks
5.10.4. Programmatic Risks
Chapter 6. Future Work
Chapter 7. Conclusion
7.1 Objectives Achieved
Chapter 8. General Information
Chapter 9. System Summary
Chapter 10. Getting Started
Chapter 11. Using the System
APPENDICES
APPENDIX A
Glossary
Chapter 1. Introduction
1.1 Overview
Current technology has shown that computers and robots can detect and discriminate between faces in much the way humans can. We can give robots the capacity to see as humans do, which is beneficial for a variety of tasks. In our project, we give a robot this ability exclusively for the sake of home security: we present a surveillance robot that can protect our homes and premises using face recognition. It is also a social robot that can respond to speech recognition, among many other characteristics.
1.2 Problem Statement
Home security is a critical concern, and advancements in information
technology have paved the way for innovative solutions. The introduction
of artificial intelligence (AI), machine learning, computer vision, and
natural language processing has brought forth the potential for automation
in home security systems.
1.3 Approach
1.4 Scope
Eve will be an autonomous robot with features for home security, including in locations where guards are not allowed. These functions [5] include face detection, face tracking, facial recognition, and speech recognition. Additionally, it can function as a social robot.
As a result, it will serve as a means of both human engagement and security
provision. Modern technologies including artificial intelligence, computer
vision, kinematics, and digital image processing will be used in our
research.
1.5 Objectives
The first and most crucial goal of this system is to offer home security in the best manner possible by substituting for CCTV cameras and notifying the concerned person(s). The system will give home protection that is more effective, productive, and affordable than CCTV cameras and security guards. A further supporting goal is to decrease the communication gap between people and robots using characteristics of our system such as voice recognition, object identification, and speech. The project draws on the following areas:
1. Artificial intelligence
2. Computer vision
3. Image processing
4. Python programming
5. Kinematics and control theory
1.6 Deliverables
Sr  Tasks                        Deliverables
1   Literature Review            Literature Survey
2   Requirements Specifications  Software Requirement Specification document (SRS)
3   Detailed Design              Software Design Specification document (SDS)
4   Implementation               Project demonstration
5   Testing                      Evaluation plan and test document
6   Training                     Deployment plan
7   Deployment                   Application completion with documentation
The document presents diagrams, use cases, the system's sequence, and a general design of the system. The discussion then shifts to a full description of all the relevant components. The system's dependencies, interactions with other products, and reusability are also covered in more detail [22]. Test examples and suggested future work are offered at the conclusion.
to make sure they are creating the system in accordance with its specifications and essential needs.
3. Users:
Use the system and identify any flaws in it, react to failure scenarios, and suggest new features that might improve the project's capabilities.
4. Document writers:
To learn about the system's numerous requirements and design elements, the technologies that will be employed in the system, and the potential issues that might arise during testing and operation.
6. Project Evaluators:
Knowing the project's scope will help evaluators grade the project and also allow them to monitor its progress.
Russell and Norvig [1] present "Artificial Intelligence: A Modern
Approach," which serves as a foundational resource for
understanding AI techniques and their application in various
domains.
Roy and Thrun [16] propose the Certainty Grid, a sensing technique
for mapping the environment, in their paper published in
Autonomous Robots, contributing to the development of perception
algorithms for robots like Eve.
Fox et al. [23] discuss Bayesian filtering techniques for location
estimation in their paper published in IEEE Pervasive Computing,
providing insights into probabilistic methods applicable to robot
localization tasks.
A further paper in Autonomous Robots contributes to the development of mapping algorithms for autonomous systems like Eve.
The following are the first two fundamental methods for face detection:
1. Image processing
2. Using a Haar cascade
import os
import pickle
import numpy as np
import cv2
from PIL import Image

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
image_dir = os.path.join(BASE_DIR, "images")
# A raw string keeps the Windows backslashes from being read as escape sequences.
face_cascade = cv2.CascadeClassifier(
    r'E:\Projects\PyCharm\Faces\data\haarcascade_frontalface_alt2.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()

current_id = 0
label_ids = {}
y_labels = []  # numeric label for each training face
x_train = []   # cropped grayscale face regions

# Data-gathering loop (reconstructed from the standard LBPH training pipeline this
# excerpt follows): each sub-folder of images/ holds one person's photos, and the
# folder name serves as that person's label.
for root, dirs, files in os.walk(image_dir):
    for file in files:
        if not file.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        label = os.path.basename(root).lower()
        if label not in label_ids:
            label_ids[label] = current_id
            current_id += 1
        image_array = np.array(Image.open(os.path.join(root, file)).convert("L"), "uint8")
        for (x, y, w, h) in face_cascade.detectMultiScale(image_array, scaleFactor=1.5, minNeighbors=5):
            x_train.append(image_array[y:y + h, x:x + w])
            y_labels.append(label_ids[label])

with open("labels.pickle", "wb") as f:
    pickle.dump(label_ids, f)

recognizer.train(x_train, np.array(y_labels))
recognizer.save("trainer.yml")
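At run time, recognition reverses this pipeline: detect a face in each frame, then ask the trained LBPH model for the closest label. The following is a minimal sketch of that step; the cascade path, camera index, and distance threshold of 85 are assumptions rather than values taken from the project.

import pickle
import cv2

# Load the artifacts produced by the training script above.
face_cascade = cv2.CascadeClassifier(
    r'E:\Projects\PyCharm\Faces\data\haarcascade_frontalface_alt2.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.yml")
with open("labels.pickle", "rb") as f:
    labels = {v: k for k, v in pickle.load(f).items()}  # id -> name

cap = cv2.VideoCapture(0)  # camera index assumed
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5):
        id_, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # A lower LBPH distance means a better match; the threshold is a tunable assumption.
        name = labels.get(id_, "unknown") if distance < 85 else "intruder"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Eve", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()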
2.2 Face tracking
For face tracking, an Arduino microcontroller is used. Arduino is an open-source platform comprising both hardware and software.
The Viola-Jones method is limited to frontal faces; in contrast, the KLT algorithm continually tracks human faces in the live video stream.
In our project, object identification was carried out using a classifier trained on the COCO model.
The microcontroller is coupled to two servos. Prior to tracking, the servos are centred. For face tracking, the servos are panned and tilted using the coordinates obtained by boxing the face, so that as a person walks to the left or right, the camera position adjusts. The servos have a range of 0 to 180 degrees of rotation.
// Excerpt: the servo objects, joystick pins, ACTIVATED, and the 'degree'
// variable are declared in the full sketch (see section 3.4.3).
void loop() {
  while (Serial.available()) {
    com = Serial.read();
    switch (com) {
      // ...other command cases appear in the full sketch...
      case 0x16:               // case value assumed (label lost in this excerpt); head-nod sequence
        EveServo_2.write(100);
        delay(1000);
        EveServo_2.write(60);
        delay(1000);
        EveServo_2.write(100);
        delay(1000);
        EveServo_2.write(82);  // return the head to its centred position
        break;
    }
  }

  // Read the joystick axes and push buttons (manual control).
  value1_x = analogRead(knob1_x);
  value1_y = analogRead(knob1_y);
  value2_x = analogRead(knob2_x);
  value2_y = analogRead(knob2_y);
  value1_sw = digitalRead(knob1_sw);
  value2_sw = digitalRead(knob2_sw);

  if (value1_sw == ACTIVATED) {
    if (degree < 141) degree = degree + 2;  // step up, capped at 140 degrees
  }
  if (value2_sw == ACTIVATED) {
    if (degree > 89) degree = degree - 2;   // step down, floored at 90 degrees
  }

  delay(300);

  // Echo the joystick readings to the host for debugging.
  Serial.print(value1_x);
  Serial.print(" ");
  Serial.print(value1_y);
  Serial.print(" ");
  Serial.print(value2_x);
  Serial.print(" ");
  Serial.println(value2_y);

  // EveServo_2.write(value2_xF);
  // if (value2_yF > 65)
  //   EveServo_3.write(value2_yF);
  // EveServo_4.write(value1_xF);
  // EveServo_5.write(degree);
}
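On the host side, the face coordinates obtained by boxing the face are turned into the single-byte commands the microcontroller consumes. The sketch below shows one way to close that loop in Python; the command bytes 0x11/0x12 mirror the full Arduino sketch in section 3.4.3, while the serial port name, cascade path, and dead-zone width are assumptions.

import cv2
import serial

arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # port name assumed
face_cascade = cv2.CascadeClassifier(
    r'E:\Projects\PyCharm\Faces\data\haarcascade_frontalface_alt2.xml')
cap = cv2.VideoCapture(0)

DEAD_ZONE = 60  # pixels around the frame centre in which no correction is sent

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]                  # box the first detected face
        face_centre = x + w // 2
        frame_centre = frame.shape[1] // 2
        if face_centre < frame_centre - DEAD_ZONE:
            arduino.write(bytes([0x11]))       # face drifted left: turn left
        elif face_centre > frame_centre + DEAD_ZONE:
            arduino.write(bytes([0x12]))       # face drifted right: turn right

The dead zone stops the servos from chattering when the face is already close to the centre of the frame.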
Chapter 3. Software Requirement Specification
3.1. Introduction
3.1.1. Purpose
This system is also a contribution to the massive global development of autonomous robots, enhancing global innovation via the use of cutting-edge technology.
3.2.3.4 Developers
Developers will use this document during development and when any defect occurs in the product during maintenance.
3.2.4. Operating Environment
3.2.4.1. Hardware
1. Acrylic parts
2. Arduino Mini Mega and Raspberry Pi
3. Camera
4. Servos
5. Ultrasonic distance sensor
6. GSM module
7. Geetech voice module
8. Microphone and joysticks
3.2.4.2. Software
1. PyCharm
2. Python 3, PyQt
3. Anaconda
4. OpenCV
5. Processing
6. Arduino IDE
3.2.5. Design and Implementation Constraints
3.2.5.1 Hardware Constraints
7. Night vision affecting the camera's output
Figure 3.1 UI
Figure 3.2 Arduino Mini Mega
3.4. System Features
This section describes the system features, i.e., the overall capabilities and functions our product will provide.
3.4.1. Face Detection
This feature allows the robot's camera unit to detect the presence of a face or person in front of it. The feature is fundamental: without detection, nothing is seen and nothing further can be done.
3.4.2.3 Functional Requirements
REQ-1 The robot will accurately recognise faces from the collected data.
REQ-2 The system's camera needs to precisely calculate how far away a
person's face is from it and then follow that face in the appropriate direction.
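One camera-only way to satisfy the distance part of REQ-2 is the pinhole model: distance = (known face width x focal length) / face width in pixels. The sketch below shows the arithmetic; both constants are assumptions that must be calibrated by photographing a face at a measured distance, and the project's test cases additionally use the ultrasonic sensor for range.

# Pinhole-camera distance estimate (constants are calibration assumptions).
KNOWN_FACE_WIDTH_CM = 15.0   # typical adult face width
FOCAL_LENGTH_PX = 615.0      # found once by calibration

def estimate_distance_cm(face_pixel_width: int) -> float:
    """Approximate camera-to-face distance from the bounding-box width."""
    return KNOWN_FACE_WIDTH_CM * FOCAL_LENGTH_PX / face_pixel_width

print(estimate_distance_cm(95))  # a 95 px wide face box is roughly 97 cm away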
3.4.3.1 Training
Figure 3.4.3.1 Geetech voice module connected to the computer via serial
Data Flow
1. The robot hears a voice, follows the instructions, and completes tasks.
2. When required, it may reply to voice instructions.
3. It may respond via Google Assistant, NLP, voice recognition, or another approach. The complete Arduino servo-control sketch follows.
#include <Servo.h>

#define ACTIVATED LOW

Servo EveServo_1; // Head Movement - Front and Back
Servo EveServo_2; // Head Movement - Clockwise and Anticlockwise
Servo EveServo_3; // Head Rotation - Up and Down
Servo EveServo_4; // Whole Body Rotation - Z axis
Servo EveServo_5; // Head Movement - Up and Down

byte com = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);

  EveServo_1.attach(32);
  EveServo_2.attach(34);
  EveServo_3.attach(36);
  EveServo_4.attach(38);
  EveServo_5.attach(40);

  // Move every servo to its centred starting position.
  EveServo_1.write(90);
  EveServo_2.write(82);
  EveServo_3.write(100);
  EveServo_4.write(80);
  EveServo_5.write(80);

  delay(2000);

  // Handshake bytes sent to the host over serial.
  Serial.write(0xAA);
  Serial.write(0x37);

  delay(1000);

  Serial.write(0xAA);
  Serial.write(0x21);
}

void loop() {
  // put your main code here, to run repeatedly:
  while (Serial.available()) {
    com = Serial.read();

    switch (com) {
      case 0x11: // Command 1 is to make Eve turn left
        EveServo_4.write(170);
        break;

      case 0x12: // Command 2 is to make Eve turn right
        EveServo_4.write(60);
        break;

      case 0x13: // Command 3: tilt the head up
        EveServo_3.write(150);
        break;

      case 0x14: // Command 4: tilt the head down
        EveServo_3.write(75);
        break;

      case 0x15: // Command 5 (same movement as command 1 in the original)
        EveServo_4.write(170);
        break;
    }
  }
}
Alternate Data Flow
1. The robot's microphone cannot transmit speech signals.
2. After voice recognition, the voice is not accurately identified or the appropriate actions are not carried out.
3. Mistakes are made while answering questions or performing actions according to the instructions.
Functional Requirements
REQ-1 Eve must accurately transfer every vocal signal to the computer through the serial port connection and, after natural language processing, must be able to provide the desired results.
REQ-2 Following voice recognition, the system must perform the proper action.
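The project performs voice recognition with the Geetech hardware module. Purely as a software-side illustration of the same data flow from REQ-1 and REQ-2, the sketch below uses the third-party SpeechRecognition package (an assumption, not the project's actual pipeline) to turn one utterance into a command string.

import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_for_command() -> str:
    """Capture one utterance from the microphone and return the recognized text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""  # alternate data flow: the voice is not accurately identified

command = listen_for_command()
if "look up" in command:
    print("send tilt-up command to the Arduino")  # hypothetical follow-up action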
These capabilities allow the robot to establish connections with both a Wi-
Fi and a GSM module.
Figure 3.4.4.2 GSM Message test
Alternate Data Flow
The GSM module has no signal and is unable to send text messages.
REQ-1 The Arduino IDE has to be used to set up the system's network connection.
REQ-2 When a face that is not recognised is detected, the system must send a notification through the GSM module.
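A minimal host-side sketch of the REQ-2 notification path might look as follows, assuming a SIM900-style GSM module driven by the standard AT command set over a serial line; the port name and phone number are placeholders.

import time
import serial

gsm = serial.Serial('/dev/ttyS0', 9600, timeout=2)  # port name assumed

def send_intruder_sms(number: str, text: str) -> None:
    """Send one SMS using the plain-text AT command sequence."""
    gsm.write(b'AT+CMGF=1\r')                 # select SMS text mode
    time.sleep(0.5)
    gsm.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    gsm.write(text.encode() + b'\x1a')        # Ctrl+Z terminates the message body
    time.sleep(3)

send_intruder_sms("+61XXXXXXXXX", "Eve: unrecognised face detected.")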
3.4.5 Object detection
3.4.5.2 Stimulus/Response Sequences
# Excerpt from the inference routine of the TensorFlow Object Detection API tutorial.
# (The first line is reconstructed from that tutorial: detection_boxes is squeezed
# out of tensor_dict just before the masks are.)
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframing is required to translate the masks from box coordinates to
# image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
    detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension.
tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)

image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference.
output_dict = sess.run(tensor_dict,
                       feed_dict={image_tensor: np.expand_dims(image, 0)})
Logging in allows the administrator to access the dataset, change it, encode
and decode faces, and watch live video.
Figure 3.4.2 GUI
Basic Data Flow
Reliability
Chapter 4. Design and Development
4.1. Introduction
This chapter aims to provide sufficient information regarding the creation of an autonomous robot.
4.2. Purpose
A system must have a strong architecture, since the architecture serves as the project's fundamental building block and as a basic blueprint for its design and growth. This software design specification (SDS) therefore describes the design, functionality, and architecture of EVE. Along with the architecture, the document includes several design diagrams and explanations of them, as well as information on the key parts, building blocks, and system modules that will help developers build and organise the system appropriately.
Figure 4.1 System Diagram (Created on Visio)
Figure 4.2 Block Diagram (Created on Visio)
Figure 4.3 Dataflow Diagram (created on Visio)
Figure 4.4 Use Case Diagram created on Lucid Charts
4.5.3.1 Actors
1. Admin
2. Robot
3. Dataset (Secondary actor)
The Admin's use cases include:
1. Login
2. View camera input/video feed
3. Give voice commands
4. Get notifications
5. View dataset
6. Change dataset
Normal flow: (i) The robot extracts an image of the face from the camera input.
Precondition: The face is properly detected, because the robot will not be able to recognize anything without prior detection.
Postcondition: Face recognition occurs successfully.
Includes: Dataset
Use case: Face detection
Actors: Robot
Use case description: This feature allows the robot to detect a human face when it comes into the robot's camera view.

Use case description: This feature allows the robot to track a face which is not recognized.

Alternative flow: The robot cannot detect that voice commands are given to it, or the voice is not recognised.
Precondition: There must be a voice of a known person generated.
Postcondition: Voice is detected successfully.
Includes: N/A
Extends: N/A
Table 4.4 Use Case 4

Extends: N/A
Table 4.5 Use Case 5
Use case: Response Generation
Actors: Robot
Use case description: The robot generates a response according to the given voice commands.
Normal flow:
(i) Admin/known user gives commands.
(ii) The robot receives instructions via serial from the Arduino.
(iii) The Arduino processes the information and makes the actuators move.
Alternative flow: Actuator movement is not correct.
Precondition: There must be some commands/instructions given.
Postcondition: Servos/hardware move in the right direction.
Includes: Actuators
Excludes: Chatbot, Google Assistant
Table 4.6 Use Case 6
Use case description: This use case allows the robot to send notifications to the admin in case of an intruder or unknown person.

Use case description: Using this use case, the Admin can give voice commands to the system, including in case of an intruder.
Includes: N/A
Extends: N/A
Table 4.12 Use Case 12
4.5.4 Sequence Diagrams
4.5.4.1 Voice Recognition sequence diagram
4.5.4.3 Desktop application Sequence Diagram
4.5.6 State Transition Diagram
This section shows how the application transitions from one state to another.
4.5.7 Class Diagram (UML)
Class Descriptions
Main: This is the main class. It executes first, acquires the video, extracts images from that video, and calls the face detection method.
Face detection: Input from the camera is taken in the form of images, in which human faces are then detected.
Hardware: The hardware class moves the servos. It is called after voice recognition or facial recognition. It may track faces by moving Eve's hardware (servos) or respond to vocal instructions by moving the servos; for example, when told to gaze up, it moves the servos to look up.
4.6.2 Application
Identification: Name: Application UI. Location: presentation layer of the system architecture.
Type: UI components; the application will be connected to a screen.
Subordinates: This component has two subordinates: one is responsible for displaying the feed, the other for recognizing new faces and commands.
Dependencies: This component (3.2) interacts with 3.3 'Subsystem', 3.1 'Input Hardware', and 3.4 'Database'; it is dependent upon the Subsystem and Database components.
Interfaces: This component provides an interface to the Database and Subsystem components when the Raspberry Pi is connected to an external screen using HDMI.
Resources: A screen for display, a storage device for logs in the Raspberry Pi, an HDMI cable, a power supply, and the Python code embedded in the Raspberry Pi to run the application.
Table 4.16 Application
4.6.3 Subsystem
This component has 7 sub-systems:
1. Face detection
2. Face recognition
3. Face tracking
4. Voice detection
5. Voice recognition
6. Voice commands
7. Actuators
Table 4.17 Subsystem
Identification: Name: Subsystem management. Location: processing layer of the system architecture.
Type: Processing component
Purpose: Input comes in from the input-devices component, data sets are extracted from the database, and the processing is done here.
1. The system's camera shall correctly identify the distance of a person's face from it and track it in the right direction.
2. The robot shall correctly recognize faces from the dataset.
3. The system shall correctly take actions after voice recognition.
4. The system shall transmit all voice signals correctly to the computer through serial port communication, and after natural language processing it should be able to deliver the output correctly.
5. The system's network connection must be configured through the Arduino.
4.6.4 Database
Table 4.18 Database
Identification: Name: Database. Location: application layer of the system architecture.
Type: Data storage component
Function: This component uses the input data streams of audio and video, taking assistance from the internet and the provided data set. Faces are first detected and then recognized using the ratios between different points on the face, and audio is recognized using frequency matching. The component then keeps track of the face and generates the appropriate response for a command. It is also responsible for classifying new commands and faces and updating the database; the learning process is handled here as well.
Subordinates: This component has 8 subordinates: face detection, face identification, face tracking, voice detection, voice recognition, voice commands, input stream, and decision.
Dependencies: This component is dependent upon 3.1 'Input HW' and 3.4 'DB', whereas 3.2 'Application' and 3.5 'Output HW' are dependent on it.
Interfaces: Saves and takes data from the database and provides content for the actuators and the application; resides on the Arduino Mini Mega and Raspberry Pi.
Resources: Hardware: all input hardware. Software: database, Python and Processing (Java) code, Raspbian.
Data: All data from the input stream, database, and internet.
4.6.5 Output Hardware (Servos, Speakers)
Identification: Name: Output hardware. Location: presentation layer.
Subordinates: This component has 3 subordinates:
1. Servos
2. Audio output unit
3. Acrylic body parts
Dependencies: No component depends on this component, whereas this component is dependent upon the Database (3.4) and Subsystem (3.3) components.
Interfaces: USB ports and pins used to connect to the Arduino/Raspberry Pi.
Resources: Hardware: the power supply provided to the Arduino is used to run this component.
Processing: N/A
Data: Audio stream, electrical pulses
4.7. Reuse and Relationships to Other Products
Our system, the Eve autonomous robot, is a wholly original creation: it neither extends nor derives from any other application at any level or from any previously existing system.
1) Our autonomous robot has the potential to develop into a larger, more complicated system with greater capabilities.
2) Developers may repurpose a variety of the hardware used in our project and link it to other hardware systems to create a new, improved system.
3) Our system's components may also be used to build a robot that uses a laser pointer to automatically find targets (it could also be fitted with weapons of light and medium calibre).
4) By developing AI algorithms that enable our system to automatically detect and target ground-, sea-, and air-based combat units with a high-power laser, the robotic hardware and software components could be reused to develop our system into a larger surveillance system.
5) To further enhance our project's autonomy, we may also add potential machine learning applications.
6) As a result, by continuously expanding the system's services, its usefulness may be improved.
4.8. Design Decisions and Trade-offs
2) The effectiveness and reaction time of the final product will be quite poor, which will negatively impact the target user's experience.
The system follows the Model-View-Controller (MVC) pattern:
1. model: manages the system's data and core logic
2. view: shows the user the information (more than one view may be defined)
3. controller: accepts input, validates it, and passes it to the model
The system is divided into modules through decoupling, which also ensures effective code reuse.
Figure 4.11 MVC Design Pattern
The following graphic illustrates how the controller receives input from sensors, which might take the form of camera images or spoken instructions. The controller also reacts to user input and interacts with the objects in the data model. The controller may validate the input before passing it on to the model. In our scenario, the Arduino serves as the controller and communicates input to the model through the serial port.
The model then takes the controller's input and uses the dataset's image data to match the picture against the dataset before performing subsequent operations such as face detection, face recognition, and voice recognition. The model then updates the view, which may be any type of information output.
It is possible to have many views of the same data, and our approach likewise employs multiple views. In the case of face tracking, our model refreshes the view by sending movement orders through the Arduino to the servos, which move in the proper directions to track the face; in the case of voice-command response generation, it provides an accurate response. The model updates the second view, the desktop application, by supplying data to the desktop app.
MVC therefore aids our project by decoupling the various system components and supporting multiple views.
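A minimal sketch of this decoupling, with illustrative class names that are not taken from the project's code base, might look as follows: the controller validates serial input from the Arduino, the model processes it, and every registered view is updated.

import serial

class EveController:
    """Arduino-facing controller: validates raw input and forwards it to the model."""
    def __init__(self, model, port='/dev/ttyUSB0'):  # port name assumed
        self.model = model
        self.link = serial.Serial(port, 9600, timeout=1)

    def poll(self):
        raw = self.link.readline().strip()
        if raw:                           # basic validation before the model sees it
            self.model.handle(raw)

class EveModel:
    """Matches input against the dataset and updates every registered view."""
    def __init__(self):
        self.views = []

    def handle(self, event: bytes):
        result = f"processed {event!r}"   # face/voice matching would happen here
        for view in self.views:
            view.update(result)

class DesktopView:
    """One of several possible views (servo movement is another)."""
    def update(self, result: str):
        print("GUI shows:", result)

if __name__ == '__main__':               # requires the Arduino to be attached
    model = EveModel()
    model.views.append(DesktopView())
    EveController(model).poll()

Swapping DesktopView for a servo-driving view requires no change to the model, which is exactly the decoupling benefit described above.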
Chapter 5. Project Test and Evaluation
5.1. Introduction
This section describes the methods, processes, and techniques used to design the tests.
After it has been built, software is tested to verify its quality; project testing enhances reliability and efficiency.
This test plan outlines the tactics, procedures, and management techniques used to organise, carry out, and manage the testing of Eve, and will make sure that our system complies with its functional, non-functional, and customer requirements.
To verify that our system complies with both its functional and non-functional criteria, we will test it manually: the tester works through the test cases by hand, examines the system in the role of an end user, and records any flaws or problems. Unit testing and integration testing will be used; each unit will first be tested independently before being merged with the other units. Black-box testing is performed for each item, and acceptance testing is performed for the merged units.
This chapter covers the scope, strategy, methodology, and testing technique for EVE, and specifies the test items' pass/fail criteria. Each test case details who will conduct the test, the prerequisites needed to carry it out, the object to be tested, the input, the anticipated output or results, and, where relevant, the procedural steps.
5.2. Test Items
The following modules, features, and functions of our system have been selected for testing in accordance with the specifications stated in our Software Requirements Specification for EVE:
1) Face recognition
2) Face identification
3) Obtain sensor readings and outputs using sensors and hardware
components.
4) Voice response and recognition
5) Desktop Software
6) Dataset
7) Notifications
8) Tracking and measuring distance
9) Detection of objects
The following list outlines the order in which these test items will be
administered:
3) The robot must accurately classify faces in the data collection as either known faces or unknown faces, such as intruders.
The plan includes unit testing using both white-box and black-box techniques.
1) Unit testing: Unit testing is carried out at the source or code level to check for programming-language-specific issues, including incorrect syntax and logical flaws, and to test individual functions or code modules. The purpose of the unit test cases is to validate the programs' accuracy. Developers are responsible for unit testing.
2) Integration testing: In this testing, we check that each of the previously examined components operates correctly when used in tandem. Following individual testing of each unit, every unit undergoes a module test; black-box testing is carried out after modular integration.
3) System testing: System testing ultimately verifies that each module is functional both on its own and when coupled. The effectiveness of the whole system is then judged by the program's results.
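As an illustration of the unit-testing level, the sketch below checks one property of the face-detection unit, namely that the Haar cascade reports no faces in a blank frame. The cascade file name is an assumption; cv2.data.haarcascades points at the cascades bundled with the opencv-python package.

import unittest
import numpy as np
import cv2

class FaceDetectionUnitTest(unittest.TestCase):
    def setUp(self):
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_frontalface_alt2.xml')

    def test_blank_frame_has_no_faces(self):
        blank = np.zeros((480, 640), dtype=np.uint8)  # black grayscale frame
        faces = self.cascade.detectMultiScale(blank, 1.3, 5)
        self.assertEqual(len(faces), 0)

if __name__ == '__main__':
    unittest.main()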
Steps:
1. The camera feed goes to the computer through serial communication for processing.
2. Haar cascades extract the facial features, and the facial coordinates are identified in the video stream/frame.
3. The target is locked, indicating that a face is present in front of the camera.
Test case number: TC 05
Input specifications:
Valid input:
1) Camera video frames
2) Detected face within 400 cm distance
3) Correct ultrasonic distance sensor readings
Invalid input:
1) Person is out of the camera's range, i.e., beyond 4 m
2) Incorrect distance measured by the ultrasonic distance sensor
3) Incorrect, jittery servo movement
Steps:
1) A face is detected, not recognized, and is classified as an intruder.
2) The ultrasonic distance sensor measures the distance of the intruder's face. If the face is within the range of the sensor, movement directions are given to the servos via the Arduino, which then tracks the face of the intruder (locked target).
Output specifications:
Valid output:
1) Servo movements to track the intruder's face, e.g. Servo.write(up, down, left, right)
Invalid output:
1) Tracking discontinued due to incorrect sensor readings and range limitation
Environmental needs: Hardware: servos, ultrasonic distance sensor. Software: Python.
5.7.6. Notifications
Test case number: TC 06
Test items/features: Notifications will be sent to the admin in case of an intruder/unfamiliar person.
Testing technique: Black-box testing, component testing
Input specifications:
Valid input:
1) Video frames, intruder's face
2) GSM module
3) Admin information through the database
Invalid input:

5.7.7. Voice Recognition
Testing technique: Black-box testing, component testing
Input specifications:
Valid input:
1) Voice commands to the robot
2) Geetech hexadecimal dictionary for voice commands
Invalid input:
5.7.9. Admin Login
Input specifications:
Valid input:
1) Correct username and password given by the admin
Invalid output:
1) Username not recognized due to limitations of the database
Table 5.9 Test Case 9
5.7.10. Edit Dataset
Test items/features: This test case covers the Admin's ability to change the images folder in the data set through which the system recognizes faces, and to add more images to the data set so that the system/robot can recognize more faces.
Testing technique: Black-box testing, component testing
Input specifications:
Valid input:
1) Images of faces to be added to the dataset, given as input in the Python desktop application
Invalid input:
1) Image file format incompatible
Steps:
1) Admin is logged in.
2) Admin clicks "Add User" to add new faces to the dataset (encoding).
3) Admin clicks the "Delete" button to remove (decode) any face from the data set.
4) Admin can view the images in the dataset in the "Users" box.
5) Admin can see recognized faces in the "Recognized face" box.
Environmental needs: Software: desktop application (GUI). Hardware: laptop containing the GUI.
Table 5.10 Test Case 10
5.7.11. Live Video
Test case name: Live Video (GUI desktop)
Test items/features: This test case examines the ability of the desktop application (GUI) to show live video from the robot.
Input specifications:
Input values:
1) Admin is logged in
Table 5.11 Test Case 11
5.7.12. Object Detection
Testing technique: Black-box testing, component testing
Input specifications:
Valid input:
1) Different objects in front of the camera
Invalid input:
1) Object outside the camera's range, i.e., beyond 4 m
2) Insufficient brightness
Invalid output:
1) Object recognized incorrectly (a chair is recognized as a bottle)
2) Has not happened yet, but is rarely possible
Environmental needs: Software: Keras, TensorFlow library, Python interpreter. Hardware: camera, sensors.
5.7.13. Hardware Integration
Test case number: TC 13
Test items/features: This test case verifies that the different hardware components of the system work properly after hardware integration.
Input specifications:
1) Camera with resolution 1280x960
2) Arduino Mini Mega
3) Servos attached to the camera
4) Ultrasonic distance sensor
5) GSM module
6) Geetech voice module
7) Joysticks and microphone
Testing technique: Black-box testing
Output specifications:
1) Camera acquiring video every second
2) Serial port communication with the Arduino
3) Servos showing movements with and without joysticks
4) GSM module working
5) Geetech module and microphone detecting voice
5.7.14. System Integration
Input specifications:
Valid input:
1) Face in front of the robot's camera
2) Camera acquiring video
3) Encoded dataset, correctly mounted GSM module, GUI application
4) Ultrasonic distance sensor, servos, Arduino, serial port communication
Invalid input:
1) Face behind the camera or outside the camera's range and angle
2) Dataset limitations; GSM module not configured properly
Output specifications:
Valid output:
1) Detected face
2) Recognized face and intruder detected
3) Tracked face
4) Notifications sent to the admin
5) Objects are detected
6) Admin is logged in, can view the dataset, and can see live video
Invalid output:
1) Face not detected; a familiar person is recognized as an intruder
2) Face is not tracked due to incorrect sensor readings
Table 5.14 Test Case 14
Figure 5.8 System Integration
The testing procedures and testing methods are carried out simultaneously by all members of the Eve project team.
Hardware:
1) Camera with resolution 1280x960
2) Arduino Mini Mega
3) Servos attached to the camera
4) Ultrasonic distance sensor
5) GSM module
6) Geetech voice module
7) Joysticks and microphone
Software:
1) Python 3.7
2) PyCharm
3) Anaconda for NLP
4) Java IDE
5) OpenCV
6) COCO for object recognition
7) Keras, TensorFlow
8) PyQt 5 for GUI
9) Processing IDE
10) MATLAB
Situations that expose a project to defects and interfere with its timely completion can arise during development.
Such events might cause the project to fall behind schedule; to finish on time, we would then need to increase the number of hours per day spent working on the project.
Technical risks that could arise from non-conformance with the requirements will be minimised by maintaining the criteria defined at the outset.
In order to keep within the project's parameters and reduce programmatic
risks, the project's scope will be restricted.
Chapter 7. Conclusion
Chapter 8 General Information
System Overview:
Eve is an autonomous home security robot. Before installation, its camera unit captures several images of each recognisable face from a variety of angles in order to establish a dataset, and each familiar individual is given their own dataset entry along with a name. Eve's desktop application is used during this installation step.
Following the installation phase, Eve operates by continually filming its surroundings and identifying each person's face. If a person enters the area who is not listed in Eve's database, it recognises the individual as an intruder and notifies the user or administrator, through a text message or sound output via its speaker, that a stranger has entered the area. Eve also functions as a home assistant: it is fully capable of carrying out simple daily activities and holding conversations, much like Google Assistant and Amazon Alexa. Although it has a desktop programme that can display the live stream of the input video and the faces it detects, Eve is a self-contained device that can function without an interface.
Organization of the manual:
1. The user's manual is made up of the following parts: General Information, System Summary, Getting Started, and Using the System.
2. The General Information section describes the system and its intended use in broad terms.
3. The System Summary section provides a broad overview of the system and describes how its necessary hardware and software are used.
4. It also covers system settings, user access levels, and responses in the event of any potential issues.
5. The Getting Started section describes how to install and configure the system for the first time, and briefly presents the system settings.
6. The Using the System section offers a thorough explanation of system operations.
Chapter 9 System Summary
The System Summary section provides a broad description of the system.
System Configuration:
Eve is a self-sufficient machine that includes all the parts it needs to work: a GSM module, a Raspberry Pi, an Arduino Mini, a set of pre-assembled servos, a camera, a microphone, a speaker, and a port. To receive the SMS alerts Eve sends, the desktop programme must be executed on a computer running Windows 7 or later, together with a functional smartphone. An internet connection is essential for Eve's assistant component to function.
Contingencies:
The database will not be impacted, and user datasets will be secure in the
event of any problems or system breakdowns.
Eve System:
1. Remove the camera cover and clean the camera.
4. The Raspberry Pi ought to have internet connectivity.
Desktop Application:
System Interface:
The user will be able to log in on their own by inputting their username and
password.
Display Stream and Datasets
The user may browse the live stream, submit their information, and access
previously saved datasets after successfully authenticating into the
programme.
Notification SMS
This function will alert the user through SMS if there is an intruder around
Eve.
Surveillance Mode:
1. The user will launch Eve, which will record its surroundings and search
for both known and unknown faces.
2. Upon seeing an unfamiliar person:
2.1. The user will get notification through SMS.
Login:
The user may log into the desktop programme using this option. It asks for:
1) Username
2) Password
Saving and Accessing Datasets:
1. The user will place the person in front of the camera unit of the system
and take many pictures from different angles to add the new person
using the programme.
2. Using the desktop programme, the user may examine, change, and
remove previously used datasets.
Notifications:
APPENDICES
APPENDIX A
Glossary
1. APP: Application
2. GUI: Graphical User Interface
3. DB: Database
4. SDS: Software Design Specification
5. UML: Unified Modelling Language, a general-purpose modelling language designed to provide a standard way to visualize the design of a system.
7. UI: User Interface
8. WBS: Work Breakdown Structure, the project-management decomposition of the work to be delivered.
References
[5] R. Murphy, Introduction to AI Robotics. Cambridge, MA: MIT Press, 2019.
[14] C. Szegedy et al., Eds., CVPR 2019: Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2019.
[16] N. Roy and S. Thrun, "The Certainty Grid: A Technique for Sensing the World," Autonomous Robots, vol. 11, no. 3, pp. 305–316, 2001.
[20] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Communications of the ACM, vol. 51, no. 1, pp. 107–113, 2008.
[34] M. S. Fox et al., Intelligent Robotics: Principles and Practice. Menlo Park, CA: AAAI Press, 1998.