International Journal of Recent Technology and Engineering (IJRTE)

ISSN: 2277-3878, Volume-9 Issue-2, July 2020

AI Powered Glasses for Visually Impaired Person

Nirav Satani, Smit Patel, Sandip Patel

Abstract: This paper presents a system that can assist a person with a visual impairment in both navigation and mobility. A number of solutions are available at present, and we describe some of them later in the paper. To date, however, no reliable and cost-effective solution has been put forward to replace the legacy devices that people with a visual impairment currently use for daily mobility. This paper first examines the problem at hand and the motivation behind addressing it. It then surveys related technologies and research in the assistive technology industry. Finally, it proposes a system design and implementation for the assistance of visually impaired people. The proposed device is equipped with hardware such as a Raspberry Pi processor, camera, battery, goggles, earphone, power bank and connectors. Objects are captured with the help of the camera, and image processing and detection are performed on the device itself with deep learning modules such as R-CNN. The final output is delivered through the earphone into the visually impaired person's ear. The paper presents the methodology and the solutions to the above-mentioned problem, which can be applied in practical use cases for visually impaired persons. The proposed system uses a region-based convolutional neural network together with a Raspberry Pi for processing the image data, and the Tesseract library for the Python programming language to perform OCR and give output to the user. The detailed methodology and results are elaborated later in this paper.

Keywords: OCR, R-CNN, Transfer learning.

Revised Manuscript Received on June 30, 2020.
* Correspondence Author
Nirav Satani*, Information Technology, Parul University, Vadodara, India. Email: [email protected]
Smit Patel, Information Technology, Parul University, Vadodara, India. Email: [email protected]
Sandip Patel, Information Technology, Parul University, Vadodara, India. Email: [email protected]

I. INTRODUCTION

Vision is an essential need of any person and a crucial part of human psychology. According to the World Health Organization (WHO), in 2018 nearly 1.3 billion people in the world suffered from visual problems; among them, about 39 million people are blind, and roughly 246 million have low vision [18]. A visually impaired person usually depends on the hands of others to fulfill their daily needs, and often suffers or compromises a great deal because of this condition. Considering the modern era and the revolution in technology, we try to introduce equipment that can help visually impaired persons face their daily-life problems. This equipment is elaborated further in this paper.

Could you imagine what the life of an individual who is blind could be like? Many of them cannot even walk without the help of others. Their lives always depend upon their caregivers and can be quite difficult to manage alone. The increasing number of people with disabilities in the world attracts the concern of researchers, who invent various technologies aiming to help disabled people perform their daily tasks in everyday life like everyone else. We therefore want to make something that would help them become independent: an open-source smart-glasses project. These smart glasses can assist them while walking alone in new environments by taking input through a stereo camera and providing feedback to the person through headphones. In this way, blind people can be trained to visualize objects using programmed sensory substitution devices.

Smart glasses are computing devices worn in front of the eyes. Their displays move with the user's head, so the user sees the display independently of his or her position and orientation. Technology like smart glasses or lenses can therefore alter or enhance the wearer's vision no matter where he or she is physically located and where he or she looks. There are three different paradigms for altering the visual information a wearer perceives. Leveraging the potential of modern technologies, this paper explores how far they can be used to replace the current legacy devices used by visually impaired people for mobility, and ultimately improve the quality of life for these people worldwide.

II. A REVIEW OF RELATED WORKS

Many works have been done so far for blind and visually impaired persons by developing many different kinds of assistive technologies to aid them in navigation. Some of these works are described below.

1.1 The Assisted Vision Smart Glasses

The assisted vision smart glasses were designed to aid individuals with very low vision to navigate unfamiliar settings, recognize obstacles, and achieve a higher degree of independence [15]. The glasses are based on the fact that most blind individuals have at least some dim vision, and they are intended to capitalize on this sight. The glasses are made of OLED displays, with a gyroscope, a GPS module, an earpiece, a compass, and two small cameras. Incoming data is processed and then used in a myriad of ways; for instance, brightness can be used to show depth.


Since most visually impaired persons can distinguish brightness from dullness, the glasses can brighten anything close to the wearer so that they can discern obstacles and persons. The GPS module can provide directions, and the gyroscope assists the glasses in calculating perspective changes as the wearer moves. The camera can also work with the computing module to help read markers along the way (cit).
LEARNING
1.2 The AI glasses

AI glasses integrate a host of features, including artificial intelligence, ultrasound techniques, and computational geometry, to create an essential aid for visually impaired persons [3]. By linking the glasses with the GPS of a tablet, along with stereo sound sensors, the prototype can issue spoken commands, recognize currency denominations, read signs, etc. The estimated cost of AI glasses is between $1000 and $1500.

1.3 Envision glasses

Recently, Google and the Envision company introduced AI glasses that are helpful for visually impaired persons. The cost of this equipment is around ₹1,50,000. These glasses are also known as Google smart glasses [4].
given image. Deep learning methods exploit the tremendous
III. AI GLASSES FOR VISUALLY IMPAIRED PERSON

The AI glasses for this project are electric supplementary eye goggles, equipped with image capturing and recognition using Artificial Intelligence technologies. The device has the following components:

1.1 A Raspberry Pi

The Raspberry Pi 3 is a microcontroller board in the Pi series [14]. It can be considered a single-board computer that runs a Linux operating system. According to a 2018 article, the board not only has a wealth of features but also terrific processing speed, making it suitable for advanced applications. Wireless connectivity is needed here, and the Raspberry Pi 3 has wireless LAN and Bluetooth, with which a Wi-Fi hotspot can be set up for internet connectivity. The board has a dedicated port for connecting a touch LCD display, a feature that completely omits the need for a monitor, as well as a dedicated camera port, so a camera can be connected to the board without any hassle. A 16 GB micro SD card is dedicated specifically to the Pi OS. The board has a reset button and a power jack. With an operating voltage of 5 V, the board is highly user-friendly. It can be connected to a computer directly through a USB cable, or powered with a battery or an AC-DC adapter.

1.2 Raspberry Pi camera module

The camera is used to take input from the outside world and is responsible for capturing images of objects. This input data is then transferred to the Raspberry Pi processor for computation. The camera is positioned at the top or side of the glasses.
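As a minimal sketch of this capture step, assuming the standard picamera Python library for the Raspberry Pi camera module (the resolution and output path are illustrative assumptions, not values specified in this paper):

    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (1024, 768)
    camera.start_preview()
    sleep(2)                       # let the sensor adjust to the lighting
    camera.capture('capture.jpg')  # frame handed on to the recognition pipeline
    camera.stop_preview()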
1.3 Power supply

To provide constant power to the Raspberry Pi, a portable power bank is used. DC supply through the power bank fulfils the electricity requirements of the processor.

1.4 A voice playback module

This module informs the user of the output. An earphone is used for output purposes: the output generated by the Raspberry Pi is delivered to the person's ear through the earphone.

IV. BACKGROUND ON CNN AND MACHINE LEARNING

Today, computer vision-based applications are making the planet a far better and more convenient place. In this project, we are making smart glasses for the visually impaired, which may help them navigate as well as identify objects in day-to-day life. This section explains a deep learning-based [1] method for classification of objects in a road or indoor setting. We propose one method to classify the objects on the road; in this method, we train a convolutional neural network.

In practice, machine learning requires features, and to extract features one must do feature engineering, which takes a great deal of diligence and manpower. Deep learning, by contrast, is a form of representation learning, where we feed the input signals in their raw form, be it speech, image, or video. In our application, we would like to classify objects in any given image. Deep learning methods exploit the tremendous computational power of Graphics Processing Units (GPUs). At each level of a deep learning architecture, we introduce a non-linearity, which helps in solving very complex problems.

Figure 1: Simple Multilayer Perceptron

As shown in Figure 1, multi-layer perceptrons treat all input features as equally important and do not consider the spatial position of the input features, whereas in an image the neighboring pixels have similar values and contribute significantly to the semantic information of the image. If we can somehow help the neural network [16] capture this spatial structure, it can achieve better classification accuracy. This is where convolutional neural networks come into the picture.


Figure 2: Convolution Operation

In Figure 2, a convolution function [2] is applied to an input image; we use 2-dimensional convolution filters. Let us understand the convolution function a bit. The kernel is basically a convolution filter. Suppose we want to find a diagonal in the input image: the right type of filter would have ones on the diagonal and negative values elsewhere. We can then find diagonals in the image by convolving this filter across the image. One important thing to keep in mind is that we need only one kernel for an arbitrarily large input image; this is where parameter sharing comes into the picture. In practice, fully connected multi-layer networks generally have orders of magnitude more parameters than an equivalent or better-performing convolutional neural network. In earlier computer-vision eras, hand-crafted filters were convolved with the image to identify edges and curves [20].
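A minimal sketch of this idea in Python, using NumPy and SciPy (the toy image, the kernel values, and the sizes are illustrative assumptions only):

    import numpy as np
    from scipy.signal import convolve2d

    # Toy 5x5 grayscale image whose main diagonal is bright
    image = np.eye(5)

    # Diagonal-detecting kernel: ones on the diagonal, negatives elsewhere
    kernel = np.array([[ 1., -1., -1.],
                       [-1.,  1., -1.],
                       [-1., -1.,  1.]])

    # One shared kernel slides over the whole image (parameter sharing:
    # the same small filter works for an arbitrarily large input)
    response = convolve2d(image, kernel, mode='valid')
    print(response)  # strongest responses where the diagonal pattern appears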
Convolutional neural networks exploit the spatial structure of an image, and we can use them to solve many classification and regression problems [8]. Convolutional neural networks were first proposed by LeCun et al. [20], but at that time the computational capability of computers held the algorithm back from thriving. CNNs became popular in 2012 when AlexNet, proposed by Krizhevsky et al. [1], won the ILSVRC 2012 challenge in classifying ImageNet images, using two GPUs in parallel.

In general, a CNN consists of convolutional and pooling layers, non-linear activation functions, and multi-layer perceptrons. As explained above, the convolution functions try to find a specific shape or edge, the pooling layers downsample the image to extract more semantic detail with the same filter size, the non-linear activation functions let the network learn complex functions, and the fully connected layers put everything together to classify the image.

Deep neural networks [14] need a lot of data to train to good accuracy; when data is abundant, a convolutional neural network works better in practice. If the dataset is not sufficiently large, the network may overfit: it learns very specific edges, shapes, or connections that work on the training images but do not generalize well.

To classify a road, an animal, or a person, the network needs to be trained on many images of each class. In our project, we needed "real-world" images, i.e. images captured from a camera mounted on the head, so the data had to be collected manually. This requires a lot of time and effort, and even then the dataset cannot reach the scale on which image-classification challenges work (millions of images).

We use a typical convolutional architecture: stacked convolutional and max-pooling layers at the start, then a flatten layer, which unrolls any N-dimensional matrix into a 1-D array, followed by some simple multi-layer perceptron layers, and finally an output layer with a number of units equal to the number of classes.
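As an illustration, a network of this shape could be written with Keras roughly as follows. This is a sketch only: the input size, filter counts, and the five-class output are placeholder assumptions, not the exact configuration used in our experiments.

    from tensorflow.keras import layers, models

    NUM_CLASSES = 5  # placeholder: e.g. cow, tree, elephant, horse, face

    model = models.Sequential([
        # stacked convolution + max-pooling layers extract spatial features
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        # flatten unrolls the N-D feature maps into a 1-D array
        layers.Flatten(),
        # simple multi-layer perceptron layers
        layers.Dense(128, activation='relu'),
        # output layer: one unit per class
        layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.summary()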
V. SYSTEM DESIGN

Work flow diagram

Here are the steps involved in image detection and identification. The Raspberry Pi board is the main part of the system and controls the other system components. When an object is captured by the camera, the camera sends the information to the microcontroller. The microcontroller, in turn, detects the object using artificial intelligence. The result is then transmitted to the earphone, which is connected to the blind person.

APPROACH

These are smart glasses that give voice feedback about the things around the user, either when prompted or when caution is required. With the development of advanced deep-learning architectures such as CNNs, RNNs, and GANs, we can classify images, extract information from them, and then convert this text to speech [19] to feed into the speaker. We train a CNN model to classify images, use the Tesseract OCR library to extract text from images, and use a face-recognition library with a matching algorithm to detect and recognize faces.

The approach taken by us can be divided into four major steps:

1.5 Data collection and training the algorithm:

To train a CNN model to respectable accuracy, we need to collect as many images as possible in the given time frame. We collected images of the classes we needed and then trained a CNN model on them. The programming was done using Keras, a Python library that uses TensorFlow as the backend. After data collection, we divide the training data into batches and then optimize the loss function using a gradient-descent algorithm.
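A minimal sketch of this step, continuing the model sketch above (the directory layout, image size, batch size, and epoch count are assumptions for illustration):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Read the manually collected images in batches from class-named folders,
    # e.g. dataset/cow, dataset/tree, ... (hypothetical layout)
    train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
        'dataset', target_size=(128, 128), batch_size=32,
        class_mode='categorical')

    # Optimize the cross-entropy loss with stochastic gradient descent
    model.compile(optimizer='sgd', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(train_gen, epochs=20)

    # Serialize the trained model to disk for the inference stage
    model.save('glasses_model.h5')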


1.6 Inference of the model:

After training, the model is serialized and saved to disk. At the testing (inference) stage we load the model and keep it in memory to make the inference process faster. Images captured by the camera can be read directly with the Python library OpenCV and then fed to the loaded model. The model gives us the individual probability of the image belonging to each class, and the image is then classified into one of the classes above.
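A minimal sketch of this inference stage (the file names and the class list are placeholders and must match the training setup):

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    CLASS_NAMES = ['cow', 'tree', 'elephant', 'horse', 'face']  # placeholder

    # Load the serialized model once and keep it in memory
    model = load_model('glasses_model.h5')

    # Read a captured frame directly with OpenCV and preprocess it
    frame = cv2.imread('capture.jpg')
    img = cv2.resize(frame, (128, 128)).astype('float32') / 255.0

    # The model returns one probability per class
    probs = model.predict(np.expand_dims(img, axis=0))[0]
    print(CLASS_NAMES[int(np.argmax(probs))], float(probs.max()))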
1.7 Tesseract and Text To Speech:

We use Google's Tesseract library as an Optical Character Recognition tool to convert images to text [11]. As in the previous step, we feed the images captured by the camera directly into the Tesseract model, and we keep this model loaded in memory as well to make inference faster. The recognized text data can then be converted to speech.
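A minimal sketch using pytesseract, the common Python wrapper for Tesseract; the paper does not name a specific text-to-speech engine, so pyttsx3 is shown here as one offline option:

    import cv2
    import pytesseract
    import pyttsx3

    # OCR: convert the captured image to text
    frame = cv2.imread('capture.jpg')
    text = pytesseract.image_to_string(frame)

    # Text to speech: read the recognized text out through the earphone
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()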
1.8 Face recognition:

We also use a face-recognition library with OpenCV to recognize the name of the person on which the model is trained [17]. We can enroll some users initially and improve along the way. The location of the face is determined using a pre-trained face-detection model. The face features (face embeddings) are then extracted using the face-recognition model, and these embeddings are compared with the enrolled embeddings using Euclidean distance. The lowest Euclidean distance with the highest frequency among the dataset determines which class the image belongs to.
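A minimal sketch of this detect-embed-compare matching step, using the open-source face_recognition Python library (the enrollment image paths and the 0.6 distance threshold are illustrative assumptions):

    import numpy as np
    import face_recognition

    # Enrollment: compute one embedding per known user (done once)
    known = {
        'user1': face_recognition.face_encodings(
            face_recognition.load_image_file('user1.jpg'))[0],  # hypothetical file
    }

    # Runtime: embed each detected face and compare by Euclidean distance
    frame = face_recognition.load_image_file('capture.jpg')
    for emb in face_recognition.face_encodings(frame):
        dists = {name: np.linalg.norm(emb - ref) for name, ref in known.items()}
        best = min(dists, key=dists.get)
        print(best if dists[best] < 0.6 else 'unknown')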
The glasses usually sit idle and observe the surroundings. When any cautionary event occurs (such as a car coming, a road ahead, a person ahead, an animal ahead, or stairs), an internal thread is triggered and the user is alerted with a text-to-speech output saying "thing ahead". Otherwise, when prompted by the user (by pressing a button or saying an action word), the glasses give the user the requested output: who is in front of the user, where to go for a place using the inbuilt GPS, what things are around the user, or an image description (which requires advanced deep learning and is harder to incorporate in an IoT device). We use an Intel Movidius NCS together with a Raspberry Pi for the computation.
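A sketch of this control flow is shown below; every helper used here (capture_frame, classify, speak, button_pressed, describe) is a hypothetical stand-in for the components described in the steps above, not an API defined by this paper:

    import time

    CAUTION = {'car', 'person', 'animal', 'stairs'}  # cautionary classes

    while True:
        frame = capture_frame()          # camera module (hypothetical helper)
        label = classify(frame)          # CNN inference (hypothetical helper)
        if label in CAUTION:
            speak(label + ' ahead')      # unprompted text-to-speech alert
        elif button_pressed():           # explicit user prompt
            speak(describe(frame))       # faces / OCR / objects on demand
        time.sleep(0.5)                  # pace the loop for a low-power board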

VI. RESULT AND DISCUSSION


Images of the objects are captured by the camera, and the captured information is sent to the backend application, which then produces the result with the help of the AI-based system. The generated result tends to be accurate, and it is transferred to the person's ear with the help of the sound module. Some related snapshots of the working procedure are shown below as results.

Figure 3: Cow Detection

Figure 4: Tree Detection

Figure 5: Elephant Detection

Figure 6: Horse Detection

Figure 7: Face Recognition

Table 1: Result
Object   | Accuracy | Result
Cow      | 83.72%   | Cow detected
Tree     | 46.28%   | Tree detected
Elephant | 43.96%   | Elephant detected
Horse    | 64.67%   | Horse detected
Faces    | 100%     | Face recognized

Table 2: System deficiencies and remediation
System deficiency | Remediation
The system is comparatively slow. | Speed can be improved by using a faster algorithm and a more efficient processor.
Only limited object detection is possible due to the Raspberry Pi's limited capability. | A more efficient processor can solve this problem.
The system may not be accessible in underdeveloped nations because of cost and unfamiliarity with the technology. | Organizing events to demonstrate the new technology, together with financial aid, can address this problem.


VII. CONCLUSION
In conclusion, the proof-of-concept version can serve as the basis for a production environment. Our proposed device is able to detect objects and recognize faces, but it is limited by its low-powered processor and therefore still needs some improvements. The use of transfer learning can make the classification highly accurate but slows performance down; conversely, a quick response is essential for the convenience of users. To address this issue, one needs a faster computing device, which ideally should be low-powered. Another option is to use cloud computing.

REFERENCES

1. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." In: Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
2. Frieden, B. Roy. "Image enhancement and restoration." In: Picture Processing and Digital Filtering. Springer, Berlin, Heidelberg, 1975, pp. 177–248.
3. Ilag, Balu N., and Yogesh Athave. "A design review of smart stick for the blind equipped with obstacle detection and identification using artificial intelligence." International Journal of Computer Applications 975: 8887.
4. J. Donahue, R. Girshick, T. Darrell, and J. Malik. "Rich feature hierarchies for accurate object detection and semantic segmentation." IEEE Int. Conf. on Computer Vision and Pattern Recognition, Columbus, 2014.
5. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
6. Karen Simonyan and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
7. Li, Xiang, et al. "Cross-Safe: A computer vision-based approach to make all intersection-related pedestrian signals accessible for the visually impaired." Science and Information Conference. Springer, Cham, 2019.
8. Liu, Fayao, Chunhua Shen, and Guosheng Lin. "Deep convolutional neural fields for depth estimation from a single image." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
9. Matthew D. Zeiler and Rob Fergus. "Visualizing and understanding convolutional networks." In: European Conference on Computer Vision. Springer, 2014, pp. 818–833.
10. Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. "Learning and transferring mid-level image representations using convolutional neural networks." In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1717–1724.
11. Mithe, Ravina, Supriya Indalkar, and Nilam Divekar. "Optical character recognition." International Journal of Recent Technology and Engineering (IJRTE) 2.1 (2013): 72–75.
12. Raspberry Pi information, 26 April 2018, https://components101.com/microcontrollers/raspberrypi-3-pinout-features-datasheet
13. R. Girshick. "Fast R-CNN." IEEE International Conference on Computer Vision, Santiago, 2015.
14. Sakhardande, Jayant, Pratik Pattanayak, and Mita Bhowmick. "Smart Cane Assisted Mobility for the Visually Impaired." International Journal of Electrical and Computer Engineering, vol. 6, no. 10, 2012, pp. 9–20.
15. Sandnes, Frode Eika. "What do low-vision users really want from smart glasses? Faces, text and perhaps no glasses at all." International Conference on Computers Helping People with Special Needs. Springer, Cham, 2016.
16. Takefuji, Yoshiyasu. Neural Network Parallel Computing. Vol. 164. Springer Science & Business Media, 2012.
17. Viraktamath, S. V., et al. "Face detection and tracking using OpenCV." The SIJ Transactions on Computer Networks & Communication Engineering (CNCE) 1.3 (2013): 45–50.


18. World Health Organization (WHO). "Blindness and Vision Impairment." WHO, 11 October 2018, https://www.who.int/news-room/factsheets/detail/blindness-and-visual-impairment
19. Xing Luo, Oscar. "Deep Learning for Speech Enhancement: A Study on WaveNet, GANs and General CNN-RNN Architectures." (2019).
20. Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998), pp. 2278–2324.

AUTHORS PROFILE
Nirav Satani is currently pursuing a bachelor's degree in Information Technology at the Department of Computer Science at Parul University, India. He has always been keen on computers and technology and completed several projects during his academic career; for instance, his smart-signal project was selected as the best visionary project of the year 2018 by his university. He also does competitive coding: he has taken part in various online coding competitions and won a university-level group coding challenge. Deep learning, machine learning, and data science are his favorite domains. He also cleared the competitive GATE (Graduate Aptitude Test in Engineering) exam in 2020. In the future, he is willing to work in the data science domain. Email: [email protected]

Smit Patel is currently pursuing a bachelor's degree in Information Technology at the Department of Computer Science at Parul University, India. He has done a number of projects during his academic career. He is interested in machine learning and deep learning, and he also likes to build websites; he has done an internship as a full-stack developer and is willing to work as a full-stack developer while contributing to the deep learning domain in the future. Mathematics and data visualization are his favorite leisure activities. He has done competitive programming on various online platforms, taken part in various online coding competitions, and won a university-level group coding challenge. Email: [email protected]

Sandip Patel is currently pursuing a bachelor's degree in Information Technology at the Department of Computer Science at Parul University, India. He has been involved in a number of roles and projects during his academic career, several of which he completed by himself, including a density-based traffic management system, an HR management system, and a project related to job costing. His research interests are in the fields of convolutional neural networks, transfer learning, and deep learning algorithms. He has six months of prior experience in the Indian IT industry. His favorite academic activity is competitive coding: he has taken part in various online coding competitions and won a university-level group coding challenge. Email: [email protected]
