
2023 International Conference on Computer Communication and Informatics (ICCCI), Jan 23-25, 2023, Coimbatore, India
979-8-3503-4821-7/23/$31.00 ©2023 IEEE | DOI: 10.1109/ICCCI56745.2023.10128388

A Real-Time Object Detection Method for Visually Impaired Using Machine Learning

Saravanan Alagarsamy, T. Dhiliphan Rajkumar, K. P. L. Syamala, Ch. Sandya Niharika, D. Usha Rani, K. Balaji
Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Anand Nagar, Krishnankoil, Tamil Nadu, India
[email protected]

Abstract— Vision, one of the five fundamental human senses, is crucial for defining how people perceive the objects around them. Visual impairments affect more than 200 million people worldwide, severely limiting their ability to perform numerous activities of daily living. Thus, it is essential for blind people to understand their surroundings and the objects they are interacting with. In this work, we created a tool that helps blind persons recognize diverse items in their environment by utilizing the YOLOv3 algorithm combined with R-CNN. This comprises a variety of approaches to develop an app that not only instantly recognizes different objects in the visually impaired person's environment but also guides them using audio output. A convolutional neural network (CNN) called YOLO (You Only Look Once) recognizes objects in real time. According to our results, the suggested method is more effective and accurate than other recognition algorithms, and it produces comparable object-detection results in real time. Reliable and effective object detection and recognition is crucial for persons who are blind or visually impaired to navigate both familiar and unfamiliar situations safely and to become more independent.

Keywords— Convolutional Neural Network, Computer vision, Hyper Text Markup Language

I. INTRODUCTION

"The eyes are the portals to the soul." The saying describes the strong connection created when looking someone in the eyes and is crucial to how we view the outside world. The ability to see clearly is crucial for our safety, awareness of our surroundings, and mental alertness [1].

Visually impaired people are blind to the threats they face on a daily basis. They might run across many obstacles while going about their daily lives, even in their cosy surroundings. Humans require vision as a sense since it is essential to how they interpret their surroundings [2].

Worldwide, 285 million people are believed to have visual impairments, with 39 million being blind and 246 million having impaired vision, according to the World Health Organization (WHO). The number of people with visual impairments is growing as a result of an increase in birth rate, eye conditions, accidents, aging, and other factors [4]; it rises by up to 2 million people worldwide each year. The ability of the visually impaired to do daily tasks is constrained or negatively impacted. Many people with visual impairments will bring a seeing friend or family member along in order to explore new places. Blind people struggle to interact with others due to these social barriers [5].

Prior research has suggested a number of ways to assist visually impaired people (VIPs) in overcoming their challenges and leading regular lives. These strategies have not adequately addressed the safety concerns for VIPs walking alone, and the offered solutions are frequently complicated, pricey, and unsuccessful [6].

We suggest an approach based on recent developments in computer vision and machine learning. The YOLO (You Only Look Once) deep learning technique is used to find objects in the immediate vicinity, identify the items, and provide voice output. The camera takes a picture of whatever is in front of the individual [7]. After the picture is processed by the deep learning model, the output, which is the object's identification, is converted into voice. The method is presented to assist people with vision problems in managing everyday activities like walking, working, and house cleaning. General-purpose object detection should be fast, accurate, and versatile [27]. The advancement of neural networks has increased detection frameworks' speed and accuracy; the bulk of detection methods, however, are still restricted to a small number of object classes [8].

The eyesight of someone who has a visual impairment, also known as impaired vision, is often so badly affected that it cannot be restored to normal. This implies that a complete correction is not possible, not even with the aid of glasses, medication, or eye surgery. Vision impairment

might imply different things to different people. The phrase may be used slightly differently by different medical groups, organizations, and practitioners. Even visually impaired people themselves may hold varying views on the subject.

Low vision, which is classified based on the degree of vision loss, is sometimes mistaken for other eye issues. The two primary aspects of vision, image quality and visual field, are usually used to characterize poor vision [28].

There are numerous potential causes of vision impairment. While some scenarios can take some time to play out and get worse, others might happen quickly. Some people who have bad vision either develop it as they age or are born with it.

Fig 1: Causes of Visual Impairment

Eye disease - Particular eye conditions may cause various levels of irreversible vision loss. Illness - Diseases in a number of different physiological systems can also impair vision; these problems can harm vision in a variety of ways. Injury - Head injuries have the potential to permanently or completely damage eyesight.

II. RELATED WORKS

1. Redmon et al. [3] (multi-box detector algorithm): The authors discussed the accuracy-validation parameters, such as aspect ratio, and how they increased accuracy in their convolutional layers using depth- and space-wise separable convolutional layers. The single-shot multi-box detector (SSD) algorithm, which uses a single layer of a neural network to detect objects in the image, was also suggested by the authors as the fastest optimization technique.

2. Girshick et al. [8] (Microsoft COCO: Common Objects in Context): This work suggested a new dataset intended to improve the state of the art in object recognition by placing object recognition in the context of the more general problem of pattern recognition. With 328k photos of common things and a total of 2.5 million labeled object instances, the dataset makes the object detection problem simpler to solve.

3. Kumar et al. [9] (You Only Look Once: Unified, Real-Time Object Detection): This research shows how Fast YOLO can process an astounding 155 frames per second while still obtaining twice the mAP of other real-time detectors. On the baseline, modern detection techniques produce fewer false-positive predictions, while YOLO commits more localization mistakes. YOLO also learns very general representations: it performs better than other detection methods, such as DPM and R-CNN, when generalizing from natural images to other domains, such as art.

4. Ramík et al. [10] (detection based on a CNN): According to this study, YOLOv2 performs better than Faster R-CNN in terms of performance and accuracy, and it features an object detector with excellent generalization properties that can represent the full image.

5. Ren et al. [21] (Blind Person Assistant: object detection): The TensorFlow Object Detection API allows for the detection of several objects. They presented an algorithm (SSD) in this publication. During training, SSD uses a matching phase to connect the proper anchor box with the bounding boxes of each underlying data object within an image.

6. Saini et al. [22] (SSD for real-time pill identification): This study finds that YOLOv3 can be used for real-time object recognition because it has faster detection speeds than R-CNN and SSD while maintaining a comparable mAP.

7. Arafat et al. [23] (object detection system for visually impaired persons using smartphones): The aim of this project is to recognize objects in real time using a camera as an input device and to communicate that information to the user via a smartphone and headphones. The system uses an audio device, such as speakers or headphones, to deliver information about objects in order to aid persons who are visually impaired.

After performing the survey, the combination of R-CNN with YOLO was identified as a way to improve the algorithm's performance in identifying objects exactly for blind people, with the help of an Android mobile phone.

III. PROPOSED SYSTEM

The development of portable assistive technology systems aims to improve the capacities of people with disabilities. Vision is one of a person's most important senses, and it keeps us aware of how we interpret our surroundings [12]. Legally blind people frequently struggle to understand what is going on around them, especially in outdoor settings where things are continuously moving and shifting. Approaches for object classification would go a long way toward assisting visually impaired individuals in navigating the challenges they face every day [11]. The object-detecting system's objective is to provide visually impaired individuals with a simple, approachable, practical, inexpensive, and efficient method. With the help of a smartphone, headphones, and an input device such as a camera, this system aims to develop an application that can recognize objects in real time. The application uses an audio device, such as a speaker, to convey information about objects as voice output and to increase the independence of persons who are visually impaired. The suggested method helps people who are visually impaired identify and steer clear of objects, in both indoor and outdoor settings, that obstruct everyday duties. People with vision impairment would find daily life much easier if they were aware of the objects nearby [13].

Artificial neural networks are modeled on the brain's neural networks and function similarly to them. Convolutional neural networks are the most significant neural networks used for image recognition. Multiple layers of neurons in convolutional neural networks extract information from the image. Without convolution layers the model grows complex, since fully connected layers need a large number of neurons depending on the size of the image. Convolution layers apply convolution operations at every layer in order to create a feature map and transmit the results to the following layer. Feature extraction is carried out in these layers [14].

Forward propagation and backward propagation, which together form an epoch, are the two crucial events that take place throughout the model's training. An epoch is a single cycle of forward and backward propagation. Forward propagation travels in a single direction, from the input layer to the output layer: feature extraction, weight computation, and application of the activation function take place, after which the error is computed. The weights are then adjusted in backward propagation, which moves in the reverse direction, from the output layer to the input layer, based on the error value. The number of times this cycle occurs is the number of epochs, n [15].

The biggest issue that arises when these methods are applied in real time is the FPS value. The frame rate of an object detection model determines how quickly the video is processed and output is produced; FPS is the metric used to describe how fast a procedure is. For a real-time effect, the FPS value must be higher than 20. However, the highest FPS value among the CNN-family algorithms is only 18 (Faster R-CNN). The mAP (mean average precision) value is a typical performance statistic [16].

As there is only one forward propagation to identify objects in each run, the YOLO (You Only Look Once) method improves FPS and mAP values while reducing run-time complexity [17]; this algorithm attains an FPS value of up to 155. It also learns a general representation of objects. Each version's backbone is a neural network, such as DarkNet or EfficientNet. YOLO gained notoriety for its accuracy, speed, and capacity for learning [21].
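The forward/backward epoch cycle described above can be sketched in a few lines. This is not the paper's convolutional network; a one-weight linear model stands in for it, purely to make the two phases of an epoch concrete:

```python
# Minimal sketch of one training epoch: a forward pass (prediction + error)
# followed by a backward pass (gradient + weight update), repeated n times.
# A single-weight linear model stands in for the CNN, for illustration only.

def train(xs, ys, lr=0.01, epochs=200):
    w = 0.0  # single trainable weight
    for _ in range(epochs):
        # Forward propagation: compute predictions and the error (MSE).
        preds = [w * x for x in xs]
        error = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
        # Backward propagation: gradient of the error w.r.t. w, then update.
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        w -= lr * grad
    return w, error

w, err = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# w converges toward 2.0, since the targets here are y = 2x.
```

Each pass through the loop body is one epoch; a real detector repeats the same cycle over batches of labeled images rather than scalar pairs.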
Fig.2. Blind person using the app on a mobile phone
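The capture → detect → announce flow pictured in Fig.2 can be sketched as below. Every function name here is a placeholder invented for illustration; in the real app they would be backed by the phone camera, the YOLOv3 + R-CNN detector, and a text-to-speech engine:

```python
# Hedged sketch of the app's main loop; all names are hypothetical stubs.

def capture_frame():
    # Placeholder: would grab an image from the smartphone camera.
    return "frame-001"

def detect_objects(frame):
    # Placeholder: would run the detector and return (label, position) pairs.
    return [("chair", "left"), ("door", "ahead")]

def announce(detections):
    # Compose the voice message delivered through the speaker or headphones.
    if not detections:
        return "No objects detected."
    return "; ".join(
        f"{label} ahead" if pos == "ahead" else f"{label} on your {pos}"
        for label, pos in detections
    )

message = announce(detect_objects(capture_frame()))
print(message)  # "chair on your left; door ahead"
```

The design point is that detection and speech are decoupled: any detector that yields (label, position) pairs can feed the same announcement step.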

In this project, an Android application that can recognize objects will be developed for users with vision impairments. With the specially designed application, blind people can find the goods they are looking for in daily life more quickly [18]. The application will benefit from vision-based object detection technology.

For the benefit of the blind individual, an effort will be made to gauge the object's location and distance during object detection. A voice message including the names and locations of the recognized objects will then be delivered to the visually impaired person [19]. Additionally, the user will receive a voice message describing the functioning of the application, making it simple for blind individuals to utilize. The software will then be tested to determine how well it can recognize objects and behave in various situations [20].

Fig.3. Detection process of YOLO

Fig.3 epitomizes the detection process of YOLO. A grid splits the image into cells, and it is the duty of each cell to locate the objects within it. Three techniques, residual blocks, bounding boxes, and Intersection over Union, are the foundation of the algorithm. IoU (Intersection over Union) is an evaluation statistic that shows how much two boxes overlap [27]; it assesses how well the original and predicted boxes coincide. Unlike many neural algorithms, the method accepts input in the form of weights and coordinates for training [22].

(A) YOLO architecture

YOLO is a fast method with good accuracy and FPS values that can be utilized for real-time object detection, in comparison to many other existing techniques [23]. The object detection model was constructed using the YOLOv3 version, which is based on the DarkNet-53 backbone.

Fig.4. YOLOv3 network architecture

Fig.4 epitomizes the architecture of YOLOv3. The method is combined with the R-CNN networks. The YOLO model is built, and weights are assigned based on the features of the images. The shapes of the images are predicted using the bounding-box method [26]. Object edges are used for identifying the different entities.

Fig.5. Prediction of the target variable

Fig.5 embodies the prediction process of the target variables. The output of the YOLO technique is combined with the R-CNN method to increase the accuracy of the prediction process. Based on the predetermined feature set, the object is predicted [24].

(B) R-CNN combined with YOLO

Fig.6. Working principle of R-CNN with the YOLO method

Fig.6 depicts the working principles of the suggested method. After the model detects the image, the image is fed into the CNN model, and the prediction is performed using the CNN model [25].

Fig.7. Flowchart of the proposed system

Fig.7 exemplifies the complete process of the suggested method. The application must have access to the mobile device's camera, speaker, and microphone in order to work as intended, as indicated in the diagram. The user starts the application first; access to the device's camera and microphone is then requested from the user. If access is granted, the procedure is carried out; otherwise, the request is rejected. The novelty of the suggested method lies in producing audio output for the predicted object: the object detected by the model is delivered to the user in the form of voice. This aids blind people in traveling to many places.

IV. RESULTS AND DISCUSSION

The data set used for testing is the VOC 2012 real-time test data; the error analysis of the suggested method is also performed on this data set. First, the model is tested with wildlife animals. The animals are detected based on the predefined shapes fed into the model.
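The Intersection-over-Union statistic described in Section III, which underlies the detection evaluation that follows, can be sketched directly from its definition (boxes given as (x1, y1, x2, y2) corner coordinates):

```python
# IoU: overlap between a predicted and a ground-truth box, as a fraction
# of their combined (union) area. 1.0 = identical boxes, 0.0 = no overlap.

def iou(box_a, box_b):
    # Intersection rectangle (clamped to zero if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of the two areas minus the double-counted intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how the accuracy figures below are usually scored.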

Fig.8 indicates the prediction of wildlife animals. Edge detection is used to capture the structure of the image. The boundary of the image is detected, and the object is then mapped against the feature data set fed to the model.

Fig.8. Wildlife animal prediction

Fig.9. mAP comparison (R-CNN+YOLO: 96.48; RCNN: 91.26; CNN: 90.23; YOLO: 89.36; Edge-based detection: 88.78; Hypernet: 84.45)

Fig.9 reveals the accuracy comparison. The suggested model provides more accurate information when compared with the related methods.
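The mAP figure reported in Fig.9 can be sketched as follows: average precision (AP) is computed per class from a ranked list of detections, then mAP is the mean over classes. This is the plain "precision at each hit" form of AP, not any specific benchmark's interpolation scheme:

```python
# AP per class: average the precision measured at each correctly-ranked
# detection over all ground-truth positives; mAP = mean of per-class APs.

def average_precision(ranked_hits, n_positives):
    # ranked_hits: detections sorted by confidence, True = correct match.
    tp, total = 0, 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            total += tp / rank  # precision at this rank
    return total / n_positives

aps = [average_precision([True, True, False, True], 3),   # class 1
       average_precision([True, False, True, False], 2)]  # class 2
map_score = sum(aps) / len(aps)
print(round(map_score, 4))  # 0.875
```

Because AP folds the whole precision-recall trade-off into one number per class, mAP lets detectors with different confidence thresholds be compared on equal footing, as in Table 1.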
V. CONCLUSION

The suggested method acts as a tool that aids blind people in recognizing various objects in their environment. It includes a number of techniques to create an app that not only instantaneously recognizes various items in the environment of the visually impaired person, but also directs them through auditory output. Real-time object recognition is accomplished with R-CNN combined with YOLO. The suggested method is more efficient and accurate than other object-recognition algorithms, and it delivers comparable object-detection results in real time.

Fig.10. Object detection of R-CNN with YOLO

Fig.10 embodies the object detection process of the suggested method. The detected image is mapped to the predefined image. If a match is found, the output is delivered to the user in the form of voice for recognition.

TABLE 1. ACCURACY COMPARISON

S.No | State-of-the-art method | mAP (%)
1 | R-CNN+YOLO | 96.48
2 | CNN | 90.23
3 | YOLO | 89.36
4 | RCNN | 91.26
5 | Hypernet | 84.45
6 | Edge-based detection | 88.78

Table 1 encapsulates the accuracy of the suggested and comparative techniques. The feature-set map of the suggested and related techniques is presented in Fig.9. The accuracy each method delivers depends on the feature set mapped to the model. R-CNN+YOLO markedly improved accuracy in finding objects based on the predefined shapes. Each sample object type is trained with different shapes in various dimensions.

REFERENCES

1. J.-G. Kim and J.-H. Yoo, "HW Implementation of Real-Time Road & Lane Detection in FPGA-Based Stereo Camera," 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), 2019, pp. 1-4.
2. J. Hosang, R. Benenson, P. Dollár and B. Schiele, "What Makes for Effective Detection Proposals?," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 814-830, April 2016.
3. J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788.
4. K. G, R. P, N. N N, A. Beulah and R. Priyadharshini, "Detection of Electronic Devices in real images using Deep Learning Techniques," 2021 5th International Conference on Computer, Communication and Signal Processing (ICCCSP), 2021, pp. 295-300.
5. K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
6. K. Selvaraj and S. Alagarsamy, "Raspberry Pi based Automatic Door Control System," 2021 3rd International Conference on Signal Processing and Communication (ICPSC), 2021, pp. 652-656.
7. P. F. Felzenszwalb, R. B. Girshick, D. McAllester and D. Ramanan, "Object Detection with Discriminatively Trained Part-Based Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627-1645, Sept. 2010.
8. R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580-587.
9. R. Kumar, S. Lal, S. Kumar and P. Chand, "Object detection and recognition for a pick and place Robot," Asia-Pacific World Congress on Computer Science and Engineering, 2014, pp. 1-7.
10. D. M. Ramík, C. Sabourin, R. Moreno and K. Madani, "A machine learning based intelligent vision system for autonomous object detection and recognition," Applied Intelligence, vol. 40, no. 2, pp. 358-375, 2014.
11. S. Alagarsamy, S. Ramkumar, K. Kamatchi and H. Shankar, "Designing an Advanced Technique for Detection and Violation of Traffic Control System," Journal of Critical Reviews, vol. 7, no. 8, pp. 2874-2879, 2020.
12. S. Alagarsamy et al., "Smart System for Reading the Bar Code using Bayesian Deformable Algorithm for Blind People," 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 2022, pp. 424-429.
13. S. Alagarsamy, D. Sreshta, D. U. Rani, D. S. Y. Reddy, P. Nidita and B. N. S. Sai, "Pattern Recognition based Smart Billing System for Water Consumption," 2022 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 2022, pp. 1444-1449.
14. S. Alagarsamy, K. Selvaraj, V. Govindaraj, A. A. Kumar, S. HariShankar and G. L. Narasimman, "Automated Data analytics approach for examining the background economy of Cybercrime," 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 2021, pp. 332-336.
15. S. Alagarsamy, K. V. Sudheer Kumar, P. Vamsi, D. Bhargava and B. D. Hemanth, "Identifying the Missing People using Deep Learning Method," 2022 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 2022, pp. 1104-1109.
16. S. Alagarsamy, M. Malathi, M. Manonmani, T. Sanathani and A. S. Kumar, "Prediction of Road Accidents Using Machine Learning Technique," 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2021, pp. 1695-1701.
17. S. Alagarsamy, R. R. Subramanian, P. K. Bobba, P. Jonnadula and S. R. Devarapa, "Designing a Smart Speaking System for Voiceless Community," Expert Clouds and Applications, vol. 209, pp. 21-34, 2022.
18. S. Alagarsamy, T. Abitha and S. Ajitha, "Identification of high grade and low grade tumors in MR Brain Image using Modified Monkey Search Algorithm," IOP Conf. Ser.: Mater. Sci. Eng., vol. 993, 012052, 2020.
19. S. Alagarsamy, V. Govindaraj, M. Irfan and R. Swami, "Smart Recognition of Real Time Face using Convolution Neural Network (CNN) Technique," vol. 83, pp. 23406-23411, 2020.
20. S. Alagarsamy, V. Govindaraj, T. T. Reddy, B. P. Kumar, P. S. Vineeth and V. A. Kumar Reddy, "An automated assistance system for detecting the stupor of drivers using vision-based technique," 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), 2021, pp. 1203-1207.
21. S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, June 2017.
22. S. S. Saini and P. Rawat, "Deep Residual Network for Image Recognition," 2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), 2022, pp. 1-4.
23. S. Y. Arafat and M. J. Iqbal, "Urdu-Text Detection and Recognition in Natural Scene Images Using Deep Learning," IEEE Access, vol. 8, pp. 96787-96803, 2020.
24. W. Liu, G. Wu, F. Ren and X. Kang, "DFF-ResNet: An insect pest recognition model based on residual networks," Big Data Mining and Analytics, vol. 3, no. 4, pp. 300-310, Dec. 2020.
25. X. Li, L. Ding, L. Wang and F. Cao, "FPGA accelerates deep residual learning for image recognition," 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2017, pp. 837-840.
26. Y. Li and H. Chen, "Image recognition based on deep residual shrinkage Network," 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA), 2021, pp. 334-337.
27. Z. Zhang, S. Qiao, C. Xie, W. Shen, B. Wang and A. L. Yuille, "Single-Shot Object Detection with Enriched Semantics," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5813-5821.
28. C. Boopalan and V. Saravanan, "Certain Investigation of Real power flow control of Artificial Neural Network based Matrix converter-Unified Power Flow Controller in IEEE 14 Bus system," International Journal of Computer Communication and Informatics, vol. 2, no. 2, pp. 54-81, 2020.
