
CNN-Based Object Recognition and Guidance System to Assist Visually Impaired People

Guided by:
Mrs. Nisha A. M., Assistant Professor, Dept of CSE

Presented by:
Johin M. S. (JIT20CS034)
Jyothika Jayakumar (JIT20CS035)
Nimisha S. S. (JIT20CS040)
Vineetha P. Victor (JIT20CS057)
CONTENTS
Introduction
Objectives
Proposed system
Advantages
Use case diagram
Modules
Methodology
Implementation
Conclusion
References
INTRODUCTION

• Object recognition and tracking systems based on Convolutional Neural Networks (CNNs)
have shown remarkable performance in detecting and recognizing objects in real-world
environments.
• Visual impairment is a condition that affects a person's ability to see.
• It can range from partial sight to complete blindness.
• Independence plays a significant role in achieving goals and objectives in life.
• Independence and security in daily activities and outdoor travel are essential for visually impaired people.
• A possible use for CNN-based object recognition and tracking systems is to help blind people go about their daily lives.
• These systems can be used to find and identify environmental obstacles like traffic lights,
pedestrian crossings, and other objects.
• These systems can aid visually impaired people in more safely and independently navigating
their surroundings by giving them real-time feedback.
OBJECTIVES

1. To recognize and classify objects in the environment accurately.
2. To detect and identify everyday objects such as chairs, tables, doors, and stairs.
3. To create a system that identifies objects in the scene in order to provide information to the user in real time.
4. To have a user-friendly interface that can be easily navigated by a visually impaired user.
5. To track objects in real time as they move in the user's field of view.
6. To provide visually impaired users with the information they need to safely cross the road.
7. To keep family members informed, ensuring the safety of VIPs.
8. To give VIPs a sense of independence rather than having to depend on others.
PROPOSED SYSTEM
• The proposed system consists of a smart cane and a pocket-held device.
• Object Detection System using Raspberry pi along with a camera module is used to capture
images in real-time. Object detection algorithms are implemented to detect objects within the
captured images.
• Detected objects are then converted into voice output using text-to-speech (TTS) libraries,
providing auditory feedback to the user.
• Blind Walking Stick integrates an ultrasonic sensor and a vibrator with a walking stick
commonly used by visually impaired individuals. The ultrasonic sensor detects obstacles in the
user's path, and the vibrator provides haptic feedback to alert the user of the obstacle's presence.
• The SOS button provides a quick and easy way for visually impaired users to call for help in
emergency situations.
ARCHITECTURE OF PROPOSED SYSTEM

[Architecture diagram: the VIP's devices are built around a Raspberry Pi and an ATmega328P, connected to a GPS & GSM module, an ultrasonic sensor, and an accelerometer.]
ADVANTAGES
• It helps visually impaired individuals navigate their surroundings independently by identifying objects and obstacles in real time.
• It provides immediate auditory or haptic feedback, enabling users to make quick decisions and adapt to changing situations.
• It is helpful in both indoor and outdoor environments.
• The ultrasonic module detects obstacles in front of the user.
• It detects objects 2-4 meters away.
• Audible instructions are produced to assist the user in navigation.
• It notifies family members if the VIP encounters any obstacles.
• This feature helps prevent collisions and enhances the safety of visually impaired
individuals when navigating unfamiliar environments.
• Allowing users to customize settings, such as vibration frequency, TTS preferences, and
SOS contacts, enhances the device's adaptability to individual needs.
USE CASE DIAGRAM

[Use case diagram: captured frames undergo image preprocessing and are matched against a model trained on the dataset; the output is converted to voice and delivered to the VIP through an earphone.]
FLOW CHART
MODULES
1.Image capture and preprocessing:

• This module involves capturing images using the camera module connected to the Raspberry Pi.
• Image preprocessing involves tasks like resizing, noise reduction, and contrast enhancement to
improve the quality of the input data.
• The captured image data is then passed to the preprocessing module where preprocessing
algorithms may be applied to enhance image quality or reduce noise.

Flow: Image Capture (Camera Module) → Captured Image Data → Preprocessing Algorithms → Preprocessed Image Data → Output
2. Object Detection:
• This module performs object detection on the captured images to identify objects present in the
user’s surroundings.
• The SSD algorithm is used for object detection.
• Preprocessed image data from the image capture module is input into the object detection module, where the object detection algorithm analyzes the images to identify and localize objects.

Flow: Preprocessed Image Data → Object Detection Algorithm → Detected Object → Text-To-Speech (TTS) Conversion → Voice Output for Visually Impaired
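Filtering the detector's raw output down to usable labels can be sketched as follows; the row layout `(class_id, confidence, x1, y1, x2, y2)` and the 0.5 threshold are assumptions chosen for illustration:

```python
def parse_detections(raw, labels, conf_threshold=0.5):
    """Keep detections above a confidence threshold and map class ids to labels.

    raw: iterable of (class_id, confidence, x1, y1, x2, y2) with relative coords.
    """
    results = []
    for class_id, conf, x1, y1, x2, y2 in raw:
        if conf >= conf_threshold:
            results.append((labels[int(class_id)], conf, (x1, y1, x2, y2)))
    return results
```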

3. Text-to-Speech Conversion:

• The detected objects from the object detection module are passed to the Text-to-Speech (TTS) conversion module, which converts object labels into voice output.
• This module converts the object labels into voice output using TTS libraries for the visually impaired user.
• TTS libraries like pyttsx3 or gTTS (Google Text-to-Speech) can be used to generate speech
from text.

Flow: Detected object from object detection module → Object Labels → Voice Output (TTS conversion) → Voice Output for Visually Impaired User
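Turning the detected labels into a spoken sentence might look like the sketch below; the sentence format is an assumption, and the commented pyttsx3 lines show one way the offline TTS library mentioned above could be driven:

```python
def detections_to_sentence(labels):
    """Format a list of detected object labels into a short announcement."""
    if not labels:
        return "No objects detected."
    # De-duplicate and sort so repeated detections are announced once
    return "Detected: " + ", ".join(sorted(set(labels))) + "."

# Speaking the sentence with pyttsx3 could then look like:
# import pyttsx3
# engine = pyttsx3.init()
# engine.say(detections_to_sentence(["chair", "door"]))
# engine.runAndWait()
```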
4. Ultrasonic Sensor Data Processing:
• This module processes data from the ultrasonic sensor mounted on the blind walking stick to
detect obstacles.
• The ultrasonic sensor module collects distance measurements from the ultrasonic sensor mounted
on the blind walking stick.
• The distance measurements are then processed and used by the buzzer control module to provide haptic feedback to the user based on obstacle proximity.

Flow: Ultrasonic Sensor Data → Distance Data Measurement → Buzzer Control → Haptic Feedback for User
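The core of the distance measurement is converting the echo pulse duration into centimeters. A minimal sketch, assuming an HC-SR04-style sensor and the standard speed of sound (about 343 m/s, i.e. 34300 cm/s):

```python
def echo_to_distance_cm(echo_seconds, speed_of_sound_cm_s=34300):
    """Convert an ultrasonic echo round-trip time into obstacle distance (cm)."""
    # The pulse travels to the obstacle and back, so halve the round trip
    return echo_seconds * speed_of_sound_cm_s / 2
```

On the actual stick, `echo_seconds` would be measured by timing the echo pin with the microcontroller or with a GPIO library on the Raspberry Pi.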
5. Buzzer Control:
• Controls the buzzer integrated into the blind walking stick to provide haptic feedback to the user
based on obstacle proximity.
• It enhances the functionality of the cane by providing intuitive, non-visual feedback to help blind
individuals navigate safely and independently.
• It translates the processed data into specific feedback patterns.
• A continuous beep sound might indicate an obstacle directly ahead.

Flow: Distance data measurement from ultrasonic sensor module → Proximity data (threshold setting) → Buzzer Control → Haptic feedback for visually impaired user
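The threshold-based mapping from distance to buzzer pattern can be sketched as below; the 100 cm and 400 cm thresholds and the pattern names are illustrative assumptions (the deck states a 2-4 m detection range):

```python
def buzzer_pattern(distance_cm, near_cm=100, far_cm=400):
    """Map obstacle distance to a buzzer pattern; thresholds are assumptions."""
    if distance_cm <= near_cm:
        return "continuous"    # obstacle directly ahead: continuous beep
    if distance_cm <= far_cm:
        return "intermittent"  # obstacle within detection range: pulsed beep
    return "off"               # path clear
```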
ALGORITHM
SSD algorithm
• SSD (Single Shot MultiBox Detector) is an object detection algorithm that uses a CNN as its base architecture.
• Known for its speed and accuracy.
• SSD is a single-pass algorithm.

Algorithm
1. The algorithm takes an image as input, which is represented as a matrix of pixel values. This matrix typically has three dimensions: height, width, and channels (RGB).

2. As the input image passes through the CNN layers, feature maps are generated at each layer. These feature maps represent different aspects of the input image, such as edges, textures, and shapes.
ALGORITHM ( cont.... )
3. Feature extraction is often performed using a pre-trained CNN, such as VGG16.

4. MultiBox is used to generate default bounding boxes at different aspect ratios and scales for each feature map cell.

4.1 These anchor boxes serve as reference frames for detecting objects of different sizes and aspect
ratios.

4.2 These default boxes are centered on the cell and have different aspect ratios (e.g., 1:1, 1:2, 2:1)
and scales (e.g., smaller boxes for smaller objects and larger boxes for larger objects).

4.3 For each predicted bounding box, the detector predicts offsets (dx, dy, dw, dh) that adjust the
default box to better fit the object in the Image.
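The offset decoding in step 4.3 can be sketched as follows. This follows the common SSD parameterization, where (dx, dy) shift the anchor center relative to its size and (dw, dh) scale width and height exponentially; exact variance scaling terms differ between implementations and are omitted here:

```python
import math

def decode_box(anchor, offsets):
    """Apply predicted offsets (dx, dy, dw, dh) to an anchor (cx, cy, w, h)."""
    cx = anchor[0] + offsets[0] * anchor[2]   # shift center x by dx * anchor width
    cy = anchor[1] + offsets[1] * anchor[3]   # shift center y by dy * anchor height
    w = anchor[2] * math.exp(offsets[2])      # scale width exponentially
    h = anchor[3] * math.exp(offsets[3])      # scale height exponentially
    return (cx, cy, w, h)
```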
ALGORITHM ( cont....)

5. For each anchor box, make predictions for the presence of objects and their corresponding class label(s) using a set of convolutional layers. These predictions include confidence scores for each class and adjustments to the bounding box coordinates.

6. Apply non-maximum suppression (NMS) to eliminate redundant detections. This step ensures that each object is detected only once and removes overlapping boxes with lower confidence scores.

7. Output the final bounding boxes based on the predictions and NMS results. These boxes represent the detected objects in the input image.

8. Stop.
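The non-maximum suppression of step 6 can be sketched as a greedy loop over boxes sorted by score; the 0.5 IoU threshold is a conventional default, not a value stated in the deck:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep each highest-scoring box, drop others overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```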
FLOW CHART OF SSD ALGORITHM
OUTPUT
CONCLUSION
• The CNN-based object recognition and guidance system gives visually impaired people a sense of their surroundings and helps them visualize the area.
• The system can be customized to recognize and track specific objects that are important
to the user, such as their cane or guide dog, enabling them to locate these items more
easily.
• A CNN based object recognition and tracking system can be a valuable tool for visually
impaired individuals, providing them with real-time information about their environment
and helping them navigate safely and independently.
• By processing live feeds from a wearable device and using CNN based algorithms to
identify and track objects, this system can provide audio or haptic feedback to the user,
alerting them to potential hazards and helping them avoid obstacles.
• Overall, this technology has the potential to greatly enhance the quality of life for
visually impaired individuals, giving them greater independence and confidence as they
navigate the world around them.
REFERENCES

1. Y. C. Wong, J. A. Lai, S. S. S. Ranjit, A. R. Syafeeza, and N. A. Hamid, "Convolutional Neural Network for Object Detection System for Blind People."

2. Wafa Elmannai and Khaled Elleithy, "Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions."

3. I-Hsuan Hsieh, Hsiao-Chu Cheng, Hao-Hsiang Ke, and Hsiang-Chieh Chen, "A CNN-Based Wearable Assistive System for Visually Impaired People Walking Outdoors."

4. Prutha G, Smitha B M, Kruthi S, and Sahana D P, "Smart Cane for Blind People using Raspberry PI and Arduino."
ANY QUERIES
THANK YOU
