CSE Project: Smart Glass
Education
Ph.D. Computer Science and Engineering, Noorul Islam Centre for Higher Education, 2018.
M.E. Network and Internet Engineering, Karunya Deemed University, 2006.
B.E. Computer Science and Engineering, MS University, 2004.
Research Interests
Publications
1. “Discontinuity Adaptive SAR Image Despeckling using Curvelet Based BM3D Technique”, International Journal of Wavelets, Multiresolution and Information Processing, January 2019.
2. “A Novel Approach of Despeckling SAR Images Using Nonlocal Means Filtering”, Journal of the Indian Society of Remote Sensing, Springer, Vol. 45, Issue 3, 2017, pp. 443-450.
3. “Comprehensive Survey on SAR Despeckling Techniques”, Indian Journal of Science and Technology, Vol. 8(24), September 2015.
4. “Comparison of Transform Domain Based SAR Despeckling Techniques”, Indian Journal of Science and Technology, Vol. 9(13), April 2016.
5. “Comprehensive Survey on SAR Image Segmentation and Classification”, International Journal of Control Theory and Applications, 10(3), 2017, pp. 21-27.
6. “Smart Agro Farm: Solar Powered Soil and Weather Monitoring System”, Elsevier Materials Today, April 2019.
Papers Presented
1. “Survey on Recent CAD System for Liver Disease Diagnosis”, Department of Electronics and Instrumentation Engineering, International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 10th-11th July 2014, Noorul Islam Centre for Higher Education.
2. “Comprehensive Survey on SAR Despeckling Techniques”, Department of Electronics and Instrumentation Engineering, International Conference on Soft Computing in Applied Sciences and Engineering, 23rd-24th July 2015, Noorul Islam Centre for Higher Education.
3. “Comprehensive Survey on Computer Aided SAR Image Segmentation and Classification Methods”, Department of Computer Science and Engineering, International Conference on Novel Issues and Challenges in Science and Engineering, 28th-29th July 2016, Noorul Islam Centre for Higher Education.
4. “High Throughput Multicast Routing in WMN using Rate Guard Combining Scheduling Scheme”, National Conference on Management Initiatives and IT Interventions for Emerging Avenues, 3rd February 2016, Mar Thoma Institute of Information Technology.
5. “Intelligent Document Scanner”, International Conference on Computing, Communication, Nanophotonics, Nanoscience, Nanomaterials and Nanotechnology, 7th-8th April 2016, Holy Grace Academy of Engineering.
6. “Women Safety App using Android Enabled Watch”, International Conference on Computing, Communication, Nanophotonics, Nanoscience, Nanomaterials and Nanotechnology, 7th-8th April 2016, Holy Grace Academy of Engineering.
7. “Object Detection from SAR Images Based on Curvelet Despeckling”, International Multi-Conference on Computing, Communication, Electrical & Nanotechnology, 26th-27th April 2018, Mangalam College of Engineering.
8. “Smart Agro Farm: Solar Powered Soil and Weather Monitoring System for Farmers”, International Multi-Conference on Computing, Communication, Electrical & Nanotechnology, 25th-26th April 2019, Mangalam College of Engineering.
9. “Digital Camera Authorization”, National Conference on Innovations in Computing and Networks, 7th May 2019, Younus College of Engineering and Technology.
FACULTY PROFILE
Name: Dr. Soumya T.
Designation: Assistant Professor in CSE
Institution: College of Engineering Muttathara

Professional Experience
Organisation                         Designation           Period           Duration
College of Engineering Muttathara    Assistant Professor   2020-till date   2 years

Workshops/Training Attended
Image Sensing, Medical Imaging and Satellite Image Processing, IIITM-K, Trivandrum, March 20-21, 2015 (2 days)

Seminars/Symposia/Conferences Attended
ICACCI 2016, ACM EVENT 2016, SPICES 2015, ICACCI 2015, COCONET 2015, SIRS 2014, RS DAY 2013
OBJECTIVES
❖ Navigation Support: Providing audio cues and guidance to assist in safe and efficient
navigation, both indoors and outdoors.
❖ Object Recognition: Enabling the identification of objects, people, and obstacles through
smart spectacles' integrated technology, enhancing users' awareness.
❖ Text-to-Speech Functionality: Converting written text into spoken words, allowing users
to access printed information such as signs, menus, or documents.
❖ Facial Recognition: Helping users recognize familiar faces by using facial recognition
technology, enhancing social interactions.
These Smart Glasses are designed to help blind people read typed text written in English. Inventions of this kind offer a way to motivate blind students to complete their education despite their difficulties. The main objective is to develop a new way for blind people to read text and to facilitate their communication. The first task of the glasses is to scan a text image and convert it into audio, so the user can listen through a headphone connected to the glasses, as the sketch below illustrates.
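The following minimal Python sketch illustrates the first stage of this scan-and-listen pipeline. It assumes the Tesseract OCR engine (Debian package tesseract-ocr) and the pytesseract and Pillow packages are installed on the Raspberry Pi; the report does not name the OCR engine actually used, and "page.jpg" is a placeholder for a camera frame.

```python
# Minimal OCR sketch: extract English text from a captured image.
# Assumes the Tesseract engine plus the pytesseract and Pillow packages;
# "page.jpg" stands in for a frame captured by the glasses' camera.
import pytesseract
from PIL import Image

def scan_text(image_path: str) -> str:
    """Return the English text recognised in a captured image."""
    text = pytesseract.image_to_string(Image.open(image_path), lang="eng")
    return text.strip()

if __name__ == "__main__":
    # The recognised string is later handed to the text-to-speech stage.
    print(scan_text("page.jpg"))
```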
All the computing and processing operations are performed on a Raspberry Pi 4 Model B. The Smart Spectacles are equipped with a compact camera module connected to the Raspberry Pi 4 and use advanced image processing algorithms. The device employs computer vision techniques to analyze the surroundings, identify obstacles, and convey vital information to the user in real time. A text-to-speech (TTS) system is integrated to convert visual information into audible cues, making the interaction more intuitive and accessible. This project aims to empower the visually impaired community by leveraging the capabilities of the Raspberry Pi 4 to create an affordable, accessible, and versatile assistive technology solution. The Smart Spectacles demonstrate the potential of combining hardware and software advancements to enhance the independence and mobility of individuals with visual impairments.
As future work, the system could support more languages, and the design could be made smaller and more comfortable to wear.
METHODOLOGY
The system's camera is strategically placed on the blind user's glasses. This positioning allows the camera to capture a comprehensive view of the user's surroundings, focusing on the faces of people and on nearby objects. This visual information serves as the foundation for the system, giving blind users a detailed understanding of the environment in front of them.
The integration of a high-quality camera into smart glasses enables real-time environmental
interpretation through advanced computer vision. This empowers the glasses to detect obstacles,
recognize objects, and provide crucial information, fostering independence for visually impaired
individuals. With a focus solely on the camera, the solution remains discreet and lightweight,
offering a seamless and intuitive user experience without additional sensors. This innovation
highlights the transformative potential of technology in enhancing accessibility and spatial
awareness for the visually impaired.
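As a rough illustration, a capture loop for the glasses' camera might look like the sketch below. It assumes the official Pi camera module and the picamera2 library bundled with Raspberry Pi OS; the frame size, frame rate, and the process_frame() hook are illustrative assumptions, not specifics from this report.

```python
# Capture-loop sketch for the glasses' camera, assuming the official Pi
# camera module and the picamera2 library shipped with Raspberry Pi OS.
import time
from picamera2 import Picamera2

picam2 = Picamera2()
# A modest resolution keeps per-frame latency low on the Pi 4.
picam2.configure(picam2.create_preview_configuration(
    main={"size": (640, 480), "format": "RGB888"}))
picam2.start()

try:
    while True:
        frame = picam2.capture_array()  # NumPy array, usable by OpenCV/dlib
        # process_frame(frame)          # hypothetical hook into the vision stages below
        time.sleep(0.1)                 # ~10 fps is ample for audio feedback
finally:
    picam2.stop()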
Object Detection
By leveraging sophisticated algorithms, these glasses can swiftly recognize and categorize objects in
the wearer's surroundings. This capability extends beyond mere visual identification, enabling the
glasses to provide users with context-specific information, fostering a more informed and accessible
environment. Whether aiding in the identification of everyday objects or assisting with navigation
by recognizing obstacles, object detection significantly enriches the utility of smart glasses for a
diverse range of applications.
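The report does not name the detection model, so the sketch below stands in for these algorithms using one common choice on a Raspberry Pi: OpenCV's DNN module with a pre-trained MobileNet-SSD Caffe model. The model file names are assumptions; the files must be obtained separately.

```python
# Object-detection sketch using OpenCV's DNN module with a pre-trained
# MobileNet-SSD Caffe model (an assumed stand-in; the report does not
# specify the model). The .prototxt/.caffemodel files must be downloaded.
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def detect_objects(frame, min_confidence=0.5):
    """Return (label, confidence) pairs for objects found in a BGR frame."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)
    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence >= min_confidence:
            label = CLASSES[int(detections[0, 0, i, 1])]
            results.append((label, confidence))
    return results
```

Each detected label can then be passed to the text-to-speech stage so the wearer hears what is in front of them.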
Face Recognition
Unlock the potential of facial recognition effortlessly with the world's simplest face recognition library, designed to operate seamlessly from Python or the command line. Leveraging the cutting-edge capabilities of dlib's face recognition built with deep learning, this library achieves an accuracy of 99.38% on the challenging Labeled Faces in the Wild benchmark, ensuring robust and reliable performance.
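This description matches the open-source face_recognition library, so a minimal enrolment-and-matching sketch with that library might look as follows; the image filenames are placeholders.

```python
# Sketch using the face_recognition library described above; the image
# filenames are placeholders for an enrolled photo and a camera frame.
import face_recognition

# Enrol one known person from a reference photo.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face found in a captured frame against the enrolment.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Known person" if match else "Unknown person")
```

In practice the enrolled encodings would be computed once and cached, so per-frame matching stays fast on the Pi.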
Text-to-speech
Text-to-speech (TTS) technology is a transformative feature in smart glasses, converting written text
into spoken words. This capability enables seamless auditory communication, allowing users to
access information without relying on traditional visual displays. In the context of smart spectacles
for visually impaired individuals, TTS serves as a vital tool for relaying textual information, such as
signs or documents, directly to the user through clear and natural-sounding speech. This not only
enhances accessibility but also contributes to a more inclusive and efficient user experience,
bridging gaps created by visual impairments and opening up new avenues for communication and
independence.
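A minimal offline TTS sketch follows, assuming the pyttsx3 package (which drives eSpeak on Raspberry Pi OS); the report does not name its TTS engine, so this is an illustrative choice.

```python
# Offline TTS sketch, assuming pyttsx3 (an illustrative engine choice).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)    # words per minute; tuned for clarity
engine.setProperty("volume", 1.0)  # full volume for noisy surroundings

def speak(message: str) -> None:
    """Queue a message and block until it has been spoken."""
    engine.say(message)
    engine.runAndWait()

speak("Obstacle ahead, about one metre away.")
```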
Audio Output
Audio output is a fundamental component of smart glasses, providing users with a crucial channel
for receiving information. In the context of smart spectacles for visually impaired individuals, the
importance of clear and effective audio output cannot be overstated. Whether conveying
navigation instructions, object recognition details, or text-to-speech responses, the quality of the
auditory experience significantly influences the device's usability. The integration of high-fidelity
speakers ensures that users receive information in a manner that is both comprehensible and
contextually relevant, contributing to a more seamless and enriching interaction with the
surrounding environment.
Algorithm
i. Boot up the system with the Raspberry Pi single-board computer as the central processing unit.
ii. Receive user commands through voice input, buttons, or gestures, and process them to initiate subsequent actions.
iii. Activate the Pi Camera Module integrated into the smart glasses to capture real-time visual data.
iv. Process the captured visual data using facial recognition software. Analyze facial attributes, emotions, gender, and age of individuals in the user's vicinity.
v. Relay the recognized information through the Text-to-Speech (TTS) system, converting it into auditory feedback delivered through earbuds.
vi. Utilize the integrated ultrasonic sensor for distance measurement and object detection, enhancing navigation and supporting obstacle avoidance (see the sketch after this list).
vii. Deliver auditory feedback to the visually impaired user, providing meaningful insights into their environment.
viii. Continuously monitor the surroundings and update the user with real-time information.
ix. Utilize the ultrasonic sensor data to ensure safe navigation by detecting obstacles and providing alerts.
x. Implement adaptability to varying environmental conditions for reliable performance.
xi. Optimize power consumption for prolonged usage, considering the limitations of wearable devices.
xii. Allow users to customize settings based on individual preferences for a personalized experience.
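Steps vi, viii, and ix can be sketched as a simple sensing loop. The sketch below assumes an HC-SR04 ultrasonic sensor wired to BCM pins 23 (trigger) and 24 (echo) and a one-metre alert threshold; neither the wiring nor the threshold is specified in this report.

```python
# Sketch of the continuous obstacle-alert loop (steps vi, viii and ix),
# assuming an HC-SR04 ultrasonic sensor on BCM pins 23/24 and a
# one-metre alert threshold (both assumptions).
import time
import pyttsx3
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                  # assumed pin assignment
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

engine = pyttsx3.init()

def speak(message: str) -> None:
    engine.say(message)
    engine.runAndWait()

def distance_cm() -> float:
    """Trigger one HC-SR04 ping and return the echo distance in cm."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)              # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:     # wait for the echo pulse to end
        end = time.time()
    return (end - start) * 17150     # half the speed of sound, in cm/s

try:
    while True:                      # continuous monitoring (step viii)
        if distance_cm() < 100:      # alert within one metre
            speak("Obstacle ahead.")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```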
BLOCK DIAGRAM
DESIGN
Circuit Diagram
Expected Outcome
APPLICATIONS
❖ Social Inclusion: Promoting social interactions by aiding in recognizing faces, reading social
cues, and accessing information in various social settings, contributing to enhanced social
inclusion.
❖ Independent Living: Supporting daily activities such as shopping, cooking, and personal
tasks, empowering visually impaired individuals to live more independently.
COMPONENTS
➢ Spectacle frame (1)
➢ Earbuds (1)
➢ Wire (2 m)
➢ Raspberry Pi 4 Model B (1)
COST ESTIMATION
Supply voltage: 230 V

Item      Qty   Specification                   Unit Price (₹)   Amount (₹)
Earbuds   1     42 hr battery, fast charging    2500             2500

TOTAL: ₹26,275
Planning