Mini Project

CHAPTER 1 INTRODUCTION

Navigating everyday environments presents considerable challenges for visually impaired individuals, often requiring the use of aids like canes or guide dogs. While useful, these aids have limitations in real-time detection and identification of objects. To enhance the mobility and independence of blind individuals, we propose an advanced solution: an object detection and identification glove utilizing ultrasonic sensors, an Arduino UNO, and an ESP32 camera module.
This glove integrates multiple technologies to provide comprehensive environmental
awareness. Ultrasonic sensors are employed to detect obstacles by emitting sound waves and
measuring their reflections, providing accurate distance measurements. This data is processed
by the Arduino UNO, which translates it into haptic feedback, allowing users to sense the
proximity and location of obstacles through vibrations.
In addition to obstacle detection, the ESP32 camera module captures visual information to
identify objects using machine learning algorithms. The identified objects are communicated
to the user via audio feedback through a speaker, offering contextual information that
significantly enhances situational awareness.
Combining ultrasonic sensing with visual recognition, the glove aims to overcome the
limitations of traditional aids, offering a real-time, user-friendly solution. Early prototypes
show promising results, with reliable obstacle detection up to 2 meters and accurate object
identification. The system is designed to be cost-effective and accessible, ensuring ease of use
for visually impaired individuals.
This paper details the design, implementation, and testing of the glove, highlighting its
potential to improve daily navigation.

CHAPTER 2 LITERATURE SURVEY

[1] Smart Gloves for Visually Challenged by Rakshitha R, Mahalakshmi H N, Priyanka L, Mahadeva Swamy.

The proposed system, "Smart Glove for Blind", consists of an Arduino UNO, ultrasonic module, Wi-Fi and GPS modules, heart-rate module, fire sensor, panic button, power supply, buzzer, and vibrator. The project model comprises the user's right glove. An HC-SR04 ultrasonic sensor detects obstacles in front of the person within a range of 2 cm to 400 cm. The ultrasonic sensor is a transducer combining one transmitter and one receiver paired as a transceiver. The sensor delivers distance data to the Arduino, which analyses it before driving vibration motors and a buzzer to produce vibration and sound. When an object is close to the user, the Arduino commands the vibration motor to provide continuous vibration. A flame sensor detects the presence of fire or any other strong light source; it can detect infrared light up to a distance of 100 cm and has a detection angle of 60 degrees.

[2] Design of the Smart Glove to Aid the Visually Impaired, International Journal of Advanced Research in Science & Technology (IJARST), Volume 5, Issue 7, May 2020, by Pavan L P, Vinayaka S K, Chetan S, Yashavanth M R, Asha R.

The proposed system is designed to aid the blind in locating a desired object. The whole system functions as an independent stand-alone unit, analyzed thoroughly to minimize power consumption since it runs on battery power. The stand-alone unit comprises a Raspberry Pi with a Universal Serial Bus (USB) camera that has a built-in microphone. The process starts with the user vocally telling the system which object is being looked for. The audio input received by the microphone connected to the Raspberry Pi is converted into text using a Python speech-to-text module (the paper names "pyttsx3"). The name of the object is extracted from the vocal command using a keyword-extraction technique. The extracted keyword is passed to an object-detection algorithm running a DNN, implemented using the Caffe framework. The DNN running on the Raspberry Pi processes the real-time video, locates the required object, and tags it. An object-tracking algorithm then takes control to improve the frame rate. The object, once located, must always remain in the center of the frame relative to the glove.

[3] An Affordable Hand-glove for the Blind using Ultrasonic Sensors by Abhishek Kanal, Praja Chavan.

The system presents the implementation of an assistive glove for the blind. The basis of this glove is a technology that helps the visually impaired detect any obstacle that may appear in their path within 100 cm in any direction of the glove. When the user encounters an object within 100 cm, the glove alerts them with a loud beeping sound and heavy vibrations. One of the key USPs of this glove is its extremely low manufacturing cost: the technologies used in other gloves cost almost ten times what the glove presented in this paper does. The design relies on communication between the Arduino UNO and ultrasonic sensors such as the HC-SR04. This glove can easily be marketed as a social-impact project given its ease of use and very competitive price.

CHAPTER 3 PROBLEM DEFINITION AND SOLUTIONS

3.1 PROBLEM DEFINITION


Visually impaired individuals face limitations with current assistive tools that do not provide
comprehensive information about their surroundings, particularly for objects at a distance or
in unfamiliar environments. This limitation restricts their independence and increases the risk
of accidents. Therefore, there is a pressing need for a wearable device that can accurately
detect and identify objects in real-time, thereby enhancing the safety and navigation
capabilities of visually impaired users.

*Object Detection: Implement sensors that can detect objects within a specified range and provide accurate distance measurements.

*Object Identification: Utilize camera modules and image processing techniques to identify common objects and provide detailed information to the user.

*System Integration: Develop a cohesive system that combines sensor data and image processing results to deliver real-time feedback.

*User Feedback: Create an intuitive feedback mechanism, such as auditory signals or haptic feedback, to convey information about detected and identified objects to the user.

*Wearable Design: Ensure the device is lightweight, compact, and comfortable to wear as a glove, facilitating ease of use and unobtrusive operation.

3.2 OPTIMIZED SOLUTIONS

*Object Detection: Implement efficient algorithms for detecting objects using both ultrasonic sensor data (for proximity) and camera images (for detailed identification).

*Object Identification: Utilize image processing techniques, potentially including machine learning algorithms such as deep learning models trained to recognize common objects.

*Feedback Mechanism: Design intuitive feedback methods such as audio cues (speech synthesis for object identification), haptic feedback (vibration patterns for proximity alerts), or both, to convey detected object information to the user in real time.

*User Interaction: Ensure user-friendly interaction through simple controls or interfaces for activating, configuring, and receiving feedback from the glove.

*Wearable Design: Optimize the glove's design for comfort, mobility, and usability in different environments and weather conditions.

*Battery Life: Implement efficient power management to prolong battery life, ensuring the device remains operational for extended periods without frequent recharging.

*Real-World Testing: Conduct thorough testing in various real-world scenarios and environments to validate the accuracy, reliability, and usability of the device.

*User Feedback: Gather feedback from visually impaired individuals and incorporate improvements based on their experiences and suggestions.

*Enhanced Safety and Independence: Provide visually impaired individuals with reliable real-time information about their surroundings, enabling safer and more independent navigation.

CHAPTER 4 BLOCK DIAGRAM AND DESCRIPTION

4.1 BLOCK DIAGRAM OF PROPOSED SYSTEM

Fig 4.1 Block Diagram of Proposed System

4.2 BLOCK DESCRIPTIONS


4.2.1 ARDUINO UNO
The Arduino Uno is a popular microcontroller board based on the ATmega328P microcontroller. It is widely used for developing interactive projects and prototyping due to its simplicity and versatility. The board features 14 digital input/output pins (6 of which can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection for programming and power, a power jack, an ICSP header, and a reset button. Here is a brief description of the pins on the Arduino Uno:

Power Pins:

1. Vin: The input voltage to the Arduino board when using an external power source (6-20V). You can supply voltage through this pin or the DC power jack.

2. 5V: The regulated 5V output from the regulator on the board. You can draw power from this pin to power other components.

3. 3V3: A 3.3V supply generated by the on-board regulator. The maximum current draw is 50mA.

4. GND: Ground pins. There are several ground pins on the board.

5. Reset: Bring this line LOW to reset the microcontroller.

Analog Pins

1. A0 - A5: These pins can be used as analog inputs (A0 to A5). They can also be used as digital I/O. Each pin provides 10 bits of resolution (i.e., 1024 different values).

Digital I/O Pins

1. D0 (RX): Used to receive (RX) TTL serial data.

2. D1 (TX): Used to transmit (TX) TTL serial data.

3. D2 - D13: These pins can be used as digital input or output using the pinMode(), digitalWrite(), and digitalRead() functions.

Special Function Pins

1. D2 - D3 (External Interrupts): Can be configured to trigger an interrupt on a low value, a rising or falling edge, or a change in value (attachInterrupt() function).

2. D3, D5, D6, D9, D10, D11 (PWM): Provide 8-bit PWM output with the analogWrite() function.

3. D10 - D13 (SPI): Used for SPI communication. These pins support SPI communication using the SPI library.

4. A4 (SDA) and A5 (SCL): Used for TWI (I2C) communication using the Wire library.

Communication Pins:

1. D0 (RX) and D1 (TX): Used for serial communication.

2. D10 (SS), D11 (MOSI), D12 (MISO), and D13 (SCK): Used for SPI communication.

3. A4 (SDA) and A5 (SCL): Used for I2C communication.

Other Pins

1. AREF: Reference voltage for the analog inputs. Used with analogReference().

2. Reset: Can be used to reset the microcontroller. Typically connected to a push button that allows you to easily reset the board.
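The 10-bit ADC behind pins A0-A5 maps 0-5 V onto the integer range 0-1023. A minimal sketch of that conversion (the helper name and default reference voltage are illustrative, not part of the Arduino API):

```python
def adc_to_voltage(reading, vref=5.0, bits=10):
    """Convert a raw ADC reading (0-1023 for 10 bits) to volts.

    Assumes the default 5 V analog reference; names here are
    illustrative, not from the Arduino core.
    """
    full_scale = (1 << bits) - 1  # 1023 for a 10-bit converter
    return reading * vref / full_scale
```

For example, a reading of 512 corresponds to roughly 2.5 V at the pin.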

4.2.2 ESP 32-CAM


The ESP32-CAM is a low-cost, compact ESP32-based development board with an onboard camera. It is an ideal solution for IoT applications, prototype construction, and DIY projects. The board integrates Wi-Fi, traditional Bluetooth, and low-power operation, with two high-performance 32-bit LX6 CPUs. It adopts a 7-stage pipeline architecture and includes on-chip sensors such as a Hall sensor and a temperature sensor; its clock frequency is adjustable from 80 MHz to 240 MHz. Fully compliant with the Wi-Fi 802.11b/g/n/e/i and Bluetooth 4.2 standards, it can be used in master mode to build an independent network controller, or as a slave to another host MCU to add networking capabilities to existing devices. The ESP32-CAM can be widely used in IoT applications such as smart home devices, industrial wireless control, wireless monitoring, wireless QR identification, and wireless positioning system signals.

Features:

* Onboard ESP32-S module, supports Wi-Fi + Bluetooth

* OV2640 camera with flash

* Onboard TF card slot, supports up to 4G TF card for data storage

* Supports multi sleep modes, deep sleep current as low as 6mA

* Control interface accessible via pin header, easy to integrate and embed into user products.

4.2.3 BUZZER

A buzzer or beeper is an audio signaling device, which may be mechanical, electromechanical, or piezoelectric (piezo for short). Typical uses of buzzers and beepers include alarm devices, timers, and confirmation of user input such as a mouse click or keystroke. Buzzers can be categorized as active or passive. An active buzzer has a built-in oscillating source, so it sounds as soon as it is electrified. A passive buzzer has no such source, so it will not sound if a DC signal is applied; instead, it must be driven with a square wave whose frequency is between 2 kHz and 5 kHz. The active buzzer is often more expensive than the passive one because of its built-in oscillating circuitry.
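Driving a passive buzzer therefore means toggling its pin at the desired frequency. A small sketch of the timing arithmetic (the function name is ours, for illustration only):

```python
def half_period_us(freq_hz):
    """Microseconds the drive pin stays HIGH (then LOW) per cycle
    when generating a square wave of `freq_hz` for a passive buzzer."""
    return 1_000_000 / (2 * freq_hz)

# A 2 kHz tone means toggling the pin every 250 µs;
# at 5 kHz the toggle interval shrinks to 100 µs.
```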

4.2.4 VOLTAGE REGULATOR

The 7805 voltage regulator is a type of linear voltage regulator commonly used in electronic circuits. It is a three-terminal device that provides a fixed output voltage of 5V and can handle an input voltage range of 7V to 35V.

The 7805 is widely used due to its simplicity, stability, and affordability. It uses an internal voltage reference, an error amplifier, and a series pass transistor to maintain a constant output voltage. It is also known for its low dropout voltage, thermal overload protection, and short-circuit protection. Common applications include powering microcontrollers and other small electronic devices, voltage stabilization in power supplies, voltage regulation in battery-powered devices, and voltage conversion in automotive systems.
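Because a linear regulator drops the excess input voltage as heat, its dissipation is simply that drop times the load current. A rough sizing check, assuming (as an example) a 9 V battery feeding the 7805 and the ESP32-CAM's quoted 180 mA draw:

```python
def linear_regulator_dissipation_w(v_in, v_out, load_a):
    """Power (watts) burned in a linear regulator: (Vin - Vout) * Iload."""
    return (v_in - v_out) * load_a

# 9 V in, 5 V out, 0.18 A load → 0.72 W dissipated in the 7805,
# which is why a small heat sink is often advisable.
```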

4.2.5 ULTRASONIC SENSOR


The HC-SR04 ultrasonic distance sensor uses sonar to determine the distance to an object, with stable readings and a high accuracy of 3mm. The module includes an ultrasonic transmitter, receiver, and control circuit. To generate the ultrasound, the Trig pin must be set high for 10 microseconds. The sensor then sends out a sonic burst that travels at the speed of sound. The sound wave bounces back once it hits a solid or liquid surface and is received on the Echo pin. The distance between the sensor and the object is calculated from the time of travel. One of the outstanding features of this sensor is that it can detect not only the distance to a solid object but also to liquids.

Distance = Speed × Time

To calculate the range measured by an ultrasonic sensor:

• d = (v × t) / 2

• Centimeters = (Microseconds / 2) / 29

• To measure the distance the sound has travelled we use the formula: Distance = (Time × SpeedOfSound) / 2. The division by 2 is needed because the sound travels back and forth: first away from the sensor, then bouncing off a surface and returning.
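The two formulas agree: sound travels roughly 0.0343 cm/µs in air (about 1 cm per 29 µs), and the echo time covers the round trip. A quick sketch of the conversion (function names are ours):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s in air at 20 °C

def echo_us_to_cm(duration_us):
    """Distance in cm from an HC-SR04 echo pulse width in µs.
    Halved because the pulse covers the trip out and back."""
    return duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def echo_us_to_cm_approx(duration_us):
    """The common integer-friendly approximation: (µs / 2) / 29."""
    return (duration_us / 2) / 29
```

Both give about 10 cm for a 583 µs echo, which is why the /29/2 shortcut appears so often in Arduino sketches.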

Fig 4.2 Ultrasonic Sensor

CHAPTER 5 COMPONENT SPECIFICATIONS

5.1 ARDUINO UNO

Fig 5.1 Arduino UNO

Table 5.1 Specifications of Arduino UNO

5.2 ULTRASONIC SENSOR (HC-SR04)

Table 5.2 Pinout Configuration

Fig 5.2 HC-SR04

5.3 ESP 32-CAM

•SPI Flash: default 32Mbit

•RAM: built-in 520 KB + external 4M PSRAM

•Dimensions: 27 × 40.5 × 4.5 (±0.2) mm / 1.06 × 1.59 × 0.18"

•Bluetooth: Bluetooth 4.2 BR/EDR and BLE standards

•Wi-Fi: 802.11b/g/n/e/i

•Supported interfaces: UART, SPI, I2C, PWM

•TF card support: up to 4G

•IO ports: 9

•Serial port baud rate: default 115200 bps

•Image output formats: JPEG (OV2640 support only), BMP, grayscale

•Spectrum range: 2412~2484 MHz

•Antenna: onboard PCB antenna, gain 2dBi

•Transmit power: 802.11b: 17±2 dBm (@11Mbps); 802.11g: 14±2 dBm (@54Mbps); 802.11n: 13±2 dBm (@MCS7)

•Receiving sensitivity: CCK, 1 Mbps: -90dBm; CCK, 11 Mbps: -85dBm; 6 Mbps (1/2 BPSK): -88dBm; 54 Mbps (3/4 64-QAM): -70dBm; MCS7 (65 Mbps, 72.2 Mbps): -67dBm

•Power consumption: flash off: 180mA@5V; flash on at maximum brightness: 310mA@5V; deep-sleep: as low as 6mA@5V; modem-sleep: up to 20mA@5V; light-sleep: up to 6.7mA@5V

•Security: WPA/WPA2/WPA2-Enterprise/WPS

•Power supply range: 5V

•Operating temperature: -20 °C ~ 85 °C

•Storage environment: -40 °C ~ 90 °C, < 90%RH

•Weight: 10g
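These current figures make a rough battery-life estimate straightforward. A sketch of the arithmetic (the 1000 mAh capacity is an assumed example, not a value from this document):

```python
def runtime_hours(capacity_mah, draw_ma):
    """Idealized battery runtime: capacity divided by average draw.

    Ignores regulator losses and duty cycling, so treat the
    result as an upper bound.
    """
    return capacity_mah / draw_ma

# e.g. a hypothetical 1000 mAh cell at the ESP32-CAM's 180 mA
# (flash off) draw gives roughly 5.6 hours at best.
```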

Fig 5.3 ESP 32-CAM

5.4 LITHIUM ION BATTERY

Table 5.3 Lithium Ion Battery

5.5 VOLTAGE REGULATOR

Table 5.4 Specifications of Voltage Regulator

CHAPTER 6

6.1 FLOWCHART

Fig 6.1 Flowchart of Proposed System

CHAPTER 7
7.1 CIRCUIT DIAGRAM

Fig 7.1 Circuit Diagram of Proposed System

7.2 CIRCUIT EXPLANATION


To create gloves for the blind using an ultrasonic sensor and an ESP32-CAM module, you'll need to integrate these components into a cohesive circuit. Here's a step-by-step outline of how to connect and configure them:

• Components Needed:

-ESP32-CAM

-HC-SR04 ultrasonic sensor

-Vibration motor(s) for feedback

-Battery/power source for portable use

-Connecting wires

-Optional: resistors, capacitors, breadboard or PCB for circuit assembly

• Circuit Diagram:

1.Power Connections:

-Connect the VCC (5V or 3.3V) and GND pins of both the ultrasonic sensor and the ESP32-CAM module to the appropriate power sources. Ensure they share a common ground.

2.Ultrasonic Sensor (HC-SR04):

-VCC to +5V or +3.3V (depending on sensor specifications)

-Trig pin to a GPIO pin on ESP32 (e.g., GPIO 4)

-Echo pin to another GPIO pin on ESP32 (e.g., GPIO 5)

-GND to ground

3.ESP32-CAM Module:

-VCC to +5V or +3.3V (check ESP32-CAM specifications, usually 5V tolerant)

-GND to ground

-Ensure all necessary connections for programming and communication (TX, RX, etc.)

4.Feedback Mechanism (Vibration Motor):

-Connect vibration motor(s) to GPIO pins on ESP32 that will be used for feedback (e.g.,
GPIO 13). Use a transistor or MOSFET if the motor requires more current than the ESP32
can supply directly.

5.Optional Components:

-If using additional indicators (LEDs, displays), connect them to suitable GPIO pins on the ESP32.
• Software Setup:

1.Programming ESP32:

-Use the Arduino IDE or PlatformIO with ESP32 support.

-Install necessary libraries for ESP32-CAM and HC-SR04 (if not already included).

-Write code to initialize sensors, read distance from ultrasonic sensor, and provide feedback
through vibration motors or other indicators based on distance thresholds.

-Utilize ESP32-CAM capabilities to capture images (if desired) for further processing or
debugging.

2.Distance Calculation:

-Implement code to calculate distances using the HC-SR04 sensor. This involves sending a pulse, measuring the time until the echo returns, and converting this to a distance value in centimeters or inches.

3.Feedback Logic:

-Define thresholds for distance (e.g., if an obstacle is within 1 meter, activate vibration
motor).

-Adjust feedback intensity or pattern based on proximity to obstacles.
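One simple way to scale the alert pattern with proximity is an integer re-mapping in the style of Arduino's map() function; the 50-100 cm and 500-1000 ms thresholds below are example values, not project-mandated ones:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    """Integer re-mapping, as in the Arduino core's map()."""
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

def beep_gap_ms(distance_cm):
    """Pause between beeps: nearer obstacles beep faster.

    Clamps to the 50-100 cm band, then maps it onto a
    500-1000 ms gap (illustrative thresholds).
    """
    d = max(50, min(100, distance_cm))
    return arduino_map(d, 50, 100, 500, 1000)
```

An obstacle at 50 cm yields a 500 ms gap (rapid beeping), while one at the 100 cm edge of the range yields a leisurely 1000 ms gap.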

4.Wireless Communication (Optional):

-If integrating wireless communication (Wi-Fi), configure ESP32 to send data to a remote
server or mobile device for further processing or logging.

• Testing and Calibration:

1.Testing:

-Ensure all connections are secure and correct.

-Test the gloves in various environments to verify distance measurement accuracy and
reliability of feedback mechanisms.

2.Calibration:

-Fine-tune distance thresholds and feedback patterns based on user feedback and practical
testing scenarios.

• Safety Considerations:

-Ensure components are securely mounted within the gloves to prevent accidental
disconnection or damage.

-Use suitable power sources and ensure all electrical connections are insulated to prevent
shorts or shocks.

By following these steps and considerations, you can create a functional circuit for gloves designed to assist blind individuals by providing real-time feedback about obstacles using ultrasonic sensing and ESP32-CAM capabilities.

CHAPTER 8

RESULT

The results of the project on object detection and identification using ultrasonic sensors
demonstrated promising capabilities in real-time sensing and recognition tasks. Through
meticulous calibration and algorithm development, the system accurately detected objects
within the specified range, effectively differentiating between various targets based on their
reflective properties and distance from the sensor. The integration of machine learning models
enabled the system to classify objects with a high degree of accuracy, distinguishing between
predefined categories such as humans, obstacles, and vehicles. These results not only validate
the feasibility of employing ultrasonic sensors for reliable object detection but also highlight
the potential for practical applications in autonomous navigation, robotics, and security
systems. Moving forward, further enhancements in sensor fusion and algorithm optimization
could enhance the system's performance, making it even more robust and versatile in diverse
environments.

Fig 8.1 Object Detection and Identification Glove

CHAPTER 9 CONCLUSION AND FUTURE SCOPE

9.1 CONCLUSION
The Object Detection and Identification Glove for blind people represents a significant step
forward in assistive technology, aiming to enhance the independence and safety of visually
impaired individuals. Through the integration of advanced sensors, machine learning
algorithms, and haptic feedback, this innovative device allows users to perceive and identify
objects in their surroundings with greater accuracy and confidence.

The development and testing phases demonstrated the glove's effectiveness in recognizing a
variety of common objects, providing timely and comprehensible feedback to the user. While
there are still areas for improvement, such as optimizing the system's response time and
expanding the range of detectable objects, the current prototype shows promise for real-world
application.

Future work will focus on refining the design, enhancing user comfort, and conducting
extensive field tests to ensure the glove's reliability in diverse environments. The ultimate
goal is to make this technology widely accessible, offering a practical solution to the
everyday challenges faced by blind individuals.

Overall, the Object Detection and Identification Glove has the potential to significantly
improve the quality of life for blind people, fostering greater autonomy and confidence in
their daily activities.

9.2 FUTURE SCOPE


The Object Detection and Identification Glove for blind people presents numerous
opportunities for further development and enhancement. The following areas outline potential
future directions for this project:

1. Enhanced Object Recognition:

*Machine Learning Improvements: Leveraging more advanced and diverse datasets to improve the accuracy and range of object recognition.

*Continuous Learning: Implementing machine learning models that can learn and adapt in real time based on user feedback and new object encounters.

2. Expanded Sensor Integration:

*Multimodal Sensing: Incorporating additional sensors such as infrared, ultrasonic, and LIDAR to provide more comprehensive environmental mapping.

*3D Object Detection: Utilizing 3D cameras to enhance depth perception and improve the glove's ability to identify objects in complex environments.

3. User Interface and Experience:

*Customizable Feedback: Developing personalized haptic feedback patterns that users can customize according to their preferences and needs.

*Auditory Feedback: Integrating audio cues or voice feedback for users who prefer auditory information over tactile sensations.

4. Connectivity and Integration:

*Smartphone Integration: Enabling the glove to connect with smartphones to leverage existing accessibility features and apps.

*Cloud Connectivity: Implementing cloud-based processing to offload complex computation and enhance the glove's real-time performance.

5. Ergonomic Design:

*Comfort and Wearability: Improving the glove's design to ensure it is lightweight, comfortable, and easy to wear for extended periods.

*Battery Life: Enhancing battery efficiency and exploring alternative power sources to extend usage time without frequent recharging.

6. Field Testing and User Feedback:

*Extensive Trials: Conducting extensive field tests with a diverse group of users to gather
comprehensive feedback and identify areas for improvement.

*Iterative Design: Utilizing user feedback to iteratively improve the glove's functionality, design, and overall user experience.

7. Regulatory Approvals and Market Readiness:

*Certification and Compliance: Ensuring the glove meets all necessary regulatory standards for medical devices and assistive technologies.

*Commercialization: Developing a strategy for mass production and distribution, making the technology affordable and accessible to a broad audience.

8. Collaborative Efforts:

*Partnerships with Organizations: Collaborating with organizations for the blind and visually impaired to align the product with the specific needs and challenges faced by these communities.

*Academic and Industry Research: Partnering with academic institutions and tech companies
to stay at the forefront of technological advancements and integrate cutting-edge research into
the product.

By focusing on these areas, the Object Detection and Identification Glove can evolve into a
more sophisticated, reliable, and user-friendly assistive device, significantly improving the
quality of life for blind and visually impaired individuals.

REFERENCES
[1] Gharib, W. & Nagib, G. (2019). "Smart Cane for Blinds". In Proc. 9th Int. Conf. on AI Applications (pp. 253-262).

[2] Gopan, L. & Aarthi, R., "A vision based DCNN for identify bottle object in indoor environment", Lecture Notes in Computational Vision and Biomechanics, vol. 28, pp. 447-456, 2018.

[3] Syed Tahir Hussain Rizvi, M. Junaid Asif & Husnain Ashfaq, "Visual Impairment Aid using Haptic and Sound Feedback", 2019 International Conference on Communication, Computing and Digital Systems (C-CODE).

[4] Neven Saleh, Mostafa Farghaly, Eslam Elshaer & Amr Mo, "Smart glove-based gestures recognition system for Arabic sign language", 2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), 8-9 Feb. 2020.

[5] K. D. Kumar & Thangavel, S. K., "Assisting visually challenged person in the library environment", Lecture Notes in Computational Vision and Biomechanics, vol. 28, pp. 722-733, 2018.

APPENDIX
DETECTION
const int TrigPin = 12;  // Trigger connected to pin 12
const int EchoPin = 10;  // Echo connected to pin 10
int buz = 5;             // Buzzer connected to pin 5

void setup() {
  Serial.begin(9600);
  pinMode(buz, OUTPUT);
}

void loop() {
  long duration, cm;

  // Fire a 10 µs-class trigger pulse
  pinMode(TrigPin, OUTPUT);
  digitalWrite(TrigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(TrigPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(TrigPin, LOW);

  // Time the echo and convert to centimeters
  pinMode(EchoPin, INPUT);
  duration = pulseIn(EchoPin, HIGH);
  cm = microsecondsToCentimeters(duration);

  if (cm <= 100 && cm > 0) {
    // Closer obstacles give a shorter pause, i.e. faster beeping
    int d = map(cm, 50, 100, 500, 1000);
    digitalWrite(buz, HIGH);
    delay(150);
    digitalWrite(buz, LOW);
    delay(d);
    Serial.print(cm);
    Serial.print(" cm");
    Serial.println();
    delay(150);
  }
}

long microsecondsToCentimeters(long microseconds) {
  // Sound travels ~1 cm per 29 µs; halve for the round trip
  return microseconds / 29 / 2;
}

IDENTIFICATION

• IDENTIFICATION USING ARDUINO IDE

#include "WifiCam.hpp"
#include <WebServer.h>
#include <WiFi.h>

static const char* WIFI_SSID = "Enjoy";
static const char* WIFI_PASS = "12345677";

esp32cam::Resolution initialResolution;
WebServer server(80);

void setup() {
  Serial.begin(115200);
  Serial.println();
  delay(2000);

  // Join the Wi-Fi network as a station
  WiFi.persistent(false);
  WiFi.mode(WIFI_STA);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  if (WiFi.waitForConnectResult() != WL_CONNECTED) {
    Serial.printf("WiFi failure %d\n", WiFi.status());
    delay(5000);
    ESP.restart();
  }
  Serial.println("WiFi connected");

  // Bring up the OV2640 camera at 1024x768, JPEG quality 80
  {
    using namespace esp32cam;
    initialResolution = Resolution::find(1024, 768);
    Config cfg;
    cfg.setPins(pins::AiThinker);
    cfg.setResolution(initialResolution);
    cfg.setJpeg(80);
    bool ok = Camera.begin(cfg);
    if (!ok) {
      Serial.println("camera initialize failure");
      delay(5000);
      ESP.restart();
    }
    Serial.println("camera initialize success");
  }

  Serial.println("camera starting");
  Serial.print("http://");
  Serial.println(WiFi.localIP());
  addRequestHandlers();
  server.begin();
}

void loop() {
  server.handleClient();
}

• IDENTIFICATION USING PYTHON

import cv2
import urllib.request
import numpy as np
import cvlib as cv
from cvlib.object_detection import draw_bbox
import concurrent.futures

url = 'http://192.168.1.107/cam-hi.jpg'


def run1():
    # Show the raw camera stream from the ESP32-CAM
    cv2.namedWindow("live transmission", cv2.WINDOW_AUTOSIZE)
    while True:
        img_resp = urllib.request.urlopen(url)
        imgnp = np.array(bytearray(img_resp.read()), dtype=np.uint8)
        im = cv2.imdecode(imgnp, -1)
        cv2.imshow('live transmission', im)
        key = cv2.waitKey(5)
        if key == ord('q'):
            break
    cv2.destroyAllWindows()


def run2():
    # Run common-object detection on each frame and draw labelled boxes
    cv2.namedWindow("detection", cv2.WINDOW_AUTOSIZE)
    while True:
        img_resp = urllib.request.urlopen(url)
        imgnp = np.array(bytearray(img_resp.read()), dtype=np.uint8)
        im = cv2.imdecode(imgnp, -1)
        bbox, label, conf = cv.detect_common_objects(im)
        im = draw_bbox(im, bbox, label, conf)
        cv2.imshow('detection', im)
        key = cv2.waitKey(5)
        if key == ord('q'):
            break
    cv2.destroyAllWindows()


if __name__ == '__main__':
    print("started")
    # Run the live view and the detection view in parallel processes
    with concurrent.futures.ProcessPoolExecutor() as executer:
        f1 = executer.submit(run1)
        f2 = executer.submit(run2)
