
Embedded Night-Vision System for Pedestrian Detection

ABSTRACT
This paper describes the use of a thermal camera and an IR night-vision system for the detection of pedestrians and other objects that may cause accidents at night. Surveys indicate that most night-time accidents are caused by the low visual ability of humans in darkness, which makes driving at night considerably more dangerous than driving during the day. The system includes an IR night-vision camera that detects objects with the help of an IR LED and photodiode pair; this camera can detect objects up to 100 m away. The thermal camera detects the heat emitted by objects such as cars, humans, and animals, which allows detection at longer range and on surfaces of low reflectivity, where IR night vision may fail. Mounted together on a car, these two cameras help the driver drive safely. In this system, the Histogram of Oriented Gradients (HOG) algorithm and a Support Vector Machine (SVM) are applied with the help of OpenCV in MATLAB and EmguCV in Visual Basic 2012. The system was tested on video recorded with these cameras and produced good, efficient results. The system is also cost-efficient and easy to implement.
INTRODUCTION
Night-time driving accounts for a disproportionate share of pedestrian accidents, largely because human vision degrades severely in darkness. This work therefore combines a thermal camera with an IR night-vision system to detect pedestrians and other objects that may cause accidents at night. The IR night-vision camera detects objects with the help of an IR LED and photodiode pair and has a detection range of up to 100 m. The thermal camera detects the heat emitted by objects such as cars, humans, and animals, extending detection to longer ranges and to surfaces of low reflectivity, where IR night vision may fail. Mounted together on a car, the two cameras help the driver drive safely. The detection pipeline applies the Histogram of Oriented Gradients (HOG) algorithm and a Support Vector Machine (SVM) with the help of OpenCV in MATLAB and EmguCV in Visual Basic 2012. Tested on video recorded with these cameras, the system produced good, efficient results, and it is cost-efficient and easy to implement.
LITERATURE SURVEY
The Local Binary Patterns (LBP) operator [4] is one of the most successful texture descriptors and has been widely used in various applications. The idea of this operator is to assign each pixel a code depending on the gray levels of its neighborhood: the gray level of the central pixel ic, at coordinates (xc, yc), is compared to that of each of its neighbors in. The descriptor is robust against monotonic gray-level changes, such as those caused by lighting variations. Another important property is its simplicity of calculation, which allows images to be analyzed in difficult settings in real time.
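The pixel-coding step described above can be sketched in a few lines of Python (a minimal illustration using numpy; the function name and the clockwise neighbor ordering are our own choices, not taken from the cited work):

```python
import numpy as np

def lbp_code(img, y, x):
    """Basic 3x3 LBP: compare each of the 8 neighbours of pixel (y, x)
    with the centre pixel and pack the comparison bits into an 8-bit code."""
    center = img[y, x]
    # Neighbours visited clockwise, starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
print(lbp_code(img, 1, 1))          # -> 120
# Adding a constant offset to every pixel leaves the code unchanged,
# which is the monotonic gray-level invariance mentioned above.
print(lbp_code(img + 10, 1, 1))     # -> 120
```

The second call illustrates why the descriptor tolerates global lighting shifts: only the ordering of gray levels matters, not their absolute values.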

Several studies have shown that using several types of descriptors, simultaneously or alternately, significantly improves the results [5]. For example, the combined use of HOG and LBP gives better results than either descriptor alone, because HOG works poorly when the background is cluttered with noisy edges; local binary patterns are complementary in this respect, since they can filter such noise using the concept of uniform patterns. Characteristic analysis then determines the class to which the data belongs: the data must be classified, and numerous classification methods exist, each suited to specific problems.

Supervised learning classification methods are used here in order to realize the final application, real-time pedestrian detection. The Support Vector Machine (SVM) is a two-class classification method [6] that attempts to separate the positive and negative examples among all samples. The method looks for the hyperplane that separates the positive examples from the negative ones while ensuring that the margin between the nearest positive and negative examples is maximal. This favors generalization: new examples need not closely resemble those used to find the hyperplane, yet they still fall on one side of the boundary or the other.
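The margin-maximization idea can be made concrete with a toy linear SVM trained by sub-gradient descent on the hinge loss (a sketch for illustration only; the system described here would use a library SVM such as the one in OpenCV/EmguCV, and the data, learning rate, and function name below are invented):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Sub-gradient descent on the regularised hinge loss
    lam*||w||^2 + mean(max(0, 1 - y_i*(w.x_i + b))), with y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:        # inside the margin: hinge active
                w += lr * (yi * xi - 2 * lam * w)
                b += lr * yi
            else:                            # outside the margin: only shrink w
                w -= lr * 2 * lam * w
    return w, b

# Positive examples cluster near (2, 2), negatives near (-2, -2).
X = np.array([[2., 2.], [3., 2.], [2., 3.], [-2., -2.], [-3., -2.], [-2., -3.]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))   # recovers the training labels
```

The separating hyperplane is the set of points where `w @ x + b = 0`; new samples are classified by the sign of that expression, exactly as described above.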
The Relevance Vector Machine (RVM) method was developed by Tipping [7]. It is a method that can also deal with regression problems. It uses the classical linear model of SVM kernel machines, but employs a Bayesian formulation to determine the parameters and to select the relevant examples from which the final discriminant model is built. Boosting [8] is a method that combines numerous algorithms relying on sets of binary classifiers, and it optimizes their performance. The principle comes from the combination of classifiers (also called hypotheses): by successive iterations, the knowledge of a weak classifier is added to the final (strong) classifier. Each contributing classifier is weighted by the quality of its classification: the better it classifies, the more weight it receives. Misclassified examples are boosted so that they become more important to the weak learner in the next round, allowing it to make up for the deficiency.

The pedestrian detection system needs acceleration to enable real-time adaptive processing. Hardware acceleration has the potential to speed up these algorithms, making real-time processing feasible for many image- and video-processing tasks. It can be achieved using field-programmable gate arrays (FPGAs) or graphics processing units (GPUs); FPGAs in particular consist of reconfigurable hardware, allowing their function to be customized for a specific application. For intensive computing, FPGAs provide very large logical resources (multipliers, accumulators). In addition, they offer highly flexible architectures: they can easily split the video source to feed the display (video output) and different processing blocks independently for subsequent video processing. An FPGA can also perform different processing stages with independent clocks without needing additional resources for time multiplexing, unlike classical CPU or GPU processors. Thanks to advanced semiconductor technologies, modern FPGA-SoC (Field-Programmable Gate Array System-on-Chip) generations are powerful enough to support real-time image processing because of their high logic density, generic architecture, and on-chip memory.

Today, faced with the integration density of FPGAs and the growing demand of advanced applications for logical resources, it is very difficult, if not impossible, to design IPs for embedded vision with a traditional hardware description language (HDL). Indeed, implementing intensive image processing on an FPGA requires very substantial development time and generally leads to reliability problems in the design. As a result, many efforts have been made to cope with this huge amount of resources by integrating tools that offer a design flow at a higher level of abstraction than traditional HDL. The Vivado High-Level Synthesis tool speeds up IP creation by allowing C, C++, and SystemC specifications to be targeted directly at all Xilinx All Programmable SoC FPGAs without having to create the RTL manually, offering a faster route to IP creation while exploiting its properties. With the introduction of reconfigurable platforms such as AP SoCs and the advent of new high-level tools for configuring them, FPGA-SoC image processing has emerged as a practical solution for most computer vision and image processing problems. In this context, we were interested in the design and implementation of an embedded video-processing architecture. This research aims to propose an embedded architecture for a pedestrian detection algorithm on a hardware/software co-design platform suitable for use as an embedded system.
EXISTING SYSTEM
Pedestrian detection in night-time conditions has been a key focus in advanced
driver assistance systems (ADAS) and autonomous vehicle technologies.
Traditional night-vision systems generally rely on infrared (IR) or thermal
imaging to enhance visibility under low-light conditions. These systems work
by capturing the heat emitted by pedestrians or the reflection of light from
external sources, making pedestrians distinguishable from the background. The
two primary types of night-vision technologies used are active infrared
systems, which involve projecting infrared light and detecting the reflected
signals, and passive infrared systems, which capture naturally emitted thermal
radiation.

Traditional algorithms, such as background subtraction, optical flow, and edge detection, have been widely used for night-time pedestrian detection. In
addition, classical machine learning algorithms such as Support Vector
Machines (SVM), Histogram of Oriented Gradients (HOG) combined with
SVM, and AdaBoost classifiers have been employed to classify pedestrians
based on handcrafted features. These methods depend heavily on the extraction
of discriminative features from thermal or infrared images and often require
manually designed feature sets that capture motion, shape, or thermal contrast
between pedestrians and the environment.
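The handcrafted-feature step can be illustrated by computing the core HOG quantity: an orientation histogram of gradients for one cell (a simplified numpy sketch without block normalization or the full detection window; the function name and test pattern are ours):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram of gradients for one HOG cell, weighted by
    gradient magnitude (a minimal sketch; real HOG adds block normalization)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_width) % n_bins] += m
    return hist

# A vertical step edge: all gradient energy is horizontal, so it falls
# entirely into the first (0 degree) orientation bin.
cell = np.tile([0, 0, 0, 0, 255, 255, 255, 255], (8, 1))
hist = hog_cell_histogram(cell)
print(np.argmax(hist))   # -> 0
```

Concatenating such histograms over a grid of cells yields the feature vector that the SVM or AdaBoost classifier then scores.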

However, these traditional methods have several limitations. First, they struggle with low contrast and noise in thermal images, which affects their
ability to detect pedestrians accurately, especially in varying weather conditions
(rain, fog, etc.) or in highly cluttered backgrounds. Additionally, background
subtraction-based methods often fail when pedestrians are moving slowly,
causing them to be confused with static background objects. Moreover,
manually designed features, while effective in some cases, tend to be
suboptimal when dealing with more complex and dynamic environments, as
they lack the adaptability required for varying shapes, sizes, and postures of
pedestrians.

Another issue with these systems is their inability to generalize across different
thermal imaging devices or different ambient temperature ranges. The
performance of traditional pedestrian detection systems can degrade
significantly when the surrounding temperature matches that of the human body,
reducing the contrast between the pedestrian and the environment. Additionally,
in many cases, the real-time processing capability of traditional algorithms is
limited due to the high computational load required for feature extraction and
classification in thermal images, making them less suitable for fast-moving
applications like autonomous driving.

Despite these efforts, high false positives and missed detections remain key
challenges, limiting the overall reliability of traditional night-vision pedestrian
detection systems.
DISADVANTAGES
1. Limited Detection Range: Night-vision systems typically have a shorter
detection range compared to daylight systems, making it harder to detect
distant pedestrians.
2. Poor Performance in Adverse Weather: Systems often struggle in rain,
fog, or snow, as these conditions can obscure thermal imaging or infrared
signals.
3. False Positives and False Negatives: High rates of misclassifications,
such as detecting non-pedestrian objects or missing actual pedestrians,
reduce system reliability.
4. High Power Consumption: Embedded night-vision systems, especially
those using active infrared, can drain significant power, limiting their use
in battery-operated devices.
5. Expensive Hardware: Thermal cameras and infrared sensors are costly,
making the system expensive to implement and maintain.
6. Low Resolution: Many night-vision systems provide lower image
resolution, leading to less detailed images and making pedestrian
identification less accurate.
7. Latency in Real-time Processing: Real-time pedestrian detection can be
slow due to the high computational load of image processing in low light,
resulting in delayed responses.
8. Limited Field of View: Narrow field of view in some systems restricts
the area covered, increasing the risk of missing pedestrians outside the
system’s range.
9. Vulnerability to Ambient Light Interference: Sudden exposure to
headlights or streetlights can cause momentary blindness in infrared or
thermal-based systems.
10. Sensor Calibration Issues: Night-vision systems often require frequent calibration to maintain accuracy, which can be difficult and time-consuming in embedded systems.
PROPOSED SYSTEM
Pedestrian detection in low-light or nighttime conditions presents significant
challenges for intelligent transportation systems and autonomous driving. This
project proposes an embedded night-vision system for pedestrian detection
that leverages the capabilities of deep learning, specifically using the YOLOv2
(You Only Look Once, version 2) Convolutional Neural Network (CNN)
model. The system is designed to be integrated into vehicles or surveillance
platforms, where it can operate effectively in poor lighting environments to
identify pedestrians in real-time.

YOLOv2 for Efficient Pedestrian Detection

YOLOv2 is a state-of-the-art object detection algorithm known for its balance between detection accuracy and speed. Unlike traditional detection systems that operate using a region proposal and classification step, YOLOv2 frames object detection as a single regression problem. This approach divides the input image into a grid and directly predicts bounding boxes and associated confidence scores for each grid cell, making the detection process much faster than methods such as Fast R-CNN.
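The grid-cell regression can be made concrete by decoding one raw network prediction with YOLOv2's published box parametrisation (a numpy sketch; the 13x13 grid size and the anchor values below are illustrative, not the trained model's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_yolo_cell(t, cell_x, cell_y, anchor_w, anchor_h, grid=13):
    """Decode one raw YOLOv2 prediction (tx, ty, tw, th, to) for grid cell
    (cell_x, cell_y) into a box in relative image coordinates."""
    tx, ty, tw, th, to = t
    bx = (cell_x + sigmoid(tx)) / grid    # box centre, offset inside its cell
    by = (cell_y + sigmoid(ty)) / grid
    bw = anchor_w * np.exp(tw) / grid     # box size, scaled from the anchor
    bh = anchor_h * np.exp(th) / grid
    conf = sigmoid(to)                    # objectness confidence score
    return bx, by, bw, bh, conf

# A zero prediction in the centre cell of a 13x13 grid decodes to a box
# centred at (0.5, 0.5) with confidence 0.5.
print(decode_yolo_cell(np.zeros(5), 6, 6, 1.0, 1.0))
```

Because every cell's boxes are decoded in one pass over the network output, there is no separate region-proposal stage, which is the source of YOLOv2's speed advantage.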

The embedded night-vision system benefits greatly from YOLOv2's efficiency, allowing it to process images captured in real time by infrared (IR) cameras or thermal sensors, even with limited computational resources. By utilizing the Darknet-19 architecture, the streamlined backbone network of YOLOv2 that performs well on embedded devices, the system is capable of identifying pedestrians in infrared images at high frame rates, which is crucial for real-time detection in vehicles or security systems.

The proposed system comprises three main components:


1. Night-vision Sensor Module: This module includes infrared or thermal
imaging cameras that capture images in low-light environments. The
cameras are sensitive to heat signatures emitted by pedestrians, making
them ideal for detecting human figures in dark conditions where
conventional RGB cameras struggle.

2. Embedded Processing Unit: A lightweight, low-power computing device such as an NVIDIA Jetson or ARM-based processor is employed for running the YOLOv2 model. The YOLOv2 CNN is optimized for performance on embedded hardware, enabling efficient detection without the need for the high-end GPUs typically required for deep learning inference.

3. Pedestrian Detection and Alert System: The real-time pedestrian detection module processes the images captured by the sensor using YOLOv2. Once pedestrians are detected, their locations are highlighted with bounding boxes, and the system sends alerts (visual or auditory) to the driver or user. This feature is crucial in preventing accidents by giving early warnings when pedestrians are detected in hazardous areas.

To further enhance detection accuracy in night-vision scenarios, the system uses pre-processing techniques to improve the quality of the infrared or thermal images before feeding them into the YOLOv2 network. Techniques such as contrast enhancement, noise reduction, and image sharpening are applied to ensure that the pedestrians' heat signatures are more distinguishable, allowing YOLOv2 to perform optimally despite the challenging visual conditions.
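Two of these pre-processing steps might look as follows (a numpy-only sketch of linear contrast stretching and kernel sharpening; a real deployment would more likely use optimized OpenCV routines, and the 3x3 kernel below is just one common choice):

```python
import numpy as np

def stretch_contrast(img):
    """Linear contrast stretch of an 8-bit image to the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros_like(img)
    return ((img.astype(float) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def sharpen(img):
    """Apply a 3x3 sharpening kernel with zero padding (naive convolution)."""
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + 3, x:x + 3] * k).sum()
    return np.clip(out, 0, 255).astype(np.uint8)

# A low-contrast thermal patch (values 100..130) is stretched to 0..255.
frame = np.array([[100, 110], [120, 130]], dtype=np.uint8)
print(stretch_contrast(frame))   # -> [[0 85] [170 255]]
```

Stretching widens the gap between a warm pedestrian and a cooler background, and sharpening emphasizes silhouette edges, both of which make the detector's job easier.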

Moreover, the YOLOv2 model is trained on a customized dataset that includes both daytime and night-time pedestrian images to ensure robust detection under different lighting conditions. Transfer learning is also employed to adapt the pre-trained YOLOv2 model to thermal and infrared pedestrian detection, further improving detection performance in night-vision applications.

The proposed embedded night-vision system for pedestrian detection offers a highly efficient and accurate solution by leveraging the YOLOv2 CNN model. Its ability to detect pedestrians in real time, combined with its low-power processing requirements, makes it suitable for integration into autonomous vehicles, surveillance systems, and other safety-critical applications. By enhancing night-time pedestrian detection capabilities, this system can play a key role in reducing accidents and improving road safety in low-light environments.
ADVANTAGES
1. Real-time Detection: YOLOv2 processes images faster, making the
system suitable for real-time pedestrian detection.
2. High Accuracy: CNN-based YOLOv2 improves detection accuracy, even
in low-light or night-time conditions.
3. Embedded Implementation: Lightweight architecture allows
deployment on embedded hardware with limited resources.
4. Wide Field of View: YOLOv2 detects pedestrians across various areas
within the frame, covering more ground.
5. Multi-scale Detection: Ability to detect pedestrians at different scales
and distances due to YOLOv2's multi-scale feature detection.
6. Low Latency: The system offers low latency, crucial for safety
applications in automotive and surveillance systems.
7. Energy Efficient: Embedded systems with optimized CNN models
consume less power, suitable for battery-operated devices.
8. Robust to Environmental Variations: Performs well under diverse
night-time lighting conditions, enhancing robustness.
9. Integrated Object Tracking: The YOLOv2 architecture can support
tracking pedestrians across frames.
10. Compact and Portable: The embedded system is compact, making it easier to deploy in vehicles or surveillance units.
WORKING METHODOLOGY

Nowadays, with advanced technologies, drivers can receive a great deal of information from sensors, such as upcoming traffic signals, diversions, and traffic conditions, but these sensors may not provide accurate information about pedestrians or other objects at night because of darkness or low-quality cameras. To address this problem, the performance of the YOLOv2 CNN (convolutional neural network) object detection model is first evaluated, but this model is also unable to detect all objects in night-vision images.

In the proposed work, objects in night-vision images are detected using a Haar/HOG descriptor with the AdaBoost algorithm; this approach provides better detection than YOLOv2, and its false detection rate is also lower. The algorithm first cleans the image using OpenCV, then applies Haar/HOG features with AdaBoost to detect pedestrians in the image.
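The AdaBoost re-weighting at the heart of this classifier can be sketched with simple threshold stumps as weak learners (an illustrative implementation only; it operates on toy 1-D features rather than actual Haar/HOG features, and all names are ours):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=5):
    """Minimal AdaBoost with threshold stumps as weak learners; y in {-1, +1}.
    Misclassified samples are up-weighted after each round."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump (feature, threshold, polarity)
        # with the lowest weighted error.
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w = w * np.exp(-alpha * y * pred)       # boost the misclassified
        w = w / w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * s * np.where(X[:, f] >= t, 1, -1)
                for a, f, t, s in ensemble)
    return np.sign(score)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
print(adaboost_predict(adaboost_stumps(X, y), X))   # recovers the labels
```

In the real pipeline, each stump would threshold one Haar or HOG feature response instead of a raw coordinate; the weighting scheme is the same.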

In the experiments, 6 night-vision test images were used. YOLOv2 was able to detect pedestrians in 4 of the 6 images, while AdaBoost detected pedestrians in all 6; however, AdaBoost also marked some false regions as pedestrians, so its effective detection accuracy is taken as 80%, while YOLOv2's detection accuracy is 4/6 ≈ 66%.

AdaBoost detection rate = 6/6 × 100 = 100%, minus 20% for false detections = 80%

YOLOv2 detection rate = 4/6 × 100 ≈ 66%
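The reported rates reduce to plain arithmetic, reproduced here only to make the figures above explicit:

```python
# Detection rates over the 6 night-vision test images, as reported above.
yolo_rate = 4 / 6 * 100        # YOLOv2: pedestrians found in 4 of the 6 images
ada_raw = 6 / 6 * 100          # AdaBoost: pedestrians found in all 6 images
ada_rate = ada_raw - 20        # minus the 20% penalty for false detections
print(int(yolo_rate), int(ada_rate))   # -> 66 80
```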


SYSTEM ARCHITECTURE
SYSTEM REQUIREMENTS

➢ H/W System Configuration:-

➢ Processor - Pentium IV

➢ RAM - 4 GB (min)

➢ Hard Disk - 20 GB

SOFTWARE REQUIREMENTS:

 Operating system : Windows 7 Ultimate.

 Coding Language : Python.


SYSTEM STUDY
FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements of the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY

 TECHNICAL FEASIBILITY

 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system
will have on the organization. The amount of fund that the company can pour
into the research and development of the system is limited. The expenditures
must be justified. Thus the developed system as well within the budget and this
was achieved because most of the technologies used are freely available. Only
the customized products had to be purchased.
TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must therefore have modest requirements; only minimal or no changes are required to implement this system.

SOCIAL FEASIBILITY

The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. Users must not feel threatened by the system; instead, they must accept it as a necessity. The level of acceptance by the users depends solely on the methods employed to educate them about the system and to make them familiar with it. Their confidence must be raised so that they can also offer constructive criticism, which is welcome, since they are the final users of the system.
SYSTEM DESIGN
UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.

The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.

The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.

The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:

The primary goals in the design of the UML are as follows:

1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.

2. Provide extendibility and specialization mechanisms to extend the core concepts.

3. Be independent of particular programming languages and development processes.

4. Provide a formal basis for understanding the modeling language.

5. Encourage the growth of the OO tools market.

6. Support higher-level development concepts such as collaborations, frameworks, patterns, and components.

7. Integrate best practices.


USECASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its
purpose is to present a graphical overview of the functionality provided by a
system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the
actors in the system can be depicted.

Use case diagram: the User can upload a night-vision image, perform night-vision pedestrian detection using YOLOv2, perform night-vision pedestrian detection using HAAR + AdaBoost, and exit.
CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains which information.

Class diagram: the User class provides the operations Upload Night Vision Image, Night Vision Pedestrian Detection using YOLOv2, Night Vision Pedestrian Detection using HAAR + AdaBoost, and Exit.
SEQUENCE DIAGRAM:

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.

Sequence diagram: the User sends the messages Upload Night Vision Image, Night Vision Pedestrian Detection using YOLOv2, Night Vision Pedestrian Detection using HAAR + AdaBoost, and Exit to the System Application.
ACTIVITY DIAGRAM:

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration, and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

Collaboration diagram:

Collaboration diagram: the User sends the messages 1: Upload Night Vision Image, 2: Night Vision Pedestrian Detection using YOLOv2, 3: Night Vision Pedestrian Detection using HAAR + AdaBoost, and 4: Exit to the System Application.
SOFTWARE ENVIRONMENT
What is Python :-

Below are some facts about Python.

Python is currently the most widely used multi-purpose, high-level programming language.

Python allows programming in Object-Oriented and Procedural paradigms.


Python programs are generally smaller than those written in other programming languages such as Java.

Programmers have to type relatively little, and the language's indentation requirement keeps programs readable.

The Python language is used by almost all tech-giant companies, such as Google, Amazon, Facebook, Instagram, Dropbox, and Uber.

The biggest strength of Python is its huge collection of libraries, which can be used for the following:

 Machine Learning

 GUI Applications (like Kivy, Tkinter, PyQt etc. )

 Web frameworks like Django (used by YouTube, Instagram, Dropbox)

 Image processing (like Opencv, Pillow)

 Web scraping (like Scrapy, BeautifulSoup, Selenium)

 Test frameworks

 Multimedia
Advantages of Python :-

Let’s see how Python dominates over other languages.

1. Extensive Libraries

Python ships with an extensive library containing code for various purposes, such as regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So we don't have to write the complete code for these tasks manually.

2. Extensible

As we have seen earlier, Python can be extended with other languages: you can write some of your code in languages like C++ or C. This comes in handy in many projects.

3. Embeddable

Complementary to extensibility, Python is embeddable as well. You can put your Python code in the source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.

4. Improved Productivity

The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do; you need to write less to get more done.

5. IOT Opportunities

Since Python forms the basis of new platforms like the Raspberry Pi, its future looks bright for the Internet of Things. This is a way to connect the language with the real world.
6. Simple and Easy

When working with Java, you may have to create a class to print "Hello World". In Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why, when people pick up Python, they have a hard time adjusting to other, more verbose languages like Java.

7. Readable

Because it is not such a verbose language, reading Python is much like reading
English. This is the reason why it is so easy to learn, understand, and code. It
also does not need curly braces to define blocks, and indentation is mandatory.
This further aids the readability of the code.

8. Object-Oriented

This language supports both the procedural and object-oriented programming paradigms. While functions help us with code reusability, classes and objects let us model the real world. A class allows the encapsulation of data and functions into one unit.

9. Free and Open-Source

As we said earlier, Python is freely available. But not only can you download Python for free, you can also download its source code, make changes to it, and even distribute it. It comes with an extensive collection of libraries to help you with your tasks.

10. Portable

When you code your project in a language like C++, you may need to make
some changes to it if you want to run it on another platform. But it isn’t the
same with Python. Here, you need to code only once, and you can run it
anywhere. This is called Write Once Run Anywhere (WORA). However, you
need to be careful enough not to include any system-dependent features.
11. Interpreted

Lastly, we will say that it is an interpreted language. Since statements are executed one by one, debugging is easier than in compiled languages.

Advantages of Python Over Other Languages

1. Less Coding

Almost all tasks done in Python require less coding than the same tasks in other languages. Python also has excellent standard library support, so you don't have to search for third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.

2. Affordable

Python is free, so individuals, small companies, and big organizations can leverage the freely available resources to build applications. Python is popular and widely used, which gives you better community support.

The 2019 GitHub annual survey showed that Python has overtaken Java in the most popular programming language category.

3. Python is for Everyone

Python code can run on any machine, whether it is Linux, Mac, or Windows. Programmers need to learn different languages for different jobs, but with Python you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping, and also build games and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python

So far, we’ve seen why Python is a great choice for your project. But if you
choose it, you should be aware of its consequences as well. Let’s now see the
downsides of choosing Python over another language.

1. Speed Limitations

We have seen that Python code is executed line by line. Since Python is interpreted, this often results in slow execution. This, however, isn't a problem unless speed is a focal point of the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to outweigh its speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is much more rarely seen on the client side. Besides that, it is rarely used to implement smartphone-based applications; one such application is called Carbonnelle. The reason it is not well known on the client side, despite the existence of Brython, is that it isn't that secure.

3. Design Restrictions

As you know, Python is dynamically typed. This means that you don't need to
declare the type of a variable while writing the code. It uses duck typing. But
wait, what's that? Well, it just means that if it looks like a duck, it must be
a duck. While this makes coding easy for programmers, it can raise run-time
errors.
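As a minimal illustrative sketch (hypothetical code, not from the source), dynamic typing means a type mismatch only surfaces when the offending line actually runs:

```python
def add(a, b):
    # No declared types: any a and b that support "+" are accepted
    # ("if it looks like a duck, it must be a duck").
    return a + b

print(add(2, 3))        # numbers: 5
print(add("2", "3"))    # strings: "23" -- same code, different meaning

try:
    add(2, "3")          # the mismatch is only caught at run time
except TypeError as err:
    print("run-time error:", err)
```

A statically typed language would reject `add(2, "3")` at compile time; Python discovers it only when the call executes.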
4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python's database
access layers are a bit underdeveloped. Consequently, it is less often applied
in huge enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my
example. I don’t do Java, I’m more of a Python person. To me, its syntax is so
simple that the verbosity of Java code seems unnecessary.

This was all about the Advantages and Disadvantages of Python Programming
Language.
History of Python : -

What do the alphabet and the programming language Python have in common?
Right, both start with ABC. If we are talking about ABC in the Python context,
it's clear that the programming language ABC is meant. ABC is a general-
purpose programming language and programming environment, which had been
developed in the Netherlands, Amsterdam, at the CWI (Centrum Wiskunde
& Informatica). The greatest achievement of ABC was to influence the design of
Python. Python was conceptualized in the late 1980s. Guido van Rossum worked
at that time on a project at the CWI called Amoeba, a distributed operating
system. In an interview with Bill Venners, Guido van Rossum said: "In the early
1980s, I worked as an implementer on a team building a language called ABC at
Centrum voor Wiskunde en Informatica (CWI). I don't know how well people
know ABC's influence on Python. I try to mention ABC's influence because I'm
indebted to everything I learned during that project and to the people who
worked on it." Later in the same interview, Guido van Rossum continued: "I
remembered all my experience and some of my frustration with ABC. I decided
to try to design a simple scripting language that possessed some of ABC's better
properties, but without its problems. So I started typing. I created a simple
virtual machine, a simple parser, and a simple runtime. I made my own version
of the various ABC parts that I liked. I created a basic syntax, used indentation
for statement grouping instead of curly braces or begin-end blocks, and
developed a small number of powerful data types: a hash table (or dictionary, as
we call it), a list, strings, and numbers."
What is Machine Learning : -

Before we take a look at the details of various machine learning methods, let's
start by looking at what machine learning is, and what it isn't. Machine learning
is often categorized as a subfield of artificial intelligence, but I find that
categorization can often be misleading at first brush. The study of machine
learning certainly arose from research in this context, but in the data science
application of machine learning methods, it's more helpful to think of machine
learning as a means of building models of data.

Fundamentally, machine learning involves building mathematical models to


help understand data. "Learning" enters the fray when we give these
models tunable parameters that can be adapted to observed data; in this way the
program can be considered to be "learning" from the data. Once these models
have been fit to previously seen data, they can be used to predict and understand
aspects of newly observed data. I'll leave to the reader the more philosophical
digression regarding the extent to which this type of mathematical, model-based
"learning" is similar to the "learning" exhibited by the human brain.
Understanding the problem setting in machine learning is essential to using
these tools effectively, and so we will start with some broad categorizations of
the types of approaches we'll discuss here.
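To make the idea of tunable parameters concrete, here is a minimal sketch (the data and the one-parameter model y ≈ w·x are invented for illustration) of fitting a model to observed data and then predicting on new data:

```python
# Toy model y ≈ w * x with a single tunable parameter w.
# "Learning" here means choosing w to minimize the squared error
# on the observed (x, y) pairs.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

# Closed-form least-squares solution for w:
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(x):
    """Apply the fitted model to newly observed data."""
    return w * x

print(round(w, 2))             # fitted parameter, close to 2.0
print(round(predict(5.0), 2))  # prediction for an unseen x
```

Once `w` has been adapted to the observed data, the same model generalizes to inputs it never saw during fitting.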
Categories of Machine Learning :-

At the most fundamental level, machine learning can be categorized into two
main types: supervised learning and unsupervised learning.

Supervised learning involves somehow modeling the relationship between
measured features of data and some label associated with the data; once this
model is determined, it can be used to apply labels to new, unknown data. This
is further subdivided into classification tasks and regression tasks: in
classification, the labels are discrete categories, while in regression, the labels
are continuous quantities. We will see examples of both types of supervised
learning in the following section.

Unsupervised learning involves modeling the features of a dataset without
reference to any label, and is often described as "letting the dataset speak for
itself." These models include tasks such as clustering and dimensionality
reduction. Clustering algorithms identify distinct groups of data, while
dimensionality reduction algorithms search for more succinct representations of
the data. We will see examples of both types of unsupervised learning in the
following section.
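As an unsupervised-learning illustration, here is a minimal clustering sketch on hypothetical 1-D data (a toy k-means, not from the source):

```python
def kmeans_1d(points, centers, iters=10):
    """Tiny 1-D k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Drop a center if it attracted no points (fine for a sketch).
        centers = [sum(ps) / len(ps) for ps in clusters.values() if ps]
    return sorted(centers)

# Hypothetical unlabeled measurements forming two obvious groups:
data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]
print(kmeans_1d(data, centers=[0.0, 5.0]))   # centers near 1.0 and 10.0
```

No labels are involved: the algorithm discovers the two groups purely from the structure of the data.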
Need for Machine Learning

Human beings are, at this moment, the most intelligent and advanced species on
earth because they can think, evaluate and solve complex problems. On the
other side, AI is still in its initial stage and hasn't surpassed human intelligence
in many aspects. The question, then, is: what is the need to make machines
learn? The most suitable reason for doing this is "to make decisions, based on
data, with efficiency and scale".

Lately, organizations have been investing heavily in newer technologies like
Artificial Intelligence, Machine Learning and Deep Learning to extract key
information from data, perform several real-world tasks and solve problems.
We can call these data-driven decisions taken by machines, particularly to
automate the process. These data-driven decisions can be used, instead of
programming logic, in problems that cannot be programmed inherently. The
fact is that we can't do without human intelligence, but the other aspect is that
we all need to solve real-world problems with efficiency at a huge scale. That
is why the need for machine learning arises.
Challenges in Machine Learning :-

While Machine Learning is rapidly evolving, making significant strides with
cybersecurity and autonomous cars, this segment of AI as a whole still has a
long way to go. The reason behind this is that ML has not been able to
overcome a number of challenges. The challenges that ML is facing currently
are −

Quality of data − Having good-quality data for ML algorithms is one of the
biggest challenges. Use of low-quality data leads to problems related to data
preprocessing and feature extraction.

Time-consuming task − Another challenge faced by ML models is the
consumption of time, especially for data acquisition, feature extraction and
retrieval.

Lack of specialist persons − As ML technology is still in its infancy, the
availability of expert resources is a tough job.

No clear objective for formulating business problems − Having no clear
objective and well-defined goal for business problems is another key challenge
for ML, because this technology is not that mature yet.

Issue of overfitting & underfitting − If the model is overfitting or underfitting,
it cannot represent the problem well.

Curse of dimensionality − Another challenge an ML model faces is too many
features in the data points. This can be a real hindrance.

Difficulty in deployment − The complexity of an ML model makes it quite
difficult to deploy in real life.
Applications of Machine Learning :-

Machine Learning is the most rapidly growing technology, and according to
researchers we are in the golden years of AI and ML. It is used to solve many
real-world complex problems which cannot be solved with a traditional
approach. Following are some real-world applications of ML −

 Emotion analysis

 Sentiment analysis

 Error detection and prevention

 Weather forecasting and prediction

 Stock market analysis and forecasting

 Speech synthesis

 Speech recognition

 Customer segmentation

 Object recognition

 Fraud detection

 Fraud prevention

 Recommendation of products to customer in online shopping


How to Start Learning Machine Learning?

Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as
a “Field of study that gives computers the capability to learn without being
explicitly programmed”.

And that was the beginning of Machine Learning! In modern times, Machine
Learning is one of the most popular (if not the most!) career choices. According
to Indeed, Machine Learning Engineer Is The Best Job of 2019 with
a 344% growth and an average base salary of $146,085 per year.

But there is still a lot of doubt about what exactly Machine Learning is and
how to start learning it. So this article deals with the basics of Machine
Learning and also the path you can follow to eventually become a full-fledged
Machine Learning Engineer. Now let's get started!!!

How to start learning ML?

This is a rough roadmap you can follow on your way to becoming an insanely
talented Machine Learning Engineer. Of course, you can always modify the
steps according to your needs to reach your desired end-goal!

Step 1 – Understand the Prerequisites

If you are a genius, you could start ML directly, but normally there are some
prerequisites that you need to know first: Linear Algebra, Multivariate
Calculus, Statistics, and Python. And if you don't know these, never fear! You
don't need a Ph.D. in these topics to get started, but you do need a basic
understanding.

(a) Learn Linear Algebra and Multivariate Calculus


Both Linear Algebra and Multivariate Calculus are important in Machine
Learning. However, the extent to which you need them depends on your role as
a data scientist. If you are more focused on application heavy machine learning,
then you will not be that heavily focused on maths as there are many common
libraries available. But if you want to focus on R&D in Machine Learning, then
mastery of Linear Algebra and Multivariate Calculus is very important as you
will have to implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around 80% of your time
as an ML expert will be spent collecting and cleaning data. And statistics is a
field that handles the collection, analysis, and presentation of data. So it is no
surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical
Significance, Probability Distributions, Hypothesis Testing, Regression, etc.
Bayesian thinking is also a very important part of ML; it deals with various
concepts like Conditional Probability, Priors and Posteriors, Maximum
Likelihood, etc.
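As a quick worked example of conditional probability and Bayes' rule (the numbers are hypothetical, in the style of a spam filter):

```python
# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam = 0.2                  # prior: 20% of mail is spam
p_word_given_spam = 0.6       # likelihood of seeing the word in spam
p_word_given_ham = 0.1        # likelihood of seeing it in normal mail

# Total probability of the word appearing at all:
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: how belief in "spam" is updated after seeing the word.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 2))   # 0.6
```

Seeing the word triples the probability that the mail is spam (from the 0.2 prior to a 0.6 posterior), which is exactly the prior-to-posterior update Bayesian thinking is about.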

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics
and learn them as they go along with trial and error. But the one thing that you
absolutely cannot skip is Python! While there are other languages you can use
for Machine Learning, like R, Scala, etc., Python is currently the most popular
language for ML. In fact, there are many Python libraries that are specifically
useful for Artificial Intelligence and Machine Learning, such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it's best if you learn Python! You can do that using
various online resources and courses such as Fork Python, available free on
GeeksforGeeks.

Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to actually
learning ML (Which is the fun part!!!) It’s best to start with the basics and then
move on to the more complicated stuff. Some of the basic concepts in ML are:

(a) Terminologies of Machine Learning

 Model – A model is a specific representation learned from data by
applying some machine learning algorithm. A model is also called a
hypothesis.

 Feature – A feature is an individual measurable property of the data. A set
of numeric features can be conveniently described by a feature vector.
Feature vectors are fed as input to the model. For example, in order to
predict a fruit, there may be features like color, smell, taste, etc.

 Target (Label) – A target variable or label is the value to be predicted by
our model. For the fruit example discussed in the feature section, the label
with each set of inputs would be the name of the fruit, like apple, orange,
banana, etc.

 Training – The idea is to give a set of inputs (features) and their expected
outputs (labels), so after training we will have a model (hypothesis) that
will then map new data to one of the categories it was trained on.

 Prediction – Once our model is ready, it can be fed a set of inputs to
which it will provide a predicted output (label).
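Those terms can be tied together in a tiny hypothetical sketch (invented fruit data; a 1-nearest-neighbour rule stands in for the learned model):

```python
# Hypothetical (feature vector, label) pairs used for training.
# Features: [weight in grams, redness 0-1]; labels: fruit names.
training_data = [
    ([150, 0.9], "apple"),
    ([170, 0.8], "apple"),
    ([120, 0.2], "orange"),
    ([110, 0.3], "orange"),
]

def predict(features):
    """Model (hypothesis): label of the nearest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda pair: dist(pair[0], features))
    return label

# Prediction: new feature vectors mapped to a label.
print(predict([160, 0.85]))   # apple
print(predict([115, 0.25]))   # orange
```

Here "training" is simply storing the labeled examples; more realistic models compress the training data into tunable parameters instead.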
(b) Types of Machine Learning

 Supervised Learning – This involves learning from a training dataset with
labeled data using classification and regression models. This learning
process continues until the required level of performance is achieved.

 Unsupervised Learning – This involves using unlabeled data and then
finding the underlying structure in the data in order to learn more and
more about the data itself, using factor and cluster analysis models.

 Semi-supervised Learning – This involves using unlabeled data as in
Unsupervised Learning together with a small amount of labeled data.
Using labeled data vastly increases the learning accuracy and is also more
cost-effective than Supervised Learning.

 Reinforcement Learning – This involves learning optimal actions through
trial and error. So the next action is decided by learning behaviors that are
based on the current state and that will maximize the reward in the future.
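Reinforcement learning's trial-and-error idea can be sketched with a tiny epsilon-greedy bandit (a hypothetical two-armed slot machine, not from the source):

```python
import random

random.seed(0)   # deterministic toy run

# Hypothetical 2-armed bandit: arm 1 pays off more often than arm 0.
pay_prob = [0.3, 0.7]
counts = [0, 0]
values = [0.0, 0.0]          # running estimate of each arm's mean reward

def pull(arm):
    """One trial: reward 1 with the arm's payoff probability, else 0."""
    return 1.0 if random.random() < pay_prob[arm] else 0.0

for _ in range(2000):
    # Epsilon-greedy: explore a random arm 10% of the time,
    # otherwise exploit the arm that currently looks best.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = values.index(max(values))
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

best_arm = values.index(max(values))
print(best_arm)   # expected to settle on arm 1
```

The agent is never told which arm is better; it discovers this purely from the rewards its own actions produce, which is the essence of learning by trial and error.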
Advantages of Machine learning :-

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover specific trends
and patterns that would not be apparent to humans. For instance, for an e-
commerce website like Amazon, it serves to understand the browsing behaviors
and purchase histories of its users to help cater to the right products, deals, and
reminders relevant to them. It uses the results to reveal relevant advertisements
to them.

2. No human intervention needed (automation)

With ML, you don't need to babysit your project every step of the way. Since it
means giving machines the ability to learn, it lets them make predictions and
also improve the algorithms on their own. A common example of this is anti-
virus software, which learns to filter new threats as they are recognized. ML is
also good at recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy and
efficiency. This lets them make better decisions. Say you need to make a
weather forecast model: as the amount of data you have keeps growing, your
algorithms learn to make more accurate predictions faster.

4. Handling multi-dimensional and multi-variety data


Machine Learning algorithms are good at handling data that are multi-
dimensional and multi-variety, and they can do this in dynamic or uncertain
environments.

5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work for you.
Where it does apply, it holds the capability to help deliver a much more
personal experience to customers while also targeting the right customers.
Disadvantages of Machine Learning :-

1. Data Acquisition

Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they
must wait for new data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop enough to fulfill
their purpose with a considerable amount of accuracy and relevancy. It also
needs massive resources to function. This can mean additional requirements of
computer power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results generated
by the algorithms. You must also carefully choose the algorithms for your
purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors. Suppose you
train an algorithm with data sets small enough not to be inclusive. You end up
with biased predictions coming from a biased training set; this leads to
irrelevant advertisements being displayed to customers. In the case of ML, such
blunders can set off a chain of errors that can go undetected for long periods of
time. And when they do get noticed, it takes quite some time to recognize the
source of the issue, and even longer to correct it.

SYSTEM TEST

The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a
way to check the functionality of components, sub-assemblies, assemblies
and/or a finished product. It is the process of exercising software with the
intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various
types of test; each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing

Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application; it is done after the
completion of an individual unit and before integration. This is structural
testing that relies on knowledge of the unit's construction and is invasive. Unit
tests perform basic tests at component level and test a specific business
process, application, and/or system configuration. Unit tests ensure that each
unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
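The unit-testing ideas above can be sketched with Python's built-in unittest module (the `clamp` function is a hypothetical component, not from the source project):

```python
import unittest

def clamp(value, lo, hi):
    """Component under test: clamp value into the range [lo, hi]."""
    return max(lo, min(hi, value))

class TestClamp(unittest.TestCase):
    # One test per decision branch: below, inside, and above the range,
    # each with clearly defined inputs and expected results.
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_inside_range(self):
        self.assertEqual(clamp(7, 0, 10), 7)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClamp)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True when every path behaves as documented
```

Each test exercises one unique path through the unit in isolation, before the unit is combined with anything else.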

Integration testing

Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event-driven and is
more concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct
and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.

Functional test

Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage pertaining
to identifying business process flows, data fields, predefined processes, and
successive processes must be considered for testing. Before functional testing
is complete, additional tests are identified and the effective value of current
tests is determined.

System Test

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.

White Box Testing

White Box Testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose.
It is used to test areas that cannot be reached from a black-box level.

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests,
like most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is testing in
which the software under test is treated as a black box: you cannot "see" into
it. The test provides inputs and responds to outputs without considering how
the software works.
Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in
detail.

Test objectives

 All field entries must work properly.

 Pages must be activated from the identified link.

 The entry screen, messages and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format

 No duplicate entries should be allowed

 All links should take the user to the correct page.


Integration Testing

Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects.

The task of the integration test is to check that components or software
applications, e.g. components in a software system or – one step up – software
applications at the company level, interact without error.
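As a sketch of that idea, an integration-style check exercises two hypothetical components together through their interface rather than in isolation:

```python
# Two hypothetical components wired together through an interface.
def parse_record(line):
    """Component A: parse 'name,score' into a (name, int score) pair."""
    name, score = line.split(",")
    return name.strip(), int(score)

def format_report(records):
    """Component B: render parsed records as report lines."""
    return [f"{name}: {score} pts" for name, score in records]

# Integration check: feed A's output into B and verify the combined result.
lines = ["alice, 10", "bob, 7"]
report = format_report(parse_record(l) for l in lines)
assert report == ["alice: 10 pts", "bob: 7 pts"]
print("integration ok")
```

Each function may pass its own unit tests, but only running them as one program reveals interface defects such as a mismatch in the record format they exchange.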

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.

Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets
the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
IMPLEMENTATION
To test the above two algorithms, I am using the below night-vision images.

In the last image we are unable to see any pedestrian, but AdaBoost can detect it.

To run the project, double-click on the 'run.bat' file to get the below screen.

In the above screen, click on the 'Upload Night Vision Image' button and upload an image.
Here I am selecting the 'test.png' image; then click on the 'Open' button to load the image
and get the below screen.

The above screen shows the uploaded original image, in which we can hardly see the
pedestrian. Now try to detect that pedestrian using the YOLOv2 algorithm by clicking on
the 'Night Vision Pedestrian Detection using YOLOV2' button.

In the above screen, the first image is the original and the second is the YOLOv2 result; in
the second image there is no bounding box across the pedestrian, so YOLOv2 was unable to
detect it. Now click on the 'Night Vision Pedestrian Detection using HAAR + AdaBoost'
button to get the below result.

In the above screen, the first image is the original and the second is the result from the
HAAR + AdaBoost algorithm; this algorithm was able to detect the pedestrian successfully
and puts a bounding box across the detected pedestrian.

Now test with another image.

In the above screen 3.png is uploaded, and below is the YOLOv2 result.

In the above image YOLOv2 is able to detect the persons; now test with AdaBoost.

In the above screen we can see AdaBoost detecting both persons accurately. Similarly, you
can upload other images and test the algorithms.
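The HAAR + AdaBoost detector boosts many weak classifiers into a strong one. As a hedged illustration of the AdaBoost idea only (toy 1-D feature scores, not the project's actual HAAR-feature detector):

```python
import math

# Toy 1-D "feature responses" with labels: +1 pedestrian, -1 background.
X = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
y = [-1, -1, 1, -1, 1, 1]

def stump(threshold, polarity):
    """Weak classifier: a one-threshold decision stump."""
    return lambda x: polarity * (1 if x > threshold else -1)

candidates = [stump(t, p) for t in (0.2, 0.35, 0.5, 0.65, 0.8) for p in (1, -1)]

weights = [1.0 / len(X)] * len(X)
ensemble = []                         # (alpha, weak classifier) pairs

for _ in range(3):                    # three boosting rounds
    # Pick the stump with the lowest weighted error on current weights.
    errs = [sum(w for w, x, t in zip(weights, X, y) if h(x) != t)
            for h in candidates]
    best = errs.index(min(errs))
    h, err = candidates[best], max(errs[best], 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)   # this stump's vote strength
    ensemble.append((alpha, h))
    # Re-weight: misclassified samples get more attention next round.
    weights = [w * math.exp(-alpha * t * h(x))
               for w, x, t in zip(weights, X, y)]
    total = sum(weights)
    weights = [w / total for w in weights]

def predict(x):
    """Strong classifier: sign of the weighted vote of all stumps."""
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

print([predict(x) for x in X])   # recovers all six labels
```

No single stump can separate these labels, but the weighted vote of three can; the cascade detector applies the same principle to HAAR features extracted from image windows.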
CONCLUSION
In conclusion, the development of an embedded night-vision system for
pedestrian detection using the YOLOv2 CNN architecture has demonstrated
significant potential for improving safety in low-visibility conditions.
YOLOv2's real-time object detection capabilities, coupled with its ability to
identify pedestrians even in challenging lighting environments, make it a robust
solution for night-time applications. By leveraging infrared imaging and
optimizing the model for embedded systems, this approach addresses both
hardware constraints and performance requirements, ensuring effective
detection while maintaining low power consumption and real-time processing
capabilities.

The implementation of YOLOv2 allows for a balance between accuracy and
speed, which is crucial for real-world applications such as automotive safety
systems, surveillance, and autonomous driving. Its streamlined architecture
reduces computational complexity, making it suitable for embedded
environments without sacrificing detection performance. Additionally, the
integration of night-vision technologies enhances the system’s ability to detect
pedestrians in dark or poorly lit areas, which is critical for reducing accidents
and enhancing safety during nighttime driving.

However, there are challenges and areas for improvement, such as further
refining the system’s ability to cope with adverse weather conditions and
enhancing detection accuracy in highly cluttered environments. Future work
may focus on integrating more advanced sensors or hybrid models that combine
multiple detection methods to improve robustness and performance across
different night-time scenarios.

In summary, the embedded night-vision pedestrian detection system using the
YOLOv2 CNN provides an effective and efficient solution for enhancing
pedestrian safety in low-visibility conditions. The system's combination of real-
time detection, low resource consumption, and adaptability to embedded
platforms positions it as a valuable tool for applications in modern intelligent
transportation systems.
FUTURE SCOPE OF THE PROJECT
The embedded night-vision system for pedestrian detection using YOLOv2
CNN holds significant potential for future developments, particularly as urban
environments continue to evolve and demand smarter safety solutions. One of
the primary avenues for advancement lies in enhancing the accuracy and
efficiency of pedestrian detection algorithms. While YOLOv2 provides a robust
framework for real-time object detection, future iterations could incorporate
more sophisticated neural network architectures, such as YOLOv5 or
transformer-based models, to improve detection accuracy under various
conditions, including occlusions and diverse lighting scenarios.

Additionally, integrating advanced sensor technologies, such as LiDAR or
thermal imaging, could enhance the system's capability to detect pedestrians in
challenging environments where standard cameras might struggle. This multi-
sensor fusion approach would not only improve detection reliability in low-light
conditions but also reduce false positives and negatives, leading to more
dependable safety mechanisms for pedestrians, particularly in urban areas with
high traffic volumes. Furthermore, advancements in hardware, such as the
development of more efficient edge computing platforms, will allow for the
deployment of more complex models without compromising the system's real-
time performance.

The implementation of machine learning techniques to adapt the detection
algorithms based on real-time data could also be explored. By employing
continual learning methodologies, the system could evolve and refine its
detection capabilities based on new pedestrian behavior patterns and
environmental changes, thus enhancing its adaptability. Moreover, the
incorporation of cloud computing could facilitate extensive data collection from
multiple deployed systems, allowing for centralized model training and updates.
This would ensure that all embedded systems remain up-to-date with the latest
algorithms and improvements.

Finally, the future scope of this project can extend to enhancing user interaction
through the development of mobile applications that provide alerts to
pedestrians and drivers in real time. Such applications could integrate with
smart city infrastructure to communicate detection data to nearby vehicles,
thereby improving overall road safety. As cities continue to integrate technology
into their infrastructure, the embedded night-vision system can play a vital role
in promoting pedestrian safety, particularly during nighttime or low-visibility
conditions, ultimately contributing to the development of smarter, safer urban
environments.
REFERENCES
1. Zhang, L., & Zhang, L. (2017). A review of pedestrian detection methods. International Journal of Advanced Computer Science and Applications, 8(8), 100-107.

2. Zheng, S., & Wang, Y. (2018). Pedestrian detection via deep learning: A survey. Journal of Computer Science and Technology, 33(1), 1-20.

3. Moussa, A., & Elhoseny, M. (2021). Pedestrian detection in night-time and low-light environments: A review. Journal of Ambient Intelligence and Humanized Computing, 12(2), 2153-2168.

4. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779-788.

5. Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7263-7271.

6. Liu, W., Anguelov, D., & Goh, Y. (2016). SSD: Single shot multibox detector. In European Conference on Computer Vision (ECCV), 21-37.

7. Li, Y., & Zhu, Z. (2017). Real-time pedestrian detection in embedded systems. IEEE Transactions on Intelligent Transportation Systems, 19(2), 416-425.

8. Kim, H., & Kim, Y. (2018). Implementation of YOLOv2 on FPGA for real-time object detection. IEEE Access, 6, 16442-16452.

9. Guan, Y., & Zhang, C. (2020). Nighttime pedestrian detection using infrared images. Sensors, 20(10), 2762.

10. Wang, H., Zhang, Q., & Liu, C. (2019). Robust pedestrian detection in low-light environments using deep learning. Neurocomputing, 348, 38-48.

11. Lin, T., & Chen, Y. (2019). Nighttime pedestrian detection using fusion of visible and infrared images. IEEE Transactions on Image Processing, 28(12), 5965-5976.

12. Huang, K., & Liu, S. (2020). A lightweight pedestrian detection system based on YOLOv2 for mobile applications. Sensors, 20(17), 4876.

13. Xiao, Y., & Wang, Y. (2020). Efficient pedestrian detection for intelligent transportation systems using YOLOv2 on Raspberry Pi. IEEE Access, 8, 76359-76366.

14. Song, J., & Wu, Z. (2019). A real-time pedestrian detection system for smart vehicles based on YOLOv2. Journal of Intelligent Transportation Systems, 23(3), 224-232.

15. Pang, J., & Liu, H. (2020). Performance optimization of YOLOv2 for embedded systems using model compression techniques. IEEE Transactions on Circuits and Systems for Video Technology, 30(3), 674-688.

16. Huang, Z., & Liu, J. (2019). Comparative evaluation of pedestrian detection algorithms for low-light conditions. Journal of Visual Communication and Image Representation, 62, 82-93.

17. Zhang, D., & Yang, X. (2019). Combining thermal and visible images for pedestrian detection using YOLOv2. IEEE Transactions on Image Processing, 28(5), 2236-2248.

18. Li, H., & Huang, J. (2020). Multispectral pedestrian detection using YOLOv2. Sensors, 20(3), 731.

19. Ding, Y., & Zhang, L. (2020). A case study on deploying YOLOv2 for real-time pedestrian detection in nighttime environments. International Journal of Computer Applications, 975, 1-6.

20. Li, Q., & Zhang, L. (2018). An embedded pedestrian detection system based on YOLOv2 for low-light scenarios. IEEE Transactions on Embedded Computing and Systems, 18(4), 1-14.