Abhishek Seminar
SEMINAR REPORT
CERTIFICATE
PLACE: PALAKKAD
DATE: 29-03-2021 STAFF – IN – CHARGE
ACKNOWLEDGEMENT
Last but not least, I express my sincere gratitude to my parents for their
valuable encouragement and for being a source of inspiration.
ABHISHEK UR
18040828
VISION
MISSION
3. Inculcating social responsibility and ethical values among students through value
education.
VISION
MISSION
2. Imparting knowledge and training for current and advanced technology in the
field of Electronics.
ABSTRACT
One of the major challenges in optical camera communication (OCC) is the low data
transmission rate, caused by the low sampling rate of a camera-based receiver compared
with the very high-speed modulation of light-emitting diodes (LEDs). This paper presents a
high-speed OCC system using a novel capturing strategy called selective capture (SC). The
proposed SC scheme provides a data rate of up to 10.272 kbps in OCC when evaluated with a
Raspberry Pi camera module (RaspiCam) as the receiver. Experiments were conducted with
a 4×4 red, green and blue (RGB) LED array employed as the transmitter, time-synchronized
using a keyframe. The RaspiCam was configured with the SC values, in pixels, using a
Linux command-line tool. This configuration allows the resolution and the SC area to be
selected so that only part of the full camera frame is captured. Experimental results
demonstrate that a capture speed of 435 frames per second (fps) can be achieved, yielding
high-speed indoor OCC with a bit error rate (BER) of 10⁻⁵ at a transmission distance of
125 cm.
TABLE OF CONTENTS
5 CONCLUSION
REFERENCES
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
The acceleration of urbanization, increasingly complicated urban traffic conditions and
the continuous expansion of the overall scale of cities have posed great challenges to urban
traffic management. The introduction of an Intelligent Transport System is necessary for the
future development of urban traffic management. Against this background, visual sensing
technology is applied to the intelligent traffic control system. A 5-megapixel CCD camera
is designed and used to analyze, in real time, the traffic entering each intersection of an
urban road and to calculate traffic parameters for each lane and queue, such as vehicle
queue length and average waiting time. These parameters provide real-time data to the
signal light control system for dynamic signal parameter configuration, achieving
intelligent single-point light signal control, trunk control and regional control.
The system can be popularized and applied in the fields of intelligent signal light control
and traffic information collection.
The rapid development of the social economy has created a contradiction between the
development of urban road transport facilities and the ever-expanding number of urban
motor vehicles. How to effectively alleviate increasingly congested urban traffic, and the
social problems it causes, has become a problem that urban managers must constantly
consider and address. As an all-round service system, the intelligent transportation system
can provide comprehensive services for traffic participants. It can complete the functions
of information collection, processing, distribution, exchange and analysis, and, by
connecting the entire city's traffic together, it provides the basis for easing congestion
and relieving traffic pressure. However, in the intelligent transportation system, the
object of observation is still the motor vehicle. Therefore, it is particularly important
to observe and feed back the flow, queuing status and waiting time at each intersection;
the timeliness and accuracy of the feedback data affect both road status discrimination and
the follow-up signal optimization strategy. Based on this, an observation system based on
visual sensing technology is designed, which integrates vehicle target detection, traffic
parameter analysis, video recording, compression and transmission.
CHAPTER 2
SYSTEM WORKING PRINCIPLE
The system can identify the video frame by frame and automatically match the
corresponding lane, track the passing vehicles and judge their behavior, so as to
calculate, for all directions of an urban road intersection, the traffic flow of each lane,
the queue length, the queue waiting time and the average vehicle speed. These real-time
traffic indicators are connected to the traffic signal controller to control the signals at
a single point. They can also be used as sensors of the urban road network traffic guidance
system, providing input data for automatic traffic guidance and control of the overall road
conditions, while at the same time supporting the sending, recording and storage of
real-time traffic parameters. The system block diagram is shown in Fig. 1.
Based on the above process, the system can not only detect vehicles under general traffic
conditions, but also achieve a high detection rate under severe traffic jams with small
inter-vehicle distances.
The system identifies traffic parameters through video analysis: it analyzes in real time
the vehicles entering each direction of an urban road intersection, calculates traffic
parameters such as traffic flow, queue length and average waiting time for each lane, and
provides them to the intelligent signal system for dynamic signal parameter configuration,
achieving intelligent single-point light signal control, trunk control and area control.
In general, these technical specifications describe the minimum physical and functional
properties of a video detection system. The system shall be capable of monitoring all
licensed vehicles on the roadway, providing video detection for the areas outlined in the
construction drawings.
Specific targets are tracked using online learning mechanisms and optical flow, which
complement each other: online learning is based on appearance features, while the optical
flow method is based on motion, and the two work together to achieve higher tracking
accuracy. The online learning and tracking algorithm builds weak classifiers (such as
decision trees with few layers) by drawing different random samples to obtain different
features, and then lets several weak classifiers vote together. At the same time, the model
is continuously updated by deleting some of the weak classifiers.
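The report gives no code for this step. As a rough illustration of the weak-classifier voting idea only, the sketch below trains decision stumps offline and combines them by majority vote; it does not implement the online update, and all names and the stump form are illustrative assumptions, not the report's algorithm.

```python
import numpy as np

def train_stump(X, y, feature):
    # Pick the threshold on one feature that best separates the two classes
    # (vehicle = 1, background = 0). Each stump is one weak classifier.
    best_thr, best_err = 0.0, 1.0
    for thr in np.unique(X[:, feature]):
        pred = (X[:, feature] > thr).astype(int)
        err = np.mean(pred != y)
        if err < best_err:
            best_thr, best_err = thr, err
    return feature, best_thr

def ensemble_predict(stumps, x):
    # Each weak classifier votes; the majority decides the final label.
    votes = [int(x[f] > thr) for f, thr in stumps]
    return int(sum(votes) > len(votes) / 2)
```

In the online variant described in the text, the worst-performing stumps would periodically be deleted and retrained on fresh samples, which this offline sketch omits.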
The optical flow method is a classical tracking method based on three assumptions:
(a) the brightness of adjacent frames is constant; (b) the distance objects move between
adjacent frames is very small; (c) pixels of the same image patch have the same motion.
From these assumptions, the brightness constancy equation with respect to time t follows
from calculus:

I_x·u + I_y·v + I_t = 0

where I_x, I_y and I_t are the spatial and temporal image gradients and (u, v) is the
flow. In smooth image regions, I_x or I_y is very small, so the resulting matrix has rank
less than 2 and the system has no unique solution. It is therefore necessary to extract
feature points before applying the optical flow method.
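To make the rank argument concrete, the following is a minimal Lucas-Kanade-style solver in plain NumPy, a sketch rather than the system's implementation; the window size and function signature are illustrative assumptions. It solves the constraint equation by least squares over a small window, which fails (rank-deficient normal matrix) exactly in the smooth regions noted above.

```python
import numpy as np

def lucas_kanade(frame1, frame2, x, y, win=2):
    # Spatial gradients of frame1 and the temporal difference between frames.
    Iy, Ix = np.gradient(frame1.astype(float))
    It = frame2.astype(float) - frame1.astype(float)
    # Collect the constraint Ix*u + Iy*v = -It over a (2*win+1)^2 window.
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Least-squares solution for the flow (u, v); ill-posed when rank < 2,
    # which is why feature points with strong gradients are needed.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```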
Each line tracking module has three pins: VCC, GND and a signal pin. To power the module,
positive voltage is applied at the VCC pin and ground at the GND pin; it operates from DC
voltages in the range of 3.3 V to 5 V. The third pin is the output pin, from which the
output voltage or signal is taken. Any controller, such as an Arduino or another
microcontroller, can be connected to this line tracking module to read the output voltage
or signal. Besides this, the module also contains two infrared LEDs, one potentiometer and
one power LED.
Of the two infrared LEDs, the first is used as an infrared transmitter to emit the
infrared signal and the second as an infrared receiver to receive it. The potentiometer is
used to adjust the distance or range of the line tracking module: by increasing or
decreasing its resistance, the sensing area of the module can be adjusted, which is easily
done with a knob or screwdriver. The power LED is on whenever the module is powered and off
otherwise.
The working principle of the line tracking module is very simple and is almost the same as
that of an obstacle avoidance module, except that it has a lower-power transmitter. The
infrared transmitter emits an infrared signal into its sensing area. When an object such as
a sheet of paper or a line is present in this area, it blocks the signal and reflects it
back toward the infrared receiver.
On the basis of this reflected signal, the resistance of the tracking module changes, and
this resistance is inversely proportional to the output voltage: when the resistance
increases, the output voltage decreases, and when it decreases, the output voltage
increases. Because the output of the module is connected to a controller, the controller
receives these voltages and, on their basis, decides what the object is, where it is and at
what distance it is placed.
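The inverse resistance-to-voltage relation described above can be modeled as a simple voltage divider. The sketch below is purely illustrative: the supply voltage, fixed resistor and photo-resistance range are assumed values, not taken from the module's datasheet.

```python
def sensor_output(reflected_ir, v_supply=5.0, r_fixed=10_000.0):
    # Stronger IR reflection -> lower photo-element resistance -> higher output.
    # Hypothetical mapping: resistance falls linearly from 100 kΩ (no
    # reflection) to 1 kΩ (full reflection); reflected_ir is in [0, 1].
    r_photo = 100_000.0 - reflected_ir * 99_000.0
    # Output voltage taken across the fixed resistor of the divider.
    return v_supply * r_fixed / (r_fixed + r_photo)
```

With these assumed values, a dark (absorbing) line under the sensor yields a low output voltage and a reflective surface a high one, which is the signal the controller thresholds.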
The detection mechanism uses deep learning, following an "RBM + neural network" strategy.
A Restricted Boltzmann Machine (RBM) is a network structure that can abstract image
features. It consists of a hidden layer and a visible layer, whose nodes take only the two
states 0 and 1 (assume the hidden layer has m nodes and the visible layer has n nodes). The
structure of the restricted Boltzmann machine is shown in Fig. 2.7.
(b) A hidden layer node and a visible layer node form a two-node "maximal potential
group" (clique).
Here, W_ij denotes the connection weight between visible node i and hidden node j, a_i the
offset (bias) of visible node i, and b_j the offset of hidden node j.
When the state of the visible layer is given, the activation probability of the j-th hidden
node can be obtained thanks to the conditional independence among the hidden layer nodes:

P(h_j = 1 | v) = sigmoid(b_j + Σ_i v_i·W_ij)

When the hidden layer state is given, the activation probability of the i-th visible node
can be obtained thanks to the conditional independence among the visible layer nodes:

P(v_i = 1 | h) = sigmoid(a_i + Σ_j h_j·W_ij)
The purpose of the RBM is to pass the visible layer data to the hidden layer by
probabilistic inference and then back from the hidden layer to the visible layer, so that
the states of the visible and hidden layers become as consistent as possible. Maximum
likelihood estimation is used to obtain the model parameters that achieve this goal. Since
the maximum likelihood estimate is very difficult to find directly, contrastive divergence
(CD) is used to obtain it effectively and quickly.
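A minimal sketch of one CD-1 update for a binary RBM, using the standard conditional probabilities above, might look as follows; the learning rate, layer sizes and function names are illustrative assumptions, not the report's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, rng, lr=0.1):
    # One contrastive-divergence (CD-1) update for a binary RBM.
    # Up pass: P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i W_ij), then sample h.
    ph0 = sigmoid(b + v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down pass: P(v_i = 1 | h) = sigmoid(a_i + sum_j h_j W_ij), then sample v.
    pv1 = sigmoid(a + h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(b + v1 @ W)
    # Update toward the data statistics and away from the reconstruction.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)
    return W, a, b
```

Stacking several such layers and fine-tuning with back-propagation gives the deep model described next.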
This describes a single-layer RBM; the whole model stacks several RBMs together. The RBM
of each layer is optimized independently, and the whole stack is then fine-tuned by back-
propagation. In this way, an n-layer RBM combination can abstract the image into a very
compact representation, similar to the abstraction performed by the human brain, and this
abstracted data then feeds a powerful neural network. After deep learning, very high-
precision vehicle detection results can be obtained.
There are several upstream modules: the motion compensation module, the tracking module
and the detection module. However, their results are not necessarily consistent, so a
comprehensive analysis module is needed to make a final judgment based on the information
returned by the other modules. In this system, a Hidden Markov Model (HMM), a dynamic-
system model, is adopted as the comprehensive analyzer. The HMM-based analyzer takes the
results of the other modules, their various parameters, and image time-domain and
frequency-domain parameters as incoming observation variables, and treats which module is
correct as the hidden variable. Therefore, not only the current results of each module are
taken into consideration, but also the results of each module at previous times, making the
analysis more robust and accurate.
The training process is the EM algorithm. Some of the model's parameters are observed
variables, while others are unknown (hidden variables).
E step: assume a set of model parameters, then compute the hidden variables according to
their probabilities; the Viterbi algorithm is used for this. M step: using the computed
hidden and observed variables, recompute the model parameters by maximum likelihood
estimation. After continuous iteration of the E and M steps, and given a reasonable
initialization of the model parameters, the optimal solution can be obtained. With this
comprehensive analysis module, high-precision tracking and detection are achieved.
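The Viterbi decoding used in the E step can be sketched as follows in log-space NumPy. This is a generic implementation, not the system's: the initial, transition and emission probabilities (pi, A, B) and the two-state toy setup in the usage below are illustrative assumptions.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    # Most likely hidden-state sequence for observation indices `obs`.
    # pi: initial probabilities (n,); A: transitions (n, n); B: emissions (n, k).
    n, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    psi = np.zeros((T, n), dtype=int)           # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)     # scores[from, to]
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

Here the hidden state would index which module's result is trusted at each time step, and the observations would encode the modules' outputs and image parameters.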
CHAPTER 3
THE MAIN FUNCTIONS OF THE SYSTEM
Using a 5-megapixel DSP high-definition smart camera, the vehicles entering the urban road
intersection from each direction are analyzed in real time; through vehicle detection, the
queue lengths in all directions are estimated. The system detects, locates and tracks
vehicles in the video area using video-based multi-target location and tracking technology,
and calculates important parameters such as the queue length in all directions of urban
road junctions. These real-time traffic indicators can be connected to the signal
controller, which controls traffic signals at a single point, and can also serve as sensors
of the urban road network traffic guidance system, providing input data for overall
automatic traffic guidance and control. At the same time, quantitative or qualitative
alarms are raised for different queue lengths, and the alarm information is uploaded to the
back-end management host. In this way, intelligent traffic light control, real-time warning
of road emergencies, urban traffic guidance and other important application functions can
be achieved. The queue length detection model is shown in Fig. 3.1.
The system can collect traffic flow information per lane at the intersection. Through
real-time monitoring and location tracking of vehicles in each lane, traffic conditions at
the corresponding junctions can be analyzed. Specific statistics include traffic flow,
average speed, occupancy rate, vehicle type, lane number, driving direction and so on.
To complete the traffic flow monitoring function, more than three switching signals are
placed in different directions of each monitoring area to simulate the action of an
inductive loop coil, and the signal is finally output through the RS232/485 interface.
Detection, positioning, tracking and statistics are performed on the front of the vehicle
and can adapt to different lighting conditions, such as day and night, as well as to harsh
weather. For per-lane and total traffic flow statistics, daytime detection achieves
parameter accuracy above 98% and nighttime detection above 92%; the statistical period can
also be configured. Traffic flow test results are shown in Fig. 3.2.
The system has an average speed detection function. After calibrating the video, it
measures the average speed of the vehicles in a lane over a period of time. The system
continuously collects two image frames from the standard video surveillance signal and
calculates the vehicle speed from corresponding points in the two frames, the displacement
of the corresponding line and block, and the frame rate of the camera; from this it further
calculates the average speed per lane. The average traffic speed per lane during the green
light period is estimated. The accuracy of daytime speed detection reaches above 95% and
that of nighttime detection above 90%.
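The speed calculation described above reduces to real-world displacement over elapsed time between the two frames. A minimal sketch follows; the calibration factor (metres per pixel, obtained from the video calibration step) and the function name are illustrative assumptions.

```python
def average_speed_kmh(disp_pixels, metres_per_pixel, frames_apart, fps):
    # Real-world displacement between the two captured frames.
    metres = disp_pixels * metres_per_pixel
    # Elapsed time from the camera frame rate.
    seconds = frames_apart / fps
    return metres / seconds * 3.6  # m/s -> km/h
```

For example, a 50-pixel displacement at 0.1 m per pixel over 5 frames at 25 fps corresponds to 5 m in 0.2 s, i.e. 90 km/h; the per-lane average is then the mean over tracked vehicles in the measurement period.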
The system has a vehicle queue waiting time detection function. Through vehicle tracking,
combined with the signal light detector, it can analyze, for each vehicle, the waiting time
at the red light and the number of red light cycles waited. The signal control algorithm
can use this information to adjust the red light cycle, green split, phase and other signal
control parameters to ease traffic congestion.
The average waiting time per lane is estimated from the vehicle trajectories. The core
technology is long-duration vehicle tracking of over 5 minutes, with a waiting time
estimation accuracy of more than 90%.
OCC employs either a single- or multiple-camera setup as the receiver (RX); such cameras
can now be found in most mobile devices, including mobile phones, smart watches, laptops
and tablets, as well as in vehicles and buildings [2]. Therefore, existing LED lighting
fixtures and widely available cameras, i.e., cheap off-the-shelf devices, can be employed
in OCC. While the majority of VLC systems are limited to the visible light spectrum, OCC
employing different types of cameras can utilize the entire light spectrum from visible
light and near-infrared (NIR) to the UV-C band. In OCC, the camera captures images or video
streams of intensity-modulated light sources, i.e., a single LED, multiple LEDs or digital
display screens, and extracts the information by means of image processing [17]. Thus,
camera-based OCC can be used in both indoor and outdoor environments for a range of
applications [11]. The development of complementary metal-oxide-semiconductor (CMOS)
sensor technologies has created a new generation of high-pixel-resolution, high-speed
built-in cameras [11,17]. Although most cameras capture images at a frame rate (FR) of 30
to 60 fps, the OCC standardization covers high-speed cameras, which can support an FR of up
to thousands of fps [5,11]. Similar to VLC, OCC predominantly relies on the directional
line-of-sight (LOS) transmission mode. In addition, OCC conveniently offers a MIMO
capability by extracting data from multiple captured light sources. For these reasons, OCC
can be considered a subset of VLC. It is interesting to note that the imaging MIMO
capability of the camera can also be exploited for other applications, such as range
estimation, shape detection and scene detection.
CHAPTER 4
CONCEPT OF SELECTIVE CAPTURE
Here, x' and y' are the top-right coordinates and N_x' and N_y' are the width and height of
the captured image frame, respectively. This is the basic equation used to define the
selective area in the full capture frame. To define the SC area that captures the
taillights of the moving vehicle in the current scheme, an image processing technique
called template matching is performed: it first determines the target template in the image
and then finds the position of that template in a frame. The area most similar to the
template is then taken as the target. It should be noted that this capture area needs to
change dynamically as the distance between the LED transmitter and the camera changes. The
calculation of the SC area based on the template matching technique is given by [20]:
SC = Σ_{x'=0}^{N_x'} Σ_{y'=0}^{N_y'} {T(x', y') − I(X + x', Y + y')}²

Dept of Electronics Engineering 2020-21 GPTC Palakkad
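The SC score defined by this sum-of-squared-differences criterion can be computed directly, if inefficiently, by sliding the template over the image. The sketch below is a plain NumPy illustration of the idea, not the paper's implementation; the function name and the exhaustive search are illustrative assumptions.

```python
import numpy as np

def match_template_ssd(image, template):
    # Slide the template T over image I; the score at offset (X, Y) is the
    # sum over (x', y') of {T(x', y') - I(X + x', Y + y')}^2.
    th, tw = template.shape
    ih, iw = image.shape
    best, best_xy = None, None
    for Y in range(ih - th + 1):
        for X in range(iw - tw + 1):
            diff = template - image[Y:Y + th, X:X + tw]
            score = float((diff ** 2).sum())
            if best is None or score < best:
                best, best_xy = score, (X, Y)
    return best_xy, best
```

The offset with the minimum score locates the taillight template, and the SC capture area is then centered on that position and updated frame by frame as the vehicle moves.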
CHAPTER 5
CONCLUSION
The system is based on video multi-target positioning and tracking technology; the key
problems are how to locate and track the vehicles in the video area and how to calculate
important parameters such as traffic flow, queue length, queue waiting time and average
speed. These real-time traffic indicators can be connected to the signal controller to
control traffic signals at a single point, and can also serve as sensors of the urban road
network traffic guidance system, providing input data for overall automatic traffic
guidance and control.
REFERENCES
• WWW.IEEE.COM
• WWW.GOOGLE.COM