Night Time Vehicle Detection System Using ROI and Image Enhancement


ABSTRACT

In nighttime images, vehicle detection is a challenging task because of low contrast and
luminosity. In this work, the authors combine a novel region-of-interest (ROI) extraction
approach that fuses vehicle light detection and object proposals with a nighttime image
enhancement approach based on improved multiscale retinex, in order to extract accurate
ROIs and enhance images for accurate nighttime vehicle detection. Experimental results
demonstrate that the proposed nighttime image enhancement method, score-level
multifeature fusion, and the ROI extraction method are all effective for nighttime vehicle
detection. Moreover, the proposed vehicle detection method achieves a detection rate of
93.34 percent and outperforms other models, detecting blurred and partly occluded
vehicles as well as vehicles of various sizes, numbers, locations, and backgrounds.

TABLE OF CONTENTS
CHAPTER NO TITLE PAGE NO
ABSTRACT iv
LIST OF FIGURES vii
1 INTRODUCTION 1
1.1 Digital Image Processing 2
1.2 Image 3
1.2.1 Pixel 4
1.2.2 Image Processing 5
1.3 Image Preprocessing 8
1.3.1 Image Binarization 8
1.3.2 Edge Detection 8
1.3.3 ROI Extraction 9
1.3.4 Image Enhancement 9
1.4 Automatic Trimap Generation 11
1.5 Image Acquisition 11
1.6 Feature Extraction 12
1.6.1 Repeated Line Tracking 12
1.6.2 Even Gabor Filter 12
2 LITERATURE SURVEY 14
2.1 Vehicle Number Plate Detection using FPGA Implementation 14
2.2 Image Processing based Vehicle Detection using Gaussian Model 15
2.3 Night-time Surveillance using Intelligent Transportation System 16
2.4 Issues Raised in Number Plate Detection 17
3 METHODOLOGY 19
3.1 Proposed System 19
3.2 Overview of the System 19
3.3 OCR 21
3.3.1 What technology lies behind OCR? 22
3.3.2 What principles is FineReader OCR based on? 23
3.3.3 Recognition of digital camera images 23
3.3.4 How to use OCR software? 24
3.3.5 What benefits does OCR bring to us? 24
3.4 Feature Extraction 24
3.5 General 25
3.6 Image Processing 26
3.7 Grey Level Co-occurrence Matrix 27
3.7.1 Extraction of GLCM 27
3.7.2 Extraction of Texture Features of the Image 28
3.7.3 Angular Second Moment 29
3.8 Inverse Difference Moment 30
3.9 Entropy 30
3.10 Correlation 30
3.10.1 Texture Analysis Using the Grey Level Co-occurrence Matrix (GLCM) 30
4 RESULTS AND DISCUSSION 32
5 CONCLUSION AND FUTURE WORK 35
REFERENCES 36
APPENDIX 37

LIST OF FIGURES
FIGURE No. FIGURE NAME PAGE No.
1.1 A vehicle image 3
1.2 A grey scale image 4
1.3 Pixel representation 5
1.4 Structure of grey-scale image 6
1.5 8-bit grey scale 7
1.6 Different shades of grey 7
1.7 Color image 8
1.8 The grey value histogram 9
1.9 3×3 neighbourhood pixels 10
1.10 Image enhancement 11
4.1 Input Image 32
4.2 Grayscale Image 32
4.3 Binary Image 33
4.4 ROI Identification 33
4.5 OCR Character Reader 34
4.6 Final Output 34

CHAPTER-1

INTRODUCTION

Number plate recognition is a straightforward method for vehicle identification.
Nevertheless, number plates have different shapes, sizes, and colours in different
countries, which makes recognition a challenging task. It is difficult to detect the
boundary of the number plate in input car images of outdoor scenes because of the
colours of the plate characters and of the plate background, so the gradients of the
original image are used to detect candidate number plate regions. There are also
algorithms based on a combination of morphological operations, segmentation, and the
Canny edge detector. A typical license plate location algorithm consists of steps such as
edge detection, morphological operations like dilation and erosion, smoothing,
segmentation of characters, and recognition of the plate characters.
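
As a rough illustration of these localization steps, the following Python sketch (OpenCV is assumed here; the report does not prescribe a particular library, and the thresholds and kernel size are only illustrative) applies smoothing, Canny edge detection, and a morphological closing (dilation followed by erosion) to find candidate plate regions:

# Hypothetical sketch of the plate-localization steps described above, assuming OpenCV 4.x.
import cv2

def candidate_plate_regions(image_path, min_area=1000, aspect_range=(2.0, 6.0)):
    """Return bounding boxes (x, y, w, h) of likely number-plate regions."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Smoothing suppresses noise before edge detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # The Canny edge detector highlights the dense character edges on the plate.
    edges = cv2.Canny(blurred, 100, 200)

    # Dilation followed by erosion (a closing) merges the character edges
    # into solid blobs that can be treated as plate candidates.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # Keep contours whose area and aspect ratio resemble a number plate.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        if w * h >= min_area and aspect_range[0] <= aspect <= aspect_range[1]:
            boxes.append((x, y, w, h))
    return boxes

Each returned box would then be passed on to character segmentation and recognition.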

Number plates are used as a means of identification for vehicles throughout most
countries. A number plate recognition system uses image processing techniques to
recognize vehicles by their number plates. Such systems are used for effective traffic
control and for security applications such as access control to restricted areas and
tracking of wanted vehicles. Although number plate recognition has been researched for
many years, it is still a difficult task. A number plate detection system analyses an input
image to identify the local patches containing a number plate. Since a plate can appear
anywhere in an image and at varied sizes, it is impractical to examine every pixel of the
image to find it. When a vehicle enters an entry gate, the number plate can be detected
automatically at the entrance point and stored in the database. Number Plate Recognition
(NPR) for Indian number plates is more difficult than for foreign plates because no
standard is followed for the proportions or size of the plate. The recognition task is
further complicated by lighting conditions, which make image acquisition difficult. An
NPR system uses a photo-detection approach that includes acquiring an image of the
vehicle, extracting the region of interest, and segmenting and extracting the characters.
It is difficult to locate the bounding region or edge of the number plate in the input
vehicle image of an outdoor scene because of the colours of the plate characters and of
the plate background. The gradients of the original image are used to find candidate
number plate regions. There are algorithms that depend on a combination of
morphological operations, segmentation, and the Canny edge detector. Number plate
detection comprises steps such as edge detection, morphological operations like dilation
and erosion, smoothing, segmentation of characters, and recognition of the plate number.

License Plate Recognition (LPR) is an image processing system used to recognize
vehicles by identifying their license plates. LPR systems are cameras that convert a
picture of a vehicle's license plate into computer-readable data that can be matched
against a database list. LPR is one form of Intelligent Transportation Systems (ITS)
technology that not only recognizes and counts vehicles but also distinguishes each one
as unique. An LPR system can be used in traffic control management to recognize
vehicles that commit traffic violations, such as entering a restricted area without
permission, occupying lanes reserved for public transport, crossing a red light, or
breaking speed limits. In other applications, such as commercial vehicle operations or
secure-access control, a vehicle's license plate is checked against a database of
acceptable plates to determine whether a truck can bypass a weigh station or a car can
enter a gated community or parking lot. LPR is a relatively new tool for automatic
vehicle and traffic monitoring using digital image processing, and various commercial
systems are available around the world. In this system, two types of classifier are used:

• OCR (optical character recognition)-based methods (see the sketch below)
• Learning-based methods
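
As a minimal sketch of the OCR-based route only, assuming OpenCV and the pytesseract wrapper for the Tesseract engine (neither library is mandated by this report, and the character whitelist is illustrative), a cropped plate image could be binarized and read as follows:

# Hypothetical OCR-based plate reading sketch, not the system's actual implementation.
import cv2
import pytesseract

def read_plate_text(plate_image_path):
    """Binarize a cropped plate image and run OCR on it."""
    plate = cv2.imread(plate_image_path)
    gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)

    # Otsu thresholding separates dark characters from the lighter plate background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Treat the plate as a single text line and restrict Tesseract to
    # the alphanumeric characters expected on a number plate.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()

A learning-based alternative would instead train a classifier on segmented character images, which is outside the scope of this sketch.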

1.1 Digital image processing

Digital image processing is the use of computer algorithms to perform image processing
on digital images. As a subcategory or field of digital signal processing, digital image
processing has many advantages over analog image processing. It allows a much wider
range of algorithms to be applied to the input data and can avoid problems such as the
build-up of noise and signal distortion during processing. Since images are defined over
two dimensions (perhaps more), digital image processing may be modeled in the form of
multidimensional systems.

Digital image processing technology for medical applications was inducted into
the Space Foundation Space Technology Hall of Fame in 1994. In 2002, Raanan Fattal
introduced gradient domain image processing, a new way to process images in which
the differences between pixels are manipulated rather than the pixel values themselves.
Digital image processing allows the use of much more complex algorithms and can
therefore offer both more sophisticated performance at simple tasks and the
implementation of methods that would be impossible by analog means.

In particular, digital image processing is the only practical technology for:

 Enhancement
 Feature extraction
 Segmentation

1.2 IMAGE

An image is an array or matrix of square pixels (picture elements) arranged in
columns and rows. An image (from Latin: imago) is an artifact, for example a two-
dimensional picture, that has a similar appearance to some subject, usually a physical
object or a person.

Fig 1.1: A vehicle image

1.2.1 Pixel
Image processing is a subset of the electronic domain wherein the image is
converted to an array of small integers, called pixels, representing a physical quantity
such as scene radiance, stored in digital memory, and processed by a computer or other
digital hardware.

An image can be regarded as a two-dimensional function whose values give the
brightness of the image at any given point. Image brightness values can be any real
numbers in the range 0.0 (black) to 1.0 (white). The ranges of x and y clearly depend on
the image, but they can take all real values between their minima and maxima.

A digital image differs from a photo in that the x, y, and f(x, y) values are all
discrete. Usually they take on only integer values, with x and y ranging, for example,
from 1 to 256 each, and the brightness values ranging from 0 (black) to 255 (white).

A digital image can be considered as a large array of discrete dots, each of which
has a brightness associated with it. These dots are called picture elements, or more
simply pixels. The pixels surrounding a given pixel constitute its neighbourhood.

A neighbourhood can be characterized by its shape in the same way as a matrix,
for example a 3×3 or 5×7 neighbourhood. Except in very special circumstances,
neighbourhoods have odd numbers of rows and columns; this ensures that the current
pixel is in the centre of the neighbourhood.
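
To make the neighbourhood idea concrete, here is a small NumPy sketch (NumPy is an assumption; the report does not name a toolkit) that takes the 3×3 neighbourhood centred on a pixel and returns its mean grey value:

# Illustrative sketch only: extract the 3x3 neighbourhood around a pixel
# and compute its mean, assuming the image is a NumPy array of grey values.
import numpy as np

def neighbourhood_mean(image, row, col, size=3):
    """Mean grey value of the size x size neighbourhood centred on (row, col)."""
    half = size // 2  # odd sizes keep the current pixel in the centre
    r0, r1 = max(row - half, 0), min(row + half + 1, image.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, image.shape[1])
    return image[r0:r1, c0:c1].mean()

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 255]], dtype=np.uint8)
print(neighbourhood_mean(img, 1, 1))  # mean of all nine pixels, about 68.3

Replacing each pixel with such a neighbourhood mean is the simplest example of a smoothing filter.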

Fig 1.2: A grey scale image

Fig 1.3: Pixel representation

1.2.2 IMAGE PROCESSING


Image processing is any form of signal processing for which the input is an
image, such as a photograph or video frame; the output of image processing may be
either an image or a set of characteristics or parameters related to the image. Most
image-processing techniques involve treating the image as a two-dimensional signal and
applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and
analog image processing are also possible. The techniques discussed here apply to all of
them. The acquisition of images (producing the input image in the first place) is
referred to as imaging.

Image processing allows one to enhance image features of interest while
attenuating detail irrelevant to a given application, and then extract useful information
about the scene from the enhanced image. This introduction is a practical guide to the
challenges, and the hardware and algorithms used to meet them.

An image is digitized to convert it to a form which can be stored in a computer's
memory or on some form of storage media such as a hard disk or CD-ROM. This
digitization procedure can be done by a scanner, or by a video camera connected to a
frame grabber board in a computer. Once the image has been digitized, it can be
operated upon by various image processing operations.

Image processing operations can be roughly divided into three major categories:
Image Compression, Image Enhancement and Restoration, and Measurement
Extraction. Image compression is familiar to most people. It involves reducing the
amount of memory needed to store a digital image.

Image defects which could be caused by the digitization process or by faults in
the imaging set-up (for example, bad lighting) can be corrected using Image
Enhancement techniques.

This means that each pixel in the image is stored as a number between 0 and 255,
where 0 represents a black pixel, 255 represents a white pixel, and values in between
represent shades of gray.

Fig 1.4: Structure of grey-scale image

Each pixel represents a value from 0 to 255 indicating its level of gray. These operations
can be extended to colour images too.
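
As a brief illustration of this 0-255 grey-level convention, the following sketch (again assuming OpenCV, with a hypothetical file name used only for illustration) converts a 24-bit colour image to an 8-bit grey-scale image:

# Sketch of 24-bit colour to 8-bit grey-scale conversion, assuming OpenCV.
# "vehicle.jpg" is a hypothetical input file used only for illustration.
import cv2

img = cv2.imread("vehicle.jpg")               # 24-bit BGR colour image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single channel, 0 (black) to 255 (white)
print(gray.dtype, gray.min(), gray.max())     # uint8 and the observed grey-level range
cv2.imwrite("vehicle_gray.jpg", gray)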


Fig 1.10: Image enhancement: (a) original image, (b) enhanced image

1.4 AUTOMATIC TRIMAP GENERATION


The Automatic Trimap Generation step is used to achieve good segmentation
performance for low-quality finger-vein images. Assuming that F(x, y) and B(x, y)
respectively represent a foreground image and a background image, a restored finger-
vein image R̂(x, y) can be modelled as:

R̂(x, y) = F(x, y) α(x, y) + B(x, y) (1 − α(x, y))          (1.1)

where α(x, y) is the pixel's foreground opacity. For convenience, R̂(x, y), F(x, y), B(x, y)
and α(x, y) are written as R̂, F, B and α in what follows. By nature, the problem
described in Equation (1.1) is ill-posed, since F, B and α are all unknown. However,
given the trimap, α can be estimated accordingly from Equation (1.1).
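
A minimal NumPy sketch of the compositing model in Equation (1.1), assuming F, B and α are already available as arrays of the same shape (in practice α is what must be estimated with the help of the trimap), is shown below:

# Sketch of the compositing model in Equation (1.1), assuming NumPy arrays
# F (foreground), B (background) and alpha (opacity in [0, 1]) of equal shape.
import numpy as np

def composite(F, B, alpha):
    """R(x, y) = F(x, y) * alpha(x, y) + B(x, y) * (1 - alpha(x, y))"""
    return F * alpha + B * (1.0 - alpha)

F = np.full((2, 2), 200.0)       # bright foreground
B = np.full((2, 2), 50.0)        # dark background
alpha = np.array([[1.0, 0.5],
                  [0.0, 0.25]])  # per-pixel foreground opacity
print(composite(F, B, alpha))    # [[200.  125.]  [ 50.   87.5]]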

1.5 IMAGE ACQUISITION


Pre-processing is carried out on the image to improve its quality so that the main
processing of the image becomes easier. This section describes the different pre-
processing algorithms that can improve image contrast for license plate recognition. For
accurate location of the license plate, the vehicle must be clearly visible irrespective of
whether the image is captured during the day, at night, or under poor illumination.
Sometimes the image may be too dark or blurred, making the task of extracting the
license plate difficult. Gray scale conversion from the 24-bit color
