Night Time Vehicle Detection System Using ROI and Image Enhancement
In nighttime images, vehicle detection is a challenging task because of low contrast and
luminosity. In this article, the authors combine a novel region-of-interest (ROI) extraction
approach that fuses vehicle light detection and object proposals together with a nighttime
image enhancement approach based on improved multiscale retinex to extract accurate
ROIs and enhance images for accurate nighttime vehicle detection. Experimental results
demonstrate that the proposed nighttime image enhancement method, score-level
multifeature fusion, and the ROI extraction method are all effective for nighttime vehicle
detection. Moreover, the proposed vehicle detection method achieves a 93.34 percent
detection rate and outperforms other models, detecting blurred and partly occluded
vehicles, as well as vehicles of varying sizes, numbers, locations, and backgrounds.
TABLE OF CONTENTS
CHAPTER NO TITLE PAGE NO
ABSTRACT iv
LIST OF FIGURES vii
1 INTRODUCTION 1
1.1 Digital Image Processing 2
1.2 Image 3
1.2.1 Pixel 4
1.2.2 Image Processing 5
1.3 Image Preprocessing 8
1.3.1 Image Binarization 8
1.3.2 Edge Detection 8
1.3.3 ROI Extraction 9
1.3.4 Image Enhancement 9
1.4 Automatic Trimap Generation 11
1.5 Image Acquisition 11
1.6 Feature Extraction 12
1.6.1 Repeated Line Tracking 12
1.6.2 Even Gabor Filter 12
2 LITERATURE SURVEY 14
2.1 Vehicle Number Plate Detection using FPGA implementation 14
2.2 Image Processing based vehicle detection using Gaussian Model 15
2.3 Night-time surveillance using intelligent transportation system 16
2.4 Issues raised in number plate detection 17
3 METHODOLOGY 19
3.1 Proposed System 19
3.2 Overview Of the System 19
3.3 OCR 21
3.3.1 What technology lies behind OCR? 22
3.3.2 What principles is FineReader OCR based on? 23
LIST OF FIGURES
FIGURE No. FIGURE NAME PAGE No.
1.1 A vehicle image 3
1.2 A grey scale image 4
1.3 Pixel Representation 5
1.4 Structure of grey-scale image 6
1.5 8-bit grey scale 7
1.6 Different shades of grey 7
1.7 Color image 8
1.8 The grey value histogram 9
1.9 3X3 neighbourhood pixels 10
1.10 Image enhancement 11
4.1 Input Image 32
4.2 Grayscale Image 32
4.3 Binary Image 33
4.4 ROI Identification 33
4.5 OCR Character reader 34
4.6 Final Output 34
CHAPTER-1
INTRODUCTION
It is difficult to extract the number plate from an input vehicle image in an open-air
scene because of the varying shades of the plate characters and their background. The
gradients of the original picture are modified to discover candidate number plate
regions. There are algorithms that rely on a combination of morphological operations,
segmentation, and the Canny edge detector. Number plate detection comprises steps such
as edge detection, morphological operations like dilation and erosion, smoothing,
segmentation of the characters, and recognition of the plate number.
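The preprocessing steps above can be sketched with plain NumPy. This is an illustrative sketch, not the implementation used in this report; the function names are ours, and a real pipeline would typically use OpenCV routines such as cv2.threshold, cv2.dilate, cv2.erode, and cv2.Canny instead:

```python
import numpy as np

def binarize(img, threshold=128):
    """Threshold a grayscale image (values 0-255) into a 0/1 binary mask."""
    return (img >= threshold).astype(np.uint8)

def dilate(mask, k=1):
    """Binary dilation with a (2k+1) x (2k+1) square structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, k, mode="constant")
    out = np.zeros_like(mask)
    # A pixel becomes 1 if any pixel in its neighbourhood is 1.
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= padded[k + dy : k + dy + h, k + dx : k + dx + w]
    return out

def erode(mask, k=1):
    """Binary erosion, expressed as dilation of the complement."""
    return 1 - dilate(1 - mask, k)
```

Dilation expands bright plate-candidate regions so that broken character strokes merge, while erosion removes isolated noise pixels; applying one after the other is the basis of morphological opening and closing.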
Digital image processing is the use of computer algorithms to perform image processing
on digital images. As a subcategory or field of digital signal processing, digital image
processing has many advantages over analog image processing. It allows a much wider
range of algorithms to be applied to the input data and can avoid problems such as the
build-up of noise and signal distortion during processing. Since images are defined over
two dimensions (perhaps more) digital image processing may be modeled in the form of
multidimensional systems.
Digital image processing technology for medical applications was inducted into
the Space Foundation Space Technology Hall of Fame in 1994. In 2002, Raanan Fattal
introduced gradient-domain image processing, a new way to process images in which
the differences between pixels are manipulated rather than the pixel values themselves.
Digital image processing allows the use of much more complex algorithms and hence
can offer both more sophisticated performance at simple tasks and the implementation
of methods that would be impossible by analog means.
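The gradient-domain idea can be illustrated with a toy one-dimensional sketch (our example, not Fattal's actual method): compute the differences between neighbouring pixels, manipulate those differences, and rebuild the signal by cumulative summation from the first pixel:

```python
import numpy as np

# A 1D "scanline" of pixel intensities.
row = np.array([10.0, 12.0, 30.0, 31.0, 90.0])

# Forward differences between neighbouring pixels (the gradient).
grad = np.diff(row)

# Manipulate the gradient instead of the pixels:
# here, compress large jumps to attenuate harsh edges.
grad_c = np.sign(grad) * np.sqrt(np.abs(grad))

# Rebuild the signal by integrating (cumulative sum) from the first pixel.
rebuilt = np.concatenate(([row[0]], row[0] + np.cumsum(grad_c)))
```

Integrating the unmodified gradient recovers the original signal exactly; any edit applied to the gradient shows up in the rebuilt image, which is the essence of the approach.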
Typical operations in such a pipeline include:
Enhancement
Feature extraction
Segmentation
1.2 IMAGE
1.2.1 Pixel
Image processing is a subset of the electronic domain in which the image is
converted to an array of small integers, called pixels, representing a physical quantity
such as scene radiance, stored in a digital memory, and processed by a computer or other
digital hardware.
An image can be treated as a two-dimensional function f(x, y) whose values give
the brightness of the image at any given point. Image brightness values can be any real
numbers in the range 0.0 (black) to 1.0 (white). The ranges of x and y will clearly
depend on the image, but they can take all real values between their minima and maxima.
A digital image differs from a photo in that the x, y, and f(x, y) values are all
discrete. Usually they take on only integer values, with x and y each ranging, for
example, from 1 to 256, and the brightness values ranging from 0 (black) to 255 (white).
A digital image can be considered as a large array of discrete dots, each of which
has a brightness associated with it. These dots are called picture elements, or more
simply pixels. The pixels surrounding a given pixel constitute its neighbourhood.
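This pixel-array view can be made concrete with NumPy (an illustrative sketch; the variable and function names are ours):

```python
import numpy as np

# An 8-bit grayscale "image": a 2D array of integers in [0, 255].
img = np.array([
    [  0,  50, 100],
    [150, 200, 255],
    [ 30,  60,  90],
], dtype=np.uint8)

# f(x, y): the brightness of the pixel at row y, column x.
y, x = 1, 1
brightness = img[y, x]

def neighbourhood3x3(image, y, x):
    """The 3x3 neighbourhood of pixel (y, x), clipped at the image border."""
    return image[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
```

Indexing the array returns a single brightness value, and a slice around a pixel yields its neighbourhood, the building block of the 3x3 filtering operations shown in Fig. 1.9.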
Fig1.3: Pixel representation
Image processing usually refers to digital image processing, but optical and
analog image processing are also possible. This chapter covers general techniques that
apply to all of them. The acquisition of images (producing the input image in the first
place) is referred to as imaging.
Image processing operations can be roughly divided into three major categories:
Image Compression, Image Enhancement and Restoration, and Measurement
Extraction. Image compression is familiar to most people. It involves reducing the
amount of memory needed to store a digital image.
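As a toy illustration of the compression idea, run-length encoding stores each run of identical pixel values as a (value, count) pair (our example, not a method proposed in this report):

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    out = []
    for v, c in runs:
        out.extend([v] * c)
    return out
```

For images with large uniform regions, such as binarized plate images, the list of runs is far shorter than the raw pixel sequence, and decoding recovers the image exactly (lossless compression).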
This means that each pixel in the image is stored as a number between 0 and 255,
where 0 represents a black pixel, 255 represents a white pixel, and values in between
represent shades of gray. Each pixel thus holds a value from 0 to 255 specifying its
level of gray. These operations can be extended to colour images too.
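A common first step when extending such operations to colour is converting an RGB image to grayscale using the standard ITU-R BT.601 luminance weights (a generic sketch, not specific to this report):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 RGB image (0-255) to grayscale with BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb @ weights              # weighted sum over the colour channels
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```

The weights reflect the eye's greater sensitivity to green than to red or blue, so a pure-green pixel maps to a brighter gray level than a pure-red one of equal intensity.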
Where α(x, y) is the pixel's foreground opacity. For convenience, R^(x, y), F(x,
y), B(x, y), and α(x, y) are represented by R^, F, B, and α in the following. By nature,
the problem described in the equation is ill-posed, since F, B, and α are all unknown.
However, given the trimap, α can be estimated accordingly based on the equation.
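The equation referred to above is not reproduced in this excerpt; given the symbols defined (observed image R^, foreground F, background B, opacity α), it is presumably the standard alpha-compositing relation used in image matting:

```latex
\hat{R}(x, y) = \alpha(x, y)\, F(x, y) + \bigl(1 - \alpha(x, y)\bigr)\, B(x, y)
```

Each observed pixel is modelled as a blend of a foreground and a background colour, which is why the problem is ill-posed without the trimap constraint.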