Report 2019
CHAPTER 3
HIGH LEVEL DESIGN
High-level design (HLD) explains the architecture to be used for developing a
software product. The architecture diagram provides an overview of the entire
system, identifying the main components to be developed for the product and their
interfaces. The HLD uses non-technical to mildly technical terms that should be
understandable to the administrators of the system. In contrast, low-level design further
exposes the detailed logical design of each of these elements for programmers.
High-level design is the design that addresses the software-related
requirements. In this chapter the complete system design is generated, showing how the
modules and sub-modules are integrated and how the data flows between them. It is a
simple phase that outlines the implementation process. Errors made here are
corrected in the subsequent phases.
The design consideration briefly describes how the system behaves for hand gestures and
classifies the corresponding notation for a given gesture. The stages are pre-processing,
segmentation, feature extraction, classification, and sign recognition.
Pre-Processing: Sample images taken from the data set are converted to grayscale
images and finally to binary images.
Segmentation: The binary images are used to extract the region of interest (ROI)
using a masking process.
Feature Extraction: Significant features are extracted from the ROI using PCA,
yielding Eigen values.
Classification: Using the Eigen values we find Eigen vectors and use Euclidean
distance to find the minimum distance between nodal points for comparison.
Sign Recognition: The gesture is recognized by matching the input with the closest
image in the data set.
The system currently in use performs the task of image recognition and
classification. The input image (hand gesture) is matched with the images in the
database using the Euclidean distance value. The matched image with the minimum value is
recognized and classified to give the respective output.
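This minimum-distance matching can be sketched in Python (an illustrative stand-alone example; the feature vectors and the helper name `match_gesture` are hypothetical, not taken from the project code):

```python
import numpy as np

def match_gesture(features, database):
    """Return the index of the database entry closest to the input
    feature vector under Euclidean distance (hypothetical helper)."""
    dists = [np.linalg.norm(features - db_vec) for db_vec in database]
    return int(np.argmin(dists))

# Toy example: three stored feature vectors, one input vector.
database = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([5.0, 5.0])]
query = np.array([0.9, 1.2])
best = match_gesture(query, database)
```

Whichever stored vector yields the smallest distance is reported as the recognized sign.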
The above figure 3.1 shows the system architecture for the proposed method. The
input image is pre-processed and converted to a grayscale image to find the threshold value
based on the input image. Based on the threshold value, binary and segmented images are then
obtained.
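The threshold-based binarization described here can be sketched as follows (illustrative Python; the mean-intensity fallback is an assumption, since the report does not state how the threshold is chosen):

```python
import numpy as np

def binarize(gray, threshold=None):
    """Convert a grayscale image (0-255) to a binary mask.
    If no threshold is given, fall back to the mean intensity
    (a simple stand-in for whatever rule the system uses)."""
    if threshold is None:
        threshold = gray.mean()
    return (gray >= threshold).astype(np.uint8)

gray = np.array([[10, 200], [30, 250]], dtype=np.uint8)
binary = binarize(gray)
```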
Figure 3.2: Use Case Diagram for Recognition and Classification of image.
The use case diagram for hand recognition and classification is depicted in the above
figure 3.2. Initially the set of captured images is stored in a temporary file in MATLAB.
The obtained RGB image is converted into a grayscale image to reduce complexity. Then
the pre-processing techniques are applied on the obtained grayscale image. The
segmented image is used to obtain the region of interest (ROI), from which the
significant features are extracted using PCA, resulting in Eigen values. Further, we classify
the Eigen vectors and, using Euclidean distance, match the closest image in the database to
recognize the required sign.
Module specification improves the structural design by breaking the system
down into modules, each solved as an independent task. By doing so the complexity is
reduced and the modules can be tested independently. This model has two modules,
namely the hazed data collection module and the data set de-hazing module.
A formal data collection process is necessary, as it ensures that the data gathered
are both defined and accurate and that subsequent decisions based on arguments embodied
in the findings are valid. The process provides both a baseline from which to measure and,
in certain cases, an indication of what to improve. The figure below shows the use case
diagram for data collection.
As shown in the below figure 3.3, initially the set of captured images is stored in
a temporary file in MATLAB. The storage is linked to the file set account from which the
data is accessed. The obtained RGB image is converted into a grayscale image to reduce
complexity.
The main functionality of this module is to clean the data and pass it to the pre-
processing method. The below figure 3.4 shows the block diagram for image dehazing
using the dark channel prior. Once the grayscale image is obtained, a series of pre-processing
techniques is applied: cleaning the data and reduction, which also
includes filtering to enhance the image. The dark channel prior is estimated, the
atmospheric value is obtained and updated, and the image is then restored to obtain the
haze-free image.
As the name specifies, a data flow diagram explains in detail how the data flows
between the different processes. The below figure 3.5 depicts the flow diagram,
composed of the input, process and output. After each process, the data that flows
between the systems needs to be specified, hence the name data flow diagram. It is often
the initial step in designing any system to be implemented. It also shows where the data
originated, where it flowed and where it was stored. The obtained RGB image is converted
into a grayscale image to reduce complexity. The detected input hand image is classified
after pre-processing; segmentation is done to extract the significant features, which are then
matched with images in the data set, and the pair of images (the input image and the image
already in the database) having the minimum value on applying Euclidean distance gives
the expected result.
The figure 3.6 shows the state chart diagram of the hand recognition and
classification. The process starts with the solid circle. The first state is reading the
hand gesture image as input, and the second state is pre-processing to convert RGB to
grayscale and then to a binary image. In the third state, segmentation is done on the binary
image to extract the ROI by the masking process. In the fourth state, we extract the
significant features using PCA, and in the fifth state we use Euclidean distance to find the
minimum distance value; finally we recognize the sign that has the closest match with the
images in the database.
3.6 SUMMARY
In the third chapter, the high-level design of the proposed method is discussed.
Section 3.1 presents the design considerations for the project. Section 3.2 discusses
the system architecture of the proposed system. The next section, 3.3, describes
the specification. Section 3.4 describes the module specification for the two modules.
The data flow diagram for the system is explained in section 3.5.
CHAPTER 4
DETAILED DESIGN
Detailed design is the design of each individual module, completed before
implementation. It is the second phase of the project: the first is the high-level design phase
and the second is the individual design of each module. It saves time and, as a further
advantage, makes implementation easier.
Haze is a natural phenomenon that obscures scenes, reduces visibility, and changes
colors. It is an annoying problem for photographers since it degrades image quality. It is also
a threat to the reliability of many applications, like outdoor surveillance, object detection,
and aerial imaging. So removing haze from images is important in computer vision or
graphics. But haze removal is highly challenging due to its mathematical ambiguity,
typically when the input is merely a single image. In this thesis, we propose a simple but
effective image prior, called dark channel prior, to remove haze from a single image. The
dark channel prior is a statistical property of outdoor haze-free images: most patches in
these images should contain pixels which are dark in at least one color channel. Using this
prior with a haze imaging model, high quality haze-free images can be easily recovered.
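The haze imaging model that this prior is combined with is the standard one from the dehazing literature; for reference it can be written as:

```latex
I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr)
```

where I is the observed hazy image, J the scene radiance, A the global atmospheric light, and t the medium transmission describing the portion of light that reaches the camera.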
The basic idea is to compute an accurate atmosphere veil that is not only smoother
but also respects the depth information of the underlying image. First an initial atmosphere
scattering light is obtained through median filtering; it is then refined by guided joint bilateral
filtering to generate a new atmosphere veil that removes the abundant texture information
and recovers the depth edge information. Finally, the scene radiance is solved
using the atmosphere attenuation model. Compared with existing state-of-the-art dehazing
methods, this method achieves a better dehazing effect for distant scenes and places where
depth changes abruptly. The below figure 4.1 shows the flowchart for the proposed method.
This method is fast, with linear complexity in the number of pixels of the input image;
furthermore, as this method can be performed in parallel, it can be further accelerated
using a GPU, which makes it applicable for real-time requirements. The median filter
and adaptive gamma correction are used to enhance the transmission and avoid the halo effect
problem. Then the visibility restoration module utilizes average colour difference values and
the enhanced transmission to restore an image with better quality. Finally, the simulated result
shows that the restored image has better contrast and haze-free scene objects under
various weather conditions.
This section gives the detailed description of each module which includes pre-
processing techniques, estimating dark channel prior, estimating and updating the
atmospheric light, estimation of transmission value and recovery of image.
The RGB color model is an additive color model in which red, green, and blue light
are added together in various ways to reproduce a broad array of colors. The name of the
model comes from the initials of the three additive primary colors, red, green, and blue. The
main purpose of the RGB color model is for the sensing, representation, and display of
images in electronic systems, such as televisions and computers, though it has also been
used in conventional photography.
Before the electronic age, the RGB color model already had a solid theory behind it,
based on human perception of colors. RGB is a device-dependent color model: different
devices detect or reproduce a given RGB value differently, since the color elements (such
as phosphors or dyes) and their response to the individual R, G, and B levels vary from
manufacturer to manufacturer, or even in the same device over time. Thus an RGB value
does not define the same color across devices without some kind of color management.
B.E., Dept. of CS&E, A.I.T., Chikkamagaluru. Page 9
High Speed Image Dehazing Method Using Dark Channel Prior
Typical RGB input devices are color TV and video cameras, image scanners, and
digital cameras. Typical RGB output devices are TV sets of various technologies (CRT,
LCD, plasma), computer and mobile phone displays, video projectors, multicolor LED
displays, and large screens such as the JumboTron. Color printers, on the other hand, are not RGB
devices, but subtractive color devices (typically using the CMYK color model).
4.2.2 Grayscale
Grayscale images are distinct from one-bit bi-tonal black-and-white images, which
in the context of computer imaging are images with only the two colors, black, and
white (also called bilevel or binary images). Grayscale images have many shades of gray in
between. Grayscale images are also called monochromatic, denoting the presence of only
one (mono) color (chrome).
Grayscale images are often the result of measuring the intensity of light at each pixel
in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet), and
in such cases they are monochromatic proper when only a given frequency is captured. But
also they can be synthesized from a full color image.
The dark channel prior makes it possible to recover a high-quality haze-free image and
produce a good depth map. The approach is physically valid and is able
to handle distant objects in heavily hazy images. The result contains few artifacts.
The dark channel prior is based on the observation of haze-free outdoor images. In
most of the non-sky patches, at least one color channel has very low intensity at some pixels.
In other words, the minimum intensity in such a patch should have a very low value. The low
intensities in the dark channel are mainly due to three factors: a) shadows, e.g., the shadows
of cars, buildings and the inside of windows in cityscape images, or the shadows of leaves,
trees and rocks in landscape images; b) colorful objects or surfaces, e.g., any object (for
example, green grass, trees or plants, red or yellow flowers or leaves, and blue water surfaces)
lacking color in some color channel will result in low values in the dark channel; c) dark
objects or surfaces, e.g., dark tree trunks and stones. As natural outdoor images are usually
full of shadows and colorful objects, the dark channels of these images are really dark.
First, the dark pixels can come from the shadows in the image. Outdoor images are
full of shadows, e.g., the shadows of trees, buildings, and cars. Objects with irregular
geometry like rocks and plants are easily shaded. In most cityscape images, the windows of
the buildings look dark from the outside, because the indoor illumination is often much
weaker than the outdoor light. This can also be considered as a kind of shadows. The
example for this is shown in first row of below figure 4.2. Second, the dark pixels can come
from colorful objects. Any object with low reflectance in any color channel will result in
dark pixels.
A green color has low intensity in its red and blue channels, and a yellow color has
low intensity in its blue channel. Outdoor images often contain objects in various colors,
like flowers, leaves, cars, buildings, road signs, or pedestrians as shown in the second row
of the below figure 4.2.
Figure 4.2: Shadows, colorful objects, and black objects contribute dark pixels.
The colorfulness of these objects generates many dark pixels. Notice that by our
definition a dark pixel is not necessarily dark in terms of its total intensity; it is sufficient to
be dark in only one color channel. So a bright red pixel can be a dark pixel if only its green
or blue component is dark. Third, the dark pixels can come from black objects, like vehicle
tyres, road signs, and tree trunks, as shown in the third row of the above figure 4.2.
These dark pixels are particularly useful for an in-vehicle camera that oversees road
conditions. If an image patch includes at least one of these factors, the patch must have dark
pixels. This is the intuitive explanation of the observation.
The dark channel prior is based on the following observation on haze-free outdoor
images: in most of the non-sky patches, at least one color channel has very low intensity at
some pixels. In other words, the minimum intensity in such a patch should have a very low
value. Formally, the dark channel of an image J is defined as shown in the below equation 4.1.
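Equation 4.1 itself is not reproduced in the text; the standard definition of the dark channel from the literature, consistent with the description above, is:

```latex
J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{r,g,b\}} J^{c}(y) \Bigr)
```

where Ω(x) is a local patch centered at x and c ranges over the three color channels.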
Here, it is assumed that the atmospheric light A is given, and it is further assumed that the
transmission in a local patch Ω(x) is constant. The patch's transmission is denoted as ˜t(x).
Taking the min operation in the local patch on the haze imaging equation gives equation 4.2.
According to the dark channel prior, the dark channel Jdark of the haze-free radiance J tends
to be zero, as shown in the below equation 4.3.
As Ac is always positive, equation 4.3 leads to equation 4.4 as shown below.
Since the sky is at infinity and tends to have zero transmission, the below equation 4.6
gracefully handles both sky regions and non-sky regions. There is no need to separate the
sky regions beforehand.
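Equations 4.2 through 4.6 are likewise not reproduced in the text; a reconstruction consistent with the surrounding description (following the dark channel prior literature) is:

```latex
% Eq. 4.2: min over the patch and over color channels of the haze model
\min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}
  = \tilde{t}(x)\, \min_{y \in \Omega(x)} \min_{c} \frac{J^{c}(y)}{A^{c}}
  + 1 - \tilde{t}(x)
% Eq. 4.3: the dark channel of the haze-free radiance tends to zero
J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \min_{c} J^{c}(y) \to 0
% Eq. 4.4: since A^c > 0, the first term on the right vanishes,
% Eq. 4.6: giving the transmission estimate
\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}
```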
In practice, even in clear days the atmosphere is not absolutely free of any particle.
So, the haze still exists when we look at distant objects. Moreover, the presence of haze is a
fundamental cue for humans to perceive depth. This phenomenon is called aerial perspective.
If the haze is removed thoroughly, the image may seem unnatural and the feeling of depth
may be lost. So a very small amount of haze can be kept optionally for the distant objects
by introducing a constant parameter ω (0 < ω ≤ 1) into the above equation, from which we get
the below equation 4.7.
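Equation 4.7 is then the ω-weighted form of the transmission estimate (reconstructed from the description above):

```latex
\tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}},
\qquad 0 < \omega \le 1
```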
The below figure 4.7 is the estimated transmission map from an input haze image
using the patch size 15 × 15. It is reasonably good but contains some block effects since the
transmission is not always constant in a patch.
CHAPTER 5
IMPLEMENTATION
To implement the High Speed Image De-hazing Method Using Dark Channel Prior,
the software used is MATLAB.
Some of the most commonly used functions are: imread reads an image from a specified
location; imshow displays the output or images on the screen; the rgb2gray function converts
a colored image into a grayscale image; and the size function displays the size of the image
in matrix format, i.e., the number of rows and the number of columns.
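For reference, rgb2gray uses the BT.601 luminance weights 0.2989 R + 0.5870 G + 0.1140 B; the same conversion can be sketched in Python/NumPy:

```python
import numpy as np

def rgb2gray(rgb):
    """Luminance conversion with the same BT.601 weights MATLAB's
    rgb2gray uses: 0.2989 R + 0.5870 G + 0.1140 B."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights

# A 1x2 RGB image: one pure red pixel, one pure white pixel.
rgb = np.array([[[255.0, 0.0, 0.0], [255.0, 255.0, 255.0]]])
gray = rgb2gray(rgb)
```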
MATLAB provides a Graphical User Interface that exhibits one or more windows
consisting of controls known as components, which enable the user to accomplish interactive
tasks. The user need not create a script or write commands in the command prompt; instead
the user must only be aware of how the programs perform their tasks. It includes radio
buttons, toolbars, sliders, axes, etc. Tools also help users to read and write data and
communicate with other GUIs. Data are displayed in the form of tables or plots in the GUI.
The input image is obtained from the dataset and the original image is displayed.
In this section the RGB image is converted to a grayscale image because it is easier to
perform operations such as estimating the dark channel prior, the atmospheric light and the
transmission.
The dark channel prior is based on the statistics of outdoor haze-free images. It is
found that, in most of the local regions which do not cover the sky, it is very often that some
pixels (called dark pixels) have very low intensity in at least one color (RGB) channel.
Step 5: Initialise a dark channel matrix with the sizes obtained in step 1, i.e., row and col.
for j = 1 : m
    for i = 1 : n
        dark_channel(j,i) = min(patch(:));
    end
end
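The dark-channel loop above can be written as a runnable Python/NumPy sketch (illustrative; the patch size and test image are made up, and the loops mirror the pseudocode rather than an optimized erosion filter):

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel: per-pixel minimum over the color channels,
    then a minimum over a patch x patch window around each pixel."""
    m, n, _ = image.shape
    min_rgb = image.min(axis=2)          # channel minimum per pixel
    half = patch // 2
    dark = np.empty((m, n))
    for j in range(m):
        for i in range(n):
            j0, j1 = max(0, j - half), min(m, j + half + 1)
            i0, i1 = max(0, i - half), min(n, i + half + 1)
            dark[j, i] = min_rgb[j0:j1, i0:i1].min()
    return dark

img = np.full((4, 4, 3), 0.8)
img[2, 2, 2] = 0.1                       # one pixel dark in the blue channel
dark = dark_channel(img)
```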
The value of the air-light is taken from the sky region, which has very low variation
in pixel intensity; this prevents taking the air-light value from a white object
found in the image.
n_pixels = m * n;
n_search_pixels = floor(n_pixels * param);
Step 6: Sort the pixels in descending order and obtain the indexes.
[~, indexes] = sort(dark_vec, 'descend');
Step 7: Run a loop up to the number of search pixels to accumulate the summation.
for k = 1 : n_search_pixels
end
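The air-light steps above can be sketched as runnable Python (one common variant that averages the brightest candidate pixels; the name `atmospheric_light` and the toy image are illustrative, not from the project code):

```python
import numpy as np

def atmospheric_light(image, dark, param=0.001):
    """Pick the top `param` fraction of pixels in the dark channel
    and average their colors, in the same spirit as the report's
    accumulation loop."""
    m, n, _ = image.shape
    n_pixels = m * n
    n_search = max(int(np.floor(n_pixels * param)), 1)
    dark_vec = dark.reshape(n_pixels)
    img_vec = image.reshape(n_pixels, 3)
    indexes = np.argsort(dark_vec)[::-1]   # descending order
    acc = np.zeros(3)
    for k in range(n_search):
        acc += img_vec[indexes[k]]
    return acc / n_search

img = np.full((10, 10, 3), 0.5)
img[0, 0] = [0.9, 0.9, 0.9]               # haziest, brightest region
dark = img.min(axis=2)
A = atmospheric_light(img, dark, param=0.01)
```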
The dark channel prior does not give the result for regions like the sky region. So, to get
the values in the sky region, the transmission value is estimated.
[m, n, ~] = size(image);
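The transmission estimate of equation 4.7, t = 1 - ω · dark_channel(I / A), can be sketched in Python (illustrative; the patch minimum is written with plain loops to match the earlier pseudocode):

```python
import numpy as np

def transmission(image, A, omega=0.95, patch=3):
    """Estimate transmission t = 1 - omega * dark_channel(I / A):
    divide by the air-light, take the channel minimum, then a
    patch minimum around each pixel."""
    m, n, _ = image.shape
    norm = image / A                      # broadcast divide by air-light
    min_rgb = norm.min(axis=2)
    half = patch // 2
    t = np.empty((m, n))
    for j in range(m):
        for i in range(n):
            j0, j1 = max(0, j - half), min(m, j + half + 1)
            i0, i1 = max(0, i - half), min(n, i + half + 1)
            t[j, i] = 1.0 - omega * min_rgb[j0:j1, i0:i1].min()
    return t

img = np.full((4, 4, 3), 0.8)
A = np.array([0.8, 0.8, 0.8])
t = transmission(img, A)
```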
This section considers the grayscale image and the image obtained from the previous
step, computes the cumulative sums, and gives the haze-free image.
mean_I = boxfilter(I, r) ./ N;
mean_p = boxfilter(p, r) ./ N;
mean_Ip = boxfilter(I.*p, r) ./ N;
cov_Ip = mean_Ip - mean_I .* mean_p;          % missing in the original listing
var_I = boxfilter(I.*I, r) ./ N - mean_I .* mean_I;
a = cov_Ip ./ (var_I + eps);                  % missing in the original listing
b = mean_p - a .* mean_I;
mean_a = boxfilter(a, r) ./ N;
mean_b = boxfilter(b, r) ./ N;
q = mean_a .* I + mean_b;
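The recovery step that follows the refined transmission can be sketched as well (the standard form J = (I - A) / max(t, t0) + A from the dark-channel-prior literature; the lower bound t0 is an assumed parameter that keeps the denominator away from zero in dense-haze regions):

```python
import numpy as np

def recover(image, t, A, t0=0.1):
    """Recover scene radiance J = (I - A) / max(t, t0) + A."""
    t_clamped = np.maximum(t, t0)[..., np.newaxis]
    return (image - A) / t_clamped + A

img = np.full((2, 2, 3), 0.6)
A = np.array([0.8, 0.8, 0.8])
t = np.full((2, 2), 0.5)
J = recover(img, t, A)
```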
5.4 SUMMARY
This chapter describes the implementation of high speed image dehazing method
using dark channel prior. Implementation requirement is deliberated in Section 5.1. Section
5.2 briefs about the programming language selected. Section 5.3 describes the pseudocode
for each module.
CHAPTER 6
SYSTEM TESTING
Throughout the testing phase, the procedure is run against a group of test cases and
the outcome of the procedure for each test case is appraised to identify whether the program
executes as projected. Faults found during testing are corrected using the testing steps, and
the modification is recorded for future reference. Some of the significant aims of system
testing are:
Test cases are the key aspect of the success of any system. A test case is a document
which has a set of test data, prerequisites, expected results and post-conditions, designed
for a specific test scenario in order to validate compliance against a specific
requirement. The performance of a system is based on the cases written for each and every
module of the system. Test cases are critical for the successful performance of the system.
If the predicted outcome does not match the real outcome, then an error log is displayed.
There are two elementary forms of test cases, namely
There must be at least two test cases, a positive and a negative test, for
each requirement of an application in order to completely test and satisfy the requirement.
If a requirement has sub-requirements, then each of the sub-requirements needs to have
at least two test cases.
For applications or systems without formal requirements, the user can write test
cases based on the accepted normal operation of programs of a similar class. In certain
testing situations, test cases are not written at all; instead, the activities and the outcomes
are reported after the tests have been run.
Unit testing is the mechanism where each individual module of the project is tested. It
can also be called module testing, as the project is tested module by module. Using the
module-level design depiction as a guide, significant control paths are tested to discover
faults within the boundary of each module.
The below table 6.1 shows the successful loading of the hazy image that is selected
by the user for processing. Haze is caused by suspended particles in the atmosphere, such as
fog, murk, mist and dust, which reduce visibility and distort the colors of the scene.
Hazy images are regarded as a major challenge in many applications in the fields of image
processing and computer vision.
A hazy image can be modeled as a combination of scene radiance, air light and
transmission. The main challenge in the de-hazing process is due to the different density of
haze from one region to another in the haze image, as well as the weather conditions at the
time of image capture. Hazy images lose color fidelity and contrast. The position of the
camera and its distance from the scene may also cause image degradation.
Output Obtained: Loaded haze image which is used for the processing is
shown in Figure 7.1.
Result: Successful
The RGB image is converted to a grayscale image, and the successful test case for the
grayscale image is shown in Table 6.2. The gray level represents the brightness of a pixel.
Here the RGB image is converted to a grayscale image because this simply reduces
complexity from a 3D pixel value (R, G, B) to a 1D value. Grayscale images can be the
result of measuring the intensity of light at each pixel according to a particular weighted
combination of frequencies. The contrast ranges from black at the weakest intensity to white
at the strongest.
Result: Successful
The table 6.3 shows the successful test case for the dark channel prior image, which
improves the quality of the hazy image. The dark channel prior is based on the statistics of outdoor
haze-free images. In hazy images, the intensity of the dark pixels in the channel is mainly
contributed by the airlight. Therefore, these dark pixels can directly provide an accurate
estimation of the haze transmission.
The low intensities in the dark channel are mainly due to three factors: a) shadows,
b)colorful objects or surfaces, c) dark objects or surfaces. As the natural outdoor images are
usually full of shadows and colorful, the dark channels of these images are really dark. By
considering the low intensity pixel values the dark channel image is obtained.
Result: Successful.
The Table 6.4 shows the successful test case for the transmission estimation. The dark
channel prior does not give a result for regions like the sky region. So, to get the values in the
sky region, the transmission value is estimated and the atmospheric light value is used. Here
the convergence strategy and brightness map are used to compensate and correct the
transmission map estimated by the dark channel prior. The transmission map
describes the portion of the light that is not scattered and reaches the camera. Since the map
is a continuous function of depth, it reflects the depth information of the scene. Therefore
transmission estimation has to be done to get the values of atmospheric light for the sky
region.
Test Feature: Improves the quality of the patches in the sky region.
The Table 6.5 shows the successful test case for the final dehazed image. After
refined transmission the haze free image is recovered as a final output. Here the local
contrast is improved which is the important step in dehazing process.
Result: Successful.
Test Feature: It gives three options from which the user can select.
Output Obtained: Main GUI screen with three options is shown in Figure 7.6.
Result: Successful
The Table 6.6 shows the successful test case for the main GUI when the code is
executed. In this screen, the processes are selected for the execution of the proposed method.
It shows three options: 1. Database, 2. Live Test, and 3. Image Preprocess. When Database is
clicked, the image from the dataset is extracted and processed. When Live Test is clicked,
the processing is done on live video. When Image Preprocess is selected, it displays the
RGB histogram, grayscale histogram, transmission estimation histogram and sharpening
histogram.
Test Feature: Displays the histogram for the RGB image, which is the original
image.
Output Expected: RGB histogram.
Result: Successful
The Table 6.7 shows the successful test case for the RGB histogram. A color
histogram is a representation of the distribution of colors in an image. For digital images, a
color histogram represents the number of pixels that have colors in each of a fixed list of
color ranges that span the image's color space, the set of all possible colors. The color
histogram can be built for any kind of color space, although the term is more often used for
three-dimensional spaces like RGB or HSV.
Result: Successful
The Table 6.8 shows the successful test case for the grayscale histogram. This
histogram shows the grayscale equivalents of the values of the various pixels. The
grayscale equivalent of a colored image is what we commonly call the black-and-white
version of it. In digital photography and computer-generated imagery, a grayscale image is
one in which the value of each pixel is a single sample representing only an amount of light;
that is, it carries only intensity information.
The values of the three channels of each pixel of the colored image are used to
determine what the grayscale equivalent of the pixel will be, which is an unsigned integer
between 0 and 255. Gray level refers to the predictable or deterministic change in the shades
or levels of gray in an image.
The Table 6.9 shows the successful test case for the transmission estimation
histogram. Here the convergence strategy and brightness map are used to compensate and
correct the transmission map estimated by the dark channel prior. After that, an accurate
estimation of the transmission map is obtained.
Result: Successful
Test Name: Displays the histogram for the original and secondary image.
Test Feature: Image sharpening done for the original and secondary image.
Result: Successful
The Table 6.10 shows the successful test case for the sharpening histogram. Along
the direction towards the histogram's center, the histogram gradient is positive at the two
ends; such positive gradient features are destroyed by the sharpening operation. Image
sharpening refers to an enhancement technique that highlights edges and fine details in an
image. The sharpening histogram improves the contrast of an image by changing the
intensity level of the pixels based on the intensity distribution of the input image.
Table 6.11: Test case for showing output of dehaze image with GUI.
Test Feature: Dehazed image extraction from the haze image with GUI.
Result: Successful
Table 6.11 shows the successful test case for the output of the dehazed image with the
GUI. When the database is clicked, the hazed input image is extracted from the dataset where all
the hazed images are stored. Then the processing takes place, which consists of
determination of Dark Channel Prior followed by atmospheric light and transmission
estimation. The output image obtained is recovered using guided filtering. Here it displays
the hazed input image, the dark channel image, transmission estimation image, and the final
dehazed output image.
6.3 SUMMARY
This chapter presents system testing in section 6.1, which consists of unit test cases
for the various modules of Image Dehazing system. Section 6.2 gives a complete view of
the testing which includes Test Name, Test Feature, Output Expected, Output Obtained and
Result.
CHAPTER 7
The Figure 7.1 shows the hazy input image. When the code is executed, the haze
image is taken as input. Haze is caused by suspended particles in the atmosphere, such as
fog, murk, mist and dust, which reduce visibility and distort the colors of the scene.
The dark channel prior method is used to remove haze from a single input image.
Hazy images are regarded as a major challenge in many applications in the fields
of image processing and computer vision. Hazy images can be modeled as a combination of
scene radiance, air light and transmission. The study of haze is closely related to the work
on scattering of light in the atmosphere. In an imaging system, the main factors causing
image degradation are atmospheric absorption, reflection and scattering.
The Figure 7.2 shows the grayscale image. Here the hazy image is converted to a
grayscale image because this simply reduces complexity from a 3D pixel value (R, G, B) to a
1D value. The gray level value indicates the brightness of a pixel. The minimum gray level
is zero, and the maximum gray level depends on the digitization depth of the image. In
grayscale, a pixel can take on any value between 0 and 255; typically 0 is taken to be black
and 255 is taken to be white.
These 2D gray-level images represent the linear attenuation coefficient mapping
of the sample under study. In order to do a microstructural characterization, the gray-level
images must be segmented in binary form, discriminating the matrix from the porous phase.
The image segmentation procedure is based on a threshold set using the gray-level histogram.
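A threshold set from the gray-level histogram can be illustrated with Otsu's method (an illustrative stand-in, since the report does not specify which thresholding rule is used; the toy image is hypothetical):

```python
import numpy as np

def otsu_threshold(gray):
    """Choose the threshold that maximizes between-class variance
    over the 256-bin gray-level histogram (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()               # background weight
        w1 = total - w0                   # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / w0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated gray-level clusters: matrix vs. porous phase.
gray = np.array([10] * 50 + [200] * 50, dtype=np.uint8)
t = otsu_threshold(gray)
binary = (gray >= t).astype(np.uint8)
```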
The Figure 7.3 shows the dark channel image. Based on the observation of haze-free
outdoor images, in most of the non-sky patches at least one color channel has very low
intensity at some pixels; this is called the dark channel prior (DCP). By considering the
low-intensity pixel values, the dark channel image is obtained.
The low intensities in the dark channel are mainly due to three factors: a) shadows,
e.g., the shadows of cars, buildings and the inside of windows in cityscape images, or the
shadows of leaves, trees and rocks in landscape images; b) colorful objects or surfaces, e.g.,
any object (for example, green grass/tree/plant, red or yellow flower or leaf, and blue water
surface) lacking color in some color channel will result in low values in the dark channel;
c) dark objects or surfaces, e.g., dark tree trunks and stones. As natural outdoor images are
usually full of shadows and colorful objects, the dark channels of these images are really dark.
The Figure 7.4 shows the transmission estimation image. The estimation of
transmission is the most important step for foggy scene rendering and consists of image
segmentation and refined map estimation using a bilateral filter. The transmission map
describes the portion of the light that is not scattered and reaches the camera. Since the map
is a continuous function of depth, it reflects the depth information of the scene.
The dark channel prior is not a good prior for the sky regions, since the sky is at
infinity and tends to have zero transmission. Therefore transmission estimation is used to
estimate the low-intensity pixels in the sky regions.
The Figure 7.5 shows the dehazed output image. After the refined transmission, the
haze-free image is recovered as the final output. Here the local contrast is improved, which
is an important step in the dehazing process.
Based on observation, the local contrast is enhanced whenever dense haze exists in
the image. For that, we first sharpen the RGB image. In the second step we convert the
image to the HSV color space, then enhance the local contrast on the V channel using the
Log filter, and finally reconstruct the RGB image, which represents the final result as shown
in Figure 7.5.
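The V-channel contrast step can be sketched as follows (illustrative Python; the exact Log filter used in the project is not specified, so a simple log mapping stands in for it, and V is taken as max(R, G, B) as in the HSV model):

```python
import numpy as np

def enhance_value_channel(rgb):
    """Boost the HSV value channel V = max(R, G, B) with a log curve,
    then rescale each RGB pixel so its new maximum equals the boosted V.
    (Stand-in for the report's unspecified Log filter.)"""
    v = rgb.max(axis=2, keepdims=True)
    v_safe = np.where(v > 0, v, 1.0)          # avoid division by zero
    v_new = np.log1p(v) / np.log(2.0)         # maps [0, 1] onto [0, 1]
    return rgb * (v_new / v_safe)

# One mid-brightness orange pixel, values in [0, 1].
rgb = np.array([[[0.5, 0.25, 0.0]]])
out = enhance_value_channel(rgb)
```

The log curve lifts mid-tones while leaving pure black and full white fixed, which raises local contrast in dim, hazy regions.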
The Figure 7.6 shows the main GUI when the code is executed; this snapshot is
displayed at that point. In this screen, the processes are selected for the execution of the
proposed method. It shows three options: 1. Database, 2. Live Test, and 3. Image
Preprocess. When Database is clicked, the image from the dataset is extracted and
processed. When Live Test is clicked, the processing is done on live video. When Image
Preprocess is selected, it displays the RGB histogram, grayscale histogram, transmission
estimation histogram and sharpening histogram.
The Figure 7.7 shows the RGB histogram. A color histogram is a representation of
the distribution of colors in an image. For digital images, a color histogram represents the
number of pixels that have colors in each of a fixed list of color ranges that span the image’s
color space, the set of all possible colors. A color histogram can be built for any kind of
color space, although the term is more often used for three-dimensional spaces like RGB or
HSV.
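For an 8-bit RGB image, this amounts to one 256-bin count per channel. A small numpy sketch (illustrative, not the GUI's plotting code):

```python
import numpy as np

def rgb_histograms(img, bins=256):
    """Per-channel pixel counts over [0, 256) for an 8-bit RGB image.
    Returns a dict mapping 'R', 'G', 'B' to 256-element count arrays."""
    return {c: np.histogram(img[..., i], bins=bins, range=(0, 256))[0]
            for i, c in enumerate('RGB')}
```

Each array sums to the total pixel count, so the three curves plotted together show how the color mass is distributed per channel.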
The Figure 7.8 shows the grayscale histogram. This histogram shows the
grayscale equivalents of the values of the various pixels. The grayscale equivalent of a
colored image is what we commonly call the black-and-white version of it. In digital
photography and computer-generated imagery, a grayscale image is one in which the value
of each pixel is a single sample representing only an amount of light, that is, it carries only
intensity information.
The values of the three channels of each pixel of the colored image are used to
determine what the grayscale equivalent of the pixel will be, which is an unsigned integer
between 0 and 255. Gray level refers to the predictable or deterministic change in the shades
or levels of gray in an image. For an 8-bit grayscale image there are 256 different possible
intensities and so the histogram will graphically display 256 numbers showing the
distribution of pixels amongst those grayscale values. A grayscale image is a digital image
in which each pixel only contains one scalar value which is its intensity.
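One common way to combine the three channels into that single 0–255 value is a weighted luma sum; the BT.601 weights below are a standard choice, though the report does not state which weighting its implementation uses:

```python
import numpy as np

def to_grayscale(img):
    """Weighted luma conversion (ITU-R BT.601 weights) of an
    8-bit RGB image to a single-channel 8-bit grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])   # R, G, B contributions
    gray = img.astype(float) @ weights          # dot product per pixel
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```

The weights reflect the eye's higher sensitivity to green; a plain average of the three channels is a cruder alternative.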
The Figure 7.9 shows the transmission estimation histogram. Here the convergence
strategy and the brightness map are used to compensate and correct the transmission map
estimated by the dark channel prior. After that, an accurate estimation of the transmission
map is obtained instead of a rough one.
The dark channel prior assumes that all pixels in a patch have the same transmission
value. However, scene depth is not always constant in a patch, which causes some halo
artifacts. Therefore an edge preserving filter, called the guided filter, is employed to get a
refined estimation of the transmission. When the transmission map is refined by the guided
filter with a larger radius of the local smoothing window, the guided image is averaged
linearly over a larger range, making the edges and details of the image more abundant and
the transitions smoother, so as to avoid the block effect and halo artifacts.
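The guided filter fits a local linear model q = a·I + b between the guide image I and the rough transmission p in each window, then averages the coefficients. A compact numpy sketch (illustrative; the naive box filter here would be replaced by an O(1) integral-image version in practice):

```python
import numpy as np

def box_mean(x, r):
    """Naive (2r+1) x (2r+1) box mean with edge padding."""
    pad = np.pad(x, r, mode='edge')
    h, w = x.shape
    k = 2 * r + 1
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Refine p using guide I via the local linear model q = a*I + b."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

In flat regions var_I is small, so a ≈ 0 and the output is a smoothed p; near edges of the guide, a is large and the output follows the guide's edges, which is exactly the edge-preserving behavior described above.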
The Figure 7.10 shows the sharpening histogram. Along the direction towards the
histogram’s center, the histogram gradient is positive at the two ends. Such positive gradient
features are destroyed by the sharpening operation. Image sharpening refers to an
enhancement technique that highlights edges and fine details in an image. Sharpening
improves the contrast of an image by changing the intensity levels of the pixels based on the
intensity distribution of the input image.
In an unsaturated image the near-black and near-white pixels often fall sparsely into
edge and texture regions, where high frequency components are quite abundant. However,
the essence of the sharpening manipulation is to add high-pass filtered components to the
original signal. If an image’s gray histogram is not too narrow, near-black and near-white
pixels in its sharpened copies will become blacker and whiter respectively, until turning into
pure black and pure white. With regard to high- and low-end saturated images, the
histograms present negative gradient characteristics even if they have not been sharpened.
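The "add high-pass components" view of sharpening is exactly unsharp masking: result = image + amount · (image − blur). A minimal sketch for a grayscale image (illustrative; a 3×3 box blur stands in for whatever low-pass filter the implementation uses):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen a 2-D grayscale image: img + amount * (img - box_blur).
    The (img - blur) term is the high-pass component added back in."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode='edge')
    # 3x3 box blur via nine shifted copies
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)
```

Note how pixels just on the dark side of an edge are pushed below the original minimum and pixels on the bright side above the original maximum, which is the "blacker and whiter" drift of the histogram ends discussed above.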
The Figure 7.11 shows the output of the dehazed image with the GUI. When Database
is clicked, the hazed input image is extracted from the dataset where all the hazed images are
stored. Then the processing takes place, which consists of determination of the dark channel
prior followed by atmospheric light and transmission estimation. The output image is
recovered using guided filtering. The GUI displays the hazed input image, the dark channel
image, the transmission estimation image, and the final dehazed output image.
7.2 SUMMARY
This chapter presented the experimental results in Section 7.1, which consist of
snapshots of the obtained images such as the input haze image, grayscale image, transmission
image and final dehazed output image.
CHAPTER 8
8.1 CONCLUSION
In this project work, a very simple but powerful haze removal method is used. The
dark channel prior is based on the statistics of outdoor images. Applying the prior to
the haze imaging model makes single image haze removal simpler and more effective.
Since the dark channel prior is a kind of statistic, it may not work for some particular images.
When the scene objects are inherently similar to the atmospheric light and no shadow is cast
on them, the dark channel prior is invalid. The method will underestimate the transmission
for such objects, for example white marble. This project also shares the common limitation
of most haze removal methods: the haze imaging model itself may be invalid. The proposed
system estimates the dark channel prior based on average filtering and estimates the value of
the air light. In the implemented system the results were very promising, both visually and
by some quality metrics. One important enhancement in this project is the removal of halos.
Further, the haze removal technique has been implemented for live video streaming.
More advanced models can be used to describe complicated phenomena, such as the
sun’s influence on the sky region and the blueish hue near the horizon. Haze removal based
on these models can be investigated in the future. The image dehazing technique can be
further extended to live video applications with higher performance. It can also be applied
to surveillance cameras, both CCTV and IP streams, where there is a lot of haze.
REFERENCES
B.E., Dept. of CS&E, A.I.T., Chikkamagaluru. Page 43
High Speed Image Dehazing Method Using Dark Channel Prior