What Is Digital Image Processing (DIP)?
Digital Image Processing (DIP) is a technique for manipulating digital images with a computer system. It is used to enhance images and to extract useful information from them.
It is also used to convert the signals from an image sensor into digital images.
1. Image Acquisition:
This is the first of the fundamental steps of digital image processing. Image acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage also involves pre-processing, such as scaling.
2. Image Enhancement:
Image enhancement is among the simplest and most appealing
areas of digital image processing. Basically, the idea behind
enhancement techniques is to bring out detail that is obscured,
or simply to highlight certain features of interest in an image.
Examples include changing brightness and contrast.
3. Image Restoration:
Image restoration is an area that also deals with improving the
appearance of an image. However, unlike enhancement, which
is subjective, image restoration is objective, in the sense that
restoration techniques tend to be based on mathematical or
probabilistic models of image degradation.
6. Compression:
Compression deals with techniques for reducing the storage
required to save an image or the bandwidth to transmit it.
Compression is particularly necessary for images transmitted over the internet.
7. Morphological Processing:
Morphological processing deals with tools for extracting image
components that are useful in the representation and description of
shape.
8. Segmentation:
Segmentation procedures partition an image into its constituent parts
or objects. In general, autonomous segmentation is one of the most
difficult tasks in digital image processing. A rugged segmentation
procedure brings the process a long way toward successful solution of
imaging problems that require objects to be identified individually.
To convert an analog signal into a digital one, both of its axes, x (coordinates) and y (amplitude), are converted into digital form.
Sampling
In digital image processing, sampling is the reduction of a continuous-time signal to a
discrete-time signal. Sampling can be done for functions varying in space, time or any
other dimension and similar results are obtained in two or more dimensions. Sampling
takes two forms: spatial and temporal. Spatial sampling is essentially the choice of the 2D resolution of an image, whereas temporal sampling is the adjustment of the exposure time of the CCD. Sampling is performed along the x-axis, whereby infinitely many coordinate values are reduced to a finite set of discrete values.
What You Need To Know About Sampling
1. Sampling is the reduction of a continuous-time signal to a discrete-time signal.
2. In sampling, the values on the y-axis, usually amplitude, are continuous but
the time or x-axis is discretized.
3. Sampling is done prior to the quantization process.
4. The sampling rate determines the spatial resolution of the digitized image.
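The points above can be sketched in a few lines of Python. This is a minimal 1-D illustration, assuming the continuous signal is modelled by an ordinary function (here `math.sin`); the function and parameter names are illustrative, not part of any standard API.

```python
# A minimal sketch of 1-D sampling: a continuous-domain signal is
# reduced to a discrete-time sequence of samples.
import math

def sample(signal, start, stop, num_samples):
    """Evaluate a continuous-domain signal at evenly spaced points."""
    step = (stop - start) / (num_samples - 1)
    return [signal(start + i * step) for i in range(num_samples)]

# 9 samples over one period: the x-axis is discretized, but the
# amplitudes (y-axis) are still continuous values.
samples = sample(math.sin, 0.0, 2 * math.pi, 9)
```

Note that, exactly as point 2 says, only the x-axis has been discretized here; the sampled amplitudes are still arbitrary real numbers until quantization is applied.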
Quantization
Quantization is the process of mapping input values from a large set to output values in
a smaller set, often with a finite number of elements. Quantization complements sampling: sampling discretizes the x-axis, while quantization is performed on the y-axis. When you quantize an image, you divide the signal into quanta (partitions). On the x-axis of the signal are the coordinate values, and on the y-axis are the amplitudes. Digitizing the amplitudes is what is referred to as quantization.
What You Need To Know About Quantization
1. The transition between continuous values of the image function and its digital
equivalent is referred to as quantization.
2. Quantization makes a sampled signal truly digital and ready for processing by
a computer.
3. In quantization, the x-axis (time or space) is left unchanged, while the y-axis, or amplitude, is discretized.
4. Quantization is done after sampling process.
5. The quantization level determines the number of grey levels in the digitized
image.
Resolution: the sampling rate determines the spatial resolution of the digitized image, whereas the quantization level determines the number of grey levels in the digitized image.
The histogram of the picture of Einstein above would look something like this:
The x-axis of the histogram shows the range of pixel values. Since it is an 8 bpp image, it has 256 levels (shades) of grey. That is why the x-axis ranges from 0 to 255, with tick marks at intervals of 50. The y-axis shows the count of pixels at each of these intensities.
As you can see from the graph, most of the high-frequency bars lie in the first half, which is the darker portion. That means the image is dark overall.
Applications of Histograms
1. In digital image processing, histograms form the basis of many simple calculations in software.
2. Histograms are used to analyse an image; properties of an image can be predicted from a detailed study of its histogram.
3. The brightness of the image can be adjusted using the details of its histogram.
4. The contrast of the image can be adjusted as needed using the details of the x-axis of its histogram.
5. Histograms are used for image equalization: grey-level intensities are expanded along the x-axis to produce a high-contrast image.
6. Histograms are used in thresholding, which improves the appearance of the image.
7. If we have the input and output histograms of an image, we can determine which type of transformation was applied by the algorithm.
Contrast
Contrast can be defined as the difference between the maximum and minimum pixel intensities in an image.
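This definition can be computed directly; the pixel values below are illustrative.

```python
# Contrast as defined above: the difference between the maximum and
# minimum pixel intensity in the image.
def contrast(pixels):
    return max(pixels) - min(pixels)

c = contrast([52, 100, 150, 180])   # c == 128, low for an 8 bpp image (max 255)
```

An 8 bpp image that used the full 0 to 255 range would have a contrast of 255, so this toy image has room for contrast enhancement.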
There are two methods of enhancing contrast.
The first is called histogram stretching, which increases contrast.
The second is called histogram equalization, which enhances contrast and has been discussed in our tutorial on histogram equalization.
Histogram Stretching
In histogram stretching, the contrast of an image is increased. The contrast of an image is defined as the difference between the maximum and minimum values of pixel intensity.
If we want to increase the contrast of an image, its histogram is stretched so that it fully covers the dynamic range.
From the histogram of an image, we can check whether the image has low or high contrast.
Example of histogram stretching:

g(x,y) = ((f(x,y) - min) / (max - min)) × (L - 1)

where f(x,y) denotes the intensity of each pixel, min and max are the smallest and largest intensities present in the image, and L is the number of grey levels (256 for an 8 bpp image). We calculate this formula for each f(x,y) in the image.
After doing this, the contrast of the image is enhanced.
The following image appears after applying histogram stretching.
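The stretching formula can be sketched in a few lines; the pixel list is illustrative and the function name is our own.

```python
# Histogram stretching: linearly map the image's [min, max] intensity
# range onto the full range [0, levels - 1].
def stretch(pixels, levels=256):
    lo, hi = min(pixels), max(pixels)
    scale = (levels - 1) / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

stretched = stretch([52, 100, 150, 180])
# stretched == [0, 96, 195, 255]: the full 0..255 dynamic range is now covered
```

Note that this is a purely linear remapping: the relative spacing of the intensities, and hence the shape of the histogram, is preserved.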
Histogram equalization increases the dynamic range of pixel values and aims at an equal count of pixels at each level, which produces a flat histogram and a high-contrast image.
While stretching a histogram, its shape remains the same, whereas in histogram equalization the shape of the histogram changes, and equalization generates only one possible output image.
Grey level   CDF
0            0.11
1            0.22
2            0.55
3            0.66
4            0.77
5            0.88
6            0.99
7            1.00
Then, in this step, you multiply each CDF value by (number of grey levels - 1). Since we have a 3 bpp image, the number of levels is 8, and 8 minus 1 is 7. So we multiply the CDF by 7, discarding any fraction. Here is what we get after multiplying:
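This multiplication step can be reproduced directly from the CDF table above. Note that rounding down (floor) is what matches the tutorial's numbers; ordinary rounding would give 1 for the first entry (0.11 × 7 = 0.77) instead of 0.

```python
# Multiply each CDF value by (levels - 1) and round down to get the
# new grey level for each old grey level.
import math

cdf = [0.11, 0.22, 0.55, 0.66, 0.77, 0.88, 0.99, 1.0]
levels = 8   # 3 bpp -> 2**3 = 8 grey levels

new_levels = [math.floor(c * (levels - 1)) for c in cdf]
# new_levels == [0, 1, 3, 4, 5, 6, 6, 7]
```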
Grey level   CDF    New grey level
0            0.11   0
1            0.22   1
2            0.55   3
3            0.66   4
4            0.77   5
5            0.88   6
6            0.99   6
7            1.00   7
Now comes the last step, in which we map the new grey-level values to the numbers of pixels.
Let's assume our old grey-level values have these numbers of pixels:
Grey level   Number of pixels
0            2
1            4
2            6
3            8
4            10
5            12
6            14
7            16
Old grey level   New grey level   Number of pixels
0                0                2
1                1                4
2                3                6
3                4                8
4                5                10
5                6                12
6                6                14
7                7                16
Now map these new values onto the histogram, and you are done.
Let's apply this technique to our original image. After applying it, we get the following image and its histogram.
Histogram Equalization Image
Histogram Equalization histogram
Comparing both the histograms and images
Conclusion
As you can clearly see from the images, the contrast of the new image has been enhanced and its histogram has also been equalized. One important thing to note here is that during histogram equalization the overall shape of the histogram changes, whereas in histogram stretching the overall shape of the histogram remains the same.