
UNIT I [15hrs]

Introduction to Image Processing:

Digital Image representation, Sampling & Quantization, Steps in image processing, Image
acquisition, color image representation. Intensity transforms functions, histogram processing,
Spatial filtering, Fourier transforms and its properties, Frequency domain filters Hough
transformation, Image Noise and restorations.

Digital Image representation

The digital image processing deals with developing a digital system that performs operations
on a digital image. An image is nothing more than a two dimensional signal. It is defined by
the mathematical function f(x,y) where x and y are the two co-ordinates horizontally and
vertically and the amplitude of f at any pair of coordinate (x, y) is called the intensity or gray
level of the image at that point. When x, y and the amplitude values of f are all finite discrete
quantities, we call the image a digital image. The field of image digital image processing refers
to the processing of digital image by means of a digital computer.

Sampling and Quantization in Digital Image Processing

Mostly, the output of image sensors is in the form of an analog signal. The problem is
that we cannot apply digital image processing and its techniques to analog signals.

This is because we cannot store the output of image sensors in analog form: a signal that
can take infinitely many values would require infinite memory. So we have to convert the
analog signal into a digital signal.

To create a digital image, we need to convert the continuous data into digital form. This
conversion from analog to digital involves two processes: sampling and quantization.

Sampling -> digitization of coordinate values


Quantization -> digitization of amplitude values

Sampling in Digital Image Processing:

• In sampling, we digitize the x-axis.
• It is done on the independent variable.
• For example, if y = sin x, sampling is done on the x variable.

• There are some variations in the sampled signal which are random in nature.
These variations are due to noise.
• We can reduce this noise by taking more samples. More samples mean
collecting more data, i.e. more pixels (in the case of an image), which eventually
results in better image quality with less noise present.
• As we know that pixel is the smallest element in an image and for an image
represented in the form of a matrix, total no. of pixels is given by:

Total number of pixels = Total number of rows X Total number of columns

• The number of samples taken on the x-axis of a continuous signal refers to the
number of pixels of that image.
• For a CCD array, the number of sensors on the array equals the number of
pixels, and the number of pixels equals the number of samples taken; therefore,
the number of samples taken equals the number of sensors on the CCD array.

No. of sensors on a CCD array = No. of pixels = No. of samples taken

• Oversampling is used for zooming. The difference between sampling and


zooming is that sampling is done on signals while zooming is done on the digital
image.

Quantization in Digital Image Processing:

• It is opposite of sampling as sampling is done on the x-axis, while quantization is


done on the y-axis.
• Digitizing the amplitudes is quantization. In this, we divide the signal amplitude
into quanta (partitions).

Relation of Quantization and gray level resolution:

Number of quanta (partitions) = Number of gray levels

• Number of gray levels here means number of different shades of gray.


• To improve image quality, we increase the number of gray levels, i.e. the gray level resolution.
• If we increase this level to 256, the image is known as a grayscale image.

The gray level resolution is related to the number of bits per pixel by

L = 2^k

where,
L = gray level resolution (number of gray levels)
k = gray level, i.e. the number of bits per pixel (BPP)
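As a quick worked example (a minimal Python sketch; the values are only illustrative), the relation L = 2^k can be checked directly:

# Relation between bits per pixel (k) and gray level resolution (L): L = 2**k
for k in (1, 4, 8):
    L = 2 ** k
    print(k, "bit(s) per pixel ->", L, "gray levels")
# 1 bit  -> 2 gray levels (binary image)
# 4 bits -> 16 gray levels
# 8 bits -> 256 gray levels (standard grayscale image)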

Steps in image processing

Image Acquisition

Image acquisition is the first step in digital image processing. In this step we obtain the image
in digital form. This is done using sensing materials such as sensor strips and sensor arrays,
together with an electromagnetic light source. The light falls on an object and is
reflected or transmitted, and the sensing material captures it. The sensor produces an
output voltage waveform in response to the incoming energy. For example, with a visible
light source the reflected light is captured, whereas with an X-ray source the transmitted
rays are captured.

The image captured is an analog image, as the output is continuous. To digitise the image,
we use sampling and quantization, which discretize it. Sampling discretizes
the spatial coordinates of the image, whereas quantization discretizes its amplitude
values.

Image Enhancement

Image enhancement is the manipulation of an image to suit a specific purpose and set of
objectives. It is widely used in photo-beautification applications. Enhancement is performed
using filters, which are used, for example, to minimise noise in an image; each filter suits a
specific situation. A correlation operation between the filter and the input image matrix
produces the enhanced output image. To simplify the process, we can instead perform
multiplication in the frequency domain, which gives the same result: we transform the image
from the spatial domain to the frequency domain using the discrete Fourier transform (DFT),
multiply by the filter, and then return to the spatial domain using the inverse discrete Fourier
transform (IDFT). Some filters used in the frequency domain are the Butterworth filter and
the Gaussian filter.

The most commonly used filters are the high pass filter and the low pass filter. A low pass
filter smoothens the image by averaging neighbouring pixel values, thus minimising random
noise. It gives a blurring effect and suppresses sharp edges. A high pass filter is used to
sharpen the image using spatial differentiation. Examples of high pass filters are the
Laplacian filter and the high boost filter. There are other non-linear filters for different
purposes; for example, a median filter is used to eliminate salt-and-pepper noise.

Image Restoration

Like image enhancement, image restoration is concerned with improving an image. But image
enhancement is more of a subjective step, whereas image restoration is more of an
objective one. Restoration is applied to a degraded image to try to recover the original.
Here we first estimate the degradation model and then find the restored image.

We can estimate the degradation by observation, experimentation and mathematical
modelling. Observation is used when nothing is known about the setup in which the
image was taken or about the environment. In experimentation, we find the point spread
function of an impulse with a similar setup. In mathematical modelling, we also consider
the environment in which the image was taken; it is the best of the three methods.

To find the restored image, we generally use one of three filters – the inverse filter, the
minimum mean square error (Wiener) filter, or the constrained least squares filter. Inverse
filtering is the simplest method but cannot be used in the presence of noise. The Wiener
filter minimises the mean square error. Constrained least squares filtering adds a
constraint and generally gives the best results.

Colour Image Processing

Colour image processing is motivated by the fact that colour makes classification easier,
and the human eye can distinguish thousands of colours but far fewer shades of grey.
Colour image processing is divided into two types – pseudo colour (or reduced colour)
processing and full colour processing. In pseudo colour processing, colours are assigned to
grey-scale intensity ranges; it was used in earlier systems. Nowadays, full colour processing
is used with full colour sensors such as digital cameras or colour scanners, as the price of
full colour sensor hardware has dropped significantly.

There are various colour models, such as RGB (Red, Green, Blue), CMY (Cyan, Magenta,
Yellow) and HSI (Hue, Saturation, Intensity). Different colour models are used for different
purposes. RGB suits computer monitors, whereas CMY suits computer printers, so internal
hardware converts RGB to CMY and vice versa. Humans, however, do not naturally describe
colours in terms of RGB or CMY; they describe them in terms of HSI.

Wavelets

Wavelets represent an image at various degrees of resolution. The wavelet transform is one
member of the class of linear transforms, along with the Fourier, cosine, sine, Hartley, Slant,
Haar and Walsh-Hadamard transforms. These transforms decompose a function into a
weighted sum of orthogonal or biorthogonal basis functions, the coefficients of the linear
expansion. All of these transforms are reversible and interconvertible, and all of them express
the same information and energy; hence all are equivalent. They differ only in how the
information is represented.

Compression

Compression deals with decreasing the storage required to hold image information or the
bandwidth required to transmit it. Compression technology has grown widely in this era;
many people know of it through the common image format JPEG (Joint Photographic
Experts Group), which is a compression standard. Compression is achieved by removing
redundant and irrelevant data. In the encoding process, the image goes through a series of
stages – mapper, quantizer and symbol encoder. The mapper may be reversible or
irreversible; an example of a mapper is run length encoding. The quantizer reduces accuracy
and is an irreversible process. The symbol encoder assigns shorter codes to more frequent
data and is a reversible process.

To get back the original image, we perform decompression, passing through the stages of
symbol decoder and inverse mapper. Compression may be lossy or lossless. If after
decompression we get back exactly the same image, the compression is lossless; otherwise it
is lossy. Examples of lossless techniques are Huffman coding, bit plane coding, LZW
(Lempel Ziv Welch) coding and pulse code modulation (PCM). A common example of lossy
compression is JPEG (PNG, by contrast, is lossless). Lossy compression is widely preferred
in practice because the change is not visible to the naked eye and it saves far more storage or
bandwidth than lossless compression.

Morphological Image Processing

In morphological image processing, we try to understand the structure of the image. We


find the image components present in digital images. It is useful in representing and
describing the images’ shape and structure. We find the boundary, hole, connected
components, convex hull, thinning, thickening, skeletons, etc. It is the fundamental step
for the upcoming stages.

Segmentation

Segmentation is based on extracting information from images on the basis of two
properties – similarity and discontinuity. For example, a sudden change in intensity
value represents an edge. Detection of isolated points, line detection and edge detection are
some of the tasks associated with segmentation. Segmentation can be done by various
methods such as thresholding, clustering, superpixels, graph cuts, region growing, region
splitting and merging, and morphological watersheds.

Feature Extraction

Feature extraction is the next step after segmentation. We extract features from images,
regions and boundaries; an example of feature extraction is corner detection. These
features should be independent and insensitive to variations of parameters such as
scaling, rotation, translation and illumination. Boundary features can be described by
boundary descriptors such as shape numbers, chain codes, Fourier descriptors and
statistical moments.

Image Pattern Classification

In image pattern classification, we assign labels to images on the basis of the features
extracted; for example, we classify an image as a cat image. Classical methods for image
pattern classification are minimum-distance, correlation and Bayes classifiers. Modern
methods for the same purpose use neural networks and deep learning models such as
deep convolutional neural networks, and these are now the dominant approach for this task.

Image acquisition

In image processing, it is defined as the action of retrieving an image from some source,
usually a hardware-based source for processing. It is the first step in the workflow
sequence because, without an image, no processing is possible. The image that is
acquired is completely unprocessed.

Now the incoming energy is transformed into a voltage by the combination of input
electrical power and sensor material that is responsive to a particular type of energy
being detected. The output voltage waveform is the response of the sensor(s) and a
digital quantity is obtained from each sensor by digitizing its response.

Image Acquisition using a single sensor:

An example of a single sensor is a photodiode. To obtain a two-dimensional image
using a single sensor, there must be motion in both the x and y directions.

• Rotation provides motion in one direction.


• Linear motion provides motion in the perpendicular direction.

Fig: Combining a single sensor with motion to generate a 2D image

This is an inexpensive method and we can obtain high-resolution images with high
precision control. But the downside of this method is that it is slow.

Image Acquisition using a line sensor (sensor strips):

• The sensor strip provides imaging in one direction.
• Motion perpendicular to the strip provides imaging in the other direction.

Image Acquisition using an array sensor:


In this arrangement, individual sensors are arranged in the form of a 2-D array. This type of
arrangement is found in digital cameras, e.g. a CCD array.
The response of each sensor is proportional to the integral of the light energy projected
onto the surface of the sensor. Noise reduction is achieved by letting the sensor integrate the
input light signal over minutes or even hours.
Advantage: Since the sensor array is 2-D, a complete image can be obtained by focusing the
energy pattern onto the surface of the array.

Fig: An example of digital image acquisition using array sensor
Since the sensor array is coincident with the focal plane, it produces an output proportional to
the integral of the light received at each sensor.
Digital and analog circuitry sweep these outputs and convert them to a video signal which is
then digitized by another section of the imaging system. The output is a digital image.

Color image representation

First of all, let’s see how an image is stored and represented numerically.
Pixels and colors
An image is made of many pixels, each of them with a particular color. Modern computers can
show up to 16.7 million different colors. It is impossible to store each one of these colors
individually; they are instead represented as a combination of three primary colors: red, green
and blue (the RGB color model).
Therefore, for each pixel, we need to store a total of 3 numbers corresponding to the amount
of each color. On most computers nowadays, each color can take 256 values (from 0 to 255), 0
being the absence of the color and 255 its highest intensity.

For easier readability, we can also express the quantity of each color using a hexadecimal
code, with 2 digits for each color. An understanding of hexadecimal code is not needed for
this lecture, but if you are curious, see the additional resources (1) or (2).
With all this, we understand that to store an image, we need a table representing the pixels,
with each pixel containing 3 values (red, green, blue); in other words, a list of lists, in which
the position of each pixel is given by its row and column.
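As an illustration (a minimal sketch using NumPy; the pixel values are made up), a tiny RGB image can be stored as a three-dimensional array of shape height x width x 3, so the position of a pixel is simply its row and column index:

import numpy as np

# A 2 x 2 RGB image: each pixel holds three values (red, green, blue) in 0..255
img = np.array([[[255,   0,   0], [  0, 255,   0]],   # red,  green
                [[  0,   0, 255], [255, 255, 255]]],  # blue, white
               dtype=np.uint8)

print(img.shape)   # (2, 2, 3): rows, columns, colour channels
print(img[0, 0])   # [255 0 0] -> the red pixel at row 0, column 0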

Intensity transforms functions


Intensity transformations are applied on images for contrast manipulation or image
thresholding. These are in the spatial domain, i.e. they are performed directly on the pixels of
the image at hand, as opposed to being performed on the Fourier transform of the image. The
following are commonly used intensity transformations:
1. Image Negatives (Linear)
2. Log Transformations
3. Power-Law (Gamma) Transformations
4. Piecewise-Linear Transformation Functions

Spatial Domain Processes – Spatial domain processes can be described using the equation

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, T is an operator on f defined over a neighbourhood of the
point (x, y), and g(x, y) is the output image.

Image Negatives – Mathematically, assume that an image has intensity levels from 0 to (L-1);
generally, L = 256. Then the negative transformation can be described by the expression
s = L-1-r, where r is the initial intensity level and s is the final intensity level of a pixel. This
produces a photographic negative.
Log Transformations –
Mathematically, log transformations can be expressed as s = c log(1+r). Here, s is the output
intensity, r >= 0 is the input intensity of the pixel, and c is a scaling constant. c is given by
255/(log(1 + m)), where m is the maximum pixel value in the image. This choice ensures that
the final pixel value does not exceed (L-1), or 255. Practically, the log transformation maps a
narrow range of low-intensity input values to a wide range of output values.
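A minimal sketch of the negative and log transformations in Python (assuming an 8-bit grayscale image read with OpenCV; the file names are only illustrative):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)   # 8-bit image, L = 256

# Image negative: s = (L - 1) - r
negative = 255 - img

# Log transform: s = c * log(1 + r), with c = 255 / log(1 + max(r))
c = 255 / np.log(1 + img.max())
log_img = (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

cv2.imwrite("negative.jpg", negative)
cv2.imwrite("log.jpg", log_img)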
Power-Law (Gamma) Transformation –
Power-law (gamma) transformations can be mathematically expressed as s = c r^γ.
Gamma correction is important for displaying images on a screen correctly, to prevent
bleaching or darkening of images when viewed on different types of monitors with different
display settings. This is done because our eyes perceive brightness along a gamma-shaped
curve, whereas cameras capture images in a linear fashion. A sketch of Python code to apply
gamma correction is given below.
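The following is a minimal sketch of gamma correction (assuming an image read with OpenCV; the gamma value 2.2 is only an example and would normally be chosen for the target display):

import cv2
import numpy as np

img = cv2.imread("input.jpg")
gamma = 2.2

# s = c * r^gamma, applied to intensities normalised to [0, 1] and rescaled to [0, 255]
corrected = np.power(img / 255.0, gamma)
corrected = np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

cv2.imwrite("gamma_corrected.jpg", corrected)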
Piecewise-Linear Transformation Functions –
These functions, as the name suggests, are not entirely linear in nature; however, they are
linear over certain x-intervals. One of the most commonly used piecewise-linear
transformation functions is contrast stretching. Contrast can be defined as the difference
between the maximum and minimum intensity values present in an image, and contrast
stretching expands this range to span the full available intensity scale.
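A minimal sketch of a simple min-max contrast stretch (assuming an 8-bit grayscale image that is not completely uniform, so the denominator is non-zero):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Stretch the occupied intensity range [r_min, r_max] to the full range [0, 255]
r_min, r_max = img.min(), img.max()
stretched = (img - r_min) / (r_max - r_min) * 255.0
stretched = stretched.astype(np.uint8)

cv2.imwrite("stretched.jpg", stretched)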
Histogram processing

In digital image processing, the histogram is used as a graphical representation of a digital
image. It is a plot of the number of pixels at each tonal value. Nowadays, an image
histogram is available in digital cameras, and photographers use it to see the distribution of
the tones captured.
In the graph, the horizontal axis represents the tonal values, whereas the vertical axis
represents the number of pixels with that particular tonal value. Black and dark tones appear
on the left side of the horizontal axis, medium grey in the middle and bright tones on the
right; the height of the plot at each tone gives the size of the corresponding area in the image.
Histogram Equalization in Digital Image Processing
A digital image is a two-dimensional matrix over two spatial coordinates, with each cell
specifying the intensity level of the image at that point. So, we have an N x N matrix with
integer values ranging from a minimum intensity level of 0 to a maximum level of L-1,
where L denotes the number of intensity levels. Hence, the intensity level r of a pixel can
take on the values 0, 1, 2, 3, ..., (L-1). Generally, L = 2^m, where m is the number of bits
required to represent the intensity levels. Level 0 denotes complete black or dark, whereas
level L-1 indicates complete white.
Intensity Transformation:
Intensity transformation is a basic digital image processing technique, where the pixel
intensity levels of an image are transformed to new values using a mathematical
transformation function, so as to obtain a new output image. In essence, an intensity
transformation simply implements the following function:
s = T(r)
where s is the new pixel intensity level and r is the original pixel intensity value of the
given image, with r ≥ 0.
With different forms of the transformation function T(r), we get different output images.
Common Intensity Transformation Functions:
1. Image negation: This reverses the grayscales of an image, making dark pixels whiter
and white pixels darker. This is completely analogous to the photographic negative, hence
the name.
s = L - 1 - r
2. Log Transform: Here c is some constant. It is used for expanding the dark pixel values
in an image.
s = c log(1 + r)
3. Power-law Transform: Here c and γ are some arbitrary constants. This transform can
be used for a variety of purposes by varying the value of γ.
s = c r^γ
Histogram Equalization:

The histogram of a digital image, with intensity levels between 0 and (L-1), is a function
h(r_k) = n_k, where r_k is the kth intensity level and n_k is the number of pixels in the image
having that intensity level. We can also normalize the histogram by dividing it by the total
number of pixels in the image. For an N x N image, the normalized histogram function is
defined as
p(r_k) = n_k / N^2
This p(r_k) function is the probability of the occurrence of a pixel with the intensity level
r_k. Clearly,
∑ p(r_k) = 1
The histogram plot of an image has the x-axis representing the intensity levels r_k and the
y-axis denoting the h(r_k) or p(r_k) values.
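A minimal sketch of computing the normalized histogram and performing histogram equalization (assuming an 8-bit grayscale image; OpenCV's built-in equalizer is used for the equalization step):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Histogram h(r_k) = n_k and normalized histogram p(r_k) = n_k / (total number of pixels)
h, _ = np.histogram(img, bins=256, range=(0, 256))
p = h / img.size
print(p.sum())                    # approximately 1.0, as expected for a probability distribution

# Histogram equalization redistributes the intensities to flatten the histogram
equalized = cv2.equalizeHist(img)
cv2.imwrite("equalized.jpg", equalized)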

Spatial filtering
The spatial filtering technique is applied directly to the pixels of an image. The mask is
usually chosen to be of odd size so that it has a specific center pixel. This
mask is moved over the image such that the center of the mask traverses all image
pixels.
Classification on the basis of Linearity
There are two types:
1. Linear Spatial Filter
2. Non-linear Spatial Filter
General Classification:
Smoothing Spatial Filter
A smoothing filter is used for blurring and noise reduction in the image. Blurring is a pre-
processing step for the removal of small details, and noise reduction is accomplished by blurring.
Types of Smoothing Spatial Filter
1. Linear Filter (Mean Filter)
2. Order Statistics (Non-linear) filter
These are explained as following below.
1. Mean Filter: A linear spatial filter simply takes the average of the pixels contained in the
neighborhood of the filter mask. The idea is to replace the value of every pixel in an
image by the average of the grey levels in the neighborhood defined by the filter mask.
Below are the types of mean filter:
o Averaging filter: It is used to reduce detail in the image. All coefficients
are equal.
o Weighted averaging filter: In this, pixels are multiplied by different
coefficients; the center pixel is multiplied by a higher value than in the averaging filter.

2. Order Statistics Filter: It is based on ordering the pixels contained in the image
area encompassed by the filter. It replaces the value of the center pixel with the value
determined by the ranking result. Edges are better preserved by this type of filtering. Below
are the types of order statistics filter:
o Minimum filter: The 0th percentile filter is the minimum filter. The value of the
center is replaced by the smallest value in the window.
o Maximum filter: The 100th percentile filter is the maximum filter. The value of the
center is replaced by the largest value in the window.
o Median filter: Each pixel in the image is considered in turn. First, the neighboring
pixels are sorted, and then the original value of the pixel is replaced by the median of the list.
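A minimal sketch of mean (averaging) and median spatial filtering with OpenCV (the 3 x 3 neighbourhood size is only an example):

import cv2

img = cv2.imread("noisy.jpg", cv2.IMREAD_GRAYSCALE)

mean_filtered = cv2.blur(img, (3, 3))      # averaging (mean) filter
median_filtered = cv2.medianBlur(img, 3)   # median filter, good for salt-and-pepper noise

cv2.imwrite("mean.jpg", mean_filtered)
cv2.imwrite("median.jpg", median_filtered)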
Sharpening Spatial Filter
It is also known as a derivative filter. The purpose of the sharpening spatial filter is just the
opposite of the smoothing spatial filter: its main focus is on removing blurring and
highlighting the edges. It is based on the first and second order derivatives.
First Order Derivative:
• Must be zero in flat segments.
• Must be non zero at the onset of a grey level step.
• Must be non zero along ramps.
First order derivative in 1-D is given by:
f' = f(x+1) - f(x)
Second Order Derivative:
• Must be zero in flat areas.
• Must be non zero at the onset and end of a ramp.
• Must be zero along ramps.
Second order derivative in 1-D is given by:
f'' = f(x+1) + f(x-1) - 2f(x)
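A minimal sketch of second-derivative (Laplacian) sharpening (assuming an 8-bit grayscale image; subtracting the Laplacian is one simple way of applying it, not the only one):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Second-order derivative (Laplacian); subtracting it emphasises edges
lap = cv2.Laplacian(img, cv2.CV_64F)
sharpened = np.clip(img - lap, 0, 255).astype(np.uint8)

cv2.imwrite("sharpened.jpg", sharpened)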
Fourier transforms and its properties
Fourier Transform: The Fourier transform is a tool used to decompose an image
into its sine and cosine components.
Properties of Fourier Transform:
• Linearity:
Linearity means that the addition of two functions corresponds to the addition of their two
frequency spectra. If we multiply a function by a constant, the Fourier transform of
the resulting function is multiplied by the same constant. The Fourier transform of the sum
of two or more functions is the sum of the Fourier transforms of the functions.

Case I. If h(x) -> H(f) then ah(x) -> aH(f)
Case II. If h(x) -> H(f) and g(x) -> G(f) then h(x)+g(x) -> H(f)+G(f)
• Scaling:
Scaling changes the range of the independent variable. If we stretch a function by a factor a
in the time domain, its Fourier transform is squeezed by the same factor in the frequency
domain.
If f(t) -> F(w) then f(at) -> (1/|a|)F(w/a)
• Differentiation:
Differentiating a function with respect to time corresponds to multiplying its Fourier
transform by jw.
If f(t) -> F(w) then f'(t) -> jwF(w)
• Convolution:
The Fourier transform of a convolution of two functions is the point-wise product of their
respective Fourier transforms.
If f(t) -> F(w) and g(t) -> G(w)
then f(t) * g(t) -> F(w)G(w), where * denotes convolution
• Frequency Shift:
A shift in frequency corresponds to multiplication by a complex exponential in the time
domain; this is the dual of the time shift property.
If f(t) -> F(w) then f(t)exp[jw't] -> F(w-w')
• Time Shift:
A shift of the time variable also affects the frequency function. The time shifting property
states that a linear displacement in time corresponds to a linear phase factor in the
frequency domain.
If f(t) -> F(w) then f(t-t') -> F(w)exp[-jwt']
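A minimal sketch of computing the 2-D Fourier transform of an image with NumPy and inspecting its magnitude spectrum (the file name is only illustrative):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

F = np.fft.fft2(img)                              # 2-D discrete Fourier transform
F_shifted = np.fft.fftshift(F)                    # move the zero-frequency term to the centre
magnitude = 20 * np.log(np.abs(F_shifted) + 1)    # log scale for display

restored = np.fft.ifft2(F).real                   # the inverse DFT recovers the original image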

Frequency domain filters


Frequency domain filters are used for smoothing and sharpening an image by removing
high or low frequency components. It is also possible to remove both very high
and very low frequencies at the same time. Frequency domain filters differ from spatial
domain filters in that they operate on the frequency content of the image. They are used for
two basic operations, i.e., smoothing and sharpening.
These are of 3 types:

1. Low pass filter:
A low pass filter removes the high frequency components, which means it keeps the low
frequency components. It is used to smoothen the image by attenuating high frequency
components and preserving low frequency components.
The mechanism of low pass filtering in the frequency domain is given by
G(u, v) = H(u, v) . F(u, v)
where F(u, v) is the Fourier transform of the original image
and H(u, v) is the transfer function of the filter (a code sketch of this mechanism is shown
after the band pass filter below).
2. High pass filter:
A high pass filter removes the low frequency components, which means it keeps the high
frequency components. It is used to sharpen the image by attenuating low frequency
components and preserving high frequency components.
The high pass filter transfer function can be obtained from a low pass one as
H(u, v) = 1 - H'(u, v)
where H(u, v) is the high pass filter transfer function
and H'(u, v) is the corresponding low pass filter transfer function.

3. Band pass filter:


A band pass filter removes the very low frequency and very high frequency components,
which means it keeps a moderate band of frequencies. Band pass filtering is used to enhance
edges while reducing the noise at the same time.
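A minimal sketch of the G(u, v) = H(u, v) . F(u, v) mechanism, using an ideal low pass filter built by hand (the cut-off radius of 30 pixels is only an example; 1 - H would give the corresponding high pass filter):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
rows, cols = img.shape

# F(u, v): centred Fourier transform of the image
F = np.fft.fftshift(np.fft.fft2(img))

# H(u, v): ideal low pass filter, 1 inside a circle of radius 30, 0 outside
u, v = np.ogrid[:rows, :cols]
dist = np.sqrt((u - rows / 2) ** 2 + (v - cols / 2) ** 2)
H = (dist <= 30).astype(np.float64)

# G(u, v) = H(u, v) . F(u, v), then back to the spatial domain
G = H * F
smoothed = np.abs(np.fft.ifft2(np.fft.ifftshift(G)))
cv2.imwrite("lowpass.jpg", np.clip(smoothed, 0, 255).astype(np.uint8))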

Hough transformation
The Hough Transform is a popular technique in computer vision and image processing, used
for detecting geometric shapes like lines, circles, and other parametric curves. Named after
Paul Hough, who introduced the concept in 1962, the transform has evolved and found
numerous applications in domains such as medical imaging, robotics, and autonomous
driving. This section discusses how the Hough transformation is used in computer vision.
What is Hough Transform?
A feature extraction method called the Hough Transform is used to find basic shapes in a
picture, like circles, lines, and ellipses. Fundamentally, it transfers these shapes’ representation
from the spatial domain to the parameter space, allowing for effective detection even in the
face of distortions like noise or occlusion.
How Does the Hough Transform Work?
The accumulator array, sometimes referred to as the parameter space or Hough space, is the
first thing that the Hough Transform creates. The available parameter values for the shapes that
are being detected are represented by this space. The slope (m) and y-intercept (b) of a line, for
instance, could be the parameters in the line detection scenario.
The Hough Transform calculates the matching curves in the parameter space for each edge
point in the image. This is accomplished by iterating over all possible parameter values and
finding the combinations that are consistent with that point. The “votes” or
intersections for every combination of parameters are recorded in the accumulator array.
In the end, the programme finds peaks in the accumulator array that match the parameters of
the shapes it has identified. These peaks show whether the image contains lines, circles, or
other shapes.
Variants and Techniques of Hough transform
The performance and adaptability of the Hough Transform have been improved throughout
time by a number of variations and techniques:
• Paul Hough’s initial formulation for line identification is known as the Standard
Hough Transform (SHT). It entails voting for every possible combination of
parameters and discretizing the parameter space.
• Probabilistic Hough Transform (PHT): The PHT randomly chooses a subset of edge
points and only applies line detection to those locations in order to increase efficiency.
For real-time applications, this minimizes processing complexity while maintaining
accuracy in the output.
• Generalized Hough Transform (GHT): By recording the spatial relationships of
every shape using a template, the GHT can detect any shape, in contrast to the SHT’s
limited ability to detect just specified shapes. After that, a voting system akin to the
SHT is used to match this template with the image.
• Accumulator Space Dimensionality: The classic Hough Transform identifies lines
using a two-dimensional parameter space, but more complicated forms, such as ellipses or
circles, can be detected using higher-dimensional spaces. Every extra dimension corresponds
to an extra parameter of the identified shape.
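A minimal sketch of line detection with OpenCV's probabilistic Hough transform (the Canny thresholds and the Hough parameters are illustrative values, not tuned for any particular image):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)            # edge points are the ones that vote in Hough space

# Probabilistic Hough Transform: rho resolution 1 px, theta resolution 1 degree
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)   # draw each detected line
cv2.imwrite("hough_lines.jpg", out)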
Image Noise and restorations
The principal source of noise in digital images arises during image acquisition and
transmission. The performance of imaging sensors is affected by a variety of
environmental and mechanical factors of the instrument, resulting in the addition of
undesirable noise in the image. Images are also corrupted during the transmission process
due to non-ideal channel characteristics.

Generally, a mathematical model of image degradation and its restoration is used for
processing. The model consists of a degradation function h(x,y) and an
external noise component n(x,y) acting on the original image signal f(x,y), thereby
producing a final degraded image g(x,y). This constitutes the degradation model.
Mathematically we can write the following:
g(x,y) = h(x,y) * f(x,y) + n(x,y), where * denotes convolution and n stands for the Greek letter eta (η)

The external noise is probabilistic in nature and there are several noise models used frequently
in the field of digital image processing. We have several probability density functions of the
noise.
Noise Models
Gaussian Noise:
Because of its mathematical simplicity, the Gaussian noise model is often used in practice and
even in situations where they are marginally applicable at best. Here, m is the mean and σ2 is
the variance.
Gaussian noise arises in an image due to factors such as electronic circuit noise and sensor
noise due to poor illumination or high temperature.
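A minimal sketch of simulating additive Gaussian noise on an image (the mean and standard deviation are arbitrary example values):

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Additive Gaussian noise with mean m = 0 and standard deviation sigma = 20
noise = np.random.normal(loc=0.0, scale=20.0, size=img.shape)
noisy = np.clip(img + noise, 0, 255).astype(np.uint8)

cv2.imwrite("noisy.jpg", noisy)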
