ABSTRACT: Digital image processing is an enduringly interesting field, as it provides improved pictorial information for human interpretation and enables processing of image data for storage, transmission, and representation for machine perception. Image processing is a set of techniques for enhancing raw images received from cameras and sensors on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life, for various applications. The field has advanced significantly in recent times and has extended into many areas of science and technology. Image processing mainly deals with image acquisition, image enhancement, image segmentation, feature extraction, and image classification.
KEYWORDS: Image processing, image enhancement, image segmentation, feature extraction, image classification
I. INTRODUCTION
The basic definition of image processing refers to the processing of a digital image, i.e., removing noise and other irregularities present in an image using a digital computer. Noise or irregularity may creep into the image during its formation, transmission, or transformation. For mathematical analysis, an image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels; pixel is the most widely used term to denote the elements of a digital image.
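As a minimal illustration of this definition, a digital image can be stored as a 2-D array whose entries are gray levels; the values below are arbitrary:

```python
# A digital image as a discrete 2-D function f(x, y):
# each entry is the gray level (intensity) at that pixel.
image = [
    [0,  64, 128],
    [32, 96, 160],
    [64, 128, 255],
]

def f(x, y):
    """Return the intensity (gray level) at spatial coordinates (x, y)."""
    return image[y][x]  # row index is y, column index is x

print(f(2, 0))  # gray level of the pixel in row 0, column 2
```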
Various techniques have been developed in image processing during the last four to five decades. Most of the techniques were developed for enhancing images obtained from unmanned spacecraft, space probes, and military reconnaissance flights. Image processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, etc. [1]. Image processing is used in various applications such as:
• Remote Sensing
• Medical Imaging
• Non-destructive Evaluation
• Forensic Studies
• Textiles
• Material Science
• Military
• Film industry
• Document processing
• Graphic arts
• Printing Industry
II. DIGITAL IMAGE PROCESSING
The term digital image processing generally refers to processing of a two-dimensional picture by a digital computer
[2]. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real
numbers represented by a finite number of bits. The principal advantages of digital image processing methods are versatility, repeatability, and the preservation of original data precision. The various image processing techniques are:
• Image preprocessing
• Image enhancement
• Image segmentation
• Feature extraction
• Image classification
III. IMAGE PREPROCESSING AND ENHANCEMENT
In image preprocessing, image data recorded by sensors on a satellite contain errors related to the geometry and brightness values of the pixels. These errors are corrected using appropriate mathematical models, which are either deterministic or statistical. Image enhancement is the modification of an image, by changing the pixel brightness values, to improve its visual impact. Image enhancement involves a collection of techniques used to improve the visual appearance of an image, or to convert the image to a form better suited for human or machine interpretation.
Sometimes images obtained from satellites and from conventional and digital cameras lack contrast and brightness because of the limitations of the imaging subsystems and the illumination conditions while capturing the image. Images may also contain different types of noise. In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display [3]. Examples include contrast and edge enhancement, pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and image display. The enhancement process itself does not increase the inherent information content in the data; it simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application dependent.
Some of the enhancement techniques are:
a. Contrast Stretching
b. Noise Filtering
c. Histogram modification
a. Contrast Stretching
Some images (e.g., over water bodies, deserts, dense forests, snow, clouds, and hazy conditions over heterogeneous regions) are homogeneous, i.e., they do not show much variation in their gray levels. In a histogram representation, they are characterized by very narrow peaks. The homogeneity can also be due to incorrect illumination of the scene [1]. The images hence obtained are not easily interpretable, due to poor human perceptibility: only a narrow range of gray levels is occupied, although provision exists for a wider range. Contrast stretching methods are designed for such frequently encountered situations. Different stretching techniques have been developed to stretch the narrow range to the whole of the available dynamic range.
Figure 1. Contrast stretching
b. Noise Filtering
Noise filtering is used to filter unnecessary information from an image and to remove various types of noise. Mostly this feature is interactive. Various filters, such as low pass, high pass, mean, and median filters, are available [1].
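As a minimal sketch of one such filter, a 3x3 median filter replaces each interior pixel with the median of its neighborhood, which suppresses isolated impulse ("salt-and-pepper") noise; border handling is simplified here by leaving edge pixels unchanged:

```python
def median_filter3(img):
    """3x3 median filter: replace each interior pixel by the median
    of its 3x3 neighborhood, suppressing impulse noise."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]     # border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[j][i] for j in range(y - 1, y + 2)
                                      for i in range(x - 1, x + 2))
            out[y][x] = window[4]     # median of the 9 values
    return out

noisy = [[10, 10, 10],
         [10, 200, 10],               # 200 is an isolated noise spike
         [10, 10, 10]]
print(median_filter3(noisy))
```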
c. Histogram Modification
The histogram is of great importance in image enhancement, as it reflects the characteristics of the image. By modifying the histogram, image characteristics can be modified. One such example is histogram equalization: a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails [1].
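The equalization described can be sketched with the standard cumulative-histogram remapping; this is a simplified illustration for small integer images, not a production implementation:

```python
def equalize(img, levels=256):
    """Histogram equalization: remap gray levels through the cumulative
    histogram so the result approximates a flat histogram."""
    flat = [v for row in img for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, running = [0] * levels, 0    # cumulative distribution of gray levels
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    cdf_min = min(c for c in cdf if c > 0)
    def remap(v):
        if n == cdf_min:              # degenerate single-level image
            return v
        return int((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1) + 0.5)
    return [[remap(v) for v in row] for row in img]

img = [[0, 0], [1, 3]]               # gray levels concentrated at the low end
print(equalize(img, levels=4))       # spread toward a flat histogram
```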
IV. IMAGE SEGMENTATION
Segmentation is one of the key problems in image processing. Image segmentation is the process that subdivides an image into its constituent parts or objects. The level to which this subdivision is carried out depends on the problem being solved; the segmentation should stop when the objects of interest in an application have been isolated. For example, in autonomous air-to-ground target acquisition, if the interest lies in identifying vehicles on a road, the first step is to segment the road from the image and then to segment the contents of the road down to potential vehicles. Image thresholding techniques are used for image segmentation.
After thresholding, a binary image is formed in which all object pixels have one gray level and all background pixels have another; generally the object pixels are 'black' and the background is 'white'. The best threshold is the one that selects all the object pixels and maps them to 'black'. Various approaches for the automatic selection of the threshold
have been proposed. Thresholding can be defined as a mapping of the gray scale into the binary set {0, 1}:

S(x, y) = 1 if g(x, y) >= T(x, y), and S(x, y) = 0 otherwise,
where S(x, y) is the value of the segmented image, g(x, y) is the gray level of the pixel (x, y) and T(x, y) is the
threshold value at the coordinates (x, y). In the simplest case T(x, y) is coordinate independent and a constant for the
whole image. It can be selected, for instance, on the basis of the gray level histogram. When the histogram has two
pronounced maxima, which reflect gray levels of object(s) and background, it is possible to select a single threshold for
the entire image. A method based on this idea, which uses a correlation criterion to select the best threshold, has been described in the literature. Sometimes gray level histograms have only one maximum. This can be caused, e.g., by inhomogeneous illumination of various regions of the image. In such a case it is impossible to select a single threshold value for the entire image, and a local binarization technique must be applied. However, general methods for the binarization of inhomogeneously illuminated images are not available.
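The simplest coordinate-independent case, a single global threshold T for the whole image, can be sketched as follows; choosing T as the mean gray level is only an illustrative heuristic, not the correlation-criterion method mentioned above:

```python
def threshold(img, T=None):
    """Map gray levels into the binary set {0, 1}:
    S(x, y) = 1 when g(x, y) >= T, else 0.
    If T is not given, the global mean gray level is used as a
    simple heuristic for images with a bimodal histogram."""
    flat = [v for row in img for v in row]
    if T is None:
        T = sum(flat) / len(flat)
    return [[1 if v >= T else 0 for v in row] for row in img]

img = [[10, 200],
       [30, 220]]                    # dark background, bright objects
print(threshold(img))                # object pixels mapped to 1
```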
Segmentation of images sometimes involves not only the discrimination between objects and the background, but also the separation between different regions. One method for such separation is known as watershed segmentation.
V. FEATURE EXTRACTION
Feature extraction techniques are developed to extract features from images such as synthetic aperture radar (SAR) images. These techniques extract the high-level features needed to perform classification of targets. Features are items which uniquely describe a target, such as size, shape, composition, and location. Segmentation techniques are used to isolate the desired object from the scene so that measurements can subsequently be made on it. Quantitative measurements of object features allow classification and description of the image.
When the pre-processing and the desired level of segmentation have been achieved, a feature extraction technique is applied to the segments to obtain features, followed by the application of classification and post-processing techniques. It is essential to focus on the feature extraction phase, as it has an observable impact on the efficiency of the recognition system. Selection of a feature extraction method is the single most important factor in achieving high recognition performance. Feature extraction has been described as "extracting from the raw data the information that is most suitable for classification purposes, while minimizing the within-class pattern variability and enhancing the between-class pattern variability". Thus, a suitable feature extraction technique for the input at hand must be selected with utmost care. Taking all these factors into consideration, it becomes essential to review the techniques available for feature extraction in a given domain, covering a wide range of cases [4].
Various types of feature extraction methods are shown in Table 1.
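As a minimal sketch of measuring object features after segmentation, the following computes a few simple shape features (area, centroid, bounding box) from a binary mask; the particular feature set is illustrative, not taken from any cited method:

```python
def shape_features(mask):
    """Extract simple shape features from a binary segmentation mask:
    the kind of quantitative measurements that feed a classifier."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    area = len(pts)                                  # number of object pixels
    cx = sum(x for x, _ in pts) / area               # centroid x
    cy = sum(y for _, y in pts) / area               # centroid y
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    bbox = (min(xs), min(ys), max(xs), max(ys))      # bounding box
    return {"area": area, "centroid": (cx, cy), "bbox": bbox}

mask = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shape_features(mask))
```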
VI. IMAGE CLASSIFICATION
Image classification is the labeling of a pixel or a group of pixels based on its gray value [5]. Classification is one of the most often used methods of information extraction. In classification, multiple features are usually used for a set of pixels, i.e., many images of a particular object are needed. In remote sensing, this procedure assumes that imagery of a specific geographic area is collected in multiple regions of the electromagnetic spectrum and is in good registration. Most information extraction techniques rely on analysis of the spectral reflectance properties of such imagery and employ special algorithms designed to perform various types of 'spectral analysis'. The process of multispectral classification can be performed using either of two methods: supervised or unsupervised [1].
In supervised classification, the identity and location of some of the land cover types, such as urban, wetland, and forest, are known a priori through a combination of fieldwork and topographic sheets. The analyst attempts to locate specific sites in the remotely sensed data that represent homogeneous examples of these land cover types. These areas are commonly referred to as training sites, because the spectral characteristics of these known areas are used to 'train' the classification algorithm for eventual land cover mapping of the remainder of the image. Multivariate statistical parameters are calculated for each training site. Every pixel, both within and outside the training sites, is then evaluated and assigned to the class of which it has the highest likelihood of being a member [6].
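The supervised workflow above can be sketched with a minimum-distance-to-means classifier: a mean signature is computed from each labeled training site, and each pixel is assigned to the nearest mean. This is a simplified single-band stand-in for the multivariate likelihood assignment described; the class names and gray values are hypothetical:

```python
def train(training_sites):
    """Compute the mean gray level of each labeled training site."""
    return {label: sum(pixels) / len(pixels)
            for label, pixels in training_sites.items()}

def classify(pixel, class_means):
    """Assign a pixel to the class whose mean signature is closest
    (minimum distance to means)."""
    return min(class_means, key=lambda c: abs(pixel - class_means[c]))

# hypothetical single-band training data for two land cover classes
sites = {"water": [10, 12, 11], "urban": [200, 210, 205]}
means = train(sites)
print(classify(15, means), classify(190, means))
```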
In unsupervised classification, the identities of the land cover types to be specified as classes within a scene are not generally known a priori, because ground truth is lacking or surface features within the scene are not well defined. The computer is required to group pixels into different spectral classes according to some statistically determined criteria [1].
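The unsupervised grouping described can be sketched with a simple one-dimensional k-means clustering, which groups gray levels into spectral classes by a statistically determined criterion; the pixel values are hypothetical:

```python
def kmeans_1d(pixels, k=2, iters=10):
    """Group pixel gray levels into k spectral classes without labels,
    using a simple k-means clustering."""
    # crude initialization: spread seed centers across the sorted values
    centers = sorted(pixels)[::max(1, len(pixels) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

pixels = [10, 12, 14, 200, 205, 210]   # two natural spectral groups
print(sorted(kmeans_1d(pixels, k=2)))
```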
A comparable task in the medical area is the labeling of cells based on their shape, size, color, and texture, which act as features. This method is also useful for MRI images.
VII. LITERATURE SURVEY
M. Mansourpour, M.A. Rajabi and J.A.R. Blais proposed the Frost filter technique for image preprocessing. This filter assumes multiplicative noise and stationary noise statistics [7]. A gradient-based adaptive median filter is used for the removal of speckle noise in SAR images; this method reduces speckle noise while preserving information, edges, and spatial resolution, and was proposed by S. Manikandan, Chhabi Nigam, J.P. Vardhani and A. Vengadarajan [8]. The Wavelet Coefficient Shrinkage (WCS) filter, developed by L. Gagnon and A. Jouan in 1997, is based on the use of Symmetric Daubechies (SD) wavelets [9]. The Discrete Wavelet Transform (DWT) has been employed in order to preserve the high-frequency components of the image [10]. In order to achieve a sharper image, an intermediate stage for estimating the high-frequency sub-bands has been proposed by P. Karunakar, V. Praveen and O. Ravi Kumar.
The Maximally Stable Extremal Regions (MSER) algorithm combined with a spectral clustering (SC) method was proposed by Yang Gui, Xiaohu Zhang and Yang Shang to provide effective and robust segmentation [11]. A modified SRG (MSRG) procedure was developed by Young Gi Byun, You Kyung Han and Tae Byeong Chae [12]. The Holder exponent is used as a tool to exploit spatial and spectral information together, computing the degree of texture around each pixel in high-resolution panchromatic images; this method was proposed by Debasish Chakraborty, Gautam Kumar Sen and Sugata Hazra in 2009 [13]. Ousseini Lankoande, Majeed M. Hayat and Balu Santhanam used a novel Markov Random Field (MRF) based segmentation algorithm, derived from the statistical properties of speckle noise [14].
John F. Vesecky, Martha P. Smith and Ramin Samadani report image processing techniques for extracting the characteristics of pressure ridge features in SAR images of sea ice. Bright filamentary features are identified and broken into segments bounded either by junctions between linear features or by the ends of features; ridge statistics are computed from the filamentary segment properties [15]. Karvonen, J. and Kaarna, A. have studied feature extraction from sea ice SAR images based on non-negative factorization methods, namely sparseness-constrained non-negative matrix factorization (SC-NMF) and non-negative tensor factorization (NTF) [16]. A neural network algorithm that uses both backscatter data and textural characteristics of the images [17], based on the gray-level co-occurrence matrix (GLCM) method, was proposed by Natalia Yu. Zakhvatkina, Vitaly Yu. Alexandrov, Ola M. Johannessen, Stein Sandven and Ivan Ye. Frolov.
Wang, Tan, Yang and Xuezhi proposed a multi-level SAR sea ice image classification method based on the Euclidean distance discriminant [17]. The K-Nearest Neighbor (KNN) algorithm is a method for classifying objects based on the closest or most similar training samples in the feature space [18]; this algorithm was applied by Kanika Kalra, Anil Kumar Goswami and Rhythm Gupta. Independent component analysis (ICA) was used by Karvonen, J. and Simila, M. to compute sets of basis vectors for image data, i.e., for small randomly selected image windows [19]. A supervised neural network learning architecture, Kohonen's Learning Vector Quantization (LVQ), was used by Lars Kaleschke and Stefan Kern for classification. The LVQ neural network classification was found to be very flexible through learning from examples [20].
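The KNN rule mentioned in the survey can be sketched as follows; the feature vectors and labels are hypothetical, not taken from any of the cited studies:

```python
from collections import Counter

def knn_classify(sample, training, k=3):
    """K-Nearest Neighbor: label a sample by majority vote among the k
    training samples closest to it in feature space."""
    by_dist = sorted(training,
                     key=lambda t: sum((a - b) ** 2
                                       for a, b in zip(sample, t[0])))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# hypothetical 2-D feature vectors (e.g. backscatter, texture) with labels
train_set = [((1.0, 1.0), "ice"), ((1.2, 0.9), "ice"),
             ((5.0, 5.1), "water"), ((4.8, 5.3), "water")]
print(knn_classify((1.1, 1.0), train_set, k=3))
```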
VIII. CONCLUSION
This paper has examined the various stages of image processing techniques. An overview of related image processing methods, such as preprocessing, segmentation, feature extraction, and classification techniques, has been presented, together with recent research in image processing techniques.
REFERENCES
1. K.M.M. Rao, "Overview of Image Processing", Readings in Image Processing; Anil K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989.
2. Kenneth R. Castleman, Digital Image Processing, Prentice-Hall, 1996.
3. Gaurav Kumar and Pradeep Kumar Bhatia, "A Detailed Review of Feature Extraction in Image Processing Systems", Fourth International Conference on Advanced Computing & Communication Technologies, 2014.
4. Ernest L. Hall, Computer Image Processing and Recognition, Academic Press, 1979.
5. Chellappa, Digital Image Processing, 2nd Edition, IEEE Computer Society Press, 1992.
6. M. Mansourpour, M.A. Rajabi and J.A.R. Blais, "Effects and Performance of Speckle Noise Reduction Filters on Active Radar and SAR Images", ISPRS, Vol. XXXVI-1/W41.
7. S. Manikandan, Chhabi Nigam, J.P. Vardhani and A. Vengadarajan, "Gradient based Adaptive Median Filter for Removal of Speckle Noise in Airborne Synthetic Aperture Radar Images", ICEEA, 2011.
8. L. Gagnon and A. Jouan, "Speckle Filtering of SAR Images: A Comparative Study between Complex-Wavelet-Based and Standard Filters", SPIE, 1997.
9. P. Karunakar, V. Praveen and O. Ravi Kumar, "Discrete Wavelet Transform-Based Satellite Image Resolution Enhancement", Advance in Electronic and Electric Engineering, ISSN 2231-1297, Vol. 3, No. 4, pp. 405-412, 2013.
10. Young Gi Byun, You Kyung Han and Tae Byeong Chae, "A Multispectral Image Segmentation Approach for Object-based Image Classification of High Resolution Satellite Imagery", KSCE, 2012.
11. Debasish Chakraborty, Gautam Kumar Sen and Sugata Hazra, "High-resolution satellite image segmentation using Holder exponents", J. Earth Syst. Sci., Vol. 118, No. 5, pp. 609-617, October 2009.
12. Ousseini Lankoande, Majeed M. Hayat and Balu Santhanam, "Segmentation of SAR Images Based on Markov Random Field Model".
13. Leen-Kiat Soh and Tsatsoulis, "A feature extraction technique for synthetic aperture radar (SAR) sea ice imagery", IEEE, 1993.
14. John F. Vesecky, Martha P. Smith and Ramin Samadani, "Extraction of Ridge Feature Characteristics from SAR Images of Sea Ice", IEEE, 1989.
15. Karvonen, J. and Kaarna, A., "Sea Ice SAR Feature Extraction by Non-Negative Matrix and Tensor Factorization", IEEE, 2008.
16. Natalia Yu. Zakhvatkina, Vitaly Yu. Alexandrov, Ola M. Johannessen, Stein Sandven and Ivan Ye. Frolov, "Classification of Sea Ice Types in ENVISAT Synthetic Aperture Radar Images", IEEE, 2012.
17. Wang, Tan, Yang and Xuezhi, "A multi-level SAR sea ice image classification method by incorporating egg-code-based expert knowledge", IEEE, 2012.
18. Kanika Kalra, Anil Kumar Goswami and Rhythm Gupta, "A Comparative Study of Supervised Image Classification Algorithms for Satellite Images", IJEEDC, 2013.
19. Karvonen, J. and Simila, M., "Independent component analysis for sea ice SAR image classification", IEEE, 2001.
BIOGRAPHY
B. Chitradevi is an Assistant Professor in the Department of Computer Science, Thanthai Hans Roever College, Bharathidasan University. She holds M.Sc., M.Phil. and B.Ed. degrees and is pursuing a Ph.D. Her research interests include image processing, data mining, and neural network algorithms.
P. Srimathi is an Assistant Professor in the Department of Computer Applications, Thanthai Hans Roever College, Bharathidasan University. She holds M.Sc. and M.Phil. degrees. Her research interests include image processing, data mining, and neural network algorithms.