
ISSN: 2320-5407 Int. J. Adv. Res. 8(06), 726-734

Journal Homepage: www.journalijar.com

Article DOI: 10.21474/IJAR01/11152
DOI URL: http://dx.doi.org/10.21474/IJAR01/11152

RESEARCH ARTICLE
A REVIEW ON DIGITAL IMAGE PROCESSING: APPLICATIONS, TECHNIQUES AND
APPROACHES IN VARIOUS FIELDS

Reshma Deshmukh and Anup Vibhute


SVERI's College of Engineering, Pandharpur.
……………………………………………………………………………………………………....
Manuscript Info
Manuscript History
Received: 10 April 2020
Final Accepted: 12 May 2020
Published: June 2020

Key words:-
Image Processing, DIP, Image Restoration, Image Segmentation, Image Enhancement, Image Processing Algorithms, SIFT, SURF, BRIEF, ORB

Abstract
Image processing is a complex process that involves various techniques and algorithms. There is no fixed pattern, and hence there are limits on adopting image processing in any process. The present work is a thorough review of the different applications of image processing techniques in various fields of modern engineering. Digital image processing (DIP) is being used in applications such as video modification, biometric systems in offices, endoscopy, quality monitoring in textile and other production industries, digital photograph enhancement and signature verification. The review also gives a thorough insight into the various techniques involved in image processing, such as image acquisition, image segmentation, image transformation, image restoration and image compression. The advantages and limitations of certain strategies and algorithms used in image processing, such as SIFT, SURF, BRIEF and ORB, are discussed in detail.

Copyright, IJAR, 2020. All rights reserved.


……………………………………………………………………………………………………....
Introduction:-
There are two types of image processing viz. Analog image processing (AIP) and digital image processing (DIP).
AIP can be used for the hard copies like printouts and photographs. Image analysts use a range of fundamentals of
interpretation while using these visual techniques. In DIP, computer algorithms are used to perform image
processing. Digital image processing is generally preferred over AIP: a number of algorithms are available to process the input data, and problems such as noise or distortion in the signal can be avoided. Fast, efficient and versatile, DIP has become an indispensable part of modern engineering [1].

Image processing emphasizes the target area to be studied as well as the knowledge of the analyst. Association is another important tool in image processing through visual techniques; hence, during the analysis, one has to apply personal knowledge and collateral data to image processing. Image processing is closely related to artificial or simulated vision, and hence it has at least one of the following probable goals.
1. Hallucination: monitoring and describing objects which do not exist in practice but whose features are described.
2. Image restoration and/or sharpening: improving the quality of the image, i.e. contrast, brightness, tint, hue, sharpness, etc.


3. Image retrieval: searching for an image of interest. The parameters of the desired image, such as shape, colours and pattern, are specified, and the desired image is sought from the provided images.
4. Measurement of pattern: measuring the range of objects in a given image or targeted area.
5. Image recognition: the process of differentiating specific objects in the image [2].

As there is no fixed technique suited to all applications, all possible techniques must be studied along with their advantages and limitations. This review discusses various applications of image processing, provides thorough knowledge of the techniques adopted in image processing, preferably DIP, and finally discusses a few algorithms commonly used in image matching.

Applications of image processing:


There are several applications of image processing, as mentioned below.

Image sharpening and restoration:


It is the process of enhancing the visual impact of the image so as to improve its information content. This involves sharpening of edges/boundaries and/or contrast adjustment. Many algorithms for accomplishing contrast enhancement have been developed and applied to problems in image processing; a few are mentioned below, and a sketch of unsharp masking follows the list.
1. Contrast enhancement
2. Intensity, hue, and saturation transformations
3. Density slicing
4. Edge enhancement
5. Making digital mosaics
6. Producing synthetic stereo images
7. Noise removal using a Wiener filter
8. Linear contrast adjustment
9. Median filtering
10. Unsharp mask filtering
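
As an illustration of item 10 above, the sketch below applies unsharp mask filtering with OpenCV. This is a minimal example rather than the procedure of any cited work; the file name, blur size and blending weights are assumptions chosen for demonstration.

```python
import cv2

# Minimal unsharp-mask sharpening sketch (file name and parameters are placeholders).
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Blur the image to estimate its low-frequency (smooth) content.
blurred = cv2.GaussianBlur(image, (9, 9), 10)

# 2. Add back a scaled version of the detail: sharpened = 1.5*original - 0.5*blurred.
sharpened = cv2.addWeighted(image, 1.5, blurred, -0.5, 0)

cv2.imwrite("sharpened.jpg", sharpened)
```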

Character recognition:
The processing speed of scanned images can be improved by character recognition, which identifies and extracts the text content from different data fields. For example, when we scan a form and use document imaging software to process it, OCR (Optical Character Recognition) allows us to transfer information directly from the document to an electronic database. OCR converts scanned or photographed images of typewritten or printed text into binary data. It is a technique for digitizing printed manuscripts so that subsequent processes such as editing, search, text-to-speech conversion, key data extraction and text mining can be performed. Early systems had to be trained with images of every character, which limited them to one font at a time. Modern systems provide great versatility in the size, type and orientation of fonts, and some commercial methods are capable of reproducing formatted output that closely resembles the original scanned page, including columns, images and other non-textual components [3].
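
As a concrete example of this workflow, the sketch below uses the Tesseract engine through the pytesseract wrapper, which is only one possible OCR toolchain and not necessarily the one discussed in [3]; the file name is a placeholder and Tesseract must be installed separately.

```python
import cv2
import pytesseract  # Python wrapper around the Tesseract OCR engine

# Load a scanned form (placeholder path) and binarize it so the text stands out.
page = cv2.imread("scanned_form.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Convert the printed text in the image into an editable character string.
text = pytesseract.image_to_string(binary)
print(text)
```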

Target recognition:
Automatic Target Detection/Recognition (ATDR) detects, classifies and tracks target objects embedded in an image produced by laser radar (LR), synthetic-aperture radar (SAR), or an infrared or video camera. ATDR converts the signal from the sensor into a digital image and then separates the target from the background or surrounding area by extracting a coarse shape or outline of the target object.

Finally, it identifies the object by matching the features describing the target [4]. Segmentation is one of the main steps in image analysis for the detection, recognition or identification of objects, and several techniques and algorithms have been developed for it, such as (i) edge detection methods, (ii) region splitting methods, (iii) region growing methods and (iv) clustering methods. Many algorithms and techniques are utilized in the ATR process, but no single satisfactory approach has yet been found; a high-performance ATR system can be built by blending several approaches together.

Pattern recognition:
In this type, objects or images are identified by matching them with different provided patterns. The matching is done on the basis of similarities in shape, colour, etc. The best-known example is the 'finding Waldo' problem.


Biometric identification:
Biometrics refers to metrics related to the characteristics of human beings, such as fingerprints, hand geometry,
signatures, retina and iris patterns, voice waves, DNA etc.
New and emerging biometric techniques are:
1. Human scent recognition
2. EEG biometrics
3. Skin spectroscopy
4. Knuckles texture
5. Fingernail recognition [5].

A comparative analysis of different biometric techniques is presented in Table 1. The most popular biometric recognition method is fingerprint identification. Every individual has unique fingerprints. Most fingerprint matching systems are based on four types of fingerprint representation schemes: gray scale image, phase image, skeleton image, and minutiae. Due to its distinctiveness, compactness, and compatibility with the features used by human fingerprint experts, minutiae-based representation has become the most widely adopted fingerprint representation scheme [6].

Table 1:- Comparison of different biometric techniques.

Biometrics            Facial Recognition   Iris scan   Fingerprint   Finger vein   Voice Recognition   Lips recognition
Accuracy              Low                  High        Medium        High          Low                 Medium
Cost                  High                 High        Low           Medium        Medium              Medium
Size of template      Large                Small       Small         Medium        Small               Small
Long term stability   Low                  Medium      Low           High          Low                 Medium
Security level        Low                  Medium      Low           High          Low                 High

Feature Extractor:
Feature extraction is the core of fingerprint technology. The quality of the captured image is enhanced by removing noise with a noise-reduction algorithm, after which the image is processed to determine the minutiae. The minutiae most frequently used in applications are bifurcation points and ridge endings.

Matcher:
The fingerprint images are matched against those in the database for either identification (one-to-many matching) or verification (one-to-one matching).

There are some other applications such as remote sensing, transmission encoding, machine vision, colour
processing, video processing and microscopy.

Techniques of image processing:


Figure 1 represents different techniques of image processing.

Figure 1:- Techniques of Image Processing.


A detailed explanation is given below.

Image Segmentation:
Segmentation is the process of dividing an image into component regions or objects in order to help annotate the object scene. Image segmentation is done to identify the content of the image carefully, and in this context edge detection is a fundamental tool. Several general-purpose algorithms and techniques have been adopted for image segmentation. Despite the availability of several segmentation methods, the selection of a method, and hence of an algorithm, relies on the nature and type of image. Thus the selection of a segmentation method and an appropriate algorithm remains a challenge in image processing and computer vision [5].

Depending on the approach, segmentation may be region based, edge based, threshold based, or feature based (clustering).

Region based segmentation:


Region based segmentation is the process of dividing an image into regions, i.e. small groups of connected pixels having similar properties. Regions are used to interpret images; a region may correspond to a particular object or to different parts of an object. Region based techniques are generally better for noisy images in which borders are difficult to detect, and they offer fair accuracy.

Edge based segmentation:


Edge based segmentation focuses either on discontinuities in intensity values or on similar intensity values. In the case of discontinuous intensity values, the image is partitioned based on abrupt changes in intensity, such as edges or boundaries. Edge detection is a problem of primary importance in image analysis: the obtained boundary marks the edges of the desired object, and hence, by detecting its edges, the object can be segmented from the image. The output obtained by applying an edge detection algorithm is a binary image. Edge based methods are interactive in nature.

There are three fundamental steps in edge detection (a Canny-based sketch is given after the list):


1. Filtering and enhancement: in order to facilitate the detection of edges, it is essential to suppress as much noise as possible and to determine changes of intensity in the neighbourhood of a point without destroying the true edges.
2. Detection of edge points: determine which edge pixels should be discarded as noise and which should be retained (usually, thresholding provides the criterion used for detection).
3. Edge localization: not all of the points in an image are edges for a particular application. Edge localization determines the exact location of an edge; edge thinning and linking are usually required in this step [5].
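
The three steps above map closely onto the classical Canny detector; the sketch below is a minimal OpenCV illustration, with the file name, blur size and hysteresis thresholds chosen arbitrarily.

```python
import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Step 1: filtering and enhancement - suppress noise before measuring intensity changes.
smoothed = cv2.GaussianBlur(image, (5, 5), 1.4)

# Steps 2-3: cv2.Canny computes intensity gradients, keeps or discards candidate edge
# pixels via hysteresis thresholding, and thins the result to localized one-pixel edges.
edges = cv2.Canny(smoothed, 50, 150)

cv2.imwrite("edges.png", edges)  # binary edge map
```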

Thresholding:
Thresholding is a simple and powerful technique for segmenting images that have light objects on a dark background. By choosing an appropriate threshold T, it converts a multi-level image into a binary image, dividing the image pixels into regions and separating objects from the background. Depending on T, there are two approaches: global thresholding, where T is constant over the whole image, and local thresholding, where T takes different values across the image [7]. If the background illumination is uneven, global thresholding fails. If the intensity of a pixel (x, y) is greater than or equal to the threshold value, it is considered part of the object; otherwise it belongs to the background. Resistance to noise is low when this method is adopted.
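
A minimal OpenCV sketch of the two approaches follows; Otsu's method for choosing the global T and the 31x31 neighbourhood for the local case are illustrative assumptions.

```python
import cv2

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Global thresholding: one T for the whole image (chosen here automatically by Otsu's
# method); pixels >= T become object (255), the rest become background (0).
T, global_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local (adaptive) thresholding: T varies across the image, computed from the mean of
# each 31x31 neighbourhood, which copes better with uneven background illumination.
local_mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
```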

Feature Based segmentation:


Clustering is the process of grouping objects based on their properties so that each cluster contains similar objects, which are dissimilar to the objects of other clusters. Clustering is performed by different algorithms using various methods for computing or finding the clusters. Good clustering methods produce high intra-cluster similarity and low inter-cluster similarity. A general approach to image clustering addresses the following issues:
1. How to represent the image?
2. How to organize the data?
3. How to classify an image to a certain cluster?

Clustering methods are classified into two types, described below.


K mean clustering:
K-mean is one of the fast, robust, simplest unsupervised learning algorithms that solve the well known clustering
problem. The method is to classify the given data set through a certain number of k clusters that are fixed a priori.
K-mean clustering algorithms gives optimal result when data set are dissimilar.
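
As a sketch of K-means applied to colour-based segmentation, the example below uses OpenCV's cv2.kmeans; the number of clusters k = 4 and the file name are arbitrary choices for illustration.

```python
import cv2
import numpy as np

image = cv2.imread("input.jpg")               # placeholder path
pixels = image.reshape(-1, 3).astype(np.float32)  # one row per pixel, 3 colour values

# Classify the pixels into k clusters fixed a priori (k = 4 is an arbitrary choice here).
k = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel by its cluster centre to visualise the segmentation.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(image.shape)
cv2.imwrite("kmeans_segments.png", segmented)
```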

Fuzzy C-Means (FCM) algorithm:


Fuzzy clustering is a method which allows objects to belong to more than one cluster, with different degrees of membership. It is one of the effective methods for pattern recognition, and the most commonly used fuzzy clustering algorithm is Fuzzy C-Means [8]. Using FCM, more information from the data set is retained: each data point is assigned a membership to every cluster centre, so a data point may belong to more than one cluster centre. However, the difficulties of digital image segmentation still pose great challenges for computer vision, and many researchers have created methods to deal with the problem. Zimmer et al. [9] created an active contour (snakes) method to detect the mobility of live cells. Coskun et al. [10] used inverse modelling to detect the mobility of living cells. Mukherjee et al. [11] applied thresholding to handle the tracking problem. Researchers continue to develop image segmentation algorithms tailored to specific requirements.
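
Returning to FCM, the sketch below is a bare NumPy implementation of the membership and cluster-centre update rules, intended only to show the idea; the fuzzifier m = 2, the iteration count and the random stand-in data are assumptions, and a tested library would normally be used instead.

```python
import numpy as np

def fuzzy_c_means(data, c=3, m=2.0, iters=50, seed=0):
    """Minimal FCM sketch: data has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((data.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)            # memberships of each point sum to 1
    for _ in range(iters):
        w = u ** m                               # fuzzified memberships
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # each point belongs to every cluster
    return centers, u

# Example: cluster flattened grayscale intensities (random stand-in data) into 3 fuzzy regions.
pixels = np.random.rand(1000, 1)
centers, memberships = fuzzy_c_means(pixels, c=3)
```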

Model based segmentation:


This technique is based on Markov random fields (MRF). For colour segmentation, built-in region constraints are used, and to define the accuracy of edges the MRF is combined with edge detection [12]. This method captures the relations among colour components.

Image compression:
Image compression is the reduction of the data records of digital images by eliminating duplication, so that the images can be stored and transmitted more efficiently [13]. Image compression is of two types: lossy and lossless.

Lossless compression:
If the quality of the image remains unaltered after compression, the compression is lossless. It is mostly used for medical imaging, technical drawing content, archival purposes, etc.

Lossy compression:
In lossy compression the quality of the data decreases after compression. Lossy approaches are used in environments in which a minor loss of quality is acceptable in exchange for speed. The most common lossy format is JPEG, which compresses full colour or gray scale images: the image is divided into non-overlapping eight-by-eight blocks and compressed using the discrete cosine transform [14]. Another compression technique is the wavelet transform, in which the data are divided into different frequency components, each of which is studied separately. Wavelets have advantages over traditional Fourier approaches in examining physical situations.
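
The difference between the two compression types can be seen directly with OpenCV's encoders, as in the sketch below; the JPEG quality of 60 and the file name are arbitrary.

```python
import cv2

image = cv2.imread("input.jpg")  # placeholder path

# Lossy: encode as JPEG (block-DCT based); lower quality -> smaller file, more loss.
ok, jpeg_bytes = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 60])

# Lossless: encode as PNG; the decoded image is bit-for-bit identical to the original.
ok, png_bytes = cv2.imencode(".png", image, [cv2.IMWRITE_PNG_COMPRESSION, 9])

print("JPEG size:", len(jpeg_bytes), "bytes; PNG size:", len(png_bytes), "bytes")
```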

Classification of image:
Classification is done to extract information from images and to label their pixels. Many images of the same objects are required, and effective classification needs an appropriate scheme and adequate training samples; classification is done as per the user's need [15]. Numerous classification approaches are available, such as artificial neural networks, expert systems and fuzzy logic. Classification algorithms may be per-pixel, sub-pixel or per-field. Per-pixel classification is the most widely used method. Sub-pixel algorithms address the mixed-pixel problem and provide a higher level of accuracy. For fine 3D resolution, per-field classification is the best option.

Classification techniques may be either supervised or unsupervised. In supervised classification, spectral signatures obtained from training samples are used to classify an image with the help of multivariate classification tools. In unsupervised classification the output depends on the machine, without any interaction with the user: pixels belonging to the same category are grouped into one class.
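
As a toy illustration of supervised classification with multivariate tools, the sketch below assigns each pixel to the class whose mean spectral signature is nearest; this is not the scheme surveyed in [15], and the band count, class count and random stand-in data are placeholders.

```python
import numpy as np

# Training samples (spectral signatures) and the image are random stand-ins here.
train_pixels = np.random.rand(200, 4)            # 200 samples, 4 spectral bands
train_labels = np.random.randint(0, 3, 200)      # 3 land-cover classes

# Mean spectral signature of each class, learned from the training samples.
class_means = np.stack([train_pixels[train_labels == c].mean(axis=0) for c in range(3)])

image = np.random.rand(64, 64, 4)                # stand-in multispectral image
flat = image.reshape(-1, 4)

# Assign every pixel to the class whose mean signature is nearest (minimum distance).
distances = np.linalg.norm(flat[:, None, :] - class_means[None, :, :], axis=2)
classified = distances.argmin(axis=1).reshape(64, 64)
```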

Image Restoration:
Obtaining a clean image from a corrupted and noisy image is done by a process known as image restoration [16]. Restoration rebuilds images spoiled by noise or system error. Degradation may be due to noise from the sensor, camera misfocus or atmospheric disturbance.

The following procedures are commonly involved in restoring a degraded image:


1. Inverse filtering: in this method the image is processed assuming a known blurring function. Restoration is good when noise is absent.
2. Wiener filtering: it provides an optimal trade-off between de-noising and inverse filtering, as sketched after the list. The results are better than those of straight inverse filtering.
3. Wavelet restoration: here wavelet based algorithms are used to restore the image.
4. Blind de-convolution: if neither an assumption nor information about the additive noise in the image is available, blind filtering is done. These are the typical steps followed during the process [17, 18].
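
As a sketch of step 2, SciPy provides an adaptive Wiener filter for de-noising (it does not perform full inverse filtering of a known blur); the file name and the 5x5 window are assumptions.

```python
import cv2
import numpy as np
from scipy.signal import wiener

# Load a noisy grayscale image (placeholder path) and apply an adaptive Wiener filter,
# which trades off de-noising against preserving local detail.
noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
restored = wiener(noisy, mysize=(5, 5))          # 5x5 local window is an arbitrary choice

cv2.imwrite("restored.png", np.clip(restored, 0, 255).astype(np.uint8))
```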

Image enhancement:
It increases the quality of an image by applying various filters. For restoration, by contrast, prior knowledge of the degradation is necessary, and restoration may be achieved via two types of model, viz. a degradation model and a restoration model [17].

Image enhancement is done to make the image more suitable than the original for a specific application, such as graphic display. Quantifying the criterion for enhancement is a difficult task; hence various image enhancement techniques are adopted. Image enhancement is of two types, depending on the approach.

Spatial domain enhancement (SDE):


In spatial domain enhancement, each pixel value is modified as a function of the intensities of its surrounding pixels.

Frequency domain enhancement (FDE):


It may involve point processing, which includes the following steps (a brief histogram-equalization sketch follows the list):
1. Intensity transformation
2. Image negative
3. Contrast stretching
4. Grey level slicing
5. Histogram processing
6. Image subtraction
7. Image averaging [18].
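
A brief sketch of two of these point operations (histogram processing via equalization, and contrast stretching) is given below using OpenCV and NumPy; the file name and percentile limits are illustrative.

```python
import cv2
import numpy as np

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Histogram processing: spread the grey levels so the histogram is roughly uniform.
equalized = cv2.equalizeHist(gray)

# Contrast stretching: linearly map the observed intensity range onto the full 0-255 range.
lo, hi = np.percentile(gray, (2, 98))
stretched = np.clip((gray - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

# Image negative: another simple point operation from the list above.
negative = 255 - gray
```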

Acquisition of image:
It is the first step of any visualization scheme: once the image is obtained, the various processes can be applied to it. Basically, image acquisition is the process through which images are retrieved from various resources. The most common method of image acquisition is real-time acquisition, which creates a pool of files that are processed automatically. An image acquisition method can also be used to produce 3D geometric data [19].
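
A minimal sketch of real-time acquisition with OpenCV's VideoCapture is shown below; the camera index, the fixed frame count and the grayscale conversion step are assumptions made for the demonstration.

```python
import cv2

# Real-time acquisition sketch: grab frames from a camera (device 0 is an assumption)
# and hand each one to downstream processing automatically.
capture = cv2.VideoCapture(0)
try:
    for _ in range(100):                     # process a fixed number of frames for the demo
        grabbed, frame = capture.read()
        if not grabbed:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # example downstream step
finally:
    capture.release()
```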

Image Representation:
Image representation means converting the raw data into a form on which computer processing can be applied. Basically, two types of techniques are used to represent pictures.

Boundary representation:
It focuses on the external shape of the object, e.g. whether it has corners, is rounded, or has some other form.

Region representation:
It is used when the internal properties of the image are to be studied. Depending upon the level of processing of images by the machine, there are four methods of image representation: pixel based, block based, region based and hierarchical. Image representation is suitable for the formation of entities and knowledge-based models extracted from image databases created using predefined decision rules [20].


Image transformation:
Image transformation covers arithmetic operations on images and more complex mathematical operations that convert images from one representation to another. These operations include simple image arithmetic, the Fourier transform, the fast Hartley transform, the Hough transform and the Radon transform [21-23]. The most popular is the Fourier transform (FT).
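
The sketch below computes a 2D Fourier transform of an image with NumPy and recovers the image by the inverse transform; the file name is a placeholder and the log-scaled magnitude spectrum is shown only as a common way of inspecting the result.

```python
import numpy as np
import cv2

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# 2D Fourier transform: convert the image from the spatial domain to the frequency domain.
spectrum = np.fft.fftshift(np.fft.fft2(gray))          # shift zero frequency to the centre

# Log-scaled magnitude spectrum, useful for inspecting periodic patterns and noise.
magnitude = 20 * np.log(np.abs(spectrum) + 1)

# Inverse transform recovers the original image (up to numerical precision).
recovered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```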

Image matching and different approaches:


Feature detection and image matching are still important areas of work in machine vision and robotics. An ideal feature detection technique should cope with transformations of the image such as rotation, scale, illumination, noise and affine transformations. Ideal features should be highly distinctive, so that a single feature can be correctly matched with high probability [23-24]. Image matching in DIP is a complex process that uses several algorithms. The most popular approaches are given below.

Scale Invariant Feature Transform (SIFT):


Lowe's SIFT handles image rotation, affine transformation, intensity change and viewpoint change when matching features. The SIFT algorithm has four basic steps:
1. Detection of scale-space extrema using the Difference of Gaussians (DoG).
2. Key point localization, where key point candidates are localized and refined by eliminating low-contrast points.
3. Key point orientation assignment based on the local image gradient.
4. Descriptor generation, which computes a local image descriptor for each key point based on image gradient magnitude and orientation.

SIFT is a feature detector developed by Lowe in 2004 [25]. Although SIFT is very efficient in object recognition applications, it has a large computational complexity, which is a major drawback especially for real-time applications [25-26]. To reduce the computational cost, several variants and extensions of SIFT have been developed [27-30].
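
A minimal OpenCV sketch of SIFT detection and ratio-test matching is given below; cv2.SIFT_create is available in recent OpenCV releases (older builds exposed it through the contrib module), and the file names and the 0.75 ratio are assumptions.

```python
import cv2

img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                                # DoG detector + 128-D descriptor
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio test: keep a match only if it is clearly better than the second-best candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(good), "good SIFT matches")
```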

Speeded-Up Robust Features (SURF):


SURF approximates the DoG with box filters. Squares are used instead of Gaussian averaging because convolution with a square is much faster if the integral image is used, and this can be done in parallel for different scales. SURF uses a blob detector based on the Hessian matrix to find the points of interest. For orientation assignment, it uses wavelet responses in both the horizontal and vertical directions, applying adequate Gaussian weights. SURF also uses wavelet responses for feature description: a neighbourhood around the key point is selected and divided into subregions, and for each subregion the wavelet responses are aggregated to form the SURF feature descriptor. The sign of the Laplacian, which is already computed during detection, is used for the underlying interest points; it distinguishes bright blobs on dark backgrounds from the reverse case. During matching, features are compared only if they have the same type of contrast (based on this sign), which allows faster matching [30]. The SURF technique, which is an approximation of SIFT, performs faster than SIFT without reducing the quality of the detected points [31]. Both SIFT and SURF are thus based on a descriptor and a detector.
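
The sketch below shows how SURF might be invoked through OpenCV's contrib module; because SURF is patented, cv2.xfeatures2d.SURF_create is only present in builds compiled with the non-free modules enabled, so this is an illustration rather than something guaranteed to run on a default installation, and the Hessian threshold is arbitrary.

```python
import cv2

gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path

# SURF lives in the contrib module and requires a non-free-enabled OpenCV build.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # threshold is illustrative
keypoints, descriptors = surf.detectAndCompute(gray, None)

# Each keypoint carries the sign of the Laplacian, so candidate matches with opposite
# contrast (bright-on-dark vs dark-on-bright blobs) can be rejected cheaply.
print(len(keypoints), "SURF keypoints, descriptor size", surf.descriptorSize())
```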

Binary Robust Independent Elementary Features (BRIEF):


Rublee et al. proposed Oriented FAST and Rotated BRIEF (ORB) as another efficient alternative to SIFT and SURF. It requires less computation than SIFT with almost similar matching performance [32,33]. Any of these feature detection methods can also be employed in remote sensing applications such as sea ice monitoring; for example, Zhen Liu et al. used SIFT-based matching to find icebergs whose shapes had changed due to collision or splitting [34], and feature tracking algorithms have been used for ice motion tracking, e.g. by R. Kwok [35]. Little literature is available comparing SIFT and SURF [36-38], and there are no papers comparing ORB with them. Fish-eye distortions, caused by the camera lens or created manually using spherical distortions, are used for creating hemispherical panoramic images. Planetariums apply a fish-eye projection of the night sky, flight simulators use fish-eye projections to create an immersive environment for trainees, and some motion-picture formats also use these projections [39]. Meteorology uses fish-eye lenses to capture cloud formations.

Oriented FAST and Rotated BRIEF (ORB):


ORB is a fusion of the FAST key point detector and the BRIEF descriptor with some modifications [32]. Initially, it uses FAST to determine the key points, and then a Harris corner measure is applied to find the top N points. FAST does not compute orientation and is rotation variant, so the intensity-weighted centroid of the patch around the corner is computed, and the orientation of the vector from the corner point to this centroid is used to improve rotation invariance. In-plane rotation degrades the performance of BRIEF; in ORB, a rotation matrix is computed using the orientation of the patch, and the BRIEF descriptors are then steered according to this orientation.
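
A minimal OpenCV sketch of ORB detection and Hamming-distance matching is given below; the file names and the feature count are assumptions.

```python
import cv2

img1 = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)                      # FAST keypoints + steered BRIEF
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# ORB descriptors are binary strings, so they are compared with the Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("best match distance:", matches[0].distance if matches else None)
```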

Conclusion:-
Image processing has become an indispensable part of modern engineering, as it is adopted in various fields such as biometrics, signature identification, image enhancement, quality control in production industries and machine vision. There are mainly two types of image processing, viz. AIP and DIP, and DIP has its own advantages. DIP has various applications such as image sharpening and restoration, character recognition, signature or pattern recognition, target recognition and image enhancement. Each application adopts a single technique or a combination of techniques such as image acquisition, segmentation, transformation and enhancement. Various algorithms are adopted for image matching in DIP, such as SIFT, SURF, BRIEF and ORB. Thus this review describes various aspects of DIP and gives a thorough insight into the applications, techniques and algorithms adopted.

References:-
1. J. Kuruvilla, D. Sukumaran, A. Sankar, S. P. Joy, "A review on image processing and segmentation", International Conference on Data Mining and Advanced Computing (SAPIENCE), 198-203, 2016.
2. B. Basavaprasad, M. Ravi, "A study on the importance of image processing and its applications", IJRET: International Journal of Research in Engineering and Technology, eISSN: 2319-1163, pISSN: 2321-7308.
3. A. F. Mollah, N. Majumder, S. Basu, M. Nasipuri, "Design of an Optical Character Recognition System for Camera based Handheld Devices", IJCSI International Journal of Computer Science Issues.
4. G. N. Srinivasan, G. Shobha, "Segmentation Techniques for Target Recognition", International Journal of Computers and Communication, Issue 3, Volume 1, 2007.
5. J. Kuruvilla, A. Sankar, D. Sukumaran, "A Study on image analysis of Myristica fragrans for Automatic Harvesting", IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, pp. 50-55.
6. J. Feng, A. K. Jain, "Fingerprint Reconstruction: From Minutiae to Phase", IEEE Transactions on Pattern Analysis and Machine Intelligence.
7. H. Muller, N. Michoux, D. Bandon, A. Geissbuhler, "A review of content based image retrieval systems in medical applications: clinical benefits and future directions", Int. J. Med. Inform., 73:1, 2004.
8. A. J. Patil, C. S. Patil, R. R. Karhe, M. A. Aher, "Comparative Study of Different Clustering Algorithms", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, ISSN online 2278-8875, print 2320-3765.
9. C. Zimmer, E. Labruyère, V. Meas-Yedid, N. Guillén, J. Olivo-Marin, "Segmentation and tracking of migrating cells in videomicroscopy with parametric active contours: a tool for cell-based drug testing", IEEE Trans. Med. Imaging, 21(10):1212-21, 2002.
10. H. Coskun, Y. Li, M. A. Mackey, "Ameboid cell motility: A model and inverse problem, with an application to live cell imaging data", Journal of Theoretical Biology, 244(2):169-179, 2007.
11. O. Dzyubachyk, W. Niessen, E. Meijering, "Advanced Level-Set Based Multiple-Cell Segmentation and Tracking in Time-Lapse Fluorescence Microscopy Images", in IEEE International Symposium on Biomedical Imaging: From Nano to Macro, edited by J. C. Olivo-Marin, I. Bloch, A. Laine, IEEE, Piscataway, NJ, 185-188, 2008.
12. J. Luo, R. T. Gray, H. C. Lee, "Incorporation of derivative priors in adaptive Bayesian color image segmentation", Proc. ICIP'97, Vol. 3, pp. 58-61, Oct 26-29, 1997, Santa Barbara, CA. http://dx.doi.org/10.1109/ICIP.1998.727372
13. S. Dhawan, "A Review of Image Compression and Comparison of its Algorithms", International Journal of Electronics & Communication Technology, 2(1), 2011.
14. G. K. Wallace, "The JPEG Still Picture Compression Standard", Comm. ACM, 34(4), 1991.
15. D. Lu, Q. Weng, "A survey of image classification methods and techniques for improving classification performance", International Journal of Remote Sensing, 28(5), 823-870. http://dx.doi.org/10.1080/01431160600746456
16. P. Li, H. Li, "Fuzzy techniques in image restoration research - a survey", International Journal of Computational Cognition, 2(2), 131-149, 2004.
17. M. Maru, "Image Restoration Techniques: A Survey", International Journal of Computer Trends and Technology, 3(12), 2014.
18. S. Mathur, R. Purohit, A. Vyas, "A Review on basics of Digital Image Processing", ETRASECT-2016 Conference Proceedings, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, www.ijert.org
19. G. Moustakides, D. Briassoulis, E. Psarakis, E. Dimas, "3D image acquisition and NURBS based geometry modelling of natural objects", Advances in Engineering Software, 955-969, 2000.
20. B. Kuriakose, K. P. Preena, "A Review on 2D Image Representation Methods", International Journal of Engineering Research & Technology (IJERT), 4(4), 2015.
21. Webpage: http://bme.med.upatras.gr/improc/index.htm
22. Nurul Hakeem Abd Rahim, "Development of image processing software using Visual Basic 6.0", Faculty of Electrical & Electronic Engineering, Universiti Malaysia Pahang, 2008.
23. K. R. Castleman, Digital Image Processing, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1979.
24. B. Moghaddam, C. Nastar, A. Pentland, "A Bayesian similarity measure for deformable image matching", Image and Vision Computing, vol. 19, no. 5, pp. 235-244, 2001.
25. B. Shan, "A Novel Image Correlation Matching Approach", JMM, vol. 5, no. 3, 2010.
26. David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
27. E. Karami, M. Shehata, A. Smith, "Image Identification Using SIFT Algorithm: Performance Analysis Against Different Image Deformations", in Proceedings of the 2015 Newfoundland Electrical and Computer Engineering Conference, St. John's, Canada, November 2015.
28. M. Güzel, "A Hybrid Feature Extractor using Fast Hessian Detector and SIFT", Technologies, vol. 3, no. 2, pp. 103-110, 2015.
29. Liang-Chi Chiu, Tian-Sheuan Chang, Jiun-Yen Chen, N. Chang, "Fast SIFT Design for Real-Time Visual Feature Extraction", IEEE Trans. on Image Processing, vol. 22, no. 8, pp. 3158-3167, 2013.
30. Y. Ke, R. Sukthankar, "PCA-SIFT: A more distinctive representation for local image descriptors", in Proc. CVPR, vol. 2, pp. 506-513, 2004.
31. Herbert Bay, Tinne Tuytelaars, Luc Van Gool, "Speeded-up robust features (SURF)", Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
32. M. Calonder, V. Lepetit, C. Strecha, P. Fua, "BRIEF: Binary robust independent elementary features", in European Conference on Computer Vision, 2010.
33. Ethan Rublee, Vincent Rabaud, Kurt Konolige, Gary Bradski, "ORB: an efficient alternative to SIFT or SURF", IEEE International Conference on Computer Vision, 2011.
34. Zhen Liu, Ziying Zhao, Yida Fan, Dong Tian, "Automated change detection of multi-level icebergs near Mertz Glacier region using feature vector matching", The International Conference on Image Processing, Computer Vision and Pattern Recognition, 2013.
35. R. Kwok, N. Untersteiner, "The thinning of Arctic sea ice", Physics Today, vol. 64, no. 4, p. 36, 2011.
36. P. Sykora, P. Kamencay, R. Hudec, "Comparison of SIFT and SURF Methods for Use on Hand Gesture Recognition based on Depth Map", AASRI Procedia, vol. 9, pp. 19-24, 2014.
37. P. M. Panchal, S. R. Panchal, S. K. Shah, "A Comparison of SIFT and SURF", International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, no. 2, pp. 323-327, 2013.
38. N. Y. Khan, B. McCane, G. Wyvill, "SIFT and SURF Performance Evaluation Against Various Image Deformations on Benchmark Dataset", in Proceedings of the 2011 International Conference on Digital Image Computing: Techniques and Applications.
39. J. Faidit, "Planetariums in the world", Proceedings of the International Astronomical Union, vol. 5, no. 260, 2009.

