CONTENT BASED IMAGE RETRIEVAL USING SVM CLASSIFIER
B. Swapna (15261A0467)
T. Pradeep Kumar (15261A04B4)
V. Pravallika (15261A04B7)
This is to certify that the Major project work entitled “Content Based Image Retrieval using SVM
Classifier” is a bonafide work carried out by
B. Swapna (15261A0467)
T.Pradeep Kumar (15261A04B4)
V.Pravallika (15261A04B7)
(Signature)                                        (Signature)
Dr. T. R. Vijaya Lakshmi                           Dr. S. P. Singh
Assistant Professor                                Professor & Head
ACKNOWLEDGEMENT
We express our deep sense of gratitude to our guide, Dr. T. R. Vijaya
Lakshmi, Mahatma Gandhi Institute of Technology, Hyderabad, for her
invaluable guidance and encouragement in carrying out our project.
We wish to express our sincere thanks to Dr. S. P. Singh, Head of the
Department of Electronics and Communication Engineering, M.G.I.T., for
permitting us to pursue our project in Mahatma Gandhi Institute of Technology
and encouraging us throughout the project.
Finally, we thank all the people who have directly or indirectly helped us through
the course of our Project.
B.Swapna
T.Pradeep Kumar
V.Pravallika
ABSTRACT
Content Based Image Retrieval (CBIR) systems are used to find images that are visually
similar to a query image. A common technique used to implement a CBIR system is bag
of visual words, also known as bag of features. Instead of using actual words as in
document retrieval, bag of features uses image features such as color, texture, and shape
as the visual words that describe an image. Image features are an important part of CBIR
systems. Image features can also be local image features such as speeded up robust
features (SURF), histogram of oriented gradients (HOG), or local binary patterns (LBP). The
benefit of the bag-of-features approach is that the type of features used to create the
visual word vocabulary can be customized to fit the application. In this work, feature
extraction for the SVM classifier is done by computing an RGB histogram for every image
in the database. Given a query image, its feature vector is computed and compared with the
feature vector of every image in the database, and the highest scoring (smallest distance)
images are returned as output. CBIR is presently used in several applications such as
fingerprint identification, biodiversity information systems, digital libraries, crime
prevention, medicine, historical research and many others. At present, texture retrieval is
not included because it produced some faulty results. Further research is needed on
retrieving images based on shape and texture analysis, colour image histograms, and image
ranking using the Euclidean distance method.
TABLE OF CONTENTS
CHAPTER 4 RESULTS AND CONCLUSION
4.1 Simulation Results 43
4.2 Conclusion 46
4.3 Future Enhancements 47
REFERENCES 49
LIST OF FIGURES
Fig no Description Page no.
1.1 Classification of Image Retrieval 1
1.2 Image Retrieval from Database 3
1.3 Image Database 6
2.1 RGB color of a Image 9
2.2 Sample image and its corresponding Histogram 10
2.3 Different types of Textures 12
2.4 Image Extraction 12
2.5 Classical co-occurrence matrix 13
2.6 Haar wavelet example 16
2.7 Daubechies wavelet example 17
2.9 Polygonal Approximation 18
3.1 Color map entries 22
3.2 Pixel values in an Intensity Image define Grey Scale 23
3.3 Binary Image 24
3.4 View of RGB model looking from White to Origin 24
3.5 (a) Original Color image
(b) MATLAB RGB matrix 25
3.6 (a) HSV coordinate system
(b) HSV color model 25
3.7 RGB computation 26
3.8 Color Feature extraction 27
3.9 (a) Original Grey Scale Image
(b) Matlab Grey Scale Intensity 29
3.10 Bitdata format matrix 29
3.11 Block diagram of CBIR 31
3.12 Flowchart of Datasheet 34
3.13 Graphical User Interface 37
CHAPTER 4: RESULTS AND CONCLUSION
4.1 Browse for folder 43
4.2 Browse for Image 44
4.3 Load Datasheet 44
4.4 Resultant images after browse 45
CHAPTER-1
INTRODUCTION
With the advancement in internet and multimedia technologies, a huge amount of
multimedia data in the form of audio, video and images is being used in many fields
such as medical treatment, satellite data, video and still image repositories, digital
forensics and surveillance systems. This has created an ongoing demand for systems
that can store and retrieve multimedia data in an effective way. Many multimedia
information storage and retrieval systems have been developed so far to cater to
these demands.
The earliest of these are Text Based Image Retrieval (TBIR) systems, where the search
is based on automatic or manual annotation of images. A conventional TBIR system
searches the database for text surrounding or describing the image that is similar to
the query string. The most commonly used TBIR system is Google Images. Text based
systems are fast because string matching is a computationally inexpensive process.
However, it is often difficult to express the whole visual content of an image in words,
and TBIR may therefore produce irrelevant results. In addition, annotation of images
is not always correct and consumes a lot of time. To find an alternative way of
searching and to overcome the limitations imposed by TBIR systems, the more intuitive
and user friendly Content Based Image Retrieval (CBIR) systems were developed.
Problem Motivation:
Image databases and collections can be enormous in size, containing hundreds,
thousands or even millions of images. The conventional method of image retrieval is
searching for a keyword that would match the descriptive keyword assigned to the
image by a human categorizer. Currently under development, even though several
systems exist, is the retrieval of images based on their content, called Content Based
Image Retrieval, CBIR. While computationally expensive, the results are far more
accurate than conventional image indexing. Hence, there exists a tradeoff between
accuracy and computational cost. This tradeoff decreases as more efficient algorithms
are utilized and increased computational power becomes inexpensive. "Content-based"
means that the search analyzes the actual contents of the image. The term 'content' in
this context might refer to colors, shapes, textures, or any other information that can
be derived from the image itself. Without the ability to examine image
content, searches must rely on metadata such as captions or keywords. Such metadata
must be generated by a human and stored alongside each image in the database.
Problems with traditional methods of image indexing have led to the rise of interest in
techniques for retrieving images on the basis of automatically-derived features such as
color, texture and shape – a technology now generally referred to as Content-Based
Image Retrieval (CBIR). However, the technology still lacks maturity, and is not yet
being used on a significant scale. In the absence of hard evidence on the effectiveness
of CBIR techniques in practice, opinion is still sharply divided about their usefulness
in handling real-life queries in large and diverse image collections. The concepts
which are presently used for CBIR system are all under research.
Problem Statement:
The problem involves entering an image as a query into a software application that is
designed to employ CBIR techniques in extracting visual properties, and matching
them. This is done to retrieve images in the database that are visually similar to the
query image.
Proposed Solution
The solution initially proposed was to extract the primitive features of a query image
and compare them to those of database images. The image features under
consideration were colour, texture and shape. Thus, using matching and comparison
algorithms, the colour, texture and shape features of one image are compared and
matched to the corresponding features of another image. This comparison is
performed using colour, texture and shape distance metrics. In the end, these metrics
are performed one after another, so as to retrieve database images that are similar to
the query. The similarity between features was to be calculated using algorithms used
by well-known CBIR systems such as IBM's QBIC. For each specific feature there
was a specific algorithm for extraction and another for matching.
Accomplishments
What was accomplished was a software application that retrieved images based on the
features of texture and colour, only. Colour extraction and comparison were
performed using colour histograms and the quadratic distance algorithm,
respectively. Texture extraction and comparison are performed using an energy level
algorithm and the Euclidean distance algorithm, respectively. Due to the time it took
to research the algorithm used to check colour similarity and the trials and failures
with different image formats there was no time to go on to the next important feature,
which was shape. This was unfortunate, since we had accomplished a lot in terms of
research on the topic.
Overview of Report
This report is divided into three main sections. The first section gives a general
introduction to CBIR. The second concerns the background of the features employed
in CBIR. The third deals with the technical part, giving a full explanation of the
algorithms used, how they worked, and a mention of the things that did not work.
Let us start with the word “image”. The surrounding world is composed of images.
Humans use their eyes, which contain about 1.5x10^8 sensors, to obtain images from
the surrounding world in the visible portion of the electromagnetic spectrum
(wavelengths between 400 and 700 nanometres). The light changes on the retina are
sent to the image processing centre in the cortex.
In image database systems, geographical maps, pictures, medical images, pictures in
medical atlases, pictures obtained by cameras, microscopes, telescopes and video
cameras, paintings, drawings, architectural plans, drawings of industrial parts, and
space images are all considered images.
There are different models for color image representation. In the seventeenth century
Sir Isaac Newton showed that a beam of sunlight passing through a glass prism appears
as a rainbow of colors; he was thus the first to understand that white light is composed
of many colors. Typically, a computer screen can display 2^8 = 256 different shades of
gray. For color images, with 8 bits per channel, this gives 2^(3x8) = 16,777,216
different colors.
Clerk Maxwell showed in the late nineteenth century that every color image could be
created using three images – a red, a green and a blue image. A mix of these three
images can produce every color. This model, named the RGB model, is the one primarily
used in image representation. An RGB image can be represented as a triple (R, G, B),
where usually R, G and B take values in the range [0, 255]. Another color model is the
YIQ model (luminance (Y), in-phase (I), quadrature (Q)); it is the basis for the color
television standard. Images are represented in computers as a matrix of pixels. Pixels
have a finite area; if we decrease the pixel dimensions, the pixel brightness becomes
closer to the real brightness at that point.
1.3 IMAGE DATABASE SYSTEMS
Image databases are used in geographical information systems (GIS), robotics systems,
CAD/CAM systems, earth resources systems, medical databases, virtual reality systems,
information retrieval systems, art gallery and museum catalogues, animal and plant
atlases, sky star maps, meteorological maps, catalogues in shops and many other places.
There are several international organizations dealing with different aspects of image
storage, analysis and retrieval, such as AIA (Automated Imaging/Machine Vision).
CHAPTER-2
CONTENT BASED IMAGE RETRIEVAL TECHNIQUES
In contrast to the text-based approach of the systems described above, CBIR operates
on a totally different principle, retrieving stored images from a collection by comparing
features automatically extracted from the images themselves. The commonest features used
are mathematical measures of color, texture or shape. A typical system allows users to
formulate queries by submitting an example of the type of image being sought, though
some offer alternatives such as selection from a palette or sketch input. The system
then identifies those stored images whose feature values match those of the query
most closely, and displays thumbnails of these images on the screen. Some of the
more commonly used types of features for image retrieval are described below.
2.1 COLOR RETRIEVAL
Definition:
One of the most important features that make possible the recognition of images by
humans is colour. Colour is a property that depends on the reflection of light to the
eye and the processing of that information in the brain. We use colour every day to tell
the difference between objects, places, and the time of day. Usually colours are
defined in three dimensional colour spaces. These could either be RGB (Red, Green,
and Blue), HSV (Hue, Saturation, and Value) or HSB (Hue, Saturation, and
Brightness). The last two are dependent on the human perception of hue, saturation,
and brightness. Most image formats such as JPEG, BMP, GIF, use the RGB colour
space to store information. The RGB colour space is defined as a unit cube with red,
green, and blue axes. Thus, a vector with three co-ordinates represents the colour in
this space. When all three coordinates are set to zero the colour perceived is black.
When all three coordinates are set to 1 the colour perceived is white. The other colour
spaces operate in a similar fashion but with a different perception.
Each image added to the collection is analyzed to compute a color histogram which
shows the proportion of pixels of each color within the image. The color histogram
for each image is then stored in the database. At search time, the user can either
specify the desired proportion of each color, or submit an example image from which
a color histogram is calculated. Either way, the matching process then retrieves those
images whose color histograms match those of the query most closely. The matching
technique most commonly used, histogram intersection, was one of the first to be developed. Variants
of this technique are now used in a high proportion of current CBIR systems. Methods
of improving on original technique include the use of cumulative color histograms,
combining histogram intersection with some element of spatial matching, and the use
of region-based color querying. The results from some of these systems can look
quite impressive.
Methods of Representation:
The main method of representing colour information of images in CBIR systems is
through colour histograms. A colour histogram is a type of bar graph, where each bar
represents a particular colour of the colour space being used. In MATLAB, for
example, a colour histogram of an image can be computed in the RGB or HSV colour
space. The bars in a colour histogram are referred to as bins and they represent the
x-axis. The number of bins depends on the number of colours there are in an image.
The y-axis denotes the number of pixels there are in each bin; in other words, how
many pixels in an image are of a particular colour. An example of a sample image and
its corresponding colour histogram is shown in Fig. 2.2.
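As a rough MATLAB sketch of how such a colour histogram can be computed and how two images can then be compared by histogram intersection (this is an illustrative sketch, not the project's exact code: it assumes the Image Processing Toolbox for imhist, 32 bins per channel, and function names invented for the example):

function score = histIntersection(img1, img2)
% Compare two RGB images by intersecting their normalised colour histograms.
% A score of 1 means identical colour distributions; 0 means no overlap at all.
nbins = 32;
h1 = rgbHist(img1, nbins);
h2 = rgbHist(img2, nbins);
score = sum(min(h1, h2));
end

function h = rgbHist(img, nbins)
% Concatenate the red, green and blue channel histograms into one vector.
h = [imhist(img(:,:,1), nbins);
     imhist(img(:,:,2), nbins);
     imhist(img(:,:,3), nbins)];
h = h / sum(h);   % normalise so that image size does not affect the score
end

A query image can then be ranked against every database image by calling this function in a loop and sorting the resulting scores in descending order.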
2.2 TEXTURE RETRIEVAL
The ability to match on texture similarity can often be useful in distinguishing between
regions of images with similar colour (such as sky and sea, or leaves and grass). A
variety of techniques has been used for measuring texture
similarity; the best-established rely on comparing values of what are known as
second-order statistics calculated from query and stored images. Essentially, these
calculate the relative brightness of selected pairs of pixels from each image. From
these it is possible to calculate measures of image texture such as the degree of
contrast, coarseness, directionality and regularity, or periodicity, directionality and
randomness. Alternative methods of texture analysis for retrieval include the use of
Gabor filters and fractals. Texture queries can be formulated in a similar manner to
color queries, by selecting examples of desired textures from a palette, or by
supplying an example query image. The system then retrieves images with texture
measures most similar in value to the query. A recent extension of the technique is the
texture thesaurus, which retrieves textured regions in images on the basis of similarity
to automatically-derived code words representing important classes of texture within
the collection.
Methods of Representation:
There are three principal approaches used to describe texture: statistical, structural and
spectral.
Statistical techniques characterize textures using the statistical properties of the grey
levels of the points/pixels comprising a surface image. Typically, these properties are
computed using: the grey level co-occurrence matrix of the surface, or the wavelet
transformation of the surface.
Structural techniques characterize textures as being composed of simple primitive
structures called “texels” (or texture elements). These are arranged regularly on a
surface according to some surface arrangement rules.
Spectral techniques are based on properties of the Fourier spectrum and describe
global periodicity of the grey levels of a surface by identifying high-energy peaks in
the Fourier spectrum.
Co-occurrence Matrix :
Originally proposed by R.M. Haralick, the co-occurrence matrix representation of
texture features explores the grey level spatial dependence of texture. A mathematical
definition of the co-occurrence matrix is as follows:
- Given a position operator P(i,j),
- let A be an n x n matrix
- whose element A[i][j] is the number of times that points with grey level (intensity)
g[i] occur, in the position specified by P, relative to points with grey level g[j].
- Let C be the n x n matrix that is produced by dividing A with the total number of
point pairs that satisfy P. C[i][j] is a measure of the joint probability that a pair of
points satisfying P will have values g[i], g[j].
- C is called a co-occurrence matrix defined by P.
Examples for the operator P are: “i above j”, or “i one position to the right and two
below j”, etc. [4]. This can also be illustrated as follows. Let t be a translation; then a
co-occurrence matrix Ct of a region is defined for every grey-level pair (a, b) [1]:
Ct(a, b) is the number of site-couples, denoted by (s, s + t), that are separated by the
translation vector t, with a being the grey-level of s, and b being the grey-level of s +
t. For example, with an 8 grey-level image representation and a vector t that considers
only one neighbour, we obtain the classical co-occurrence matrix shown in Fig. 2.5. At
first the co-occurrence matrix is constructed, based on the orientation and distance
between image pixels. Then meaningful statistics are extracted from the matrix as the
texture representation.
Hence, for each Haralick texture feature, we obtain a co-occurrence matrix. These co-
occurrence matrices represent the spatial distribution and the dependence of the grey
levels within a local area. Each (i, j)th entry in the matrices represents the probability
of going from one pixel with a grey level of 'i' to another with a grey level of 'j' under
a predefined distance and angle. From these matrices, sets of statistical measures are
computed, called feature vectors.
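As an illustrative MATLAB sketch of this step (graycomatrix and graycoprops are Image Processing Toolbox functions; the file name, the choice of 8 grey levels and the four offsets below are assumptions made for the example, not values taken from this project):

I = rgb2gray(imread('texture_sample.jpg'));   % placeholder file name
offsets = [0 1; -1 1; -1 0; -1 -1];           % 0, 45, 90 and 135 degrees at distance 1
glcm = graycomatrix(I, 'Offset', offsets, 'NumLevels', 8, 'Symmetric', true);
stats = graycoprops(glcm, {'Contrast', 'Correlation', 'Energy', 'Homogeneity'});
% One value per offset and property; concatenate into a single texture feature vector.
featureVector = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity];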
Tamura Texture:
Based on psychological studies of human visual perception, Tamura explored texture
representation using computational approximations to three main texture features:
coarseness, contrast, and directionality. Each of these texture features is approximately
computed using a specific algorithm.
Coarseness is the measure of granularity of an image, or average size of regions that
have the same intensity.
Contrast is the measure of vividness of the texture pattern. Therefore, the bigger the
blocks that make up the image, the higher the contrast. It is affected by
the use of varying black and white intensities.
Daubechies Wavelet
The Daubechies wavelets are a family of orthogonal wavelets with compact support,
of which the Haar wavelet is the simplest member; examples of the Haar and
Daubechies wavelets are shown in Figs. 2.6 and 2.7.
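The energy-level texture measure mentioned later in this report can be sketched in MATLAB as a single-level 2-D wavelet decomposition followed by the normalised energy of each sub-band. The sketch below assumes the Wavelet Toolbox (for dwt2); the wavelet name 'db4' and the file name are placeholders:

I = im2double(rgb2gray(imread('texture_sample.jpg')));
[cA, cH, cV, cD] = dwt2(I, 'db4');      % approximation and detail sub-bands
energy = [sum(cA(:).^2), sum(cH(:).^2), sum(cV(:).^2), sum(cD(:).^2)];
energy = energy / sum(energy);          % relative energy of each sub-band as a texture feature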
2.3 SHAPE RETRIEVAL
Boundary-based shape representation only uses the outer boundary of the shape. This
is done by describing the considered region using its external characteristics; i.e., the
pixels along the object boundary. Region-based shape representation uses the entire
shape region by describing the considered region using its internal characteristics; i.e.,
the pixels contained in that region
The ability to retrieve by shape is perhaps the most obvious requirement at the
primitive level. Unlike texture, shape is a fairly well-defined concept – and there is
considerable evidence that natural objects are primarily recognized by their shape. A
number of features characteristic of object shape are computed for every object
identified within each stored image. Queries are then answered by computing the
same set of features for the query image, and retrieving those stored images whose
features most closely match those of the query.
Shape matching of three-dimensional objects is a more challenging task – particularly
where only a single 2-D view of the object in question is available. While no general
solution to this problem is possible, some useful inroads have been made into the
problem of identifying at least some instances of a given object from different
viewpoints. One approach has been to build up a set of plausible 3-D models from the
available 2-D image, and match them with other models in the database. Another is to
generate a series of alternative 2-D views of each database object, each of which is
matched with the query image. Related research issues in this area include defining 3-
D shape similarity measures, and providing a means for users to formulate 3-D shape
queries.
Methods of Representation:
For representing shape features mathematically, we have:
Boundary-based:
Polygonal Models, boundary partitioning
Fourier Descriptors
Splines, higher order constructs
Curvature Models
Region-based:
Super quadrics
Fourier Descriptors
Implicit Polynomials
Blum's skeletons
The most successful representations for shape categories are Fourier descriptors and
moment invariants:
The main idea of Fourier descriptors is to use the Fourier-transformed boundary as the
shape feature.
The main idea of moment invariants is to use region-based moments, which are
invariant to transformations, as the shape feature.
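A minimal MATLAB sketch of Fourier descriptor extraction from an object boundary is given below. It assumes the Image Processing Toolbox, a placeholder file name and an arbitrary choice of ten descriptors; subtracting the centroid, taking magnitudes and dividing by the first harmonic give invariance to translation, rotation and scale respectively:

I  = rgb2gray(imread('shape_sample.jpg'));   % placeholder file name
BW = im2bw(I, graythresh(I));                % binarise with Otsu's threshold
B  = bwboundaries(BW);                       % boundaries of all objects in the image
b  = B{1};                                   % boundary of the first object
z  = b(:,2) + 1i*b(:,1);                     % complex representation x + iy of the boundary
Z  = fft(z - mean(z));                       % remove the centroid for translation invariance
fd = abs(Z(2:11)) / abs(Z(2));               % scale-normalised first ten descriptors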
CHAPTER-3
IMAGE RETRIEVAL USING MATLAB
3.1 MATLAB INTRODUCTION
MATLAB is a software package for high-performance numerical computation and
visualization. It provides an interactive environment with hundreds of built-in
functions for technical computation, graphics and animation. It also provides easy
extensibility with its own high level programming language. The name MATLAB
stands for Matrix Laboratory.
MATLAB is an efficient program for vector and matrix data processing. It contains
ready functions for matrix manipulations and image visualization and allows a
program to have modular structure. Because of these facts MATLAB has been chosen
as prototyping software.
MATLAB provides a suitable environment for image processing. Although
MATLAB is slower than some languages (such as C), its built-in functions and syntax
make it a more versatile and faster programming environment for image processing.
Once an algorithm is finalized in MATLAB, the programmer can change it to C (or
another faster language) to make the program run faster.
3.2 IMAGE REPRESENTATION IN MATLAB
An image is stored as a matrix using standard MATLAB matrix conventions. There are
five basic types of images supported by MATLAB:
Indexed images
Intensity images
Binary images
RGB images
8-bit images
MATLAB handles images as matrices. This involves breaking each pixel of an image
down into the elements of a matrix. MATLAB distinguishes between color and
grayscale images and therefore their resulting image matrices differ slightly.
3.2.1 Indexed image:
An indexed image consists of an array and a colormap matrix. An indexed image uses
direct mapping of pixel values in the array to colormap values. By convention, the
variable name X refers to the array and map refers to the color map.
The color map matrix is an m-by-3 array of class double containing floating-point
values in the range [0,1]. Each row of map specifies the red, green, and blue
components of a single color.
The pixel values in the array are direct indices into a color map. The color of each
image pixel is determined by using the corresponding value of X as an index into
map. The relationship between the values in the image matrix and the color map
depends on the class of the image matrix:
If the image matrix is of class single or double, the color map normally contains
integer values in the range [1, p], where p is the length of the color map. The value 1
points to the first row in the color map, the value 2 points to the second row, and so
on.
If the image matrix is of class logical, uint8 or uint16, the color map normally
contains integer values in the range [0, p–1]. The value 0 points to the first row in the
color map, the value 1 points to the second row, and so on.
A color map is often stored with an indexed image and is automatically loaded with
the image when you use the imread function. After you read the image and the color
map into the workspace as separate variables, you must keep track of the association
between the image and color map. However, you are not limited to using the default
color map—you can use any color map that you choose.
The following figure illustrates the structure of an indexed image. In the figure, the
image matrix is of class double, so the value 5 points to the fifth row of the color map.
Pixel Values Index to Color map Entries in an Indexed Image
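A small MATLAB example of reading an indexed image and its colour map (the sample image 'trees.tif' shipped with MATLAB is indexed; any indexed image would serve equally well):

[X, map] = imread('trees.tif');   % X holds the indices, map the m-by-3 colour map
whos X map                        % inspect the classes and sizes of both variables
RGB = ind2rgb(X, map);            % convert to a true-colour image when needed
imshow(X, map)                    % display the indexed image with its colour map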
Color Conversion:
In order to use a good color space for a specific application, color conversion is
needed between color spaces. The good color space for image retrieval system should
preserve the perceived color differences. In other words, the numerical Euclidean
difference should approximate the human perceived difference.
RGB to HSV Conversion:
As shown in Fig. 3.6, the obtainable HSV colors lie within a triangle whose vertices
are defined by the three primary colors in RGB space.
The hue of the point P is the measured angle between the line connecting P to the
triangle center and line connecting RED point to the triangle center. The saturation of
the point P is the distance between P and triangle center. The value (intensity) of the
point P is represented as height on a line perpendicular to the triangle and passing
through its center. The grey scale points are situated on the same line. The conversion
follows the standard RGB to HSV mapping.
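In MATLAB the conversion does not need to be coded by hand: the built-in rgb2hsv function implements the standard RGB to HSV mapping, as the following sketch shows (the file name is a placeholder):

RGB = im2double(imread('query.jpg'));   % placeholder file name
HSV = rgb2hsv(RGB);                     % hue, saturation and value, each in the range [0, 1]
H = HSV(:,:,1);
S = HSV(:,:,2);
V = HSV(:,:,3);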
Gray scale:
A grey scale image is a mixture of black and white. These colours, or 'shades' as they
are sometimes termed, are not composed of red, green or blue components; instead
they contain various increments of intensity between white and black. To represent
this single range, only one colour channel is needed, so only a two-dimensional m-by-n
matrix is required. MATLAB terms this type of matrix an intensity matrix, because its
values represent intensities of one colour.
For an image which has a height of 5 pixels and a width of 10 pixels, the resulting grey
scale matrix would be a 5-by-10 matrix.
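A short illustration in MATLAB (rgb2gray belongs to the Image Processing Toolbox; the file name is a placeholder):

G = rgb2gray(imread('query.jpg'));   % a single m-by-n intensity matrix
size(G)                              % returns the image height and width in pixels
G(1, 1)                              % intensity of the top-left pixel (0 to 255 for uint8 data)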
Most 8-bit image formats store a local image palette of 256 colors in addition to the
raw image data. If such an image is to be displayed on 8-bit graphics hardware, the
graphics hardware's global palette will be overwritten with the local image palette.
This can result in other images on the screen having wildly distorted colors due to
differences in their palettes. For this reason, on 8-bit graphics hardware, programs such
as web browsers must address this issue when simultaneously displaying multiple
images from different sources. Each image may have its own palette, but the colors in
each image will be remapped to a single palette, probably using some form of
dithering.
CBIR is the process of retrieving desired images from a large collection on the basis
of features (such as color, texture and shape) that can be automatically extracted from
the images themselves. The features used for retrieval can be either primitive or
semantic, but the extraction process must be predominantly automatic. In typical
Content-based image retrieval systems, the visual contents of the images in the
database are extracted and described by multi- dimensional feature vectors. The
feature vectors of the images in the database form a feature database. To retrieve
images, users provide the retrieval system with example images or sketched figures.
The system then changes these examples into its internal representation of feature
vectors. The similarities /distances between the feature vectors of the query example
or sketch and those of the images in the database are then calculated and retrieval is
performed with the aid of an indexing scheme. The indexing scheme provides an
efficient way to search the image database.
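The retrieval step itself can be sketched in MATLAB as follows. The matrix features (one row of precomputed feature values per database image), the cell array fileNames and the function extractFeatures are hypothetical placeholders for whatever descriptors the system uses; the ranking here uses the Euclidean distance:

q = extractFeatures(imread('query.jpg'));            % 1-by-d query feature vector (placeholder)
d = sqrt(sum((features - repmat(q, size(features, 1), 1)).^2, 2));   % Euclidean distances
[sortedD, idx] = sort(d, 'ascend');                  % smallest distance = best match
topMatches = fileNames(idx(1:10));                   % file names of the ten closest images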
Five methods are used in this project to retrieve both color and gray scale images.
When using a CBIR system, or any system designed so that comparison is performed
against an already existing database, the database should be a standard one. Some of
the standard databases used for CBIR in different fields are as follows:
Database of Astronomical test images provided by IAPR technical committee 13.
C. A. Glasbey and G. W. Horgan: Image Analysis for the Biological Sciences (John
Wiley, 1995); this database is provided from the book.
National Design Repository over 55,000 3D CAD and solid models of (mostly)
mechanical/machined engineering designs. ( Geometric& Intelligent Computing
Laboratory / Drexel University)
UNC's 3D image database many images (Format: GIF)
AT&T: The Database of Faces (formerly 'The ORL Database of Faces') (Format:
PGM)
Caltech Image Database 450 frontal face images of 27 or so unique people. (Format:
GIF)
CMU PIE Database A database of 41,368 face images of 68 people captured under 13
poses, 43 illumination conditions, and with 4 different expressions.
CMU VASC Image Database Images, sequences, stereo pairs (thousands of images)
(Format: Sun Raster image)
CEDAR CDROM database of handwritten words, ZIP Codes, Digits and Alphabetic
characters (Format: unknown)
NIST Fingerprint and handwriting datasets - thousands of images (Format: unknown)
El Salvador Atlas of Gastrointestinal Video Endoscopy Hi-res images and videos of
studies taken from gastrointestinal video endoscopy. (Format: jpg, mpg, gif)
The Mammographic Image Analysis Society (MIAS) mini-database.
Mammography Image Databases 100 or more images of mammograms with ground
truth. Additional images available by request, and links to several other
mammography databases are provided. (Format: homebrew)
Optical flow: the Barron and Fleet study.
Optical flow test image sequences 6 synthetic and 4 real image sequences (Format:
Sun Raster image)
Particle image sequences real and synthetic image sequences used for testing a
Particle Image Velocimetry application. These images may be used for the test of
optical flow and image matching algorithms. (Format: pgm (raw)) ( LIMSI-
CNRS/CHM/IMM/vision / LIMSI-CNRS)
Groningen Natural Image Database 4000+ 1536x1024 (16 bit) calibrated outdoor
images (Format: homebrew)
IEN Image Library 1000+ images, mostly outdoor sequences (Format: raw, ppm)
University of Oulu wood and knots database: 1000+ color images, including
classification. (Format: PPM)
In our project we have created a database named “retsys”; in that database we have
created tables for each of the methods.
3.6 GRAPHICAL USER INTERFACE
GUI is the acronym for Graphical User Interface. Users always want a friendly
environment so that they can easily and effectively use the system without actually
going into the finer details of its working. So, to create such a user friendly platform
for the system, we need a graphical user interface where all the functionalities of the
system are available to the user in graphical form; users always find it easier to
understand and comprehend images and graphics than text.
The graphical user interface (GUI) is an important part of software development. The
designing of the GUI have to solve the following problems: learning time, speed of
performance, rate of errors by users, retention over time, and subjective satisfaction.
This software is, at the moment, intended to be used only for testing purposes. The
most important property of this software is that the results of different test queries can
be seen quickly and the results can be saved safely on a disk. Thus the visual layout is
not as important as in case of a commercial software product.
In the GUI developed for our project, we first ask the user whether they want to search
or insert into the database, since we have given the user the option to add images and
feature values to the database as well.
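The browse actions visible in the interface can be realised with MATLAB's standard file dialogs, roughly as sketched below; the variable names are placeholders and the actual callbacks in the project GUI may differ:

folder = uigetdir(pwd, 'Browse for folder');                  % choose the image database folder
[file, path] = uigetfile({'*.jpg;*.png;*.bmp'}, 'Browse for image');
queryImage = imread(fullfile(path, file));                    % load the selected query image
imageFiles = dir(fullfile(folder, '*.jpg'));                  % list the database images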
3.7 ADVANTAGES
Improved, smoother, and more professional interface to the geospatial CBIR.
Users can focus on the same photo and on the sections they like.
Saves time and improves computer performance.
The viewer can view smaller parts either individually or in clusters they choose, and
can at any time zoom in or out of the photo without losing image quality.
3.8 APPLICATIONS
The CBIR technology has been used in several applications such as fingerprint
identification, biodiversity information systems, digital libraries, crime prevention,
medicine, historical research, among others. Some of these applications are presented
in this section.
Medical Applications:
The use of CBIR can result in powerful services that can benefit biomedical
information systems. Three large domains can instantly take advantage of CBIR
techniques: teaching, research, and diagnostics. From the teaching perspective,
searching tools can be used to find important cases to present to students. Research
also can be enhanced by using services combining image content information with
different kinds of data. For example, scientists can use mining tools to discover
unusual patterns among textual (e.g., treatments reports, and patient records) and
image content information. Similarity queries based on image content descriptors can
also help the diagnostic process. Clinicians usually use similar cases for case- based
reasoning in their clinical decision-making process. In this sense, while textual data
can be used to find images of interest, visual features can be used to retrieve relevant
cases.
Biodiversity Information Systems:
Biologists gather many kinds of data for biodiversity studies, including spatial data,
and images of living beings. Ideally, Biodiversity Information Systems (BIS) should
help researchers to enhance or complete their knowledge and understanding about
species and their habitats by combining textual, image content-based, and
geographical queries. An example of such a query might start by providing an image
as input (e.g., a photo of a fish) and then asking the system to “Retrieve all database
images containing fish whose fins are shaped like those of the fish in this photo”. A
combination of this query with textual and spatial predicates would consist of “Show
the drainages where the fish species with ‘large eyes’ coexists with fish whose fins
are shaped like those of the fish in the photo”.
Digital Libraries:
There are several digital libraries that support services based on image content. One
example is the digital museum of butterflies, aimed at building a digital collection of
Taiwanese butterflies. This digital library includes a module responsible for content-
based image retrieval based on color, texture, and patterns. In a different image
context, Zhu et al. present a content-based image retrieval digital library that supports
geographical image retrieval. The system manages air photos which can be retrieved
through texture descriptors. Place names associated with retrieved images can be
displayed by cross-referencing with a Geographical Name Information System
(GNIS) gazetteer. In this same domain, Bergman et al. describe an architecture for
storage and retrieval of satellite images and video data from a collection of
heterogeneous archives. Other initiatives cover different concepts of the CBIR area.
Publishing and advertising:
Photographs and pictures are used extensively in the
publishing industry, to illustrate books and articles in newspapers and magazines.
Many national and regional newspaper publishers maintain their own libraries of
photographs, or will use those available from the Press Association, Reuters and other
agencies. The photographic collections will be indexed and filed under, usually, broad
subject headings (e.g. local scenes, buildings or personalities as well as pictures
covering national and international themes). Increasingly, electronic methods of
storage and access are appearing, alongside developments in automated methods of
newspaper production, greatly improving the speed and accuracy of the retrieval
process. Advertisements and advertising campaigns rely heavily on still and moving
imagery to promote the products or services. The growth of commercial stock
photograph libraries, such as Getty Images and Corbis, reflects the lucrative nature of
the industry.
Architectural and engineering design:
Photographs are used in architecture to record finished projects, including interior and
exterior shots of buildings as well as particular features of the design. Traditionally these
photographs will be stored as hardcopy or in slide format, accessible by, say, project
number and perhaps name, and used for reference by the architects in making
presentations to clients and for teaching purposes. Larger architects’ practices with
more ample resources, have introduced digital cameras and the electronic storage of
photographs.
The images used in most branches of engineering include drawings, plans, machine
parts, and so on. Computer Aided Design (CAD) is used extensively in the design
process. A prime need in many applications is the need to make effective use of
standard parts, in order to maintain competitive pricing. Hence many engineering
firms maintain extensive design archives. CAD and 2-D modelling are also
extensively used in architectural design, with 3-D modelling and other visualization
techniques increasingly being used for communicating with clients. A recent survey of
IT in architectural firms emphasized the dominance of CAD (especially 2-D) in the
design process, though it concluded that object-based, intelligent 3-D modeling
systems will become more important in the future.
Crime prevention:
The police use visual information to identify people or to record the scenes of crime
for evidence; over the course of time, these photographic records become a valuable
archive. In the UK, it is common practice to photograph everyone who is arrested and
to take their fingerprints. The photograph will be filed with the main record for the
person concerned, which in a manual system is a paper-based file. In a computer-
based system, the photograph will be digitized and linked to the corresponding textual
records. Until convicted, access to photographic information is restricted and, if the
accused is acquitted, all photographs and fingerprints are deleted. If convicted, the
fingerprints are passed to the National Fingerprint Bureau. Currently, there is a
national initiative investigating a networked Automated Fingerprint Recognition
system involving BT and over thirty regional police forces. Other uses of images in
law enforcement include face recognition, DNA matching, shoe sole impressions,
and surveillance systems. The Metropolitan Police Force in London is involved with a
project which is setting up an international database of the images of stolen objects.
Societal factors:
It is a truism to observe that images are currently used in all walks of life. The
influence of television and video games in today’s society is clear for all to see. The
commonest single reason for storing, transmitting and displaying images is probably
for recreational use, though this category includes a wide variety of different attitudes
and interaction styles, from passively watching the latest episode of a soap opera to
actively analyzing a tennis star’s shots in the hope of improving one’s own game.
CHAPTER- 4
RESULTS AND CONCLUSION
4.1 SIMULATION RESULTS:
The image database contains 500 colored images of different categories. Color features
are extracted using color moments. A total of 50 query images are used for the system
evaluation. For each query, the top 35 retrieved images are displayed for feedback.
The system performance is evaluated using similarity measurements between the query
and the retrieved images.
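A sketch of the colour-moment extraction in MATLAB is given below: the mean, standard deviation and skewness of each RGB channel form a nine-element feature vector. The skewness is computed directly as the third standardised moment so that no extra toolbox is required; the file name is a placeholder.

img = im2double(imread('query.jpg'));         % placeholder file name
f = zeros(1, 9);                              % three moments for each of the three channels
for c = 1:3
    x = img(:, :, c);
    x = x(:);
    mu = mean(x);
    sigma = std(x);
    sk = mean((x - mu).^3) / (sigma^3 + eps); % third standardised moment (skewness)
    f(3*c-2 : 3*c) = [mu, sigma, sk];
end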
4.2 CONCLUSION:
The extent to which CBIR technology is currently in routine use is clearly still very
limited. In particular, CBIR technology has so far had little impact on the more
general applications of image searching, such as journalism or home entertainment.
Only in very specialist areas such as crime prevention has CBIR technology been
adopted to any significant extent. This is no coincidence – while the problems of
image retrieval in a general context have not yet been satisfactorily solved, the well-
known artificial intelligence principle of exploiting natural constraints has been
successfully adopted by system designers working within restricted domains where
shape, color or texture features play an important part in retrieval.
CBIR at present is still very much a research topic. The technology is exciting but
immature, and few operational image archives have yet shown any serious interest in
adoption. The crucial question that this report attempts to answer is whether CBIR
will turn out to be a flash in the pan, or the wave of the future. Our view is that CBIR
is here to stay. It is not as effective as some of its more ardent enthusiasts claim – but
it is a lot better than many of its critics allow, and its capabilities are improving all the
time. Most current keyword-based image retrieval systems leave a great deal to be
desired. In hard-nosed commercial terms, only one application of CBIR (video asset
management) appears to be cost-effective – but few conventional image management
systems could pass the test of commercial viability either.
The process of designing the CBIR system has been successfully carried out and the
expected outcome has been achieved. The main functions that a CBIR system should
perform are: constructing feature vectors from the image based on its content and
storing them in the database; similarity comparison and segmentation; and retrieving
the images based on the feature vectors. The above mentioned functions are thoroughly
checked using the software
MATLAB 7.0. Also the design is very simple and easy to implement.
The dramatic rise in the sizes of image databases has stirred the development of
effective and efficient retrieval systems. The development of these systems started
with retrieving images using textual annotations, but later image retrieval based on
content was introduced. This came to be known as CBIR, or Content Based Image Retrieval.
Systems using CBIR retrieve images based on visual features such as colour, texture
and shape, as opposed to depending on image descriptions or textual indexing. In this
project, we have various modes of representing and retrieving the image properties of
colour, texture and shape. The application performs a simple colour-based search in
an image database for an input query image, using colour histograms. It then
compares the colour histograms of different images. Further enhancing the search, the
application performs a texture-based search in the colour results, using wavelet
decomposition and energy level calculation. It then compares the texture features. A
more detailed step would further enhance these texture results, using a shape-based
search. CBIR is still a developing science. As image compression, digital image
processing, and image feature extraction techniques become more developed, CBIR
maintains a steady pace of development in the research field. Furthermore, the
development of powerful processing power and faster and cheaper memories
contribute heavily to CBIR development. This development promises an immense
range of future applications using CBIR.
4.3 FUTURE ENHANCEMENTS:
Developments and studies are going on for further improvements in design and
performance of “CONTENT BASED IMAGE RETRIEVAL SYSTEMS”.
In our project we have only carried out colour analysis; the information about object
location, shape, and texture is discarded. Thus, this project showed that images
retrieved using the above mentioned methods may not be semantically related, even
though they share a similar colour distribution, in some results.
REFERENCES
[3]. Datta, Ritendra; Dhiraj Joshi; Jia Li; James Z. Wang (2008). "Image Retrieval:
Ideas, Influences, and Trends of the New Age". ACM Computing Surveys.
[4]. www.engineersgarage.com