Project For Iris Segmentation
Anis-Ul-Islam Rafid (150104083)
Zahid Al Mamun (150104087)
Amit Niloy (150104105)
Fabliha Fairooz (150104108)
CSE-4228 - DIP Lab - Fall 18
1 Introduction & Motivation
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. A front-on view
of the iris is shown in Figure 1. The iris is perforated close to its centre by a circular aperture known as the pupil.
The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter
and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil
size can vary from 10 to 80 percent of the iris diameter.
Among various biometric technologies, such as fingerprint and face recognition, iris recognition has a relatively short history
of use. There are few large-scale experimental evaluations reported in the literature, and essentially none where the
image dataset is available to other researchers. One constraint of current iris recognition systems, which is perhaps
not widely appreciated, is that they require substantial user cooperation in order to acquire an image of sufficient
quality for use. A number of groups have explored iris recognition algorithms and some systems have already been
implemented and put into commercial practice by companies such as Iridian Technologies, Inc., whose system is
based on the use of Daugman’s algorithm. A typical iris recognition system generally consists of the following basic modules:
I. image acquisition, iris location, and pre-processing;
II. iris texture feature extraction and signature encoding; and
III. iris signature matching for recognition or verification.
Iris recognition, a relatively new biometric technology, offers great advantages, such as variability, stability and security, which make it among the most promising technologies for high-security environments. The system proposed here is a simple design, implemented to find the iris in an image using the Hough Transform algorithm. The Canny edge detector is used to obtain an edge image, which serves as the input to the Hough Transform. To convey the general idea of the Hough Transform, the Hough Transform for circles is also implemented. Peak detection in the 3-D accumulator array is performed to locate the inner and outer circles of the iris.
We divided our work into two models. In the first model we segmented the iris, and in the second model we removed the background of the image and kept only the iris and its segments.
2 Dataset
We downloaded eye images and then applied segmentation to them. Most of the images are 330×360 pixels and a few are 640×480 pixels; the image files are of type .bmp and .jpg.
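As a rough illustration of how such a dataset could be read in for the later steps (Python with OpenCV is assumed here only for demonstration; the folder name dataset/ is a placeholder), the images can be loaded as grayscale arrays:

```python
# Sketch: load the .bmp/.jpg eye images as grayscale arrays for segmentation.
import glob
import cv2

paths = sorted(glob.glob("dataset/*.bmp") + glob.glob("dataset/*.jpg"))  # placeholder folder
images = []
for p in paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)  # edge detection only needs intensity
    if img is not None:
        images.append(img)

print("loaded", len(images), "images")
```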
The Canny method is based on a computational model of edge detection that leads to an optimal edge detector, while the Sobel operator was derived empirically. The following figure shows the difference between the Sobel method and the Canny method in practice.
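That difference can also be reproduced directly in code. The sketch below (Python with OpenCV, assumed here purely for illustration; eye.bmp is a placeholder file name) computes a Sobel gradient-magnitude edge map and a Canny edge map for the same eye image:

```python
# Sketch: compare Sobel and Canny edge maps on one eye image.
import cv2
import numpy as np

img = cv2.imread("eye.bmp", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Sobel: first-order derivatives in x and y, combined into a gradient magnitude.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# Canny: Gaussian smoothing, gradient, non-maximum suppression, hysteresis thresholding.
canny_edges = cv2.Canny(img, 50, 150)

cv2.imwrite("edges_sobel.png", sobel_edges)
cv2.imwrite("edges_canny.png", canny_edges)
```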
1. HOUGH TRANSFORM:
The Hough Transform is a technique for extracting shapes from an image. The classical version of the Hough transform identifies straight lines in an image, while extended versions identify regular or non-regular shapes. The transform universally used today was invented by Richard Duda and Peter Hart in 1972, who called it a "generalized Hough transform" after the related 1962 patent of Paul Hough. The Hough Transform takes a binary image as input, i.e. an image to which edge detection has already been applied, so the points to be transformed are those likely to lie on an "edge" in the image. The transform space itself is quantized into an arbitrary number of bins, each representing an approximate definition of a possible shape (e.g. a line or a circle) for which the transform is performed. Each feature point in the edge-detected image votes for the set of bins corresponding to the shapes that could pass through that point.
By incrementing the value stored in each bin for every feature point lying on the corresponding shape, an accumulator array is built up that shows which shapes fit the data in the image most closely. The most likely shapes can then be extracted by finding the bins with the highest values. The simplest way of finding these peaks is to apply some form of threshold, although different techniques may yield better results in different circumstances, affecting both which shapes are found and how many.
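To make the binning and voting concrete, here is a minimal from-scratch sketch of a circular Hough accumulator in Python/NumPy (an illustration of the technique, not the project's actual implementation). Every edge pixel votes for all circle centres that would place it on a circle of a fixed radius; repeating this for a range of radii yields the 3-D accumulator mentioned in the introduction.

```python
# Sketch: circular Hough voting for a single, fixed radius r.
import numpy as np

def hough_circle_votes(edges, r, n_angles=180):
    """Accumulate votes for circle centres (xc, yc) at a fixed radius r.

    edges : 2-D binary edge map (non-zero pixels are edge points).
    Returns an accumulator of the same shape; high values mark centres
    that many edge points agree on.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(ys, xs):
        # Each edge point votes for every centre lying at distance r from it.
        xc = np.round(x - r * np.cos(thetas)).astype(int)
        yc = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
        np.add.at(acc, (yc[ok], xc[ok]), 1)  # accumulate duplicate votes correctly
    return acc

# Peak picking by simple thresholding, as described above:
# acc = hough_circle_votes(edge_map, r=45)
# yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
```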
Figure 4: (a) an eye image (from the CASIA database), (b) the corresponding edge map, (c) the edge map with only horizontal gradients, (d) the edge map with only vertical gradients.
A circle in the xy-plane is described by the equation

(x - Xc)^2 + (y - Yc)^2 = r^2

where r is the radius of the circle and (Xc, Yc) are the centre coordinates of the circle. In xy-space, all feature pixels obtained after applying the edge detection algorithm are processed. Each edge pixel residing on the circle is represented as a circle in the parameter space, and if those circles in parameter space coincide at some point (Xci, Yci), then that point is the centre of the circle in the xy-plane.
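In practice, a library routine can perform this circle search directly. The following sketch uses OpenCV's cv2.HoughCircles (which applies Canny edge detection internally); the radius bounds and other parameters are illustrative guesses, not values from the original project:

```python
# Sketch: detect the pupil (inner) and iris (outer) boundary circles.
import cv2
import numpy as np

gray = cv2.imread("eye.bmp", cv2.IMREAD_GRAYSCALE)  # placeholder file name
blur = cv2.medianBlur(gray, 5)                      # suppress noise before voting

# Inner (pupil) boundary: search small radii. Bounds are illustrative.
pupil = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                         param1=150, param2=30, minRadius=20, maxRadius=60)

# Outer (iris) boundary: search larger radii.
iris = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                        param1=150, param2=30, minRadius=60, maxRadius=140)

if pupil is not None and iris is not None:
    px, py, pr = (int(round(v)) for v in pupil[0, 0])
    ix, iy, ir = (int(round(v)) for v in iris[0, 0])
    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.circle(out, (px, py), pr, (0, 0, 255), 2)   # pupil boundary in red
    cv2.circle(out, (ix, iy), ir, (0, 255, 0), 2)   # iris boundary in green
    cv2.imwrite("iris_circles.png", out)
```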
5 Comparison
In the first model there is noise in the resulting image, but in the second model only the iris is extracted, so it is easier to visualize and also easier to use for matching.
6 Discussion
In this project we segmented the iris from the input image. We implemented two models. In the first model, we used Canny edge detection and the Hough Transform to segment the iris. In the second model, we used the inpolygon function to remove the background. There are some complexities in this process; in future work, we will try to reduce these complexities and improve the accuracy of the method.
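As a rough Python analogue of that second model's background-removal step (the original used MATLAB's inpolygon; here the iris is assumed to have been located as a circle by the Hough step, and everything outside it is simply zeroed):

```python
# Sketch: keep only the iris region and black out the rest of the image.
import numpy as np

def keep_iris_only(gray, center, iris_radius, pupil_radius=0):
    """Zero every pixel outside the iris annulus.

    gray         : 2-D grayscale eye image.
    center       : (x, y) circle centre found by the Hough transform.
    iris_radius  : outer (limbic) boundary radius.
    pupil_radius : optional inner boundary radius, to drop the pupil as well.
    """
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - center[0]) ** 2 + (yy - center[1]) ** 2)
    mask = (dist <= iris_radius) & (dist >= pupil_radius)
    out = np.zeros_like(gray)
    out[mask] = gray[mask]
    return out

# Hypothetical usage with values from the circle-detection step:
# segmented = keep_iris_only(gray, (180, 160), iris_radius=100, pupil_radius=40)
```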