A Survey of Iris Datasets
Published in: Image and Vision Computing
DOI: 10.1016/j.imavis.2021.104109
Publication date: 2021
License: CC BY
Document Version: Final published version
Copyright
No part of this publication may be reproduced or transmitted in any form, without the prior written permission of the author(s) or other rights
holders to whom publication rights have been transferred, unless permitted by a license attached to the publication (a Creative Commons
license or other), or unless exceptions to copyright law apply.
Article history:
Received 30 December 2019
Received in revised form 10 October 2020
Accepted 6 January 2021
Available online 23 January 2021

Keywords: Biometrics; Iris recognition; Iris datasets; Human iris

Abstract

Research on human eye image processing and iris recognition has grown steadily over the last few decades. It is important for researchers interested in this discipline to know the relevant datasets in this area to (i) be able to compare their results and (ii) speed up their research using existing datasets rather than creating custom datasets. In this paper, we provide a comprehensive overview of the existing publicly available datasets and their popularity in the research community using a bibliometric approach. We reviewed 158 different iris datasets referenced from the 689 most relevant research articles indexed by the Web of Science online library. We categorized the datasets and described the properties important for performing relevant research. We provide an overview of the databases per category to help investigators conducting research in the domain of iris recognition to identify relevant datasets.

© 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
1. Introduction

Publicly available datasets of human iris images play a major role in research into iris recognition. Most of the available datasets share a substantial number of properties (e.g., near-infrared imaging) and meet the requirements of the widespread and de facto standard recognition method introduced by John Daugman [1]. With the recent popularity of mobile computing and deep learning in biometrics, new databases have been introduced, containing more challenging images of irises. It can be difficult for newcomers to the iris recognition field to identify the major and appropriate databases suitable for their research topics. When entering or conducting research in biometrics, researchers typically go through an extensive amount of published work to identify the state-of-the-art methods, data sources, and benchmark datasets. However, many of the datasets, either very popular benchmarks or niche datasets, are not available, despite the claims of the authors, for a variety of reasons. Although there are public search engines¹ providing access to freely available research datasets, biometric datasets are typically not included. Due to the personal nature of the data, the dataset providers typically allow their use only for noncommercial research purposes. In addition, authors typically control access carefully and require a signature from the researcher or a legal representative of the research institution. This adds additional constraints that limit the popularity of certain datasets among researchers.

The main purpose of this work is to help navigate among the 158 databases used in iris recognition research that are often declared to be publicly available (or whose availability is not explicitly stated otherwise). In this paper, we review existing and publicly available (for research purposes) datasets of human irises. We categorized the datasets based on the research areas for which they are suitable. In particular, we focus on the imaging process with respect to current trends in imaging. We created a list of the databases by (i) reviewing the relevant journal papers indexed by the Web of Science library and (ii) searching through online search engines. We also analyze the popularity of the databases and, based on that analysis, we discuss trends in iris imaging. Within the analysis, we also critically reviewed the databases to understand their suitability for particular iris recognition research tasks.

In addition to reviewing the datasets, we also attempted to identify the original reference (the first publication, if it exists) introducing each dataset as well as the earliest research performed and published using the dataset. We also report the number of classes and iris images contained in each dataset, as these are often the most important properties (for data-driven research, e.g., machine-learning approaches).

We reviewed 689 papers on iris recognition or related research from the most relevant journals. Based on the review, we aim to answer the following research questions (RQs):

1. RQ 1: What are the existing and available databases?
2. RQ 2: What are the most popular iris databases?
3. RQ 3: What are the differences, downsides, and commonalities among the databases?

⁎ Corresponding author.
E-mail addresses: [email protected] (L. Omelina), [email protected] (J. Goga), [email protected] (J. Pavlovicova), [email protected] (M. Oravec), [email protected] (B. Jansen).
¹ An example of a dataset search engine is Google Dataset Search: https://toolbox.google.com/datasetsearch
L. Omelina, J. Goga, J. Pavlovicova et al. Image and Vision Computing 108 (2021) 104109
4. RQ 4: What are the common properties of the popular databases?
5. RQ 5: What areas in the field of iris recognition lack an available database?
6. RQ 6: What are appropriate recommendations for creating an iris database?

The rest of this article is organized as follows. Section 2 provides a description of related work, mainly other reviews or surveys related to iris recognition. In Section 3, we present the results of a bibliometric analysis where we answer RQ 1 and RQ 2. Section 4, reviewing existing databases and answering RQ 3, presents a critical review and comparison of existing iris databases. In Section 5, we describe common attributes of the popular databases, answering RQ 4. In Section 6, we discuss limitations and areas with underdeveloped datasets (answering RQ 5) and formulate general recommendations for creating new datasets (RQ 6) to reach scientific relevance (section 8).

2. Related work

There are multiple review papers on the topic of iris recognition and eye image processing. These surveys focus on processing and recognition and devote only limited space to discussing available datasets.

Nguyen et al. [2] provided an extensive survey of long-range iris recognition. They discuss existing systems and their limitations; however, they only briefly discuss three publicly available datasets, MBGC, UBIRIS V2.0, and CASIA-Iris-Distance. The authors discuss limitations of the recognition methods. However, it is not clear whether the presented datasets are sufficient for future research (e.g., due to limitations in hardware, mainly sensors and optics).

Alonso-Fernandez & Bigun [3] reviewed research related to periocular biometrics. The authors briefly describe five publicly available iris datasets and four periocular datasets (often also used for iris recognition research, although the captured iris is typically very small, 50 pixels). The authors also point out the limitations of sensors and see a future for imaging at a distance, but do not explain why the selected databases would have future perspectives.

While Farmanullah [4] provided an extensive review of segmentation methods for non-ideal and unconstrained biometric iris systems, together with results on 21 datasets, he omits any description of the datasets used or reasons for selecting them.

De Marsico et al. [5] provided a survey of machine-learning methods for iris recognition. While the authors provide results on 11 publicly available datasets, actual comparisons and descriptions of the datasets are absent. While most of the results are reported on the CASIA Iris Dataset v.1, they concluded that more extensive experimentation on the later datasets is needed.

In their survey on understanding iris images, Bowyer et al. [6] selected and described 10 datasets. However, one of them is no longer available (BATH), and two datasets, ICE2005 and ICE2006, are available only within a much larger dataset, ND-IRIS-0405, where it is not clear which files correspond to which particular datasets.²

Neves et al. [7] described biometric recognition in surveillance scenarios. The authors refer to four iris datasets with limited descriptions. They refer to the datasets as the main biometric datasets; however, as we show later (see Section 3), there are other datasets with higher significance (in terms of impact) for the research community.

Rattani and Derakhshani [8] provided a survey of methods for ocular recognition in the visible spectrum. The authors describe seven datasets collected in the visible spectrum. However, some datasets mentioned by the authors are no longer available, despite the claims of the original authors of the datasets.³

There are other reviews [10,11] on the topic of iris segmentation and recognition; however, the authors do not discuss available datasets.

3. Popularity of databases

Using the most appropriate dataset for a given problem is a basic assumption for the successful validation of any scientific method. To the best of our knowledge, currently there is no extensive review of available iris image datasets; therefore, it is difficult to select the appropriate dataset. The selection process involves extensive and time-consuming research, which is often a reason for creating a custom dataset instead. Authors typically justify the new database by reviewing a few (up to five) empirically chosen popular databases that are often unrelated to the present research (e.g., [12,13]).

In this section, we discuss the popularity of iris image databases based on bibliometric research and provide statistical information about the available ones. The popularity of databases helps identify research areas that receive low attention. The quantified popularity of iris databases is also useful as an indicator of research trends and underdeveloped areas. We define popularity as the citation rank within the selected library. Next, we describe our systematic review of the literature.

Owing to space constraints, the complete list of identified databases (with details) and the list of reviewed publications can be found in the supplementary materials.

3.1. Identification of relevant studies

There are many sources of literature relevant to iris recognition. For the purpose of this survey, we focused on the most scientifically relevant papers, that is, studies that have the highest impact on the research community. Therefore, we selected the Web of Science online library, as many of the most-cited publications in iris recognition known to the authors of this study are indexed by this library.

We selected 1,012 journal articles listed in the Web of Science online library. We searched for the simultaneous presence of two keywords: "iris" and "recognition." We limited the search to (i) English language, (ii) Science Citation Index Expanded, and (iii) only the research areas: COMPUTER SCIENCE, ENGINEERING, IMAGING SCIENCE PHOTOGRAPHIC TECHNOLOGY, OPTICS, TELECOMMUNICATIONS, MATHEMATICAL COMPUTATIONAL BIOLOGY, AUTOMATION CONTROL SYSTEMS, MATHEMATICS, SCIENCE TECHNOLOGY OTHER TOPICS, and ROBOTICS.

3.2. Primary study selection and study quality assessment

From the 1,012 studies, we selected 689 relevant articles, that is, the articles where authors report the use of at least one iris image database in their research. The relevant articles were selected based on the abstract, the description of the experiment (most of the time an experimental results section), and the conclusion. We excluded:

• articles that appeared in the search results but are not related to research of the human iris (e.g., studies that referred to the popular iris flower data set),
• duplicates or the same studies published in different sources, and
• studies that are related to the human iris but do not use any iris database in the research (review studies, iris enrollment methods, etc.).

Some iris databases (e.g., the CASIA Iris Database) are continuously evolving, and the number of included images is increasing. Despite the existence of release versions of the CASIA database, some reviewed papers have used intermediate versions that do not correspond to any version published online. In such cases, we classified the database to the closest release version available in terms of the number of images.

² The ND-IRIS-0405 dataset contains a description of the files that were used in the ICE 2005 challenge. However, ICE 2006 files are not explicitly described.
³ For example, despite multiple attempts to obtain the VSSIRIS dataset [9] and contacting the authors, we were not able to download it.
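The "closest release" assignment can be sketched as a nearest-match over release image counts. This is an illustrative sketch, not the authors' tooling, and the release sizes below are placeholder values rather than figures taken from the survey:

```python
# Sketch: map a paper's reported CASIA image count to the closest published
# release. The release sizes here are illustrative placeholders.
CASIA_RELEASE_SIZES = {
    "CASIA v.1": 756,
    "CASIA v.2": 2400,
    "CASIA v.3": 22034,
}

def closest_release(reported_images: int) -> str:
    """Return the release whose image count is nearest to the reported one."""
    return min(CASIA_RELEASE_SIZES,
               key=lambda name: abs(CASIA_RELEASE_SIZES[name] - reported_images))

print(closest_release(2255))  # "CASIA v.2" under these placeholder sizes
```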
Fig. 1. Popularity of iris image databases. The treemap chart shows the cited databases with the number of citations among the reviewed publications.
3.3. Review results and databases used

Based on our research, we found that 57.37% (397) of the publications evaluated their method on only one database, and the average number of databases used was 1.87 per publication (see Fig. 2 for more details). 30.35% (210) of publications were evaluated on custom (non-published) databases, where 22.54% (156) of publications used only a custom database.

The overall popularity (in terms of uses and citations) of the databases is illustrated in Fig. 1. The iris databases produced by the Chinese Academy of Sciences (CASIA) are the most popular and have been used 453 times in 305 articles, or in more than 44% of the studies. In fact, the CASIA Iris Database v.1, although obsolete [14], is still the most popular and most cited iris image database (used mostly as a benchmark in 144 (21%) studies).

To help navigate the existing publicly available databases, we provide descriptions and compare several characteristics, focusing on the suitability of the databases for particular research topics. The definition of "publicly available" is rather loose in this domain and is especially sensitive due to the nature of the data. Most of the databases require signing a license agreement to obtain a copy. However, this is understandable, as it provides legal protection for the people whose data is contained in the database as well as for the authors of the databases. In addition, many authors impose additional constraints, mainly:
databases, we received no response; 34 databases are private or no longer available; 7 databases are available for a fee, and one database was available for joint research only. Because joint research usually includes sharing of the intellectual property rights, we did not consider these databases as publicly available.

From the descriptions of the databases, we studied the spectrum in which the images were captured (see Fig. 4). 102 databases (out of 158) were captured in the near-infrared spectrum; 35 databases were captured in the visible spectrum, and 16 databases were captured in both the visible spectrum and the near-infrared spectrum. We verified the number of samples in the available databases and the spectral domain in which the images were taken (see Fig. 5). There were 1,378,867 iris images in the available databases combined, where the number of near-infrared spectrum images is substantially larger (1,257,468) than the number of images captured in the visible spectrum (138,989).

4. Databases

Early research on iris recognition was performed on datasets that were not publicly available, but were collected by each research group separately, often even for a specific study or paper. One of the first consistently reported results was obtained by John Daugman when using the UAE database (publicly unavailable). The first publicly available dataset was CASIA v.1 [15], introduced in 2003. Since then, the CASIA database (including its updated versions) has become the most popular benchmark for the evaluation of iris recognition methods.

With the popularity of iris recognition in various use-cases, there has been a need for additional benchmarks and, therefore, new publicly available databases of human iris images. Each new database typically exploits one or more properties of iris recognition. These properties can be split into two groups: intrinsic properties and extrinsic properties.

• Intrinsic properties are inherent to the technology or the acquisition process. For instance, the spectrum at which the iris is captured (near-infrared (NIR) or visible), the size of the iris in the image, and the sensor type (dedicated or integrated in a mobile device).
• Extrinsic properties are not related to technology but are typically related to the use case. For instance, the influence of aging, images captured under unconstrained conditions (influence of glasses, specular reflections, outdoor imaging, etc.), and the possibility of spoofing with an artificial iris.

In this section, we provide descriptions of the identified research databases. We identified 158 databases that are mostly declared to be publicly available for research purposes. We divided the databases into six groups: (i) databases collected in a controlled environment, (ii) synthetic databases, (iii) multimodal databases, (iv) databases exploring intrinsic properties, (v) databases exploring extrinsic properties, and (vi) uncommon research databases.

4.1. Controlled environment

Early publicly available datasets were captured in a controlled environment where the different properties, both extrinsic and intrinsic, were kept constant. The aim of these databases is to enable fundamental research and to evaluate the actual variability of irises and the repeatability of the capture over time, where changes in the environment could negatively influence the evaluation. A list of the databases collected for this purpose is summarized in Table 2, and sample images from these databases are shown in Fig. 6. Because early research on iris recognition focused on the development of fundamental aspects of recognition methods, the capture process and environment were constrained to iris imaging. This also includes the number of sessions, typically two, in which the iris images were collected. In addition, sessions are typically organized within a short time span and use a single sensor. Data collection organized in more than two sessions is an exception. We identified that only eight databases (out of 158) were collected in three or more sessions (see Table 1).

The first publicly available database was the CASIA Iris Database v.1 [26], collected by the National Laboratory of Pattern Recognition, Institute of Automation, CASIA. The database was captured by a custom-made NIR camera, and the authors of this database manually processed the images by replacing the pupil area (and the specular reflections) with a constant intensity value. Because the manual intervention made the problem artificially simple, it is not recommended to use this database in iris biometrics research [14], as results obtained from this database might be misleading. Since the initial release of CASIA v.1, the database has been continuously updated to the current CASIA v.4 [27]. In addition, the structuring introduced in the newer version helps to explore the influence of different effects, for example, intra-class variations (CASIA-Iris-Lamp), correlations in twins (CASIA-Iris-Twins), influence of the capture distance (CASIA-Iris-Distance), cross-sensor compatibility, influence of aging, and unconstrained capture from mobile devices (CASIA-Iris-Mobile-V1.0).

CASIA-IrisV4-Interval [27] contains 2,639 images (395 classes) captured by a custom NIR camera in an indoor environment. The database is well-suited to studying the detailed features of iris texture.

CASIA-IrisV4-Lamp [27] contains 16,212 images (819 classes) captured by a dedicated iris scanner (OKI Irispass-H). The images were captured in an indoor environment with lamps (visible light illumination of the rooms) both on and off.

CASIA-IrisV4-Thousand [27] contains 20,000 images (2,000 classes) captured by a dedicated iris scanner (Irisking IKEMB-100). Similar to CASIA-IrisV4-Lamp, the images were captured in an indoor environment with lamps (visible light illumination of the rooms) both on and off. This database was the first publicly available iris database with more than 1,000 subjects.

In 2007, the IIT Delhi Iris Database (IITD-V1) [28,29] was introduced. It contains 1,120 NIR images (224 classes) captured in a constrained environment and is limited to Indian subjects.

The CUHK Iris Image Dataset [30,31] contains 254 images (36 classes) captured in the NIR spectrum. This database was among the first publicly available iris databases but is rather small in size.
Fig. 4. Databases divided according to the spectrum in which they were captured. “Other”
means non-visible and non-near-infrared spectrum, e.g., near-ultraviolet or thermal
spectra.
the synthetic samples can be better controlled, the databases with real biometric data are still the ultimate benchmark. Existing datasets containing synthetic iris images are summarized in Table 3, and example images from these databases are shown in Fig. 7.

The WVU Synthetic Texture Based Iris Dataset [36] contains 7,000 synthetic NIR iris images of 1,000 classes generated using a texture-based approach [35]. The database was created using a computational model that generated iris textures representing the global iris appearance.

The WVU Synthetic Model Based Iris Dataset [34] contains 160,000 synthetic NIR iris images in 10,000 classes generated using a model-based and anatomy-based approach [33]. Forty controllable parameters were used (e.g., iris size, pupil size, and fiber size) to generate these images, resulting in a diverse database simulating effects like noise, rotation, blur, motion blur, low contrast, and specular reflections.

The CASIA-IrisV4-Syn [27] database contains 10,000 images in 1,000 classes created by processing the images of real irises and applying patch-based sampling to create a new prototype from which the new images were created [32]. The database contains images generated using the CASIA-IrisV1 database [26].

Although existing databases of synthetic iris images contain substantially more images than real iris image databases, their popularity remains limited. We speculate that the reasons are twofold: (i) there is uncertainty regarding the degree to which synthetic iris images share the properties and quality of real iris scans, and (ii) despite no personal information being involved, the databases are available only on request, requiring a signed written license agreement. In addition, all synthetic databases are designed to match the properties of real iris codes when the traditional Daugman's method is used. More recently,

Fig. 6. Iris images from datasets captured in controlled environments.

Table 1. Available databases collected in 3 or more sessions.

| Database name | Number of sessions | Year | Number of images (classes) |
|---|---|---|---|
| ND-CrossSensor-Iris 2012 [16] | 27 | 2012 | 117,503 (1,352) |
| ND-CrossSensor-Iris 2013 [17] | 27 | 2013 | 116,564 (1,352) |
| WVU Twins Day Dataset 2010–2015 [18–20] | 5 | 2010 | N/A (152+) |
| CASIA-IrisV1 Aging [21,22] | 4 | 2014 | 36,240 (100) |
| ND-TimeLapse-Iris 2012 [23,24] | 4 | 2012 | 6,797 (46) |
| BiosecurID | 4 | 2009 | 3,200 (800) |
| ND-IrisTemplate-Aging 2008–2010 [25] | 3 | 2012 | 11,776 (176 ∪ 314 ∪ 362) |

Table 2. Databases of iris images captured in controlled environments.

Table 3. Synthetic iris databases.
Table 4. Multimodal databases containing iris modalities.
Fig. 9. Iris images captured at a distance.

Although the ideal size of the iris in an image has been well studied, many of the reported results were obtained on simulated or downsampled images. The downsampling process does not produce images identical to natively low-resolution captures, owing to different sensor sizes and iris distances (e.g., due to noise or sensor filters), and therefore does not fully correspond to realistic environments. Daugman recommended a minimum iris radius of 70 pixels, but 100 to 140 pixels is more typical in field trials [45]. The National Institute of Standards and Technology (NIST) standard and reports follow prior research and refer to an iris radius of at least 100 pixels. While theoretical limitations can point towards a specific size of the iris, the ideal size is influenced by multiple variables, such as noise, the quality of the optics (mainly diffraction, which limits spatial resolution), and sensor type and sensitivity.
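To make the interplay of these variables concrete, the projected iris radius can be estimated with a simple pinhole-camera model. The following is a back-of-the-envelope sketch; the lens focal length, pixel pitch, and capture distance are assumed example values, not a setup described in this survey:

```python
# Sketch: projected iris radius in pixels under a pinhole-camera model.
# A human iris is roughly 12 mm in diameter (~6 mm radius).

def iris_radius_px(focal_length_mm: float, distance_mm: float,
                   pixel_pitch_um: float, iris_radius_mm: float = 6.0) -> float:
    """Iris radius projected onto the sensor, converted to pixels."""
    radius_on_sensor_mm = focal_length_mm * iris_radius_mm / distance_mm
    return radius_on_sensor_mm / (pixel_pitch_um / 1000.0)

# Hypothetical capture setup: 16 mm lens, 5 um pixels, subject at 300 mm.
print(round(iris_radius_px(16.0, 300.0, 5.0)))  # 64 px, under the 70 px minimum
```

Under these assumed values the captured iris would fall short of Daugman's 70-pixel recommendation, illustrating why longer focal lengths or shorter distances are needed for dedicated iris sensors.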
Table 6. Iris image databases created using smartphone devices.ᵃ

| Database name | Device(s) | Spectrum | Number of images (classes) |
|---|---|---|---|
| VISOB [49,50] | Frontal cameras of: iPhone 5s, Samsung Note 4, Oppo N1 | VIS | 75,428 (1,100) |
| CASIA Iris M1 (mobile) [51–53] | | NIR | 11,000 (1,260) |
| – subset S1 | (custom) CASIA module v.1 | NIR | 1,400 (140) |
| – subset S2 | (custom) CASIA module v.2 | NIR | 6,000 (400) |
| – subset S3 | a phone with an iris NIR camera | NIR | 3,600 (720) |
| CASIA BTAS [54] | (custom) CASIA module v.2 | NIR | 4,500 (300) |
| MICHE DB [55–57] | iPhone 5, Samsung Galaxy (IV + Tablet II) | VIS | 3,732 (184) |
| CSIP [58,59] | Xperia Arc S, iPhone 4, THL W200, Huawei Ideos X3 | VIS | 2,004 (100) |

ᵃ The authors do not disclose whether the subjects in the individual subsets of the CASIA Iris M1 (mobile) database overlap. The sum of classes from the three subsets, 1,260, assumes no overlap; if there is overlap, the true number of classes may be lower.

⁶ Images provided by Miles Research: www.milesresearch.com
⁷ Despite capturing three different color bands separately, authors often present visible light images as a single spectrum. Capturing the visible light spectrum in a single band would require a monochromatic camera with a visible light filter equally distributed across all pixels.

Fig. 10. Iris images captured using smartphones.
Fig. 11. Iris images from databases compiled for research into iris recognition in visible light.

be argued that there are actually four spectral bands captured, as color cameras capture three colors (red, green, and blue) separately.

The CROSS-EYED [71,72] database, created for the purpose of the CROSS-EYED competition [72], contains 11,520 images (240 classes) in both visible (VIS/RGB) and NIR spectra. This database was captured by a dual-spectrum sensor acquiring the images synchronously, eliminating any possible influence of pupil dilation on the reported results. The images were captured at a distance of 1.5–2 m and contain the ocular region, making the database also suitable for periocular recognition.

The IIITD Multi-spectral Periocular Database [73,77] contains 1,240 ocular images captured in three spectral bands (visible, night vision, and NIR). The database contains 5 images per eye for each of 62 subjects (124 iris classes) and for each spectrum (1,860 images in total). Images for the visible and night vision spectra contain the whole ocular area; therefore, two eyes are in each image. Most of the images are, unfortunately, too out-of-focus for iris recognition, and the spectral bands for the three spectra are not defined by the authors. For instance, it is unclear whether the images labeled as captured in night vision contain images in the NIR band or closer to the ultraviolet band, as both of them are often used in night vision systems. Because the visible color spectrum corresponds to regular color images, it can be argued that color images contain three different bands of visible spectra.

The University of Tehran IRIS (UTIRIS) database [74,75] contains 1,540 images (770 per spectrum) from 79 subjects (158 classes) captured in the visible spectrum (using a consumer RGB camera) and the near-infrared spectrum. The main goal of the database is to identify features created by melanin pigment in the visible spectrum that are not available in the near-infrared spectrum [74].

The Off Axis/Angle Iris Dataset, Release 1 [76], collected at West Virginia University, contains 865 images (146 + 38 classes) captured by a color camera⁸ in NIR mode and 597 images (146 classes) captured by a monochrome camera. The database contains images captured under different gaze directions (0, 15, and 30 degrees of rotation with respect to the camera).

Fig. 12. Iris images captured in multiple spectra. (The license of the CROSS-EYED database does not allow us to display the images.)

Table 8. Databases of iris images captured in multiple spectra.
LG2200 sensor, where every subject occurs at least twice (1,352 classes in total). Images captured with the LG2200 sensor are also available in a post-processed form, given the non-unit pixel aspect ratio (i.e., non-square pixels) of that sensor.

The ND-CrossSensor-Iris-2013 (BTAS-2013) database [17,80] contains 29,939 images from an LG4000 sensor and 117,503 images from an LG2200 sensor (1,352 classes in total). This dataset is very similar to the ND-CrossSensor-Iris-2012 and contains post-processed images (due to non-square pixels) from the LG2200 sensor.

Within the ICB Competition on Cross-sensor Iris Recognition 2015 (CSIR 2015) [52,81], another publicly available dataset was introduced. However, only the training dataset, containing 8,000 images, was released. It contains iris images of 200 eyes from 100 subjects: 4,000 images captured by an IKEMB-220 sensor and 4,000 images captured by an EyeGuard AD100 sensor.

The CSIP database [58,59], described in Section 4.4.2, is a database captured by four mass-market smartphone devices.

variability of the iris features, and this variability cannot be, or can only with difficulty be, technologically reduced. Active research also covers (but is not limited to):

• influence of aging
• influence of occlusions or contact lenses
• spoofing/liveness detection
• genetic relationship

While some of the databases have been created as a benchmark to overcome a certain problem (e.g., the influence of aging), other databases serve to verify the quality of existing methods. For instance, although some iris features are provably genetically related (e.g., iris color), the features suggested in Daugman's method [45] have shown statistical independence, even in genetically related irises. Methods such as mapping or learning the variability present in the data are typically used to overcome extrinsic factors. Most of these methods are therefore based on machine learning rather than being hand-crafted.
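For context, the statistical-independence claim refers to Daugman-style matching, where binary iris codes are compared by a masked, normalized Hamming distance: unrelated codes, even from genetically related irises, disagree on roughly half of their valid bits. A minimal sketch with toy random codes follows (the 2048-bit length and the simple mask handling are simplifying assumptions, not the exact encoding from [45]):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits among bits marked valid in both codes."""
    valid = mask_a & mask_b                      # bits usable in both codes
    n_valid = int(valid.sum())
    if n_valid == 0:
        return 1.0                               # no comparable bits
    disagreements = int(((code_a ^ code_b) & valid).sum())
    return disagreements / n_valid

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)     # toy 2048-bit iris codes
b = rng.integers(0, 2, 2048, dtype=np.uint8)
mask = np.ones(2048, dtype=np.uint8)             # all bits valid, for simplicity

# Two statistically independent codes score near 0.5 (the independence
# baseline); two captures of the same iris score far lower.
print(round(hamming_distance(a, b, mask, mask), 2))
```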
⁹ http://iris2017.livdet.org
¹⁰ See the supplementary materials for more details.
Table 11. Iris databases for liveness detection. Entries give the number of images (classes) in each category.

| Database name | Genuine irises | Textured contact lenses | Fake printouts |
|---|---|---|---|
| LivDet-Iris-2013 [90] | | | |
| – Notre Dame subset | 2,800 (N/A) | 1,400 (N/A) | – |
| – Clarkson subset | 516 (64) | 840 (6) | – |
| – Warsaw subset | 852 (284) | – | 815 (276) |
| LivDet-Iris-2015 [91] | | | |
| – Clarkson LG subset | 828 (45) | 1,152 (7) | 1,746 (45) |
| – Clarkson Dalsa subset | 1,078 (N/A) | 1,431 (N/A) | 1,746 (N/A) |
| – Warsaw subset | 2,854 (100) | – | 4,705 (100) |
| LivDet-Iris-2017 [79] | | | |
| – Notre Dame subset | 1,500 (N/A) | 1,500 (N/A) | – |
| – IITD-WVU subset | 2,952 (N/A) | 1,701 (N/A) | 5,806 (N/A) |
| – Clarkson subset | 3,954 (50) | 1,887 (12) | 2,254 (49) |
| – Warsaw subset | 5,168 (470) | – | 6,845 (470) |
| ATVS-FIr [92,93] | 800 (100) | – | 800 (100) |
| Sclera Liveness Dataset [95,96] | 500 (50) | – | 500 (50) |
| ETPADv1 [97,98] | 200 (100) | – | 2 × 400 recordings (100) |
| ETPADv2 [97,99] | 400 (200) | – | 2 × 400 recordings (200) |
The Jilin University Iris Biometric and Information Security Lab (JLUBRIRIS V1-V6) Database [107–109] contains 729 video recordings (360 classes) captured in the NIR spectrum using a self-developed camera. The database contains five-second video recordings that are stored as individual images.
The Multi-Angle Sclera Dataset (MASD) v. 1 [95,104] contains 2,624 images (164 classes) of eyes captured in the visible spectrum for the purpose of processing the sclera region of the eye; however, the images also have sufficient resolution for iris recognition.
Another database containing challenging, off-angle iris images is the Off Axis/Angle Iris Dataset, Release 1 [76]. This database contains images captured in the visible spectrum and in night-vision (NIR-enabled) mode, as discussed in Section 4.4.4.
4.5.4. Periocular databases
Several datasets have been created for periocular recognition re-
search that might find applications in iris recognition research (see
Table 13 and Fig. 17). The two most popular are the Multiple Biometrics
Grand Challenge (MBGC) [111,112] and the Face and Ocular Challenge
Series (FOCS) [113,114] datasets.
The databases developed for MBGC (two versions v. 1 and v. 2) were
designed to investigate the potential of the fusion of face and iris bio-
metrics as well as off-angle iris images from video. The database con-
tains 14 images and 3 video sequences for 140 different subjects (280
classes) captured in the NIR spectrum.
The database developed for FOCS [113,114] consists of 9,588 still im-
ages (136 subjects, 272 classes) of a single iris and eye region. The still
images were captured in the NIR spectrum. The eye regions were ex-
tracted from NIR video sequences collected from the Iris on the Move
system used in MBGC v. 2. Therefore, there is an overlap between the
FOCS and MBGC databases.
The IIITD Cataract Mobile Periocular Database [117,118] contains
2,380 ocular images (145 classes), captured before and after cataract
surgery. Despite the high resolution of the images (4608 × 3456 pixels),
the quality of the images is rather low, as they were captured by a reg-
ular smartphone camera (MicroMax A350 Canvas Knight). Therefore, its
suitability for iris recognition research is questionable.
The UBI Periocular Dataset [115,116] is a periocular image database
captured under non-controlled acquisition conditions, containing more
variations in pose, illumination, and scale than MBGC and FRGC. The da-
tabase contains 10,950 images (522 classes) collected at various dis-
tances from 4 to 8 m.
Another periocular dataset originating from UBI is the CSIP database
[58,59] described in Section 4.4.2 captured by state-of-the-art
smartphones (2014). The CSIP database contains 2,004 images of 50
people (100 classes) captured by four different smartphones. The im-
ages vary in resolution and are also provided with iris segmentation
masks and information on the acquisition conditions.
Table 12
Iris databases containing different types of noise.

Fig. 17. Iris image databases used for periocular recognition research.

Table 13
Periocular databases.

Database name                      Spectrum   Number of images (classes)
UBI Periocular [115,116]           VIS        10,950 (522)
FOCS [113,114]                     NIR        9,588 (272)
IIITD Cataract Mobile [117,118]    VIS        2,380 (145)
CSIP [58,59]                       VIS        2,004 (100)
MBGC v.1 & v.2 [111,112]           NIR        1,960 (280) + 420 videos

Table 14
Iris image databases of identical twins.

11 The WVU Twins Day Dataset 2010–2015 is not available to non-US nationals or researchers outside the USA.

4.5.6. Aging

The structure of the human iris has been presented as one of the most stable biometric characteristics that does not change significantly over a lifetime [24]. Due to a lack of data collected over larger time spans, it is difficult to verify this assumption. Some researchers observed that for longer time spans, the distribution of genuine iris features shifts closer to the distribution of impostor iris features; thus, the recognition
Table 15
Iris image databases collected to study the effects of aging.
during the shooting. In total, it contains data from 28 horses (14 mares, 10 stallions, and 4 geldings).

5. Common properties of popular databases

After identifying the iris databases with the highest impact, we identified four common properties of the most cited databases17:

1. Longevity - Our findings confirm that the most cited databases have been available for the longest time. A good example is the CASIA-Iris-V1 database, which is the most cited database and has been available continuously since its introduction in 2003. Despite its rather small scale and recommendations not to use it, it is still being cited and often serves as a benchmark dataset. Because there are already many published research papers, this dataset has become a tool for comparing new results with a wide range of past results. In contrast to CASIA-Iris-V1, the extensive BATH database [135] reached only limited popularity, mainly due to its relatively short lifespan.

2. Limited access restrictions - Datasets that are publicly available without requiring researchers to sign a license are more popular (e.g., the UPOL iris dataset [65,66]). In addition, licenses for many datasets require the signature of the institutional legal representative (e.g., datasets from Notre Dame University [136]). These datasets are less popular than datasets where the signature of an individual researcher is sufficient (e.g., the CASIA Iris dataset). A license is a legally binding document and typically protects the subjects as well as the institution providing the database. In addition, because iris images are a type of personal data, this protection is in many countries regulated and enforced by law (e.g., the EU General Data Protection Regulation [137]).

3. Scale - The numbers of subjects and images in a dataset also influence its popularity. A sufficient number of samples in a dataset is a significant requirement for performing statistically relevant research. Datasets with more samples (i.e., with higher statistical relevance) can often serve as objective benchmarks. While the size of the dataset is a strong requirement, it is often not sufficient, and other properties, such as a good protocol, must be met as well.

4. Timing - We found that the most popular datasets introduced novel aspects and enabled research that was not possible with the public datasets available prior to their publication. Good examples are (i) the CASIA-Iris-V1 dataset, the first publicly available iris dataset; (ii) the UBIRIS v.1 dataset, the first database to introduce unconstrained imaging; and (iii) the UPOL [65] database, the first to contain high-resolution images taken in the visible spectrum.

6. Discussion

Existing iris datasets have matured over the last two decades, but there are still limitations that need to be addressed. The major limitation is their availability, meaning that many of the datasets have a rather short lifespan during which they are available. A good example is the BATH iris dataset [135], which has been used by many studies; however, it is no longer available. In some cases, we also observed selective availability, where the authors decided to share the dataset depending on the requesting institution (institutions with a lower profile might typically have more problems obtaining a dataset). This undermines the reproducibility of the research by independent researchers and prevents new researchers from publishing results obtained using such a database.

The effect of aging has been studied on public datasets created in a time span of at most eight years [138]. The limits of long-term iris recognition are defined by the difficulties in following up on a large group of people over a long period [139].

A relatively new problem arises from the new data privacy regulations that protect the privacy of subjects. For instance, in Europe, the GDPR regulation contains the right to erasure (a.k.a. the right to be forgotten) [140] that guarantees to the subjects the possibility to retract their agreement for the use of their data and the removal of the subject-associated information from the database (if possible). Due to the nature of biometric data, it is possible to uniquely identify the subject (in the case of iris data, even with high precision) [141]. Thus, possible changes in the datasets might undermine the purpose and consistency of the reported data over the years. Due to recent issues related to the leaking of personal data, similar regulations are being discussed globally.

Other substantial limitations of the available databases are imperfections in the described capture setup and protocol. Many of the parameters are considered irrelevant by the authors for their own research, but that information might be vital for others and could therefore broaden the potential application areas of the database. The properties of optical systems are rarely described. Many databases also lack descriptions of at least one of the following: sensor type and model, spectral range of captured images, capture distance, and ecological validity. Because many datasets provide only cropped regions of the eye, information about aperture, shutter speed, and sensitivity is missing, although most of the capture systems include this information in the images by default. In the case of capturing with mobile devices (e.g., smartphones), data from the inertial measurement unit (IMU) (i.e., an accelerometer and a gyroscope) might provide valuable information to remove the negative effects of the rolling shutter and to identify motion blur [142]. In addition, many datasets provide images in a compressed format only, thus reducing the information captured by the sensor.

A detailed description of the protocol and the capture setup is important because of differences in imaging requirements for different directions of research. For instance, iris capture in motion shares the core problems recognized in iris capture by mobile handheld devices: the relative movement of subjects with respect to the sensor and a substantially smaller sensor compared to the distance between the sensor and the eye. Despite similarities in research problems, research on the iris in motion focuses on using novel sensors and optical systems [2,143], while mobile iris capture focuses on using additional sensor information available in mobile devices (IMU or multiple imaging sensors) [144,145] and computational methods to process the captured images [146,147].

Many research papers point out the absence of databases suitable for testing a particular parameter (i.e., a constrained environment with variability of only a single parameter), which makes research conclusions and underlying reasons rather unclear, typically stating that more research is needed. In these cases, having a detailed description of the protocol could solve the problem. Some of these issues could be avoided by keeping the EXIF/metadata (their removal is a standard practice for reasons of privacy) that includes camera parameters. Databases created using custom-built cameras typically do not provide this information and lack any description of the protocol. While publicly available cameras include such specifications in the image file by default, in the case of custom hardware, many aspects remain hidden to other users of the database.

We also observed an inconsistency among datasets that were captured under visible light. In some cases, the authors used a monochromatic sensor with a band-pass filter that captures the entire visible band of light; others used mass-market cameras that capture visible light in three separate spectral bands (separately for the colors red, green, and blue). The spectral sensitivity of the visible-light filter differs from that of the individual color filters (even when the color bands are combined) and therefore should not be treated as analogous. In addition, most consumer color cameras contain a Bayer filter that limits the resolution of the individual bands to one quarter for the red and blue spectra and one half in the case of green; thus, 2/3 of the color information is actually computed and not measured.

From our review, we also conclude that datasets with synthetic images have not gained popularity in iris recognition research. Despite their containing large numbers of samples (larger than traditional datasets), researchers opt for real-world images. We suspect that these datasets lack the realism of studied effects that appear in less constrained environments.

17 Sorted according to importance, where the first is the most important.
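The Bayer-filter arithmetic discussed above can be checked with a few lines of code. The following is an illustrative sketch in plain Python (not tied to any particular dataset or camera): an RGGB mosaic measures one color value per photosite, so a demosaiced three-channel image of the same size must interpolate the remaining two thirds of its values.

```python
# Illustrative check of the Bayer-mosaic arithmetic discussed in the text.
# In an RGGB pattern, each 2x2 cell holds 1 red, 2 green, and 1 blue sample.

def channel_sampling_fractions():
    """Fraction of pixel locations at which each channel is actually measured."""
    cell = [["R", "G"],
            ["G", "B"]]  # canonical RGGB 2x2 tile
    counts = {"R": 0, "G": 0, "B": 0}
    for row in cell:
        for color in row:
            counts[color] += 1
    total = 4  # photosites per tile
    return {c: n / total for c, n in counts.items()}

def interpolated_fraction():
    """Fraction of values in the demosaiced RGB image that are computed, not measured."""
    # The sensor measures 1 value per pixel; a full RGB image needs 3 values per pixel.
    return 1 - 1 / 3

print(channel_sampling_fractions())  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
print(interpolated_fraction())       # 0.666... -> 2/3 of color data is interpolated
```

This reproduces the figures in the text: red and blue are sampled at one quarter of the pixel locations, green at one half, and 2/3 of the final RGB values are interpolated rather than measured.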
Much of the research on iris capture at a distance has focused on building a traditional optical system with mirrors for the capture, but only a limited amount is focused on computational iris capture, e.g., using super-resolution [148].

6.1. Related research

We have presented a review of existing databases that were created for the evaluation of biometric methods, more specifically, iris and ocular recognition. There are other research areas that use images of the human eye and iris that could benefit from the databases reviewed for this study.

Gaze estimation and eye tracking have been used in many applications, from human–computer interaction [149] to healthcare applications as a diagnostic tool. These applications typically have other well-established datasets (e.g., [150]), but mostly share the initial processing of iris images, i.e., iris localization and segmentation. Hence, iris recognition databases could be of use as an additional data source. In addition, pupil tracking (capturing changes in the size of the pupil over a short time span, typically a few seconds) has been shown to be an important medical biomarker reflecting levels of neurotransmitter and neuronal activity [151]. Another related research area is eye blink detection. It has been used as a countermeasure against spoofing in face [152] and iris recognition methods [153] and in medical applications, e.g., to diagnose computer vision syndrome [154].

Databases traditionally used for related research have limited use in biometrics, because they typically do not contain information about identity. Owing to insufficient metadata, their potential in iris recognition lies mainly in unsupervised machine-learning methods and unsupervised pre-training of models with a large number of parameters.

6.2. Challenges and competitions

Objective and independent evaluation helps in comparing scientific methodology and ultimately stimulates progress within particular domains. The iris recognition method introduced by J. Daugman [45] has been the baseline for years; this is in part due to its simplicity and near-perfect recognition rate when combined with appropriate imaging. Numerous improvements have been proposed; however, even with this well-defined baseline, the results have varied tremendously, a result of the implementation methods, datasets, and evaluation protocols used [6]. These differences prevent objective comparison between methods, a problem which is amplified by the frequent unavailability of the dataset itself, the effect of which is highlighted in Section 3.

Many of these problems can be prevented through the creation of benchmark datasets and independent evaluation. This guarantees objective comparison between methods through both unified protocols and conditions. Such evaluations are typically organized in the form of competitions and/or challenges. In addition to the creation of publicly available datasets with uniform metrics, this solution encourages competition among researchers.

Details of existing competitions and challenges are beyond the scope of this review; hence, in Table 16 only the most popular challenges are presented, grouped in terms of the number of participating research teams and sorted chronologically. Further details can be found in the following review articles [5,155–157].

7. Summary

Our review shows that there is substantial heterogeneity in the available databases in terms of size (from 18 to >1,300 classes), sensors used, image quality, etc. This variability means that, for many research questions, there is a database available, but it is not always easy for researchers to identify the best option. Our review provides guidance for identifying the proper database, but also provides recommendations for creating new ones. Because there are many attributes that might be of interest to researchers, it is a challenging task to present a global overview in the format of a research article. For this reason, we added a large table as supplementary material summarizing the properties, as well as additional information (e.g., download links where the datasets can be obtained). We encourage the reader to review the supplementary table. From our bibliometric study, we found that the databases from CASIA are the most cited [26,27] (although we discourage using the CASIA Iris v.1 Database), followed by the databases collected at the University of Beira Interior [46,62]. We therefore recommend these datasets as benchmarks when it is a priority to compare methods with the published state of the art. In addition, to obtain these datasets, a license signed by a researcher is sufficient, in contrast to the signature of the institutional legal representative typically required by others. For comparison studies, it is advisable to use datasets created for certain challenges or competitions, as they come with a standardized protocol for evaluation. For the evaluation of iris images captured by smartphone cameras, those are MICHE [55] and VISOB [50]. To study spoofing and liveness detection, there is LivDet 2017 [161]. To study combinations of different modalities, there are the two MBGC datasets [112]. In addition to CASIA and UBI, the Computer Vision Research Lab at the University of Notre Dame [136] has invested a substantial effort in the development of publicly available biometric datasets (including human iris data). Their website hosts 14 different high-quality iris datasets (and more from other modalities), which is the most concentrated web resource that we identified [136]. Although our bibliometric review did not show as high popularity for these as for the datasets at CASIA or UBI, we would like to encourage researchers to explore these datasets in more detail.

8. Recommendation for creating a good iris database

Important aspects of collecting and sharing research data have been discussed by various scientific communities [174,175].

• Plan availability for years to come - The adoption of a new benchmark in the area of iris recognition tends to be rather slow. To keep a database available, it is important to allocate the resources necessary for database distribution for several years into the future. The main resources needed are (i) technical: a stable URL for the promoting website, as well as infrastructure that keeps the website online, and (ii) personal: an appointed person responsible for license management as well as for solving problems that interested users might experience.

• Make access simple - We observed that databases with licenses that can be signed by individual researchers tend to be more popular. Requiring the signature of the legal institutional representative, especially within a university environment (typically being the rector), is a major hurdle for young researchers. In many cases, they opt to create their own database instead. If the signature of the institutional representative is necessary, we recommend publishing the full license agreement, as well as a sample of the images from the database, on the project website. This helps in deciding whether the database is useful for particular research before starting the administrative procedures to obtain the necessary approvals.

• Include a statistically relevant number of samples - Acquiring and handling test subjects is one of the most challenging tasks when creating a biometric database. The number of subjects included should be as large as possible [176]; however, there is always a minimum size for obtaining statistically relevant results. Although this minimum is difficult to quantify for the general case, the statistical significance of 100 samples obtained from the same subjects is not the same as that of 1,000 samples obtained from 100 different subjects.

• Make the database unique - Many authors that adopt certain databases continue using them in subsequent publications. As we have shown in previous sections, a database is typically introduced to explore certain properties or problems in a systematic
Table 16
Competitions and challenges related to iris recognition.
manner. A successful database should help users make novel findings and research conclusions. Hence, the database should fulfill the needs of novel research areas where benchmarks are not yet established. The authors aspire to assist in this task with this review and to make these needs more obvious to database creators.

• Extensive protocol and setup description - Although most of the available datasets were created for testing a particular hypothesis or for a particular research purpose, authors frequently argue that the dataset can be useful beyond a single research topic. To increase the potential of the dataset, it is important to provide an extensive description of the protocol and setup. We observed that important information is frequently missing, limiting the use of the datasets, for instance, the wavelength of the NIR illumination, the distance at which the images were collected, and descriptions of the sensor or the optical system used.

9. Conclusions

The aim of biometric datasets is to enable the testing of biometric systems or methods. The publicly available datasets, serving as benchmarks, enable objective and reproducible research. Despite the claims of the authors, many of the datasets declared publicly available are not available, for a variety of reasons.

In this paper, we reviewed the existing iris image datasets. We focused mainly on the availability and popularity of the datasets. We raised six different RQs. We identified 158 different datasets, of which only 81 were actually available (answering RQ 1). The full list is provided in the supplementary materials. We identified that the databases created by CASIA are the most cited (answering RQ 2). We provided descriptions of the available databases in a structured form (answering RQ 3; see Section 4). Subsequently, we identified common properties of popular databases in Section 5 (answering RQ 4). In Section 6, we discussed the limitations of the databases and areas lacking in available databases. Lastly, based on our review, we formulated appropriate recommendations for creating an iris database in Section 8.

A limitation of this study is that we searched only the Web of Science online library when performing the bibliometric research. Another limitation is that we relied on e-mail and phone contacts when obtaining the datasets.

We aspire to bring clarity to the availability of databases to support reproducible research. However, new databases are continuously being created, while others become unavailable. This requires a continuous effort to keep track of the state-of-the-art and popular databases. We plan to keep this list updated and available on our website.18

18 The full list of available datasets will be published at: https://irisdata.etrovub.be and https://irisdata.fei.stuba.sk
CRediT authorship contribution statement

Lubos Omelina: Conceptualization, Methodology, Writing - original draft, Investigation. Jozef Goga: Data curation, Writing - original draft, Investigation. Jarmila Pavlovičová: Writing - review & editing, Validation. Miloš Oravec: Writing - review & editing, Funding acquisition. Bart Jansen: Writing - review & editing, Supervision, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:

Acknowledgments

The research described in the paper was done within the project No. 1/0867/17 of the Slovak Grant Agency VEGA.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.imavis.2021.104109.

References

[1] J. Daugman, The importance of being random: statistical principles of iris recognition, Pattern Recogn. 36 (2) (2003) 279–291.
[2] K. Nguyen, C. Fookes, R. Jillela, S. Sridharan, A. Ross, Long range iris recognition: a survey, Pattern Recogn. 72 (2017) 123–143.
[3] F. Alonso-Fernandez, J. Bigun, A survey on periocular biometrics research, Pattern Recogn. Lett. 82 (2016) 92–105. An insight on eye biometrics.
[4] F. Jan, Segmentation and localization schemes for non-ideal iris biometric systems, Signal Process. 133 (2017) 192–212.
[5] M.D. Marsico, A. Petrosino, S. Ricciardi, Iris recognition through machine learning techniques: A survey, Pattern Recogn. Lett. 82 (2016) 106–115. An insight on eye biometrics.
[6] K.W. Bowyer, K. Hollingsworth, P.J. Flynn, Image understanding for iris biometrics: a survey, Comput. Vis. Image Underst. 110 (2) (2008) 281–307.
[7] J. Neves, F. Narducci, S. Barra, H. Proença, Biometric recognition in surveillance scenarios: a survey, Artif. Intell. Rev. 46 (Dec 2016) 515–541.
[8] A. Rattani, R. Derakhshani, Ocular biometrics in the visible spectrum: a survey, Image Vis. Comput. 59 (2017) 1–16.
[9] K.B. Raja, R. Raghavendra, V.K. Vemuri, C. Busch, Smartphone based visible iris recognition using deep sparse filtering, Pattern Recogn. Lett. 57 (2015) 33–42. Mobile Iris CHallenge Evaluation part I (MICHE I).
[10] Y. Alvarez-Betancourt, M. Garcia-Silvente, An overview of iris recognition: a bibliometric analysis of the period 2000–2012, Scientometrics 101 (Dec 2014) 2003–2033.
[11] R.R. Jillela, A. Ross, Segmenting iris images in the visible spectrum with applications in mobile biometrics, Pattern Recogn. Lett. 57 (2015) 4–16. Mobile Iris CHallenge Evaluation part I (MICHE I).
[12] M. Trokielewicz, Iris recognition with a database of iris images obtained in visible light using smartphone camera, 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Feb 2016, pp. 1–6.
[13] D. Yadav, N. Kohli, M. Vatsa, R. Singh, A. Noore, Unconstrained visible spectrum iris with textured contact lens variations: Database and benchmarking, 2017 IEEE International Joint Conference on Biometrics (IJCB), Oct 2017, pp. 574–580.
[14] P.J. Phillips, K.W. Bowyer, P.J. Flynn, Comments on the casia version 1.0 iris data set, IEEE Trans. Pattern Anal. Mach. Intell. 29 (Oct. 2007) 1869–1870.
[15] L. Ma, Y. Wang, T. Tan, Iris recognition based on multichannel gabor filtering, Proceedings of the International Conference on Asian Conference on Computer Vision, 2002, pp. 279–283.
[19] P.J. Phillips, P.J. Flynn, K.W. Bowyer, R.W.V. Bruegge, P.J. Grother, G.W. Quinn, M. Pruitt, Distinguishing identical twins by face recognition, Face and Gesture 2011, pp. 185–192, March 2011.
[20] K. Hollingsworth, K.W. Bowyer, P.J. Flynn, Similarity of iris texture between identical twins, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010, pp. 22–29.
[21] CASIA Iris Ageing database v. 1, http://biometrics.idealtest.org/dbDetailForUser.do?id=14 Accessed: 2018-11-01.
[22] P. Wild, J. Ferryman, A. Uhl, Impact of (segmentation) quality on long vs. short-timespan assessments in iris recognition performance, IET Biometrics 4 (4) (2015) 227–235.
[23] ND-TimeLapseIris-2012 database, https://sites.google.com/a/nd.edu/public-cvrl/data-sets Accessed: 2018-11-02.
[24] S.E. Baker, K.W. Bowyer, P.J. Flynn, P.J. Phillips, Template aging in iris biometrics, pp. 205–218, Springer London, London, 2013.
[25] S.P. Fenker, K.W. Bowyer, Analysis of template aging in iris biometrics, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2012, pp. 45–51.
[26] CASIA Iris Image Database Version 1.0, http://biometrics.idealtest.org/dbDetailForUser.do?id=1 Accessed: 2019-12-12.
[27] CASIA Iris Database V4, http://biometrics.idealtest.org/dbDetailForUser.do?id=14 Accessed: 2018-11-05.
[28] IIT Delhi Iris Database (Version 1.0), http://www4.comp.polyu.edu.hk/csajaykr/IITD/Database_Iris.htm Accessed: 2018-10-24.
[29] A. Kumar, A. Passi, Comparison and combination of iris matchers for reliable personal authentication, Pattern Recogn. 43 (3) (2010) 1016–1026.
[30] C.-N. Chun, R. Chung, Iris recognition for palm-top application, in: D. Zhang, A.K. Jain (Eds.), Biometric Authentication, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, pp. 426–433.
[31] CUHK Iris Image Dataset, http://www.mae.cuhk.edu.hk/~cvl/main_database.htm Accessed: 2020-05-20.
[32] Z. Wei, T. Tan, Z. Sun, Synthesis of large realistic iris databases using patch-based sampling, 2008 19th International Conference on Pattern Recognition, Dec 2008, pp. 1–4.
[33] J. Zuo, N.A. Schmid, X. Chen, On generation and analysis of synthetic iris images, IEEE Transactions on Information Forensics and Security 2 (March 2007) 77–90.
[34] WVU Synthetic Iris Model Based, https://citer.clarkson.edu/biometric-dataset-collections/synthetic-iris-model-based Accessed: 2018-11-07.
[35] S. Shah, A. Ross, Generating synthetic irises by feature agglomeration, 2006 International Conference on Image Processing, Oct 2006, pp. 317–320.
[36] WVU Synthetic Iris Textured Based, https://citer.clarkson.edu/biometric-dataset-collections/synthetic-iris-textured-based Accessed: 2018-11-07.
[37] A.K. Jain, A. Ross, S. Prabhakar, An introduction to biometric recognition, IEEE Transactions on Circuits and Systems for Video Technology, 14, Jan 2004, pp. 4–20.
[38] S. Crihalmeanu, A. Ross, S. Schuckers, L. Hornak, A protocol for multibiometric data acquisition, storage and dissemination, Tech. Rep., WVU, Lane Department of Computer Science and Electr. Eng., 2007.
[39] P.A. Johnson, P. Lopez-Meyer, N. Sazonova, F. Hua, S. Schuckers, Quality in face and iris research ensemble q-fire, 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010, pp. 1–6.
[40] Quality-Face/Iris Research Ensemble (Q-FIRE), https://citer.clarkson.edu/research-resources/biometric-dataset-collections-2/quality-faceiris-research-ensemble-q-fire/ Accessed: 2020-05-20.
[41] N. Kihal, S. Chitroub, A. Polette, I. Brunette, J. Meunier, Efficient multimodal ocular biometric system for person authentication based on iris texture and corneal shape, IET Biometrics 6 (6) (2017) 379–386.
[42] Biometric Iris-Cornea Database, http://www-labs.iro.umontreal.ca/labimage/IrisCorneaDataset/ Accessed: 2019-05-03.
[43] Y. Yin, L. Liu, X. Sun, Sdumla-hmt: A multimodal biometric database, in: Z. Sun, J. Lai, X. Chen, T. Tan (Eds.), Biometric Recognition, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 260–268.
[44] Shandong University, Machine Learning and Applications Group (SDUMLA) - The Homologous Multi-modal Traits Database, http://mla.sdu.edu.cn/info/1006/1195.htm Accessed: 2020-05-25.
[45] J. Daugman, How iris recognition works, IEEE Transact. Circ. Syst. Video Technology 14 (Jan. 2004) 21–30.
[46] H. Proenca, S. Filipe, R. Santos, J. Oliveira, L.A. Alexandre, The ubiris.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance, IEEE Trans. Pattern Anal. Mach. Intell. 32 (Aug 2010) 1529–1535.
[47] C.N. Padole, H. Proenca, Periocular recognition: Analysis of performance degradation factors, 2012 5th IAPR International Conference on Biometrics (ICB), March 2012, pp. 439–445.
[48] W. Dong, Z. Sun, T. Tan, A design of iris recognition system at a distance, 2009 Chinese Conference on Pattern Recognition, Nov 2009, pp. 1–5.
[49] A. Rattani, R. Derakhshani, S.K. Saripalle, V. Gottemukkula, Icip 2016 competition on mobile ocular biometric recognition, 2016 IEEE International Conference on Image Processing (ICIP), Sept 2016, pp. 320–324.
[50] Visible light mobile Ocular Biometric (VISOB) Dataset ICIP2016 Challenge Version,
[16] S.S. Arora, M. Vatsa, R. Singh, A. Jain, On iris camera interoperability, 2012 IEEE Fifth http://sce2.umkc.edu/cibit/dataset.html Accessed: 2019-04-14.
International Conference on Biometrics: Theory, Applications and Systems (BTAS) [51] CASIA-Iris-Mobile-V1.0 - Casia mobile database (datasets S1, S2 and S3), http://
Sept 2012, pp. 346–352. biometrics.idealtest.org/dbDetailForUser.do?id=13 Accessed: 2018-10-24.
[17] L. Xiao, Z. Sun, R. He, T. Tan, Coupled feature selection for cross-sensor iris recogni- [52] Q. Zhang, H. Li, Z. Sun, T. Tan, Deep feature fusion for iris and periocular biometrics
tion, 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications on mobile devices, IEEE Transactions on Information Forensics and Security 13
and Systems (BTAS) Sept 2013, pp. 1–6. (11) (2018) 2897–2912.
[18] Twins Day Dataset 2010–-1015, https://biic.wvu.edu/data-sets/twins-day-dataset- [53] Q. Zhang, H. Li, M. Zhang, Z. He, Z. Sun, T. Tan, Fusion of face and iris biometrics on
2010-1015 Accessed: 2019-05-14. mobile devices using near-infrared images, Biometric Recognition (J. Yang, J. Yang,
18
L. Omelina, J. Goga, J. Pavlovicova et al. Image and Vision Computing 108 (2021) 104109
Z. Sun, S. Shan, W. Zheng, and J. Feng, Eds.), Springer International Publishing, Cham 2015, pp. 569–578.
[54] M. Zhang, Q. Zhang, Z. Sun, S. Zhou, N.U. Ahmed, The btas competition on mobile iris recognition, 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS) Sep. 2016, pp. 1–7.
[55] MICHE - Mobile Iris CHallenge Evaluation, http://biplab.unisa.it/MICHE/ Accessed: 2020-05-27.
[56] M.D. Marsico, M. Nappi, D. Riccio, H. Wechsler, Mobile iris challenge evaluation (miche)-i, biometric iris dataset and protocols, Pattern Recogn. Lett. 57 (2015) 17–23. Mobile Iris CHallenge Evaluation part I (MICHE I).
[57] M. De Marsico, M. Nappi, F. Narducci, H. Proença, Insights into the results of miche i - mobile iris challenge evaluation, Pattern Recogn. 74 (Feb. 2018) 286–304.
[58] G. Santos, E. Grancho, M.V. Bernardo, P.T. Fiadeiro, Fusing iris and periocular information for cross-sensor recognition, Pattern Recogn. Lett. 57 (2015) 52–59. Mobile Iris CHallenge Evaluation part I (MICHE I).
[59] Cross Sensor Iris and Periocular Database, http://csip.di.ubi.pt Accessed: 2019-04-18.
[60] M. De Marsico, M. Nappi, H. Proença, Results from miche ii – mobile iris challenge evaluation ii, Pattern Recogn. Lett. 91 (2017) 3–10. Mobile Iris CHallenge Evaluation (MICHE-II).
[61] Q. Zhang, H. Li, Z. Sun, Z. He, T. Tan, Exploring complementary features for iris recognition on mobile devices, 2016 International Conference on Biometrics (ICB) June 2016, pp. 1–8.
[62] H. Proença, L.A. Alexandre, Ubiris: A noisy iris image database, Proceedings of the 13th International Conference on Image Analysis and Processing, ICIAP'05, Springer-Verlag, Berlin, Heidelberg 2005, pp. 970–977.
[63] M. Edwards, A. Gozdzik, K. Ross, J. Miles, E. Parra, Quantitative measures of iris color using high resolution photographs, Am. J. Phys. Anthropol. 147 (1) (2012) 141–149.
[64] MILES Iris Dataset, https://drive.google.com/drive/folders/0B5OBp4zckpLnU3YxMnozSGhGelE Accessed: 2020-05-20.
[65] M. Dobeš, L. Machala, P. Tichavský, J. Pospíšil, Human eye iris recognition using the mutual information, Optik - Int. J. Light Electron Optics 115 (9) (2004) 399–404.
[66] Palacký University Olomouc (UPOL) Iris Image Dataset, http://phoenix.inf.upol.cz/iris/ Accessed: 2020-10-10.
[67] M. Dehnavi, M. Eshghi, Design and implementation of a real time and train less eye state recognition system, EURASIP J. Adv. Signal Process. 2012 (2012).
[68] Eye SBU database, http://facultymembers.sbu.ac.ir/eshghi/index.html Accessed: 2020-05-20.
[69] The Hong Kong Polytechnic University Cross-Spectral Iris Images Database, http://www4.comp.polyu.edu.hk/~csajaykr/polyuiris.htm Accessed: 2018-10-24.
[70] P.R. Nalla, A. Kumar, Toward more accurate iris recognition using cross-spectral matching, IEEE Trans. Image Process. 26 (Jan. 2017) 208–221.
[71] A. Sequeira, L. Chen, P. Wild, J. Ferryman, F. Alonso-Fernandez, K.B. Raja, R. Raghavendra, C. Busch, J. Bigun, Cross-eyed - cross-spectral iris/periocular recognition database and competition, 2016 International Conference of the Biometrics Special Interest Group (BIOSIG) 2016, pp. 1–5.
[72] Cross-Spectrum Iris/Periocular Recognition Competition Database, https://sites.google.com/site/crossspectrumcompetition/cross-eyed-2016 Accessed: 2020-05-21.
[73] A. Sharma, S. Verma, M. Vatsa, R. Singh, On cross spectral periocular recognition, 2014 IEEE International Conference on Image Processing (ICIP) Oct 2014, pp. 5007–5011.
[74] M.S. Hosseini, B.N. Araabi, H. Soltanian-Zadeh, Pigment melanin: pattern for iris recognition, IEEE Trans. Instrum. Meas. 59 (April 2010) 792–804.
[75] University of Tehran IRIS (UTIRIS) database, https://utiris.wordpress.com/ Accessed: 2018-12-13.
[76] N.D. Kalka, J. Zuo, N.A. Schmid, B. Cukic, Image quality assessment for iris biometric, Proc. SPIE 6202: Biometric Technology for Human Identification III, SPIE, 2006.
[77] IIITD Multi-spectral Periocular Database, http://iab-rubric.org/resources/impdatabase.html Accessed: 2018-10-30.
[78] IIITD-WVU Mobile Iris Spoofing Dataset, http://iab-rubric.org/resources.html Accessed: 2018-10-30.
[79] D. Yambay, B. Becker, N. Kohli, D. Yadav, A. Czajka, K.W. Bowyer, S. Schuckers, R. Singh, M. Vatsa, A. Noore, D. Gragnaniello, C. Sansone, L. Verdoliva, L. He, Y. Ru, H. Li, N. Liu, Z. Sun, T. Tan, Livdet iris 2017 - iris liveness detection competition 2017, 2017 IEEE International Joint Conference on Biometrics (IJCB) Oct 2017, pp. 733–741.
[80] ND-CrossSensor-Iris-2013 database, https://sites.google.com/a/nd.edu/public-cvrl/data-sets Accessed: 2018-11-02.
[81] Dataset provided within the ICB Competition on Cross-sensor Iris Recognition, http://biometrics.idealtest.org/2015/csir2015.jsp Accessed: 2018-11-05.
[82] S.E. Baker, A. Hentz, K.W. Bowyer, P.J. Flynn, Degradation of iris recognition performance due to non-cosmetic prescription contact lenses, Comput. Vis. Image Underst. 114 (9) (2010) 1030–1044.
[83] J.S. Doyle, K.W. Bowyer, Robust detection of textured contact lenses in iris recognition using bsif, IEEE Access 3 (2015) 1672–1683.
[84] D. Yadav, N. Kohli, J.S. Doyle, R. Singh, M. Vatsa, K.W. Bowyer, Unraveling the effect of textured contact lenses on iris recognition, IEEE Transactions on Information Forensics and Security 9 (May 2014) 851–862.
[85] J.S. Doyle, K.W. Bowyer, P.J. Flynn, Variation in accuracy of textured contact lens detection based on sensor and lens pattern, 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS) Sept 2013, pp. 1–7.
[86] N. Kohli, D. Yadav, M. Vatsa, R. Singh, A. Noore, Detecting medley of iris spoofing attacks using desist, 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS) Sept 2016, pp. 1–6.
[87] Iris Combined Spoofing Database, http://iab-rubric.org/resources.html Accessed: 2018-10-31.
[88] P. Gupta, S. Behera, M. Vatsa, R. Singh, On iris spoofing using print attack, 2014 22nd International Conference on Pattern Recognition Aug 2014, pp. 1681–1686.
[89] Iris Combined Spoofing Database, http://iab-rubric.org/resources.html Accessed: 2018-10-30.
[90] D. Yambay, J.S. Doyle, K.W. Bowyer, A. Czajka, S. Schuckers, Livdet-iris 2013 - iris liveness detection competition 2013, IEEE International Joint Conference on Biometrics, Sept 2014, pp. 1–8.
[91] D. Yambay, B. Walczak, S. Schuckers, A. Czajka, Livdet-iris 2015 - iris liveness detection competition 2015, 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA) Feb 2017, pp. 1–6.
[92] ATVS-FIr iris database, https://atvs.ii.uam.es/atvs/fir_db.html Accessed: 2019-05-04.
[93] J. Fierrez, J. Ortega-Garcia, D.T. Toledano, J. Gonzalez-Rodriguez, Biosec baseline corpus: A multimodal biometric database, Pattern Recogn. 40 (4) (2007) 1389–1392.
[94] V. Ruiz-Albacete, P. Tome-Gonzalez, F. Alonso-Fernandez, J. Galbally, J. Fierrez, J. Ortega-Garcia, Direct attacks using fake images in iris verification, in: B. Schouten, N.C. Juul, A. Drygajlo, M. Tistarelli (Eds.), Biometrics and Identity Management, Springer Berlin Heidelberg, Berlin, Heidelberg 2008, pp. 181–190.
[95] Multi-Angle Sclera Dataset (MASD) version 1, https://sites.google.com/site/dasabhijit2048/datatsets Accessed: 2020-05-27.
[96] A. Das, U. Pal, M.A. Ferrer, M. Blumenstein, A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics, Pattern Recogn. Lett. 82 (2016) 232–241. An insight on eye biometrics.
[97] I. Rigas, O.V. Komogortsev, Gaze estimation as a framework for iris liveness detection, IEEE International Joint Conference on Biometrics Sep. 2014, pp. 1–8.
[98] Eye Tracker Print-Attack Database (ETPAD) v1, https://userweb.cs.txstate.edu/~ok11/etpad_v1.html Accessed: 2019-05-16.
[99] Eye Tracker Print-Attack Database (ETPAD) v2, https://userweb.cs.txstate.edu/~ok11/etpad_v2.html Accessed: 2019-05-16.
[100] K.W. Bowyer, P.J. Flynn, The ND-IRIS-0405 iris image dataset, Technical Report, University of Notre Dame, CoRR (2009).
[101] ND-Iris-0405 Data Set, https://sites.google.com/a/nd.edu/public-cvrl/data-sets Accessed: 2018-11-02.
[102] P.J. Phillips, W.T. Scruggs, A.J. O'Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, M. Sharpe, Frvt 2006 and ice 2006 large-scale experimental results, IEEE Trans. Pattern Anal. Mach. Intell. 32 (May 2010) 831–846.
[103] P.J. Phillips, K.W. Bowyer, P.J. Flynn, X. Liu, W.T. Scruggs, The iris challenge evaluation 2005, 2008 IEEE Second International Conference on Biometrics: Theory, Applications and Systems Sept 2008, pp. 1–8.
[104] A. Das, U. Pal, M.A.F. Ballester, M. Blumenstein, Multi-angle based lively sclera biometrics at a distance, 2014 IEEE Symposium on Computational Intelligence in Biometrics and Identity Management (CIBIM) 2014, pp. 22–29.
[105] Multimedia University Iris Database (MMU) - V1 and V2, https://mmuexpert.mmu.edu.my/ccteo Accessed: 2020-10-01.
[106] C.C. Teo, H.F. Neo, G.K.O. Michael, C. Tee, K.S. Sim, A robust iris segmentation with fuzzy supports, in: K.W. Wong, B.S.U. Mendis, A. Bouzerdoum (Eds.), Neural Information Processing. Theory and Algorithm, Springer Berlin Heidelberg, Berlin, Heidelberg 2010, pp. 532–539.
[107] Y. Chen, Y. Liu, X. Zhu, H. Chen, F. He, Y. Pang, Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion, The Scientific World Journal 2014 (2014) 670934.
[108] G. Huo, Y. Liu, X. Zhu, H. Dong, Secondary iris recognition method based on local energy-orientation feature, Journal of Electronic Imaging 24 (1) (2015) 1–13.
[109] Jilin University Iris Biometric and Information Security Lab (JLUBRIRIS v1-v6) Database, http://biis.jlu.edu.cn/ Accessed: 2019-05-01.
[110] C.C. Teo, H.T. Ewe, An efficient one-dimensional fractal analysis for iris recognition, Proceedings of WSCG 2005, 2005.
[111] P.J. Phillips, P.J. Flynn, J.R. Beveridge, W.T. Scruggs, A.J. O'Toole, D. Bolme, K.W. Bowyer, B.A. Draper, G.H. Givens, Y.M. Lui, H. Sahibzada, J.A. Scallan III, S. Weimer, Overview of the multiple biometrics grand challenge, Proceedings of the Third International Conference on Advances in Biometrics, ICB '09, Springer-Verlag, Berlin, Heidelberg 2009, pp. 705–714.
[112] Multiple Biometric Grand Challenge (MBGC), https://www.nist.gov/programs-projects/multiple-biometric-grand-challenge-mbgc Accessed: 2019-04-14.
[113] R. Jillela, A.A. Ross, V.N. Boddeti, B.V.K.V. Kumar, X. Hu, R. Plemmons, P. Pauca, Iris segmentation for challenging periocular images, Springer London, London 2013, pp. 281–308.
[114] Face and Ocular Challenge Series (FOCS), https://www.nist.gov/programs-projects/face-and-ocular-challenge-series-focs Accessed: 2019-04-14.
[115] C.N. Padole, H. Proença, Compensating for pose and illumination in unconstrained periocular biometrics, IJBM 5 (2013) 336–359.
[116] UBI Periocular Dataset, http://socia-lab.di.ubi.pt/~ubipr/index.html Accessed: 2019-04-08.
[117] Cataract Mobile Periocular Database (CMPD), http://www.iab-rubric.org/resources/cmpd.html Accessed: 2018-10-29.
[118] R. Keshari, S. Ghosh, A. Agarwal, R. Singh, M. Vatsa, Mobile periocular matching with pre-post cataract surgery, 2016 IEEE International Conference on Image Processing (ICIP) Sept 2016, pp. 3116–3120.
[119] M. Singh, S. Nagpal, M. Vatsa, R. Singh, A. Noore, A. Majumdar, Gender and ethnicity classification of iris images using deep class-encoder, 2017 IEEE International Joint Conference on Biometrics (IJCB), 2018, https://doi.org/10.1109/BTAS.2017.8272755.
[120] ND-TWINS-2009-2010 Still Face database, https://cvrl.nd.edu/projects/data/#nd-twins-2009-2010 Accessed: 2019-05-14.
[121] H. Mehrotra, M. Vatsa, R. Singh, B. Majhi, Does iris change over time? PLoS One 8 (11) (2013) e78333.
[122] P. Basak, S. De, M. Agarwal, A. Malhotra, M. Vatsa, R. Singh, Multimodal biometric recognition for toddlers and pre-school children, 2017 IEEE International Joint Conference on Biometrics (IJCB) Oct 2017, pp. 627–633.
[123] J.E. Tapia, C.A. Perez, K.W. Bowyer, Gender classification from the same iris code used for recognition, IEEE Transactions on Information Forensics and Security 11 (Aug 2016) 1760–1770.
[124] B. Brown, A.J. Adams, G. Haegerstrom-Portnoy, R.T. Jones, M.C. Flom, Pupil size after use of marijuana and alcohol, Am. J. Ophthalmol. 83 (3) (1977) 350–354.
[125] J.E. Richman, K.G. McAndrew, D. Decker, S.C. Mullaney, An evaluation of pupil size standards used by police officers for detecting drug impairment, Optometry - J. Am. Optomet. Associat. 75 (3) (2004) 175–182.
[126] S.S. Arora, M. Vatsa, R. Singh, A. Jain, Iris recognition under alcohol influence: A preliminary study, 2012 5th IAPR International Conference on Biometrics (ICB) March 2012, pp. 336–341.
[127] I. Tomeo-Reyes, V. Chandran, Part based bit error analysis of iris codes, Pattern Recogn. 60 (2016) 306–317.
[128] P. Jain, P. Finger, Iris varix: 10-year experience with 28 eyes, Indian J. Ophthalmol. 67 (3) (2019) 350.
[129] The Eye Cancer Foundation Dataset, http://www.eyecancercure.com/research/image-gallery, https://eyecancer.com/eye-cancer/image-galleries/iris-tumors Accessed: 2019-05-16.
[130] M. Trokielewicz, A. Czajka, P. Maciejewicz, Database of iris images acquired in the presence of ocular pathologies and assessment of iris recognition reliability for disease-affected eyes, 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF) June 2015, pp. 495–500.
[131] M. Trokielewicz, A. Czajka, P. Maciejewicz, Assessment of iris recognition reliability for eyes affected by ocular pathologies, 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS) Sep. 2015, pp. 1–6.
[132] Z. Luo, J. Dai, Y. Jia, J. He, An improved bovine iris segmentation method, MATEC Web of Conferences 267 (2019) 03002.
[133] L.Z. Menglu Zhang, An iris localization algorithm based on geometrical features of cow eyes, Proc. SPIE 7495 (2009).
[134] M. Trokielewicz, M. Szadkowski, Iris and periocular recognition in arabian race horses using deep convolutional neural networks, 2017 IEEE International Joint Conference on Biometrics (IJCB) Oct 2017, pp. 510–516.
[135] D.M. Monro, S. Rakshit, D. Zhang, Dct-based iris recognition, IEEE Trans. Pattern Anal. Mach. Intell. 29 (Apr. 2007) 586–595.
[136] The Notre Dame Computer Vision Research Laboratory (CVRL) Datasets, https://cvrl.nd.edu/projects/data/ Accessed: 2019-12-15.
[137] Data protection in EU, https://ec.europa.eu/info/law/law-topic/data-protection_en Accessed: 2019-12-15.
[138] A. Czajka, Template ageing in iris recognition, BIOSIGNALS 2013 - Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, 2013, pp. 1–8.
[139] K. Browning, N.M. Orlans, Biometric Aging - Effects of Aging on Iris Recognition, technical paper, MITRE Corp., 2014, https://www.mitre.org/publications/technical-papers/biometric-aging-effects-of-aging-on-iris-recognition Accessed: 2020-04-28.
[140] Council of European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council, 2016, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679.
[141] E. Simperl, K. O'Hara, R. Gomer, Analytical Report 3: Open Data and Privacy, tech. rep., European Data Portal, 2016, https://www.europeandataportal.eu/sites/default/files/open_data_and_privacy_v1_final_clean.pdf Accessed: 2020-04-28.
[142] A. Karpenko, D. Jacobs, J. Baek, M. Levoy, Digital video stabilization and rolling shutter correction using gyroscopes, tech. rep., Computer Science Tech Report, Stanford University, Sept. 2011.
[143] J.R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D.J. LoIacono, S. Mangru, M. Tinker, T.M. Zappia, W.Y. Zhao, Iris on the move: acquisition of images for iris recognition in less constrained environments, Proc. IEEE 94 (11) (2006) 1936–1947.
[144] D. Crouse, H. Han, D. Chandra, B. Barbello, A.K. Jain, Continuous authentication of mobile user: Fusion of face image and inertial measurement unit data, 2015 International Conference on Biometrics (ICB) 2015, pp. 135–142.
[145] A. Das, C. Galdi, H. Han, R. Ramachandra, J. Dugelay, A. Dantcheva, Recent advances in biometric technology for mobile devices, 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), 2018.
[146] Z. Ali, U. Park, J. Nang, J. Park, T. Hong, S. Park, Periocular recognition using umlbp and attribute features, KSII Transactions on Internet and Information Systems 11 (12) (2017) 6133–6151.
[147] S. Barra, A. Casanova, F. Narducci, S. Ricciardi, Ubiquitous iris recognition by means of mobile devices, Pattern Recogn. Lett. 57 (2015) 66–73. Mobile Iris CHallenge Evaluation part I (MICHE I).
[148] F. Alonso-Fernandez, R.A. Farrugia, J. Bigun, J. Fierrez, E. Gonzalez-Sosa, A survey of super-resolution in iris biometrics with evaluation of dictionary-learning, IEEE Access 7 (2019) 6519–6544.
[149] M. Bielikova, M. Konopka, J. Simko, R. Moro, J. Tvarozek, P. Hlaváč, E. Kuric, Eye-tracking en masse: group user studies, lab infrastructure, and practices, J. Eye Mov. Res. 11 (2018).
[150] K. Krafka, A. Khosla, P. Kellnhofer, H. Kannan, S. Bhandarkar, W. Matusik, A. Torralba, Eye tracking for everyone, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2016, pp. 2176–2184.
[151] S. Joshi, Y. Li, R. Kalwani, J. Gold, Relationships between pupil diameter and neuronal activity in the locus coeruleus, colliculi and cingulate cortex, Neuron 89 (1) (2016) 221–234.
[152] M. Szwoch, P. Pieniążek, Eye blink based detection of liveness in biometric authentication systems using conditional random fields, Computer Vision and Graphics, Springer Berlin Heidelberg, Berlin, Heidelberg 2012, pp. 669–676.
[153] A.S. Adhau, D.K. Shedge, Iris recognition methods of a blinked eye in nonideal condition, 2015 International Conference on Information Processing (ICIP) 2015, pp. 75–79.
[154] A. Fogelton, W. Benesova, Eye blink completeness detection, Comput. Vis. Image Underst. 176-177 (2018).
[155] A. Czajka, K.W. Bowyer, Presentation attack detection for iris recognition: an assessment of the state-of-the-art, ACM Comput. Surv. 51 (July 2018).
[156] D. Yambay, A. Czajka, K. Bowyer, M. Vatsa, R. Singh, A. Noore, N. Kohli, D. Yadav, S. Schuckers, Review of Iris Presentation Attack Detection Competitions, Springer International Publishing, Cham 2019, pp. 169–183.
[157] L.A. Zanlorensi, R. Laroca, E. Luz, A.S. Britto Jr., L.S. Oliveira, D. Menotti, Ocular recognition databases and competitions: a survey, 2019.
[158] H. Proença, L.A. Alexandre, The nice.I: Noisy iris challenge evaluation - part I, 2007 First IEEE International Conference on Biometrics: Theory, Applications, and Systems 2007, pp. 1–4.
[159] K.W. Bowyer, The results of the nice.ii iris biometrics competition, Pattern Recogn. Lett. 33 (2012) 965–969. Noisy Iris Challenge Evaluation II - Recognition of Visible Wavelength Iris Images Captured At-a-distance and On-the-move.
[160] M. Zhang, J. Liu, Z. Sun, T. Tan, S. Wu, F. Alonso-Fernandez, V. Némesin, N. Othman, K. Noda, P. Li, E. Hoyle, A. Joshi, The first icb* competition on iris recognition, IEEE International Joint Conference on Biometrics, 2014, pp. 1–6.
[161] LivDet - Liveness Detection Competitions, http://livdet.org/ Accessed: 2020-05-27.
[162] A. Sequeira, J. Monteiro, A. Rebelo, H. Oliveira, Mobbio: A multimodal database captured with a portable handheld device, VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications, Vol. 3, 2014.
[163] A. Sequeira, H. Oliveira, J. Monteiro, J. Monteiro, J. Cardoso, Mobilive 2014 - mobile iris liveness detection competition, IJCB 2014 - 2014 IEEE/IAPR International Joint Conference on Biometrics, 2014.
[164] The First CCBR Competition on Iris Recognition, http://biometrics.idealtest.org/2014/CCIR2014.jsp Accessed: 2020-10-10.
[165] A. Das, U. Pal, M. Ferrer, M. Blumenstein, Ssbc 2015: Sclera segmentation benchmarking competition, 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), 2015, pp. 1–6.
[166] A. Das, U. Pal, M. Ferrer, M. Blumenstein, Ssrbc 2016: Sclera segmentation and recognition benchmarking competition, 2016 International Conference on Biometrics (ICB), 2016, pp. 1–6.
[167] A.F. Sequeira, L. Chen, J. Ferryman, P. Wild, F. Alonso-Fernandez, J. Bigun, K.B. Raja, R. Raghavendra, C. Busch, T. de Freitas Pereira, S. Marcel, S.S. Behera, M. Gour, V. Kanhangad, Cross-eyed 2017: Cross-spectral iris/periocular recognition competition, 2017 IEEE International Joint Conference on Biometrics (IJCB) 2017, pp. 725–732.
[168] A. Das, U. Pal, M. Ferrer, M. Blumenstein, D. Stepec, P. Rot, Z. Emersic, P. Peer, V. Struc, S.V. Aruna Kumar, B.S. Harish, Sserbc 2017: Sclera segmentation and eye recognition benchmarking competition, 2017 IEEE International Joint Conference on Biometrics (IJCB), 2017, pp. 742–747.
[169] A. Das, U. Pal, M. Ferrer, M. Blumenstein, D. Stepec, P. Rot, Z. Emersic, P. Peer, V. Struc, Ssbc 2018: Sclera segmentation benchmarking competition, 2018 International Conference on Biometrics (ICB), 2018, pp. 303–308.
[170] A. Das, U. Pal, M. Blumenstein, Z. Sun, Sclera segmentation benchmarking competition in cross-resolution environment, ICB 2019, 2019.
[171] P. Das, J. McGrath, Z. Fang, A. Boyd, G. Jang, A. Mohammadi, S. Purnapatra, D. Yambay, S. Marcel, M. Trokielewicz, P. Maciejewicz, K. Bowyer, A. Czajka, S. Schuckers, J. Tapia, S. Gonzalez, M. Fang, N. Damer, F. Boutros, A. Kuijper, R. Sharma, C. Chen, A. Ross, Iris liveness detection competition (livdet-iris) – the 2020 edition, 2020.
[172] VISible light mobile Ocular Biometric (VISOB) 2.0 dataset (WCCI/IJCNN2020 Challenge Version), https://sce.umkc.edu/research-sites/cibit/dataset.html Accessed: 2020-10-10.
[173] M. Vitek, A. Das, Y. Pourcenoux, A. Missler, C. Paumier, S. Das, I. Ghosh, D.R. Lucio, L. Zanlorensi, D. Menotti, F. Boutros, N. Damer, J. Grebe, A. Kuijper, J. Hu, Y. He, C. Wang, H. Liu, Y. Wang, R. Vyas, Ssbc 2020: Sclera segmentation benchmarking competition in the mobile environment, IJCB 2020, 2020.
[174] A. de Waard, H. Cousijn, I.J. Aalbersberg, 10 aspects of highly effective research data, 2015, https://www.elsevier.com/connect/10-aspects-of-highly-effective-research-data Accessed: 2019-12-12.
[175] H. Aguinis, N.S. Hill, J. Bailey, Best practices in data collection and preparation: recommendations for reviewers, editors, and authors, Organ. Res. Methods (2019).
[176] ISO/IEC 19795-1:2006 - Information technology – Biometric performance testing and reporting – Part 1: Principles and framework, tech. rep., 2006.