ImageNet Large Scale Visual Recognition Challenge
Abstract  The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions.

This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.

Keywords  Dataset · Large-scale · Benchmark · Object recognition · Object detection
O. Russakovsky* · J. Deng* · H. Su · J. Krause · S. Satheesh · S. Ma · Z. Huang · A. Karpathy · A. Khosla · M. Bernstein · A. C. Berg · L. Fei-Fei
(* = authors contributed equally)

O. Russakovsky, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, M. Bernstein, L. Fei-Fei
Stanford University, Stanford, CA, USA
E-mail: [email protected]

J. Deng
University of Michigan, Ann Arbor, MI, USA

A. Khosla
Massachusetts Institute of Technology, Cambridge, MA, USA

A. C. Berg
UNC Chapel Hill, Chapel Hill, NC, USA

1 Introduction

Overview. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.[1] ILSVRC follows in the footsteps of the PASCAL VOC challenge (Everingham et al., 2012), established in 2005, which set the precedent for standardized evaluation of recognition algorithms in the form of yearly competitions. As in PASCAL VOC, ILSVRC consists of two components: (1) a publicly available dataset, and (2) an annual competition and corresponding workshop. The dataset allows for the development and comparison of categorical object recognition algorithms, and the competition and workshop provide a way to track progress and discuss the lessons learned from the most successful and innovative entries each year.

[1] In this paper, we use the term object recognition broadly to encompass both image classification (a task requiring an algorithm to determine what object classes are present in the image) as well as object detection (a task requiring an algorithm to localize all objects present in the image).
The publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.[2] Participants train their algorithms using the training images and then automatically annotate the test images. These predicted annotations are submitted to the evaluation server. Results of the evaluation are revealed at the end of the competition period, and authors are invited to share insights at the workshop held at the International Conference on Computer Vision (ICCV) or European Conference on Computer Vision (ECCV) in alternate years.

[2] In 2010, the test annotations were later released publicly; since then the test annotations have been kept hidden.

ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., "there are cars in this image" but "there are no tigers," and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., "there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels".

Large-scale challenges and innovations. In creating the dataset, several challenges had to be addressed. Scaling up from 19,737 images in PASCAL VOC 2010 to 1,461,406 in ILSVRC 2010 and from 20 object classes to 1000 object classes brings with it several challenges. It is no longer feasible for a small group of annotators to annotate the data as is done for other datasets (Fei-Fei et al., 2004; Criminisi, 2004; Everingham et al., 2012; Xiao et al., 2010). Instead we turn to designing novel crowdsourcing approaches for collecting large-scale annotations (Su et al., 2012; Deng et al., 2009, 2014).

Some of the 1000 object classes may not be as easy to annotate as the 20 categories of PASCAL VOC: e.g., bananas which appear in bunches may not be as easy to delineate as the basic-level categories of aeroplanes or cars. Having more than a million images makes it infeasible to annotate the locations of all objects (much less with object segmentations, human body parts, and other detailed annotations that subsets of PASCAL VOC contain). New evaluation criteria have to be defined to take into account the fact that obtaining perfect manual annotations in this setting may be infeasible.

Once the challenge dataset was collected, its scale allowed for unprecedented opportunities both in evaluation of object recognition algorithms and in developing new techniques. Novel algorithmic innovations emerge with the availability of large-scale training data. The broad spectrum of object categories motivated the need for algorithms that are able to distinguish classes which are visually very similar. We highlight the most successful of these algorithms in this paper, and compare their performance with human-level accuracy.

Finally, the large variety of object classes in ILSVRC allows us to perform an analysis of statistical properties of objects and their impact on recognition algorithms. This type of analysis allows for a deeper understanding of object recognition, and for designing the next generation of general object recognition algorithms.

Goals. This paper has three key goals:

1. To discuss the challenges of creating this large-scale object recognition benchmark dataset,
2. To highlight the developments in object classification and detection that have resulted from this effort, and
3. To take a closer look at the current state of the field of categorical object recognition.

The paper may be of interest to researchers working on creating large-scale datasets, as well as to anybody interested in better understanding the history and the current state of large-scale object recognition.

The collected dataset and additional information about ILSVRC can be found at:

http://image-net.org/challenges/LSVRC/

1.1 Related work

We briefly discuss some prior work in constructing benchmark image datasets.

Image classification datasets. Caltech 101 (Fei-Fei et al., 2004) was among the first standardized datasets for multi-category image classification, with 101 object classes and commonly 15-30 training images per class. Caltech 256 (Griffin et al., 2007) increased the number of object classes to 256 and added images with greater scale and background variability. The TinyImages dataset (Torralba et al., 2008) contains 80 million 32x32 low resolution images collected from the internet using synsets in WordNet (Miller, 1995) as queries. However, since this data has not been manually verified, there are many errors, making it less suitable for algorithm evaluation. Datasets such as 15 Scenes (Oliva and Torralba, 2001; Fei-Fei and Perona, 2005; Lazebnik et al., 2006) or the recent Places (Zhou et al., 2014) provide a single scene category label (as opposed to an object category).

The ImageNet dataset (Deng et al., 2009) is the backbone of ILSVRC. ImageNet is an image dataset organized according to the WordNet hierarchy (Miller, 1995).
Each concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". ImageNet populates 21,841 synsets of WordNet with an average of 650 manually verified and full resolution images. As a result, ImageNet contains 14,197,122 annotated images organized by the semantic hierarchy of WordNet (as of August 2014). ImageNet is larger in scale and diversity than the other image classification datasets. ILSVRC uses a subset of ImageNet images for training the algorithms and some of ImageNet's image collection protocols for annotating additional images for testing the algorithms.

Image parsing datasets. Many datasets aim to provide richer image annotations beyond image-category labels. LabelMe (Russell et al., 2007) contains general photographs with multiple objects per image. It has bounding polygon annotations around objects, but the object names are not standardized: annotators are free to choose which objects to label and what to name each object. The SUN2012 dataset (Xiao et al., 2010) contains 16,873 manually cleaned up and fully annotated images more suitable for standard object detection training and evaluation. SIFT Flow (Liu et al., 2011) contains 2,688 images labeled using the LabelMe system. The LotusHill dataset (Yao et al., 2007) contains very detailed annotations of objects in 636,748 images and video frames, but it is not available for free. Several datasets provide pixel-level segmentations: for example, the MSRC dataset (Criminisi, 2004) with 591 images and 23 object classes, the Stanford Background Dataset (Gould et al., 2009) with 715 images and 8 classes, and the Berkeley Segmentation dataset (Arbelaez et al., 2011) with 500 images annotated with object boundaries. OpenSurfaces segments surfaces from consumer photographs and annotates them with surface properties, including material, texture, and contextual information (Bell et al., 2013).

The closest to ILSVRC is the PASCAL VOC dataset (Everingham et al., 2010, 2014), which provides a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. Many of the design choices in ILSVRC have been inspired by PASCAL VOC, and the similarities and differences between the datasets are discussed at length throughout the paper. ILSVRC scales up PASCAL VOC's goal of standardized training and evaluation of recognition algorithms by more than an order of magnitude in number of object classes and images: PASCAL VOC 2012 has 20 object classes and 21,738 images compared to ILSVRC2012 with 1000 object classes and 1,431,167 annotated images.

The recently released COCO dataset (Lin et al., 2014b) contains more than 328,000 images with 2.5 million object instances manually segmented. It has fewer object categories than ILSVRC (91 in COCO versus 200 in ILSVRC object detection) but more instances per category (27K on average compared to about 1K in ILSVRC object detection). Further, it contains object segmentation annotations which are not currently available in ILSVRC. COCO is likely to become another important large-scale benchmark.

Large-scale annotation. ILSVRC makes extensive use of Amazon Mechanical Turk to obtain accurate annotations (Sorokin and Forsyth, 2008). Works such as (Welinder et al., 2010; Sheng et al., 2008; Vittayakorn and Hays, 2011) describe quality control mechanisms for this marketplace. (Vondrick et al., 2012) provides a detailed overview of crowdsourcing video annotation. A related line of work is to obtain annotations through well-designed games, e.g. (von Ahn and Dabbish, 2005). Our novel approaches to crowdsourcing accurate image annotations are described in Sections 3.1.3, 3.2.1 and 3.3.3.

Standardized challenges. There are several datasets with standardized online evaluation similar to ILSVRC: the aforementioned PASCAL VOC (Everingham et al., 2012), Labeled Faces in the Wild (Huang et al., 2007) for unconstrained face recognition, Reconstruction meets Recognition (Urtasun et al., 2014) for 3D reconstruction, and KITTI (Geiger et al., 2013) for computer vision in autonomous driving. These datasets along with ILSVRC help benchmark progress in different areas of computer vision. Works such as (Torralba and Efros, 2011) emphasize the importance of examining the bias inherent in any standardized dataset.

1.2 Paper layout

We begin with a brief overview of ILSVRC challenge tasks in Section 2. Dataset collection and annotation are described at length in Section 3. Section 4 discusses the evaluation criteria of algorithms in the large-scale recognition setting. Section 5 provides an overview of the methods developed by ILSVRC participants.

Section 6 contains an in-depth analysis of ILSVRC results: Section 6.1 documents the progress of large-scale recognition over the years, Section 6.2 concludes that ILSVRC results are statistically significant, Section 6.3 thoroughly analyzes the current state of the field of object recognition, and Section 6.4 compares state-of-the-art computer vision accuracy with human accuracy. We conclude and discuss lessons learned from ILSVRC in Section 7.
2 Challenge tasks

The goal of ILSVRC is to estimate the content of photographs for the purpose of retrieval and automatic annotation. Test images are presented with no initial annotation, and algorithms have to produce labelings specifying what objects are present in the images. New test images are collected and labeled especially for this competition and are not part of the previously published ImageNet dataset (Deng et al., 2009).

ILSVRC over the years has consisted of one or more of the following tasks (years in parentheses):

1. Image classification (2010-2014): Algorithms produce a list of object categories present in the image.
2. Single-object localization (2011-2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of one instance of each object category.
3. Object detection (2013-2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of every instance of each object category.

This section provides an overview and history of each of the three tasks. Table 1 shows summary statistics.

2.2 Single-object localization task

The single-object localization task, introduced in 2011, built off of the image classification task to evaluate the ability of algorithms to learn the appearance of the target object itself rather than its image context.

Data for the single-object localization task consists of the same photographs collected for the image classification task, hand labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. Additionally, every instance of this category is annotated with an axis-aligned bounding box.

For each image, algorithms produce a list of object categories present in the image, along with a bounding box indicating the position and scale of one instance of each object category. The quality of a labeling is evaluated based on the object category label that best matches the ground truth label, with the additional requirement that the location of the predicted instance is also accurate (see Section 4.2).

2.3 Object detection task
3 Dataset construction at large scale

Our process of constructing large-scale object recognition image datasets consists of three key steps.

The first step is defining the set of target object categories. To do this, we select from among the existing ImageNet (Deng et al., 2009) categories. By using WordNet as a backbone (Miller, 1995), ImageNet already takes care of disambiguating word meanings and of combining synonyms into the same object category. Since the selection of object categories needs to be done only once per challenge task, we use a combination of automatic heuristics and manual post-processing to create the list of target categories appropriate for each task. For example, for image classification we may include broader scene categories such as a type of beach, but for single-object localization and object detection we want to focus only on object categories which can be unambiguously localized in images (Sections 3.1.1 and 3.3.1).

The second step is collecting a diverse set of candidate images to represent the selected categories. We use both automatic and manual strategies on multiple search engines to do the image collection. The process is modified for the different ILSVRC tasks. For example, for object detection we focus our efforts on collecting scene-like images using generic queries such as "African safari" to find pictures likely to contain multiple animals in one scene (Section 3.3.2).

The third (and most challenging) step is annotating the millions of collected images to obtain a clean dataset. We carefully design crowdsourcing strategies targeted to each individual ILSVRC task. For example, the bounding box annotation system used for the localization and detection tasks consists of three distinct parts in order to include automatic crowdsourced quality control (Section 3.2.1). Annotating images fully with all target object categories (on a reasonable budget) for object detection requires an additional hierarchical image labeling system (Section 3.3.3).

We describe the data collection and annotation procedure for each of the ILSVRC tasks in order: image classification (Section 3.1), single-object localization (Section 3.2), and object detection (Section 3.3), focusing on the three key steps for each dataset.

3.1 Image classification dataset construction

The image classification task tests the ability of an algorithm to name the objects present in the image, without necessarily localizing them.

We describe the choices we made in constructing the ILSVRC image classification dataset: selecting the target object categories from ImageNet (Section 3.1.1), collecting a diverse set of candidate images by using multiple search engines and an expanded set of queries in multiple languages (Section 3.1.2), and finally filtering the millions of collected images using the carefully designed crowdsourcing strategy of ImageNet (Deng et al., 2009) (Section 3.1.3).

3.1.1 Defining object categories for the image classification dataset

The 1000 categories used for the image classification task were selected from the ImageNet (Deng et al., 2009) categories. The 1000 synsets are selected such that there is no overlap between synsets: for any synsets i and j, i is not an ancestor of j in the ImageNet hierarchy. These synsets are part of the larger hierarchy and may have children in ImageNet; however, for ILSVRC we do not consider their child subcategories. The synset hierarchy of ILSVRC can be thought of as a "trimmed" version of the complete ImageNet hierarchy. Figure 1 visualizes the diversity of the ILSVRC2012 object categories.
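The non-overlap constraint just described (no selected synset may be an ancestor of another selected synset) is easy to check mechanically. The following is a minimal sketch, not the actual selection code, and it assumes the hierarchy is available as a simple child-to-parent map with hypothetical synset names:

```python
from typing import Dict, Iterable, Optional, Set

def ancestors(synset: str, parent: Dict[str, Optional[str]]) -> Set[str]:
    """All ancestors of a synset under a child -> parent map (WordNet is in
    fact a DAG with possible multiple parents, but a single-parent map is
    enough to illustrate the check)."""
    result: Set[str] = set()
    node = parent.get(synset)
    while node is not None:
        result.add(node)
        node = parent.get(node)
    return result

def is_valid_selection(selected: Iterable[str], parent: Dict[str, Optional[str]]) -> bool:
    """True iff no selected synset is an ancestor of another selected synset."""
    chosen = set(selected)
    return all(ancestors(s, parent).isdisjoint(chosen) for s in chosen)

# Hypothetical toy hierarchy: "dog" is an ancestor of "whippet".
parent_map = {"whippet": "dog", "dog": "animal", "animal": None,
              "tabby": "cat", "cat": "animal"}
assert is_valid_selection(["whippet", "tabby"], parent_map)    # ok: leaf-level siblings
assert not is_valid_selection(["whippet", "dog"], parent_map)  # violates the constraint
```

A selection that passes this check corresponds to a "trimmed" hierarchy in the sense used above: every chosen synset is a leaf of the challenge taxonomy even if it has children in the full ImageNet hierarchy.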
Fig. 1 The diversity of data in the ILSVRC image classification and single-object localization tasks. For each of the eight
dimensions, we show example object categories along the range of that property. Object scale, number of instances and image
clutter for each object category are computed using the metrics defined in Section 3.2.2 and in Appendix B. The other properties
were computed by asking human subjects to annotate each of the 1000 object categories (Russakovsky et al., 2013).
The exact 1000 synsets used for the image classification and single-object localization tasks have changed over the years. There are 639 synsets which have been used in all five ILSVRC challenges so far. In the first year of the challenge, synsets were selected randomly from the available ImageNet synsets at the time, followed by manual filtering to make sure the object categories were not too obscure. With the introduction of the object localization challenge in 2011 there were 321 synsets that changed: categories such as "New Zealand beach" which were inherently difficult to localize were removed, and some new categories from ImageNet containing object localization annotations were added. In ILSVRC2012, 90 synsets were replaced with categories corresponding to dog breeds to allow for evaluation of more fine-grained object classification, as shown in Figure 2. The synsets have remained consistent since 2012. Appendix A provides the complete list of object categories used in ILSVRC2012-2014.

3.1.2 Collecting candidate images for the image classification dataset

Image collection for the ILSVRC classification task is the same as the strategy employed for constructing ImageNet (Deng et al., 2009). Training images are taken directly from ImageNet. Additional images are collected for ILSVRC using this strategy and randomly partitioned into the validation and test sets.

We briefly summarize the process; (Deng et al., 2009) contains further details. Candidate images are collected from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of retrievable images (on the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the gloss of the target synset. For example, when querying "whippet", according to WordNet's gloss a "small slender dog of greyhound type developed in England", we also use "whippet dog" and "whippet greyhound." To further enlarge and diversify the candidate pool, we translate the queries into other languages, including Chinese, Spanish, Dutch and Italian. We obtain accurate translations using WordNets in those languages.

3.1.3 Image classification dataset annotation

Annotating images with corresponding object classes follows the strategy employed by ImageNet (Deng et al., 2009). We summarize it briefly here.

To collect a highly accurate dataset, we rely on humans to verify each candidate image collected in the previous step for a given synset. This is achieved by using Amazon Mechanical Turk (AMT), an online platform on which one can put up tasks for users to complete for a monetary reward. With a global user base, AMT is particularly suitable for large-scale labeling. In each of our labeling tasks, we present the users with a set of candidate images and the definition of the target synset (including a link to Wikipedia). We then ask the users to verify whether each image contains objects of the synset. We encourage users to select images regardless of occlusions, number of objects and clutter in the scene to ensure diversity.

While users are instructed to make accurate judgments, we need to set up a quality control system to ensure this accuracy. There are two issues to consider. First, human users make mistakes and not all users follow the instructions. Second, users do not always agree with each other, especially for more subtle or confusing synsets, typically at the deeper levels of the tree. The solution to these issues is to have multiple users independently label the same image. An image is considered positive only if it gets a convincing majority of the votes. We observe, however, that different categories require different levels of consensus among users. For example, while five users might be necessary for obtaining a good consensus on Burmese cat images, a much smaller number is needed for cat images. We develop a simple algorithm to dynamically determine the number of agreements needed for different categories of images. For each synset, we first randomly sample an initial subset of images. At least 10 users are asked to vote on each of these images. We then obtain a confidence score table, indicating the probability of an image being a good image given the consensus among user votes. For each of the remaining candidate images in this synset, we proceed with the AMT user labeling until a predetermined confidence score threshold is reached.

Empirical evaluation. Evaluation of the accuracy of the large-scale crowdsourced image annotation system was done on the entire ImageNet (Deng et al., 2009). A total of 80 synsets were randomly sampled at every tree depth of the mammal and vehicle subtrees. An independent group of subjects verified the correctness of each of the images. An average of 99.7% precision is achieved across the synsets. We expect similar accuracy on the ILSVRC image classification dataset, since the image annotation pipeline has remained the same. To verify, we manually checked 1500 ILSVRC2012-2014 image classification test set images (the test set has remained unchanged in these three years). We found 5 annotation errors, corresponding as expected to 99.7% precision.
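The adaptive consensus procedure described earlier in this section can be sketched as follows. This is an illustrative reconstruction rather than the actual annotation code; in particular, the way the confidence table is estimated from the seed votes and the 0.95 stopping threshold are assumptions made for the example.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

def build_confidence_table(seed_votes: List[List[bool]]) -> Dict[Tuple[int, int], float]:
    """Estimate P(image is a true positive | #yes, #no votes so far) from a
    seed set of images that each received >= 10 independent votes.  The
    final majority over the full seed votes is used here as a stand-in for
    the verified outcome (an assumption for this sketch)."""
    counts = defaultdict(lambda: [0, 0])  # (yes, no) -> [positives, total]
    for votes in seed_votes:
        is_positive = sum(votes) > len(votes) - sum(votes)
        for k in range(1, len(votes) + 1):
            yes = sum(votes[:k])
            key = (yes, k - yes)
            counts[key][0] += int(is_positive)
            counts[key][1] += 1
    return {k: pos / tot for k, (pos, tot) in counts.items() if tot > 0}

def label_image(get_vote: Callable[[], bool],
                confidence: Dict[Tuple[int, int], float],
                threshold: float = 0.95,
                max_votes: int = 10) -> bool:
    """Keep requesting AMT votes for one candidate image until the confidence
    table says the consensus is convincing enough (or the vote budget runs out)."""
    yes = no = 0
    while yes + no < max_votes:
        if get_vote():
            yes += 1
        else:
            no += 1
        conf_pos = confidence.get((yes, no), 0.5)
        if conf_pos >= threshold:
            return True            # accept image as containing the synset
        if 1.0 - conf_pos >= threshold:
            return False           # reject image
    return yes > no                # fall back to a simple majority
```

The key property of this scheme is that easy categories stop after very few votes, while confusing, fine-grained synsets automatically collect more votes before a decision is made.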
3.1.4 Image classification dataset statistics

Using the image collection and annotation procedure described in the previous sections, we collected a large-scale dataset for the ILSVRC classification task. There are 1000 object classes and approximately 1.2 million training images, 50 thousand validation images and 100 thousand test images. Table 2 (top) documents the size of the dataset over the years of the challenge.

Fig. 2  (Figure showing example images of bird, cat, and dog categories in PASCAL versus ILSVRC.) The ILSVRC dataset contains many more fine-grained classes compared to the standard PASCAL VOC benchmark; for example, instead of the PASCAL "dog" category there are 120 different breeds of dogs in the ILSVRC2012-2014 classification and single-object localization tasks.

3.2 Single-object localization dataset construction

The single-object localization task evaluates the ability of an algorithm to localize one instance of an object category. It was introduced as a taster task in ILSVRC 2011, and became an official part of ILSVRC in 2012.

The key challenge was developing a scalable crowdsourcing method for object bounding box annotation. Our three-step self-verifying pipeline is described in Section 3.2.1. Having the dataset collected, we perform detailed analysis in Section 3.2.2 to ensure that the dataset is sufficiently varied to be suitable for evaluation of object localization algorithms.

Object classes and candidate images. The object classes for the single-object localization task are the same as the object classes for the image classification task described above in Section 3.1. The training images for the localization task are a subset of the training images used for the image classification task, and the validation and test images are the same between both tasks.

Bounding box annotation. Recall that for the image classification task every image was annotated with one object class label, corresponding to one object that is present in an image. For the single-object localization task, every validation and test image and a subset of the training images are annotated with axis-aligned bounding boxes around every instance of this object.

Every bounding box is required to be as small as possible while including all visible parts of the object instance. An alternate annotation procedure could be to annotate the full (estimated) extent of the object: e.g., if a person's legs are occluded and only the torso is visible, the bounding box could be drawn to include the likely location of the legs. However, this alternative procedure is inherently ambiguous and ill-defined, leading to disagreement among annotators and among researchers (what is the true "most likely" extent of this object?). We follow the standard protocol of only annotating visible object parts (Russell et al., 2007; Everingham et al., 2010).[5]

[5] Some datasets such as PASCAL VOC (Everingham et al., 2010) and LabelMe (Russell et al., 2007) are able to provide more detailed annotations: for example, marking individual object instances as being truncated. We chose not to provide this level of detail in favor of annotating more images and more object instances.

3.2.1 Bounding box object annotation system

We summarize the crowdsourced bounding box annotation system described in detail in (Su et al., 2012). The goal is to build a system that is fully automated,
Table 2  Scale of the ILSVRC image classification task (top) and single-object localization task (bottom). The numbers in parentheses correspond to (minimum per class - maximum per class). The 1000 classes change from year to year but are consistent between the image classification and single-object localization tasks in the same year. All images from the image classification task may be used for single-object localization.

Image classification:
Year          | Train images (per class) | Val images (per class) | Test images (per class)
ILSVRC2010    | 1,261,406 (668-3047)     | 50,000 (50)            | 150,000 (150)
ILSVRC2011    | 1,229,413 (384-1300)     | 50,000 (50)            | 100,000 (100)
ILSVRC2012-14 | 1,281,167 (732-1300)     | 50,000 (50)            | 100,000 (100)

Single-object localization:
Year          | Train images with bbox annotations (per class) | Train bboxes annotated (per class) | Val images with bbox annotations (per class) | Val bboxes annotated (per class) | Test images with bbox annotations
ILSVRC2011    | 315,525 (104-1256) | 344,233 (114-1502) | 50,000 (50) | 55,388 (50-118) | 100,000
ILSVRC2012-14 | 523,966 (91-1268)  | 593,173 (92-1418)  | 50,000 (50) | 64,058 (50-189) | 100,000

highly accurate, and cost-effective. Given a collection of images where the object of interest has been verified to exist, for each image the system collects a tight bounding box for every instance of the object.

There are two requirements:

– Quality: Each bounding box needs to be tight, i.e. the smallest among all bounding boxes that contain all visible parts of the object. This facilitates the object detection learning algorithms by providing the precise location of each object instance;
– Coverage: Every object instance needs to have a bounding box. This is important for training localization algorithms because it tells the learning algorithms with certainty what is not the object.

The core challenge of building such a system is effectively controlling the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost-effective than consensus-based algorithms. This leads to the following workflow with simple basic subtasks (sketched in code after this list):

1. Drawing: A worker draws one bounding box around one instance of an object on the given image.
2. Quality verification: A second worker checks if the bounding box is correctly drawn.
3. Coverage verification: A third worker checks if all object instances have bounding boxes.

The sub-tasks are designed following two principles. First, the tasks are made as simple as possible. For example, instead of asking the worker to draw all bounding boxes on the same image, we ask the worker to draw only one. This reduces the complexity of the task. Second, each task has a fixed and predictable amount of work. For example, assuming that the input images are clean (object presence is correctly verified) and the coverage verification tasks give correct results, the amount of work of the drawing task is always that of providing exactly one bounding box.

Quality control on Tasks 2 and 3 is implemented by embedding "gold standard" images where the correct answer is known. Worker training for each of these subtasks is described in detail in (Su et al., 2012).

Empirical evaluation. The system is evaluated on 10 categories from ImageNet (Deng et al., 2009): balloon, bear, bed, bench, beach, bird, bookshelf, basketball hoop, bottle, and people. A subset of 200 images is randomly sampled from each category. On the image level, our evaluation shows that 97.9% of images are completely covered with bounding boxes. For the remaining 2.1%, some bounding boxes are missing. However, these are all difficult cases: the size is too small, the boundary is blurry, or there is strong shadow.

On the bounding box level, 99.2% of all bounding boxes are accurate (the bounding boxes are visibly tight). The remaining 0.8% are somewhat off. No bounding boxes are found to have less than 50% intersection over union overlap with ground truth.

Additional evaluation of the overall cost and an analysis of quality control can be found in (Su et al., 2012).
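The three subtasks compose into a simple per-image loop: draw one box, verify its quality, and check coverage, repeating until every instance is boxed. A minimal sketch of that control flow is given below; the worker interactions are abstracted into callback functions, which (together with the box format and the round budget) are assumptions made for illustration and not part of the actual system.

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def annotate_image(draw_one_box: Callable[[List[Box]], Box],
                   quality_ok: Callable[[Box], bool],
                   all_instances_covered: Callable[[List[Box]], bool],
                   max_rounds: int = 50) -> List[Box]:
    """Collect tight bounding boxes for every instance of one object class on
    one image, using the three crowdsourced subtasks described above.

    draw_one_box(existing)       -- Task 1: a worker draws one new box.
    quality_ok(box)              -- Task 2: a second worker verifies tightness.
    all_instances_covered(boxes) -- Task 3: a third worker checks coverage.
    """
    boxes: List[Box] = []
    for _ in range(max_rounds):                 # bounded loop for safety
        if boxes and all_instances_covered(boxes):
            break                               # coverage verified: done
        candidate = draw_one_box(boxes)         # exactly one box per drawing task
        if quality_ok(candidate):               # only quality-verified boxes are kept
            boxes.append(candidate)
        # a rejected box simply triggers another drawing task
    return boxes
```

Each pass through the loop costs one drawing task plus cheap multiple-choice verifications, which is what makes the workflow predictable in cost.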
3.2.2 Single-object localization dataset statistics

Using the annotation procedure described above, we collect a large set of bounding box annotations for the ILSVRC single-object localization task. All 50 thousand images in the validation set and 100 thousand images in the test set are annotated with bounding boxes around all instances of the ground truth object class (one object class per image). In addition, in ILSVRC2011 25% of training images are annotated with bounding boxes the same way, yielding more than 310 thousand annotated images with more than 340 thousand annotated object instances. In ILSVRC2012 40% of training images are annotated, yielding more than 520 thousand annotated images with more than 590 thousand annotated object instances. Table 2 (bottom) documents the size of this dataset.

In addition to the size of the dataset, we also analyze the level of difficulty of object localization in these images compared to the PASCAL VOC benchmark. We compute statistics on the ILSVRC2012 single-object localization validation set images compared to PASCAL VOC 2012 validation images.

Real-world scenes are likely to contain multiple instances of some objects, and nearby object instances are particularly difficult to delineate. The average object category in ILSVRC has 1.61 target object instances on average per positive image, with each instance having on average 0.47 neighbors (adjacent instances of the same object category). This is comparable to 1.69 instances per positive image and 0.52 neighbors per instance for an average object class in PASCAL.

As described in (Hoiem et al., 2012), smaller objects tend to be significantly more difficult to localize. In the average object category in PASCAL the object occupies 24.1% of the image area, and in ILSVRC 35.8%. However, PASCAL has only 20 object categories while ILSVRC has 1000. The 537 object categories of ILSVRC with the smallest objects on average occupy the same fraction of the image as PASCAL objects: 24.1%. Thus even though on average the object instances tend to be bigger in ILSVRC images, there are more than 25 times more object categories than in PASCAL VOC with the same average object scale.

Appendix B and (Russakovsky et al., 2013) have additional comparisons.

Table 3  Correspondences between the object classes in PASCAL VOC (Everingham et al., 2010) and the ILSVRC detection task. Object scale is the fraction of image area (reported in percent) occupied by an object instance. It is computed on the validation sets of PASCAL VOC 2012 and of ILSVRC-DET. The average object scale is 24.1% across the 20 PASCAL VOC categories and 20.3% across the 20 corresponding ILSVRC-DET categories. Section 3.3.4 reports additional dataset statistics.

Class name in PASCAL VOC (20 classes) | Closest class in ILSVRC-DET (200 classes) | Avg object scale (%), PASCAL VOC | Avg object scale (%), ILSVRC-DET
aeroplane | airplane | 29.7 | 22.4
bicycle | bicycle | 29.3 | 14.3
bird | bird | 15.9 | 20.1
boat | watercraft | 15.2 | 16.5
bottle | wine bottle | 7.3 | 10.4
bus | bus | 29.9 | 22.1
car | car | 14.0 | 13.4
cat | domestic cat | 46.8 | 29.8
chair | chair | 12.8 | 10.1
cow | cattle | 19.3 | 13.5
dining table | table | 29.1 | 30.3
dog | dog | 37.0 | 28.9
horse | horse | 29.5 | 18.5
motorbike | motorcycle | 32.0 | 20.7
person | person | 17.5 | 19.3
potted plant | flower pot | 12.3 | 8.1
sheep | sheep | 12.2 | 17.3
sofa | sofa | 41.7 | 44.4
train | train | 35.4 | 35.1
tv/monitor | tv or monitor | 14.6 | 11.2

3.3 Object detection dataset construction

The second challenge is obtaining a much more varied set of scene images than those used for the image classification and single-object localization datasets. Section 3.3.2 describes the procedure for utilizing as much data from the single-object localization dataset as possible and supplementing it with Flickr images queried using hundreds of manually designed high-level queries.
Fig. 4  Random selection of images in the ILSVRC detection validation set. The images in the top 4 rows were taken from the ILSVRC2012 single-object localization validation set, and the images in the bottom 4 rows were collected from Flickr using scene-level queries.

…advantage of all the positive examples available. The second source (24%) is negative images which were part of the original ImageNet collection process but voted as negative: for example, some of the images were collected from Flickr and search engines for the ImageNet synset "animals" but during the manual verification step did not collect enough votes to be considered as containing an "animal." These images were manually re-verified for the detection task to ensure that they did not in fact contain the target objects. The third source (13%) is images collected from Flickr specifically for the detection task. These images were added for ILSVRC2014 following the same protocol as the second type of images in the validation and test set. This was done to bring the training and testing distributions closer together.
(Fragment of the hierarchical object-presence labeling algorithm: input — an image i, a set of queries Q, and a directed graph G over Q; output — labels L : Q → {"yes", "no"}.)
Table 4  Scale of the ILSVRC object detection task. Numbers in parentheses correspond to (minimum per class - median per class - maximum per class).

Year       | Train images (per class)                            | Train bboxes annotated (per class) | Val images (per class)             | Val bboxes annotated (per class) | Test images
ILSVRC2013 | 395909 (417-561-66911 pos, 185-4130-10073 neg)      | 345854 (438-660-73799)             | 21121 (23-58-5791 pos, rest neg)   | 55501 (31-111-12824)             | 40152
ILSVRC2014 | 456567 (461-823-67513 pos, 42945-64614-70626 neg)   | 478807 (502-1008-74517)            | 21121 (23-58-5791 pos, rest neg)   | 55501 (31-111-12824)             | 40152
We elaborate further on these and other more minor challenges with large-scale evaluation. Appendix F describes the submission protocol and other details of running the competition itself.

4.1 Image classification

The scale of the ILSVRC classification task (1000 categories and more than a million images) makes it very expensive to label every instance of every object in every image. Therefore, on this dataset only one object category is labeled in each image. This creates ambiguity in evaluation. For example, an image might be labeled as a "strawberry" but contain both a strawberry and an apple. Then an algorithm would not know which one of the two objects to name. For the image classification task we allowed an algorithm to identify multiple (up to 5) objects in an image and not be penalized as long as one of the objects indeed corresponded to the ground truth label. Figure 7 (top row) shows some examples.

Concretely, each image i has a single class label Ci. An algorithm is allowed to return 5 labels ci1, ..., ci5, and is considered correct if cij = Ci for some j.

Let the error of a prediction dij = d(cij, Ci) be 1 if cij ≠ Ci and 0 otherwise. The error of an algorithm is the fraction of test images on which the algorithm makes a mistake:

\[ \text{error} = \frac{1}{N} \sum_{i=1}^{N} \min_j d_{ij} \tag{1} \]

We used two additional measures of error. First, we evaluated top-1 error. In this case algorithms were penalized if their highest-confidence output label ci1 did not match the ground truth class Ci. Second, we evaluated hierarchical error. The intuition is that confusing two nearby classes (such as two different breeds of dogs) is not as harmful as confusing a dog for a container ship. For the hierarchical criterion, the cost of one misclassification, d(cij, Ci), is defined as the height of the lowest common ancestor of cij and Ci in the ImageNet hierarchy. The height of a node is the length of the longest path to a leaf node (leaf nodes have height zero).

However, in practice we found that all three measures of error (top-5, top-1, and hierarchical) produced the same ordering of results. Thus, since ILSVRC2012 we have been exclusively using the top-5 metric, which is the simplest and most suitable to the dataset.

4.2 Single-object localization

The evaluation for single-object localization is similar to object classification, again using a top-5 criterion to allow the algorithm to return unannotated object classes without penalty. However, now the algorithm is considered correct only if it both correctly identifies the target class Ci and accurately localizes one of its instances. Figure 7 (middle row) shows some examples.

Concretely, an image is associated with object class Ci, with all instances of this object class annotated with bounding boxes Bik. An algorithm returns a list {(cij, bij)}, j = 1, ..., 5, of class labels cij and associated locations bij. The error of a prediction j is:

\[ d_{ij} = \max\Big( d(c_{ij}, C_i),\ \min_k d(b_{ij}, B_{ik}) \Big) \tag{2} \]

Here d(bij, Bik) is the localization error, defined as 0 if the area of intersection of boxes bij and Bik divided by the area of their union is greater than 0.5, and 1 otherwise (Everingham et al., 2010). The error of an algorithm is computed as in Eq. 1.

Evaluating localization is inherently difficult in some images. Consider a picture of a bunch of bananas or a carton of apples. It is easy to classify these images as containing bananas or apples, and even possible to localize a few instances of each fruit. However, in order for evaluation to be accurate every instance of banana or apple needs to be annotated, and that may be impossible. To handle images where localizing individual object instances is inherently ambiguous, we have manually discarded 3.5% of images since ILSVRC2012. Some examples of discarded images are shown in Figure 8.
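The two error measures defined by Eqs. 1 and 2 are straightforward to compute. The following is a minimal sketch, with assumed data structures rather than the actual evaluation server code, of the per-image top-5 classification error and top-5 localization error, averaged over images as in Eq. 1:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def classification_error(pred_labels: List[int], true_label: int) -> int:
    """Top-5 flat classification error for one image: 0 if any of the (up to 5)
    predicted labels equals the ground truth label, else 1."""
    return 0 if true_label in pred_labels[:5] else 1

def localization_error(preds: List[Tuple[int, Box]],
                       true_label: int,
                       true_boxes: List[Box]) -> int:
    """Top-5 localization error for one image (Eq. 2): a prediction counts only
    if its label matches AND its box overlaps some ground truth instance with
    IOU greater than 0.5."""
    for label, box in preds[:5]:
        if label == true_label and any(iou(box, gt) > 0.5 for gt in true_boxes):
            return 0
    return 1

def dataset_error(per_image_errors: List[int]) -> float:
    """Eq. 1: fraction of test images on which the algorithm makes a mistake."""
    return sum(per_image_errors) / len(per_image_errors)
```

The top-1 and hierarchical variants described above differ only in how the per-prediction cost d(cij, Ci) is defined; the averaging over images in Eq. 1 stays the same.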
Fig. 7  Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation score.

Fig. 8  Images marked as "difficult" in the ILSVRC2012 single-object localization validation set. Please refer to Section 4.2 for details.

4.3 Object detection

The criteria for object detection were adopted from PASCAL VOC (Everingham et al., 2010). They are designed to penalize the algorithm for missing object instances, for duplicate detections of one instance, and for false positive detections. Figure 7 (bottom row) shows examples.

For each object class and each image Ii, an algorithm returns predicted detections (bij, sij) of predicted locations bij with confidence scores sij. These detections are greedily matched to the ground truth boxes {Bik} using Algorithm 2. For every detection j on image i the algorithm returns zij = 1 if the detection is matched to a ground truth box according to the threshold criteria, and 0 otherwise. For a given object class, let N be the total number of ground truth instances across all images. Given a threshold t, define recall as the fraction of the N objects detected by the algorithm, and precision as the fraction of correct detections out of the total detections returned by the algorithm. Concretely,

\[ \text{Recall}(t) = \frac{\sum_{ij} \mathbf{1}[s_{ij} \ge t]\, z_{ij}}{N} \tag{3} \]

\[ \text{Precision}(t) = \frac{\sum_{ij} \mathbf{1}[s_{ij} \ge t]\, z_{ij}}{\sum_{ij} \mathbf{1}[s_{ij} \ge t]} \tag{4} \]
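Algorithm 2, reproduced below, gives the greedy matching procedure in pseudocode. As a complement, here is a minimal Python sketch — not the official evaluation code — of the matching for one image and one class, and of the pooled recall and precision of Eqs. 3 and 4. Box representations and function names are illustrative assumptions; the per-ground-truth threshold follows Eq. 5 (introduced below), and replacing it with a constant 0.5 recovers the PASCAL VOC criterion.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def thr(b: Box) -> float:
    """Size-adaptive IOU threshold of Eq. 5 (equals 0.5 for all but small boxes)."""
    w, h = b[2] - b[0], b[3] - b[1]
    return min(0.5, (w * h) / ((w + 10.0) * (h + 10.0)))

def match_detections(dets: List[Tuple[Box, float]],
                     gts: List[Box]) -> List[Tuple[float, int]]:
    """Greedy matching in the spirit of Algorithm 2.

    dets: (box, confidence score) pairs for one image and one object class.
    gts:  ground truth boxes for that image and class.
    Returns (score, z) pairs where z = 1 marks a true positive detection.
    """
    unmatched = list(range(len(gts)))
    results = []
    for box, score in sorted(dets, key=lambda d: -d[1]):   # descending confidence
        candidates = [k for k in unmatched if iou(gts[k], box) >= thr(gts[k])]
        if candidates:
            best = max(candidates, key=lambda k: iou(gts[k], box))
            unmatched.remove(best)        # each ground truth is matched at most once
            results.append((score, 1))    # true positive
        else:
            results.append((score, 0))    # false positive or duplicate detection
    return results

def precision_recall(all_results: List[Tuple[float, int]],
                     n_gt: int, t: float) -> Tuple[float, float]:
    """Precision and recall at score threshold t (Eqs. 3 and 4), pooled over images."""
    kept = [z for s, z in all_results if s >= t]
    tp = sum(kept)
    recall = tp / n_gt if n_gt else 0.0
    precision = tp / len(kept) if kept else 0.0
    return precision, recall
```

Sweeping the threshold t over the returned confidence scores and averaging precision over the resulting recall levels yields the per-class average precision used to rank teams, as described next.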
Input: Bounding box predictions with confidence scores {(bj, sj)}, j = 1, ..., M, and ground truth boxes B on image I for a given object class.
Output: Binary results {zj}, j = 1, ..., M, indicating whether or not prediction j is a true positive detection.

    Let U = B be the set of unmatched objects
    Order {(bj, sj)} in descending order of sj
    for j = 1 ... M do
        Let C = {Bk ∈ U : IOU(Bk, bj) ≥ thr(Bk)}
        if C ≠ ∅ then
            Let k* = arg max_{k : Bk ∈ C} IOU(Bk, bj)
            Set U = U \ {Bk*}
            Set zj = 1 (true positive detection)
        else
            Set zj = 0 (false positive detection)
        end
    end

Algorithm 2: The algorithm for greedily matching object detection outputs to ground truth labels. The standard threshold is thr(Bk) = 0.5 (Everingham et al., 2010). ILSVRC computes thr(Bk) using Eq. 5 to better handle low-resolution objects.

The final metric for evaluating an algorithm on a given object class is average precision over the different levels of recall achieved by varying the threshold t. The winner of each object class is then the team with the highest average precision, and the winner of the challenge is the team that wins on the most object classes.[8]

[8] In this paper we focus on the mean average precision across all categories as the measure of a team's performance. This is done for simplicity and is justified since the ordering of teams by mean average precision was always the same as the ordering by object categories won.

Difference with PASCAL VOC. Evaluating localization of object instances which occupy very few pixels in the image is challenging. The PASCAL VOC approach was to label such instances as "difficult" and ignore them during evaluation. However, since ILSVRC contains a more diverse set of object classes, including, for example, "nail" and "ping pong ball" which have many very small instances, it is important to include even very small object instances in evaluation.

In Algorithm 2, a predicted bounding box b is considered to have properly localized a ground truth bounding box B if IOU(b, B) ≥ thr(B). The PASCAL VOC metric uses the threshold thr(B) = 0.5. However, for small objects even deviations of a few pixels would be unacceptable according to this threshold. For example, consider an object B of size 10 × 10 pixels, with a detection window of 20 × 20 pixels which fully contains that object. This would be an error of approximately 5 pixels in each dimension, which is average human annotation error. However, the IOU in this case would be 100/400 = 0.25, far below the threshold of 0.5. Thus for smaller objects we loosen the threshold in ILSVRC to allow for the annotation to extend up to 5 pixels on average in each direction around the object. Concretely, if the ground truth box B is of dimensions w × h then

\[ \text{thr}(B) = \min\left( 0.5,\ \frac{wh}{(w+10)(h+10)} \right) \tag{5} \]

In practice, this changes the threshold only on objects which are smaller than approximately 25 × 25 pixels, and affects 5.5% of objects in the detection validation set.

Practical consideration. One additional practical consideration for ILSVRC detection evaluation is subtle and comes directly as a result of the scale of ILSVRC. In PASCAL, algorithms would often return many detections per class on the test set, including ones with low confidence scores. This allowed the algorithms to reach high recall, at least in the realm of very low precision. On the ILSVRC detection test set, if an algorithm returns 10 bounding boxes per object per image this would result in 10 × 200 × 40K = 80M detections. Each detection contains an image index, a class index, 4 bounding box coordinates, and the confidence score, so it takes on the order of 28 bytes. The full set of detections would then require 2.24 GB to store and submit to the evaluation server, which is impractical. This means that algorithms are implicitly required to limit their predictions to only the most confident locations.

5 Methods

The ILSVRC dataset and the competition have allowed significant algorithmic advances in large-scale image recognition and retrieval.

5.1 Challenge entries

This section is organized chronologically, highlighting the particularly innovative and successful methods which participated in the ILSVRC each year. Tables 5, 6 and 7 list all the participating teams. We see a turning point in 2012 with the development of large-scale convolutional neural networks.

ILSVRC2010. The first year the challenge consisted of just the classification task. The winning entry from the NEC team (Lin et al., 2011) used SIFT (Lowe, 2004) and LBP (Ahonen et al., 2006) features with two non-linear coding representations (Zhou et al., 2010; Wang et al., 2010) and a stochastic SVM. The honorable mention XRCE team (Perronnin et al., 2010) used an improved Fisher vector representation (Perronnin and Dance, 2007) along with PCA dimensionality reduction and data compression followed by a linear SVM. Fisher vector-based methods have evolved over five years of the challenge and continued performing strongly in every ILSVRC from 2010 to 2014.

ILSVRC2011. The winning classification entry in 2011 was the 2010 runner-up team XRCE, applying high-dimensional image signatures (Perronnin et al., 2010) with compression using product quantization (Sanchez and Perronnin, 2011) and one-vs-all linear SVMs. The single-object localization competition was held for the first time, with two brave entries. The winner was the UvA team, using a selective search approach to generate class-independent object hypothesis regions (van de Sande et al., 2011b), followed by dense sampling and vector quantization of several color SIFT features (van de Sande et al., 2010), pooling with spatial pyramid matching (Lazebnik et al., 2006), and classifying with a histogram intersection kernel SVM (Maji and Malik, 2009) trained on a GPU (van de Sande et al., 2011a).

ILSVRC2012. This was a turning point for large-scale object recognition, when large-scale deep neural networks entered the scene. The undisputed winner of both the classification and localization tasks in 2012 was the SuperVision team. They trained a large, deep convolutional neural network on RGB values, with 60 million parameters, using an efficient GPU implementation and a novel hidden-unit dropout trick (Krizhevsky et al., 2012; Hinton et al., 2012). The second place in image classification went to the ISI team, which used Fisher vectors (Sanchez and Perronnin, 2011) and a streamlined version of Graphical Gaussian Vectors (Harada and Kuniyoshi, 2012), along with linear classifiers using the Passive-Aggressive (PA) algorithm (Crammer et al., 2006). The second place in single-object localization went to VGG, with an image classification system including dense SIFT features and color statistics (Lowe, 2004), a Fisher vector representation (Sanchez and Perronnin, 2011), and a linear SVM classifier, plus additional insights from (Arandjelovic and Zisserman, 2012; Sanchez et al., 2012). Both ISI and VGG used (Felzenszwalb et al., 2010) for object localization; SuperVision used a regression model trained to predict bounding box locations. Despite the weaker detection model, SuperVision handily won the object localization task. A detailed analysis and comparison of the SuperVision and VGG submissions on the single-object localization task can be found in (Russakovsky et al., 2013).

The influence of the success of the SuperVision model can be clearly seen in ILSVRC2013 and ILSVRC2014.

ILSVRC2013. There were 24 teams participating in the ILSVRC2013 competition, compared to 21 in the previous three years combined. Following the success of the deep learning-based method in 2012, the vast majority of entries in 2013 used deep convolutional neural networks in their submission. The winner of the classification task was Clarifai, with several large deep convolutional networks averaged together. The network architectures were chosen using the visualization technique of (Zeiler and Fergus, 2013), and they were trained on the GPU following (Zeiler et al., 2011) using the dropout technique (Krizhevsky et al., 2012).

The winning single-object localization OverFeat submission was based on an integrated framework for using convolutional networks for classification, localization and detection with a multiscale sliding window approach (Sermanet et al., 2013). They were the only team tackling all three tasks.

The winner of the object detection task was the UvA team, which utilized a new way of efficiently encoding (van de Sande et al., 2014) densely sampled color descriptors (van de Sande et al., 2010), pooled using a multi-level spatial pyramid in a selective search framework (Uijlings et al., 2013). The detection results were rescored using a full-image convolutional network classifier.

ILSVRC2014. 2014 attracted the most submissions, with 36 teams submitting 123 entries compared to just 24 teams in 2013 – a 1.5x increase in participation.[9] As in 2013, almost all teams used convolutional neural networks as the basis for their submission. Significant progress has been made in just one year: image classification error was almost halved since ILSVRC2013 and object detection mean average precision almost doubled compared to ILSVRC2013. Please refer to Section 6.1 for details.

[9] Table 7 omits 4 teams which submitted results but chose not to officially participate in the challenge.

In 2014 teams were allowed to use outside data for training their models in the competition, so there were six tracks: provided and outside data tracks in each of image classification, single-object localization, and object detection tasks.

The winning image classification with provided data team was GoogLeNet, which explored an improved convolutional neural network architecture combining the multi-scale idea with intuitions gained from the Hebbian principle. Additional dimension reduction layers allowed them to increase both the depth and the width
ILSVRC 2010
Codename | CLS | Institutions | Contributors and references
Hminmax | 54.4 | Massachusetts Institute of Technology | Jim Mutch, Sharat Chikkerur, Hristo Paskov, Ruslan Salakhutdinov, Stan Bileschi, Hueihan Jhuang
IBM | 70.1 | IBM Research†, Georgia Tech‡ | Lexing Xie†, Hua Ouyang‡, Apostol Natsev†
ISIL | 44.6 | Intelligent Systems and Informatics Lab., The University of Tokyo | Tatsuya Harada, Hideki Nakayama, Yoshitaka Ushiku, Yuya Yamashita, Jun Imura, Yasuo Kuniyoshi
ITNLP | 78.7 | Harbin Institute of Technology | Deyuan Zhang, Wenfeng Xuan, Xiaolong Wang, Bingquan Liu, Chengjie Sun
NEC | 28.2 | NEC Labs America†, University of Illinois at Urbana-Champaign‡, Rutgers∓ | Yuanqing Lin†, Fengjun Lv†, Shenghuo Zhu†, Ming Yang†, Timothee Cour†, Kai Yu†, LiangLiang Cao‡, Zhen Li‡, Min-Hsuan Tsai‡, Xi Zhou‡, Thomas Huang‡, Tong Zhang∓ (Lin et al., 2011)
NII | 74.2 | National Institute of Informatics, Tokyo, Japan†, Hefei Normal Univ., Hefei, China‡ | Cai-Zhi Zhu†, Xiao Zhou‡, Shin'ichi Satoh†
NTU | 58.3 | CeMNet, SCE, NTU, Singapore | Zhengxiang Wang, Liang-Tien Chia
UCI | 46.6 | University of California Irvine | Hamed Pirsiavash, Deva Ramanan, Charless Fowlkes
XRCE | 33.6 | Xerox Research Centre Europe | Jorge Sanchez, Florent Perronnin, Thomas Mensink (Perronnin et al., 2010)

ILSVRC 2011
Codename | CLS | LOC | Institutions | Contributors and references
ISI | 36.0 | - | Intelligent Systems and Informatics Lab, University of Tokyo | Tatsuya Harada, Asako Kanezaki, Yoshitaka Ushiku, Yuya Yamashita, Sho Inaba, Hiroshi Muraoka, Yasuo Kuniyoshi
NII | 50.5 | - | National Institute of Informatics, Japan | Duy-Dinh Le, Shin'ichi Satoh
UvA | 31.0 | 42.5 | University of Amsterdam†, University of Trento‡ | Koen E. A. van de Sande†, Jasper R. R. Uijlings‡, Arnold W. M. Smeulders†, Theo Gevers†, Nicu Sebe‡, Cees Snoek† (van de Sande et al., 2011b)
XRCE | 25.8 | 56.5 | Xerox Research Centre Europe†, CIII‡ | Florent Perronnin†, Jorge Sanchez†‡ (Sanchez and Perronnin, 2011)

ILSVRC 2012
Codename | CLS | LOC | Institutions | Contributors and references
ISI | 26.2 | 53.6 | University of Tokyo†, JST PRESTO‡ | Naoyuki Gunji†, Takayuki Higuchi†, Koki Yasumoto†, Hiroshi Muraoka†, Yoshitaka Ushiku†, Tatsuya Harada†‡, Yasuo Kuniyoshi† (Harada and Kuniyoshi, 2012)
LEAR | 34.5 | - | LEAR INRIA Grenoble†, TVPA Xerox Research Centre Europe‡ | Thomas Mensink†‡, Jakob Verbeek†, Florent Perronnin‡, Gabriela Csurka‡ (Mensink et al., 2012)
VGG | 27.0 | 50.0 | University of Oxford | Karen Simonyan, Yusuf Aytar, Andrea Vedaldi, Andrew Zisserman (Arandjelovic and Zisserman, 2012; Sanchez et al., 2012)
SuperVision | 16.4 | 34.2 | University of Toronto | Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton (Krizhevsky et al., 2012)
UvA | 29.6 | - | University of Amsterdam | Koen E. A. van de Sande, Amir Habibian, Cees G. M. Snoek (Sanchez and Perronnin, 2011; Scheirer et al., 2012)
XRCE | 27.1 | - | Xerox Research Centre Europe†, LEAR INRIA‡ | Florent Perronnin†, Zeynep Akata†‡, Zaid Harchaoui‡, Cordelia Schmid‡ (Perronnin et al., 2012)

Table 5  Teams participating in ILSVRC2010-2012, ordered alphabetically. Each method is identified with a codename used in the text. We report flat top-5 classification and single-object localization error, in percent (lower is better). For teams which submitted multiple entries we report the best score. In 2012, SuperVision also submitted entries trained with the extra data from the ImageNet Fall 2011 release, and obtained 15.3% classification error and 33.5% localization error. Key references are provided where available. More details about the winning entries can be found in Section 5.1.
ILSVRC 2013
Codename CLS LOC DET Institutions Contributors and references
Adobe 15.2 - - Adobe† , University of Illinois at Urbana-Champaign‡ Hailin Jin† , Zhe Lin† , Jianchao Yang† , Tom Paine‡
(Krizhevsky et al., 2012)
AHoward 13.6 - - Andrew Howard Consulting Andrew Howard
BUPT 25.2 - - Beijing University of Posts and Telecommunications† , Orange Labs Chong Huang† , Yunlong Bian† , Hongliang Bai‡ , Bo Liu† , Yanchao Feng† , Yuan Dong†
International Center Beijing‡
decaf 19.2 - - University of California Berkeley Yangqing Jia, Jeff Donahue, Trevor Darrell
(Donahue et al., 2013)
Deep Punx 20.9 - - Saint Petersburg State University Evgeny Smirnov, Denis Timoshenko, Alexey Korolev
(Krizhevsky et al., 2012; Wan et al., 2013; Tang, 2013)
Delta - - 6.1 National Tsing Hua University Che-Rung Lee, Hwann-Tzong Chen, Hao-Ping Kang, Tzu-Wei Huang, Ci-Hong Deng, Hao-
Che Kao
IBM 20.7 - - University of Illinois at Urbana-Champaign† , IBM Watson Re- Zhicheng Yan† , Liangliang Cao‡ , John R Smith‡ , Noel Codella‡ ,Michele Merler‡ , Sharath
search Center‡ , IBM Haifa Research Center∓ Pankanti‡ , Sharon Alpert∓ , Yochay Tzur∓ ,
MIL 24.4 - - University of Tokyo Masatoshi Hidaka, Chie Kamada, Yusuke Mukuta, Naoyuki Gunji, Yoshitaka Ushiku, Tat-
suya Harada
Minerva 21.7 Peking University† , Microsoft Research‡ , Shanghai Jiao Tong Tianjun Xiao†‡ , Minjie Wang∓‡ , Jianpeng Li§‡ , Yalong Baiς ‡ , Jiaxing Zhang‡ , Kuiyuan
University∓ , XiDian University§ , Harbin Institute of Technologyς Yang‡ , Chuntao Hong‡ , Zheng Zhang‡
(Wang et al., 2014)
NEC - - 19.6 NEC Labs America† , University of Missouri ‡ Xiaoyu Wang† , Miao Sun‡ , Tianbao Yang† , Yuanqing Lin† , Tony X. Han‡ , Shenghuo Zhu†
(Wang et al., 2013)
NUS 13.0 National University of Singapore Min Lin*, Qiang Chen*, Jian Dong, Junshi Huang, Wei Xia, Shuicheng Yan (* = equal
contribution)
(Krizhevsky et al., 2012)
Orange 25.2 Orange Labs International Center Beijing† , Beijing University of Hongliang BAI† , Lezi Wang‡ , Shusheng Cen‡ , YiNan Liu‡ , Kun Tao† , Wei Liu† , Peng Li† ,
Posts and Telecommunications‡ Yuan Dong†
OverFeat 14.2 30.0 (19.4) New York University Pierre Sermanet, David Eigen, Michael Mathieu, Xiang Zhang, Rob Fergus, Yann LeCun
(Sermanet et al., 2013)
Quantum 82.0 - - Self-employed† , Student in Troy High School, Fullerton, CA‡ Henry Shu† , Jerry Shu‡
(Batra et al., 2013)
SYSU - - 10.5 Sun Yat-Sen University, China. Xiaolong Wang
(Felzenszwalb et al., 2010)
Toronto - - 11.5 University of Toronto Yichuan Tang*, Nitish Srivastava*, Ruslan Salakhutdinov (* = equal contribution)
Trimps 26.2 - - The Third Research Institute of the Ministry of Public Security, Jie Shao, Xiaoteng Zhang, Yanfeng Shang, Wenfei Wang, Lin Mei, Chuanping Hu
P.R. China
UCLA - - 9.8 University of California Los Angeles Yukun Zhu, Jun Zhu, Alan Yuille
UIUC - - 1.0 University of Illinois at Urbana-Champaign Thomas Paine, Kevin Shih, Thomas Huang
(Krizhevsky et al., 2012)
UvA 14.3 - 22.6 University of Amsterdam, Euvision Technologies Koen E. A. van de Sande, Daniel H. F. Fontijne, Cees G. M. Snoek, Harro M. G. Stokman,
Arnold W. M. Smeulders
(van de Sande et al., 2014)
VGG 15.2 46.4 - Visual Geometry Group, University of Oxford Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
(Simonyan et al., 2013)
ZF 13.5 - - New York University Matthew D Zeiler, Rob Fergus
(Zeiler and Fergus, 2013; Zeiler et al., 2011)
ILSVRC 2014
Codename CLS CLSo LOC LOCo DET DETo Institutions Contributors and references
Adobe - 11.6 - 30.1 - - Adobe† , UIUC‡ Hailin Jin† , Zhaowen Wang‡ , Jianchao Yang† , Zhe Lin†
BDC 11.3 - ◦ - - - Institute for Infocomm Research† , Université Pierre et Marie Curie‡ Olivier Morre† ‡ , Hanlin Goh† , Antoine Veillard‡ , Vijay Chandrasekhar† (Krizhevsky et al., 2012)
Berkeley - - - - - 34.5 UC Berkeley Ross Girshick, Jeff Donahue, Sergio Guadarrama, Trevor Darrell, Jitendra Malik (Girshick et al., 2013,
2014)
BREIL 16.0 - ◦ - - - KAIST department of EE Jun-Cheol Park, Yunhun Jang, Hyungwon Choi, JaeYoung Jun (Chatfield et al., 2014; Jia, 2013)
Brno 17.6 - 52.0 - - - Brno University of Technology Martin Kolář, Michal Hradiš, Pavel Svoboda (Krizhevsky et al., 2012; Mikolov et al., 2013; Jia, 2013)
CASIA-2 - - - - 28.6 - Chinese Academy of Science† , South- Peihao Huang† , Yongzhen Huang† , Feng Liu‡ , Zifeng Wu† , Fang Zhao† , Liang Wang† , Tieniu
east University‡ Tan† (Girshick et al., 2014)
CASIAWS - 11.4 - ◦ - - CRIPAC, CASIA Weiqiang Ren, Chong Wang, Yanhua Chen, Kaiqi Huang, Tieniu Tan (Arbeláez et al., 2014)
Cldi 13.9 - 46.9 - - - KAIST† , Cldi Inc.‡ Kyunghyun Paeng† , Donggeun Yoo† , Sunggyun Park† , Jungin Lee‡ , Anthony S. Paek‡ , In So Kweon† ,
Seong Dae Kim† (Krizhevsky et al., 2012; Perronnin et al., 2010)
CUHK - - - - - 40.7 The Chinese University of Hong Kong Wanli Ouyang, Ping Luo, Xingyu Zeng, Shi Qiu, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang,
Yuanjun Xiong, Chen Qian, Zhenyao Zhu, Ruohui Wang, Chen-Change Loy, Xiaogang Wang, Xiaoou
Tang (Ouyang et al., 2014; Ouyang and Wang, 2013)
DeepCNet 17.5 - ◦ - - - University of Warwick Ben Graham (Graham, 2013; Schmidhuber, 2012)
DeepInsight - - - - - 40.5 NLPR† , HKUST‡ Junjie Yan† , Naiyan Wang‡ , Stan Z. Li† , Dit-Yan Yeung‡ (Girshick et al., 2014)
FengjunLv 17.4 - ◦ - - - Fengjun Lv Consulting Fengjun Lv (Krizhevsky et al., 2012; Harel et al., 2007)
GoogLeNet 6.7 - 26.4 - - 43.9 Google Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Drago Anguelov, Dumitru Erhan,
Andrew Rabinovich (Szegedy et al., 2014)
HKUST - - - - 28.9 - Hong Kong U. of Science and Tech.† , Cewu Lu† , Hei Law*† , Hao Chen*‡ , Qifeng Chen*∓ , Yao Xiao*† Chi Keung Tang† (Uijlings et al., 2013;
Chinese U. of H. K.‡ , Stanford U.∓ Girshick et al., 2013; Perronnin et al., 2010; Felzenszwalb et al., 2010)
MIL 18.3 - 33.7 - - 30.4 The University of Tokyo† , IIT Senthil Purushwalkam† ‡ , Yuichiro Tsuchiya† , Atsushi Kanehira† , Asako Kanezaki† , Tatsuya
Guwahati‡ Harada† (Kanezaki et al., 2014; Girshick et al., 2013)
MPG UT - - - - - 26.4 The University of Tokyo Riku Togashi, Keita Iwamoto, Tomoaki Iwase, Hideki Nakayama (Girshick et al., 2014)
MSRA 8.1 - 35.5 - 35.1 - Microsoft Research† , Xi’an Jiaotong Kaiming He† , Xiangyu Zhang‡ , Shaoqing Ren∓ , Jian Sun† (He et al., 2014)
U.‡ , U. of Science and Tech. of China∓
NUS - - - - 37.2 - National University of Singapore† , Jian Dong† , Yunchao Wei† , Min Lin† , Qiang Chen‡ , Wei Xia† , Shuicheng Yan† (Lin et al., 2014a; Chen
IBM Research Australia‡ et al., 2014)
NUS-BST 9.8 - ◦ - - - National Univ. of Singapore† , Beijing Samsung Telecom R&D Center‡ Min Lin† , Jian Dong† , Hanjiang Lai† , Junjun Xiong‡ , Shuicheng Yan† (Lin et al., 2014a; Howard, 2014;
Krizhevsky et al., 2012)
Orange 15.2 14.8 42.8 42.7 - 27.7 Orange Labs Beijing† , BUPT China‡ Hongliang Bai† , Yinan Liu† , Bo Liu‡ , Yanchao Feng‡ , Kun Tao† , Yuan Dong† (Girshick et al., 2014)
PassBy 16.7 - ◦ - - - LENOVO† , HKUST‡ , U. of Macao∓ Lin Sun† ‡ , Zhanghui Kuang† , Cong Zhao† , Kui Jia∓ , Oscar C.Au‡ (Jia, 2013; Krizhevsky et al., 2012)
SCUT 18.8 - ◦ - - - South China Univ. of Technology Guo Lihua, Liao Qijun, Ma Qianli, Lin Junbin
Southeast - - - - 30.5 - Southeast U.† , Chinese A. of Sciences‡ Feng Liu† , Zifeng Wu‡ , Yongzhen Huang‡
SYSU 14.4 - 31.9 - - - Sun Yat-Sen University Liliang Zhang, Tianshui Chen, Shuye Zhang, Wanglan He, Liang Lin, Dengguang Pang, Lingbo Liu
Trimps - 11.5 - 42.2 - 33.7 The Third Research Institute of the Jie Shao, Xiaoteng Zhang, JianYing Zhou, Jian Wang, Jian Chen, Yanfeng Shang, Wenfei Wang, Lin
Ministry of Public Security Mei, Chuanping Hu (Girshick et al., 2014; Manen et al., 2013; Howard, 2014)
TTIC 10.2 - 48.3 - - - Toyota Technological Institute at George Papandreou† , Iasonas Kokkinos‡ (Papandreou, 2014; Papandreou et al., 2014; Jojic et al., 2003;
Chicago† , Ecole Centrale Paris‡ Krizhevsky et al., 2012; Sermanet et al., 2013; Dubout and Fleuret, 2012; Iandola et al., 2014)
UI 99.5 - ◦ - - - University of Isfahan Fatemeh Shafizadegan, Elham Shabaninia (Yang et al., 2009)
UvA 12.1 - ◦ - 32.0 35.4 U. of Amsterdam and Euvision Tech. Koen van de Sande, Daniel Fontijne, Cees Snoek, Harro Stokman, Arnold Smeulders (van de Sande et al.,
2014)
VGG 7.3 - 25.3 - - - University of Oxford Karen Simonyan, Andrew Zisserman (Simonyan and Zisserman, 2014)
XYZ 11.2 - ◦ - - - The University of Queensland Zhongwen Xu and Yi Yang (Krizhevsky et al., 2012; Jia, 2013; Zeiler and Fergus, 2013; Lin et al., 2014a)
Table 7 Teams participating in ILSVRC2014, ordered alphabetically. Each method is identified with a codename used in the text. For classification and single-object localization we report flat top-5 error, in percents (lower is better). For detection we report mean average precision, in percents (higher is better). CLSo, LOCo, and DETo correspond to entries using outside training data (officially allowed in ILSVRC2014). ◦ means localization error greater than 60% (a localization submission was required with every classification submission). Key references are provided where available. More details about the winning entries can be found in Section 5.1.
allowed them to increase both the depth and the width of the network significantly without incurring significant computational overhead. In the image classification with external data track, CASIAWS won by using weakly supervised object localization from only classification labels to improve image classification. MCG region proposals (Arbeláez et al., 2014) pretrained on PASCAL VOC 2012 data are used to extract region proposals, regions are represented using convolutional networks, and a multiple instance learning strategy is used to learn weakly supervised object detectors to represent the image.

In the single-object localization with provided data track, the winning team was VGG, which explored the effect of convolutional neural network depth on its accuracy by using three different architectures with up to 19 weight layers with rectified linear unit non-linearity, building off of the implementation of Caffe (Jia, 2013). For localization they used per-class bounding box regression similar to OverFeat (Sermanet et al., 2013). In the single-object localization with external data track, Adobe used 2000 additional ImageNet classes to train the classifiers in an integrated convolutional neural network framework for both classification and localization, with bounding box regression. At test time they used k-means to find bounding box clusters and ranked the clusters according to the classification scores.

In the object detection with provided data track, the winning team NUS used the RCNN framework (Girshick et al., 2013) with the network-in-network method (Lin et al., 2014a) and improvements of (Howard, 2014). Global context information was incorporated following (Chen et al., 2014). In the object detection with external data track, the winning team was GoogLeNet (which also won image classification with provided data). It is truly remarkable that the same team was able to win at both image classification and object detection, indicating that their methods can not only classify an image based on scene information but also accurately localize multiple object instances. Just like most teams participating in this track, GoogLeNet used the image classification dataset as extra training data.

5.2 Large scale algorithmic innovations

ILSVRC over the past five years has paved the way for several breakthroughs in computer vision.

The field of categorical object recognition has dramatically evolved in the large-scale setting. Section 5.1 documents the progress, starting from coded SIFT features and evolving to large-scale convolutional neural networks dominating all three tasks of image classification, single-object localization, and object detection. With the availability of so much training data (along with an efficient algorithmic implementation and GPU computing resources) it became possible to learn neural networks directly from the image data, without needing to create multi-stage hand-tuned pipelines of extracted features and discriminative classifiers. The major breakthrough came in 2012 with the win of the SuperVision team on the image classification and single-object localization tasks (Krizhevsky et al., 2012), and by 2014 all of the top contestants were relying heavily on convolutional neural networks.

Further, over the past few years there has been a lot of focus on large-scale recognition in the computer vision community. Best paper awards at top vision conferences in 2013 were awarded to large-scale recognition methods: at CVPR 2013 to "Fast, Accurate Detection of 100,000 Object Classes on a Single Machine" (Dean et al., 2013) and at ICCV 2013 to "From Large Scale Image Categorization to Entry-Level Categories" (Ordonez et al., 2013). Additionally, several influential lines of research have emerged, such as the large-scale weakly supervised localization work of (Kuettel et al., 2012), which won the best paper award at ECCV 2012, and large-scale zero-shot learning, e.g., (Frome et al., 2013).

6 Results and analysis

6.1 Improvements over the years

State-of-the-art accuracy has improved significantly from ILSVRC2010 to ILSVRC2014, showcasing the massive progress that has been made in large-scale object recognition over the past five years. The performance of the winning ILSVRC entries for each task and each year is shown in Figure 9. The improvement over the years is clearly visible. In this section we quantify and analyze this improvement.

6.1.1 Image classification and single-object localization improvement over the years

There has been a 4.2x reduction in image classification error (from 28.2% to 6.7%) and a 1.7x reduction in single-object localization error (from 42.5% to 25.3%) since the beginning of the challenge. For consistency, here we consider only teams that use the provided training data. Even though the exact object categories have changed (Section 3.1.1), the large scale of the dataset has remained the same (Table 2), making the results comparable across the years. The dataset has not changed since 2012, and there has been a 2.4x reduction in image classification error (from 16.4% to 6.7%) and a 1.3x reduction in single-object localization error.
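These reductions are computed from the flat top-5 error used throughout the challenge: an image counts as an error if its ground-truth label is not among the five predicted labels. A minimal sketch of that computation (the array layout and function name are illustrative, not taken from the official evaluation kit):

```python
import numpy as np

def top5_error(predictions, labels):
    """Flat top-5 classification error.

    predictions: (num_images, 5) integer array of predicted class indices,
                 ordered by decreasing confidence.
    labels:      (num_images,) integer array of ground-truth class indices.
    An image is an error if its label is not among its 5 predictions.
    """
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    hits = (predictions == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# The reductions quoted above follow directly from the winning errors,
# e.g. 28.2% (2010) vs. 6.7% (2014):
print(round(0.282 / 0.067, 1))  # 4.2
```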
Image classification
Year Codename Error (percent) 99.9% Conf Int
2014 GoogLeNet 6.66 6.40 - 6.92
2014 VGG 7.32 7.05 - 7.60
2014 MSRA 8.06 7.78 - 8.34
2014 AHoward 8.11 7.83 - 8.39
2014 DeeperVision 9.51 9.21 - 9.82
2013 Clarifai† 11.20 10.87 - 11.53
2014 CASIAWS† 11.36 11.03 - 11.69
2014 Trimps† 11.46 11.13 - 11.80
2014 Adobe† 11.58 11.25 - 11.91
2013 Clarifai 11.74 11.41 - 12.08
2013 NUS 12.95 12.60 - 13.30
2013 ZF 13.51 13.14 - 13.87
2013 AHoward 13.55 13.20 - 13.91
2013 OverFeat 14.18 13.83 - 14.54
2014 Orange† 14.80 14.43 - 15.17
2012 SuperVision† 15.32 14.94 - 15.69
2012 SuperVision 16.42 16.04 - 16.80
2012 ISI 26.17 25.71 - 26.65
2012 VGG 26.98 26.53 - 27.43
2012 XRCE 27.06 26.60 - 27.52
2012 UvA 29.58 29.09 - 30.04

Single-object localization
Year Codename Error (percent) 99.9% Conf Int
2014 VGG 25.32 24.87 - 25.78
2014 GoogLeNet 26.44 25.98 - 26.92
2013 OverFeat 29.88 29.38 - 30.35
2014 Adobe† 30.10 29.61 - 30.58
2014 SYSU 31.90 31.40 - 32.40
2012 SuperVision† 33.55 33.05 - 34.04
2014 MIL 33.74 33.24 - 34.25
2012 SuperVision 34.19 33.67 - 34.69
2014 MSRA 35.48 34.97 - 35.99
2014 Trimps† 42.22 41.69 - 42.75
2014 Orange† 42.70 42.18 - 43.24
2013 VGG 46.42 45.90 - 46.95
2012 VGG 50.03 49.50 - 50.57
2012 ISI 53.65 53.10 - 54.17
2014 CASIAWS† 61.96 61.44 - 62.48

Object detection
Year Codename AP (percent) 99.9% Conf Int
2014 GoogLeNet† 43.93 42.92 - 45.65
2014 CUHK† 40.67 39.68 - 42.30
2014 DeepInsight† 40.45 39.49 - 42.06
2014 NUS 37.21 36.29 - 38.80
2014 UvA† 35.42 34.63 - 36.92
2014 MSRA 35.11 34.36 - 36.70
2014 Berkeley† 34.52 33.67 - 36.12
2014 UvA 32.03 31.28 - 33.49
2014 Southeast 30.48 29.70 - 31.93
2014 HKUST 28.87 28.03 - 30.20
2013 UvA 22.58 22.00 - 23.82
2013 NEC† 20.90 20.40 - 22.15
2013 NEC 19.62 19.14 - 20.85
2013 OverFeat† 19.40 18.82 - 20.61
2013 Toronto 11.46 10.98 - 12.34
2013 SYSU 10.45 10.04 - 11.32
2013 UCLA 9.83 9.48 - 10.77

Table 8 We use bootstrapping to construct 99.9% confidence intervals around the results of up to the top 5 submissions to each ILSVRC task in 2012-2014. † means the entry used external training data. The winners using the provided data for each track and each year are bolded. The difference between the winning method and the runner-up each year is significant even at the 99.9% level.

Fig. 10 For each object class, we consider the best performance of any entry submitted to ILSVRC2012-2014, including entries using additional training data. The plots show the distribution of these "optimistic" per-class results. Performance is measured as accuracy for image classification (left) and for single-object localization (middle), and as average precision for object detection (right). While the results are very promising in image classification, the ILSVRC datasets are far from saturated: many object classes continue to be challenging for current algorithms.

…confidence interval. We run a large number of bootstrapping rounds (from 20,000 until convergence). Table 8 shows the results of the top entries to each task of ILSVRC2012-2014. The winning methods are statistically significantly different from the other methods, even at the 99.9% level.

6.3 Current state of categorical object recognition

Besides looking at just the average accuracy across hundreds of object categories and tens of thousands of images, we can also delve deeper to understand where mistakes are being made and where researchers' efforts should be focused to expedite progress.

To do so, in this section we will be analyzing an "optimistic" measurement of state-of-the-art recognition performance instead of focusing on the differences in individual algorithms. For each task and each object class, we compute the best performance of any entry submitted to any of ILSVRC2012-2014, including methods using additional training data. Since the test sets have remained the same, we can directly compare all the entries in the past three years to obtain the most "optimistic" measurement of state-of-the-art accuracy on each category.

For consistency with the object detection metric (higher is better), in this section we will be using image classification and single-object localization accuracy instead of error, where accuracy = 1 − error.
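The per-class "optimistic" numbers used in the rest of this section can be assembled directly from per-entry, per-class results. A minimal sketch, assuming such per-class results are available for every submitted entry (the data layout and names are illustrative):

```python
def optimistic_per_class(results):
    """Best per-class performance over all entries, ILSVRC2012-2014.

    results: list of dicts, one per submitted entry, each mapping
             class_name -> accuracy (or average precision for detection),
             where accuracy = 1 - error for classification/localization.
    Returns a dict mapping class_name -> best value achieved by any entry.
    """
    best = {}
    for entry in results:
        for cls, value in entry.items():
            if cls not in best or value > best[cls]:
                best[cls] = value
    return best

# Toy usage with two hypothetical entries:
entries = [{"red fox": 0.99, "hook": 0.52}, {"red fox": 0.97, "hook": 0.59}]
print(optimistic_per_class(entries))  # {'red fox': 0.99, 'hook': 0.59}
```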
6.3.1 Range of accuracy across object classes

Figure 10 shows the distribution of accuracy achieved by the "optimistic" models across the object categories. The image classification model achieves 94.6% accuracy on average (or 5.4% error), but there remains a 41.0% absolute difference in accuracy between the most and least accurate object class. The single-object localization model achieves 81.5% accuracy on average (or 18.5% error), with a 77.0% range in accuracy across the object classes. The object detection model achieves 44.7% average precision, with an 84.7% range across the object classes. It is clear that the ILSVRC dataset is far from saturated: performance on many categories has remained poor despite the strong overall performance of the models.

6.3.2 Qualitative examples of easy and hard classes

Figures 11 and 12 show the easiest and hardest classes for each task, i.e., classes with the best and worst results obtained with the "optimistic" models.

For image classification, 121 out of 1000 object classes have 100% image classification accuracy according to the optimistic estimate. Figure 11 (top) shows a random set of 10 of them. They contain a variety of classes, such as mammals like "red fox" and animals with distinctive structures like "stingray". The hardest classes in the image classification task, with accuracy as low as 59.0%, include metallic and see-through man-made objects, such as "hook" and "water bottle," the material "velvet" and the highly varied scene class "restaurant."

For single-object localization, the 10 easiest classes, with 99.0−100% accuracy, are all mammals and birds. The hardest classes include metallic man-made objects such as "letter opener" and "ladle", plus thin structures such as "pole" and "spacebar" and highly varied classes such as "wing". The most challenging class, "spacebar", has only 23.0% localization accuracy.

Object detection results are shown in Figure 12. The easiest classes are living organisms such as "dog" and "tiger", plus "basketball" and "volleyball" with distinctive shape and color, and a somewhat surprising "snowplow." The easiest class, "butterfly", is not yet perfectly detected but is very close with 92.7% AP. The hardest classes are, as expected, small thin objects such as "flute" and "nail", and the highly varied "lamp" and "backpack" classes, with as low as 8.0% AP.

6.3.3 Per-class accuracy as a function of image properties

We now take a closer look at the image properties to try to understand why current algorithms perform well on some object classes but not others. One hypothesis is that variation in accuracy comes from the fact that instances of some classes tend to be much smaller in images than instances of other classes, and smaller objects may be harder for computers to recognize. In this section we argue that while accuracy is correlated with object scale in the image, not all variation in accuracy can be accounted for by scale alone.

For every object class, we compute its average scale, or the average fraction of image area occupied by an instance of the object class on the ILSVRC2012-2014 validation set. Since the images and object classes in the image classification and single-object localization tasks are the same, we use the bounding box annotations of the single-object localization dataset for both tasks. In that dataset the object classes range from "swimming trunks" with a scale of 1.5% to "spider web" with a scale of 85.6%. In the object detection validation dataset the object classes range from "sunglasses" with a scale of 1.3% to "sofa" with a scale of 44.4%.

Figure 13 shows the performance of the "optimistic" method as a function of the average scale of the object in the image. Each dot corresponds to one object class. We observe a very weak positive correlation between object scale and image classification accuracy: ρ = 0.14. For single-object localization and object detection the correlation is stronger, at ρ = 0.40 and ρ = 0.41 respectively. It is clear that not all variation in accuracy can be accounted for by scale alone. Nevertheless, in the next section we will normalize for object scale to ensure that this factor is not affecting our conclusions.

6.3.4 Per-class accuracy as a function of object properties

Besides considering image-level properties, we can also observe how accuracy changes as a function of intrinsic object properties. We define three properties inspired by human vision: the real-world size of the object, whether it is deformable within instance, and how textured it is. For each property, the object classes are assigned to one of a few bins (listed below). These properties are illustrated in Figure 1.

Human subjects annotated each of the 1000 image classification and single-object localization object classes from ILSVRC2012-2014 with these properties (Russakovsky et al., 2013). By construction (see Section 3.3.1), each of the 200 object detection classes is either also one of the 1000 object classes or is an ancestor of one or more of the 1000 classes in the ImageNet hierarchy. To compute the values of the properties for each object detection class, we simply average the annotated values of the descendant classes.

In this section we draw the following conclusions about state-of-the-art recognition accuracy as a function of these object properties:

– Real-world size: XS for extra small (e.g. nail), small (e.g. fox), medium (e.g. bookcase), large (e.g. car) or XL for extra large (e.g. church)
Fig. 11 (Panels: image classification and single-object localization, easiest and hardest classes.) For each object category, we take the best performance of any entry submitted to ILSVRC2012-2014 (including entries using additional training data). Given these "optimistic" results we show the easiest and hardest classes for each task. The numbers in parentheses indicate classification and localization accuracy. For image classification the 10 easiest classes are randomly selected from among 121 object classes with 100% accuracy. Object detection results are shown in Figure 12.
Fig. 12 (Panels: object detection, easiest and hardest classes.) For each object category, we take the best performance of any entry submitted to ILSVRC2012-2014 (including entries using additional training data). Given these "optimistic" results we show the easiest and hardest classes for the object detection task, i.e., classes with the best and worst results. The numbers in parentheses indicate average precision. Image classification and single-object localization results are shown in Figure 11.
Fig. 13 Performance of the “optimistic” method as a function of object scale in the image, on each task. Each dot corresponds
to one object class. Average scale (x-axis) is computed as the average fraction of the image area occupied by an instance of
that object class on the ILSVRC2014 validation set. “Optimistic” performance (y-axis) corresponds to the best performance
on the test set of any entry submitted to ILSVRC2012-2014 (including entries with additional training data). The test set
has remained the same over these three years. We see that accuracy tends to increase as the objects get bigger in the image.
However, it is clear that far from all the variation in accuracy on these classes can be accounted for by scale alone.
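The per-class average scale on the x-axis of Figure 13, and the scale–accuracy correlations reported in Section 6.3.3, can be computed as follows. This is a minimal sketch that assumes per-instance bounding-box and image areas are available and treats ρ as a Pearson correlation (the paper does not spell out the estimator):

```python
import numpy as np

def average_scale(instances):
    """Average fraction of image area occupied by an instance of a class.

    instances: list of (box_area, image_area) pairs, one per annotated
               instance of the class on the validation set.
    """
    return float(np.mean([box / img for box, img in instances]))

def scale_accuracy_correlation(per_class_scale, per_class_accuracy):
    """Correlation between per-class average scale and per-class
    'optimistic' accuracy (e.g. rho = 0.14 for image classification)."""
    return float(np.corrcoef(per_class_scale, per_class_accuracy)[0, 1])

# Toy usage with three hypothetical classes:
scales = [0.015, 0.30, 0.85]
accs = [0.62, 0.95, 0.99]
print(scale_accuracy_correlation(scales, accs))
```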
The image classification and single-object localization "optimistic" models perform better on large and extra large real-world objects than on smaller ones. The "optimistic" object detection model surprisingly performs better on extra small objects than on small or medium ones.

– Deformability within instance: Rigid (e.g., mug) or deformable (e.g., water snake)

The "optimistic" model on each of the three tasks performs statistically significantly better on deformable objects compared to rigid ones. However, this effect disappears when analyzing natural objects separately from man-made objects.

– Amount of texture: none (e.g. punching bag), low (e.g. horse), medium (e.g. sheep) or high (e.g. honeycomb)

The "optimistic" model on each of the three tasks is significantly better on objects with at least a low level of texture compared to untextured objects.

These and other findings are justified and discussed in detail below.

Experimental setup. We observed in Section 6.3.3 that objects that occupy a larger area in the image tend to be somewhat easier to recognize. To make sure that differences in object scale are not influencing results in this section, we normalize each bin by object scale. We discard object classes with the largest scales from each bin as needed until the average object scale of the object classes in each bin across one property is the same (or as close as possible). For the real-world size property, for example, the resulting average object scale in each of the five bins is 31.6% − 31.7% in the image classification and single-object localization tasks, and 12.9% − 13.4% in the object detection task.11

Figure 14 shows the average performance of the "optimistic" model on the object classes that fall into each bin for each property. We analyze the results in detail below. Unless otherwise specified, the reported accuracies below are after the scale normalization step.

To evaluate statistical significance, we compute the 95% confidence interval for accuracy using bootstrapping: we repeatedly sample the object classes within the bin with replacement, discard some as needed to normalize by scale, and compute the average accuracy of the "optimistic" model on the remaining classes. We report the 95% confidence intervals (CI) in parentheses.

11 For rigid versus deformable objects, the average scale in each bin is 34.1% − 34.2% for classification and localization, and 13.5% − 13.7% for detection. For texture, the average scale in each of the four bins is 31.1% − 31.3% for classification and localization, and 12.7% − 12.8% for detection.

Real-world size. In Figure 14 (top, left) we observe that in the image classification task the "optimistic" model tends to perform significantly better on objects which are larger in the real world. The classification accuracy is 93.6% − 93.9% on XS, S and M objects compared to 97.0% on L and 96.4% on XL objects. Since this is measured after normalizing for scale and thus cannot be explained by the objects' size in the image, we conclude that either (1) larger real-world objects are easier for the model to recognize, or (2) larger real-world objects usually occur in images with very distinctive backgrounds.

To distinguish between the two cases we look at Figure 14 (top, middle). We see that in the single-object localization task, the L objects are easy to localize, at 82.4% localization accuracy. XL objects, however, tend to be the hardest to localize, with only 73.4% localization accuracy. We conclude that the appearance of L objects must be easier for the model to learn, while XL objects tend to appear in distinctive backgrounds. The image background makes these XL classes easier for the image-level classifier, but the individual instances are difficult to accurately localize. Some examples of L objects are "killer whale," "schooner," and "lion," and some examples of XL objects are "boathouse," "mosque," "toyshop" and "steel arch bridge."

In Figure 14 (top, right), corresponding to the object detection task, the influence of real-world object size is not as apparent. One of the key reasons is that many of the XL and L object classes of the image classification and single-object localization datasets were removed in constructing the detection dataset (Section 3.3.1) since they were not basic categories well-suited for detection. There were only 3 XL object classes remaining in the dataset ("train," "airplane" and "bus"), and none after scale normalization. We omit them from the analysis. The average precision on XS, S and M objects (44.5%, 39.0%, and 38.5% mAP respectively) is not statistically significantly different from the average precision on L objects: the 95% confidence interval for L objects is 37.5% − 59.5%. This may be due to the fact that there are only 6 L object classes remaining after scale normalization; all other real-world size bins have at least 18 object classes.

Finally, it is interesting that performance on XS objects, at 44.5% mAP (CI 40.5% − 47.6%), is statistically significantly better than performance on S or M objects, with 39.0% mAP and 38.5% mAP respectively. Some examples of XS objects are "strawberry," "bow tie" and "rugby ball."

Deformability within instance. In Figure 14 (second row) it is clear that the "optimistic" model performs statistically significantly worse on rigid objects than on deformable objects.
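The scale normalization and the class-level bootstrap behind the confidence intervals in Figure 14 can be sketched as follows. This is a greedy approximation of the procedure described in the experimental setup above, not the authors' exact implementation:

```python
import random

def normalize_bins_by_scale(bins, tol=1e-3):
    """Equalize the average object scale across property bins.

    bins: dict mapping bin name -> list of (class_name, avg_scale, accuracy)
          tuples. Greedily drops the largest-scale class from the bin with
          the highest mean scale until all bin means agree to within `tol`.
    """
    def mean_scale(classes):
        return sum(c[1] for c in classes) / len(classes)

    bins = {name: sorted(v, key=lambda c: c[1]) for name, v in bins.items()}
    while True:
        means = {name: mean_scale(v) for name, v in bins.items() if v}
        if max(means.values()) - min(means.values()) <= tol:
            return bins
        worst = max(means, key=means.get)
        if len(bins[worst]) <= 1:
            return bins
        bins[worst].pop()  # discard the largest-scale class in that bin

def bootstrap_bin_accuracy(bin_classes, rounds=10000, seed=0):
    """95% CI for a bin's mean accuracy, resampling classes with replacement.
    (The paper additionally re-applies scale normalization inside each
    bootstrap round; that step is omitted here for brevity.)"""
    rng = random.Random(seed)
    samples = sorted(
        sum(rng.choice(bin_classes)[2] for _ in bin_classes) / len(bin_classes)
        for _ in range(rounds)
    )
    return samples[int(0.025 * rounds)], samples[int(0.975 * rounds)]
```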
Fig. 14 (Rows correspond to object properties, including real-world size and amount of texture.) Performance of the "optimistic" computer vision model as a function of object properties. The x-axis corresponds to object properties annotated by human labelers for each object class (Russakovsky et al., 2013) and illustrated in Figure 1. The y-axis is the average accuracy of the "optimistic" model. Note that the range of the y-axis is different for each task to make the trends more visible. The black circle is the average accuracy of the model on all object classes that fall into each bin. We control for the effects of object scale by normalizing the object scale within each bin (details in Section 6.3.4). The color bars show the model accuracy averaged across the remaining classes. Error bars show the 95% confidence interval obtained with bootstrapping. Some bins are missing color bars because less than 5 object classes remained in the bin after scale normalization. For example, the bar for XL real-world object detection classes is missing because that bin has only 3 object classes (airplane, bus, train) and after normalizing by scale no classes remain.
Image classification accuracy is 93.2% on rigid objects (CI 92.6% − 93.8%), much smaller than 95.7% on deformable ones. Single-object localization accuracy is 76.2% on rigid objects (CI 74.9% − 77.4%), much smaller than 84.7% on deformable ones. Object detection mAP is 40.1% on rigid objects (CI 37.2% − 42.9%), much smaller than 44.8% on deformable ones.

We can further analyze the effects of deformability after separating object classes into "natural" and "man-made" bins based on the ImageNet hierarchy. Deformability is highly correlated with whether the object is natural or man-made: 0.72 correlation for image classification and single-object localization classes, and 0.61 for object detection classes. Figure 14 (third row) shows the effect of deformability on performance of the model for man-made and natural objects separately.

Man-made classes are significantly harder than natural classes: classification accuracy is 92.8% (CI 92.3% − 93.3%) for man-made versus 97.0% for natural, localization accuracy is 75.5% (CI 74.3% − 76.5%) for man-made versus 88.5% for natural, and detection mAP is 38.7% (CI 35.6% − 41.3%) for man-made versus 50.9% for natural. However, whether the classes are rigid or deformable within this subdivision is no longer significant in most cases. For example, the image classification accuracy is 92.3% (CI 91.4% − 93.1%) on man-made rigid objects and 91.8% on man-made deformable objects – not statistically significantly different.

There are two cases where the differences in performance are statistically significant. First, for single-object localization, natural deformable objects are easier than natural rigid objects: localization accuracy of 87.9% (CI 85.9% − 90.1%) on natural deformable objects is higher than 85.8% on natural rigid objects – falling slightly outside the 95% confidence interval. This difference in performance is likely because deformable natural animals tend to be easier to localize than rigid natural fruit.

Second, for object detection, man-made rigid objects are easier than man-made deformable objects: 38.5% mAP (CI 35.2% − 41.7%) on man-made rigid objects is higher than 33.0% mAP on man-made deformable objects. This is because man-made rigid objects include classes like "traffic light" or "car" whereas the man-made deformable objects contain challenging classes like "plastic bag," "swimming trunks" or "stethoscope."

Amount of texture. Finally, we analyze the effect that object texture has on the accuracy of the "optimistic" model. Figure 14 (fourth row) demonstrates that the model performs better as the amount of texture on the object increases. The most significant difference is between the performance on untextured objects and the performance on objects with low texture. Image classification accuracy is 90.5% on untextured objects (CI 89.3% − 91.6%), lower than 94.6% on low-textured objects. Single-object localization accuracy is 71.4% on untextured objects (CI 69.1% − 73.3%), lower than 80.2% on low-textured objects. Object detection mAP is 33.2% on untextured objects (CI 29.5% − 35.9%), lower than 42.9% on low-textured objects.

Texture is correlated with whether the object is natural or man-made, at 0.35 correlation for image classification and single-object localization, and 0.46 correlation for object detection. To determine if this is a contributing factor, in Figure 14 (bottom row) we break up the object classes into natural and man-made and show the accuracy on objects with no texture versus objects with low texture. We observe that the model is still statistically significantly better on low-textured object classes than on untextured ones, both on man-made and natural object classes independently.12

12 Natural object detection classes are removed from this analysis because there are only 3 and 13 natural untextured and low-textured classes respectively, and none remain after scale normalization. All other bins contain at least 9 object classes after scale normalization.

6.4 Human accuracy on large-scale image classification

Recent improvements in state-of-the-art accuracy on the ILSVRC dataset are easier to put in perspective when compared to human-level accuracy. In this section we compare the performance of the leading large-scale image classification method with the performance of humans on this task.

To support this comparison, we developed an interface that allowed a human labeler to annotate images with up to five ILSVRC target classes. We compare human errors to those of the winning ILSVRC2014 image classification model, GoogLeNet (Section 5.1). For this analysis we use a random sample of 1500 ILSVRC2012-2014 image classification test set images.

Annotation interface. Our web-based annotation interface consists of one test set image and a list of 1000 ILSVRC categories on the side. Each category is described by its title, such as "cowboy boot." The categories are sorted in the topological order of the ImageNet hierarchy, which places semantically similar concepts nearby in the list. For example, all motor vehicle-related classes are arranged contiguously in the list. Every category is additionally accompanied by a row of 13 example images from the training set to allow for faster visual scanning. The user of the interface selects 5 categories from the list by clicking on the desired items.
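One way to realize the "topological order of the ImageNet hierarchy" used to lay out the category list is a depth-first traversal of the hierarchy, which keeps the leaves of each subtree (e.g. all motor vehicles) contiguous. A toy sketch with hypothetical node names; the interface's exact ordering may differ:

```python
def order_categories(children, root):
    """Return leaf categories in depth-first order of the hierarchy.

    children: dict mapping a node to the list of its children; leaves map
              to an empty list (or are absent from the dict).
    """
    ordered, stack = [], [root]
    while stack:
        node = stack.pop()
        kids = children.get(node, [])
        if not kids:
            ordered.append(node)          # leaf = target category
        else:
            stack.extend(reversed(kids))  # preserve left-to-right order
    return ordered

# Toy hierarchy (hypothetical nodes):
h = {"entity": ["vehicle", "dog"],
     "vehicle": ["school bus", "sports car"],
     "dog": ["beagle", "boxer"]}
print(order_categories(h, "entity"))
# ['school bus', 'sports car', 'beagle', 'boxer']
```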
Fig. 15 Representative validation images that highlight common sources of error. For each image, we display the ground truth
in blue, and top 5 predictions from GoogLeNet follow (red = wrong, green = right). GoogLeNet predictions on the validation
set images were graciously provided by members of the GoogLeNet team. From left to right: Images that contain multiple
objects, images of extreme closeups and uncharacteristic views, images with filters, images that significantly benefit from the
ability to read text, images that contain very small and thin objects, images with abstract representations, and example of a
fine-grained image that GoogLeNet correctly identifies but a human would have significant difficulty with.
…classes (usually many more than five), with little indication of which object is the focus of the image. This error is only present in the Classification setting, since every image is constrained to have exactly one correct label. In total, we attribute 24 (24%) of GoogLeNet errors and 12 (16%) of human errors to this category. It is worth noting that humans can have a slight advantage in this error type, since it can sometimes be easy to identify the most salient object in the image.
2. Incorrect annotations. We found that approximately 5 out of 1500 images (0.3%) were incorrectly annotated in the ground truth. This introduces an approximately equal number of errors for both humans and GoogLeNet.

Types of errors that the computer is more susceptible to than the human:
1. Object small or thin. GoogLeNet struggles with recognizing objects that are very small or thin in the image, even if that object is the only object present. Examples of this include an image of a standing person wearing sunglasses, a person holding a quill in their hand, or a small ant on a stem of a flower. We estimate that approximately 22 (21%) of GoogLeNet errors fall into this category, while none of the human errors do. In other words, in our sample of images, no image was mislabeled by a human because they were unable to identify a very small or thin object. This discrepancy can be attributed to the fact that a human can very effectively leverage context and affordances to accurately infer the identity of small objects (for example, a few barely visible feathers near a person's hand very likely belong to a mostly occluded quill).
2. Image filters. Many people enhance their photos with filters that distort the contrast and color distributions of the image. We found that 13 (13%) of the images that GoogLeNet incorrectly classified contained a filter. Thus, we posit that GoogLeNet is not very robust to these distortions. In comparison, only one image among the human errors contained a filter, but we do not attribute the source of the error to the filter.
3. Abstract representations. GoogLeNet struggles with images that depict objects of interest in an abstract form, such as 3D-rendered images, paintings, sketches, plush toys, or statues. Examples are the abstract shape of a bow drawn with a light source in night photography, a 3D-rendered robotic scorpion, or a shadow on the ground of a child on a swing. We attribute approximately 6 (6%) of GoogLeNet errors to this type of error and believe that humans are significantly more robust, with no such errors seen in our sample.
4. Miscellaneous sources. Additional sources of error that occur relatively infrequently include extreme closeups of parts of an object, unconventional viewpoints such as a rotated image, images that can significantly benefit from the ability to read text (e.g. a featureless container identifying itself as "face powder"), objects with heavy occlusions, and images that depict a collage of multiple images. In general, we found that humans are more robust to all of these types of error.

Types of errors that the human is more susceptible to than the computer:
1. Fine-grained recognition. We found that humans are noticeably worse at fine-grained recognition (e.g. dogs, monkeys, snakes, birds), even when they are…
…and 3.3.4 we tried to put those concerns to rest by analyzing the statistics of the ILSVRC dataset and concluding that it is comparable with, and in many cases much more challenging than, the long-standing PASCAL VOC benchmark (Everingham et al., 2010).

The second is regarding the errors in ground truth labeling. We went through several rounds of in-house post-processing of the annotations obtained using crowdsourcing, and corrected many common sources of errors (e.g., Appendix E). The major remaining source of annotation errors stems from fine-grained object classes, e.g., labelers failing to distinguish different species of birds. This is a tradeoff that had to be made: in order to annotate data at this scale on a reasonable budget, we had to rely on non-expert crowd labelers. However, overall the dataset is encouragingly clean. By our estimates, 99.7% precision is achieved in the image classification dataset (Sections 3.1.3 and 6.4), and 97.9% of images that went through the bounding box annotation system have all instances of the target object class labeled with bounding boxes (Section 3.2.1).

The third criticism we encountered is over the rules of the competition regarding the use of external training data. In ILSVRC2010-2013, algorithms were allowed to use only the provided training and validation set images and annotations for training their models. With the growth of the field of large-scale unsupervised feature learning, however, questions began to arise about what exactly constitutes "outside" data: for example, are image features trained on a large pool of "outside" images in an unsupervised fashion allowed in the competition? After much discussion, in ILSVRC2014 we took the first step towards addressing this problem. We followed the PASCAL VOC strategy and created two tracks in the competition: entries using only "provided" data and entries using "outside" data, meaning any images or annotations not provided as part of the ILSVRC training or validation sets. However, in the future this strategy will likely need to be further revised as the computer vision field evolves. For example, competitions can consider allowing the use of any image features which are publically available, even if these features were learned on an external source of data.

Given the massive algorithmic breakthroughs over the past five years, we are very eager to see what will happen in the next five years. There are many potential directions of improvement and growth for ILSVRC and other large-scale image datasets.

First, …object localization to object detection), the next challenge would be to tackle pixel-level object segmentation. The recently released large-scale COCO dataset (Lin et al., 2014b) is already taking a step in that direction.

Second, as datasets grow even larger in scale, it may become impossible to fully annotate them manually. The scale of ILSVRC is already imposing limits on the manual annotations that are feasible to obtain: for example, we had to restrict the number of objects labeled per image in the image classification and single-object localization datasets. In the future, with billions of images, it will become impossible to obtain even one clean label for every image. Datasets such as Yahoo's Flickr Creative Commons 100M,13 released with weak human tags but no centralized annotation, will become more common.

The growth of unlabeled or only partially labeled large-scale datasets implies two things. First, algorithms will have to rely more on weakly supervised training data. Second, even evaluation might have to be done after the algorithms make predictions, not before. This means that rather than evaluating accuracy (how many of the test images or objects did the algorithm get right) or recall (how many of the desired images or objects did the algorithm manage to find), both of which require a fully annotated test set, we will be focusing more on precision: of the predictions that the algorithm made, how many were deemed correct by humans.

We are eagerly awaiting the future development of object recognition datasets and algorithms, and are grateful that ILSVRC served as a stepping stone along this path.

Acknowledgements We thank Stanford University, UNC Chapel Hill, Google and Facebook for sponsoring the challenges, and NVIDIA for providing computational resources to participants of ILSVRC2014. We thank our advisors over the years: Lubomir Bourdev, Alexei Efros, Derek Hoiem, Jitendra Malik, Chuck Rosenberg and Andrew Zisserman. We thank the PASCAL VOC organizers for partnering with us in running ILSVRC2010-2012. We thank all members of the Stanford vision lab for supporting the challenges and putting up with us along the way. Finally, and most importantly, we thank all researchers that have made the ILSVRC effort a success by competing in the challenges and by using the datasets to advance computer vision.

Appendix A ILSVRC2012-2014 image classification and single-object localization object categories

abacus, abaya, academic gown, accordion, acorn, acorn squash, acoustic guitar, admiral, affenpinscher, Afghan hound, African chameleon, African crocodile,
African elephant, African grey, African hunting dog, agama, agaric, aircraft car- machine, Shetland sheepdog, shield, Shih-Tzu, shoe shop, shoji, shopping bas-
rier, Airedale, airliner, airship, albatross, alligator lizard, alp, altar, ambulance, ket, shopping cart, shovel, shower cap, shower curtain, siamang, Siamese cat,
American alligator, American black bear, American chameleon, American coot, Siberian husky, sidewinder, silky terrier, ski, ski mask, skunk, sleeping bag,
American egret, American lobster, American Staffordshire terrier, amphibian, slide rule, sliding door, slot, sloth bear, slug, snail, snorkel, snow leopard, snow-
analog clock, anemone fish, Angora, ant, apiary, Appenzeller, apron, Arabian mobile, snowplow, soap dispenser, soccer ball, sock, soft-coated wheaten ter-
camel, Arctic fox, armadillo, artichoke, ashcan, assault rifle, Australian terrier, rier, solar dish, sombrero, sorrel, soup bowl, space bar, space heater, space
axolotl, baboon, backpack, badger, bagel, bakery, balance beam, bald eagle, bal- shuttle, spaghetti squash, spatula, speedboat, spider monkey, spider web, spin-
loon, ballplayer, ballpoint, banana, Band Aid, banded gecko, banjo, bannister, dle, spiny lobster, spoonbill, sports car, spotlight, spotted salamander, squirrel
barbell, barber chair, barbershop, barn, barn spider, barometer, barracouta, bar- monkey, Staffordshire bullterrier, stage, standard poodle, standard schnauzer,
rel, barrow, baseball, basenji, basketball, basset, bassinet, bassoon, bath towel, starfish, steam locomotive, steel arch bridge, steel drum, stethoscope, stingray,
bathing cap, bathtub, beach wagon, beacon, beagle, beaker, bearskin, beaver, stinkhorn, stole, stone wall, stopwatch, stove, strainer, strawberry, street sign,
Bedlington terrier, bee, bee eater, beer bottle, beer glass, bell cote, bell pepper, streetcar, stretcher, studio couch, stupa, sturgeon, submarine, suit, sulphur but-
Bernese mountain dog, bib, bicycle-built-for-two, bighorn, bikini, binder, binoc- terfly, sulphur-crested cockatoo, sundial, sunglass, sunglasses, sunscreen, suspen-
ulars, birdhouse, bison, bittern, black and gold garden spider, black grouse, black sion bridge, Sussex spaniel, swab, sweatshirt, swimming trunks, swing, switch,
stork, black swan, black widow, black-and-tan coonhound, black-footed ferret, syringe, tabby, table lamp, tailed frog, tank, tape player, tarantula, teapot,
Blenheim spaniel, bloodhound, bluetick, boa constrictor, boathouse, bobsled, teddy, television, tench, tennis ball, terrapin, thatch, theater curtain, thimble,
bolete, bolo tie, bonnet, book jacket, bookcase, bookshop, Border collie, Border three-toed sloth, thresher, throne, thunder snake, Tibetan mastiff, Tibetan ter-
terrier, borzoi, Boston bull, bottlecap, Bouvier des Flandres, bow, bow tie, box rier, tick, tiger, tiger beetle, tiger cat, tiger shark, tile roof, timber wolf, titi,
turtle, boxer, Brabancon griffon, brain coral, brambling, brass, brassiere, break- toaster, tobacco shop, toilet seat, toilet tissue, torch, totem pole, toucan, tow
water, breastplate, briard, Brittany spaniel, broccoli, broom, brown bear, bub- truck, toy poodle, toy terrier, toyshop, tractor, traffic light, trailer truck, tray,
ble, bucket, buckeye, buckle, bulbul, bull mastiff, bullet train, bulletproof vest, tree frog, trench coat, triceratops, tricycle, trifle, trilobite, trimaran, tripod, tri-
bullfrog, burrito, bustard, butcher shop, butternut squash, cab, cabbage butter- umphal arch, trolleybus, trombone, tub, turnstile, tusker, typewriter keyboard,
fly, cairn, caldron, can opener, candle, cannon, canoe, capuchin, car mirror, car umbrella, unicycle, upright, vacuum, valley, vase, vault, velvet, vending machine,
wheel, carbonara, Cardigan, cardigan, cardoon, carousel, carpenter’s kit, car- vestment, viaduct, vine snake, violin, vizsla, volcano, volleyball, vulture, waffle
ton, cash machine, cassette, cassette player, castle, catamaran, cauliflower, CD iron, Walker hound, walking stick, wall clock, wallaby, wallet, wardrobe, war-
player, cello, cellular telephone, centipede, chain, chain mail, chain saw, chain- plane, warthog, washbasin, washer, water bottle, water buffalo, water jug, water
link fence, chambered nautilus, cheeseburger, cheetah, Chesapeake Bay retriever, ouzel, water snake, water tower, weasel, web site, weevil, Weimaraner, Welsh
chest, chickadee, chiffonier, Chihuahua, chime, chimpanzee, china cabinet, chi- springer spaniel, West Highland white terrier, whippet, whiptail, whiskey jug,
ton, chocolate sauce, chow, Christmas stocking, church, cicada, cinema, cleaver, whistle, white stork, white wolf, wig, wild boar, window screen, window shade,
cliff, cliff dwelling, cloak, clog, clumber, cock, cocker spaniel, cockroach, cocktail Windsor tie, wine bottle, wing, wire-haired fox terrier, wok, wolf spider, wom-
shaker, coffee mug, coffeepot, coho, coil, collie, colobus, combination lock, comic bat, wood rabbit, wooden spoon, wool, worm fence, wreck, yawl, yellow lady’s
book, common iguana, common newt, computer keyboard, conch, confectionery, slipper, Yorkshire terrier, yurt, zebra, zucchini
consomme, container ship, convertible, coral fungus, coral reef, corkscrew, corn,
cornet, coucal, cougar, cowboy boot, cowboy hat, coyote, cradle, crane, crane,
crash helmet, crate, crayfish, crib, cricket, Crock Pot, croquet ball, crossword
puzzle, crutch, cucumber, cuirass, cup, curly-coated retriever, custard apple,
daisy, dalmatian, dam, damselfly, Dandie Dinmont, desk, desktop computer,
dhole, dial telephone, diamondback, diaper, digital clock, digital watch, dingo, dining table, dishrag, dishwasher, disk brake, Doberman, dock, dogsled, dome, doormat, dough, dowitcher, dragonfly, drake, drilling platform, drum, drumstick,
dugong, dumbbell, dung beetle, Dungeness crab, Dutch oven, ear, earthstar,
echidna, eel, eft, eggnog, Egyptian cat, electric fan, electric guitar, electric lo-
comotive, electric ray, English foxhound, English setter, English springer, enter-
tainment center, EntleBucher, envelope, Eskimo dog, espresso, espresso maker, European fire salamander, European gallinule, face powder, feather boa, fiddler crab, fig, file, fire engine, fire screen, fireboat, flagpole, flamingo, flat-coated retriever, flatworm, flute, fly, folding chair, football helmet, forklift, fountain, fountain pen, four-poster, fox squirrel, freight car, French bulldog, French horn, French loaf, frilled lizard, frying pan, fur coat, gar, garbage truck, garden spider, garter snake, gas pump, gasmask, gazelle, German shepherd, German short-haired pointer, geyser, giant panda, giant schnauzer, gibbon, Gila monster, go-kart, goblet, golden retriever, goldfinch, goldfish, golf ball, golfcart, gondola, gong, goose, Gordon setter, gorilla, gown, grand piano, Granny Smith, grasshopper, Great Dane, great grey owl, Great Pyrenees, great white shark, Greater Swiss Mountain dog, green lizard, green mamba, green snake, greenhouse, grey fox, grey whale, grille, grocery store, groenendael, groom, ground beetle, guacamole, guenon, guillotine, guinea pig, gyromitra, hair slide, hair spray, half track, hammer, hammerhead, hamper, hamster, hand blower, hand-held computer, handkerchief, hard disc, hare, harmonica, harp, hartebeest, harvester, harvestman, hatchet, hay, head cabbage, hen, hen-of-the-woods, hermit crab, hip, hippopotamus, hog, hognose snake, holster, home theater, honeycomb, hook, hoopskirt, horizontal bar, hornbill, horned viper, horse cart, hot pot, hotdog, hourglass, house finch, howler monkey, hummingbird, hyena, ibex, Ibizan hound, ice bear, ice cream, ice lolly, impala, Indian cobra, Indian elephant, indigo bunting, indri, iPod, Irish setter, Irish terrier, Irish water spaniel, Irish wolfhound, iron, isopod, Italian greyhound, jacamar, jack-o'-lantern, jackfruit, jaguar, Japanese spaniel, jay, jean, jeep, jellyfish, jersey, jigsaw puzzle, jinrikisha, joystick, junco, keeshond, kelpie, Kerry blue terrier, killer whale, kimono, king crab, king penguin, king snake, kit fox, kite, knee pad, knot, koala, Komodo dragon, komondor, kuvasz, lab coat, Labrador retriever, lacewing, ladle, ladybug, Lakeland terrier, lakeside, lampshade, langur, laptop, lawn mower, leaf beetle, leafhopper, leatherback turtle, lemon, lens cap, Leonberg, leopard, lesser panda, letter opener, Lhasa, library, lifeboat, lighter, limousine, limpkin, liner, lion, lionfish, lipstick, little blue heron, llama, Loafer, loggerhead, long-horned beetle, lorikeet, lotion, loudspeaker, loupe, lumbermill, lycaenid, lynx, macaque, macaw, Madagascar cat, magnetic compass, magpie, mailbag, mailbox, maillot, maillot, malamute, malinois, Maltese dog, manhole cover, mantis, maraca, marimba, marmoset, marmot, mashed potato, mask, matchstick, maypole, maze, measuring cup, meat loaf, medicine chest, meerkat, megalith, menu, Mexican hairless, microphone, microwave, military uniform, milk can, miniature pinscher, miniature poodle, miniature schnauzer, minibus, miniskirt, minivan, mink, missile, mitten, mixing bowl, mobile home, Model T, modem, monarch, monastery, mongoose, monitor, moped, mortar, mortarboard, mosque, mosquito net, motor scooter, mountain bike, mountain tent, mouse, mousetrap, moving van, mud turtle, mushroom, muzzle, nail, neck brace, necklace, nematode, Newfoundland, night snake, nipple, Norfolk terrier, Norwegian elkhound, Norwich terrier, notebook, obelisk, oboe, ocarina, odometer, oil filter, Old English sheepdog, orange, orangutan, organ, oscilloscope, ostrich, otter, otterhound, overskirt, ox, oxcart, oxygen mask, oystercatcher, packet, paddle, paddlewheel, padlock, paintbrush, pajama, palace, panpipe, paper towel, papillon, parachute, parallel bars, park bench, parking meter, partridge, passenger car, patas, patio, pay-phone, peacock, pedestal, Pekinese, pelican, Pembroke, pencil box, pencil sharpener, perfume, Persian cat, Petri dish, photocopier, pick, pickelhaube, picket fence, pickup, pier, piggy bank, pill bottle, pillow, pineapple, ping-pong ball, pinwheel, pirate, pitcher, pizza, plane, planetarium, plastic bag, plate, plate rack, platypus, plow, plunger, Polaroid camera, pole, polecat, police van, pomegranate, Pomeranian, poncho, pool table, pop bottle, porcupine, pot, potpie, potter's wheel, power drill, prairie chicken, prayer rug, pretzel, printer, prison, proboscis monkey, projectile, projector, promontory, ptarmigan, puck, puffer, pug, punching bag, purse, quail, quill, quilt, racer, racket, radiator, radio, radio telescope, rain barrel, ram, rapeseed, recreational vehicle, red fox, red wine, red wolf, red-backed sandpiper, red-breasted merganser, redbone, redshank, reel, reflex camera, refrigerator, remote control, restaurant, revolver, rhinoceros beetle, Rhodesian ridgeback, rifle, ringlet, ringneck snake, robin, rock beauty, rock crab, rock python, rocking chair, rotisserie, Rottweiler, rubber eraser, ruddy turnstone, ruffed grouse, rugby ball, rule, running shoe, safe, safety pin, Saint Bernard, saltshaker, Saluki, Samoyed, sandal, sandbar, sarong, sax, scabbard, scale, schipperke, school bus, schooner, scoreboard, scorpion, Scotch terrier, Scottish deerhound, screen, screw, screwdriver, scuba diver, sea anemone, sea cucumber, sea lion, sea slug, sea snake, sea urchin, Sealyham terrier, seashore, seat belt, sewing

Appendix B Additional single-object localization dataset statistics

We consider two additional metrics of object localization difficulty: chance performance of localization and the level of clutter. We use these metrics to compare the ILSVRC2012-2014 single-object localization dataset to the PASCAL VOC 2012 object detection benchmark. The measures of localization difficulty are computed on the validation set of both datasets. According to both of these measures of difficulty there is a subset of ILSVRC which is as challenging as PASCAL but more than an order of magnitude greater in size. Figure 16 shows the distributions of different properties (object scale, chance performance of localization and level of clutter) across the different classes in the two datasets.

Chance performance of localization (CPL). Chance performance on a dataset is a common metric to consider. We define the CPL measure as the expected accuracy of a detector which first randomly samples an object instance of that class and then uses its bounding box directly as the proposed localization window on all other images (after rescaling the images to the same size). Concretely, let B_1, B_2, ..., B_N be all the bounding boxes of the object instances within a class; then

CPL = \frac{\sum_i \sum_{j \neq i} \mathbf{1}\left[\mathrm{IOU}(B_i, B_j) \geq 0.5\right]}{N(N-1)}    (6)

Some of the most difficult ILSVRC categories to localize according to this metric are basketball, swimming trunks, ping pong ball and rubber eraser, all with less than 0.2% CPL. This measure correlates strongly (ρ = 0.9) with the average scale of the object (fraction of image occupied by object). The average CPL across
36 Olga Russakovsky* et al.
the 1000 ILSVRC categories is 20.8%. The 20 PASCAL afternoon tea, ant bridge building, armadillo race, armadillo yard, artist studio,
auscultation, baby room, banjo orchestra, banjo rehersal, banjo show, califone
categories have an average CPL of 8.7%, which is the headphones & media player sets, camel dessert, camel tourist, carpenter drilling,
carpentry, centipede wild, coffee shop, continental breakfast toaster, continen-
same as the CPL of the 562 most difficult categories of tal breakfast waffles, crutch walking, desert scorpion, diner, dining room, din-
ing table, dinner, dragonfly friendly, dragonfly kid, dragonfly pond, dragonfly
ILSVRC. wild, drying hair, dumbbell curl, fan blow wind, fast food, fast food restau-
rant, firewood chopping, flu shot, goldfish aquarium, goldfish tank, golf cart
on golf course, gym dumbbell, hamster drinking water, harmonica orchestra,
harmonica rehersal, harmonica show, harp ensemble, harp orchestra, harp re-
hersal, harp show, hedgehog cute, hedgehog floor, hedgehog hidden, hippo bird,
Clutter. Intuitively, even small objects are easy to lo- hippo friendly, home improvement diy drill, horseback riding, hotel coffee ma-
chine, hotel coffee maker, hotel waffle maker, jellyfish scuba, jellyfish snorkling,
calize on a plain background. To quantify clutter we kitchen, kitchen counter coffee maker, kitchen counter toaster, kitchenette, koala
feed, koala tree, ladybug flower, ladybug yard, laundromat, lion zebra friendly,
employ the objectness measure of (Alexe et al., 2012), lunch, mailman, making breakfast, making waffles, mexican food, motorcycle
racing, office, office fan, opossum on tree branch, orchestra, panda play, panda
which is a class-generic object detector evaluating how tree, pizzeria, pomegranate tree, porcupine climbing trees, power drill carpenter,
purse shop, red panda tree, riding competition, riding motor scooters, school sup-
likely a window in the image contains a coherent ob- plies, scuba starfish, sea lion beach, sea otter, sea urchin habitat, shopping for
school supplies, sitting in front of a fan, skunk and cat, skunk park, skunk wild,
ject (of any class) as opposed to background (sky, wa- skunk yard, snail flower, snorkling starfish, snowplow cleanup, snowplow pile,
snowplow winter, soccer game, south american zoo, starfish sea world, starts
ter, grass). For every image m containing target ob- shopping, steamed artichoke, stethoscope doctor, strainer pasta, strainer tea,
syringe doctor, table with food, tape player, tiger circus, tiger pet, using a can
ject instances at positions B1m , B2m , . . . , we use the pub- opener, using power drill, waffle iron breakfast, wild lion savana, wildlife pre-
serve animals, wiping dishes, wombat petting zoo, zebra savana, zoo feeding, zoo
licly available objectness software to sample 1000 win- in australia
Fig. 16 Distribution of various measures of localization difficulty on the ILSVRC2012-2014 single-object localization (dark
green) and PASCAL VOC 2012 (light blue) validation sets. Object scale is fraction of image area occupied by an average
object instance. Chance performance of localization and level of clutter are defined in Appendix B. The plots on top contain
the full ILSVRC validation set with 1000 classes; the plots on the bottom contain 200 ILSVRC classes with the lowest chance
performance of localization. All plots contain all 20 classes of PASCAL VOC.
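As a rough illustration of Equation (6), the sketch below computes the CPL of a single category directly from its ground truth boxes: it counts ordered pairs of instances whose boxes overlap with IOU of at least 0.5 and divides by N(N-1). The box format, function names, and toy data are our own illustrative choices (and, as the definition requires, boxes are assumed to come from images already rescaled to a common size); this is a minimal sketch, not part of any released ILSVRC tooling.

```python
from itertools import permutations

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def chance_performance_of_localization(boxes, thresh=0.5):
    # Equation (6): fraction of ordered instance pairs (i, j), i != j,
    # whose boxes overlap with IOU >= thresh.
    n = len(boxes)
    if n < 2:
        return 0.0
    hits = sum(1 for b_i, b_j in permutations(boxes, 2) if iou(b_i, b_j) >= thresh)
    return hits / (n * (n - 1))

# Toy example: three near-identical instances and one small, off-center one.
boxes = [(10, 10, 60, 60), (12, 8, 58, 62), (15, 12, 55, 58), (200, 150, 230, 180)]
print(chance_performance_of_localization(boxes))  # 0.5: only pairs among the first three boxes match
```

Averaging this quantity over the instances of each category, on images rescaled to a common size, reproduces the per-class CPL values discussed above.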
◦ (21) flute: a high-pitched musical instrument that looks like a straight tube and is usually played sideways (please do not confuse with oboes, which have a distinctive straw-like mouth piece and a slightly flared end)
◦ (22) oboe: a slender musical instrument roughly 65cm long with metal keys, a distinctive straw-like mouthpiece and often a slightly flared end (please do not confuse with flutes)
◦ (23) saxophone: a musical instrument consisting of a brass conical tube, often with a u-bend at the end
• food: something you can eat or drink (includes growing fruit, vegetables and mushrooms, but does not include living animals)
◦ food with bread or crust: pretzel, bagel, pizza, hotdog, hamburgers, etc
◦ (24) pretzel
◦ (25) bagel, beigel
◦ (26) pizza, pizza pie
◦ (27) hotdog, hot dog, red hot
◦ (28) hamburger, beefburger, burger
◦ (29) guacamole
◦ (30) burrito
◦ (31) popsicle (ice cream or water ice on a small wooden stick)
◦ fruit
◦ (32) fig
◦ (33) pineapple, ananas
◦ (34) banana
◦ (35) pomegranate
◦ (36) apple
◦ (37) strawberry
◦ (38) orange
◦ (39) lemon
◦ vegetables
◦ (40) cucumber, cuke
◦ (41) artichoke, globe artichoke
◦ (42) bell pepper
◦ (43) head cabbage
◦ (44) mushroom
• items that run on electricity (plugged in or using batteries); including clocks, microphones, traffic lights, computers, etc
◦ (45) remote control, remote
◦ electronics that blow air
◦ (46) hair dryer, blow dryer
◦ (47) electric fan: a device for creating a current of air by movement of a surface or surfaces (please do not consider hair dryers)
◦ electronics that can play music or amplify sound
◦ (48) tape player
◦ (49) iPod
◦ (50) microphone, mike
◦ computer and computer peripherals: mouse, laptop, printer, keyboard, etc
◦ (51) computer mouse
◦ (52) laptop, laptop computer
◦ (53) printer (please do not consider typewriters to be printers)
◦ (54) computer keyboard
◦ (55) lamp
◦ electric cooking appliance (an appliance which generates heat to cook food or boil water)
◦ (56) microwave, microwave oven
◦ (57) toaster
◦ (58) waffle iron
◦ (59) coffee maker: a kitchen appliance used for brewing coffee automatically
◦ (60) vacuum, vacuum cleaner
◦ (61) dishwasher, dish washer, dishwashing machine
◦ (62) washer, washing machine: an electric appliance for washing clothes
◦ (63) traffic light, traffic signal, stoplight
◦ (64) tv or monitor: an electronic device that represents information in visual form
◦ (65) digital clock: a clock that displays the time of day digitally
• kitchen items: tools, utensils and appliances usually found in the kitchen
◦ electric cooking appliance (an appliance which generates heat to cook food or boil water)
◦ (56) microwave, microwave oven
◦ (57) toaster
◦ (58) waffle iron
◦ (59) coffee maker: a kitchen appliance used for brewing coffee automatically
◦ (61) dishwasher, dish washer, dishwashing machine
◦ (66) stove
◦ things used to open cans/bottles: can opener or corkscrew
◦ (67) can opener (tin opener)
◦ (68) corkscrew
◦ (69) cocktail shaker
◦ non-electric item commonly found in the kitchen: pot, pan, utensil, bowl, etc
◦ (70) strainer
◦ (71) frying pan (skillet)
◦ (72) bowl: a dish for serving food that is round, open at the top, and has no handles (please do not confuse with a cup, which usually has a handle and is used for serving drinks)
◦ (73) salt or pepper shaker: a shaker with a perforated top for sprinkling salt or pepper
◦ (74) plate rack
◦ (75) spatula: a turner with a narrow flexible blade
◦ (76) ladle: a spoon-shaped vessel with a long handle; frequently used to transfer liquids from one container to another
◦ (77) refrigerator, icebox
• furniture (including benches)
◦ (78) bookshelf: a shelf on which to keep books
◦ (79) baby bed: small bed for babies, enclosed by sides to prevent baby from falling
◦ (80) filing cabinet: office furniture consisting of a container for keeping papers in order
◦ (81) bench (a long seat for several people, typically made of wood or stone)
◦ (82) chair: a raised piece of furniture for one person to sit on; please do not confuse with benches or sofas, which are made for more people
◦ (83) sofa, couch: upholstered seat for more than one person; please do not confuse with benches (which are made of wood or stone) or with chairs (which are for just one person)
◦ (84) table
• clothing, article of clothing: a covering designed to be worn on a person's body
◦ (85) diaper: garment consisting of a folded cloth drawn up between the legs and fastened at the waist; worn by infants to catch excrement
◦ swimming attire: clothes used for swimming or bathing (swim suits, swim trunks, bathing caps)
◦ (86) swimming trunks: swimsuit worn by men while swimming
◦ (87) bathing cap, swimming cap: a cap worn to keep hair dry while swimming or showering
◦ (88) maillot: a woman's one-piece bathing suit
◦ necktie: a man's formal article of clothing worn around the neck (including bow ties)
◦ (89) bow tie: a man's tie that ties in a bow
◦ (90) tie: a long piece of cloth worn for decorative purposes around the neck or shoulders, resting under the shirt collar and knotted at the throat (NOT a bow tie)
◦ headdress, headgear: clothing for the head (hats, helmets, bathing caps, etc)
◦ (87) bathing cap, swimming cap: a cap worn to keep hair dry while swimming or showering
◦ (91) hat with a wide brim
◦ (92) helmet: protective headgear made of hard material to resist blows
◦ (93) miniskirt, mini: a very short skirt
◦ (94) brassiere, bra: an undergarment worn by women to support their breasts
◦ (95) sunglasses
• living organism (other than people): dogs, snakes, fish, insects, sea urchins, starfish, etc.
◦ living organism which can fly
◦ (96) bee
◦ (97) dragonfly
◦ (98) ladybug
◦ (99) butterfly
◦ (100) bird
◦ living organism which cannot fly (please don't include humans)
◦ living organism with 2 or 4 legs (please don't include humans):
◦ mammals (but please do not include humans)
◦ feline (cat-like) animal: cat, tiger or lion
◦ (101) domestic cat
◦ (102) tiger
◦ (103) lion
◦ canine (dog-like animal): dog, hyena, fox or wolf
◦ (104) dog, domestic dog, canis familiaris
◦ (105) fox: wild carnivorous mammal with pointed muzzle and ears and a bushy tail (please do not confuse with dogs)
◦ animals with hooves: camels, elephants, hippos, pigs, sheep, etc
◦ (106) elephant
◦ (107) hippopotamus, hippo
◦ (108) camel
◦ (109) swine: pig or boar
◦ (110) sheep: woolly animal, males have large spiraling horns (please do not confuse with antelope, which have long legs)
◦ (111) cattle: cows or oxen (domestic bovine animals)
◦ (112) zebra
◦ (113) horse
◦ (114) antelope: a graceful animal with long legs and horns directed upward and backward
◦ (115) squirrel
◦ (116) hamster: short-tailed burrowing rodent with large cheek pouches
◦ (117) otter
◦ (118) monkey
◦ (119) koala bear
◦ (120) bear (other than pandas)
◦ (121) skunk (mammal known for its ability to spray a liquid with a strong odor; they may have a single thick stripe across back and tail, two thinner stripes, or a series of white spots and broken stripes)
◦ (122) rabbit
◦ (123) giant panda: an animal characterized by its distinct black and white markings
◦ (124) red panda: reddish-brown Old World raccoon-like carnivore
◦ (125) frog, toad
◦ (126) lizard: please do not confuse with snake (lizards have legs)
◦ (127) turtle
◦ (128) armadillo
◦ (129) porcupine, hedgehog
◦ living organism with 6 or more legs: lobster, scorpion, insects, etc.
◦ (130) lobster: large marine crustaceans with long bodies and muscular tails; three of their five pairs of legs have claws
◦ (131) scorpion
◦ (132) centipede: an arthropod having a flattened body of 15 to 173 segments, each with a pair of legs, the foremost pair being modified as prehensors
◦ (133) tick (a small creature with 4 pairs of legs which lives on the blood of mammals and birds)
◦ (134) isopod: a small crustacean with seven pairs of legs adapted for crawling
◦ (135) ant
◦ living organism without legs: fish, snake, seal, etc. (please don't include plants)
◦ living organism that lives in water: seal, whale, fish, sea cucumber, etc.
◦ (136) jellyfish
◦ (137) starfish, sea star
◦ (138) seal
◦ (139) whale
◦ (140) ray: a marine animal with a horizontally flattened body and enlarged winglike pectoral fins with gills on the underside
◦ (141) goldfish: small golden or orange-red fishes
◦ living organism that slides on land: worm, snail, snake
◦ (142) snail
◦ (143) snake: please do not confuse with lizard (snakes do not have legs)
• vehicle: any object used to move people or objects from place to place
◦ a vehicle with wheels
◦ (144) golfcart, golf cart
◦ (145) snowplow: a vehicle used to push snow from roads
◦ (146) motorcycle (or moped)
◦ (147) car, automobile (not a golf cart or a bus)
◦ (148) bus: a vehicle carrying many passengers; used for public transport
◦ (149) train
◦ (150) cart: a heavy open wagon usually having two wheels and drawn by an animal
◦ (151) bicycle, bike: a two wheeled vehicle moved by foot pedals
◦ (152) unicycle, monocycle
◦ a vehicle without wheels (snowmobile, sleighs)
◦ (153) snowmobile: tracked vehicle for travel on snow
◦ (154) watercraft (such as ship or boat): a craft designed for water transportation
◦ (155) airplane: an aircraft powered by propellers or jets
• cosmetics: toiletry designed to beautify the body
◦ (156) face powder
◦ (157) perfume, essence (usually comes in a smaller bottle than hair spray)
◦ (158) hair spray
◦ (159) cream, ointment, lotion
◦ (160) lipstick, lip rouge
• carpentry items: items used in carpentry, including nails, hammers, axes, screwdrivers, drills, chain saws, etc
◦ (161) chain saw, chainsaw
◦ (162) nail: pin-shaped with a head on one end and a point on the other
◦ (163) axe: a sharp tool often used to cut trees/logs
◦ (164) hammer: a blunt hand tool used to drive nails in or break things apart (please do not confuse with axe, which is sharp)
◦ (165) screwdriver
◦ (166) power drill: a power tool for drilling holes into hard materials
• school supplies: rulers, erasers, pencil sharpeners, pencil boxes, binders
◦ (167) ruler, rule: measuring stick consisting of a strip of wood or metal or plastic with a straight edge that is used for drawing straight lines and measuring lengths
◦ (168) rubber eraser, rubber, pencil eraser
◦ (169) pencil sharpener
◦ (170) pencil box, pencil case
◦ (171) binder, ring-binder
• sports items: items used to play sports or in the gym (such as skis, racquets, gymnastics bars, bows, punching bags, balls)
◦ (172) bow: weapon for shooting arrows, composed of a curved piece of resilient wood with a taut cord to propel the arrow
◦ (173) puck, hockey puck: vulcanized rubber disk 3 inches in diameter that is used instead of a ball in ice hockey
◦ (174) ski
◦ (175) racket, racquet
◦ gymnastic equipment: parallel bars, high beam, etc
◦ (176) balance beam: a horizontal bar used for gymnastics which is raised from the floor and wide enough to walk on
◦ (177) horizontal bar, high bar: used for gymnastics; gymnasts grip it with their hands (please do not confuse with balance beam, which is wide enough to walk on)
◦ ball
◦ (178) golf ball
◦ (179) baseball
◦ (180) basketball
◦ (181) croquet ball
◦ (182) soccer ball
◦ (183) ping-pong ball
◦ (184) rugby ball
◦ (185) volleyball
◦ (186) tennis ball
◦ (187) punching bag, punch bag, punching ball, punchball
◦ (188) dumbbell: an exercising weight; two spheres connected by a short bar that serves as a handle
• liquid container: vessels which commonly contain liquids such as bottles, cans, etc.
◦ (189) pitcher: a vessel with a handle and a spout for pouring
◦ (190) beaker: a flatbottomed jar made of glass or plastic; used for chemistry
◦ (191) milk can
◦ (192) soap dispenser
◦ (193) wine bottle
◦ (194) water bottle
◦ (195) cup or mug (usually with a handle and usually cylindrical)
• bag
◦ (196) backpack: a bag carried by a strap on your back or shoulder
◦ (197) purse: a small bag for carrying money
◦ (198) plastic bag
• (199) person
• (200) flower pot: a container in which plants are cultivated

Appendix E Modification to bounding box system for object detection

The bounding box annotation system described in Section 3.2.1 is used for annotating images for both the single-object localization dataset and the object detection dataset. However, two additional manual post-processing steps are needed to ensure accuracy in the object detection scenario:
Ambiguous objects. The first common source of error was that workers were not able to accurately differentiate some object classes during annotation. Some commonly confused labels were seal and sea otter, backpack and purse, banjo and guitar, violin and cello, brass instruments (trumpet, trombone, french horn and brass), flute and oboe, and ladle and spatula. Despite our best efforts (providing positive and negative example images in the annotation task, and adding text explanations to alert the user to the distinction between these categories), these errors persisted.

In the single-object localization setting, this problem was not as prominent, for two reasons. First, the way the data was collected imposed a strong prior on the object class which was present. Second, since only one object category needed to be annotated per image, ambiguous images could be discarded: for example, if workers could not agree on whether or not a trumpet was in fact present, the image could simply be removed. In contrast, for the object detection setting consensus had to be reached for all target categories on all images.

To fix this problem, once bounding box annotations were collected we manually looked through all cases where the bounding boxes for two different object classes had significant overlap with each other (about 3% of the collected boxes). About a quarter of these boxes were found to correspond to incorrect objects and were removed. Crowdsourcing this post-processing step (with very stringent accuracy constraints) would be possible, but it occurred in few enough cases that it was faster (and more accurate) to do this in-house.

Duplicate annotations. The second common source of error was duplicate bounding boxes drawn on the same object instance. Despite instructions not to draw more than one bounding box around the same object instance, and constraints in the annotation UI enforcing at least a 5-pixel difference between different bounding boxes, these errors persisted. One reason was that sometimes the initial bounding box was not perfect and subsequent labelers drew a slightly improved alternative.

This type of error was also present in the single-object localization scenario but was not a major cause for concern: a duplicate bounding box is a slightly perturbed but still correct positive example, and single-object localization is only concerned with correctly localizing one object instance. For the detection task, algorithms are evaluated on their ability to localize every object instance and are penalized for duplicate detections, so it is imperative that these labeling errors are corrected (even if they only appear in about 0.6% of cases).

Approximately 1% of bounding boxes were found to have significant overlap of more than 50% with another bounding box of the same object class. We again manually verified all of these cases in-house. In approximately 40% of the cases the two bounding boxes correctly corresponded to different people in a crowd, to stacked plates, or to musical instruments nearby in an orchestra. In the other 60% of cases one of the boxes was randomly removed.
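Both of these in-house checks amount to flagging pairs of boxes on the same image whose overlap exceeds a threshold. The sketch below illustrates one way such a flagging pass could be written; the 50% same-class overlap threshold follows the text above, while the cross-class threshold, the data layout, and the function names are illustrative assumptions rather than the actual annotation pipeline.

```python
from itertools import combinations

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def flag_suspicious_pairs(annotations, same_class_thresh=0.5, cross_class_thresh=0.5):
    # `annotations` is a list of (class_name, box) tuples for a single image.
    # Same-class pairs above `same_class_thresh` are potential duplicates;
    # cross-class pairs above `cross_class_thresh` are potential label
    # confusions. Only the 50% same-class overlap is stated in the text;
    # the cross-class threshold here is an assumption.
    flagged = []
    for (cls_a, box_a), (cls_b, box_b) in combinations(annotations, 2):
        overlap = iou(box_a, box_b)
        if cls_a == cls_b and overlap > same_class_thresh:
            flagged.append(("possible duplicate", cls_a, box_a, box_b))
        elif cls_a != cls_b and overlap > cross_class_thresh:
            flagged.append(("possible label confusion", cls_a + "/" + cls_b, box_a, box_b))
    return flagged

# Example: a duplicated violin box and an ambiguous seal / sea otter pair.
image_annotations = [("violin", (30, 40, 120, 200)), ("violin", (32, 38, 118, 202)),
                     ("seal", (300, 220, 420, 300)), ("sea otter", (305, 225, 415, 295))]
for item in flag_suspicious_pairs(image_annotations):
    print(item)
```

In the actual procedure, every flagged pair was then resolved manually rather than automatically.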
These verification steps complete the annotation procedure of bounding boxes around every instance of every object class in the validation and test sets, and in a subset of training images, for the detection task.

Training set annotation. With the optimized algorithm of Section 3.3.3 we fully annotated the validation and test sets. However, annotating all training images with all target object classes was still a budget challenge. Positive training images taken from the single-object localization dataset already had bounding box annotations of all instances of one object class on each image. We extended the existing annotations to the detection dataset by making two modifications. First, we corrected any bounding box omissions resulting from merging fine-grained categories: i.e., if an image belonged to the "dalmatian" category and all instances of "dalmatian" were annotated with bounding boxes for single-object localization, we ensured that all remaining "dog" instances were also annotated for the object detection task. Second, we collected significantly more training data for the person class because the existing annotation set was not diverse enough to be representative (the only people categories in the single-object localization task are scuba diver, groom, and ballplayer). To compensate, we additionally annotated people in a large fraction of the existing training set images.

Appendix F Competition protocol

Competition format. At the beginning of the competition period each year we release the new training/validation/test images, the training/validation annotations, and the competition specification for the year. We then specify a deadline for submission, usually approximately four months after the release of the data. Teams are asked to upload a text file of their predicted annotations on the test images to a provided server by this deadline. We then evaluate all submissions and release the results.

For every task we released code that takes a text file of automatically generated image annotations, compares it with the ground truth annotations, and returns a quantitative measure of algorithm accuracy. Teams can use this code to evaluate their performance on the validation data.
As described in (Everingham et al., 2014), there are three options for measuring performance on test data: (i) release test images and annotations, and allow participants to assess performance themselves; (ii) release test images but not test annotations: participants submit results and organizers assess performance; (iii) release neither test images nor annotations: participants submit software and organizers run it on new data and assess performance. In line with the PASCAL VOC choice, we opted for option (ii). Option (i) allows too much leeway in overfitting to the test data; option (iii) is infeasible, especially given the scale of our test set (40K-100K images).

We released the ILSVRC2010 test annotations for the image classification task, but all other test annotations have remained hidden to discourage fine-tuning of results on the test data.

Evaluation protocol after the challenge. After the challenge period we set up an automatic evaluation server that researchers can use throughout the year to continue evaluating their algorithms against the ground truth test annotations. We limit teams to two submissions per week to discourage parameter tuning on the test data, and in practice we have never had a problem with researchers abusing the system.
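To make the flat-file protocol concrete, the following is a minimal sketch of the kind of validation-set scoring the released evaluation code performs for the classification task: it reads five predicted class IDs per image and reports the top-5 error. The file format, the line-ordering convention, and the function name are our own illustrative assumptions; the official development kit defines the exact submission format and also covers the localization and detection metrics.

```python
def top5_error(predictions_path, ground_truth_path):
    # Assumes line k of each file refers to the k-th test image: the
    # predictions file holds five space-separated class IDs per line,
    # and the ground truth file holds one class ID per line. This mirrors
    # the spirit of the released evaluation code, not its exact interface.
    with open(predictions_path) as f:
        predictions = [line.split()[:5] for line in f if line.strip()]
    with open(ground_truth_path) as f:
        labels = [line.strip() for line in f if line.strip()]
    if len(predictions) != len(labels):
        raise ValueError("prediction and ground truth files differ in length")
    errors = sum(1 for preds, gt in zip(predictions, labels) if gt not in preds)
    return errors / len(labels)

# Example usage on hypothetical validation files:
# print(top5_error("my_submission.txt", "val_ground_truth.txt"))
```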
Bibliography
Ahonen, T., Hadid, A., and Pietikäinen, M. (2006). Face description with local binary patterns: Application to face recognition. PAMI, 28.
Alexe, B., Deselaers, T., and Ferrari, V. (2012). Measuring the objectness of image windows. In PAMI.
Arandjelovic, R. and Zisserman, A. (2012). Three things everyone should know to improve object retrieval. In CVPR.
Arbelaez, P., Maire, M., Fowlkes, C., and Malik, J. (2011). Contour detection and hierarchical image segmentation. IEEE TPAMI, 33.
Arbeláez, P., Pont-Tuset, J., Barron, J., Marques, F., and Malik, J. (2014). Multiscale combinatorial grouping. In Computer Vision and Pattern Recognition.
Batra, D., Agrawal, H., Banik, P., Chavali, N., Mathialagan, C. S., and Alfadda, A. (2013). CloudCV: Large-scale distributed computer vision as a cloud service.
Bell, S., Upchurch, P., Snavely, N., and Bala, K. (2013). OpenSurfaces: A richly annotated catalog of surface appearance. In ACM Transactions on Graphics (SIGGRAPH).
Berg, A., Farrell, R., Khosla, A., Krause, J., Fei-Fei, L., Li, J., and Maji, S. (2013). Fine-Grained Competition. https://sites.google.com/site/fgcomp2013/.
Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Return of the devil in the details: Delving deep into convolutional nets. CoRR, abs/1405.3531.
Chen, Q., Song, Z., Huang, Z., Hua, Y., and Yan, S. (2014). Contextualizing object detection and classification. volume PP.
Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., and Singer, Y. (2006). Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585.
Criminisi, A. (2004). Microsoft Research Cambridge (MSRC) object recognition image database (version 2.0). http://research.microsoft.com/vision/cambridge/recognition.
Dean, T., Ruzon, M., Segal, M., Shlens, J., Vijayanarasimhan, S., and Yagnik, J. (2013). Fast, accurate detection of 100,000 object classes on a single machine. In CVPR.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: a large-scale hierarchical image database. In CVPR.
Deng, J., Russakovsky, O., Krause, J., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2014). Scalable multi-label annotation. In CHI.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2013). DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531.
Dubout, C. and Fleuret, F. (2012). Exact acceleration of linear object detectors. In Proceedings of the European Conference on Computer Vision (ECCV).
Everingham, M., Eslami, S. M. A., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. (2014). The Pascal Visual Object Classes (VOC) challenge: a retrospective. IJCV.
Everingham, M., Gool, L. V., Williams, C., Winn, J., and Zisserman, A. (2005-2012). PASCAL Visual Object Classes Challenge (VOC). http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. (2010). The Pascal Visual Object Classes (VOC) challenge. IJCV, 88(2):303–338.
Fei-Fei, L., Fergus, R., and Perona, P. (2004). Learning generative visual models from few examples: an incremental Bayesian approach tested on 101 object categories. In CVPR.
Fei-Fei, L. and Perona, P. (2005). A Bayesian hierarchical model for learning natural scene categories. In CVPR.
Felzenszwalb, P., Girshick, R., McAllester, D., and Ramanan, D. (2010). Object detection with discriminatively trained part based models. PAMI, 32.
Frome, A., Corrado, G., Shlens, J., Bengio, S., Dean, J., Ranzato, M., and Mikolov, T. (2013). DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems (NIPS).
Geiger, A., Lenz, P., Stiller, C., and Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR).
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR.
Girshick, R. B., Donahue, J., Darrell, T., and Malik, J. (2013). Rich feature hierarchies for accurate object detection and semantic segmentation (v4). CoRR.
Gould, S., Fulton, R., and Koller, D. (2009). Decomposing a scene into geometric and semantically consistent regions. In ICCV.
Graham, B. (2013). Sparse arrays of signatures for online character recognition. CoRR.
Griffin, G., Holub, A., and Perona, P. (2007). Caltech-256 object category dataset. Technical Report 7694, Caltech.
Harada, T. and Kuniyoshi, Y. (2012). Graphical gaussian vector for image categorization. In NIPS.
Harel, J., Koch, C., and Perona, P. (2007). Graph-based visual saliency. In NIPS.
He, K., Zhang, X., Ren, S., and Sun, J. (2014). Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.
Hoiem, D., Chodpathumwan, Y., and Dai, Q. (2012). Diagnosing error in object detectors. In ECCV.
Howard, A. (2014). Some improvements on deep convolutional neural network based image classification. ICLR.
Huang, G. B., Ramesh, M., Berg, T., and Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
Iandola, F. N., Moskewicz, M. W., Karayev, S., Girshick, R. B., Darrell, T., and Keutzer, K. (2014). DenseNet: Implementing efficient convnet descriptor pyramids. CoRR.
Jia, Y. (2013). Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/.
Jojic, N., Frey, B. J., and Kannan, A. (2003). Epitomic analysis of appearance and shape. In ICCV.
Kanezaki, A., Inaba, S., Ushiku, Y., Yamashita, Y., Muraoka, H., Kuniyoshi, Y., and Harada, T. (2014). Hard negative classes for multiple object detection. In ICRA.
Khosla, A., Jayadevaprakash, N., Yao, B., and Fei-Fei, L. (2011). Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, CVPR.
Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In NIPS.
Kuettel, D., Guillaumin, M., and Ferrari, V. (2012). Segmentation propagation in ImageNet. In ECCV.
Lazebnik, S., Schmid, C., and Ponce, J. (2006). Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR.
Lin, M., Chen, Q., and Yan, S. (2014a). Network in network. ICLR.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014b). Microsoft COCO: Common objects in context. In ECCV.
Lin, Y., Lv, F., Cao, L., Zhu, S., Yang, M., Cour, T., Yu, K., and Huang, T. (2011). Large-scale image classification: Fast feature extraction and SVM training. In CVPR.
Liu, C., Yuen, J., and Torralba, A. (2011). Nonparametric scene parsing via label transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12).
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110.
Maji, S. and Malik, J. (2009). Object detection using a max-margin hough transform. In CVPR.
Manen, S., Guillaumin, M., and Van Gool, L. (2013). Prime object proposals with randomized Prim's algorithm. In ICCV.
Mensink, T., Verbeek, J., Perronnin, F., and Csurka, G. (2012). Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. ICLR.
Miller, G. A. (1995). WordNet: A lexical database for English. Commun. ACM, 38(11).
Oliva, A. and Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV.
Ordonez, V., Deng, J., Choi, Y., Berg, A. C., and Berg, T. L. (2013). From large scale image categorization to entry-level categories. In IEEE International Conference on Computer Vision (ICCV).
Ouyang, W., Luo, P., Zeng, X., Qiu, S., Tian, Y., Li, H., Yang, S., Wang, Z., Xiong, Y., Qian, C., Zhu, Z., Wang, R., Loy, C. C., Wang, X., and Tang, X. (2014). DeepID-Net: multi-stage and deformable deep convolutional neural networks for object detection. CoRR, abs/1409.3505.
Ouyang, W. and Wang, X. (2013). Joint deep learning for pedestrian detection. In ICCV.
Papandreou, G. (2014). Deep epitomic convolutional neural networks. CoRR.
Papandreou, G., Chen, L.-C., and Yuille, A. L. (2014). Modeling image patches with a generic dictionary of mini-epitomes.
Perronnin, F., Akata, Z., Harchaoui, Z., and Schmid, C. (2012). Towards good practice in large-scale learning for image classification. In CVPR.
Perronnin, F. and Dance, C. R. (2007). Fisher kernels on visual vocabularies for image categorization. In CVPR.
Perronnin, F., Sánchez, J., and Mensink, T. (2010). Improving the Fisher kernel for large-scale image classification. In ECCV (4).
Russakovsky, O., Deng, J., Huang, Z., Berg, A., and Fei-Fei, L. (2013). Detecting avocados to zucchinis: what have we done, and where are we going? In ICCV.
Russell, B., Torralba, A., Murphy, K., and Freeman, W. T. (2007). LabelMe: a database and web-based tool for image annotation. IJCV.
Sanchez, J. and Perronnin, F. (2011). High-dimensional signature compression for large-scale image classification. In CVPR.
Sanchez, J., Perronnin, F., and de Campos, T. (2012). Modeling spatial layout of images beyond spatial pyramids. In PRL.
Scheirer, W., Kumar, N., Belhumeur, P. N., and Boult, T. E. (2012). Multi-attribute spaces: Calibration for attribute fusion and similarity search. In CVPR.
Schmidhuber, J. (2012). Multi-column deep neural networks for image classification. In CVPR.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. (2013). OverFeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229.
Sheng, V. S., Provost, F., and Ipeirotis, P. G. (2008). Get another label? Improving data quality and data mining using multiple, noisy labelers. In SIGKDD.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep Fisher networks for large-scale image classification. In NIPS.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556.
Sorokin, A. and Forsyth, D. (2008). Utility data annotation with Amazon Mechanical Turk. In InterNet08.
Su, H., Deng, J., and Fei-Fei, L. (2012). Crowdsourcing annotations for visual object detection. In AAAI Human Computation Workshop.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., and Rabinovich, A. (2014). Going deeper with convolutions. Technical report.
Tang, Y. (2013). Deep learning using support vector machines. CoRR, abs/1306.0239.
Thorpe, S., Fize, D., Marlot, C., et al. (1996). Speed of processing in the human visual system. Nature, 381(6582):520–522.
Torralba, A. and Efros, A. A. (2011). Unbiased look at dataset bias. In CVPR'11.
Torralba, A., Fergus, R., and Freeman, W. (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. In PAMI.
Uijlings, J., van de Sande, K., Gevers, T., and Smeulders, A. (2013). Selective search for object recognition. International Journal of Computer Vision.
Urtasun, R., Fergus, R., Hoiem, D., Torralba, A., Geiger, A., Lenz, P., Silberman, N., Xiao, J., and Fidler, S. (2013-2014). Reconstruction meets recognition challenge. http://ttic.uchicago.edu/~rurtasun/rmrc/.
van de Sande, K. E. A., Gevers, T., and Snoek, C. G. M. (2010). Evaluating color descriptors for object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1582–1596.
van de Sande, K. E. A., Gevers, T., and Snoek, C. G. M. (2011a). Empowering visual categorization with the GPU. IEEE Transactions on Multimedia, 13(1):60–70.
van de Sande, K. E. A., Snoek, C. G. M., and Smeulders, A. W. M. (2014). Fisher and VLAD with FLAIR. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
van de Sande, K. E. A., Uijlings, J. R. R., Gevers, T., and Smeulders, A. W. M. (2011b). Segmentation as selective search for object recognition. In ICCV.
Vittayakorn, S. and Hays, J. (2011). Quality assessment for crowdsourced object annotations. In BMVC.
von Ahn, L. and Dabbish, L. (2005). ESP: Labeling images with a computer game. In AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors.
Vondrick, C., Patterson, D., and Ramanan, D. (2012). Efficiently scaling up crowdsourced video annotation. International Journal of Computer Vision.
Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. (2013). Regularization of neural networks using DropConnect. In Proc. International Conference on Machine Learning (ICML'13).
Wang, J., Yang, J., Yu, K., Lv, F., Huang, T., and Gong, Y. (2010). Locality-constrained linear coding for image classification. In CVPR.
Wang, M., Xiao, T., Li, J., Hong, C., Zhang, J., and Zhang, Z. (2014). Minerva: A scalable and highly efficient training platform for deep learning. In APSys.
Wang, X., Yang, M., Zhu, S., and Lin, Y. (2013). Regionlets for generic object detection. In ICCV.
Welinder, P., Branson, S., Belongie, S., and Perona, P. (2010). The multidimensional wisdom of crowds. In NIPS.
Xiao, J., Hays, J., Ehinger, K., Oliva, A., and Torralba, A. (2010). SUN database: Large-scale scene recognition from Abbey to Zoo. CVPR.
Yang, J., Yu, K., Gong, Y., and Huang, T. (2009). Linear spatial pyramid matching using sparse coding for image classification. In CVPR.
Yao, B., Yang, X., and Zhu, S.-C. (2007). Introduction to a large scale general purpose ground truth dataset: methodology, annotation tool, and benchmarks.
Zeiler, M. D. and Fergus, R. (2013). Visualizing and understanding convolutional networks. CoRR, abs/1311.2901.
Zeiler, M. D., Taylor, G. W., and Fergus, R. (2011). Adaptive deconvolutional networks for mid and high level feature learning. In ICCV.
Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., and Oliva, A. (2014). Learning deep features for scene recognition using places database. NIPS.
Zhou, X., Yu, K., Zhang, T., and Huang, T. (2010). Image classification using super-vector coding of local image descriptors. In ECCV.