Anderson Bottom-Up and Top-Down CVPR 2018 Paper
1. Introduction
In conventional visual attention mechanisms, the attended input regions correspond to a uniform grid of equally sized and shaped neural receptive fields – irrespective of the content of the image. To generate more human-like captions and question answers, objects and other salient image regions are a much more natural basis for attention [10, 35].

In this paper we propose a combined bottom-up and top-down visual attention mechanism. The bottom-up mechanism proposes a set of salient image regions, with each region represented by a pooled convolutional feature vector. Practically, we implement bottom-up attention using Faster R-CNN [32], which represents a natural expression of a bottom-up attention mechanism. The top-down mechanism uses task-specific context to predict an attention distribution over the image regions. The attended feature vector is then computed as a weighted average of image features over all regions.

We evaluate the impact of combining bottom-up and top-down attention on two tasks. We first present an image captioning model that takes multiple glimpses of salient image regions during caption generation. Empirically, we find that the inclusion of bottom-up attention has a significant positive benefit for image captioning. Our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9 respectively (outperforming all published and unpublished work at the time). Demonstrating the broad applicability of the method, we additionally present a VQA model using the same bottom-up attention features. Using this model we obtain first place in the 2017 VQA Challenge, achieving 70.3% overall accuracy on the VQA v2.0 test-standard server. Code, models and pre-computed image features are available from the project website (http://www.panderson.me/up-down-attention).

2. Related Work

A large number of attention-based deep neural networks have been proposed for image captioning and VQA. Typically, these models can be characterized as top-down approaches, with context provided by a representation of a partially-completed caption in the case of image captioning [33, 27, 47, 45], or a representation of the question in the case of VQA [11, 28, 44, 46, 49]. In each case attention is applied to the output of one or more layers of a CNN, by predicting a weighting for each spatial location in the CNN output. However, determining the optimal number of image regions invariably requires an unwinnable trade-off between coarse and fine levels of detail. Furthermore, the arbitrary positioning of the regions with respect to image content may make it more difficult to detect objects that are poorly aligned to regions and to bind visual concepts associated with the same object.

Comparatively few previous works have considered applying attention to salient image regions. We are aware of two papers. Jin et al. [18] use selective search [41] to identify salient image regions, which are filtered with a classifier then resized and CNN-encoded as input to an image captioning model with attention. The Areas of Attention captioning model [30] uses either edge boxes [50] or spatial transformer networks [17] to generate image features, which are processed using an attention model based on three bi-linear pairwise interactions [30]. In this work, rather than using hand-crafted or differentiable region proposals [41, 50, 17], we leverage Faster R-CNN [32], establishing a closer link between vision and language tasks and recent progress in object detection. With this approach we are able to pre-train our region proposals on object detection datasets. Conceptually, the advantages should be similar to pre-training visual representations on ImageNet [34] and leveraging significantly larger cross-domain knowledge. We additionally apply our method to VQA, establishing the broad applicability of our approach.

3. Approach

Given an image I, both our image captioning model and our VQA model take as input a possibly variably-sized set of k image features, V = {v_1, ..., v_k}, v_i ∈ R^D, such that each image feature encodes a salient region of the image. The spatial image features V can be variously defined as the output of our bottom-up attention model or, following standard practice, as the spatial output layer of a CNN. We describe our approach to implementing a bottom-up attention model in Section 3.1. In Section 3.2 we outline the architecture of our image captioning model and in Section 3.3 we outline our VQA model. We note that for the top-down attention component, both models use simple one-pass attention mechanisms, as opposed to the more complex schemes of recent models such as stacked, multi-headed, or bidirectional attention [46, 16, 20, 28] that could also be applied.
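To make the input representation concrete, the following minimal sketch (our illustration, not the authors' released code) shows soft top-down attention over such a feature set V: a task-specific context vector scores each region, the scores are normalized with a softmax, and the attended feature is the resulting weighted average. The additive scoring function and the dimensions used here are assumptions for illustration only; the scorers actually used by the captioning and VQA models are defined in the sections below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTopDownAttention(nn.Module):
    """Illustrative soft top-down attention over image features
    V = {v_1, ..., v_k}, v_i in R^D, given a task-specific context vector."""

    def __init__(self, feat_dim: int, context_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.proj_v = nn.Linear(feat_dim, hidden_dim)
        self.proj_c = nn.Linear(context_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, V: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # V: (k, feat_dim), context: (context_dim,)
        e = self.score(torch.tanh(self.proj_v(V) + self.proj_c(context)))  # (k, 1)
        alpha = F.softmax(e.squeeze(-1), dim=0)          # attention distribution over regions
        return (alpha.unsqueeze(-1) * V).sum(dim=0)      # weighted average of image features

# Example: k = 36 regions with D = 2048, and a 1000-d context vector.
attend = SoftTopDownAttention(feat_dim=2048, context_dim=1000)
v_hat = attend(torch.randn(36, 2048), torch.randn(1000))  # -> shape (2048,)
```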
3.1. Bottom-Up Attention Model

The definition of spatial image features V is generic. However, in this work we define spatial regions in terms of bounding boxes and implement bottom-up attention using Faster R-CNN [32]. Faster R-CNN is an object detection model designed to identify instances of objects belonging to certain classes and localize them with bounding boxes. Other region proposal networks could also be trained as an attentive mechanism [31, 25].

Faster R-CNN detects objects in two stages. The first stage, described as a Region Proposal Network (RPN), predicts object proposals. A small network is slid over features at an intermediate level of a CNN. At each spatial location the network predicts a class-agnostic objectness score and a bounding box refinement for anchor boxes of multiple scales and aspect ratios. Using greedy non-maximum suppression with an intersection-over-union (IoU) threshold, the top box proposals are selected as input to the second stage. In the second stage, region of interest (RoI) pooling is used to extract a small feature map (e.g. 14 × 14) for each box proposal. These feature maps are then batched together as input to the final layers of the CNN. The final output of the model consists of a softmax distribution over class labels and class-specific bounding box refinements for each box proposal.

In this work, we use Faster R-CNN in conjunction with the ResNet-101 [13] CNN. To generate an output set of image features V for use in image captioning or VQA, we take the final output of the model and perform non-maximum suppression for each object class using an IoU threshold. We then select all regions where any class detection probability exceeds a confidence threshold. For each selected region i, v_i is defined as the mean-pooled convolutional feature from this region, such that the dimension D of the image feature vectors is 2048. Used in this fashion, Faster R-CNN effectively functions as a ‘hard’ attention mechanism, as only a relatively small number of image bounding box features are selected from a large number of possible configurations.
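The region-selection step just described (per-class non-maximum suppression, a confidence threshold on the class detection probabilities, and mean-pooling of each selected region's convolutional feature map) can be sketched as follows. This is a simplified rendering rather than the released implementation; the detector output shapes and the two threshold values are assumptions.

```python
import torch
from torchvision.ops import nms

def select_bottom_up_features(boxes, class_probs, roi_feats,
                              nms_iou=0.7, conf_thresh=0.2):
    """Turn Faster R-CNN outputs into the image feature set V.

    boxes:       (n, 4) final box proposals
    class_probs: (n, C) class probabilities per proposal (class 0 = background)
    roi_feats:   (n, 2048, 14, 14) RoI-pooled convolutional features
    The IoU and confidence thresholds are illustrative placeholders.
    """
    keep = torch.zeros(boxes.size(0), dtype=torch.bool)
    for c in range(1, class_probs.size(1)):
        # Per-class greedy NMS, then keep regions whose detection
        # probability for this class exceeds the confidence threshold.
        kept = nms(boxes, class_probs[:, c], nms_iou)
        keep[kept] = keep[kept] | (class_probs[kept, c] > conf_thresh)
    # Mean-pool each selected 14x14 feature map to a 2048-d vector v_i.
    V = roi_feats[keep].mean(dim=(2, 3))   # (k, 2048)
    return V, boxes[keep]
```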
To pretrain the bottom-up attention model, we first initialize Faster R-CNN with ResNet-101 pretrained for classification on ImageNet [34]. We then train on Visual Genome [21] data. To aid the learning of good feature representations, we add an additional training output for predicting attribute classes (in addition to object classes). To predict attributes for region i, we concatenate the mean-pooled convolutional feature v_i with a learned embedding of the ground-truth object class, and feed this into an additional output layer defining a softmax distribution over each attribute class plus a ‘no attributes’ class.

The original Faster R-CNN multi-task loss function contains four components, defined over the classification and bounding box regression outputs for both the RPN and the final object class proposals respectively. We retain these components and add an additional multi-class loss component to train the attribute predictor. In Figure 2 we provide some examples of model output.

Figure 2. Example output from our Faster R-CNN bottom-up attention model. Each bounding box is labeled with an attribute class followed by an object class. Note, however, that in captioning and VQA we utilize only the feature vectors – not the predicted labels.
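The attribute head described above can be sketched as a small additional module. The class counts follow Section 4.1.1, while the embedding size is an assumption of ours; the extra loss term is simply summed with the four original Faster R-CNN components.

```python
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    """Extra output used during pretraining: predicts an attribute class
    (or 'no attributes') for each region from its mean-pooled feature and
    an embedding of the ground-truth object class."""

    def __init__(self, feat_dim=2048, num_objects=1600, num_attributes=400,
                 obj_embed_dim=128):   # embedding size is illustrative
        super().__init__()
        self.obj_embed = nn.Embedding(num_objects, obj_embed_dim)
        # +1 output for the 'no attributes' class.
        self.classifier = nn.Linear(feat_dim + obj_embed_dim, num_attributes + 1)

    def forward(self, region_feats, gt_object_classes):
        # region_feats: (k, 2048); gt_object_classes: (k,) integer class ids
        x = torch.cat([region_feats, self.obj_embed(gt_object_classes)], dim=1)
        return self.classifier(x)   # logits; trained with a multi-class loss

# Total pretraining loss (schematically):
#   loss = rpn_cls + rpn_box + rcnn_cls + rcnn_box + attribute_cross_entropy
```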
3.2. Captioning Model

Given a set of image features V, our proposed captioning model uses a ‘soft’ top-down attention mechanism to weight each feature during caption generation, using the existing partial output sequence as context. This approach is broadly similar to several previous works [33, 27, 45]. However, the particular design choices outlined below make for a relatively simple yet high-performing baseline model. Even without bottom-up attention, our captioning model achieves performance comparable to state-of-the-art on most evaluation metrics (refer to Table 1).

At a high level, the captioning model is composed of two LSTM [15] layers using a standard implementation [9]. In the sections that follow we will refer to the operation of the LSTM over a single time step using the following notation:

h_t = LSTM(x_t, h_{t-1})    (1)

where x_t is the LSTM input vector and h_t is the LSTM output vector. Here we have neglected the propagation of memory cells for notational convenience. We now describe the formulation of the LSTM input vector x_t and the output vector h_t for each layer of the model. The overall captioning model is illustrated in Figure 3.

3.2.1 Top-Down Attention LSTM
Within the captioning model, we characterize the first LSTM layer as a top-down visual attention model, and the second LSTM layer as a language model, indicating each layer with superscripts in the equations that follow. Note that the bottom-up attention model is described in Section 3.1, and in this section its outputs are simply considered as features V. The input vector to the attention LSTM at each time step consists of the previous output of the language LSTM, concatenated with the mean-pooled image feature v̄ = (1/k) Σ_i v_i and an encoding of the previously generated word, given by:

x^1_t = [h^2_{t-1}, v̄, W_e Π_t]    (2)

where W_e is a word embedding matrix for the model vocabulary and Π_t is the one-hot encoding of the input word at timestep t.
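A minimal sketch of this step follows (dimensions are illustrative; the subsequent attention computation and the second, language LSTM are part of the full model but are not reproduced in this excerpt):

```python
import torch
import torch.nn as nn

class TopDownAttentionLSTMStep(nn.Module):
    """One time step of the first (top-down attention) LSTM layer:
    builds the input of Equation 2 and applies Equation 1."""

    def __init__(self, vocab_size, embed_dim=1000, feat_dim=2048, hidden_dim=1000):
        super().__init__()
        self.W_e = nn.Embedding(vocab_size, embed_dim)   # word embedding matrix
        self.attn_lstm = nn.LSTMCell(hidden_dim + feat_dim + embed_dim, hidden_dim)

    def forward(self, V, h2_prev, prev_word_ids, state):
        # V: (batch, k, 2048) image features; h2_prev: previous language-LSTM output
        v_bar = V.mean(dim=1)                                             # mean-pooled image feature
        x1 = torch.cat([h2_prev, v_bar, self.W_e(prev_word_ids)], dim=1)  # Equation 2
        h1, c1 = self.attn_lstm(x1, state)                                # Equation 1
        return h1, (h1, c1)
```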
Figure 4. Overview of the proposed VQA model. A deep neural network implements a joint embedding of the question and image features {v_1, ..., v_k}. These features can be defined as the spatial output of a CNN or, following our approach, generated using bottom-up attention. Output is generated by a multi-label classifier operating over a fixed set of candidate answers. Gray numbers indicate the dimensions of the vector representations between layers. Yellow elements use learned parameters.
3.2.3 Objective

When decoding using beam search, we find that the resulting beam typically contains at least one very high quality caption, although this caption frequently does not have the highest log-probability of the set. In contrast, we observe that very few unrestricted caption samples score higher than the greedily-decoded caption. Using this approach, we complete CIDEr optimization in a single epoch.

3.3. VQA Model

Given a set of spatial image features V, our proposed VQA model also uses a ‘soft’ top-down attention mechanism to weight each feature, using the question representation as context. As illustrated in Figure 4, the proposed model implements the well-known joint multimodal embedding of the question and the image, followed by a regression of scores over a set of candidate answers. This approach has been the basis of numerous previous models [16, 20, 38]. However, as with our captioning model, implementation decisions are important to ensure that this relatively simple model delivers high performance.

The learned non-linear transformations within the network are implemented with gated hyperbolic tangent activations [7]. These are a special case of highway networks [36] that have shown a strong empirical advantage over traditional ReLU or tanh layers. Each of our ‘gated tanh’ layers implements a function f_a : x ∈ R^m → y ∈ R^n with parameters a = {W, W′, b, b′} defined as follows:

ỹ = tanh(W x + b)    (12)
g = σ(W′ x + b′)    (13)
y = ỹ ◦ g    (14)

where σ is the sigmoid activation function, W, W′ ∈ R^{n×m} are learned weights, b, b′ ∈ R^n are learned biases, and ◦ is the Hadamard (element-wise) product. The vector g acts multiplicatively as a gate on the intermediate activation ỹ.

Our proposed approach first encodes each question as the hidden state q of a gated recurrent unit [5] (GRU), with each input word represented using a learned word embedding. Similar to Equation 3, given the output q of the GRU, we generate an unnormalized attention weight a_i for each of the k image features v_i as follows:

a_i = w_a^T f_a([v_i, q])    (15)

where w_a is a learned parameter vector. Equation 4 and Equation 5 (neglecting subscripts t) are used to calculate the normalized attention weight and the attended image feature v̂. The distribution over possible output responses y is given by:

h = f_q(q) ◦ f_v(v̂)    (16)
p(y) = σ(W_o f_o(h))    (17)

where h is a joint representation of the question and the image, and W_o ∈ R^{|Σ|×M} are learned weights.

Due to space constraints, some important aspects of our VQA approach are not detailed here. For full specifics of the VQA model including a detailed exploration of architectures and hyperparameters, refer to Teney et al. [37].
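Equations 12–17 translate fairly directly into code. The sketch below is our illustration rather than the released implementation: the layer sizes are assumptions (apart from the 3,129 candidate answers mentioned in Section 4.1.3), and the attention normalization follows the softmax-and-weighted-sum pattern referenced from the captioning model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedTanh(nn.Module):
    """Gated hyperbolic tangent layer f_a: R^m -> R^n (Equations 12-14)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)       # W, b
        self.gate = nn.Linear(in_dim, out_dim)     # W', b'

    def forward(self, x):
        return torch.tanh(self.fc(x)) * torch.sigmoid(self.gate(x))  # Eqs. 12-14

class UpDownVQAHead(nn.Module):
    """Question/image attention, joint embedding and answer scoring
    (Equations 15-17); hidden sizes are illustrative."""
    def __init__(self, feat_dim=2048, q_dim=512, hidden=512, num_answers=3129):
        super().__init__()
        self.f_a = GatedTanh(feat_dim + q_dim, hidden)
        self.w_a = nn.Linear(hidden, 1, bias=False)          # w_a in Eq. 15
        self.f_q = GatedTanh(q_dim, hidden)
        self.f_v = GatedTanh(feat_dim, hidden)
        self.f_o = GatedTanh(hidden, hidden)
        self.W_o = nn.Linear(hidden, num_answers)

    def forward(self, V, q):
        # V: (k, feat_dim) image features; q: (q_dim,) GRU question encoding
        a = self.w_a(self.f_a(torch.cat([V, q.expand(V.size(0), -1)], dim=1)))  # Eq. 15
        alpha = F.softmax(a.squeeze(-1), dim=0)              # normalized attention weights
        v_hat = (alpha.unsqueeze(-1) * V).sum(dim=0)         # attended image feature
        h = self.f_q(q) * self.f_v(v_hat)                    # Eq. 16
        return torch.sigmoid(self.W_o(self.f_o(h)))          # Eq. 17: answer scores
```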
4. Evaluation

4.1. Datasets

4.1.1 Visual Genome Dataset

We use the Visual Genome [21] dataset to pretrain our bottom-up attention model, and for data augmentation when training our VQA model. The dataset contains 108K images densely annotated with scene graphs containing objects, attributes and relationships, as well as 1.7M visual question answers.

For pretraining the bottom-up attention model, we use only the object and attribute data. We reserve 5K images for validation, and 5K images for future testing, treating the remaining 98K images as training data. As approximately 51K Visual Genome images are also found in the MSCOCO captions dataset [23], we are careful to avoid contamination of our MSCOCO validation and test sets. We ensure that any images found in both datasets are contained in the same split in both datasets.

                       Cross-Entropy Loss                              CIDEr Optimization
                       BLEU-1  BLEU-4  METEOR  ROUGE-L  CIDEr  SPICE   BLEU-1  BLEU-4  METEOR  ROUGE-L  CIDEr  SPICE
SCST:Att2in [33]         -      31.3    26.0    54.3    101.3    -       -      33.3    26.3    55.3    111.4    -
SCST:Att2all [33]        -      30.0    25.9    53.4     99.4    -       -      34.2    26.7    55.7    114.0    -
Ours: ResNet           74.5     33.4    26.1    54.4    105.4  19.2    76.6     34.0    26.5    54.9    111.1  20.2
Ours: Up-Down          77.2     36.2    27.0    56.4    113.5  20.3    79.8     36.3    27.7    56.9    120.1  21.4
Relative Improvement    4%       8%      3%      4%       8%    6%      4%       7%      5%      4%       8%    6%
Table 1. Single-model image captioning performance on the MSCOCO Karpathy test split. Our baseline ResNet model obtains similar
results to SCST [33], the existing state-of-the-art on this test set. Illustrating the contribution of bottom-up attention, our Up-Down model
achieves significant (3–8%) relative gains across all metrics regardless of whether cross-entropy loss or CIDEr optimization is used.
Table 2. Breakdown of SPICE F-scores over various subcategories on the MSCOCO Karpathy test split. Our Up-Down model outperforms
the ResNet baseline at identifying objects, as well as detecting object attributes and the relations between objects.
As the object and attribute annotations consist of freely annotated strings, rather than classes, we perform extensive cleaning and filtering of the training data. Starting from 2,000 object classes and 500 attribute classes, we manually remove abstract classes that exhibit poor detection performance in initial experiments. Our final training set contains 1,600 object classes and 400 attribute classes. Note that we do not merge or remove overlapping classes (e.g. ‘person’, ‘man’, ‘guy’), classes with both singular and plural versions (e.g. ‘tree’, ‘trees’) and classes that are difficult to precisely localize (e.g. ‘sky’, ‘grass’, ‘buildings’).

When training the VQA model, we augment the VQA v2.0 training data with Visual Genome question and answer pairs provided the correct answer is present in the model’s answer vocabulary. This represents about 30% of the available data, or 485K questions.

4.1.2 Microsoft COCO Dataset

To evaluate our proposed captioning model, we use the MSCOCO 2014 captions dataset [23]. For validation of model hyperparameters and offline testing, we use the ‘Karpathy’ splits [19] that have been used extensively for reporting results in prior work. This split contains 113,287 training images with five captions each, and 5K images respectively for validation and testing. Our MSCOCO test server submission is trained on the entire MSCOCO 2014 training and validation set (123K images).

We follow standard practice and perform only minimal text pre-processing, converting all sentences to lower case, tokenizing on white space, and filtering words that do not occur at least five times, resulting in a model vocabulary of 10,010 words. To evaluate caption quality, we use the standard automatic evaluation metrics, namely SPICE [1], CIDEr [42], METEOR [8], ROUGE-L [22] and BLEU [29].

4.1.3 VQA v2.0 Dataset

To evaluate our proposed VQA model, we use the recently introduced VQA v2.0 dataset [12], which attempts to minimize the effectiveness of learning dataset priors by balancing the answers to each question. The dataset, which was used as the basis of the 2017 VQA Challenge (http://www.visualqa.org/challenge.html), contains 1.1M questions with 11.1M answers relating to MSCOCO images.

We perform standard question text preprocessing and tokenization. Questions are trimmed to a maximum of 14 words for computational efficiency. The set of candidate answers is restricted to correct answers in the training set that appear more than 8 times, resulting in an output vocabulary size of 3,129. Our VQA test server submissions are trained on the training and validation sets plus additional questions and answers from Visual Genome. To evaluate answer quality, we report accuracies using the standard VQA metric [2], which takes into account the occasional disagreement between annotators for the ground truth answers.

4.2. ResNet Baseline

To quantify the impact of bottom-up attention, in both our captioning and VQA experiments we evaluate our full model (Up-Down) against prior work as well as an ablated baseline. In each case, the baseline (ResNet) uses a ResNet [13] CNN pretrained on ImageNet [34] to encode each image in place of the bottom-up attention mechanism.

In image captioning experiments, similarly to previous work [33], we encode the full-sized input image with the final convolutional layer of ResNet-101, and use bilinear interpolation to resize the output to a fixed-size spatial representation of 10×10. This is equivalent to the maximum number of spatial regions used in our full model. In VQA experiments, we encode the resized input image with ResNet-200 [14]. In separate experiments we evaluate the effect of varying the size of the spatial output from its original size of 14×14, to 7×7 (using bilinear interpolation) and 1×1 (i.e., mean pooling without attention).
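The fixed spatial grid used by the ResNet baseline can be produced with bilinear interpolation as described; a minimal sketch follows (the 2048-channel input shape is an assumption):

```python
import torch
import torch.nn.functional as F

def resnet_baseline_features(conv_out: torch.Tensor, grid: int = 10) -> torch.Tensor:
    """Resize the final convolutional feature map to a fixed grid x grid
    spatial representation and flatten it into grid*grid pseudo-region
    features, used in place of bottom-up attention (10x10 for captioning;
    14x14, 7x7 or 1x1 in the VQA ablations)."""
    # conv_out: (batch, 2048, H, W) output of the final conv layer of a ResNet
    resized = F.interpolate(conv_out, size=(grid, grid),
                            mode='bilinear', align_corners=False)
    return resized.flatten(2).transpose(1, 2)   # (batch, grid*grid, 2048)
```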
                    BLEU-1        BLEU-2        BLEU-3        BLEU-4        METEOR        ROUGE-L       CIDEr           SPICE
                    c5     c40    c5     c40    c5     c40    c5     c40    c5     c40    c5     c40    c5      c40     c5     c40
Review Net [47]     72.0   90.0   55.0   81.2   41.4   70.5   31.3   59.7   25.6   34.7   53.3   68.6    96.5    96.9   18.5   64.9
Adaptive [27]       74.8   92.0   58.4   84.5   44.4   74.4   33.6   63.7   26.4   35.9   55.0   70.5   104.2   105.9   19.7   67.3
PG-BCMR [24]        75.4    -     59.1    -     44.5    -     33.2    -     25.7    -     55.0    -     101.3     -      -      -
SCST:Att2all [33]   78.1   93.7   61.9   86.0   47.0   75.9   35.2   64.5   27.0   35.5   56.3   70.7   114.7   116.7   20.7   68.9
LSTM-A3 [48]        78.7   93.7   62.7   86.7   47.6   76.5   35.6   65.2   27.0   35.4   56.4   70.5   116.0   118.0    -      -
Ours: Up-Down       80.2   95.2   64.1   88.8   49.1   79.4   36.9   68.5   27.6   36.7   57.1   72.4   117.9   120.5   21.5   71.5
Table 3. Highest ranking published image captioning results on the online MSCOCO test server. Our submission, an ensemble of 4
models optimized for CIDEr with different initializations, outperforms previously published work on all reported metrics. At the time of
submission (18 July 2017), we also outperformed all unpublished test server submissions.
Figure 5. Example of a generated caption showing attended image regions. For each generated word, we visualize the attention weights
on individual pixels, outlining the region with the maximum attention weight in red. Avoiding the conventional trade-off between coarse
and fine levels of detail, our model focuses on both closely-cropped details, such as the frisbee and the green player’s mouthguard when
generating the word ‘playing’, as well as large regions, such as the night sky when generating the word ‘dark’.
4.3. Image Captioning Results

In Table 1 we report the performance of our full model and the ResNet baseline in comparison to the existing state-of-the-art Self-critical Sequence Training [33] (SCST) approach on the test portion of the Karpathy splits. For fair comparison, results are reported for models trained with standard cross-entropy loss, and for models optimized for CIDEr. Note that the SCST approach uses ResNet-101 encoding of full images, similar to our ResNet baseline. All results are reported for a single model with no fine-tuning of the input ResNet / R-CNN model. However, the SCST results are from the best of four random initializations, while our results are from a single initialization.

Relative to the SCST models, our ResNet baseline obtains slightly better performance under cross-entropy loss, and slightly worse performance when optimized for CIDEr score. After incorporating bottom-up attention, our full Up-Down model shows significant improvements across all metrics regardless of whether cross-entropy loss or CIDEr optimization is used. Using just a single model, we obtain the best reported results for the Karpathy test split. As illustrated in Table 2, the contribution from bottom-up attention is broadly based, with improved performance in terms of identifying objects, object attributes and also the relationships between objects.

Table 3 reports the performance of 4 ensembled models trained with CIDEr optimization on the official MSCOCO evaluation server, along with the highest ranking previously published results. At the time of submission (18 July 2017), we outperform all other test server submissions on all reported evaluation metrics.
                       Yes/No   Number   Other   Overall
Ours: ResNet (1×1)      76.0     36.5     46.8    56.3
Ours: ResNet (14×14)    76.6     36.2     49.5    57.9
Ours: ResNet (7×7)      77.6     37.7     51.5    59.4
Ours: Up-Down           80.3     42.8     55.8    63.2
Relative Improvement     3%      14%       8%      6%

Table 4. Single-model performance on the VQA v2.0 validation set. The use of bottom-up attention in the Up-Down model provides a significant improvement over the best ResNet baseline across all question types, even though the ResNet baselines use almost twice as many convolutional layers.

Figure 6. VQA example illustrating attention output. Given the question ‘What room are they in?’, the model focuses on the stovetop, generating the answer ‘kitchen’.
                       Yes/No   Number   Other   Overall
d-LSTM+n-I [26, 12]     73.46    35.18    41.83   54.22
MCB [11, 12]            78.82    38.28    53.36   62.27
UPMC-LIP6               82.07    41.06    57.12   65.71
Athena                  82.50    44.19    59.97   67.59
HDU-USYD-UNCC           84.50    45.39    59.01   68.09
Ours: Up-Down           86.60    48.64    61.15   70.34

Table 5. VQA v2.0 test-standard server accuracy as at 8 August 2017, ranking our submission against published and unpublished work for each question type. Our approach, an ensemble of 30 models, outperforms all other leaderboard entries.
4.4. VQA Results

In Table 4 we report the single-model performance of our full Up-Down VQA model relative to several ResNet baselines on the VQA v2.0 validation set. The addition of bottom-up attention provides a significant improvement over the best ResNet baseline across all question types, even though the ResNet baseline uses approximately twice as many convolutional layers. Table 5 reports the performance of 30 ensembled models on the official VQA v2.0 test-standard evaluation server, along with the previously published baseline results and the highest ranking other entries. At the time of submission (8 August 2017), we outperform all other test server submissions. Our submission also achieved first place in the 2017 VQA Challenge.

4.5. Qualitative Analysis

To help qualitatively evaluate our attention methodology, in Figure 5 we visualize the attended image regions for different words generated by our Up-Down captioning model. As indicated by this example, our approach is equally capable of focusing on fine details or large image regions. This capability arises because the attention candidates in our model consist of many overlapping regions with varying scales and aspect ratios – each aligned to an object, several related objects, or an otherwise salient image patch.

Unlike conventional approaches, when a candidate attention region corresponds to an object, or several related objects, all the visual concepts associated with those objects appear to be spatially co-located – and are processed together. In other words, our approach is able to consider all of the information pertaining to an object at once. This is also a natural way for attention to be implemented. In the human visual system, the problem of integrating the separate features of objects in the correct combinations is known as the feature binding problem, and experiments suggest that attention plays a central role in the solution [40, 39]. We include an example of VQA attention in Figure 6.

5. Conclusion

We present a novel combined bottom-up and top-down visual attention mechanism. Our approach enables attention to be calculated more naturally at the level of objects and other salient regions. Applying this approach to image captioning and visual question answering, we achieve state-of-the-art results in both tasks, while improving the interpretability of the resulting attention weights.

At a high level, our work more closely unifies tasks involving visual and linguistic understanding with recent progress in object detection. While this suggests several directions for future research, the immediate benefits of our approach may be captured by simply replacing pretrained CNN features with pretrained bottom-up attention features.

Acknowledgements. This research is partially supported by an Australian Government Research Training Program (RTP) Scholarship, by the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016), by a Google award through the Natural Language Understanding Focused Program, and under the Australian Research Council’s Discovery Projects funding scheme (project number DP160102156).
References

[1] P. Anderson, B. Fernando, M. Johnson, and S. Gould. SPICE: Semantic propositional image caption evaluation. In ECCV, 2016.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015.
[3] T. J. Buschman and E. K. Miller. Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science, 315(5820):1860–1862, 2007.
[4] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollar, and C. L. Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
[5] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[6] M. Corbetta and G. L. Shulman. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3):201–215, 2002.
[7] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
[8] M. Denkowski and A. Lavie. Meteor Universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014.
[9] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[10] R. Egly, J. Driver, and R. D. Rafal. Shifting visual attention between objects and locations: evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2):161, 1994.
[11] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, 2016.
[12] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR, 2017.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[14] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[15] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[16] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual question answering baselines. arXiv preprint arXiv:1606.08390, 2016.
[17] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
[18] J. Jin, K. Fu, R. Cui, F. Sha, and C. Zhang. Aligning where to see and what to tell: image caption with region-based attention and scene factorization. arXiv preprint arXiv:1506.06272, 2015.
[19] A. Karpathy and F.-F. Li. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
[20] V. Kazemi and A. Elqursh. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162, 2017.
[21] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. Bernstein, and L. Fei-Fei. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.
[22] C. Lin. ROUGE: a package for automatic evaluation of summaries. In ACL Workshop, 2004.
[23] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[24] S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy. Improved image captioning via policy gradient optimization of SPIDEr. In ICCV, 2017.
[25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.
[26] J. Lu, X. Lin, D. Batra, and D. Parikh. Deeper LSTM and normalized CNN visual question answering model. https://github.com/VT-vision-lab/VQA_LSTM_CNN, 2015.
[27] J. Lu, C. Xiong, D. Parikh, and R. Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In CVPR, 2017.
[28] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016.
[29] K. Papineni, S. Roukos, T. Ward, and W. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
[30] M. Pedersoli, T. Lucas, C. Schmid, and J. Verbeek. Areas of attention for image captioning. In ICCV, 2017.
[31] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In CVPR, 2016.
[32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[33] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. Self-critical sequence training for image captioning. In CVPR, 2017.
[34] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015.
[35] B. J. Scholl. Objects and attention: The state of the art. Cognition, 80(1):1–46, 2001.
[36] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387v1, 2015.
[37] D. Teney, P. Anderson, X. He, and A. van den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In CVPR, 2018.
[38] D. Teney and A. van den Hengel. Zero-shot visual question answering. arXiv preprint arXiv:1611.05546, 2016.
[39] A. Treisman. Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2):194, 1982.
[40] A. M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12:97–136, 1980.
[41] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 2013.
[42] R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, 2015.
[43] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, May 1992.
[44] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.
[45] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
[46] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola. Stacked attention networks for image question answering. In CVPR, 2016.
[47] Z. Yang, Y. Yuan, Y. Wu, R. Salakhutdinov, and W. W. Cohen. Review networks for caption generation. In NIPS, 2016.
[48] T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei. Boosting image captioning with attributes. In ICCV, 2017.
[49] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: Grounded question answering in images. In CVPR, 2016.
[50] L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.