
ACORN: Performant and Predicate-Agnostic Search Over Vector Embeddings and Structured Data


Liana Patel, Stanford University, Stanford, USA ([email protected])
Peter Kraft, DBOS, Inc., USA ([email protected])
Carlos Guestrin, Stanford University, Stanford, USA ([email protected])
Matei Zaharia, UC Berkeley, Berkeley, USA ([email protected])

arXiv:2403.04871v1 [cs.IR] 7 Mar 2024
ABSTRACT

Applications increasingly leverage mixed-modality data, and must jointly search over vector data, such as embedded images, text and video, as well as structured data, such as attributes and keywords. Proposed methods for this hybrid search setting either suffer from poor performance or support a severely restricted set of search predicates (e.g., only small sets of equality predicates), making them impractical for many applications. To address this, we present ACORN, an approach for performant and predicate-agnostic hybrid search. ACORN builds on Hierarchical Navigable Small Worlds (HNSW), a state-of-the-art graph-based approximate nearest neighbor index, and can be implemented efficiently by extending existing HNSW libraries. ACORN introduces the idea of predicate subgraph traversal to emulate a theoretically ideal, but impractical, hybrid search strategy. ACORN's predicate-agnostic construction algorithm is designed to enable this effective search strategy, while supporting a wide array of predicate sets and query semantics. We systematically evaluate ACORN on both prior benchmark datasets, with simple, low-cardinality predicate sets, and complex multi-modal datasets not supported by prior methods. We show that ACORN achieves state-of-the-art performance on all datasets, outperforming prior methods with 2–1,000× higher throughput at a fixed recall.

CCS CONCEPTS

• Information systems → Information retrieval query processing; Data structures.

KEYWORDS

Vector Search, Approximate Nearest Neighbor Search, Hybrid Search

1 INTRODUCTION

Due to the representation strength of modern deep learning models, vector embeddings have become a powerful first-class datatype for wide-ranging applications that use retrieval-augmented generation [3, 65] or similarity-based search [18, 21, 42]. As a result, vector databases and indices are seeing increasing adoption in many production use cases. These systems provide an efficient approximate-nearest-neighbor (ANN) search interface over embedded unstructured data, e.g., images, text, video, or user profiles. However, many applications must jointly query both unstructured and structured data, requiring ANN search in combination with predicate filtering. For example, customers on an e-commerce site can search for t-shirts similar to a reference image, while filtering on price [64]. Similarly, researchers performing a literature review may search with both natural language queries and filters on publication date, keywords or topics [54]. Likewise, a data scientist working on outlier detection can find misclassified images by retrieving those that look similar to a reference dog but have the label "cat" [2, 7].

To leverage diverse data modalities, applications need data management systems that effectively support hybrid search queries, i.e., similarity search with structured predicates. Such systems require (1) query performance, i.e., efficient and accurate search despite variance in workload characteristics, such as selectivity, attribute correlations, and scale, and (2) expressive query semantics: support for diverse query predicates that may not be known a priori (e.g., user-entered keywords, range searches, or regex matching).

Unfortunately, existing systems fall short of these goals. Three commonly used methods are pre-filtering [62, 64], post-filtering [1, 5, 62, 64, 67], and specialized data structures for low-cardinality predicate sets [25, 49, 63, 66]. Pre-filtering first finds all records in the dataset that pass the query predicate, then performs brute-force similarity search over the filtered vector set. This approach scales poorly, becoming inefficient for medium- to high-selectivity predicates on large datasets. Alternatively, post-filtering first searches an ANN index, then removes results that fail the query predicate. Since the database vectors closest to the query vector may not pass the predicate, post-filtering methods must typically expand the search scope. This is often expensive, particularly for search predicates with low selectivity or low correlation to the query vector, as we show in Figure 2. Milvus [62], Weaviate [1], AnalyticDB-V [64], and FAISS-IVF [5] build systems using these two core methods, and suffer from their performance limitations.

Recognizing these limitations, recent works construct specialized indices designed for hybrid search workloads with low-cardinality predicate sets consisting of equality predicate operators. For example, Filtered-DiskANN [25] outperforms prior baselines, but restricts the cardinality of the predicate set to about 1,000 and only supports equality predicates. HQANN [66] and NHQ [12] similarly constrain the predicate set to a small number of equality filters and in addition allow only a single structured attribute per dataset entry. These methods are often impractical since many applications have large or unbounded predicate sets that are unknown a priori. In general, the possible predicate set's cardinality grows exponentially with each attribute's cardinality, which itself may be large. Thus, we instead propose a predicate-agnostic index, which can support arbitrary and unbounded predicate sets.
In this paper, we propose ACORN (ANN Constraint-Optimized Retrieval Network), a novel approach for performant and predicate-agnostic hybrid search that can serve high-cardinality and unbounded predicate sets. We propose two indices: ACORN-𝛾, designed for high-efficiency search, and ACORN-1, designed for low construction overhead in resource-constrained settings. Both methods modify the hierarchical navigable small world (HNSW) index, a state-of-the-art graph-based index for ANN search, and are easy to implement in existing HNSW libraries.

ACORN tackles both the performance limitations of pre- and post-filtering, as well as the semantic limitations of specialized indices. ACORN proposes the idea of predicate subgraph traversal during search. As the name implies, the search strategy traverses the subgraph of the ACORN index induced by the set of nodes that pass the query predicate. ACORN designs the index such that these arbitrary predicate subgraphs approximate an HNSW index. Unlike pre- and post-filtering, this allows ACORN to provide sublinear retrieval times despite variance in correlation between query vectors and predicates, which we find to be a major challenge for existing hybrid search systems. ACORN also serves wide-ranging predicate sets by employing a predicate-agnostic construction that alters HNSW's algorithm to create a denser graph. Specifically, we introduce a predicate-agnostic neighbor expansion strategy in ACORN-𝛾 based on target predicate selectivity thresholds, which can be estimated empirically with or without knowing the predicate set. In conjunction, we propose a predicate-agnostic compression heuristic to efficiently manage the index space footprint while maintaining efficient search. We also explore the trade-off space between search performance and construction overhead, designing ACORN-1 to approximate ACORN-𝛾's search performance while further reducing the time-to-index (TTI) for resource-constrained settings.

We systematically evaluate ACORN-𝛾 and ACORN-1 on four datasets: SIFT1M [35], Paper [63], LAION [55], and TripClick [54]. Our evaluation includes both prior benchmark datasets, with simple, low-cardinality predicate sets, which prior specialized indices can serve, as well as more complex datasets with millions of possible predicates, which existing indices cannot handle. On each, ACORN-𝛾 achieves state-of-the-art hybrid search performance with 2–1,000× higher queries per second (QPS) at 0.9 recall compared to prior methods. Specifically, ACORN achieves 2–10× higher QPS on prior benchmarks, over 30× higher QPS on new benchmarks, and over 1,000× higher QPS at scale, on a 25-million-vector dataset. We find that ACORN-1 empirically approximates ACORN-𝛾, attaining at most 5× lower QPS at fixed recall but 9–53× lower TTI compared to ACORN-𝛾. Our detailed evaluation demonstrates the effectiveness of ACORN's predicate-subgraph traversal strategy and predicate-agnostic construction techniques.

2 BACKGROUND

Existing methods for Approximate Nearest Neighbor (ANN) search can be broadly categorized as tree-based [15–17, 19, 28, 45, 50, 56], hashing-based [9–11, 24, 26, 29, 30, 40, 41, 44, 46, 52, 59, 69], quantization-based [23, 27, 34, 35, 39], and graph-based [22, 25, 32, 47, 48, 58, 68]. In this work we build on HNSW, a graph-based method that is empirically one of the best-performing on high-dimensional datasets, and we adapt it to support hybrid search.

Graph-based ANN methods have gained popularity due to their state-of-the-art performance on varied ANN benchmarks [13, 57]. These methods typically perform search using a greedy routing strategy that traverses a graph index, starting from a pre-defined entry point. The index itself forms a proximity graph 𝐺(𝑉, 𝐸), such that each dataset point is represented by a vertex and edges connect nearby points. The index construction algorithm typically aims to approximate subgraphs of the Delaunay graph [38]. While the Delaunay graph guarantees convergence of a greedy routing algorithm, it is impossible to efficiently construct for arbitrary metric spaces [51]. Thus graph methods focus on more tractable approximations of Delaunay subgraphs, such as the Relative Neighbor Graph (RNG) [37, 60], and the Nearest-Neighbor Graph (NNG) [8, 20].

Figure 1: Schematic drawing of search over an HNSW index. The search path is shown by blue arrows, beginning on level 2 and ending on level 0 at the query point, shown in green.

2.1 Hierarchical Navigable Small Worlds

As Figure 1 illustrates, HNSW forms a hierarchical, multi-level graph index with bounded degree. Below we briefly summarize the HNSW search and construction algorithms.

The HNSW construction algorithm iteratively inserts each point into the graph index, to construct a navigable graph with bounded degree, specified by parameter 𝑀. For each inserted element 𝑣, first, a maximum layer index 𝑙 is stochastically chosen using an exponentially decaying probability distribution, normalized by the constant 𝑚𝐿 = 1/ln(𝑀). The level assignment probability ensures that the expected characteristic path length increases with the level index. Intuitively, the upper-most level contains the longest-range links, which will be traversed first by the search algorithm, and the bottom-most level contains the shortest-range links, which are traversed last by the search algorithm. The insertion procedure then proceeds in two phases. In the first phase, a greedy search is performed iteratively from the top layer, beginning at a pre-defined entrypoint, down to the (𝑙 + 1)th layer. At each of these levels, the greedy subroutine chooses a single node that becomes the entrypoint into the next layer. In the second phase, the greedy search iterates over level 𝑙 down to level 0. The greedy search at each level now chooses efc nodes as candidate edges. Of these candidates, at most 𝑀 are selected to become neighbors of 𝑣 according to an RNG-based pruning algorithm [31]. At level 0, the degree bound is increased to 2 × 𝑀, which is shown to empirically improve performance.
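To make the level assignment concrete, the following minimal Python sketch implements the sampling rule described above; the function and variable names are ours, not from any particular HNSW library.

import math
import random

def sample_level(M: int) -> int:
    # Exponentially decaying level distribution, normalized by
    # m_L = 1/ln(M), as in HNSW's construction algorithm.
    m_L = 1.0 / math.log(M)
    return int(-math.log(1.0 - random.random()) * m_L)

# With M = 32, most nodes land on level 0; the expected number of
# levels a node joins is m_L + 1.
levels = [sample_level(32) for _ in range(100_000)]
print(sum(levels) / len(levels) + 1)  # close to 1/ln(32) + 1 ≈ 1.29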
The HNSW search algorithm begins its traversal from a pre-defined entry point at the upper-most layer of the multilayer graph, illustrated in Figure 1. The traversal then follows an iterative search strategy from the top level downwards. At each level a greedy search is used to choose a single node, which becomes the entry-point into the next level. Once the bottom level is reached, rather than greedily choosing a single node, the search algorithm greedily chooses the 𝐾 nearest elements to return. We outline this process in Algorithm 1. The search parameter efs provides a tradeoff between search quality and efficiency by controlling the size of the dynamic candidate list stored during the bottom level's greedy search.

Algorithm 1: HNSW-ANN-SEARCH(𝑥𝑞, 𝐾, efs)
    Input: query vector 𝑥𝑞, number of nearest neighbors to return 𝐾, size of dynamic candidate list efs
    Output: 𝐾 nearest elements to 𝑥𝑞
    𝑒 ← entry-point to HNSW graph
    𝑊 ← ∅ // set of current nearest
    𝐿 ← level(𝑒) // top HNSW level
    for 𝑙 ← 𝐿 ... 1 do
        𝑒 ← SEARCH-LAYER(𝑥𝑞, 𝑒, ef = 1, 𝑙)
    end
    𝑊 ← SEARCH-LAYER(𝑥𝑞, 𝑒, ef = efs, 𝑙 = 0)
    return 𝐾 nearest elements from 𝑊 to 𝑥𝑞

Figure 2: Schematic drawing of a dataset with no predicate clustering (top), a dataset with predicate clustering and positive query correlation (middle), and a dataset with predicate clustering and negative query correlation (bottom). Dark blue circles show points that pass the predicate, and light gray circles show points that fail the predicate. The query vectors are shown in green.
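As a companion to Algorithm 1, the Python sketch below mirrors its layered traversal. The `graph` object, with `entry_point`, `max_level`, `vector`, and `search_layer` members, is a hypothetical interface of our own invention, not the API of any particular HNSW library.

import numpy as np

def hnsw_ann_search(graph, xq, K, efs):
    # Walk down from the top level: ef = 1 greedy routing returns a
    # single node per level, which seeds the next level's search.
    e = graph.entry_point
    for level in range(graph.max_level, 0, -1):
        e = graph.search_layer(xq, e, ef=1, level=level)[0]
    # Bottom level: efs controls the dynamic candidate list size.
    W = graph.search_layer(xq, e, ef=efs, level=0)
    W.sort(key=lambda v: float(np.linalg.norm(xq - graph.vector(v))))
    return W[:K]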
3 PROBLEM DEFINITION AND CHALLENGES

In this section we formally define the hybrid search setting and then analyze the performance challenges that existing predicate-agnostic methods, i.e., pre- and post-filtering, face. Our analysis leads us to explore several important workload characteristics. Specifically, we will consider predicate selectivity, the dataset size, and query correlation, which we introduce, formally define, and find to be a major challenge for post-filtering methods.

We will later leverage our understanding of existing performance challenges in Section 4 to formulate a theoretically ideal hybrid search solution. Then, in Section 7, we will revisit the workload characteristics discussed in this section to rigorously evaluate ACORN's search performance.

3.1 Hybrid Search Definitions

Let 𝐷 = {𝑒1, 𝑒2, ..., 𝑒𝑛} = {(𝑥1, 𝑎1), (𝑥2, 𝑎2), ..., (𝑥𝑛, 𝑎𝑛)} be a dataset consisting of 𝑛 entities, each with a vector component, 𝑥𝑖 ∈ ℝ^𝑑, and a structured attribute-tuple, 𝑎𝑖, associated with entity 𝑒𝑖. Let 𝑋 = {𝑥1, 𝑥2, ..., 𝑥𝑛} denote the set of vectors in the dataset, and let 𝑑𝑖𝑠𝑡(𝑎, 𝑏) be the metric distance between any two points. Let 𝐴 = {𝑎1, 𝑎2, ..., 𝑎𝑛} be the set of structured attributes in the dataset. We will denote 𝑋𝑝 ⊆ 𝑋 as the subset of vectors corresponding to entities in the dataset that pass a given predicate 𝑝. We refer to the selectivity (𝑠) of predicate 𝑝 as the fraction of entities from 𝐷 that satisfy the predicate, where 0 ≤ 𝑠 ≤ 1.

We consider the hybrid search problem, described as follows. Given a dataset 𝐷, target 𝐾, and query 𝑞 = (𝑥𝑞, 𝑝𝑞), where 𝑥𝑞 ∈ ℝ^𝑑 and 𝑝𝑞 is a predicate, retrieve 𝑥𝑞's 𝐾 nearest neighbors that pass the predicate 𝑝𝑞. We will specifically focus on the problem of approximate nearest neighbor search w.r.t. 𝑥𝑞. Here, our goal is to maximize both search accuracy and search efficiency. We will measure accuracy by recall@𝐾 = |𝐺 ∩ 𝑅|/𝐾, where 𝐺 is the ground truth set of 𝐾 nearest neighbors to 𝑥𝑞 that satisfy 𝑝𝑞, and 𝑅 is the retrieved set.

3.2 Search Performance of Baseline Methods

We now analyze the search complexity of two predominant baseline methods, pre- and post-filtering. We will consider how varied workload characteristics impact the search behavior of these methods. Through our analysis, we will make the standard assumption that distance computations dominate search performance. We note that HNSW's unfiltered search complexity is 𝑂(log(𝑛) + 𝐾).

Pre-filtering linearly scans 𝑋𝑝, computing distances over each point that passes the search predicate. This yields a hybrid search complexity of 𝑂(|𝑋𝑝|) = 𝑂(𝑠𝑛 + 𝐾). While pre-filtering always achieves perfect recall, its search complexity scales poorly for large dataset sizes or selectivities, growing linearly in either variable.

Post-filtering, by contrast, performs ANN search over 𝑋 to find the vectors closest to 𝑥𝑞, then expands the search scope to find 𝐾 vectors that pass the query predicate, 𝑝. Intuitively, search performance varies depending on correlation between the query vector and the vectors in 𝑋𝑝. When the vectors of 𝑋𝑝 are close to the query vector, post-filtering over HNSW has a search complexity of 𝑂(log(𝑛) + 𝐾). If the vectors in 𝑋𝑝 are uniformly distributed within 𝑋, then post-filtering's expected search complexity is 𝑂(log(𝑛) + 𝐾/𝑠). However, vectors of 𝑋𝑝 may be far away from the query vector, leading to a worst case of 𝑂(𝑛) search performance.

We see that the search performance of either baseline is not robust to variations in selectivity, dataset size, and query correlation. We empirically verify these limitations in Section 7 (Figures 9, 10).
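As a concrete reference point, here is a minimal NumPy sketch of the pre-filtering baseline and the recall@𝐾 metric defined in Section 3.1; the names are ours, and this is an illustration of the definitions rather than the paper's optimized implementation.

import numpy as np

def pre_filter_search(X, attrs, xq, pred, K):
    # Pre-filtering: gather X_p, then brute-force scan it.
    # O(s*n) distance computations, but recall is always perfect.
    ids = [i for i, a in enumerate(attrs) if pred(a)]
    dists = np.linalg.norm(X[ids] - xq, axis=1)
    return [ids[j] for j in np.argsort(dists)[:K]]

def recall_at_k(retrieved, ground_truth, K):
    # recall@K = |G ∩ R| / K
    return len(set(retrieved) & set(ground_truth)) / K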
3.2.1 Formalizing Query Correlation. We will now formalize the notion of query correlation, which we find is a key challenge for post-filtering-based systems. As Figure 2 shows, query correlation occurs when the vectors of 𝑋𝑝 are non-uniformly distributed in 𝑋 and instead cluster together relative to the vectors in 𝑋. We refer to this phenomenon as predicate clustering. When predicate clustering occurs, a query vector may be either close to or far away from the predicate cluster containing its search targets, inducing query correlation.
Definition: Query Correlation. We will consider the query-to-target distances for the given dataset compared to the expected query-to-target distances for a hypothetical dataset, under which no clustering is present. Formally, we define the query correlation of the hybrid search workload 𝑄 over dataset 𝐷 as:

𝐶(𝐷, 𝑄) = E_(𝑥𝑖,𝑝𝑖)∈𝑄 [ E_𝑅𝑖 [𝑔(𝑥𝑖, 𝑅𝑖)] − 𝑔(𝑥𝑖, 𝑋𝑝𝑖) ]

We let 𝑅𝑖 be a random set variable of |𝑋𝑝𝑖| vectors drawn uniformly from 𝑋, defined for each hybrid query (𝑥𝑖, 𝑝𝑖) ∈ 𝑄. We define 𝑔(𝑥, 𝑆) = min_𝑦∈𝑆 𝑑𝑖𝑠𝑡(𝑥, 𝑦) to be the function mapping the query vector 𝑥 to the minimum distance of neighbors from the given vector set 𝑆 ⊆ ℝ^𝑑. Note that 𝑔(𝑥𝑖, 𝑋𝑝𝑖) is the ground-truth hybrid-search target of the query (𝑥𝑖, 𝑝𝑖).

If, on average, query vectors are closer to their targets in 𝑋𝑝𝑖, the true dataset of hybrid search targets, than in 𝑅𝑖, the no-clustering dataset, then the workload has positive query correlation. If the reverse is true, the workload has negative query correlation. We may also consider nearest-neighbor distance rather than the metric distance in the above definition. We also note that we can easily extend this definition to consider 𝐾 targets of the hybrid search, rather than one, by summing distances over the 𝐾 search targets.

Figure 3: An illustration of the predicate subgraph, shown by the green nodes. ACORN searches over the predicate subgraph to emulate search over an oracle partition index.

Table 1: Summary of Notation

Symbol | Description
𝛾 | neighbor expansion factor for ACORN index
𝑀𝛽 | compression parameter for ACORN index
ef | size of dynamic candidate list in ACORN greedy search
𝑀 | degree bound for traversed nodes during ACORN search
𝑚𝐿 = 1/ln 𝑀 | level normalization constant for ACORN index
𝑒 | entry-point to ACORN index
𝑒𝑝 | entry-point to ACORN's predicate 𝑝's subgraph
𝑙(𝑣) | maximum level index of node 𝑣 in ACORN index
𝑁^𝑙(𝑣) | neighbor list of node 𝑣 at level 𝑙
𝑁𝑝^𝑙(𝑣) | filtered neighbors of node 𝑣 at level 𝑙 under predicate 𝑝
𝑋𝑝 | vector dataset that passes predicate 𝑝
𝑠 | selectivity
𝑛 | size of dataset
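A Monte Carlo estimate of 𝐶(𝐷, 𝑄) follows directly from the definition above. The sketch below samples the random sets 𝑅𝑖 a few times per query; the inputs (a vector array, an attribute list, and (query vector, predicate) pairs) and all names are our own assumptions for illustration.

import numpy as np

def query_correlation(X, attrs, queries, trials=10, seed=0):
    # Positive values: query vectors sit closer to their true hybrid
    # search targets X_p than to same-sized uniform samples of X.
    rng = np.random.default_rng(seed)
    g = lambda x, S: np.min(np.linalg.norm(S - x, axis=1))
    total = 0.0
    for xq, pred in queries:
        Xp = X[[i for i, a in enumerate(attrs) if pred(a)]]
        # E_R[g(x, R)] over random sets R of size |X_p| drawn from X
        rand = np.mean([g(xq, X[rng.choice(len(X), size=len(Xp), replace=False)])
                        for _ in range(trials)])
        total += rand - g(xq, Xp)
    return total / len(queries)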
4 THEORETICAL IDEAL HYBRID SEARCH PERFORMANCE WITH HNSW
For a given hybrid search query, we define the theoretically ideal search performance using HNSW data structures as the performance attainable if we knew the search predicate 𝑝𝑞 during construction. In this case, we could construct an HNSW index over 𝑋𝑝. We call this the oracle partition index for that query. The complexity of searching this index is 𝑂(log(𝑠𝑛) + 𝐾). Notably, the search performance of the oracle partition index outperforms both pre- and post-filtering across variations in predicate selectivity, data size, and query correlation. While pre-filtering's search scales in |𝑋𝑝|, search over the oracle partition scales sublinearly in |𝑋𝑝|. The oracle partition is also robust under variations in query correlation: it does not require the search scope expansion used in post-filtering.

Despite its ideal search performance, the oracle partition index requires us to know all search predicates in advance and to create a full HNSW index per predicate. In practice, the oracle partition index is not possible to construct because query predicate sets are often unknown during construction and have high or unbounded cardinality. Building an HNSW index per predicate would require prohibitive amounts of space and time. Thus, in this work, we will instead approximate search over the oracle partition index for a particular query, without ever explicitly constructing this index.

5 ACORN OVERVIEW

We now describe ACORN, a predicate-agnostic approach for state-of-the-art hybrid search. We propose two variants, which we refer to as ACORN-𝛾 (5.1, 5.2) and ACORN-1 (5.3). We design ACORN-𝛾 to achieve efficient search performance, and we design ACORN-1 to approximate ACORN-𝛾's search performance while further reducing the algorithm's time-to-index (TTI) and space footprint for resource-constrained settings.

ACORN's core idea is to search over the index's predicate subgraph, i.e., the subgraph induced by 𝑋𝑝 for a given search predicate 𝑝, as shown in Figure 3. We modify the HNSW construction algorithms so that arbitrary predicate subgraphs emulate an HNSW oracle partition index without the need to explicitly construct one. ACORN-𝛾 achieves this by constructing a denser version of HNSW, which we parameterize by a neighbor list expansion factor, 𝛾, a compression factor, 𝑀𝛽, and the HNSW parameters, efc and 𝑀. Then, by adding a filter step during search to ignore neighbors that fail the predicate, we find ACORN-𝛾's search can efficiently navigate to and traverse over the predicate subgraph, even under variations in query correlation. Meanwhile, ACORN-1 expands neighbor lists during search rather than during construction to approximate ACORN-𝛾's dense graph structure without building it.¹

Overall, ACORN prescribes a simple and general framework for performant hybrid search based on the idea of predicate-subgraph traversal. The core techniques we propose are predicate-agnostic neighbor-list expansions and pruning during construction, in combination with predicate-based filtering during search. While this framework can be applied to a variety of graph-based ANN indices, in this work we focus on HNSW due to its state-of-the-art performance and widespread use.

¹ For highly selective queries where even ACORN's predicate subgraph would be disconnected within the larger ACORN graph, ACORN falls back to pre-filtering, which is effective for such queries. ACORN is configured with a minimum selectivity, 𝑠𝑚𝑖𝑛, and uses pre-filtering when a query is estimated to be more selective than 𝑠𝑚𝑖𝑛. We describe how to configure 𝛾 based on 𝑠𝑚𝑖𝑛 in Section 5.2.

5.1 ACORN-𝛾 Search Algorithm

Algorithm 2 outlines the greedy search algorithm ACORN uses at each level, beginning from the top level at a pre-defined entry-point. The main difference between ACORN's search algorithm and that of HNSW is how neighbor look-ups (line 9) are performed at each visited node, 𝑐. While HNSW simply checks the neighbor list, 𝑁^𝑙(𝑐), ACORN performs additional steps to recover an appropriate neighborhood for the given search predicates.
Figure 4: Diagram of ACORN's neighbor selection strategies. Blue nodes represent neighbors that pass the query predicate. Sub-figure (a) shows the simple predicate-based filter applied to uncompressed edge lists of size 𝑀 · 𝛾, followed by truncation to size 𝑀 = 3. Sub-figure (b) shows the compression-based heuristic. Sub-figure (c) shows the neighbor expansion strategy used in ACORN-1.

Algorithm 2: ACORN-SEARCH-LAYER(𝑥𝑞, 𝑝𝑞, 𝑒, ef, 𝑙)
    Input: query vector 𝑥𝑞, query predicate 𝑝𝑞, entry-point 𝑒, number of nearest neighbors to return ef, level to search 𝑙
    Output: ef nearest elements to 𝑥𝑞
    1   𝑇 ← 𝑒 // visited set
    2   𝐶 ← 𝑒 // candidate set
    3   𝑊 ← 𝑒 // dynamic list of found nearest neighbors
    4   while |𝐶| > 0 do
    5       𝑐 ← extract arg min_𝑥∈𝐶 ∥𝑥𝑞 − 𝑥∥
    6       𝑓 ← get arg max_𝑥∈𝑊 ∥𝑥𝑞 − 𝑥∥
    7       if 𝑑𝑖𝑠𝑡(𝑐, 𝑥𝑞) > 𝑑𝑖𝑠𝑡(𝑓, 𝑥𝑞) and |𝑊| ≥ ef
    8           break
    9       𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟ℎ𝑜𝑜𝑑 ← GET-NEIGHBORS(𝑐, 𝑙, 𝑝𝑞)
    10      for each 𝑣 ∈ neighborhood[1:𝑀]
    11          if 𝑣 ∉ 𝑇
    12              𝑇 ← 𝑇 ∪ 𝑣
    13              𝑓 ← arg max_𝑥∈𝑊 ∥𝑥𝑞 − 𝑥∥
    14              if 𝑑𝑖𝑠𝑡(𝑣, 𝑥𝑞) < 𝑑𝑖𝑠𝑡(𝑓, 𝑥𝑞) or |𝑊| < ef
    15                  𝐶 ← 𝐶 ∪ 𝑣
    16                  𝑊 ← 𝑊 ∪ 𝑣
    17                  if |𝑊| > ef
    18                      remove furthest element from 𝑊 to 𝑥𝑞
    19
    20  end
    21  return 𝑊

Specifically, ACORN-𝛾 uses two neighbor look-up strategies: a simple filter method, shown in Figure 4(a), and a compression-based heuristic, shown in Figure 4(b), which is compatible with the compression strategy we optionally apply during construction, detailed in Section 5.2. For each visited node, 𝑣, the filter-based neighbor look-ups simply scan the neighbor list 𝑁^𝑙(𝑣) to find the sub-list of neighbors that pass the predicate, 𝑁𝑝^𝑙(𝑣). If 𝑁𝑝^𝑙(𝑣) contains more than 𝑀 nodes, we take the first 𝑀 and return this as 𝑣's neighborhood. The compression-based neighbor look-ups instead partially expand the neighbor set 𝑁^𝑙(𝑣) to include a subset of 𝑣's two-hop neighbors, before performing filtering and truncation. This procedure entails two phases. The first phase iterates through the first 𝑀𝛽 nodes of 𝑁^𝑙(𝑣), simply filtering as in the previous strategy. The second phase iterates over the remainder of the neighbor list, expanding the search neighborhood to include neighbors of neighbors, before again filtering according to the query predicate. 𝑀𝛽 is a construction parameter which we will discuss in the next section.
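A minimal Python sketch of the two look-up strategies follows, assuming a hypothetical `graph.neighbors(v, level)` accessor that returns the stored neighbor list 𝑁^𝑙(𝑣); it is an illustration of Figure 4(a) and 4(b), not the paper's C++ implementation.

def filter_neighbors(graph, v, level, pred, M):
    # (a) Simple filter: scan N^l(v), keep neighbors that pass the
    # predicate, and truncate to the degree bound M.
    passing = [u for u in graph.neighbors(v, level) if pred(u)]
    return passing[:M]

def compressed_neighbors(graph, v, level, pred, M, M_beta):
    # (b) Compression-based heuristic: the first M_beta entries of
    # N^l(v) are kept exactly; the remainder is expanded to two-hop
    # neighbors before filtering, recovering edges pruned at build time.
    nbrs = graph.neighbors(v, level)
    candidates = list(nbrs[:M_beta])
    for u in nbrs[M_beta:]:
        candidates.extend(graph.neighbors(u, level))
    return [u for u in candidates if pred(u)][:M]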
5.2 ACORN-𝛾 Construction Algorithm

We construct the ACORN-𝛾 index by applying two core modifications to the HNSW indexing algorithm: first, we expand each node's neighbor list, and then we apply a novel predicate-agnostic pruning method to compress the index. Both of these steps are summarized in Figure 5.

Neighbor List Expansion. While HNSW collects 𝑀 approximate nearest neighbors as candidate edges for each node in the index, ACORN collects 𝑀 · 𝛾 approximate nearest neighbors as candidate edges per node. To find these candidates during construction, ACORN uses a metadata-agnostic search over its graph index. Specifically, the neighbor lookup strategy at each node, 𝑣, on level 𝑙, simply accesses the neighbor list 𝑁^𝑙(𝑣) and returns the first 𝑀 nodes. Note that although each node contains up to 𝑀 · 𝛾 neighbors, we assume by construction that 𝑀 neighbors per node are sufficient for maintaining navigability of the graph index. Thus, considering truncated neighbor lists while traversing the graph allows us to avoid unnecessary distance computations and TTI slowdowns.

One simple choice for 𝛾 is 1/𝑠𝑚𝑖𝑛, where 𝑠𝑚𝑖𝑛 is the minimum predicate selectivity we plan to serve before resorting to pre-filtering. As we discuss in Section 6, ACORN's indexing time and space footprint increase proportionally to 𝛾. Meanwhile, pre-filtering becomes a competitive baseline at low predicate selectivity values, as we show in Figure 9a. Thus, ACORN is able to balance construction and search efficiency by using pre-filtering as a fall-back for queries with low-selectivity predicates. This leads to a simple cost-based model during search: if the estimated predicate selectivity of a given query is greater than 1/𝛾, search the ACORN-𝛾 index; otherwise, pre-filter. We note that leveraging pre-filtering in this way may degrade search efficiency, but not result quality, when errors occur in selectivity estimates. If a query's true predicate selectivity is above 1/𝛾, but the estimate is below, the system will mistakenly pre-filter,
obtaining perfect recall at possibly lower QPS than if the ACORN index was instead searched. If the reverse is true, the system will mistakenly search the ACORN index, whereas pre-filtering would have offered similar QPS and perfect recall.

Compression. A key challenge with ACORN-𝛾's neighbor expansion step is that it increases index size and TTI. The increased index size poses a significant issue particularly for memory-resident graph indices, like HNSW. To address this, we introduce a predicate-agnostic pruning technique. While we could apply compression to the full index, as discussed in Section 6.1, we specifically target the bottom level's neighbor lists since they contribute most significantly to the indexing overhead. This follows from the exponentially decaying level assignment probability ACORN uses.

The core idea of the pruning procedure is to precisely retain each node's nearby neighbors in the index, while approximating farther-away neighbors during search. We use the tunable compression parameter, 𝑀𝛽, where 0 ≤ 𝑀𝛽 ≤ 𝑀 · 𝛾. During construction, ACORN chooses each node's final neighbor list by automatically retaining the nearest 𝑀𝛽 candidate edges and aggressively pruning the remaining candidates. During search we can recover the first 𝑀𝛽 neighbors of each node 𝑣 directly from the neighbor list 𝑁^𝑙(𝑣), and approximate remaining neighbors by looking at 2-hop neighbors during search, as we described in Section 5.1.

Figure 5 outlines this pruning procedure applied to node 𝑣's candidate neighbor list. The algorithm iterates over the ordered candidate edge list and keeps the first 𝑀𝛽 candidates. Over the remaining sub-list of candidates, the algorithm applies the following pruning procedure at each node. Let 𝐻 be the dynamic set of 𝑣's chosen two-hop neighbors, initialized to ∅. We prune candidate 𝑐 if it is contained in 𝐻; otherwise, we keep 𝑐 and add all of its neighbors to 𝐻. The pruning procedure stops after iterating over all candidates, or if |𝐻| plus the number of chosen edges exceeds 𝑀 · 𝛾. The pruned and ordered neighbor list is then stored in the ACORN index and 𝐻 is discarded.
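A sketch of this pruning rule in Python, again assuming the hypothetical `graph.neighbors` accessor; `candidates` is the ordered (nearest-first) list of up to 𝑀 · 𝛾 candidate edges for the inserted node.

def prune_candidates(graph, candidates, level, M, M_beta, gamma):
    kept = list(candidates[:M_beta])   # always retain the nearest M_beta
    H = set()                          # two-hop neighbors of kept edges
    for c in candidates[M_beta:]:
        if len(H) + len(kept) > M * gamma:
            break                      # stop once the edge budget is exceeded
        if c in H:
            continue                   # c is reachable in two hops: prune it
        kept.append(c)
        H.update(graph.neighbors(c, level))
    return kept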
We highlight that the neighbor expansion during search, described in Section 5.1, can recover pruned neighbors regardless of the query predicate. It follows from ACORN's pruning rule that any node 𝑥 that was pruned from some node 𝑣's neighbor list, 𝑁^𝑙(𝑣), must be in the neighbor list 𝑁^𝑙(𝑦) such that 𝑦 is a neighbor of 𝑣 with index greater than 𝑀𝛽. During search, the neighbor lookup at 𝑣 on level 𝑙 will perform a neighbor-list expansion for all neighbors with an index greater than 𝑀𝛽, thus checking 𝑁^𝑙(𝑦) and finding 𝑥.

We now briefly describe why HNSW's pruning, a metadata-blind mechanism, is insufficient for hybrid search. Consider the simple scenario shown in Figure 5. For a node, 𝑣, inserted into the HNSW index at an arbitrary level 𝑙, the algorithm generates candidate neighbors 𝑎, 𝑏 and 𝑐. HNSW's pruning rule iterates over 𝑣's candidate neighbor list in order of nearest to farthest neighbors. Node 𝑏 is pruned since there exists a neighbor 𝑎 such that 𝑏 is closer to 𝑎 than to 𝑣. This RNG-approximation strategy corresponds to pruning the longest edge of the triangle formed by the triplet 𝑣, 𝑎, 𝑏. In this case, we can prune the edge 𝑣−𝑏 and expect a search path to traverse from 𝑣 to 𝑏 via 𝑎. The problem with this technique arises when we consider the hybrid search setting for an arbitrary predicate. Say 𝑣 and 𝑏 pass a given query predicate 𝑝𝑞, but 𝑎 does not. Then 𝑣, 𝑏, 𝑎 do not form a triangle in the predicate subgraph, and we cannot expect to find the path from 𝑣 to 𝑏 through 𝑎. As a result, HNSW's pruning mechanism will falsely prune edge 𝑣−𝑏. If we had complete knowledge of all possible query predicates, we could ensure that we only prune edges of triangles such that all three vertices always exist in the same subset of possible predicate subgraphs. FilteredDiskANN [25] takes this approach by restricting the set of possible query predicates. However, for arbitrary query predicates, ensuring this property holds becomes intractable.

Figure 5: A comparison of HNSW and ACORN-𝛾's strategies for (a) selecting candidate edges, shown for 𝑀 = 3, and (b) pruning candidate edges for each inserted node 𝑣, shown for 𝑀 = 3, 𝑀𝛽 = 2, 𝛾 = 2.

5.3 ACORN-1

We now describe ACORN-1, an alternative approach which aims to approximate ACORN-𝛾's search performance, while further minimizing index size and TTI. ACORN-1 achieves this by performing the neighbor expansion step solely during search, rather than during construction, as ACORN-𝛾 does. ACORN-1's construction corresponds to the original HNSW index without pruning; equivalently, it corresponds to ACORN-𝛾's construction algorithm with fixed parameters 𝛾 = 1 and 𝑀𝛽 = 𝑀.

ACORN-1's main difference from ACORN-𝛾 during search is its neighbor lookup strategy. Specifically, at each visited node, 𝑣, during greedy search, ACORN-1 uses a full neighbor list expansion to consider all one-hop and two-hop neighbors of 𝑣, before applying the predicate filter and truncating the resulting neighbor list to size 𝑀. Figure 4(c) outlines this procedure.
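Under the same assumed `graph.neighbors` accessor, ACORN-1's full expansion can be sketched as follows; compare it with the partial, 𝑀𝛽-bounded expansion of ACORN-𝛾 shown earlier.

def acorn1_neighbors(graph, v, level, pred, M):
    # Full expansion (Figure 4(c)): gather all one-hop and two-hop
    # neighbors of v, then filter by the predicate and truncate to M.
    one_hop = list(graph.neighbors(v, level))
    candidates = list(one_hop)
    for u in one_hop:
        candidates.extend(graph.neighbors(u, level))
    return [u for u in candidates if pred(u)][:M]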
6 DISCUSSION

In this section we analyze the ACORN index's space complexity, construction complexity and search performance. We focus our attention on ACORN-𝛾, since ACORN-1's index construction represents a special case of ACORN-𝛾 for fixed parameters (𝛾 = 1, 𝑀𝛽 = 𝑀), and we empirically show that ACORN-1 search approximates ACORN-𝛾 in Section 7. We note that our analysis in Sections 6.2 and 6.3 considers the complexity scaling of the search procedure under the assumption that we build the exact Delaunay graphs rather than approximate ones.

6.1 Index Size

The average memory consumption per node of the ACORN-𝛾 index is 𝑂(𝑀𝛽 + 𝑀 + 𝑚𝐿 · 𝑀 · 𝛾), assuming the number of bytes per edge is constant. For comparison, average memory consumption per node for the HNSW index scales 𝑂(𝑀 + 𝑚𝐿 · 𝑀). Overall, ACORN-𝛾 increases the bottom level's memory consumption by 𝑂(𝑀𝛽) per
node, and increases the higher levels' memory consumption by a factor of 𝛾 per node.

To understand ACORN's memory consumption we evaluate the average number of neighbors stored per node. At level 0, compression is applied to the candidate edge lists of size 𝑀 · 𝛾, resulting in neighbor sets of length 𝑀𝛽 plus a compressed set which scales 𝑂(𝑀). We show this empirically in Figure 12. On higher levels, nodes have at most 𝑀 · 𝛾 edges. We multiply this by the average number of levels that an element is added to, given by E[𝑙 + 1] = E[−ln(unif(0, 1)) · 𝑚𝐿] + 1 = 𝑚𝐿 + 1.

While we specifically target compression to level 0 in this work, because it uses the most space, compression could be applied to more levels in bottom-up order to further reduce the index size for large datasets. Denoting 𝑛𝑐 as the chosen number of compressed levels, the average memory consumption per node in this generalized case is 𝑂(𝑛𝑐 · (𝑀𝛽 + 𝑀) + (𝑚𝐿 − 𝑛𝑐) · (𝑀 · 𝛾)).
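For intuition, the following back-of-the-envelope computation instantiates these per-node edge counts for one illustrative parameter setting; the specific numbers are ours, chosen only to have the same shape as the settings used later in Section 7, and the constants hidden by 𝑂(·) are ignored.

import math

M, M_beta, gamma = 32, 64, 12
m_L = 1 / math.log(M)
level0 = M_beta + M            # retained exact edges plus compressed set
upper = m_L * (M * gamma)      # expected edges on levels above 0
print(level0 + upper)          # ≈ 206.8 edges/node for ACORN-γ
print(M + m_L * M)             # ≈ 41.2 edges/node for plain HNSW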
6.2 Construction Complexity

For fixed parameters 𝑀, 𝑀𝛽 and efc, ACORN-𝛾's overall expected construction complexity scales 𝑂(𝑛 · 𝛾 · log(𝑛) · log(𝛾)). Compared to HNSW, which has 𝑂(𝑛 · log(𝑛)) expected construction complexity, ACORN-𝛾 increases TTI by a factor of 𝛾 · log(𝛾) due to the expanded edge lists it generates.

We now describe ACORN's construction complexity in detail by decomposing it into the following three factors: (i) the number of nodes in the dataset, given by 𝑛, (ii) the expected number of levels searched to insert each node into the index, and (iii) the expected complexity of searching on each level. By design, ACORN's expected maximum level index scales 𝑂(log 𝑛) according to its level-assignment probability, which is the same as HNSW's. This provides our bound on (ii).

Turning our attention to (iii), we will first consider the length of the search path and then consider the computation cost incurred at each visited node. For the HNSW level probability assignment, it is known that the expected greedy search path length is bounded by a constant 𝑆 = 1/(1 − exp(−𝑚𝐿)) [48]. We can bound ACORN's expected search path length by 𝑂(𝛾) since the path reaches a greedy minimum in a constant number of steps and proceeds to expand the search scope by at most 𝑀 · 𝛾 nodes to collect up to 𝑀 · 𝛾 candidate neighbors during construction.

The computation complexity at each visited node along the search path is 𝑂(log(𝛾)), seen as follows. For each node visited, we first check its neighbor list to find at most 𝑀 un-visited nodes, on which we perform distance computations in 𝑂(𝑀 · 𝑑) time. Then, we update the sorted lists of candidate nodes and results in 𝑂(𝑀 · 𝑑 · log(𝛾 · 𝑀)) time. Treating 𝑀 and 𝑑 as constants, we see that at each visited node the computation complexity is 𝑂(log 𝛾), and for greedy search at each level, the complexity is 𝑂(𝛾 · log(𝛾)). Multiplying by 𝑛 · log(𝑛) yields ACORN's final expected construction complexity, 𝑂(𝑛 · 𝛾 · log(𝑛) · log(𝛾)).

6.3 Search Analysis

Turning our attention to ACORN-𝛾's search algorithm, we will first point out several properties of HNSW that ACORN's predicate subgraphs aim to emulate. In Figure 7 we empirically show that ACORN's search performance approximates that of the HNSW oracle partition index. We will then describe ACORN's expected search complexity. We define 𝑙 : 𝑋 → ℕ to be the mapping of nodes to their maximum level index in ACORN-𝛾.

6.3.1 Index and Search Properties. Intuitively, for a given query, ACORN's predicate subgraph will emulate the HNSW oracle partition index when the predicate subgraph forms a hierarchical structure, each node in the subgraph has degree close to 𝑀, the subgraph has a fixed entrypoint at its maximum level index that we can efficiently find during search, and the subgraph is connected. We will examine each of these properties separately and consider when they hold. We also note one main difference between ACORN's predicate subgraphs and HNSW that arises due to ACORN's predicate-agnostic pruning: each level of ACORN approximates a KNN graph, while each level of HNSW approximates an RNG graph. While this difference does not affect ACORN's expected search complexity in Section 6.3.2, Malkov et al. [48] demonstrated that the RNG-based pruning empirically improves performance.

Hierarchy. First, we observe that the arbitrary predicate subgraph 𝐺(𝑋𝑝) forms a controllable hierarchy similar to the HNSW oracle partition index built over 𝑋𝑝 with parameter 𝑀. This is by design. ACORN-𝛾's construction fixes 𝑀, and consequently 𝑚𝐿, the level normalization constant. As a result, nodes of 𝑋𝑝 in the ACORN-𝛾 index are sampled at rates equal to the level probabilities of the HNSW partition. Ensuring this level sampling holds allows us to bound the expected greedy search path length at each level by a constant, 𝑆, as Malkov et al. [48] previously show.

Bounded Degree. Next, we will describe degree bounds, an important factor that impacts greedy search efficiency and convergence. While HNSW upper-bounds the degree of each node by 𝑀 during construction, ACORN-𝛾 enforces this upper bound during search. This ensures ACORN's search performs a constant number of distance computations per visited node. We now focus our attention on lower-bounding the degree of nodes visited during ACORN-𝛾's search over the predicate subgraph.

If a node in the predicate subgraph has degree much lower than 𝑀, this could adversely impact the search convergence and thus recall. For a dataset and query predicate that exhibit no predicate clustering, for any node 𝑣 in 𝐺(𝑋𝑝):

E[ |𝑁𝑝^𝑙(𝑣)| ] = |𝑁^𝑙(𝑣)| · 𝑠 = 𝛾 · 𝑀 · 𝑠 > 𝑀, ∀𝑠 > 𝑠𝑚𝑖𝑛

This also holds as a lower bound for datasets with predicate clustering, in which case 𝑃𝑟(𝑥 ∈ 𝑁𝑝^𝑙(𝑣)) > 𝑠, ∀𝑥 ∈ 𝑁^𝑙(𝑣), where 𝑣 is a node in the predicate cluster. Thus we will continue our lower-bound analysis of node degrees under the worst-case assumption of no predicate clustering. Using the binomial concentration inequality with parameter 𝑠, and union-bounding over the expected search path length, we show that for the search path P = 𝑣1 − ... − 𝑣𝑦 over an arbitrary predicate subgraph:

𝑃𝑟[ ⋃_𝑣∈P |𝑁𝑝(𝑣)| ≤ (1 − 𝛿)𝑀 ] ≤ 𝑂(log 𝑛 · exp(−𝛿²𝛾𝑀𝑠/2))

We also analyze the probability that the subgraph traversal gets disconnected, which we bound by:

𝑃𝑟[ ⋃_𝑣∈P |𝑁𝑝(𝑣)| ≤ 0 ] ≤ 𝑂(log 𝑛 · (1 − 𝑠)^(𝑀·𝛾))

We see that both bounds decay exponentially in 𝛾.
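Plugging representative values into these bounds (ignoring the constants hidden by 𝑂(·); the values here are our own illustrative choices) makes the decay concrete:

import math

M, gamma, s, delta, n = 32, 12, 1 / 12, 0.5, 1e6
log_n = math.log(n)
low_degree = log_n * math.exp(-delta**2 * gamma * M * s / 2)
disconnect = log_n * (1 - s) ** (M * gamma)
print(f"{low_degree:.3g} {disconnect:.3g}")  # ≈ 0.253 and ≈ 4e-14
# Increasing gamma drives both quantities toward zero exponentially.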
Fixed Entry-point. Similar to HNSW, ACORN's search begins from a fixed entry-point, chosen during construction. This pre-defined entry-point provides a simple and effective strategy that is also predicate-independent and robust to variations in query correlation, as we empirically show in Figure 10.

Intuitively, we expect the search to successfully navigate from ACORN's fixed entry-point, 𝑒, to the predicate-subgraph entry-point, 𝑒𝑝, when we find a node that passes the predicate on an upper level of the index that is fully connected. In this case, there will exist a one-hop path from 𝑒 to 𝑒𝑝. We consider 𝑒𝑝 to be an arbitrary node that passes a given predicate 𝑝 and is on the maximum level of the predicate subgraph. The index's neighbor expansion parameter, 𝛾, causes the index's upper levels to be denser and, specifically, those with fewer than 𝑀 · 𝛾 nodes to be fully connected. When these fully connected levels contain at least one node that passes the predicate, the search is guaranteed to route from 𝑒 to 𝑒𝑝. Since ACORN samples all nodes with equal probability at each level, the probability that nodes passing a given predicate, 𝑝, exist on some level is proportional to the predicate's selectivity, which takes a lower bound of 𝑠𝑚𝑖𝑛 = 1/𝛾.

Connectivity. We note that neither HNSW nor ACORN provides theoretical guarantees on connectivity over its level graphs for arbitrary datasets. Thus we instead rely primarily on empirical results for our analysis. However, for some cases, we can expect ACORN's predicate subgraph to be connected when the HNSW oracle partition is connected. Two such cases are when 𝑋𝑝 exhibits no predicate clustering, or 𝑋𝑝 is clustered around a single region. In either case, each node has an expected degree of at least 𝑀 and each level approximates a KNN graph, which is connected when 𝐾 ≫ log 𝑛. We empirically show in Figure 13a that ACORN's predicate subgraphs exhibit connectivity for real datasets and hybrid search queries. To analyze potential connectivity problems, we recommend benchmarking ACORN's hybrid search performance against HNSW's ANN search performance using equivalent 𝑀 and efc parameters. If a significant gap in accuracy exists, we recommend incrementally increasing 𝛾 from its initial value of 1/𝑠𝑚𝑖𝑛.

6.3.2 Search Complexity. ACORN-𝛾's expected search complexity scales:

𝑂((𝑑 + 𝛾) · log(𝑠 · 𝑛) + log(1/𝑠))

This approximates the HNSW oracle partition's expected search complexity, 𝑂(𝑑 · log(𝑠 · 𝑛)). Intuitively, ACORN-𝛾's search path performs some filtering at the upper levels before likely entering and traversing the predicate subgraph, during which ACORN incurs a small overhead compared to HNSW search in order to perform the predicate filtering step over each neighbor list.

We derive ACORN-𝛾's search complexity by considering two stages of its search traversal. In the first stage, search begins from a pre-defined entry-point 𝑒, which need not pass the query predicate. In this stage, the search performs filtering only, dropping down each level on which the filtered neighbor list, 𝑁𝑝(𝑒), is found to be empty. Once the traversal reaches the first node, 𝑒𝑝, that passes the predicate, it enters the second stage, beginning its traversal over the predicate subgraph 𝐺(𝑋𝑝).

In stage 1 the greedy search path on each layer has length 1, and occurs over 𝑂(log 𝑛 − log(𝑠 · 𝑛)) expected levels, yielding the complexity 𝑂(log(1/𝑠)). We see this because the expected maximum level index of the full ACORN index graph scales 𝑂(log 𝑛) based on its level-assignment probability [48]. Meanwhile, the predicate subgraph 𝐺(𝑋𝑝) of size 𝑠 · 𝑛 has an expected maximum level index of 𝑂(log(𝑠 · 𝑛)), once again according to its level sampling procedure.

The second stage of the search traverses the predicate subgraph in expected 𝑂((𝑑 + 𝛾) · log(𝑠 · 𝑛)) complexity. As we previously describe, the expected maximum level index of the predicate subgraph scales 𝑂(log(𝑠 · 𝑛)). At each level, the expected greedy path length can be bounded by a constant 𝑆 due to the index level sampling procedure employed during construction. For each node visited along the greedy path, we perform distance computations in 𝑂(𝑑) time on at most 𝑀 neighbors, and perform constant-time predicate evaluations over at most 𝑀 · 𝛾 neighbors.

7 EVALUATION

We evaluate ACORN through a series of experiments on real and synthetic datasets. Overall, our results show the following:

• ACORN-𝛾 achieves state-of-the-art hybrid search performance, outperforming existing methods by 2–1,000× higher QPS at 0.9 recall on both prior benchmark datasets with simple, low-cardinality predicate sets, and more complex datasets with high-cardinality predicate sets. Specifically, ACORN achieves 2–10× higher QPS on prior benchmarks, over 30× higher QPS on new benchmarks, and over 1,000× higher QPS at scale on a 25-million-vector dataset.
• ACORN-𝛾 and ACORN-1 are predicate-agnostic methods, providing robust search performance under variations in predicate operators, predicate selectivity, query correlation, and dataset size.
• ACORN-1 and ACORN-𝛾 exhibit trade-offs between search performance and construction overhead. While ACORN-𝛾 achieves up to 5× higher QPS than ACORN-1 at fixed recalls, ACORN-1 can be constructed with 9–53× lower time-to-index (TTI).

We now discuss our results in detail. We first describe the datasets (7.1) and baselines (7.2) we use. Then, we present a systematic evaluation of ACORN's search performance (7.3). Finally, we assess ACORN's construction efficiency (7.4). We run all experiments on an AWS m5d.24xlarge instance with 370 GB of RAM, 96 vCPUs, and 196 threads.

7.1 Datasets

We conduct our experiments on two datasets with low-cardinality predicate sets (LCPS) and two datasets with high-cardinality predicate sets (HCPS). The LCPS datasets allow us to benchmark prior works that only support a constrained set of query predicates. The HCPS datasets consist of more complex and realistic query workloads, allowing us to more rigorously evaluate ACORN's search performance. Table 2 provides a concise summary of all datasets.

² On the TripClick dataset, we create two distinct query workloads, described in Section 7.1.2. The average selectivity for either workload is .17 (keywords), and .26 (dates).
³ On the LAION dataset, we create four distinct query workloads, described in Section 7.1.2. These workloads have average selectivities of .10 (no-cor), .13 (pos-cor), .069 (neg-cor), .056 (regex).
Table 2: Datasets (base data and query workloads)

Dataset | # Vectors | Vector Dim | Vector Source | Structured Data | Predicate Operators | Avg. Query Predicate Selectivity | Query Predicate Cardinality
SIFT1M | 1,000,000 | 128 | images | random int. | equals(𝑦) | 0.083 | 12
Paper | 2,029,997 | 200 | passages | random int. | equals(𝑦) | 0.083 | 12
TripClick | 1,055,976 | 768 | passages | clinical area list & publication date | contains(𝑦1 ∨ 𝑦2 ∨ ...) & between(𝑦1, 𝑦2) | 0.17, 0.36² | > 10^8
LAION (1M) | 1,000,448 | 512 | images | text captions & keyword list | regex-match(𝑦) & contains(𝑦1 ∨ 𝑦2 ∨ ...) | 0.056 - 0.13³ | > 10^11
LAION (25M) | 24,653,427 | 512 | same as above | same as above | same as above | same as above | same as above

7.1.1 Datasets with Low Cardinality Predicate Sets. We use SIFT1M [35] and Paper [63], the two largest publicly-available datasets used to evaluate recent specialized indices [25, 63]. For both datasets, we follow related works [25, 62, 63] to generate structured attributes and query predicates: for each base vector, we assign a random integer in the range 1–12 to represent structured attributes; and for each query vector, the associated query predicate performs an exact match with a randomly chosen integer in the attribute value domain. The resulting query predicate set has a cardinality of 12.

SIFT1M: The SIFT1M dataset was introduced by Jegou et al. in 2011 for ANN search. It consists of a collection of 1M base vectors and 10K query vectors. All of the vectors are 128-dimensional local SIFT descriptors [43] from INRIA Holidays images [33].

Paper: Introduced by Wang et al. in 2022, the Paper dataset consists of about 2M base vectors and 10K query vectors. The dataset is generated by extracting and embedding the textual content from an in-house corpus of academic papers.

7.1.2 Datasets with High Cardinality Predicate Sets. We use the TripClick and LAION datasets in our experiments with HCPS datasets.

TripClick: The TripClick dataset, introduced by Rekabsaz et al. in 2021 for text retrieval, contains a real hybrid search query workload and base dataset from the click logs of a health web search engine. Each query consists of natural language search terms along with optional filters on clinical areas (e.g., "cardiology", "infectious disease", "surgery") and publication years. Each entity in the base dataset consists of a text passage, with a list of associated clinical areas and a publication date. The dataset contains 28 unique clinical areas and publication dates ranging from 1900 to 2020, resulting in over 2^28 possible query predicates total. We construct two query workloads, one consisting of queries that used date filters (dates) and another consisting of queries that used clinical area filters (areas). We generate 768-dimensional vectors from the query texts and passage texts using DPR [36], a widely-used, pre-trained encoder for open-domain Q&A. The resulting dataset has about 1M base vectors, and we use a random sample of 1K queries for each query workload.

LAION: The LAION dataset [55] consists of 400M image embeddings and captions describing each image. The vector embeddings are generated from web-scraped images using CLIP [53], a multi-modal language-vision model. In our evaluation, we construct two base datasets using 1M and 25M LAION subsets, both consisting of image vectors and text captions as a structured attribute. We also generate an additional structured attribute consisting of a keyword list. We assign each image embedding its keyword list by taking the 3 words with highest text-to-image CLIP scores from a candidate list of 30 common adjectives and nouns (e.g., "animal", "scary").

To evaluate a series of micro-benchmarks, we generate four query workloads. For each query workload, we sample 1K vectors from the dataset as query vectors. We construct the regex query workload with predicates that perform regex-matching over the image captions. For each query predicate, we randomly choose strings of 2-10 regex tokens (e.g., "^[0-9]"). In addition, we construct three query workloads with predicates, similar to TripClick, that take a keyword list and filter out entities that do not have at least one matching keyword. Using this setup, we are able to easily control for correlation in the workload, and we generate a no correlation (no-cor), positive correlation (pos-cor), and negative correlation (neg-cor) workload. Figure 6 demonstrates some example queries and multi-modal retrieval results taken from each.

7.2 Benchmarked Methods

We briefly overview the methods we benchmark along with tested parameters. We implement ACORN-𝛾, ACORN-1, pre-filtering, and HNSW post-filtering in C++ in the FAISS codebase [5].

HNSW Post-filtering: To implement HNSW post-filtering, for each hybrid query with predicate selectivity 𝑠, we over-search the HNSW index, gathering 𝐾/𝑠 candidate results before applying the query filter. We note that this differs from some prior work [25], where HNSW post-filtering is implemented by collecting only 𝐾 candidate results, leading to significantly worse baseline query performance than ours. For the SIFT1M, Paper and LAION datasets, we use FAISS's default HNSW construction parameters: 𝑀 = 32, efc = 40. For the TripClick dataset, we find that the HNSW index for these parameters is unable to obtain high recalls for the standard ANN search task, thus we perform parameter tuning, as is standard. We perform a grid search for 𝑀 ∈ {32, 64, 128} and efc ∈ {40, 80, 120, 160, 200} and choose the pair that obtains the highest QPS at 0.9 Recall for ANN search. For TripClick, we choose 𝑀 = 128, efc = 200. We generate each recall-QPS curve by varying the search parameter efs from 10 to 800 in step sizes of 50.

Pre-filtering: We implement pre-filtering by first generating a list of dataset entries that pass the query predicate and then performing brute-force search using FAISS's optimized implementation for distance comparisons. We also efficiently implement all contains predicate evaluations using bitsets since the corresponding structured attributes have low cardinality.
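For reference, the over-search step can be sketched against the FAISS Python bindings as follows; the paper's actual baseline is implemented in C++, and `xb`, `xq`, `s`, and `passes_predicate` here are assumed inputs of our own naming.

import faiss
import numpy as np

# Assumed inputs: xb (n x d float32 base vectors), xq (d-dim float32
# query), selectivity estimate s, and a passes_predicate(id) callback.
d, K = xb.shape[1], 10
index = faiss.IndexHNSWFlat(d, 32)          # M = 32
index.hnsw.efConstruction = 40
index.add(xb)

index.hnsw.efSearch = 200                   # swept from 10 to 800 in the paper
k_expanded = int(np.ceil(K / s))            # expand the search scope by 1/s
_, ids = index.search(xq.reshape(1, -1), k_expanded)
results = [i for i in ids[0] if i != -1 and passes_predicate(i)][:K]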
Figure 6: The figure contrasts retrieval results using vector-only similarity search (bottom left) versus hybrid search (right) on the LAION
dataset. Both use the same query image (top left), and the hybrid search queries also include a structured query filter consisting of a keyword
list, here containing a single keyword. The table on the right shows examples from three hybrid search query workloads: positive query
correlation (top), no query correlation (middle), and negative query correlation (bottom).

Filtered-DiskANN: We evaluate both algorithms implemented in to be no larger than the Vamana indices on the LCPS datasets and
FilteredDiskANN [4], namely FilteredVamana and StitchedVamana. no larger than twice the size of the flat indices for HCPS datasets.
For both, we follow the recommended construction and search We use 𝑀𝛽 values of 32 for LAION-1M and LAION-25M, 64 for
parameters according to the hyper-parameter tuning procedure SIFT1M, Paper, and 128 for TripClick. We choose the construction
described by Gollapudi et al. [25]. For FilteredVamana, we use parameter 𝛾 according to the expected minimum selectivity query
construction parameters 𝐿 = 90, 𝑅 = 96, which generated the predicates of each dataset i.e., 𝛾 = 12 for SIFT1M and Paper, 𝛾 = 30
Pareto-Optimal recall-QPS curve from a parameter sweep over 𝑅 ∈ for LAION, and 𝛾 = 80 for TripClick. To generate the recall-QPS
{32, 64, 96} and L between 50 and 100. For StitchedVamana, we use curve, we follow the same procedure described above for HNSW
construction parameters 𝑅𝑠𝑚𝑎𝑙𝑙 = 32, 𝐿𝑠𝑚𝑎𝑙𝑙 = 100, 𝑅𝑠𝑡𝑖𝑡𝑐ℎ𝑒𝑑 = 64 post-filtering.
and 𝛼 = 1.2, which generated the Pareto-Optimal recall-QPS curve ACORN-1: We construct ACORN-1 and generate the recall-QPS
from a parameter sweep over 𝑅𝑠𝑚𝑎𝑙𝑙 , 𝑅𝑠𝑡𝑖𝑡𝑐ℎ𝑒𝑑 ∈ {32, 64, 96} and curve following the same procedure we use for ACORN-𝛾, except
𝐿𝑠𝑚𝑎𝑙𝑙 between 50 and 100. To generate the recall-QPS curves we that we fix 𝛾 = 1 and 𝑀𝛽 = 𝑀.
Filtered-DiskANN: We evaluate both algorithms implemented in FilteredDiskANN [4], namely FilteredVamana and StitchedVamana. For both, we follow the recommended construction and search parameters according to the hyper-parameter tuning procedure described by Gollapudi et al. [25]. For FilteredVamana, we use construction parameters L = 90, R = 96, which generated the Pareto-optimal recall-QPS curve from a parameter sweep over R ∈ {32, 64, 96} and L between 50 and 100. For StitchedVamana, we use construction parameters R_small = 32, L_small = 100, R_stitched = 64 and α = 1.2, which generated the Pareto-optimal recall-QPS curve from a parameter sweep over R_small, R_stitched ∈ {32, 64, 96} and L_small between 50 and 100. To generate the recall-QPS curves, we vary L from 10 to 650 in increments of 20 for FilteredVamana, and L_small from 10 to 330 in increments of 20 for StitchedVamana.

NHQ: We evaluate the two algorithms, NHQ-NPG_NSW and NHQ-NPG_KGraph, proposed in [63]. For both, we use the recommended parameters in the released codebase [12]. These parameters were selected using a hyperparameter grid search in order to generate the Pareto-optimal recall-QPS curve for either algorithm on the SIFT1M and Paper datasets. We generate the recall-QPS curve by varying L between 10 and 310 in steps of 20. In Figures 8b and 7b, we show the query performance of KGraph, the more performant of the two algorithms.
Milvus: We test four Milvus algorithms: IVF-Flat, IVF-SQ8, HNSW, and IVF-PQ [6]. For each we test the same parameters as Gollapudi et al. [25]. Since we find that the four Milvus algorithms achieve similar search performance, for simplicity, Figures 8b and 7b show only the method with Pareto-optimal recall-QPS performance.

Oracle Partition Index: We implement this method by constructing an HNSW index for each possible query predicate in the LCPS datasets. For a given hybrid query, we search the HNSW partition corresponding to the query's predicate. To construct each HNSW partition and generate the recall-QPS curve, we use the same parameters as the HNSW post-filtering method, described above.
ACORN-γ: We choose the construction parameters M and efc to be the same as the HNSW post-filtering baseline, described above. We find that ACORN-γ's search performance is relatively insensitive to the choice of the construction parameter Mβ, as Figure 12c shows. Thus, to maintain modest construction overhead, we choose Mβ to be a small multiple of M, i.e., Mβ = M or Mβ = 2M, picking the parameter for each dataset that obtains higher QPS at 0.9 Recall. Specifically, we constrain the memory budget of the index to be no larger than the Vamana indices on the LCPS datasets and no larger than twice the size of the flat indices for HCPS datasets. We use Mβ values of 32 for LAION-1M and LAION-25M, 64 for SIFT1M and Paper, and 128 for TripClick. We choose the construction parameter γ according to the expected minimum selectivity of query predicates for each dataset, i.e., γ = 12 for SIFT1M and Paper, γ = 30 for LAION, and γ = 80 for TripClick. To generate the recall-QPS curve, we follow the same procedure described above for HNSW post-filtering.

ACORN-1: We construct ACORN-1 and generate the recall-QPS curve following the same procedure we use for ACORN-γ, except that we fix γ = 1 and Mβ = M.
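Since every method above is evaluated by sweeping a search parameter and recording throughput and recall, a small harness like the following captures how the recall-QPS curves in this section can be traced. This is a hedged sketch: `search_fn` stands in for any index's search call, and the efs grid mirrors the sweep described above.

```python
import time
import numpy as np

def recall_at_k(found, truth, k=10):
    # Recall@k: fraction of the k ground-truth neighbors that are returned.
    return len(set(found[:k]) & set(truth[:k])) / k

def trace_recall_qps(search_fn, queries, ground_truth, efs_grid=range(10, 810, 50)):
    # One (recall, QPS) point per efs value; search_fn(q, efs) -> ranked ids.
    curve = []
    for efs in efs_grid:
        t0 = time.perf_counter()
        ids = [search_fn(q, efs) for q in queries]
        qps = len(queries) / (time.perf_counter() - t0)
        recall = float(np.mean([recall_at_k(r, t) for r, t in zip(ids, ground_truth)]))
        curve.append((recall, qps))
    return curve
```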
7.3 Search Performance Results
We will begin our evaluation with benchmarks on the LCPS datasets, on which we are able to run all baseline methods as well as the oracle partition method. We will then present an evaluation on the HCPS datasets. On these datasets, the FilteredDiskANN and NHQ algorithms fail because they are unable to handle the high-cardinality query predicate sets and non-equality predicate operators. As of this writing, we also find that Milvus cannot support regex-match predicates or contains predicates over variable-length lists. As a result, we instead focus on comparing ACORN to the pre- and post-filtering baselines for the HCPS datasets. We report QPS averaged over 50 trials.

Figure 7: Recall@10 vs QPS on SIFT1M and Paper. Panels: (a) SIFT1M Dataset, (b) Paper Dataset.
Figure 8: Recall@10 vs QPS on TripClick and LAION-1M. Panels: (a) TripClick (areas), (b) TripClick (dates), (c) LAION1M (regex).
7.3.1 Benchmarks on LCPS Datasets. Figure 7 shows that ACORN-γ achieves state-of-the-art hybrid search performance and best approximates the theoretically ideal oracle partition strategy on the SIFT1M and Paper datasets. Notably, even compared to NHQ and FilteredDiskANN, which specialize for LCPS datasets, ACORN-γ consistently achieves 2-10× higher QPS at fixed recall values, while maintaining generality. Additionally, we see ACORN-1 approximates ACORN-γ's search performance, attaining about 1.5-5× lower QPS than ACORN-γ across a range of recall values.

To further investigate the relative search efficiency of ACORN-γ and ACORN-1, we turn our attention to Table 3, which shows the number of distance computations required by each method to obtain Recall@10 equal to 0.8. We see that the oracle partition method is the most efficient, requiring the fewest distance computations on both datasets. ACORN-γ is the next most efficient by this measure. While ACORN-γ approximates the oracle partition method, its predicate-agnostic design precludes the RNG-based pruning used to construct the oracle partitions. Rather than approximating RNG-graphs, ACORN-γ's levels approximate KNN-graphs, which are less efficient to search over, explaining the performance gap. The table additionally shows that ACORN-1 is less efficient than ACORN-γ, which is explained by the candidate edge generation used in ACORN-1. While the ACORN-γ index stores up to M × γ edges per node during construction, ACORN-1 stores only up to M edges per node during construction, and approximates an edge list of size M × γ for each node during search using its neighbor expansion strategy. This approximation results in a slight degradation of neighbor list quality and thus search performance. Finally, we see from the table that HNSW post-filtering is the least efficient of the listed methods. This is because while ACORN-1 and ACORN-γ almost exclusively traverse nodes that pass the query predicate, the post-filtering algorithm is less discriminating and wastes distance computations on nodes failing the query predicate.

Table 3: # Distance Computations to Achieve 0.8 Recall
                    SIFT1M              Paper
Oracle Partition    398.0               281.1
ACORN-γ             611.0 (+53.5%)      383.7 (+36.6%)
ACORN-1             999.6 (+151.0%)     567.8 (+101.2%)
HNSW Post-filter    1837.8 (+362.6%)    1425.5 (+406.2%)
* Percentage difference is shown in parentheses and is relative to the oracle partition method.

Returning to Figure 7, we see that the relative search efficiency, measured by QPS versus recall, of the oracle partition method, ACORN-γ, and ACORN-1 is affected not only by distance computations but also by vector dimensionality. We see that both ACORN-1 and ACORN-γ perform closer to the oracle partition method on the Paper dataset, while the performance gap grows slightly on SIFT1M. This is due to the cost of performing the filtering step over neighbor lists during search, which, relative to the cost of distance computations, is higher on SIFT1M than Paper since SIFT1M uses slightly lower-dimensional vectors.
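The traversal behavior described above — spending distance computations almost exclusively on predicate-passing nodes — can be illustrated with a simplified, single-level sketch of greedy best-first search restricted to a predicate subgraph. This is an illustrative reading of the idea rather than ACORN's actual implementation; it assumes a predicate-passing entry point and hypothetical `neighbors` and `passes` structures.

```python
import heapq
import numpy as np

def predicate_subgraph_search(neighbors, vecs, passes, q, entry, ef=50, k=10):
    # Greedy best-first search over the subgraph induced by predicate-passing
    # nodes. `neighbors` maps node -> adjacency list; `passes` is a boolean
    # array encoding the query predicate.
    dist = lambda v: float(np.sum((vecs[v] - q) ** 2))
    assert passes[entry], "sketch assumes a predicate-passing entry point"
    visited = {entry}
    frontier = [(dist(entry), entry)]      # min-heap of nodes to expand
    results = [(-dist(entry), entry)]      # max-heap (negated) of best nodes
    while frontier:
        d, v = heapq.heappop(frontier)
        if len(results) == ef and d > -results[0][0]:
            break                          # frontier can no longer improve results
        for u in neighbors[v]:
            # Only predicate-passing neighbors are visited, so distance
            # computations are spent on the predicate subgraph.
            if u in visited or not passes[u]:
                continue
            visited.add(u)
            du = dist(u)
            heapq.heappush(frontier, (du, u))
            heapq.heappush(results, (-du, u))
            if len(results) > ef:
                heapq.heappop(results)
    # Return the k closest passing nodes as (distance, node) pairs.
    return sorted((-nd, u) for nd, u in results)[:k]
```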
7.3.2 Benchmarks on HCPS Datasets. Figure 8 shows that ACORN outperforms the baselines with 30-50× higher QPS at 0.9 recall on TripClick and LAION-1M, and as before, ACORN-1 approximates ACORN-γ's search performance. On both datasets, pre-filtering is prohibitively expensive, obtaining perfect recall at the price of efficiency. Meanwhile, post-filtering fails to obtain high recall, likely due to the presence of varied query correlation and predicate selectivity, which we explore further next.

Varied Predicate Selectivity: We use the TripClick dataset to evaluate ACORN's search performance across a range of realistic predicate selectivities. Figure 9 demonstrates that for each predicate selectivity percentile, ACORN-γ achieves 5-50× higher QPS at 0.9 recall compared to the next best-performing baseline. Once again ACORN-1 trails behind ACORN-γ. We see that for low selectivity predicates, the pre-filtering method is most competitive, while the post-filtering baseline suffers from over 10× lower QPS than ACORN at fixed recall. However, for high selectivity predicates, pre-filtering becomes less competitive while the post-filtering baseline obtains higher throughput, although its recall remains low.

Varied Query Correlation: Next we control for query correlation and evaluate ACORN on three different query workloads using the LAION-1M dataset. Figure 10 demonstrates that ACORN-γ is robust to variations in query correlation and attains 28-100× higher QPS at 0.9 recall than the next best baseline in each case. In the negative correlation case, the performance gap between post-filtering and the ACORN methods is the largest since post-filtering cannot successfully route towards nodes that pass the predicate. In the positive correlation case, ACORN-γ once again outperforms the baselines, but post-filtering becomes more competitive, although it is still unable to attain recall above 0.9. The pre-filtering method's QPS remains relatively unchanged, and is only affected by small variations in predicate selectivity for each query workload. As before, ACORN-1 approaches ACORN-γ's search performance.

Scaling Dataset Size: Figure 11 shows ACORN's search performance on LAION-25M with the no-correlation query workload, demonstrating that the performance gap between ACORN and existing baselines only grows as the dataset size scales. At 0.9 recall, ACORN-γ achieves over three orders of magnitude higher QPS than the next best-performing baseline. As before, ACORN-1's search performance approximates that of ACORN-γ.
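Predicate selectivity, as used throughout this subsection, is simply the fraction of base entities that satisfy a query predicate. The short sketch below (with hypothetical helper names) shows how a workload's selectivity percentiles, like the 1/25/50/75/99-percentile buckets in Figure 9, can be computed.

```python
import numpy as np

def predicate_selectivity(passes):
    # Selectivity s of one predicate = fraction of base entities passing it.
    return float(np.mean(passes))

def selectivity_percentiles(workload_masks, percentiles=(1, 25, 50, 75, 99)):
    # Summarize a query workload by percentiles of its predicate selectivities.
    sels = np.array([predicate_selectivity(m) for m in workload_masks])
    return dict(zip(percentiles, np.percentile(sels, percentiles)))
```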
7.4 Index Construction
We will now evaluate ACORN's construction procedure, including its indexing time and space footprint, ACORN-γ's compression procedure, and the predicate subgraph quality resulting from ACORN-γ's neighbor expansion approach.

7.4.1 TTI and Space Footprint. First, we analyze ACORN's space footprint and indexing time. Tables 4 and 5 show the time-to-index and index size of ACORN-γ and ACORN-1 compared to the best-performing baselines. The reported index sizes for each method show the total space footprint of both vector storage and the index itself. All methods are measured using the parameters reported in Section 7.2.
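Both construction metrics are straightforward to measure; the following sketch shows one way to record time-to-index and a vectors-plus-graph space footprint. The `build_fn` callable and the assumption that the index exposes a dict `neighbors` of int32 edge lists are illustrative, not the paper's code.

```python
import time

def measure_tti(build_fn, base_vecs, attrs):
    # Time-to-index (TTI): wall-clock seconds for one full index build.
    t0 = time.perf_counter()
    index = build_fn(base_vecs, attrs)
    return index, time.perf_counter() - t0

def space_footprint_gb(index, base_vecs):
    # Vectors plus graph edges; assumes the (hypothetical) index exposes
    # a dict `neighbors` mapping node id -> list of int32 neighbor ids.
    edge_bytes = sum(4 * len(nbrs) for nbrs in index.neighbors.values())
    return (base_vecs.nbytes + edge_bytes) / 1e9
```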
Figure 9: Recall@10 vs QPS for Varied Selectivity Query Filters on TripClick. Panels: (a) 1p Sel (s=0.0127), (b) 25p Sel (s=0.0485), (c) 50p Sel (s=0.1215), (d) 75p Sel (s=0.2529), (e) 99p Sel (s=0.6164).
Table 4: TTI (s)
                TripClick   LAION-1M   LAION-25M   Sift1M   Paper
ACORN-γ         9902.9      835.8      38,007.5    148.9    255.6
ACORN-1         322.9       25.9       705.3       8.6      27.0
HNSW            891.0       32.9       1,147.2     11.3     29.2
FilteredVamana  NA          NA         NA          18.3     51.9
StitchedVamana  NA          NA         NA          69.2     189.7

Figure 10: Recall@10 vs QPS on LAION1M. Panels: (a) Neg. Correlation, (b) No Correlation, (c) Pos. Correlation.
Table 5: Index Size (GB)
                TripClick   LAION-1M   LAION-25M   Sift1M   Paper
ACORN-γ         4.9         2.4        59          0.98     2.5
ACORN-1         4.6         2.3        59          0.93     2.4
HNSW            4.1         2.2        54          0.75     2.1
Flat Index      3.1         1.9        47          0.51     1.6
FilteredVamana  NA          NA         NA          0.61     1.8
StitchedVamana  NA          NA         NA          1.3      3.5

Figure 11: Recall@10 vs QPS on LAION-25M.
Table 6: ACORN-γ Average Out Degree
                       TripClick   LAION-1M   LAION-25M   Sift1M   Paper
Level 0 (compressed)   191         50.1       49.4        87.5     86.0
Level 1                8,075       960        960         384      384
Level 2                54.0        919        937         363      359
Level 3                0           25.3       689         25.3     57.4
Level 4                NA          0          16          0        1.0
M·γ                    10,240      960        960         384      384
Mβ                     128         32         32          64       64

We first consider ACORN-γ's construction overhead. Table 4 shows that across all datasets, ACORN-γ's TTI is at most 11× higher than HNSW's, and at most 2.15× higher than that of StitchedVamana, the best-performing specialized index. Table 5 shows that ACORN-γ's index size is at most 1.3× larger than that of HNSW, and at least 25% smaller than that of StitchedVamana. The reason for ACORN-γ's increased index size and TTI compared to HNSW is its candidate-edge generation step during construction, which expands each neighbor list. Meanwhile, ACORN-1 achieves the lowest TTI of all listed baselines in Table 4, and its index size is at most 1.25× HNSW's index size and at least 25% smaller than StitchedVamana's index size. We see that while ACORN-γ achieves superior search performance by leveraging a neighbor-list expansion during construction, ACORN-1 provides a close approximation at lower TTI and space footprint by instead performing the neighbor-list expansion during search. The two algorithms exhibit a trade-off between search performance and construction overhead.

7.4.2 ACORN-γ Pruning. Given ACORN-γ's higher construction overhead, we investigate the efficiency of its predicate-agnostic compression strategy in reducing index construction costs while maintaining search performance. First, Table 6 shows ACORN-γ's average out-degree per level for each dataset, confirming that compression on level 0 leads to significantly smaller neighbor lists compared to levels without compression, which may have neighbor lists as large as M·γ.

Turning our attention to Figure 12, we evaluate three different pruning strategies applied to ACORN-γ's neighbor lists during construction: i) ACORN's predicate-agnostic pruning strategy at varied levels of compression indicated by different Mβ values, where Mβ = 768 represents no pruning and lower values represent more aggressive pruning, ii) a metadata-aware RNG-based pruning approach, which is employed by FilteredDiskANN's algorithms, and iii) HNSW's metadata-blind pruning. We consider TTI, space footprint, the number of candidate edges pruned per node, and search performance. The figure reports space footprint measured by the average out degree of nodes on level 0, the level on which each pruning strategy is applied. In addition, the figure shows search performance measured by recall at 20,000 QPS. We note that the recall ranges of the recall-QPS curves generated by the different pruning methods varied significantly, leading us to choose a QPS threshold rather than a recall threshold. Interestingly, Figure 12 shows that ACORN's pruning can significantly reduce both the TTI and space footprint by aggressively pruning candidate edges, while maintaining search performance. In comparison, applying HNSW pruning to the index results in significantly degraded hybrid search performance. Meanwhile, the metadata-aware RNG-based pruning results in similar search performance to ACORN-γ's pruning, but it is less efficient by TTI and space footprint than ACORN's pruning for small values of Mβ (e.g., Mβ = 32, 64).
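As a rough illustration of what predicate-agnostic compression at a given Mβ could look like, the sketch below truncates an expanded candidate edge list to the Mβ nearest candidates, applying no RNG-style occlusion test and consulting no metadata. This is one plausible simplified reading of such a strategy, not ACORN-γ's exact pruning rule.

```python
import numpy as np

def prune_predicate_agnostic(candidates, vecs, node, m_beta):
    # Keep only the m_beta nearest candidates by vector distance; with
    # m_beta = 768 (larger than any candidate list here) nothing is pruned,
    # while smaller values prune more aggressively.
    cand = np.asarray(candidates)
    d = np.sum((vecs[cand] - vecs[node]) ** 2, axis=1)
    return cand[np.argsort(d)[:m_beta]].tolist()
```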
Figure 12: Comparison of pruning methods on SIFT1M and their impact on TTI (a), space footprint of the index (b), the number of candidate edges pruned (c), and search performance (d). Mβ values used for ACORN-γ are shown along the x-axis. Panels: (a) TTI, (b) Space footprint, (c) # Edges Pruned, (d) Search Perf.

Figure 13: Graph quality of ACORN-γ predicate subgraph evaluated by (a) average number of strongly connected components per level, (b) graph height, and (c) average out degree of nodes across uncompressed levels. Results are shown for the TripClick dataset with 1, 25, 50, 75, and 99 percentile selectivity predicates used to generate the predicate subgraph and HNSW oracle partition. Panels: (a) # SCC, (b) Graph Height, (c) Avg Out Degree.
7.4.3 Graph Quality. Finally, we investigate the graph quality of ACORN-γ's predicate subgraphs. Figure 13 compares graph connectivity, graph height, and out degrees for HNSW oracle partitions and ACORN-γ predicate subgraphs across varied predicate selectivities on the TripClick dataset's real hybrid search queries.

From Figure 13a, we see that ACORN-γ's predicate-subgraph connectivity empirically matches or exceeds that of the HNSW oracle partition across selectivities, demonstrating the effectiveness of ACORN-γ's neighbor expansion strategy. Next, Figure 13b shows that the controlled hierarchy of ACORN-γ's predicate subgraphs emulates that of the HNSW oracle partitions. Malkov et al. show that HNSW search performance is sensitive to graph height [48]; thus, this result helps explain ACORN-γ's ability to emulate the search efficiency of the oracle partition. Lastly, Figure 13c examines the average out degree resulting from performing the search-time filtering, described in Figure 4(a), over the ACORN-γ index. We note that sufficiently high, but bounded, out-degrees are important for emulating HNSW's navigability properties, as discussed in Section 6.3. The figure confirms that ACORN's predicate subgraphs have average out-degrees consistently close to and bounded by M. As expected, the HNSW oracle partition has significantly lower average out-degrees than nodes on ACORN-γ's uncompressed levels because HNSW applies RNG-based pruning. We also note that the ACORN predicate subgraph with 1 percentile selectivity has lower average out degrees than the other predicate subgraphs because the low selectivity predicates result in fewer than 128 nodes on the largest uncompressed levels, thus capping the maximum out degree per node below M = 128. Overall, we observe that ACORN-γ produces high-quality predicate subgraphs, which empirically emulate several HNSW properties related to search efficiency.
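The connectivity and degree statistics in Figure 13 can be computed with standard graph routines. Below is a sketch using SciPy, with a hypothetical directed edge-list representation of one graph level; it restricts the level to predicate-passing nodes and reports the SCC count and average out degree (graph height would additionally require per-level edge lists).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def predicate_subgraph_metrics(edges, passes):
    # SCC count and average out degree of the subgraph induced by
    # predicate-passing nodes; `edges` is a directed (u, v) edge list.
    ids = np.flatnonzero(passes)
    remap = {int(v): i for i, v in enumerate(ids)}
    kept = [(remap[u], remap[v]) for u, v in edges if passes[u] and passes[v]]
    n = len(ids)
    if not kept:
        return n, 0.0  # every passing node is its own (singleton) SCC
    rows, cols = zip(*kept)
    adj = csr_matrix((np.ones(len(kept)), (rows, cols)), shape=(n, n))
    n_scc, _ = connected_components(adj, directed=True, connection="strong")
    return n_scc, len(kept) / n
```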
8 RELATED WORK
Pre- & Post-filtering-based Systems. Many hybrid search systems rely on pre- and post-filtering. While several systems have developed pre-processing methods to perform faster filtering during search, these systems fail to reduce the excessive and expensive distance computations which bottleneck performance. Weaviate [1] creates an inverted index for structured data ahead of time, then uses it at query time to create a bitmap of eligible candidates during post-filtering. Milvus [62] likewise creates an approved list of points by maintaining a distribution of attributes over the dataset in order to map commonly used query filters to a list of approved points before performing pre- or post-filtering. Several space-partitioning indices like FAISS-IVF [14, 34] and LSH [10] store metadata information in the index, allowing them to rapidly filter entities during post-filtering. Despite the optimized filtering steps in each of these approaches, the core problems of pre- and post-filtering remain, particularly for low correlation or selectivity predicates.

Specialized Indices. Alternatively, several recent works develop novel graph-based algorithms for hybrid search, often improving performance for a constrained set of predicates. NHQ [63] encodes attributes alongside vectors, and then uses a "fusion distance" during search that accounts for vector distances as well as attribute matches. This approach supports only equality query predicates and assumes each dataset entity has only one structured attribute. Filtered-DiskANN [25] proposes two algorithms: FilteredVamana and StitchedVamana. Both methods constrain the query filter cardinality to about 1,000 with only equality predicates so that the index construction steps can use this knowledge to appropriately generate and prune candidate edge lists. Similarly, HQI [49] optimizes batch query-processing by assuming a limited cardinality of 20 query predicates to design an efficient partitioning scheme. On the other hand, Qdrant [61] proposes to densify an HNSW graph and perform a filtered greedy search. While this approach aligns intuitively with ACORN's neighbor-list expansions during construction, Qdrant's proposal inadvertently flattens the graph by directly increasing the HNSW parameter M, which impacts HNSW's level normalization constant. Malkov et al. show that HNSW's performance is sensitive to its number of levels, and flattening the graph degrades search performance [48]. In addition, Qdrant's proposed method does not provide a solution for dealing with the increased memory overhead after creating a denser HNSW.
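The flattening effect can be seen directly from HNSW's level assignment rule: levels are drawn as l = floor(-ln(U) · mL) with normalization constant mL = 1/ln(M) [48], so the probability that a node rises above level 0 is 1/M, and increasing M starves the upper levels. A short sketch of this calculation:

```python
import math
import random

def hnsw_level(M, rng=random):
    # HNSW's geometric level assignment: mL = 1/ln(M), so P(level > 0) = 1/M.
    mL = 1.0 / math.log(M)
    return int(-math.log(rng.random()) * mL)

# Expected fraction of nodes above level 0 shrinks as M grows:
for M in (16, 64, 256):
    levels = [hnsw_level(M) for _ in range(100_000)]
    print(M, sum(l > 0 for l in levels) / len(levels))
```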
9 CONCLUSION
We proposed ACORN, the first approach for efficient hybrid search across vectors and structured data that supports large and diverse sets of query predicates. ACORN uses a simple, yet effective, search strategy based on the core idea of predicate subgraph traversal. We presented two indices, ACORN-γ and ACORN-1, that implement this search strategy by modifying the HNSW indexing algorithm. Our results show that ACORN achieves state-of-the-art hybrid search performance on both prior benchmarks, involving simple, low-cardinality query predicate sets, as well as more complex benchmarks involving new predicate operators and high-cardinality predicate sets. Across both types of benchmarks, ACORN-γ achieves 2-1,000× higher QPS at 0.9 recall than prior methods, and ACORN-1 approximates ACORN-γ's search performance with 9-53× lower TTI for resource-constrained settings.

ACKNOWLEDGMENTS
The authors would like to thank Peter Bailis for his valuable feedback on this work.
This research was supported in part by affiliate members and other supporters of the Stanford DAWN project, including Meta, Google, and VMware, as well as Cisco, SAP, and a Sloan Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
REFERENCES
[1] [n. d.]. Filtered Vector Search | Weaviate - vector database. https://weaviate.io/developers/weaviate/concepts/prefiltering
[2] [n. d.]. Pre-label and enrich data with bulk classifications. https://labelbox.ghost.io/blog/pre-label-and-enrich-your-data-with-bulk-classifications/
[3] [n. d.]. Q&A over Documents - LlamaIndex 0.8.43. https://gpt-index.readthedocs.io/en/latest/
[4] 2023. DiskANN. https://github.com/microsoft/DiskANN
[5] 2023. Faiss. https://github.com/facebookresearch/faiss
[6] 2023. Milvus Documentation. https://github.com/milvus-io/milvus-docs
[7] 2023. visual-layer/fastdup. https://github.com/visual-layer/fastdup
[8] Ann Arbor Algorithms. 2023. KGraph: A Library for Approximate Nearest Neighbor Search. https://github.com/aaalgo/kgraph
[9] Alexandr Andoni and Piotr Indyk. 2008. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM 51, 1 (Jan. 2008), 117–122. https://doi.org/10.1145/1327452.1327494
[10] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. 2015. Practical and optimal LSH for angular distance. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1 (NIPS'15). MIT Press, Cambridge, MA, USA, 1225–1233.
[11] Alexandr Andoni and Ilya Razenshteyn. 2015. Optimal Data-Dependent Hashing for Approximate Near Neighbors. In Proceedings of the forty-seventh annual ACM symposium on Theory of Computing (STOC '15). Association for Computing Machinery, New York, NY, USA, 793–801. https://doi.org/10.1145/2746539.2746553
[12] AshenOn3. 2023. NHQ: An Efficient and Robust Framework for Approximate Nearest Neighbor Search with Attribute Constraint. https://github.com/AshenOn3/NHQ
[13] Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. 2020. ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems 87 (Jan. 2020), 101374. https://doi.org/10.1016/j.is.2019.02.006
[14] Dmitry Baranchuk, Artem Babenko, and Yury Malkov. 2018. Revisiting the Inverted Indices for Billion-Scale Approximate Nearest Neighbors. https://doi.org/10.48550/arXiv.1802.02422 arXiv:1802.02422 [cs].
[15] Jon Louis Bentley. 1975. Multidimensional binary search trees used for associative searching. Commun. ACM 18, 9 (Sept. 1975), 509–517. https://doi.org/10.1145/361002.361007
[16] Erik Bernhardsson. [n. d.]. annoy: Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk. https://github.com/spotify/annoy
[17] Alina Beygelzimer, Sham Kakade, and John Langford. 2006. Cover trees for nearest neighbor. In Proceedings of the 23rd international conference on Machine learning (ICML '06). Association for Computing Machinery, New York, NY, USA, 97–104. https://doi.org/10.1145/1143844.1143857
[18] Fedor Borisyuk, Siddarth Malreddy, Jun Mei, Yiqun Liu, Xiaoyi Liu, Piyush Maheshwari, Anthony Bell, and Kaushik Rangadurai. 2021. VisRel: Media Search at Scale. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). Association for Computing Machinery, New York, NY, USA, 2584–2592. https://doi.org/10.1145/3447548.3467081
[19] Sanjoy Dasgupta and Yoav Freund. 2008. Random projection trees and low dimensional manifolds. In Proceedings of the fortieth annual ACM symposium on Theory of computing. ACM, Victoria, British Columbia, Canada, 537–546. https://doi.org/10.1145/1374376.1374452
[20] Wei Dong, Charikar Moses, and Kai Li. 2011. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th international conference on World wide web (WWW '11). Association for Computing Machinery, New York, NY, USA, 577–586. https://doi.org/10.1145/1963405.1963487
[21] Ming Du, Arnau Ramisa, Amit Kumar K C, Sampath Chanda, Mengjiao Wang, Neelakandan Rajesh, Shasha Li, Yingchuan Hu, Tao Zhou, Nagashri Lakshminarayana, Son Tran, and Doug Gray. 2022. Amazon Shop the Look: A Visual Search System for Fashion and Home. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). Association for Computing Machinery, New York, NY, USA, 2822–2830. https://doi.org/10.1145/3534678.3539071
[22] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. 2019. Fast approximate nearest neighbor search with the navigating spreading-out graph. Proceedings of the VLDB Endowment 12, 5 (Jan. 2019), 461–474. https://doi.org/10.14778/3303753.3303754
[23] Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2014. Optimized Product Quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 4 (April 2014), 744–755. https://doi.org/10.1109/TPAMI.2013.240
[24] Aristides Gionis, Piotr Indyk, and Rajeev Motwani. 1999. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th International Conference on Very Large Data Bases (VLDB '99). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 518–529.
[25] Siddharth Gollapudi, Neel Karia, Varun Sivashankar, Ravishankar Krishnaswamy, Nikit Begwani, Swapnil Raz, Yiyong Lin, Yin Zhang, Neelam Mahapatro, Premkumar Srinivasan, Amit Singh, and Harsha Vardhan Simhadri. 2023. Filtered-DiskANN: Graph Algorithms for Approximate Nearest Neighbor Search with Filters. In Proceedings of the ACM Web Conference 2023. ACM, Austin, TX, USA, 3406–3416. https://doi.org/10.1145/3543507.3583552
[26] Long Gong, Huayi Wang, Mitsunori Ogihara, and Jun Xu. 2020. iDEC: indexable distance estimating codes for approximate nearest neighbor search. Proceedings of the VLDB Endowment 13, 9 (May 2020), 1483–1497. https://doi.org/10.14778/3397230.3397243
[27] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the 37th International Conference on Machine Learning (ICML'20, Vol. 119). JMLR.org, 3887–3896.
[28] Michael E. Houle and Michael Nett. 2015. Rank-Based Similarity Search: Reducing the Dimensional Dependence. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 1 (Jan. 2015), 136–150. https://doi.org/10.1109/TPAMI.2014.2343223
[29] Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing (STOC '98). Association for Computing Machinery, New York, NY, USA, 604–613. https://doi.org/10.1145/276698.276876
[30] Omid Jafari, Parth Nagarkar, and Jonathan Montaño. 2020. mmLSH: A Practical and Efficient Technique for Processing Approximate Nearest Neighbor Queries on Multimedia Data. In Similarity Search and Applications (Lecture Notes in Computer Science), Shin'ichi Satoh, Lucia Vadicamo, Arthur Zimek, Fabio Carrara, Ilaria Bartolini, Martin Aumüller, Björn Þór Jónsson, and Rasmus Pagh (Eds.). Springer International Publishing, Cham, 47–61. https://doi.org/10.1007/978-3-030-60936-8_4
[31] J.W. Jaromczyk and G.T. Toussaint. 1992. Relative neighborhood graphs and their relatives. Proc. IEEE 80, 9 (Sept. 1992), 1502–1517. https://doi.org/10.1109/5.163414
[32] Suhas Jayaram Subramanya, Fnu Devvrit, Harsha Vardhan Simhadri, Ravishankar Krishnawamy, and Rohan Kadekodi. 2019. DiskANN: Fast Accurate Billion-point Nearest Neighbor Search on a Single Node. In Advances in Neural Information Processing Systems, Vol. 32. Curran Associates, Inc. https://papers.nips.cc/paper_files/paper/2019/hash/09853c7fb1d3f8ee67a61b6bf4a7f8e6-Abstract.html
[33] Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2008. Hamming Embedding and Weak Geometric Consistency for Large Scale Image Search. In Computer Vision – ECCV 2008 (Lecture Notes in Computer Science), David Forsyth, Philip Torr, and Andrew Zisserman (Eds.). Springer, Berlin, Heidelberg, 304–317. https://doi.org/10.1007/978-3-540-88682-2_24
[34] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. http://arxiv.org/abs/1702.08734 arXiv:1702.08734 [cs].
[35] Herve Jégou, Matthijs Douze, and Cordelia Schmid. 2011. Product Quantization for Nearest Neighbor Search. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 1 (Jan. 2011), 117–128. https://doi.org/10.1109/TPAMI.2010.57
[36] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. https://arxiv.org/abs/2004.04906v3
[37] Philip M. Lankford. 1969. Regionalization: Theory and Alternative Algorithms. Geographical Analysis 1, 2 (1969), 196–212. https://doi.org/10.1111/j.1538-4632.1969.tb00615.x
[38] D. T. Lee and B. J. Schachter. 1980. Two algorithms for constructing a Delaunay triangulation. International Journal of Computer & Information Sciences 9, 3 (June 1980), 219–242. https://doi.org/10.1007/BF00977785
[39] V. Lempitsky and A. Babenko. 2012. The inverted multi-index. IEEE Computer Society, 3069–3076. https://doi.org/10.1109/CVPR.2012.6248038
[40] Mingjie Li, Ying Zhang, Yifang Sun, Wei Wang, Ivor W. Tsang, and Xuemin Lin. 2020. I/O Efficient Approximate Nearest Neighbour Search based on Learned Functions. In 2020 IEEE 36th International Conference on Data Engineering (ICDE) (April 2020), 289–300. https://doi.org/10.1109/ICDE48307.2020.00032
[41] Wanqi Liu, Hanchen Wang, Ying Zhang, Wei Wang, Lu Qin, and Xuemin Lin. 2021. EI-LSH: An early-termination driven I/O efficient incremental c-approximate nearest neighbor search. The VLDB Journal 30, 2 (March 2021), 215–235. https://doi.org/10.1007/s00778-020-00635-4
[42] Yiding Liu, Weixue Lu, Suqi Cheng, Daiting Shi, Shuaiqiang Wang, Zhicong Cheng, and Dawei Yin. 2021. Pre-trained Language Model for Web-scale Retrieval in Baidu Search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). Association for Computing Machinery, New York, NY, USA, 3365–3375. https://doi.org/10.1145/3447548.3467149
[43] David G. Lowe. 2004. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60, 2 (Nov. 2004), 91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
[44] Kejing Lu and Mineichi Kudo. 2020. R2LSH: A Nearest Neighbor Search Scheme Based on Two-dimensional Projected Spaces. In 2020 IEEE 36th International Conference on Data Engineering (ICDE). 1045–1056. https://doi.org/10.1109/ICDE48307.2020.00095
[45] Kejing Lu, Hongya Wang, Wei Wang, and Mineichi Kudo. 2020. VHP: approximate nearest neighbor search via virtual hypersphere partitioning. Proceedings of the VLDB Endowment 13, 9 (May 2020), 1443–1455. https://doi.org/10.14778/3397230.3397240
[46] Qin Lv, William Josephson, Zhe Wang, Moses Charikar, and Kai Li. 2017. Intelligent probing for locality sensitive hashing: multi-probe LSH and beyond. Proceedings of the VLDB Endowment 10, 12 (Aug. 2017), 2021–2024. https://doi.org/10.14778/3137765.3137836
[47] Yury Malkov, Alexander Ponomarenko, Andrey Logvinov, and Vladimir Krylov. 2014. Approximate nearest neighbor algorithm based on navigable small world graphs. Information Systems 45 (Sept. 2014), 61–68. https://doi.org/10.1016/j.is.2013.10.006
[48] Yu A. Malkov and D. A. Yashunin. 2018. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs. http://arxiv.org/abs/1603.09320 arXiv:1603.09320 [cs].
[49] Jason Mohoney, Anil Pacaci, Shihabur Rahman Chowdhury, Ali Mousavi, Ihab F. Ilyas, Umar Farooq Minhas, Jeffrey Pound, and Theodoros Rekatsinas. 2023. High-Throughput Vector Similarity Search in Knowledge Graphs. http://arxiv.org/abs/2304.01926 arXiv:2304.01926 [cs].
[50] Marius Muja and David G. Lowe. 2014. Scalable Nearest Neighbor Algorithms for High Dimensional Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 11 (Nov. 2014), 2227–2240. https://doi.org/10.1109/TPAMI.2014.2321376
[51] Gonzalo Navarro. 2002. Searching in metric spaces by spatial approximation. The VLDB Journal 11, 1 (Aug. 2002), 28–46. https://doi.org/10.1007/s007780200060
[52] Yongjoo Park, Michael Cafarella, and Barzan Mozafari. 2015. Neighbor-sensitive hashing. Proceedings of the VLDB Endowment 9, 3 (Nov. 2015), 144–155. https://doi.org/10.14778/2850583.2850589
[53] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. https://doi.org/10.48550/arXiv.2103.00020 arXiv:2103.00020 [cs].
[54] Navid Rekabsaz, Oleg Lesota, Markus Schedl, Jon Brassey, and Carsten Eickhoff. 2021. TripClick: The Log Files of a Large Health Web Search Engine. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2507–2513. https://doi.org/10.1145/3404835.3463242 arXiv:2103.07901 [cs].
[55] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. https://doi.org/10.48550/arXiv.2111.02114 arXiv:2111.02114 [cs].
[56] Chanop Silpa-Anan and Richard Hartley. 2008. Optimised KD-trees for fast image descriptor matching. IEEE Computer Society, 1–8. https://doi.org/10.1109/CVPR.2008.4587638
[57] Harsha Vardhan Simhadri, George Williams, Martin Aumüller, Matthijs Douze, Artem Babenko, Dmitry Baranchuk, Qi Chen, Lucas Hosseini, Ravishankar Krishnaswamy, Gopal Srinivasa, Suhas Jayaram Subramanya, and Jingdong Wang. 2022. Results of the NeurIPS'21 Challenge on Billion-Scale Approximate Nearest Neighbor Search. http://arxiv.org/abs/2205.03763 arXiv:2205.03763 [cs].
[58] Aditi Singh, Suhas Jayaram Subramanya, Ravishankar Krishnaswamy, and Harsha Vardhan Simhadri. 2021. FreshDiskANN: A Fast and Accurate Graph-Based ANN Index for Streaming Similarity Search. https://doi.org/10.48550/arXiv.2105.09613 arXiv:2105.09613 [cs].
[59] Narayanan Sundaram, Aizana Turmukhametova, Nadathur Satish, Todd Mostak, Piotr Indyk, Samuel Madden, and Pradeep Dubey. 2013. Streaming similarity search over one billion tweets using parallel locality-sensitive hashing. Proceedings of the VLDB Endowment 6, 14 (Sept. 2013), 1930–1941. https://doi.org/10.14778/2556549.2556574
[60] Godfried T. Toussaint. 1980. The relative neighbourhood graph of a finite planar set. Pattern Recognition 12, 4 (Jan. 1980), 261–268. https://doi.org/10.1016/0031-3203(80)90066-7
[61] Andrei Vasnetsov. [n. d.]. Filtrable HNSW - Qdrant. https://qdrant.tech/articles/filtrable-hnsw/
[62] Jianguo Wang, Xiaomeng Yi, Rentong Guo, Hai Jin, Peng Xu, Shengjun Li, Xiangyu Wang, Xiangzhou Guo, Chengming Li, Xiaohai Xu, Kun Yu, Yuxing Yuan, Yinghao Zou, Jiquan Long, Yudong Cai, Zhenxiang Li, Zhifeng Zhang, Yihua Mo, Jun Gu, Ruiyi Jiang, Yi Wei, and Charles Xie. 2021. Milvus: A Purpose-Built Vector Data Management System. In Proceedings of the 2021 International Conference on Management of Data (SIGMOD '21). Association for Computing Machinery, New York, NY, USA, 2614–2627. https://doi.org/10.1145/3448016.3457550
[63] Mengzhao Wang, Lingwei Lv, Xiaoliang Xu, Yuxiang Wang, Qiang Yue, and Jiongkang Ni. 2022. Navigable Proximity Graph-Driven Native Hybrid Queries with Structured and Unstructured Constraints. http://arxiv.org/abs/2203.13601 arXiv:2203.13601 [cs].
[64] Chuangxian Wei, Bin Wu, Sheng Wang, Renjie Lou, Chaoqun Zhan, Feifei Li, and Yuanzhe Cai. 2020. AnalyticDB-V: a hybrid analytical engine towards query fusion for structured and unstructured data. Proceedings of the VLDB Endowment 13, 12 (Aug. 2020), 3152–3165. https://doi.org/10.14778/3415478.3415541
[65] Brie Wolfson. 2023. Building chat langchain. https://blog.langchain.dev/building-chat-langchain-2/
[66] Wei Wu, Junlin He, Yu Qiao, Guoheng Fu, Li Liu, and Jin Yu. 2022. HQANN: Efficient and Robust Similarity Search for Hybrid Queries with Structured and Unstructured Constraints. http://arxiv.org/abs/2207.07940 arXiv:2207.07940 [cs].
[67] Qianxi Zhang, Shuotao Xu, Qi Chen, Guoxin Sui, Jiadong Xie, Zhizhen Cai, Yaoqi Chen, Yinxuan He, Yuqing Yang, Fan Yang, Mao Yang, and Lidong Zhou. 2023. VBASE: Unifying Online Vector Similarity Search and Relational Queries via Relaxed Monotonicity. 377–395. https://www.usenix.org/conference/osdi23/presentation/zhang-qianxi
[68] Weijie Zhao, Shulong Tan, and Ping Li. 2020. SONG: Approximate Nearest Neighbor Search on GPU. In 2020 IEEE 36th International Conference on Data Engineering (ICDE). 1033–1044. https://doi.org/10.1109/ICDE48307.2020.00094
[69] Bolong Zheng, Xi Zhao, Lianggui Weng, Nguyen Quoc Viet Hung, Hang Liu, and Christian S. Jensen. 2020. PM-LSH: A fast and accurate LSH framework for high-dimensional approximate NN search. Proceedings of the VLDB Endowment 13, 5 (Jan. 2020), 643–655. https://doi.org/10.14778/3377369.3377374