Dense vector search in Elasticsearch

Applies to: Elastic Stack and Serverless

Dense vectors use neural embeddings to represent semantic meaning. They translate text, images, or other data into fixed-length vectors of floating-point numbers. Content with similar meaning is mapped to nearby points in vector space, making dense vector search a powerful technique for:

  • Finding semantically similar content
  • Matching natural language questions with relevant answers
  • Performing image and multimedia similarity search
  • Delivering content-based recommendations
Tip

For most use cases, the semantic_text field type is the recommended starting point. It provides automatic model management and sensible defaults for vector search.
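
Below is a minimal sketch of the semantic_text approach using the Python client (elasticsearch-py), assuming a cluster where the default inference endpoint for semantic_text is available; the index name, field name, and sample text are placeholders.

```python
# Hedged sketch: semantic_text handles embedding generation automatically,
# so no model configuration is needed in the mapping. Names are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust host/auth for your deployment

# Create an index whose "content" field is semantic_text.
es.indices.create(
    index="my-semantic-index",
    mappings={"properties": {"content": {"type": "semantic_text"}}},
)

# Index a document; embeddings are generated for you at ingest time.
es.index(
    index="my-semantic-index",
    document={"content": "Vector search maps meaning to geometry."},
)
es.indices.refresh(index="my-semantic-index")

# Query with a semantic query; the query text is embedded automatically.
resp = es.search(
    index="my-semantic-index",
    query={"semantic": {"field": "content", "query": "how does meaning become geometry?"}},
)
print([hit["_source"]["content"] for hit in resp["hits"]["hits"]])
```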

To implement dense vector search in Elasticsearch, you need both an index configuration and a way to generate embeddings:

  1. Index documents with embeddings

    • Map a dense_vector field and index documents that contain embeddings (see the sketch after this list)

  2. Query the index with k-NN search

    • Use the knn query to retrieve results based on vector similarity
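
The following is a hedged end-to-end sketch of these two steps with the Python client, assuming embeddings are produced by an external model; the index name, field names, the 3-dimensional vectors, and the sample documents are illustrative only.

```python
# Hedged sketch: create a dense_vector mapping, index documents with
# precomputed embeddings, then run a k-NN search. All names and vectors
# are placeholders; size "dims" to your embedding model's output.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Index documents with embeddings: map a dense_vector field.
es.indices.create(
    index="my-dense-index",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "title_vector": {
                "type": "dense_vector",
                "dims": 3,                  # match your embedding model's dimensionality
                "similarity": "cosine",
            },
        }
    },
)

docs = [
    {"title": "Vector search basics", "title_vector": [0.12, 0.85, 0.43]},
    {"title": "Keyword search basics", "title_vector": [0.91, 0.10, 0.27]},
]
for doc in docs:
    es.index(index="my-dense-index", document=doc)
es.indices.refresh(index="my-dense-index")

# 2. Query the index with k-NN search: the query vector must come from the same model.
resp = es.search(
    index="my-dense-index",
    knn={
        "field": "title_vector",
        "query_vector": [0.15, 0.80, 0.40],
        "k": 2,
        "num_candidates": 10,
    },
)
print([hit["_source"]["title"] for hit in resp["hits"]["hits"]])
```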

Better Binary Quantization (BBQ) is an advanced vector quantization technique for dense_vector fields. It compresses embeddings into compact binary form, enabling faster similarity search and reducing memory usage. This lowers memory and cost while preserving strong search relevance, especially when used with HNSW (Hierarchical Navigable Small World).
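
As a sketch of what a BBQ-enabled mapping can look like with the Python client: the index_options type shown assumes an Elasticsearch version where BBQ is available, and the index name, field name, and dimensionality are placeholders.

```python
# Hedged configuration sketch: enable BBQ-compressed HNSW on a dense_vector
# field via index_options. Check your Elasticsearch version for availability.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="my-bbq-index",
    mappings={
        "properties": {
            "title_vector": {
                "type": "dense_vector",
                "dims": 384,                            # match your embedding model
                "similarity": "cosine",
                "index_options": {"type": "bbq_hnsw"},  # BBQ quantization over an HNSW graph
            }
        }
    },
)
```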

Learn more about how BBQ works, supported algorithms, and configuration examples in the Better Binary Quantization (BBQ) documentation.