Patterns for Databricks Vector Search: create endpoints and indexes, query with filters, manage embeddings. Use when building RAG applications, semantic search, or similarity matching. Covers both storage-optimized and standard endpoints.
Patterns for creating, managing, and querying vector search indexes for RAG and semantic search applications.
Use this skill when:

- Building RAG applications that retrieve context from Databricks data
- Implementing semantic search over documents or text
- Performing similarity matching with embeddings

Databricks Vector Search provides managed vector similarity search with automatic embedding generation and Delta Lake integration.

Core concepts:

| Component | Description |
|---|---|
| Endpoint | Compute resource hosting indexes (Standard or Storage-Optimized) |
| Index | Vector data structure for similarity search |
| Delta Sync | Auto-syncs with source Delta table |
| Direct Access | Manual CRUD operations on vectors |

Endpoint types:

| Type | Latency | Capacity | Cost | Best For |
|---|---|---|---|---|
| Standard | 20-50ms | 320M vectors (768 dim) | Higher | Real-time, low-latency |
| Storage-Optimized | 300-500ms | 1B+ vectors (768 dim) | 7x lower | Large-scale, cost-sensitive |

Index types:

| Type | Embeddings | Sync | Use Case |
|---|---|---|---|
| Delta Sync (managed) | Databricks computes | Auto from Delta | Easiest setup |
| Delta Sync (self-managed) | You provide | Auto from Delta | Custom embeddings |
| Direct Access | You provide | Manual CRUD | Real-time updates |

```python
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
# Create a standard endpoint
endpoint = w.vector_search_endpoints.create_endpoint(
name="my-vs-endpoint",
endpoint_type="STANDARD" # or "STORAGE_OPTIMIZED"
)
# Note: Endpoint creation is asynchronous; check status with get_endpoint()
```
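Endpoint provisioning can take several minutes. A minimal readiness-polling sketch, assuming the `get_endpoint()` response exposes `endpoint_status.state` (attribute names may differ slightly across SDK versions):

```python
import time

# Poll until the endpoint reports ONLINE (endpoint_status.state is an assumed attribute path)
while True:
    ep = w.vector_search_endpoints.get_endpoint(endpoint_name="my-vs-endpoint")
    state = ep.endpoint_status.state.value if ep.endpoint_status and ep.endpoint_status.state else None
    if state == "ONLINE":
        break
    print(f"Endpoint state: {state}; waiting...")
    time.sleep(30)
```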

```python
# Source table must have: primary key column + text column
index = w.vector_search_indexes.create_index(
name="catalog.schema.my_index",
endpoint_name="my-vs-endpoint",
primary_key="id",
index_type="DELTA_SYNC",
delta_sync_index_spec={
"source_table": "catalog.schema.documents",
"embedding_source_columns": [
{
"name": "content", # Text column to embed
"embedding_model_endpoint_name": "databricks-gte-large-en"
}
],
"pipeline_type": "TRIGGERED" # or "CONTINUOUS"
}
)
```
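Index creation and the initial sync are also asynchronous. A sketch for checking readiness before querying, assuming the `get_index()` response carries a `status` object with `ready` and `detailed_state` fields (verify against your SDK version):

```python
# Check whether the index has finished its initial sync (status.ready is an assumed field)
idx = w.vector_search_indexes.get_index(index_name="catalog.schema.my_index")
if idx.status and idx.status.ready:
    print(f"Index ready: {idx.status.detailed_state}")
else:
    print("Index still syncing; queries may return empty or partial results")
```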
index_name="catalog.schema.my_index",
columns=["id", "content", "metadata"],
query_text="What is machine learning?",
num_results=5
)
for doc in results.result.data_array:
    score = doc[-1]  # Similarity score is last column
    print(f"Score: {score}, Content: {doc[1][:100]}...")
```

```python
# For large-scale, cost-effective deployments
endpoint = w.vector_search_endpoints.create_endpoint(
name="my-storage-endpoint",
endpoint_type="STORAGE_OPTIMIZED"
)
```

```python
# Source table must have: primary key + embedding vector column
index = w.vector_search_indexes.create_index(
name="catalog.schema.my_index",
endpoint_name="my-vs-endpoint",
primary_key="id",
index_type="DELTA_SYNC",
delta_sync_index_spec={
"source_table": "catalog.schema.documents",
"embedding_vector_columns": [
{
"name": "embedding", # Pre-computed embedding column
"embedding_dimension": 768
}
],
"pipeline_type": "TRIGGERED"
}
)
```

```python
import json
# Create index for manual CRUD
index = w.vector_search_indexes.create_index(
name="catalog.schema.direct_index",
endpoint_name="my-vs-endpoint",
primary_key="id",
index_type="DIRECT_ACCESS",
direct_access_index_spec={
"embedding_vector_columns": [
{"name": "embedding", "embedding_dimension": 768}
],
"schema_json": json.dumps({
"id": "string",
"text": "string",
"embedding": "array<float>",
"metadata": "string"
})
}
)
# Upsert data
w.vector_search_indexes.upsert_data_vector_index(
index_name="catalog.schema.direct_index",
inputs_json=json.dumps([
{"id": "1", "text": "Hello", "embedding": [0.1, 0.2, ...], "metadata": "doc1"},
{"id": "2", "text": "World", "embedding": [0.3, 0.4, ...], "metadata": "doc2"},
])
)
# Delete data
w.vector_search_indexes.delete_data_vector_index(
index_name="catalog.schema.direct_index",
primary_keys=["1", "2"]
)
```

```python
# When you have pre-computed query embedding
results = w.vector_search_indexes.query_index(
index_name="catalog.schema.my_index",
columns=["id", "text"],
query_vector=[0.1, 0.2, 0.3, ...], # Your 768-dim vector
num_results=10
)
```

Hybrid search combines vector similarity (ANN) with BM25 keyword scoring. Use it when queries contain exact terms that must match — SKUs, error codes, proper nouns, or technical terminology — where pure semantic search might miss keyword-specific results. See search-modes.md for detailed guidance on choosing between ANN and hybrid search.

```python
# Combines vector similarity with keyword matching
results = w.vector_search_indexes.query_index(
index_name="catalog.schema.my_index",
columns=["id", "content"],
query_text="SPARK-12345 executor memory error",
query_type="HYBRID",
num_results=10
)
```

```python
# filters_json uses dictionary format
results = w.vector_search_indexes.query_index(
index_name="catalog.schema.my_index",
columns=["id", "content"],
query_text="machine learning",
num_results=10,
filters_json='{"category": "ai", "status": ["active", "pending"]}'
)
```

Storage-Optimized endpoints use SQL-like filter syntax via the databricks-vectorsearch package's filters parameter (accepts a string):

```python
from databricks.vector_search.client import VectorSearchClient
vsc = VectorSearchClient()
index = vsc.get_index(endpoint_name="my-storage-endpoint", index_name="catalog.schema.my_index")
# SQL-like filter syntax for storage-optimized endpoints
results = index.similarity_search(
query_text="machine learning",
columns=["id", "content"],
num_results=10,
filters="category = 'ai' AND status IN ('active', 'pending')"
)
# More filter examples
# filters="price > 100 AND price < 500"
# filters="department LIKE 'eng%'"
# filters="created_at >= '2024-01-01'"# For TRIGGERED pipeline type, manually sync
w.vector_search_indexes.sync_index(
index_name="catalog.schema.my_index"
)
```

```python
# Retrieve all vectors (for debugging/export)
scan_result = w.vector_search_indexes.scan_index(
index_name="catalog.schema.my_index",
num_results=100
)
```
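For indexes with more rows than a single page, scan results can be paginated. A sketch assuming scan_index accepts last_primary_key and the response exposes data and last_primary_key (field names may vary by SDK version):

```python
# Page through the index by primary key (last_primary_key is an assumed field name)
last_key = None
all_rows = []
while True:
    page = w.vector_search_indexes.scan_index(
        index_name="catalog.schema.my_index",
        num_results=100,
        last_primary_key=last_key,
    )
    rows = page.data or []
    if not rows:
        break
    all_rows.extend(rows)
    last_key = page.last_primary_key
print(f"Scanned {len(all_rows)} rows")
```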

| Topic | File | Description |
|---|---|---|
| Index Types | index-types.md | Detailed comparison of Delta Sync (managed/self-managed) vs Direct Access |
| End-to-End RAG | end-to-end-rag.md | Complete walkthrough: source table → endpoint → index → query → agent integration |
| Search Modes | search-modes.md | When to use semantic (ANN) vs hybrid search, decision guide |
| Operations | troubleshooting-and-operations.md | Monitoring, cost optimization, capacity planning, migration |

```bash
# List endpoints
databricks vector-search endpoints list
# Create endpoint
databricks vector-search endpoints create \
--name my-endpoint \
--endpoint-type STANDARD
# List indexes on endpoint
databricks vector-search indexes list-indexes \
--endpoint-name my-endpoint
# Get index status
databricks vector-search indexes get-index \
--index-name catalog.schema.my_index
# Sync index (for TRIGGERED)
databricks vector-search indexes sync-index \
--index-name catalog.schema.my_index
# Delete index
databricks vector-search indexes delete-index \
--index-name catalog.schema.my_index
```

| Issue | Solution |
|---|---|
| Index sync slow | Use Storage-Optimized endpoints (20x faster indexing) |
| Query latency high | Use Standard endpoint for <100ms latency |
| filters_json not working | Storage-Optimized uses SQL-like string filters via databricks-vectorsearch package's filters parameter |
| Embedding dimension mismatch | Ensure query and index dimensions match (see the sketch after this table) |
| Index not updating | Check pipeline_type; use sync_index() for TRIGGERED |
| Out of capacity | Upgrade to Storage-Optimized (1B+ vectors) |
| query_vector truncated by MCP tool | MCP tool calls serialize arrays as JSON and can truncate large vectors (e.g. 1024-dim). Use query_text instead (for managed embedding indexes), or use the Databricks SDK/CLI to pass raw vectors |
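For the dimension-mismatch row above, a quick diagnostic is to compare your query vector's length with the dimension configured on the index. A sketch for self-managed or Direct Access indexes, assuming the get_index() response exposes the spec's embedding_vector_columns (attribute names may differ by SDK version):

```python
# Compare query vector length against the index's configured embedding dimension
idx = w.vector_search_indexes.get_index(index_name="catalog.schema.my_index")
spec = idx.direct_access_index_spec or idx.delta_sync_index_spec
expected_dim = spec.embedding_vector_columns[0].embedding_dimension
query_vector = [0.1] * 768  # your pre-computed query embedding
assert len(query_vector) == expected_dim, (
    f"query vector has {len(query_vector)} dims; index expects {expected_dim}"
)
```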
Databricks provides built-in embedding models:

| Model | Dimensions | Context Window | Use Case |
|---|---|---|---|
| databricks-gte-large-en | 1024 | 8192 tokens | English text, high quality |
| databricks-bge-large-en | 1024 | 512 tokens | English text, general purpose |

```python
# Use with managed embeddings
embedding_source_columns=[
{
"name": "content",
"embedding_model_endpoint_name": "databricks-gte-large-en"
}
]
```
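For self-managed embeddings and query_vector searches you compute embeddings yourself. One hedged option is to call the embedding model endpoint through the SDK's serving-endpoint query API; the `input` parameter and the `.data[0].embedding` response field shown here are assumptions, so verify against your endpoint's schema:

```python
# Compute a query embedding by calling the model serving endpoint directly
# (the `input` parameter and `.data[0].embedding` response shape are assumptions)
resp = w.serving_endpoints.query(
    name="databricks-gte-large-en",
    input=["What is machine learning?"],
)
query_vector = resp.data[0].embedding  # 1024-dim for databricks-gte-large-en
```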
The following MCP tools are available for managing Vector Search infrastructure. For a full end-to-end walkthrough, see end-to-end-rag.md.

| Tool | Description |
|---|---|
| create_or_update_vs_endpoint | Create or update an endpoint (STANDARD or STORAGE_OPTIMIZED). Idempotent — returns existing if found |
| get_vs_endpoint | Get endpoint details by name. Omit name to list all endpoints in the workspace |
| delete_vs_endpoint | Delete an endpoint (all indexes must be deleted first) |

```python
# Create or update an endpoint
result = create_or_update_vs_endpoint(name="my-vs-endpoint", endpoint_type="STANDARD")
# Returns {"name": "my-vs-endpoint", "endpoint_type": "STANDARD", "created": True}
# List all endpoints
endpoints = get_vs_endpoint()  # omit name to list all
```

| Tool | Description |
|---|---|
| create_or_update_vs_index | Create or update an index. Idempotent — auto-triggers initial sync for DELTA_SYNC indexes |
| get_vs_index | Get index details by index_name. Pass endpoint_name (no index_name) to list all indexes on an endpoint |
| delete_vs_index | Delete an index by fully-qualified name (catalog.schema.index_name) |

```python
# Create a Delta Sync index with managed embeddings
result = create_or_update_vs_index(
name="catalog.schema.my_index",
endpoint_name="my-vs-endpoint",
primary_key="id",
index_type="DELTA_SYNC",
delta_sync_index_spec={
"source_table": "catalog.schema.docs",
"embedding_source_columns": [{"name": "content", "embedding_model_endpoint_name": "databricks-gte-large-en"}],
"pipeline_type": "TRIGGERED"
}
)
# Get a specific index by name — parameter is index_name, not name
index = get_vs_index(index_name="catalog.schema.my_index")
# List all indexes on an endpoint
indexes = get_vs_index(endpoint_name="my-vs-endpoint")
```

| Tool | Description |
|---|---|
| query_vs_index | Query index with query_text, query_vector, or hybrid (query_type="HYBRID"). Prefer query_text over query_vector — MCP tool calls can truncate large embedding arrays (1024-dim) |
| manage_vs_data | CRUD operations on Direct Access indexes. operation: "upsert", "delete", "scan", "sync" |

```python
# Query an index
results = query_vs_index(
index_name="catalog.schema.my_index",
columns=["id", "content"],
query_text="machine learning best practices",
num_results=5
)
# Upsert data into a Direct Access index
manage_vs_data(
index_name="catalog.schema.my_index",
operation="upsert",
inputs_json=[{"id": "doc1", "content": "...", "embedding": [0.1, 0.2, ...]}]
)
# Trigger manual sync for a TRIGGERED pipeline index
manage_vs_data(index_name="catalog.schema.my_index", operation="sync")
```

Notes:

- columns_to_sync matters — only synced columns are available in query results; include all the columns you need
- For filtering, Standard endpoints use filters_json while Storage-Optimized endpoints use SQL-like strings; the databricks-vectorsearch package's filters parameter accepts both formats
- For agent integration, use VectorSearchRetrieverTool or the Databricks managed Vector Search MCP server (see the sketch after this list)
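For the agent-integration note above, a hedged sketch using the databricks-langchain package's VectorSearchRetrieverTool; the constructor parameters shown are assumptions, so check the package documentation for your version:

```python
# Expose a vector search index as a retriever tool for a LangChain agent
# (parameter names below are assumptions)
from databricks_langchain import VectorSearchRetrieverTool

retriever_tool = VectorSearchRetrieverTool(
    index_name="catalog.schema.my_index",
    num_results=5,
    columns=["id", "content"],
    tool_name="docs_retriever",
    tool_description="Retrieves documentation relevant to the user's question",
)
docs = retriever_tool.invoke("What is machine learning?")
```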