Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
Install with Tessl CLI
npx tessl i github:wshobson/agents --skill embedding-strategies
Does it follow best practices?
If you maintain this skill, you can optimize it automatically with the Tessl CLI to improve its score:

npx tessl skill review --optimize ./path/to/skill

Validation for skill structure
Voyage AI model selection
| Criterion | Without context | With context |
| --- | --- | --- |
| Voyage AI library | 0% | 100% |
| Legal model | 100% | 100% |
| Code model | 100% | 100% |
| General/default model | 100% | 100% |
| API key from env | 100% | 100% |
| Separate model instances | 0% | 100% |
| Document embedding method | 100% | 100% |
| Query embedding method | 100% | 100% |
Without context: $0.3491 · 4m 35s · 18 turns · 121 in / 4,998 out tokens
With context: $0.6587 · 6m 45s · 25 turns · 430 in / 6,689 out tokens
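The criteria in this scenario (domain-routed model selection, API key from the environment, separate document/query embedding calls) can be sketched as follows. This is a minimal illustration, not the skill's actual implementation; it assumes the `voyageai` Python client, and the model names (`voyage-law-2` for legal, `voyage-code-2` for code, `voyage-3` as the general default) should be checked against Voyage AI's current model list.

```python
import os

# Domain-to-model routing; model names are illustrative and should be
# verified against Voyage AI's current documentation.
DOMAIN_MODELS = {
    "legal": "voyage-law-2",
    "code": "voyage-code-2",
}
DEFAULT_MODEL = "voyage-3"


def select_model(domain: str) -> str:
    """Pick a domain-specific model, falling back to the general default."""
    return DOMAIN_MODELS.get(domain, DEFAULT_MODEL)


def embed(texts, domain="general", input_type="document"):
    """Embed texts with the domain-appropriate model.

    input_type is "document" for corpus text and "query" for search
    queries; Voyage uses it to apply asymmetric prompts, which is why
    document and query embedding are separate calls.
    """
    import voyageai  # pip install voyageai

    client = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])
    result = client.embed(texts, model=select_model(domain), input_type=input_type)
    return result.embeddings
```

Keeping `select_model` pure makes the routing logic testable without an API key; unknown domains fall back to the general model rather than failing.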
Local model prefixes and chunking
| Criterion | Without context | With context |
| --- | --- | --- |
| Local embedding library | 100% | 100% |
| Recommended model name | 0% | 100% |
| Normalized embeddings | 100% | 100% |
| Query prefix applied | 100% | 100% |
| Document prefix (E5) or no-prefix (BGE) | 100% | 100% |
| Chunking with overlap | 100% | 100% |
| Metadata stored | 100% | 100% |
| Cosine similarity ranking | 100% | 100% |
Without context: $0.8039 · 8m 46s · 25 turns · 201 in / 12,793 out tokens
With context: $0.9980 · 8m 56s · 31 turns · 431 in / 13,035 out tokens
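The prefix, chunking, and ranking criteria in this scenario can be sketched without any embedding model at all. This is an illustrative sketch, not the skill's implementation: E5-family models expect `"query: "`/`"passage: "` prefixes, while BGE-family models prefix only the query with an instruction string; chunk sizes and the exact BGE instruction text below are assumptions to verify against the model cards.

```python
import numpy as np


def apply_prefix(text: str, role: str, model_family: str = "e5") -> str:
    """E5 models want "query: "/"passage: " prefixes; BGE models prefix
    only the query (instruction text below is illustrative) and leave
    documents bare."""
    if model_family == "e5":
        return ("query: " if role == "query" else "passage: ") + text
    if model_family == "bge":
        if role == "query":
            return "Represent this sentence for searching relevant passages: " + text
        return text  # BGE documents get no prefix
    return text


def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50):
    """Fixed-size chunks with overlap, storing source offsets as metadata."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append({
            "text": text[start:start + chunk_size],
            "start": start,
            "end": min(start + chunk_size, len(text)),
        })
    return chunks


def rank_by_cosine(query_vec, doc_vecs):
    """Indices of documents sorted by cosine similarity, best first.

    Normalizing both sides reduces cosine similarity to a dot product,
    which is why the eval also checks for normalized embeddings.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1]
```

Storing `start`/`end` offsets per chunk is the "metadata stored" criterion: it lets retrieved chunks be traced back to their position in the source document.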
OpenAI batching and quality evaluation
| Criterion | Without context | With context |
| --- | --- | --- |
| OpenAI model selection | 100% | 100% |
| Batching loop | 100% | 100% |
| Batch size 100 | 100% | 100% |
| Matryoshka dimensions param | 100% | 100% |
| Precision@k metric | 100% | 100% |
| Recall@k metric | 100% | 100% |
| MRR metric | 100% | 100% |
| NDCG@k metric | 100% | 100% |
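The four retrieval-quality metrics this scenario checks have compact, dependency-free definitions. A minimal sketch (plain Python; `retrieved` is a ranked list of document IDs, `relevant` a set of relevant IDs, and for NDCG `relevance` maps IDs to graded gains):

```python
import math


def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    return sum(1 for d in retrieved[:k] if d in relevant) / k


def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant items that appear in the top k."""
    return sum(1 for d in retrieved[:k] if d in relevant) / len(relevant)


def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant result (0 if none found)."""
    for rank, d in enumerate(retrieved, start=1):
        if d in relevant:
            return 1.0 / rank
    return 0.0


def ndcg_at_k(retrieved, relevance, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    dcg = sum(relevance.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(retrieved[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

Precision@k and recall@k treat relevance as binary; NDCG additionally rewards placing highly relevant documents earlier in the ranking.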
Without context: $1.1627 · 10m · 37 turns · 289 in / 15,169 out tokens
With context: $1.0525 · 10m 5s · 36 turns · 557 in / 12,251 out tokens
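The batching criteria in this scenario can be sketched as follows, assuming the official `openai` Python client. The model name and `dimensions` value are illustrative; `text-embedding-3` models accept a `dimensions` parameter to truncate embeddings (Matryoshka representation learning), which should be confirmed against OpenAI's current API reference.

```python
def batch_indices(n, batch_size=100):
    """Pure helper: (start, end) slices covering n items, at most
    batch_size per batch -- the loop shape the eval checks for."""
    return [(i, min(i + batch_size, n)) for i in range(0, n, batch_size)]


def embed_in_batches(texts, model="text-embedding-3-small",
                     dimensions=512, batch_size=100):
    """Embed texts in batches of at most batch_size per API request.

    Values here are illustrative defaults, not the skill's own choices.
    """
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    vectors = []
    for start, end in batch_indices(len(texts), batch_size):
        resp = client.embeddings.create(
            model=model,
            input=texts[start:end],
            dimensions=dimensions,  # Matryoshka truncation
        )
        vectors.extend(item.embedding for item in resp.data)
    return vectors
```

Batching at 100 inputs per request keeps each call well under the API's per-request input limit while amortizing request overhead; the pure `batch_indices` helper makes the loop testable offline.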
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.