Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
| Metric | Value |
| --- | --- |
| Score | 75 |
| Does it follow best practices? | 66% |
| Impact | 91% |
| Average score across 3 eval scenarios | 1.65x |
| Result | Passed, no known issues |
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/llm-application-dev/skills/embedding-strategies/SKILL.md`

Financial document embedding pipeline
| Check | Before | After |
| --- | --- | --- |
| Voyage package import | 0% | 100% |
| Finance domain model | 0% | 100% |
| API key from environment | 0% | 50% |
| Token-based chunking | 0% | 100% |
| Chunk size 512 | 0% | 100% |
| Chunk overlap 50 | 0% | 100% |
| cl100k_base encoding | 0% | 100% |
| document_id metadata | 100% | 100% |
| chunk_index metadata | 100% | 100% |
| Unique record id | 100% | 100% |
| EmbeddedDocument dataclass/structure | 100% | 100% |
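The chunking checks in this scenario can be sketched as a minimal pipeline. Everything below is illustrative: the helper names, the `EmbeddedDocument` fields, and the pluggable tokenizer are assumptions, not the skill's actual code. In a real pipeline the tokenizer would be tiktoken's `cl100k_base` encoding (`enc.encode` / `enc.decode`).

```python
from dataclasses import dataclass
import hashlib

@dataclass
class EmbeddedDocument:
    # Hypothetical record shape; field names follow the checks above.
    id: str        # unique record id
    text: str
    metadata: dict

def chunk_tokens(tokens, chunk_size=512, overlap=50):
    """Sliding-window chunking over a token list (512 tokens, 50 overlap)."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        if window:
            chunks.append(window)
        if start + chunk_size >= len(tokens):
            break
    return chunks

def build_records(document_id, text, tokenize, detokenize):
    # Real pipeline: enc = tiktoken.get_encoding("cl100k_base");
    # tokenize = enc.encode, detokenize = enc.decode. Any pair works here.
    records = []
    for i, window in enumerate(chunk_tokens(tokenize(text))):
        uid = hashlib.sha1(f"{document_id}:{i}".encode()).hexdigest()
        records.append(EmbeddedDocument(
            id=uid,
            text=detokenize(window),
            metadata={"document_id": document_id, "chunk_index": i},
        ))
    return records
```

With a 512-token window and 50-token overlap, the stride is 462 tokens, so each chunk repeats the last 50 tokens of its predecessor, which preserves context across chunk boundaries.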
Code search indexing with tree-sitter

| Check | Before | After |
| --- | --- | --- |
| Code-specific model | 0% | 53% |
| VoyageAI package | 0% | 0% |
| Tree-sitter chunking attempt | 0% | 100% |
| Chunking fallback | 70% | 100% |
| Context-prefixed embedding | 53% | 100% |
| Embedding normalization | 33% | 66% |
| Cosine similarity search | 100% | 100% |
| Function/class extraction | 0% | 100% |
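The normalization and similarity checks are standard vector operations; the context-prefix format shown is a plausible sketch, not the skill's actual template.

```python
import numpy as np

def normalize(vectors: np.ndarray) -> np.ndarray:
    # L2-normalize rows so that a dot product equals cosine similarity.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, 1e-12, None)

def with_context_prefix(code: str, file_path: str, symbol: str) -> str:
    # Illustrative context-prefixed embedding input: prepend file and
    # symbol information so the model sees where the snippet comes from.
    return f"File: {file_path}\nSymbol: {symbol}\n\n{code}"

def cosine_top_k(query_vec: np.ndarray, index_vecs: np.ndarray, k: int = 5):
    # With normalized vectors, cosine search reduces to a matrix product.
    sims = normalize(index_vecs) @ normalize(query_vec[None, :])[0]
    top = np.argsort(-sims)[:k]
    return top, sims[top]
```

Normalizing at index time means queries only pay for one extra vector normalization, and similarity scores stay in [-1, 1] regardless of the model's raw embedding magnitudes.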
Embedding model selection and quality evaluation

| Check | Before | After |
| --- | --- | --- |
| BGE query prefix | 100% | 100% |
| E5 query prefix | 100% | 100% |
| E5 passage prefix | 100% | 100% |
| OpenAI batch size 100 | 100% | 100% |
| Matryoshka dimension param | 100% | 100% |
| Precision@K metric | 100% | 100% |
| Recall@K metric | 100% | 100% |
| MRR metric | 100% | 100% |
| NDCG@K metric | 100% | 100% |
| No model mixing | 100% | 100% |
| Results JSON output | 100% | 100% |
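The prefix and metric checks above map onto well-known conventions. The prefixes shown are the commonly published ones for the BGE and E5 model families (verify against the specific model card before use); the retrieval metrics and the Matryoshka truncation helper are standard definitions, sketched here in plain Python.

```python
import math

# Commonly documented query/passage prefixes (check each model card).
BGE_QUERY_PREFIX = "Represent this sentence for searching relevant passages: "
E5_QUERY_PREFIX = "query: "
E5_PASSAGE_PREFIX = "passage: "

def precision_at_k(retrieved, relevant, k):
    # Fraction of the top-k results that are relevant.
    return sum(1 for d in retrieved[:k] if d in relevant) / k

def recall_at_k(retrieved, relevant, k):
    # Fraction of all relevant documents found in the top-k.
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / len(relevant) if relevant else 0.0

def reciprocal_rank(retrieved, relevant):
    # 1 / rank of the first relevant result; MRR is its mean over queries.
    for i, d in enumerate(retrieved):
        if d in relevant:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(retrieved, relevant, k):
    # Binary-relevance NDCG: gain 1 for each relevant document.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(retrieved[:k]) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

def matryoshka_truncate(vec, dims):
    # Matryoshka-style dimension reduction: keep the first `dims`
    # components, then re-normalize (mirrors the `dimensions` parameter
    # on APIs that support it).
    v = vec[:dims]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]
```

The "no model mixing" check matters because these metrics are only comparable when query and corpus embeddings come from the same model: vectors from different models live in unrelated spaces, so their cosine similarities are meaningless.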