Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains.
Install with Tessl CLI
npx tessl i github:wshobson/agents --skill embedding-strategies
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with explicit 'Use when' guidance and good trigger term coverage for the embedding/RAG domain. The main weakness is that the capability description could be more specific about concrete actions beyond 'select and optimize'. Overall, it effectively communicates when Claude should select this skill.
Suggestions
- Expand specificity by listing more concrete actions such as 'compare embedding model benchmarks', 'configure vector dimensions', 'evaluate retrieval accuracy', or 'tune chunk overlap parameters'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (embedding models, semantic search, RAG) and some actions (select, optimize, implement chunking strategies), but lacks comprehensive concrete actions like 'compare model benchmarks', 'configure vector dimensions', or 'evaluate retrieval quality'. | 2 / 3 |
| Completeness | Clearly answers both what ('Select and optimize embedding models for semantic search and RAG applications') and when ('Use when choosing embedding models, implementing chunking strategies, or optimizing embedding quality for specific domains') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'embedding models', 'semantic search', 'RAG', 'chunking strategies', 'embedding quality'. These are terms users working in this space would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on embedding models and RAG applications. The specific terms 'embedding models', 'chunking strategies', and 'RAG' create distinct triggers unlikely to conflict with general ML or document processing skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable, executable code templates for embedding workflows across multiple providers and use cases. However, it packs too much into a single file, lacks validation checkpoints in its workflows, and includes time-sensitive information (a 2026 model comparison) that may become outdated. The content would benefit from splitting detailed templates into separate files and adding explicit verification steps.
Suggestions
- Add validation checkpoints to the embedding pipeline templates (e.g., verify embedding dimensions, check for empty results, validate chunk sizes before embedding)
- Split detailed code templates into separate reference files (e.g., CHUNKING.md, EVALUATION.md) and keep SKILL.md as a concise overview with links
- Move the model comparison table to a separate MODELS.md file, or add a note about checking current benchmarks, since model recommendations change frequently
- Add error handling examples and recovery steps for common failures (API rate limits, token limit exceeded, empty embeddings)
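The first and last suggestions could be sketched together as a batch-level checkpoint plus a retry wrapper. This is an illustrative pattern, not code from the skill itself; `embed_with_retry`, `validate_embeddings`, and the 1024-dimension default are all assumptions for the example:

```python
import time

EXPECTED_DIM = 1024  # assumption for illustration; use your model's actual dimension


def validate_embeddings(texts, embeddings, expected_dim=EXPECTED_DIM):
    """Checkpoint after each batch: catch silent failures before indexing."""
    if len(embeddings) != len(texts):
        raise ValueError(f"got {len(embeddings)} vectors for {len(texts)} texts")
    for i, vec in enumerate(embeddings):
        if not vec:
            raise ValueError(f"empty embedding at index {i}")
        if len(vec) != expected_dim:
            raise ValueError(f"dimension mismatch at {i}: {len(vec)} != {expected_dim}")
    return embeddings


def embed_with_retry(embed_fn, batch, max_retries=3, base_delay=1.0):
    """Retry with exponential backoff on transient errors such as rate limits."""
    for attempt in range(max_retries):
        try:
            return validate_embeddings(batch, embed_fn(batch))
        except (ConnectionError, TimeoutError):
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2**attempt)
```

The same checkpoint slots naturally between the chunking and indexing stages of a pipeline, so a bad batch fails loudly instead of polluting the vector store.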
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some unnecessary verbosity, such as the extensive model comparison table with 2026 date reference and lengthy code templates that could be more condensed. Some explanatory comments in code are helpful but others state the obvious. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste ready code templates for multiple embedding providers (Voyage AI, OpenAI, local models), chunking strategies, and evaluation metrics. All code is complete with proper imports and realistic implementations. | 3 / 3 |
| Workflow Clarity | The embedding pipeline diagram and templates show the process flow, but there are no explicit validation checkpoints or error recovery steps. For operations like batch embedding or chunking, there's no guidance on verifying results or handling failures. | 2 / 3 |
| Progressive Disclosure | Content is organized into logical sections with a clear structure, but the skill is monolithic with ~400 lines of inline code that could be split into separate reference files. The Resources section provides external links but no internal file references for detailed topics. | 2 / 3 |
| Total | | 9 / 12 Passed |
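To make the repeated "chunking strategies" concrete, a minimal fixed-size chunker with overlap might look like the sketch below. The 500/50 character sizes are placeholder values for illustration, not recommendations from the skill:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into fixed-size chunks where consecutive chunks share
    `overlap` characters, so context spanning a boundary is not lost."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Production chunkers usually split on token counts and semantic boundaries (sentences, headings) rather than raw characters, which is presumably what the skill's fuller templates cover.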
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (609 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.