Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval.
Overall score: 64%
Impact: Pending. No eval scenarios have been run.
Does it follow best practices? Passed. No known issues.
Quality
Discovery: 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has strong trigger terms and is complete, with an explicit 'Use when' clause that makes it easy for Claude to select appropriately. Its main weakness is that the capability descriptions lean toward buzzwordy topic listing ('Masters embedding models') rather than concrete actions. The words 'Expert' and 'Masters' are self-aggrandizing fluff that adds no informational value.
Suggestions
- Replace vague claims like 'Masters embedding models' and 'Expert in' with concrete actions such as 'Configures embedding pipelines, sets up vector database indexes, implements chunking strategies, and optimizes retrieval quality for LLM applications'.
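For instance, keeping the existing 'Use when' clause, the rewritten description might read as follows. This is only a sketch assuming the standard SKILL.md YAML frontmatter; the name field is hypothetical, since the skill's actual name does not appear in this report:

```yaml
---
name: rag-engineer  # hypothetical; not shown in this report
description: >
  Builds Retrieval-Augmented Generation systems: configures embedding
  pipelines, sets up vector database indexes, implements chunking
  strategies, and optimizes retrieval quality for LLM applications.
  Use when: building RAG, vector search, embeddings, semantic search,
  document retrieval.
---
```

Note that the rewrite keeps every trigger term the Trigger Term Quality row below credits, so the change should cost nothing on discovery.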
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (RAG systems) and lists relevant technical areas (embedding models, vector databases, chunking strategies, retrieval optimization), but these read more like topic areas than concrete actions. Phrases like 'Masters embedding models' are vague claims rather than specific capabilities like 'configure chunking strategies' or 'set up vector database indexes'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (building RAG systems, working with embedding models, vector databases, chunking strategies, retrieval optimization) and 'when' with an explicit 'Use when:' clause listing trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms that users would actually say: 'RAG', 'vector search', 'embeddings', 'semantic search', 'document retrieval'. These cover the most common ways users would phrase requests in this domain. | 3 / 3 |
| Distinctiveness / Conflict Risk | RAG, vector search, embeddings, and semantic search form a clear and distinct niche. These terms are unlikely to conflict with other skills unless there's another RAG-specific skill, and the combination of triggers is highly specific to this domain. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 22%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a high-level conceptual overview or persona description than actionable technical guidance. Despite being a technical skill about building RAG systems, it contains no executable code examples, and multiple sections are incomplete (the Anti-Patterns section has no content; the Sharp Edges solutions are truncated). The role-playing introduction and the capabilities and requirements lists waste tokens on information Claude already possesses.
Suggestions
- Replace the abstract bullet-point 'code blocks' with actual executable code examples showing concrete implementations (e.g., a working chunking function using LangChain or LlamaIndex, or a vector store setup with a specific library like ChromaDB or Pinecone).
- Complete the Sharp Edges table by providing actual code solutions after each colon, or remove the colons and provide inline actionable guidance.
- Fill in the Anti-Patterns section with brief explanations of why each item is problematic and what to do instead, or remove the section if it duplicates Sharp Edges.
- Remove the persona/role-playing introduction and the Requirements section, replacing them with a concrete quick-start workflow showing a minimal end-to-end RAG pipeline with validation steps, as sketched below.
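As a concrete illustration of the first and last suggestions (nothing below comes from the skill itself): a minimal end-to-end sketch, assuming the chromadb package with its default embedding function. The chunk_text helper, the sample document, and the query are all invented for the example:

```python
import chromadb


def chunk_text(text: str, max_chars: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size chunking with overlap; a real skill would likely
    prefer structure-aware splitting (see the sketch after the next table)."""
    chunks = []
    start = 0
    step = max_chars - overlap
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += step
    return chunks


# Invented sample corpus; any {doc_id: text} mapping works here.
documents = {
    "guide.md": "RAG systems retrieve relevant context before generation. " * 40,
}

client = chromadb.Client()  # in-memory; chromadb also offers persistent clients
collection = client.create_collection("docs")  # uses the default embedding function

for doc_id, text in documents.items():
    chunks = chunk_text(text)
    collection.add(
        documents=chunks,
        ids=[f"{doc_id}-{i}" for i in range(len(chunks))],
        metadatas=[{"source": doc_id}] * len(chunks),
    )

# Validation checkpoint: fail fast if indexing produced nothing.
assert collection.count() > 0, "indexing produced no chunks"

results = collection.query(query_texts=["How does a RAG system work?"], n_results=3)
print(results["documents"][0])  # top-3 chunks for the first (only) query
```

Even a sketch this small gives an agent copy-paste scaffolding plus a verification step, which is exactly what the Actionability and Workflow Clarity rows below flag as missing.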
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes unnecessary role-playing preamble ('I bridge the gap...I obsess over chunking boundaries') and a 'Requirements' section listing things Claude already knows. The capabilities list is also largely redundant given the skill description. However, the patterns and sharp edges sections are reasonably concise. | 2 / 3 |
| Actionability | Despite being labeled as code blocks, the content is entirely bullet-point descriptions and abstract guidance rather than executable code. There are no concrete implementations, no specific library usage, and no copy-paste-ready examples. The Sharp Edges table references solutions, but they are cut off or missing entirely (e.g., 'Use semantic chunking that respects document structure:' with no actual code following the colon; a sketch of one possible completion follows this table). | 1 / 3 |
| Workflow Clarity | The Hierarchical Retrieval pattern hints at a multi-step process but provides no validation checkpoints, no error handling, and no concrete sequencing. The Sharp Edges table lists solutions that are incomplete (ending with colons and no follow-through). There are no feedback loops or verification steps for any of the described processes. | 1 / 3 |
| Progressive Disclosure | The content has some structural organization, with sections for Patterns, Anti-Patterns, and Sharp Edges. However, the Anti-Patterns section lists items with no explanation at all, the Sharp Edges solutions are truncated, and there are no references to deeper documentation files. The content is neither well split nor does it provide navigation to more detailed resources. | 2 / 3 |
| Total | | 6 / 12 Passed |
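To make the Actionability critique concrete: none of the following appears in the skill, but a structure-aware splitter of the kind the truncated 'Use semantic chunking that respects document structure:' row seems to call for might look roughly like this pure-Python sketch (the function name and size budget are invented for illustration):

```python
import re


def chunk_by_headings(markdown: str, max_chars: int = 1200) -> list[str]:
    """Split on markdown headings so chunk boundaries follow document
    structure; fall back to paragraph packing only for oversized sections."""
    # Zero-width split: keep each heading attached to the section it opens.
    sections = re.split(r"(?m)^(?=#{1,6} )", markdown)
    chunks: list[str] = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        # Pack paragraphs of an oversized section up to the size budget.
        current = ""
        for para in section.split("\n\n"):
            if current and len(current) + len(para) > max_chars:
                chunks.append(current.strip())
                current = ""
            current += para + "\n\n"
        if current.strip():
            chunks.append(current.strip())
    return chunks
```

Splitting on headings keeps retrieval units aligned with the author's structure, which is the usual rationale for semantic chunking over fixed-size windows.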
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
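The offending key is not named in this report, so purely as a hypothetical illustration of the suggested fix (assuming SKILL.md YAML frontmatter and an invented author key):

```yaml
---
name: rag-engineer          # hypothetical; not shown in this report
description: Builds Retrieval-Augmented Generation systems ...
# author: jane@example.com  # an unknown top-level key like this triggers the warning
metadata:
  author: jane@example.com  # moved under metadata, as the validator suggests
---
```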