rag-implementation

Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.

Install with Tessl CLI

npx tessl i github:wshobson/agents --skill rag-implementation

Overall score: 81

Does it follow best practices?


Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger term coverage and clear 'Use when' guidance that explicitly defines when to select this skill. The main weakness is that the capabilities section describes the system type to build rather than listing specific concrete actions Claude would perform (like chunking, embedding, indexing, querying).

Suggestions

- Replace 'Build RAG systems' with specific actions like 'Chunk documents, generate embeddings, configure vector stores, implement retrieval pipelines, and optimize semantic search queries'
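The concrete actions the suggestion asks for (chunk, embed, index, query) can be sketched as a minimal pipeline. This is a toy illustration, not code from the skill itself: the hash-based `embed` function stands in for a real embedding model, and a plain list stands in for a vector store.

```python
import hashlib
import math

def chunk(text, size=100):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text, dim=8):
    """Toy embedding: a hash-derived unit vector. A real system calls an embedding model."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def index(chunks):
    """'Vector store': a list of (vector, chunk) pairs."""
    return [(embed(c), c) for c in chunks]

def query(store, question, k=2):
    """Retrieve the k chunks most similar to the question by dot product."""
    q = embed(question)
    scored = [(sum(a * b for a, b in zip(q, v)), c) for v, c in store]
    return [c for _, c in sorted(scored, key=lambda t: t[0], reverse=True)[:k]]

store = index(chunk("RAG grounds LLM answers in retrieved documents. " * 10))
top = query(store, "How does RAG ground answers?")
```

With a real embedding model the same four functions become the chunk → embed → index → retrieve loop the suggestion describes.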

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (RAG systems) and mentions components like 'vector databases and semantic search', but doesn't list multiple concrete actions: it describes what to build rather than specific actions like 'index documents', 'chunk text', or 'query embeddings'. | 2 / 3 |
| Completeness | Clearly answers both what ('Build RAG systems with vector databases and semantic search') and when ('Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Good coverage of terms users would naturally say: 'RAG', 'Retrieval-Augmented Generation', 'vector databases', 'semantic search', 'knowledge-grounded AI', 'document Q&A', 'knowledge bases', 'LLM applications'. These are terms practitioners actually use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on RAG/retrieval systems, with distinct triggers like 'vector databases', 'semantic search', and 'knowledge-grounded AI'. Unlikely to conflict with general coding or document-processing skills. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive RAG implementation guide with excellent, executable code examples covering multiple patterns and configurations. Its main weaknesses are its length (it could benefit from being split into multiple files) and the lack of explicit validation/verification steps for operations like document indexing and retrieval quality checks. The content appropriately trusts Claude's competence in most areas but still includes some unnecessary 'Purpose' explanations.

Suggestions

- Add explicit validation steps: verify document indexing succeeded, check retrieval quality before proceeding, and validate that embedding dimensions match the index configuration
- Split into multiple files: move Vector Store Configurations, Advanced RAG Patterns, and Evaluation Metrics to separate reference documents, keeping SKILL.md as a concise overview with links
- Remove the 'Purpose' descriptions for each component; Claude already understands what vector databases and embeddings do
- Add a troubleshooting workflow with explicit 'if X fails, check Y, then retry' feedback loops for common failure modes
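A hedged sketch of what the suggested validation steps and retry loop might look like. The function names and thresholds here (`validate_index`, `retrieve_with_fallback`, the 0.8 relaxation factor) are illustrative assumptions, not APIs from the skill under review.

```python
def validate_index(store_count, expected_docs, embedding_dim, index_dim):
    """Fail fast if indexing silently dropped documents or dimensions mismatch."""
    if embedding_dim != index_dim:
        raise ValueError(f"embedding dim {embedding_dim} != index dim {index_dim}")
    if store_count < expected_docs:
        raise RuntimeError(f"indexed only {store_count} of {expected_docs} documents")
    return True

def retrieve_with_fallback(retrieve, query, threshold=0.7, min_results=1, retries=2):
    """'If retrieval fails, relax the threshold, then retry' feedback loop."""
    for _ in range(retries + 1):
        results = [r for r in retrieve(query) if r["score"] >= threshold]
        if len(results) >= min_results:
            return results
        threshold *= 0.8  # relax the similarity cutoff before retrying
    return []  # surface the failure to the caller instead of answering ungrounded
```

Checks like these turn silent indexing and retrieval failures into explicit errors or fallbacks, which is the gap the review's Workflow Clarity score points at.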

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is comprehensive but includes some unnecessary explanations (e.g., 'Purpose' descriptions for components Claude already understands). The embedding model table and vector database options list could be tighter, though the code examples are appropriately lean. | 2 / 3 |
| Actionability | Excellent executable code examples throughout: the Quick Start with LangGraph is complete and copy-paste ready, all advanced patterns include working Python code, and vector store configurations are fully specified with real imports and initialization. | 3 / 3 |
| Workflow Clarity | While the patterns are well organized, there are no explicit validation checkpoints or error recovery steps. For a system involving document indexing and retrieval (which can fail silently), missing steps like 'verify documents were indexed' or 'check retrieval quality before deploying' is a gap. | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear sections, but it's a monolithic 400+ line document. The evaluation metrics, vector store configurations, and advanced patterns could be split into separate reference files, with the main SKILL.md providing an overview and links. | 2 / 3 |

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (571 lines); consider splitting into references/ and linking | Warning |

Total: 10 / 11 (Passed)
