
rag-implementation

Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.

67

Quality: 56% (Does it follow best practices?)
Impact: 83%
2.07x average score across 3 eval scenarios

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/llm-application-dev/skills/rag-implementation/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description that clearly identifies its domain (RAG systems) and includes an explicit 'Use when' clause with relevant trigger scenarios. Its main weakness is that the 'what' portion is somewhat high-level—it says 'build RAG systems' without enumerating the specific actions involved (chunking, embedding, indexing, retrieval, reranking). The trigger terms are strong and natural for the target audience.

Suggestions

Name concrete actions such as 'chunk documents, generate embeddings, index into vector stores, implement retrieval pipelines, and rerank results' to improve specificity.

Narrow the scope slightly to reduce overlap risk—consider clarifying what distinguishes this from general LLM integration or document processing skills.
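Combining the two suggestions above, a sharpened frontmatter description might read as follows. The wording is illustrative only, not the maintainer's text, and assumes the common SKILL.md convention of a YAML frontmatter block with name and description fields:

```yaml
---
name: rag-implementation
description: >
  Chunk documents, generate embeddings, index into vector stores,
  implement retrieval pipelines, and rerank results for
  Retrieval-Augmented Generation (RAG). Use when implementing
  knowledge-grounded AI, building document Q&A systems, or wiring
  LLMs to external knowledge bases. Covers retrieval and indexing,
  not general prompt engineering or document parsing.
---
```

The enumerated verbs address the Specificity gap, while the closing exclusion sentence addresses the Distinctiveness overlap with general LLM and document-processing skills.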

Specificity (2 / 3): Names the domain (RAG systems) and mentions some components (vector databases, semantic search), but doesn't list multiple concrete actions beyond 'build'. Lacks specifics like 'chunk documents, generate embeddings, query vector stores, rerank results'.

Completeness (3 / 3): Clearly answers both 'what' (build RAG systems with vector databases and semantic search) and 'when' (explicit 'Use when' clause covering knowledge-grounded AI, document Q&A systems, and integrating LLMs with external knowledge bases).

Trigger Term Quality (3 / 3): Good coverage of natural terms users would say: 'RAG', 'Retrieval-Augmented Generation', 'vector databases', 'semantic search', 'document Q&A', 'knowledge bases', 'LLM applications'. These are terms developers naturally use when seeking this functionality.

Distinctiveness / Conflict Risk (2 / 3): While RAG is a specific niche, terms like 'LLM applications' and 'knowledge bases' are broad enough to potentially overlap with general LLM skills, prompt engineering skills, or document processing skills. The RAG-specific terms help, but the scope is somewhat wide.

Total: 10 / 12 (Passed)

Implementation: 29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a comprehensive RAG reference guide with excellent, executable code examples but poor structure as a skill document. It is far too verbose, cataloging options and concepts Claude already knows (what embeddings are, lists of vector DBs with taglines), while lacking any coherent workflow with validation checkpoints. The content would benefit enormously from being restructured into a lean overview with references to detailed sub-files.

Suggestions

Reduce the main SKILL.md to a concise overview (~50-80 lines) with a clear end-to-end workflow (chunk → embed → store → retrieve → validate → generate → evaluate) and move detailed patterns, vector store configs, and chunking strategies into separate referenced files.

Remove catalog-style listings that Claude already knows (vector DB descriptions, what embeddings are, retrieval strategy definitions) and keep only the opinionated recommendations and concrete code.

Add explicit validation checkpoints to the workflow: e.g., 'After indexing, run a test query to verify retrieval quality before building the full pipeline' and 'If retrieval precision < 0.5, try hybrid search or reranking before proceeding.'

Remove the embedding model table with specific dates ('2026') or move it to a separate versioned reference file, as time-sensitive information clutters the main skill.
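The end-to-end workflow recommended above can be sketched as a single pipeline with a validation checkpoint before generation. This is a minimal, self-contained illustration: a toy bag-of-words similarity stands in for a real embedding model, and every function name here is hypothetical rather than taken from the skill under review.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real pipeline would
    # call an embedding model (API or local) and store dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(doc: str, size: int = 8) -> list[str]:
    # Fixed-size word chunking; real systems would respect sentence
    # or section boundaries and add overlap.
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(docs: list[str]) -> list[tuple[str, Counter]]:
    # chunk -> embed -> store (here, an in-memory list).
    return [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(index: list[tuple[str, Counter]], query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    "RAG systems ground LLM answers in retrieved documents",
    "Vector databases store embeddings for semantic search",
]
index = build_index(docs)

# Validation checkpoint: run a test query and confirm the expected
# chunk comes back BEFORE wiring up the generation step.
hits = retrieve(index, "semantic search with vector databases")
assert any("vector databases" in h.lower() for h in hits), "retrieval sanity check failed"
```

The point is the shape, not the scoring: swap in a real embedder and vector store, but keep the sanity-check query between indexing and generation, as the checkpoint suggestion above proposes.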

Conciseness (1 / 3): Extremely verbose at ~400+ lines. Includes extensive catalog-style listings of vector databases, embedding models, retrieval strategies, and reranking methods that Claude already knows. The embedding model table with dimensions, the list of vector DB options with taglines, and explanations of what embeddings are ('Convert text to numerical vectors for similarity search') are all unnecessary padding. Much of this reads like a tutorial/reference document rather than a lean skill.

Actionability (3 / 3): The code examples are concrete, executable, and copy-paste ready. Every pattern (hybrid search, multi-query, HyDE, parent document retriever, chunking strategies, vector store configs, reranking) includes complete, runnable Python code with proper imports and realistic usage patterns.

Workflow Clarity (1 / 3): There is no clear end-to-end workflow with sequenced steps and validation checkpoints. The skill presents a collection of patterns and code snippets but never guides through a complete RAG implementation process (e.g., ingest → chunk → embed → store → retrieve → validate retrieval quality → generate → evaluate). No validation steps, no error recovery, no feedback loops for what to do when retrieval quality is poor.

Progressive Disclosure (1 / 3): Monolithic wall of content with no references to external files. Everything is inlined — vector store configs, chunking strategies, evaluation metrics, prompt templates — all in one massive document. Content like the vector store configurations, evaluation code, and advanced patterns should be split into separate referenced files for a skill this large.

Total: 6 / 12 (Passed)
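The "precision < 0.5" gate suggested in this review is easy to make concrete. The sketch below computes precision@k over retrieved chunk IDs and escalates when the threshold is missed; the function name, IDs, and threshold are illustrative, not part of the reviewed skill.

```python
def precision_at_k(retrieved: list[str], relevant: set[str], k: int = 5) -> float:
    """Fraction of the top-k retrieved chunk IDs that are actually relevant."""
    top = retrieved[:k]
    if not top:
        return 0.0
    return sum(1 for doc_id in top if doc_id in relevant) / len(top)

# Hypothetical test-query results: which chunks came back, and which
# a human (or labeled eval set) marked as relevant.
retrieved = ["c3", "c7", "c1", "c9"]
relevant = {"c1", "c3"}

p = precision_at_k(retrieved, relevant, k=4)
if p < 0.5:
    # Escalation path from the review: try hybrid search or a
    # cross-encoder reranker before building out generation.
    print("low precision: try hybrid search or reranking")
```

A handful of labeled queries run through a gate like this turns the review's qualitative advice into a repeatable checkpoint in the workflow.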

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

skill_md_line_count (Warning): SKILL.md is long (543 lines); consider splitting into references/ and linking

Total: 10 / 11 (Passed)
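One way to act on this warning is the layout the Implementation review also recommends: a lean SKILL.md plus linked reference files. The file names below are illustrative, not prescribed by the spec:

```
rag-implementation/
├── SKILL.md              # ~50-80 line overview and end-to-end workflow
└── references/
    ├── chunking.md       # chunking strategies
    ├── vector-stores.md  # store configurations
    ├── retrieval.md      # hybrid search, HyDE, reranking patterns
    └── evaluation.md     # metrics and the versioned embedding-model table
```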

Repository: wshobson/agents (Reviewed)

