
rag-implementation

Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.

Overall score: 71

Quality: 66% (Does it follow best practices?)

Impact: 70%, 2.12x (average score across 3 eval scenarios)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/llm-application-dev/skills/rag-implementation/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with strong trigger terms, made complete by an explicit 'Use when' clause. Its main weakness is that the capability description stays at a somewhat high level: it mentions 'build RAG systems' without enumerating specific sub-tasks like chunking, embedding, indexing, or retrieval pipeline configuration. Trigger term coverage and distinctiveness are both strong.

Suggestions

Add more specific concrete actions such as 'chunk documents, generate embeddings, configure vector stores, build retrieval pipelines' to improve specificity.
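To make the suggestion concrete, one of those sub-tasks (chunking) can be sketched in a few lines. This is an illustrative sketch, not code from the skill under review; the window and overlap sizes are arbitrary assumptions.

```python
# Fixed-size chunking with overlap: one of the concrete sub-tasks the
# description could name. Sizes below are illustrative defaults only.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Real pipelines would typically chunk on token or sentence boundaries rather than raw characters, but the shape of the task is the same.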

Dimension / Reasoning / Score

Specificity

Names the domain (RAG systems) and some actions ('build', 'implementing', 'integrating'), but doesn't list multiple concrete specific actions like chunking strategies, embedding generation, vector store configuration, or retrieval pipeline setup.

2 / 3

Completeness

Clearly answers both 'what' (build RAG systems with vector databases and semantic search) and 'when' (explicit 'Use when' clause covering knowledge-grounded AI, document Q&A systems, and integrating LLMs with external knowledge bases).

3 / 3

Trigger Term Quality

Good coverage of natural terms users would say: 'RAG', 'Retrieval-Augmented Generation', 'vector databases', 'semantic search', 'document Q&A', 'knowledge bases', 'LLM applications'. These are terms users naturally use when seeking this capability.

3 / 3

Distinctiveness (Conflict Risk)

RAG systems, vector databases, and semantic search form a clear niche that is unlikely to conflict with other skills. The combination of these specific technologies creates a distinct trigger profile.

3 / 3

Total: 11 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill's primary strength is its highly actionable, executable code examples covering a comprehensive range of RAG patterns. However, it is severely over-scoped and verbose for a single SKILL.md file: it reads more like a complete reference manual than a focused skill. The content would benefit greatly from being split into multiple files, with the main skill serving as a concise overview with navigation links.

Suggestions

Split content into separate files (e.g., VECTOR_STORES.md, CHUNKING.md, ADVANCED_PATTERNS.md, EVALUATION.md) and keep SKILL.md as a concise overview with the Quick Start and links to detailed guides.

Remove explanatory text that Claude already knows (e.g., 'Purpose: Store and retrieve document embeddings efficiently', 'Purpose: Convert text to numerical vectors for similarity search', listing what each vector DB is known for).

Add validation checkpoints to the workflow: verify index creation, test retrieval quality with a sample query before building the full pipeline, and validate embedding dimensions match the index configuration.

Remove the exhaustive embedding model comparison table and vector store option lists—these are reference material better suited to a separate file or omitted entirely since Claude knows these tools.
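The validation-checkpoint suggestion above can be sketched as a pre-flight check. This is a hedged illustration using a toy in-memory index in place of a real vector store; `INDEX_DIM`, `embed`, and `validate_pipeline` are hypothetical names, not part of the skill's actual code.

```python
INDEX_DIM = 4  # dimension the (toy) index was created with

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: hashes bytes into a fixed-size vector.
    vec = [0.0] * INDEX_DIM
    for i, ch in enumerate(text.encode()):
        vec[i % INDEX_DIM] += ch / 255.0
    return vec

def validate_pipeline(docs: list[str], sample_query: str) -> None:
    # Checkpoint 1: embedding dimension matches the index configuration.
    dim = len(embed("dimension probe"))
    assert dim == INDEX_DIM, f"embedding dim {dim} != index dim {INDEX_DIM}"
    # Checkpoint 2: index creation succeeded (non-empty, all vectors sized).
    index = [(d, embed(d)) for d in docs]
    assert index and all(len(v) == INDEX_DIM for _, v in index)
    # Checkpoint 3: a sample query retrieves something before the full build.
    q = embed(sample_query)
    scores = [sum(a * b for a, b in zip(q, v)) for _, v in index]
    assert max(scores) > 0, "sample query retrieved nothing"
```

With a real vector store, the same three checks would run against the store's reported index dimension and a known-relevant probe query.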

Dimension / Reasoning / Score

Conciseness

This is extremely verbose at over 400 lines, covering every possible RAG pattern, vector store, chunking strategy, and evaluation approach. Much of it is reference material Claude already knows (e.g., what BM25 is, what embeddings are, which vector database options exist). The 'Purpose' explanations and option lists are unnecessary padding.

1 / 3

Actionability

The code examples are concrete, executable, and copy-paste ready using real libraries (LangChain, LangGraph, Pinecone, etc.). Each pattern includes complete, runnable code with proper imports and realistic configurations.

3 / 3

Workflow Clarity

The Quick Start provides a clear sequential workflow (retrieve → generate), and patterns are individually clear. However, there are no validation checkpoints for the overall RAG pipeline setup (e.g., verifying embeddings are correct, checking index creation succeeded, validating retrieval quality before deploying). The evaluation section exists but isn't integrated into a build workflow.

2 / 3

Progressive Disclosure

This is a monolithic wall of content with no references to separate files. All advanced patterns, vector store configs, chunking strategies, evaluation code, and optimization techniques are inlined. This should be split into separate files (e.g., CHUNKING.md, VECTOR_STORES.md, PATTERNS.md, EVALUATION.md) with the SKILL.md serving as an overview.

1 / 3

Total: 7 / 12 (Passed)
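The retrieve → generate workflow that the Workflow Clarity dimension credits can be sketched minimally. Bag-of-words cosine similarity stands in for learned embeddings, and a template stands in for the LLM call; both substitutions are assumptions for illustration, not the skill's actual stack.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    # A real system would send this grounded prompt to an LLM.
    return f"Answer '{query}' using: " + " | ".join(context)
```

The point the review makes still applies to this sketch: before wiring `generate` to a model, a sample query against `retrieve` should confirm the expected document comes back.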

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count: SKILL.md is long (571 lines); consider splitting into references/ and linking (Warning)

Total: 10 / 11 (Passed)

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)

