
rag-architect

Designs and implements production-grade RAG systems by chunking documents, generating embeddings, configuring vector stores, building hybrid search pipelines, applying reranking, and evaluating retrieval quality. Use when building RAG systems, vector databases, or knowledge-grounded AI applications requiring semantic search, document retrieval, context augmentation, similarity search, or embedding-based indexing.

Overall score: 98 (1.08x)

Quality: 100% (Does it follow best practices?)
Impact: 97%, 1.08x (average score across 6 eval scenarios)
Security (by Snyk): Passed, no known issues


Evaluation results

Document Ingestion Pipeline for Legal Knowledge Base
Document ingestion pipeline design
Score with context: 88% (-6% vs. without context)

| Criteria | Without context | With context |
| --- | --- | --- |
| No default 512 chunk size | 100% | 100% |
| Document-type-specific sizing | 100% | 100% |
| Rationale for chunk size | 100% | 100% |
| Source metadata on chunks | 100% | 100% |
| Section/heading metadata | 100% | 60% |
| Timestamp or indexed_at metadata | 100% | 100% |
| Document preprocessing | 40% | 30% |
| Deduplication mechanism | 100% | 100% |
| Idempotent re-run design | 100% | 100% |
| Markdown-aware chunking | 100% | 87% |
| Chunk index/position metadata | 100% | 100% |
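The ingestion criteria above (document-type-specific chunk sizes, source/section/position metadata, timestamps, and dedup-friendly stable IDs) can be sketched in a few lines of Python. This is a minimal illustration, not the skill's actual implementation: the per-type size table, field names, and overlap value are assumptions chosen for the example.

```python
import hashlib
from datetime import datetime, timezone

# Illustrative per-document-type chunk sizes (in characters), not fixed defaults.
CHUNK_SIZES = {"contract": 1500, "statute": 1000, "email": 600}

def chunk_document(text, source, doc_type, section=None, overlap=100):
    """Split text into overlapping chunks, attaching retrieval metadata."""
    size = CHUNK_SIZES.get(doc_type, 1000)
    chunks, start, index = [], 0, 0
    while start < len(text):
        body = text[start:start + size]
        chunks.append({
            # Content-derived ID (excludes the timestamp) so re-runs are idempotent
            # and duplicates are detectable.
            "id": hashlib.sha256(f"{source}:{index}:{body}".encode()).hexdigest(),
            "text": body,
            "source": source,
            "section": section,
            "chunk_index": index,
            "indexed_at": datetime.now(timezone.utc).isoformat(),
        })
        start += size - overlap
        index += 1
    return chunks
```

Because the ID hashes source, position, and content but not `indexed_at`, re-ingesting an unchanged document produces the same IDs, which is what makes upserts into a vector store idempotent.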

Retrieval Pipeline Upgrade for Developer Documentation Search
Hybrid search and reranking pipeline
Score with context: 100%

| Criteria | Without context | With context |
| --- | --- | --- |
| Hybrid search implemented | 100% | 100% |
| RRF or equivalent fusion | 100% | 100% |
| Retrieve-more then rerank fewer | 100% | 100% |
| Reranking step present | 100% | 100% |
| Not cosine-only | 100% | 100% |
| Decoupled embedding model | 100% | 100% |
| Empty result edge case | 100% | 100% |
| Fusion weighting documented | 100% | 100% |
| Design covers all four aspects | 100% | 100% |
| Deduplication before rerank | 100% | 100% |
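The pattern these criteria describe, fusing a vector result list with a keyword (BM25) result list via Reciprocal Rank Fusion, then reranking a large fused pool down to a small top-n, can be sketched as follows. The `k=60` constant is the commonly cited RRF default; `rerank_fn` is a stand-in for a real cross-encoder scorer, and the pool/top-n sizes are illustrative.

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def retrieve_then_rerank(vector_hits, keyword_hits, rerank_fn, top_n=5, pool=50):
    # Fuse a large candidate pool (RRF merges by ID, so duplicates collapse
    # before reranking), then rerank only the fused pool down to top_n.
    fused = rrf_fuse([vector_hits, keyword_hits])[:pool]
    return sorted(fused, key=rerank_fn, reverse=True)[:top_n]
```

Retrieving a wide pool and reranking a narrow slice keeps the expensive cross-encoder off the hot path for most candidates while still recovering documents that only one retriever ranked highly.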

RAG Architecture for Multi-Tenant Clinical Documentation Platform
RAG architecture design and evaluation plan
Score with context: 100% (+1% vs. without context)

| Criteria | Without context | With context |
| --- | --- | --- |
| Ingestion pipeline diagram | 100% | 100% |
| Retrieval pipeline diagram | 100% | 100% |
| Vector DB recommendation with comparison | 100% | 100% |
| Chunking parameters specified | 85% | 100% |
| Chunking rationale provided | 100% | 100% |
| Retrieval metrics specified | 100% | 100% |
| RAGAS or equivalent framework | 100% | 100% |
| Multi-tenant isolation addressed | 100% | 100% |
| Embedding versioning plan | 100% | 100% |
| Decoupled embedding adapter | 100% | 100% |
| Monitoring plan | 100% | 100% |
| Hybrid search in retrieval design | 100% | 100% |
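Two of the criteria above, "decoupled embedding adapter" and "embedding versioning plan", amount to hiding the embedding provider behind an interface and stamping every stored vector with the model version that produced it. A minimal sketch, in which the model name, vector shape, and field names are invented for illustration:

```python
from typing import Protocol

class Embedder(Protocol):
    model_id: str
    def embed(self, texts: list[str]) -> list[list[float]]: ...

class FakeEmbedder:
    """Stand-in for a real provider adapter (hosted API, local model, ...)."""
    model_id = "fake-embed-v1"
    def embed(self, texts):
        return [[float(len(t)), 0.0] for t in texts]  # toy 2-d vectors

def index_chunks(chunks, embedder: Embedder):
    """Embed chunk texts and stamp each record with the embedding model version."""
    vectors = embedder.embed([c["text"] for c in chunks])
    return [
        {**c, "vector": v, "embedding_model": embedder.model_id}
        for c, v in zip(chunks, vectors)
    ]
```

Because every record carries `embedding_model`, a migration job can later query for stale vectors and re-embed only those, rather than guessing which model produced what; swapping providers means swapping the adapter, not rewriting the pipeline.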

Selecting an Embedding Model for a Developer Documentation Search System
Embedding model evaluation and selection
Score with context: 97% (-3% vs. without context)

| Criteria | Without context | With context |
| --- | --- | --- |
| Multiple models benchmarked | 100% | 100% |
| Code-appropriate model included | 100% | 75% |
| Multilingual model considered | 100% | 100% |
| Retrieval metric computed | 100% | 100% |
| Comparison table or summary | 100% | 100% |
| Model selection stated | 100% | 100% |
| Alternative model compared | 100% | 100% |
| Versioning or migration noted | 100% | 100% |
| Does NOT just default without evidence | 100% | 100% |
| Domain relevance addressed | 100% | 100% |
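The evidence-based selection this scenario checks for reduces to computing a retrieval metric such as recall@k for each candidate model over a labeled query set and comparing the results. A sketch of that harness; the model names, retrieval callables, and eval set shape here are hypothetical:

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of relevant doc IDs that appear in the top-k retrieved list."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def benchmark(models, eval_set, k=5):
    """models: name -> (query -> ranked doc IDs); eval_set: [(query, relevant_ids)].
    Returns the best model name and the full comparison table."""
    table = {}
    for name, retrieve in models.items():
        scores = [recall_at_k(retrieve(q), rel, k) for q, rel in eval_set]
        table[name] = sum(scores) / len(scores)
    return max(table, key=table.get), table
```

Printing `table` gives the comparison summary the criteria ask for, and the winner comes with a number attached rather than a default chosen without evidence.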

Evaluating RAG Retrieval Quality Before Production Launch
RAGAS evaluation with quality thresholds
Score with context: 100% (+54% vs. without context)

| Criteria | Without context | With context |
| --- | --- | --- |
| RAGAS framework used | 83% | 100% |
| context_precision metric included | 0% | 100% |
| context_recall metric included | 30% | 100% |
| faithfulness metric included | 37% | 100% |
| answer_relevancy metric included | 0% | 100% |
| context_precision threshold check | 0% | 100% |
| context_recall threshold check | 0% | 100% |
| Pass/fail verdict present | 100% | 100% |
| Retrieval metrics not skipped | 100% | 100% |
| Metric explanations in report | 100% | 100% |
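Once RAGAS (or an equivalent framework) has produced scores for context_precision, context_recall, faithfulness, and answer_relevancy, the threshold checks this scenario grades come down to a pass/fail gate over those scores. A sketch of such a gate; it assumes the scores are already available as a dict, and the threshold values shown are arbitrary examples to tune per application:

```python
THRESHOLDS = {  # example values, not recommendations
    "context_precision": 0.80,
    "context_recall": 0.75,
    "faithfulness": 0.85,
    "answer_relevancy": 0.80,
}

def quality_gate(scores, thresholds=THRESHOLDS):
    """Return (passed, failures) for a pre-launch retrieval quality check.
    A missing metric counts as 0.0, so skipped metrics fail loudly."""
    failures = {
        name: (scores.get(name), minimum)
        for name, minimum in thresholds.items()
        if scores.get(name, 0.0) < minimum
    }
    return (not failures, failures)
```

Wiring this into CI turns the "pass/fail verdict present" criterion into an automatic launch blocker instead of a judgment call.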

Hardening a RAG Retrieval Pipeline for Production
Edge case handling in retrieval pipeline
Score with context: 100%

| Criteria | Without context | With context |
| --- | --- | --- |
| Empty search results handled | 100% | 100% |
| Whitespace query handled | 100% | 100% |
| Malformed chunk handling in ingestion | 100% | 100% |
| Partial batch not silently dropped | 100% | 100% |
| Empty results edge case test | 100% | 100% |
| Malformed document test | 100% | 100% |
| Empty/whitespace query test | 100% | 100% |
| Function signatures unchanged | 100% | 100% |
| Tenant filter preserved | 100% | 100% |
| Deduplication logic preserved | 100% | 100% |
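The edge cases this final scenario exercises (whitespace queries, empty result sets, malformed documents, partial batches) are typically handled with guard clauses at the pipeline boundary. A sketch under those assumptions, where `search_fn` and `chunk_fn` stand in for the real search and chunking calls and the tenant filter is threaded through untouched:

```python
def safe_retrieve(query, search_fn, tenant_id):
    """Guarded retrieval: reject blank queries, tolerate empty results."""
    if not query or not query.strip():
        return []  # blank/whitespace query: nothing to search for
    # An empty or None result set is a valid answer, not an error.
    return search_fn(query.strip(), tenant_id=tenant_id) or []

def ingest_batch(docs, chunk_fn):
    """Skip malformed docs but report them, so a partial batch is never
    silently dropped."""
    chunks, errors = [], []
    for doc in docs:
        try:
            if not isinstance(doc.get("text"), str) or not doc["text"].strip():
                raise ValueError("missing or empty text")
            chunks.extend(chunk_fn(doc))
        except (ValueError, KeyError, AttributeError) as exc:
            errors.append({
                "doc": doc.get("id") if isinstance(doc, dict) else None,
                "error": str(exc),
            })
    return chunks, errors
```

Returning the `errors` list alongside the good chunks is what satisfies "partial batch not silently dropped": the caller can log, retry, or alert on the failures while the rest of the batch proceeds.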

Repository: jeffallan/claude-skills

Evaluated
Agent: Claude Code
Model: Claude Sonnet 4.6
