Implement LangChain RAG pipelines with document loaders, text splitters, embeddings, and vector stores (Chroma, Pinecone, FAISS). Trigger: "langchain RAG", "langchain documents", "langchain vector store", "langchain embeddings", "document loaders", "text splitters", "retrieval".
Quality: 73% (Does it follow best practices?)
Impact: — (no eval scenarios have been run)
Validation: Passed (no known issues)

Optimize this skill with Tessl:

    npx tessl skill review --optimize ./plugins/saas-packs/langchain-pack/skills/langchain-data-handling/SKILL.md

Quality
Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description with excellent specificity and trigger term coverage for the LangChain RAG domain. Its main weakness is the lack of an explicit 'Use when...' clause that describes the situations or user intents that should activate this skill, instead relying on a list of trigger keywords. The description is concise, uses third-person voice correctly, and occupies a clear niche.
Suggestions
Add an explicit 'Use when...' clause describing scenarios, e.g., 'Use when the user needs to build retrieval-augmented generation pipelines, load and chunk documents for search, or set up vector store integrations using LangChain.'
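As a sketch, the suggested clause could be folded into the skill's frontmatter like this (field names assume the standard SKILL.md frontmatter layout; adjust to the spec in use):

```yaml
---
name: langchain-data-handling
description: >
  Implement LangChain RAG pipelines with document loaders, text splitters,
  embeddings, and vector stores (Chroma, Pinecone, FAISS). Use when the user
  needs to build retrieval-augmented generation pipelines, load and chunk
  documents for search, or set up vector store integrations using LangChain.
  Trigger: "langchain RAG", "langchain documents", "langchain vector store",
  "langchain embeddings", "document loaders", "text splitters", "retrieval".
---
```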
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and components: 'document loaders, text splitters, embeddings, and vector stores (Chroma, Pinecone, FAISS)' along with the overarching task of implementing 'LangChain RAG pipelines'. | 3 / 3 |
| Completeness | The 'what' is clearly answered (implement LangChain RAG pipelines with specific components). However, there is no explicit 'Use when...' clause: the trigger terms are listed but don't constitute proper 'when should Claude use it' guidance; they're more like keyword tags. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes a comprehensive set of natural trigger terms users would actually say: 'langchain RAG', 'langchain documents', 'langchain vector store', 'langchain embeddings', 'document loaders', 'text splitters', 'retrieval'. These cover common variations well. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description is highly specific to LangChain RAG pipelines with named vector stores (Chroma, Pinecone, FAISS), making it clearly distinguishable from general coding skills, other ML skills, or generic document processing skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable RAG pipeline skill with complete, executable code examples covering the full document-to-answer workflow. Its main weaknesses are verbosity (particularly the duplicated Python section and some unnecessary commentary) and the lack of validation checkpoints between pipeline steps. The content would benefit from splitting into a concise overview with references to detailed sub-files for each vector store option and the Python equivalent.
Suggestions
Add validation checkpoints between steps (e.g., verify document count after loading, check chunk sizes after splitting, verify vector dimensions match before storing) to catch errors early in the pipeline.
Move the Python RAG equivalent and alternative vector store configurations (Pinecone vs FAISS) into separate referenced files to reduce the main skill's token footprint.
Remove obvious comments like '// reads PINECONE_API_KEY from env' and metadata structure explanations that Claude can infer from the code.
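One way to realize the checkpoint suggestion, sketched as plain TypeScript. The `Chunk` shape mirrors LangChain's `Document` interface, but nothing here calls the library; the helpers and their error messages are illustrative, not part of the reviewed skill:

```typescript
// Hypothetical validation helpers for the load -> split -> embed -> store pipeline.
interface Chunk {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// Checkpoint after loading: fail fast if the loader returned nothing.
function assertDocsLoaded(docs: Chunk[]): void {
  if (docs.length === 0) {
    throw new Error("Loader returned 0 documents; check the source path");
  }
}

// Checkpoint after splitting: every chunk should respect the splitter's limit.
function assertChunkSizes(chunks: Chunk[], chunkSize: number): void {
  const oversized = chunks.filter((c) => c.pageContent.length > chunkSize);
  if (oversized.length > 0) {
    throw new Error(`${oversized.length} chunk(s) exceed ${chunkSize} chars`);
  }
}

// Checkpoint before storing: embedding dimensions must match the target index.
function assertDims(vectors: number[][], expectedDim: number): void {
  for (const v of vectors) {
    if (v.length !== expectedDim) {
      throw new Error(`Got ${v.length}-dim vector; index expects ${expectedDim}`);
    }
  }
}
```

Calling these between pipeline steps turns silent data problems (empty loads, runaway chunks, mismatched embeddings) into immediate, named failures before any index is overwritten.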
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary elements: the full Python equivalent section is redundant given the TypeScript focus, comments explaining obvious things (e.g., 'Each chunk preserves metadata from the source document'), and the embedding pricing info adds marginal value. The skill is quite long for what could be more tightly expressed. | 2 / 3 |
| Actionability | Every step provides fully executable, copy-paste ready TypeScript code with correct imports, concrete method calls, and realistic examples. The code covers the complete pipeline from document loading through RAG chain invocation, with specific model names, parameters, and output handling. | 3 / 3 |
| Workflow Clarity | The steps are clearly sequenced (load → split → embed → store → query → RAG chain) and logically ordered. However, there are no validation checkpoints between steps: no verification that documents loaded correctly, that chunks are a reasonable size, or that embeddings succeeded before proceeding to vector store creation. For a pipeline involving external APIs and potential data loss (overwriting indexes), feedback loops are missing. | 2 / 3 |
| Progressive Disclosure | The content is structured with clear section headers and a logical progression, but it's monolithic: the Python equivalent, multiple vector store options, and the full RAG chain could be split into separate reference files. External links are provided at the end, but inline content is heavy for a single SKILL.md with no bundle files to offload to. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
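The progressive-disclosure split suggested above might look like the layout below. The file names are illustrative, not prescribed by the skill spec:

```
langchain-data-handling/
├── SKILL.md                    # concise overview + core TypeScript pipeline
└── references/
    ├── vector-stores.md        # Pinecone and FAISS configuration details
    └── python-equivalent.md    # Python version of the RAG pipeline
```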
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
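A sketch of how the two warnings might be resolved in the frontmatter. The specific tool names and the `metadata` key are assumptions about the skill spec in use; check the spec for the recognized vocabulary:

```yaml
---
name: langchain-data-handling
description: Implement LangChain RAG pipelines with document loaders, text splitters, embeddings, and vector stores (Chroma, Pinecone, FAISS).
# Keep only tool names the spec recognizes (examples assumed):
allowed-tools: [Read, Write, Bash]
# Unknown top-level keys moved under metadata (key name hypothetical):
metadata:
  pack: saas-packs/langchain-pack
---
```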