Build LangChain LCEL chains with prompts, parsers, and composition. Use when creating prompt templates, building RunnableSequence pipelines, parallel/branching chains, or multi-step processing workflows. Trigger: "langchain chains", "langchain prompts", "LCEL workflow", "langchain pipeline", "prompt template", "RunnableSequence".
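The sequential composition this description refers to can be illustrated without the real library. Below is a dependency-free TypeScript sketch of LCEL-style sequencing; the `Runnable` type and `sequence` helper are illustrative stand-ins for this review, not the actual `@langchain/core` API (which provides `RunnableSequence.from([...])`):

```typescript
// Dependency-free sketch of LCEL-style sequential composition.
// `Runnable` and `sequence` are illustrative stand-ins, not the real
// @langchain/core interfaces.
type Runnable<In, Out> = { invoke: (input: In) => Out };

function sequence<A, B, C>(
  first: Runnable<A, B>,
  second: Runnable<B, C>,
): Runnable<A, C> {
  // Each step's output feeds the next step's input, as in an LCEL pipe.
  return { invoke: (input) => second.invoke(first.invoke(input)) };
}

// Stand-in for a prompt template: fills a topic into a fixed string.
const promptTemplate: Runnable<{ topic: string }, string> = {
  invoke: ({ topic }) => `Tell me a joke about ${topic}`,
};

// Stand-in for an output parser: normalizes the resulting text.
const upperCaseParser: Runnable<string, string> = {
  invoke: (text) => text.toUpperCase(),
};

const chain = sequence(promptTemplate, upperCaseParser);
```

Calling `chain.invoke({ topic: "cats" })` runs both steps in order, which is the same pipe-then-parse shape the skill's trigger terms ("RunnableSequence", "LCEL workflow") point at.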
Quality (does it follow best practices?): 82%

- Impact: — (no eval scenarios have been run)
- Validation: Passed (no known issues)

## Discovery
Score: 100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its scope within the LangChain LCEL ecosystem. It provides specific actions, explicit trigger guidance with a 'Use when' clause, and a dedicated trigger term list that covers natural user language. The domain-specific terminology makes it highly distinctive and unlikely to conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: building LCEL chains, prompts, parsers, composition, prompt templates, RunnableSequence pipelines, parallel/branching chains, and multi-step processing workflows. | 3 / 3 |
| Completeness | Clearly answers both 'what' (build LCEL chains with prompts, parsers, and composition) and 'when' (an explicit 'Use when' clause listing specific scenarios, plus a 'Trigger' list with concrete terms). | 3 / 3 |
| Trigger Term Quality | Includes a rich set of natural keywords users would actually say: 'langchain chains', 'langchain prompts', 'LCEL workflow', 'langchain pipeline', 'prompt template', 'RunnableSequence'. These cover both high-level and specific terms a developer would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with LangChain-specific terminology like 'LCEL', 'RunnableSequence', and 'langchain chains'. Unlikely to conflict with generic coding or other framework skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
## Implementation

Score: 64%. Reviews the quality of the instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with excellent executable code examples covering all major LCEL composition patterns. Its main weaknesses are the lack of validation and debugging checkpoints within workflows, and some verbosity: the Python-equivalent section and some explanatory text could be trimmed. The content would benefit from integrated verification steps and slightly tighter prose.
### Suggestions

- Add validation/debugging guidance within workflows, e.g., how to inspect intermediate outputs in a RunnableSequence using `.pick()` or logging, and how to verify chain correctness before production use.
- Remove or significantly trim the Python-equivalent section; if both languages are needed, split them into separate files or make the skill explicitly bilingual from the start.
- Integrate error handling into the workflow examples rather than presenting it as a separate table, e.g., show a try/catch around chain invocation with specific recovery steps.
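The try/catch suggestion above can be sketched as follows. This is a hedged illustration: `AsyncRunnable` and `invokeWithFallback` are hypothetical helpers mirroring the shape of a Runnable's `invoke()`, not part of LangChain's API:

```typescript
// Sketch: wrapping a chain invocation with retries and a fallback value.
// `AsyncRunnable` is an illustrative stand-in, not the real LangChain
// Runnable interface.
type AsyncRunnable<In, Out> = { invoke: (input: In) => Promise<Out> };

async function invokeWithFallback<In, Out>(
  chain: AsyncRunnable<In, Out>,
  input: In,
  fallback: Out,
  retries = 2,
): Promise<Out> {
  for (let attempt = 1; attempt <= retries + 1; attempt++) {
    try {
      return await chain.invoke(input);
    } catch (err) {
      // Surface mid-pipeline failures instead of swallowing them silently.
      console.error(`chain invocation attempt ${attempt} failed:`, err);
    }
  }
  // Recovery step: return a safe default rather than crashing the workflow.
  return fallback;
}
```

Embedding a wrapper like this in the workflow examples would make the error-handling guidance active rather than a reference table the agent may never consult.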
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with good code examples, but it includes some unnecessary sections: the Python equivalent adds bulk without being essential to the TypeScript-focused skill, and the overview restates what the description already covers. The error handling table is useful, but some entries explain basic concepts Claude would already know. | 2 / 3 |
| Actionability | All code examples are fully executable TypeScript (and Python) with correct imports, concrete invocations, and expected-output comments. Every pattern (sequential, parallel, branching, passthrough) has a complete, copy-paste-ready example. | 3 / 3 |
| Workflow Clarity | The skill presents composition patterns clearly but lacks explicit validation checkpoints or feedback loops. There is no guidance on verifying chain outputs, debugging intermediate steps, or handling failures mid-pipeline. The error handling table is reactive rather than integrated into the workflow. | 2 / 3 |
| Progressive Disclosure | The content is well structured, with clear headers and a logical progression from simple to complex patterns. However, at roughly 180 lines it is fairly long and could benefit from splitting the detailed patterns (parallel, branching) into separate reference files. The 'Next Steps' link to workflow-b is good, but no bundle files exist to offload content to. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
## Validation

Score: 81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 9 / 11 Passed |