Build LangChain chains and prompts for structured LLM workflows. Use when creating prompt templates, building LCEL chains, or implementing sequential processing pipelines. Trigger with phrases like "langchain chains", "langchain prompts", "LCEL workflow", "langchain pipeline", "prompt template".
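The LCEL pipelines mentioned above compose components with Python's `|` operator. A minimal, dependency-free sketch of that composition style (the `Runnable` class and both stages here are illustrative stand-ins, not LangChain's actual API):

```python
class Runnable:
    """Minimal stand-in for an LCEL component: call it, or chain it with |."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Composing two runnables yields a new runnable that pipes
        # this stage's output into the next stage's input.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# A prompt-template stage followed by a fake model stage.
prompt = Runnable(lambda topic: f"Write one sentence about {topic}.")
model = Runnable(lambda text: f"[model reply to: {text}]")

chain = prompt | model
print(chain.invoke("LangChain"))
# -> [model reply to: Write one sentence about LangChain.]
```

In real LCEL the same `|` composition applies to prompt templates, models, and output parsers; this sketch only shows the piping idea.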
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill langchain-core-workflow-a87
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms and clear 'what/when' guidance. The explicit trigger phrases section is particularly helpful for skill selection. The main weakness is that the capability description could be more specific about concrete actions beyond 'build chains and prompts'.
Suggestions
Add more specific concrete actions, such as 'compose chain components', 'configure output parsers', and 'set up retrieval chains', to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (LangChain) and some actions ('Build chains and prompts', 'creating prompt templates', 'building LCEL chains'), but lacks comprehensive specific actions, such as which operations can be performed on chains or which types of templates are supported. | 2 / 3 |
| Completeness | Clearly answers both what ('Build LangChain chains and prompts for structured LLM workflows') and when ('Use when creating prompt templates, building LCEL chains, or implementing sequential processing pipelines') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms, including 'langchain chains', 'langchain prompts', 'LCEL workflow', 'langchain pipeline', and 'prompt template'; these are terms users would naturally use when they need this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with LangChain-specific terminology (LCEL, chains, prompt templates) that clearly differentiates it from generic LLM or workflow skills. Unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong skill file with excellent actionability through complete, executable code examples and clear workflow progression. The main weakness is some verbosity in meta-sections (Prerequisites, Output) that explain things Claude already knows or that the code demonstrates. The structure and navigation are well-organized with appropriate external references.
Suggestions
Remove or condense the 'Prerequisites' section; Claude doesn't need to be told it should understand prompt-engineering basics.
Remove the 'Output' section, as it merely describes what the code examples already demonstrate.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient, with good code examples, but includes some unnecessary elements: a 'Prerequisites' section explaining basics Claude knows, and an 'Output' section that describes what the code already demonstrates. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready Python code throughout. All examples include complete imports, concrete variable names, and expected outputs in comments. | 3 / 3 |
| Workflow Clarity | Clear numbered steps (1-4) with a logical progression from simple templates to complex composition. The multi-step processing example demonstrates the complete workflow with clear data flow between stages. | 3 / 3 |
| Progressive Disclosure | Well structured, with clear sections, appropriate use of external resource links for deeper documentation, and a clear 'Next Steps' pointer to the next workflow. Content is appropriately scoped for a single skill file. | 3 / 3 |
| Total | | 11 / 12 Passed |
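The "simple templates to complex composition" progression praised above can be sketched without LangChain itself. Every name below (the template string and both `fake_*` stages) is hypothetical, standing in for a prompt template and downstream chain steps:

```python
# Step 1: define a prompt template with a named variable.
summary_template = "Summarize the following text in one sentence:\n{text}"

# Step 2: fill the template with concrete input.
prompt = summary_template.format(text="LCEL chains pipe each stage's output into the next.")

# Steps 3-4: compose multiple processing stages, each consuming the previous output.
def fake_summarize(filled_prompt: str) -> str:
    # Stand-in for an LLM call; simply echoes the last line of the prompt.
    return filled_prompt.splitlines()[-1]

def fake_uppercase(summary: str) -> str:
    # Stand-in for a post-processing stage (e.g., an output parser).
    return summary.upper()

result = fake_uppercase(fake_summarize(prompt))
print(result)
# -> LCEL CHAINS PIPE EACH STAGE'S OUTPUT INTO THE NEXT.
```

The point is the data flow: each numbered step produces exactly the input the next step consumes, which is the workflow shape the review credits the skill file with demonstrating.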
Validation: 75%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 12 / 16 passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to 'metadata' | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 12 / 16 Passed |