
langchain-core-workflow-a

Build LangChain LCEL chains with prompts, parsers, and composition. Use when creating prompt templates, building RunnableSequence pipelines, parallel/branching chains, or multi-step processing workflows. Trigger: "langchain chains", "langchain prompts", "LCEL workflow", "langchain pipeline", "prompt template", "RunnableSequence".

84

Quality: 82%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies its domain (LangChain LCEL), lists specific capabilities, provides explicit 'Use when' guidance, and includes well-chosen trigger terms. It uses proper third-person voice and is concise without being vague. The explicit trigger term list is a nice addition that further aids skill selection.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: building LCEL chains, prompts, parsers, composition, prompt templates, RunnableSequence pipelines, parallel/branching chains, and multi-step processing workflows. | 3 / 3 |
| Completeness | Clearly answers both 'what' (build LCEL chains with prompts, parsers, and composition) and 'when' (an explicit 'Use when' clause covering creating prompt templates, building pipelines, parallel/branching chains, and multi-step workflows), with explicit trigger terms listed. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'langchain chains', 'langchain prompts', 'LCEL workflow', 'langchain pipeline', 'prompt template', 'RunnableSequence'. These are terms developers would naturally use when seeking help with LangChain. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with LangChain-specific terminology (LCEL, RunnableSequence) that clearly carves out a niche. Unlikely to conflict with general coding skills or other framework-specific skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with excellent executable code examples covering all major LCEL composition patterns. Its main weaknesses are moderate verbosity (the Python equivalent section and some redundant framing) and the lack of explicit validation/verification steps in the workflow—there's no guidance on testing or debugging chains after building them. The structure is good but the content length could benefit from splitting detailed examples into referenced files.
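The composition patterns the review credits (sequential, parallel, branching) can be sketched independently of the LangChain API as plain async-function combinators. Note that `sequence`, `parallel`, and `branch` below are local stand-ins illustrating the semantics of `RunnableSequence`, `RunnableParallel`, and `RunnableBranch`, not LangChain exports:

```typescript
type Step<In, Out> = (input: In) => Promise<Out>;

// Sequential composition: each step's output feeds the next
// (the semantics of RunnableSequence / .pipe()).
const sequence =
  <A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> =>
  async (input) => second(await first(input));

// Parallel composition: run every step on the same input and merge
// the results into one keyed object (RunnableParallel semantics).
const parallel =
  <In>(steps: Record<string, Step<In, unknown>>): Step<In, Record<string, unknown>> =>
  async (input) => {
    const entries = await Promise.all(
      Object.entries(steps).map(
        async ([key, step]) => [key, await step(input)] as const,
      ),
    );
    return Object.fromEntries(entries);
  };

// Branching: route the input to one of two steps based on a predicate
// (RunnableBranch semantics).
const branch =
  <In, Out>(
    predicate: (input: In) => boolean,
    ifTrue: Step<In, Out>,
    ifFalse: Step<In, Out>,
  ): Step<In, Out> =>
  async (input) => (predicate(input) ? ifTrue(input) : ifFalse(input));
```

Combining them, e.g. `sequence(parallel({...}), formatStep)`, mirrors the common LCEL shape of gathering several values in parallel and then formatting them in one pass.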

Suggestions

Add a brief validation/testing step showing how to verify a chain works (e.g., invoke with test input and check output shape), which would improve workflow clarity.

Consider moving the Python equivalent section to a separate reference file and linking to it, reducing inline content length and improving progressive disclosure.

Remove the Overview paragraph since the title and section structure already convey the scope.
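The first suggestion, invoking a chain with test input and checking the output shape, can be sketched as a minimal smoke test. The `Runnable` interface and `summarizeChain` stub below are hypothetical stand-ins for the `invoke()` shape a real LCEL chain exposes:

```typescript
// Local stand-in for the invoke() shape LCEL runnables expose.
interface Runnable<In, Out> {
  invoke(input: In): Promise<Out>;
}

// Hypothetical stub; a real prompt -> model -> parser chain would sit here.
const summarizeChain: Runnable<{ topic: string }, string> = {
  async invoke(input) {
    return `Summary of ${input.topic}`;
  },
};

// Smoke-test the chain with a known input before wiring it into a workflow:
// assert the output has the expected type and is non-empty.
async function validateChain(): Promise<string> {
  const output = await summarizeChain.invoke({ topic: "LCEL" });
  if (typeof output !== "string" || output.length === 0) {
    throw new Error(`Unexpected output shape: ${JSON.stringify(output)}`);
  }
  return output;
}
```

Running a check like this after each composition step catches shape mismatches early, which is the feedback loop the Workflow Clarity dimension finds missing.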

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary sections, such as the Python equivalent (which adds bulk without being essential to the TypeScript-focused skill), and minor verbosity in section headers and comments. The overview paragraph restates what the title already conveys. | 2 / 3 |
| Actionability | All code examples are fully executable TypeScript (and Python) with correct imports, concrete patterns, and copy-paste ready snippets. Each composition pattern (sequential, parallel, branching, passthrough) is demonstrated with complete, runnable code. | 3 / 3 |
| Workflow Clarity | While individual patterns are clearly demonstrated, there is no explicit workflow for building a chain end-to-end with validation checkpoints. The error handling table is helpful, but there is no feedback-loop guidance, e.g. how to verify a chain works correctly before deploying, or how to debug when output doesn't match expectations. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, references to external docs, and a next-steps pointer to workflow-b. However, the inline content is quite long (~180 lines of code examples), and the Python equivalent section could be a separate reference file. The error handling table and resources are well-placed. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills
Reviewed

