
langchain-core-workflow-b

Build LangChain agents with tool calling for autonomous task execution. Use when creating AI agents, implementing tool/function calling, binding tools to models, or building autonomous multi-step workflows. Trigger: "langchain agents", "langchain tools", "tool calling", "create agent", "function calling", "createToolCallingAgent".
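The core workflow this skill covers — bind tools to a model, then loop until the model stops requesting tool calls — can be sketched in plain TypeScript. Note this is a conceptual sketch only: the Tool and ModelStep shapes and the mock model below are illustrative stand-ins, not LangChain APIs (in LangChain you would use createToolCallingAgent and an AgentExecutor instead).

```typescript
// Minimal sketch of a tool-calling agent loop. Tool/ModelStep are
// illustrative stand-ins, not LangChain's actual interfaces.
type Tool = { name: string; call: (input: string) => string };

type ModelStep =
  | { kind: "tool_call"; tool: string; input: string }
  | { kind: "final"; answer: string };

// A mock "model": requests one tool call, then answers from the result.
function mockModel(history: string[]): ModelStep {
  if (history.length === 0) {
    return { kind: "tool_call", tool: "calculator", input: "2+3" };
  }
  return { kind: "final", answer: `Result: ${history[history.length - 1]}` };
}

function runAgent(tools: Tool[], maxSteps = 5): string {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = mockModel(history);
    if (step.kind === "final") return step.answer;
    const tool = tools.find((t) => t.name === step.tool);
    if (!tool) throw new Error(`Unknown tool: ${step.tool}`);
    history.push(tool.call(step.input)); // feed tool output back to the model
  }
  throw new Error("Agent exceeded max steps");
}

const calculator: Tool = {
  name: "calculator",
  call: (input) => String(eval(input)), // toy evaluator, sketch only
};

console.log(runAgent([calculator])); // → "Result: 5"
```

The maxSteps cap mirrors why agent loops need guardrails: a model that keeps requesting tools would otherwise never terminate.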

Overall score: 80

Quality: 77% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-b/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that excels in trigger term coverage and completeness, with explicit 'Use when' and 'Trigger' clauses that make it easy for Claude to select appropriately. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., binding tools to models, configuring agent executors, handling tool responses). Overall it is well above average and clearly distinguishable.

Suggestions

Expand the capability description with more specific concrete actions, e.g., 'Build LangChain agents with tool calling, bind tools to chat models, configure agent executors, and handle multi-step autonomous workflows.'

Dimension scores:

Specificity (2/3): Names the domain (LangChain agents) and a key action (build agents with tool calling for autonomous task execution), but doesn't list multiple specific concrete actions, such as binding tools, creating chains, or handling multi-step reasoning, in a detailed way.

Completeness (3/3): Clearly answers both 'what' (build LangChain agents with tool calling for autonomous task execution) and 'when' (explicit 'Use when' clause with multiple trigger scenarios, plus an explicit 'Trigger:' list).

Trigger Term Quality (3/3): Excellent coverage of natural trigger terms including 'langchain agents', 'langchain tools', 'tool calling', 'create agent', 'function calling', and the specific API name 'createToolCallingAgent'. These cover both natural language and technical terms users would actually say.

Distinctiveness / Conflict Risk (3/3): Highly distinctive with LangChain-specific terminology and function names like 'createToolCallingAgent'. Unlikely to conflict with generic coding skills or other AI framework skills due to the specific LangChain focus.

Total: 11/12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with excellent executable code examples covering the full agent workflow. Its main weaknesses are the lack of validation checkpoints in the workflow (important for agent loops and tool execution), and the content could be more concise by splitting secondary topics (Python equivalent, streaming, direct binding) into referenced files rather than inlining everything.

Suggestions

Add explicit validation checkpoints, e.g., 'Verify agent output: check result.intermediateSteps to confirm tools were called as expected before deploying' and 'Test each tool independently before passing to AgentExecutor'.

Move the Python equivalent, streaming events, and direct tool binding sections into separate referenced files to keep SKILL.md as a focused overview.

Remove or significantly trim the weather tool example — a single tool example (calculator) is sufficient to demonstrate the pattern, with a brief note about adding more tools.
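The suggested intermediateSteps checkpoint could look like the sketch below. The result shape mirrors what an AgentExecutor run returns when configured to report intermediate steps, but the helper itself and the expected-tool list are hypothetical, not a LangChain API:

```typescript
// Hypothetical checkpoint: verify which tools an agent actually invoked
// before trusting its output. `AgentResult` mimics the
// { output, intermediateSteps } shape of an agent run; the helper is ours.
type IntermediateStep = {
  action: { tool: string; toolInput: unknown };
  observation: string;
};

type AgentResult = { output: string; intermediateSteps: IntermediateStep[] };

function verifyToolUsage(result: AgentResult, expectedTools: string[]): void {
  const used = result.intermediateSteps.map((s) => s.action.tool);
  for (const tool of expectedTools) {
    if (!used.includes(tool)) {
      throw new Error(
        `Expected tool "${tool}" was never called (used: ${used.join(", ") || "none"})`
      );
    }
  }
}

// Example: a run that called the calculator passes the checkpoint.
const result: AgentResult = {
  output: "The answer is 5.",
  intermediateSteps: [
    { action: { tool: "calculator", toolInput: "2+3" }, observation: "5" },
  ],
};

verifyToolUsage(result, ["calculator"]); // passes silently
```

Running this check after each agent invocation catches the failure mode the review flags: a model that answers from its own priors instead of calling the tool it was given.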

Dimension scores:

Conciseness (2/3): The content is mostly efficient with executable code examples, but includes some unnecessary elements: the Python equivalent section (which adds significant length for a TypeScript-focused skill), inline comments explaining obvious things, and weather tool mock data that could be trimmed. The overall length (~200 lines) is reasonable but could be tightened.

Actionability (3/3): Every step provides fully executable, copy-paste-ready TypeScript code with proper imports, concrete examples, and expected outputs. Tool definitions include complete Zod schemas, the agent setup is complete with all required configuration, and even the error handling table provides specific fixes.

Workflow Clarity (2/3): Steps are clearly sequenced (define tools → create agent → run → add memory → stream → alternative approach), but there are no validation checkpoints or feedback loops. For agent development involving tool execution (which can fail or loop), there should be explicit verification steps like checking intermediateSteps or validating tool outputs before proceeding.

Progressive Disclosure (2/3): The skill has good section structure and links to external resources at the end, but the content is somewhat monolithic: the Python equivalent, streaming, and direct tool binding sections could be split into separate reference files. The inline error handling table is appropriate, but the overall file is long for a SKILL.md overview.

Total: 9/12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9/11 checks passed

Checks flagged:

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9/11 (Passed)
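A frontmatter cleanup consistent with those two warnings might look like the fragment below. The specific keys shown under metadata are illustrative guesses, not fields from this skill; the point is the shape, with only spec-defined keys at the top level:

```
---
name: langchain-core-workflow-b
description: Build LangChain agents with tool calling for autonomous task execution. ...
# keep top-level keys to those the skill spec defines;
# move any custom keys under metadata instead
metadata:
  pack: langchain-pack   # hypothetical custom key, shown for illustration
---
```

This resolves the frontmatter_unknown_keys warning; the allowed_tools_field warning would additionally require checking each entry in 'allowed-tools' against the tool names the target agent actually exposes.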

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
