
langchain-architecture

Design LLM applications using LangChain 1.x and LangGraph for agents, memory, and tool integration. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows.

74

Quality: 66% (Does it follow best practices?)
Impact: 82%, 2.34x (average score across 3 eval scenarios)

Security (by Snyk): Risky. Do not use without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/llm-application-dev/skills/langchain-architecture/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly identifies its technology niche (LangChain 1.x, LangGraph) and includes an explicit 'Use when' clause with relevant trigger terms. Its main weakness is that the capability description could be more specific—listing concrete actions like 'build retrieval chains', 'configure agent tools', or 'set up conversation memory' rather than the somewhat abstract 'Design LLM applications'.

Suggestions

Replace the vague 'Design LLM applications' with more specific concrete actions like 'Build retrieval chains, configure agent tools, set up conversation memory, and create multi-step workflows'.
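Applied to the skill's frontmatter, that suggestion might read as follows (assuming the usual SKILL.md YAML frontmatter; the wording is the reviewer's proposed phrasing, not the skill's current description):

```yaml
---
name: langchain-architecture
description: >-
  Build retrieval chains, configure agent tools, set up conversation memory,
  and create multi-step LangGraph workflows. Use when building LangChain
  applications, implementing AI agents, or creating complex LLM workflows.
---
```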

Specificity: 2 / 3
Names the domain (LLM applications, LangChain, LangGraph) and mentions some capabilities (agents, memory, tool integration), but doesn't list multiple concrete actions. 'Design' is somewhat vague and there are no specific operations like 'create chains', 'configure retrieval', or 'set up streaming'.

Completeness: 3 / 3
Clearly answers both 'what' (design LLM applications using LangChain 1.x and LangGraph for agents, memory, and tool integration) and 'when' (explicit 'Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows').

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'LangChain', 'LangGraph', 'AI agents', 'LLM workflows', 'memory', 'tool integration'. These cover the main terms a developer would use when seeking help with this technology stack.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive due to specific technology references (LangChain 1.x, LangGraph), which create a clear niche. Unlikely to conflict with generic coding skills or other AI framework skills.

Total: 11 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent, executable code examples covering a wide range of LangChain/LangGraph patterns, which is its primary strength. However, it is severely bloated—much of the descriptive content (Core Concepts bullet lists, feature descriptions, 'When to Use' section) explains things Claude already knows and wastes tokens. The monolithic structure with no file references and the absence of validation/error-recovery steps in workflows are significant weaknesses.

Suggestions

Remove or drastically reduce the 'Core Concepts' descriptive sections (memory types list, document processing components, callbacks description) since Claude already knows these—keep only the code examples that demonstrate usage.

Split content into separate files: keep SKILL.md as a concise overview with Quick Start, then reference files like PATTERNS.md (RAG, multi-agent), MEMORY.md, TESTING.md, and PERFORMANCE.md.
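One possible layout for that split (file names taken from the suggestion; the exact structure is illustrative, not prescribed by the skill):

```
langchain-architecture/
├── SKILL.md         # concise overview + Quick Start
├── PATTERNS.md      # RAG and multi-agent patterns
├── MEMORY.md        # conversation memory configuration
├── TESTING.md       # testing strategies
└── PERFORMANCE.md   # performance optimization
```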

Add explicit validation and error-handling steps to workflows—e.g., after RAG retrieval, check if documents were returned; in multi-agent orchestration, add a max-iteration guard to prevent infinite supervisor loops.
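The two guards suggested here can be sketched framework-agnostically; `retrieve`, `generate`, and `route` below are hypothetical stand-ins for a LangChain retriever call, an LLM generation step, and a LangGraph supervisor routing step, not APIs taken from the skill itself.

```python
def rag_answer(retrieve, generate, query):
    """RAG step with an explicit validation checkpoint after retrieval."""
    docs = retrieve(query)
    if not docs:
        # Checkpoint: fail fast instead of generating from an empty context.
        return {"answer": None, "error": "no documents retrieved"}
    return {"answer": generate(query, docs), "error": None}


def supervise(route, state, max_iterations=10):
    """Supervisor loop with a max-iteration guard against infinite routing."""
    for _ in range(max_iterations):
        state = route(state)
        if state.get("next") == "END":
            return state
    raise RuntimeError(f"supervisor exceeded {max_iterations} iterations")
```

In LangGraph itself, a similar guard is available out of the box via the per-run recursion limit (e.g. `graph.invoke(inputs, config={"recursion_limit": 25})`), but an explicit iteration counter like the one above makes the failure mode visible in the skill's own workflow code.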

Remove the 'When to Use This Skill' section entirely—it's 7 lines of obvious context that adds no actionable value.

Conciseness: 1 / 3
Extremely verbose at ~500+ lines. Explains concepts Claude already knows (what StateGraph provides, what Document Loaders are, what LangSmith does). The 'Core Concepts' section is largely descriptive bullet points that don't add actionable value. The 'When to Use This Skill' list and feature descriptions are padding.

Actionability: 3 / 3
The code examples are concrete, executable, and copy-paste ready. Patterns include complete StateGraph definitions, tool schemas with Pydantic, memory configurations, streaming, testing, and performance optimization, all with real, runnable Python code.

Workflow Clarity: 2 / 3
Multi-step patterns (RAG, multi-agent, workflow) are presented as code but lack explicit validation checkpoints or error recovery steps. There's no guidance on what to verify between steps, how to debug failures, or feedback loops for when things go wrong in production.

Progressive Disclosure: 1 / 3
This is a monolithic wall of content with no references to external files. Everything from quick start to advanced multi-agent patterns, memory, callbacks, streaming, testing, and performance optimization is inlined. Content should be split across multiple files with clear navigation from a concise overview.

Total: 7 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

skill_md_line_count: Warning
SKILL.md is long (635 lines); consider splitting into references/ and linking.

Total: 10 / 11 (Passed)

Repository: wshobson/agents (Reviewed)
