
# agentic-development

Build AI agents with Pydantic AI (Python) and Claude SDK (Node.js)

Overall score: 42

- **Quality: 30%.** Does it follow best practices?
- **Impact: Pending.** No eval scenarios have been run.
- **Security (by Snyk): Advisory.** Suggest reviewing before use.

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/agentic-development/SKILL.md
```

## Quality

### Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (AI agent building) and names specific frameworks, which helps differentiation. However, it lacks concrete actions and the natural trigger terms users would say, and, critically, it has no 'Use when...' clause to guide skill selection. It reads more like a title than a functional description.

**Suggestions**

- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about building AI agents, using Pydantic AI, the Anthropic Claude SDK, tool use, function calling, or agentic workflows.'
- List specific concrete actions the skill covers, e.g., 'Define agent tools, configure agent loops, handle structured outputs, manage conversation state, implement streaming responses.'
- Include common keyword variations users might naturally say, such as 'pydantic-ai', 'anthropic SDK', 'agentic', 'tool use', 'function calling', 'agent framework'.
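Combined, these suggestions might yield frontmatter along the following lines. This is an illustrative sketch, not the skill's actual metadata, and it assumes the common SKILL.md `name`/`description` frontmatter convention:

```yaml
---
name: agentic-development
description: >
  Build AI agents with Pydantic AI (Python) and the Anthropic Claude SDK
  (Node.js): define agent tools, configure agent loops, handle structured
  outputs, manage conversation state, and implement streaming responses.
  Use when the user asks about building AI agents, pydantic-ai, the
  anthropic SDK, tool use, function calling, or agentic workflows.
---
```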

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (AI agents) and specifies two frameworks (Pydantic AI, Claude SDK) with their languages, but doesn't list concrete actions like 'create tool definitions', 'configure agent loops', or 'handle streaming responses'. | 2 / 3 |
| Completeness | Describes the what (build AI agents with specific frameworks) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when' caps completeness at 2, and the 'what' is also thin, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'AI agents', 'Pydantic AI', 'Claude SDK', 'Python', and 'Node.js', but misses common user terms like 'agentic', 'tool use', 'function calling', 'agent framework', 'pydantic-ai', or 'anthropic SDK'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Naming specific frameworks (Pydantic AI, Claude SDK) provides some distinctiveness, but 'build AI agents' is broad enough to overlap with other agent-building or SDK-related skills. The dual-framework scope also increases potential conflict with Python-specific or Node.js-specific skills. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed

### Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill attempts to be a comprehensive guide to agentic development across multiple frameworks and languages, but it suffers from trying to cover too much in a single file. The content is excessively verbose, restating many concepts Claude already understands (what agents are, what tools do, basic architecture patterns), and most code examples are illustrative pseudocode rather than executable implementations. The strongest sections are the concrete Pydantic AI and Claude SDK examples at the top, but these are buried among hundreds of lines of generic agent-development advice.

**Suggestions**

- Split content into separate files by framework (pydantic-ai.md, claude-sdk.md, gemini.md) and reference them from a lean SKILL.md overview, matching the 'Load with' pattern already mentioned in the header.
- Remove explanatory content Claude already knows: agent architecture diagrams, what tools/memory/guardrails are conceptually, and the OpenAI 'Three Components' diagram. Focus only on implementation patterns.
- Make code examples executable by either providing complete working implementations or explicitly noting which functions need user implementation. Replace pseudocode patterns like `llmCall()`, `createAgent()`, and `executeTool()` with real library calls.
- Cut the model selection table and framework-agnostic advice (the anti-patterns list and generic checklist), which are general knowledge, and focus the skill on the specific patterns and code needed to build agents with the two default frameworks.
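To illustrate the third suggestion, a pseudocode agent helper can be made self-contained by defining the missing pieces and explicitly flagging what the user must supply. The sketch below is an assumed minimal design, not code from the skill under review; `execute_tool` and `llm_call` mirror the undefined helpers the review calls out:

```python
from typing import Any, Callable

# Minimal tool registry standing in for the skill's undefined executeTool().
TOOLS: dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function so the agent loop can dispatch to it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """Example tool: add two integers."""
    return a + b

def execute_tool(name: str, **kwargs: Any) -> Any:
    """Dispatch a model-requested tool call by name."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

def llm_call(messages: list[dict[str, str]]) -> dict[str, Any]:
    """NEEDS USER IMPLEMENTATION: wire this to a real SDK client
    (e.g. the anthropic or openai package) instead of leaving it implied."""
    raise NotImplementedError("replace with a real model API call")
```

Raising `NotImplementedError` with a pointed message is the key move: it turns silent pseudocode into an example that runs, fails loudly at the right spot, and tells the reader exactly what to fill in.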

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at 600+ lines. Includes extensive explanations of concepts Claude already knows (agent architecture diagrams, what tools/memory/guardrails are), covers multiple frameworks (Pydantic AI, Claude SDK, OpenAI, Gemini) with redundant patterns, and includes project structure templates that are generic knowledge. Much of this content could be cut by 60-70% without losing actionable value. | 1 / 3 |
| Actionability | Contains many code examples that appear executable, but most are pseudocode-like patterns with undefined functions (executeTool, runCommand, llmCall, createAgent) and incomplete implementations. The Pydantic AI and Claude SDK examples at the top are the most concrete and copy-paste ready, but the bulk of the middle sections use illustrative TypeScript that wouldn't compile without significant additional code. | 2 / 3 |
| Workflow Clarity | The Explore-Plan-Execute-Verify workflow is clearly sequenced with verification steps, which is good. However, the verification implementations are pseudocode with undefined helpers, and there are no concrete validation commands or checkpoints for the actual agent development process itself. The workflow pattern is described abstractly rather than tied to specific, executable steps. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of text with everything inline. Despite the header mentioning 'Load with: base.md + llm-patterns.md + [language].md', the skill itself contains all content for multiple languages, multiple frameworks, and multiple concerns (architecture, tools, memory, guardrails, testing, model selection) that should be split into separate referenced files. No content is delegated to external files despite the massive length. | 1 / 3 |
| **Total** | | **6 / 12** |

Passed
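The Progressive Disclosure finding implies a lean SKILL.md that delegates detail to referenced files. A hypothetical layout is sketched below; the file names follow the review's split-by-framework suggestion and are not confirmed to exist in the repository:

```markdown
# agentic-development

Use when building AI agents with Pydantic AI (Python) or the Claude SDK (Node.js).

- Pydantic AI patterns: load [references/pydantic-ai.md](references/pydantic-ai.md)
- Claude SDK patterns: load [references/claude-sdk.md](references/claude-sdk.md)
- Gemini patterns: load [references/gemini.md](references/gemini.md)
```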

### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (857 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Passed

Repository: alinaqi/claude-bootstrap (Reviewed)

