build-mcp

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

Score: 70

Quality: 62% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)

To optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/mcp/skills/build-mcp/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description that clearly identifies its niche (MCP server development), provides explicit 'Use when' guidance, and includes strong trigger terms covering both Python and TypeScript ecosystems. Its main weakness is that the 'what' portion is somewhat high-level—it could benefit from listing more specific concrete actions like defining tools, configuring transports, or handling authentication.

Suggestions

Add more specific concrete actions to the 'what' portion, e.g., 'defining tools and resources, configuring transports, handling authentication, structuring server projects' to improve specificity.
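
As an illustration, one possible rewrite incorporating this suggestion could look like this in the SKILL.md frontmatter (the exact wording is hypothetical, not taken from the skill itself):

```yaml
---
name: build-mcp
description: >
  Build MCP (Model Context Protocol) servers: define tools and resources,
  configure transports, handle authentication, and structure server projects.
  Use when building MCP servers to integrate external APIs or services,
  in Python (FastMCP) or Node/TypeScript (MCP SDK).
---
```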

Dimension scores:

- Specificity (2 / 3): The description names the domain (MCP servers) and a general action ('creating high-quality MCP servers that enable LLMs to interact with external services through well-designed tools'), but does not list multiple specific concrete actions like defining tools, handling authentication, setting up transport layers, etc.
- Completeness (3 / 3): Clearly answers both 'what' (creating high-quality MCP servers that enable LLMs to interact with external services) and 'when' ('Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK)').
- Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'MCP', 'Model Context Protocol', 'MCP servers', 'FastMCP', 'MCP SDK', 'external APIs', 'Python', 'Node', 'TypeScript', and 'tools'. These cover the main variations a user building MCP servers would naturally use.
- Distinctiveness / Conflict Risk (3 / 3): MCP server development is a very specific niche with distinct trigger terms (MCP, Model Context Protocol, FastMCP, MCP SDK). This is unlikely to conflict with general coding skills or other integration-related skills.

Total: 11 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive but overly verbose guide to MCP server development. Its main strength is the well-structured four-phase workflow with clear references to external documentation, but it suffers from significant redundancy (reference files mentioned 2-3 times each), excessive explanation of concepts Claude already understands, and a lack of concrete executable code examples in the main body. The content would benefit greatly from aggressive trimming — likely 50%+ could be cut while preserving all actionable information.

Suggestions

Cut the content by at least 50% — remove explanations of basic concepts (DRY principle, error handling, what pagination is), eliminate repeated references to the same guide files, and trim padded transitional phrases.

Add concrete, executable code examples for at least one complete tool implementation in both Python and TypeScript, rather than deferring all code to reference files.

Strengthen the validation feedback loop in Phase 3 with explicit validate-fix-retry steps, especially for the build/test process (e.g., 'If build fails: read error, fix, rebuild. Only proceed when build succeeds.').

Consolidate the reference file listing — mention each file once with clear context about when to load it, rather than repeating references across Phase 1, Phase 2, Phase 3, and the Reference Files section.

Dimension scores:

- Conciseness (1 / 3): The skill is extremely verbose at ~300+ lines, with significant redundancy. It explains concepts Claude already knows (what MCP is, what error handling means, what DRY principle is), repeats references to the same guide files multiple times across sections, and includes padded phrases like 'Now that you have a comprehensive plan' and 'To ensure quality, review the code for'. Much of the content reads like a tutorial for a junior developer rather than concise instructions for an AI agent.
- Actionability (2 / 3): The skill provides some concrete guidance (specific URLs to fetch, XML format for evaluations, specific commands like `python -m py_compile` and `npm run build`), but most of the content is high-level direction rather than executable code. The actual implementation details are deferred to reference files that aren't provided. Tool annotation examples and the evaluation XML template are concrete, but the core implementation guidance is largely abstract checklists.
- Workflow Clarity (2 / 3): The four-phase workflow is clearly sequenced and logically ordered, with numbered sub-steps. However, validation checkpoints are weak: Phase 3.2 mentions testing but the feedback loop is vague ('verify Python syntax', 'ensure it completes without errors'). There's no explicit validate-fix-retry loop for the implementation phase, and the warning about hanging processes is good but the recovery guidance is incomplete.
- Progressive Disclosure (2 / 3): The skill references multiple external files (python_mcp_server.md, node_mcp_server.md, mcp_best_practices.md, evaluation.md) with clear links, which is good progressive disclosure structure. However, the main SKILL.md itself contains too much inline content that overlaps with what the reference files presumably cover (e.g., detailed tool implementation steps, quality checklists). The references are also repeated multiple times throughout the document and again in a summary section at the end, adding redundancy.

Total: 7 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.

Repository: NeoLabHQ/context-engineering-kit (reviewed)
