Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
Install with Tessl CLI
npx tessl i github:ComposioHQ/awesome-claude-skills --skill mcp-builder

Overall score
82%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with excellent trigger terms and completeness. The main weakness is the lack of specific concrete actions: it describes the purpose at a high level but doesn't enumerate the specific tasks the skill helps with (e.g., defining tools, handling resources, implementing prompts). The description successfully establishes a clear niche that won't conflict with other skills.
Suggestions
Add specific concrete actions like 'define tool schemas, implement resource handlers, configure authentication, structure server responses' to improve specificity
Consider adding more action-oriented trigger phrases like 'create MCP tool', 'expose API via MCP', or 'build LLM integration'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (MCP servers) and general purpose (enable LLMs to interact with external services), but lacks specific concrete actions like 'define tool schemas', 'handle authentication', or 'implement request handlers'. | 2 / 3 |
| Completeness | Clearly answers both what ('Guide for creating high-quality MCP servers that enable LLMs to interact with external services through well-designed tools') and when ('Use when building MCP servers to integrate external APIs or services') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms: 'MCP', 'Model Context Protocol', 'MCP servers', 'external APIs', 'services', 'Python', 'FastMCP', 'Node', 'TypeScript', 'MCP SDK'. These are terms users would naturally use when seeking this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | MCP servers are a specific niche with distinct terminology (FastMCP, MCP SDK, Model Context Protocol). Unlikely to conflict with general API or server development skills due to the specialized MCP focus. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
70%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with excellent workflow clarity and progressive disclosure. The four-phase approach with explicit checkpoints and clear references to language-specific guides demonstrates strong organization. However, the skill could be more concise by trimming explanatory content Claude already knows, and more actionable by including key code snippets inline rather than deferring everything to reference files.
Suggestions
Add inline code examples for common patterns (e.g., basic tool registration in both Python and TypeScript) rather than deferring all code to reference files
Condense the 'Agent-Centric Design Principles' section - these are concepts Claude understands; focus on the specific patterns unique to MCP
Remove explanatory phrases like 'The quality of an MCP server is measured by...' that explain concepts Claude already knows
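The first suggestion above asks for inline snippets such as basic tool registration. A minimal Python sketch of the decorator-based registration pattern that FastMCP-style SDKs expose could look like this (the `ToolRegistry` class here is hypothetical, standing in for the real SDK object, which in FastMCP would be created with `FastMCP("name")`):

```python
from typing import Callable, Dict


class ToolRegistry:
    """Hypothetical stand-in for an MCP server object such as FastMCP."""

    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable] = {}

    def tool(self) -> Callable:
        def decorator(fn: Callable) -> Callable:
            # Register under the function's own name, mirroring how
            # decorator-based MCP SDKs derive tool names and schemas
            # from the decorated function and its type hints.
            self.tools[fn.__name__] = fn
            return fn
        return decorator


mcp = ToolRegistry("weather")


@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for the given city (stub data)."""
    return f"Sunny in {city}"
```

Embedding a snippet of roughly this shape directly in the skill, in both Python and TypeScript, would make the guidance copy-paste ready instead of deferring every pattern to the reference files.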
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary explanation (e.g., 'The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks'). The agent-centric design principles section, while valuable, could be more condensed since Claude understands these concepts. | 2 / 3 |
| Actionability | Provides good structural guidance and references to external files, but lacks inline executable code examples. Most concrete implementation details are deferred to reference files rather than provided directly. The XML evaluation format example is helpful, but most other guidance is procedural rather than copy-paste ready. | 2 / 3 |
| Workflow Clarity | Excellent four-phase workflow with clear sequencing (Research → Implementation → Review → Evaluation). Includes explicit validation checkpoints like 'verify Python syntax', 'run npm run build', and quality checklists. The warning about MCP servers hanging and safe testing approaches demonstrates good feedback loop awareness. | 3 / 3 |
| Progressive Disclosure | Exemplary progressive disclosure with a clear overview structure and well-signaled one-level-deep references. Each phase points to specific reference files (Python Guide, TypeScript Guide, Evaluation Guide) with descriptive links. The Reference Files section provides a clear navigation index. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.