
mcp-builder

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

89

Quality: 87% (Does it follow best practices?)

Impact: 88%, 1.60x (average score across 3 eval scenarios)

Security by Snyk: Advisory. Suggest reviewing before use.


Evaluation results

Open Library Research Assistant MCP Server
Python MCP server structure and quality
With context: 91% · Without context: 38%

| Criteria | Without context | With context |
| --- | --- | --- |
| Pydantic v2 model_config | 0% | 100% |
| Async/await I/O | 0% | 100% |
| Module-level constants | 100% | 100% |
| Shared utility functions | 100% | 100% |
| Comprehensive tool docstrings | 75% | 100% |
| Tool annotations present | 0% | 100% |
| Response format options | 25% | 0% |
| Character limit / truncation | 37% | 100% |
| Human-readable identifiers | 100% | 100% |
| Input validation constraints | 12% | 100% |
| Syntax check passes | 100% | 100% |
| Type hints throughout | 100% | 87% |

Without context: $0.4619 · 1m 52s · 25 turns · 30 in / 6,691 out tokens

With context: $0.6541 · 2m 37s · 24 turns · 209 in / 9,969 out tokens
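Several criteria above ("Character limit / truncation", "Module-level constants") concern keeping tool responses bounded so they don't flood the model's context. A minimal stdlib sketch of one approach; the 25,000-character limit and the helper's name are illustrative choices, not values from the evaluated server:

```python
# Sketch of a response-truncation helper for MCP tool output.
# CHAR_LIMIT is an arbitrary illustrative value, defined at module
# level per the "module-level constants" criterion.
CHAR_LIMIT = 25_000

def truncate_response(text: str, limit: int = CHAR_LIMIT) -> str:
    """Cap tool output and tell the model how to recover the rest."""
    if len(text) <= limit:
        return text
    omitted = len(text) - limit
    return (
        text[:limit]
        + f"\n\n[Truncated: {omitted} characters omitted. "
        "Narrow the query to see full results.]"
    )
```

Appending an explicit "how to get the rest" note, rather than silently cutting the string, gives the model an actionable next step, which is the same spirit as the "actionable error messages" criterion in the third scenario.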

Country Information MCP Server (TypeScript)
TypeScript MCP server quality and build
With context: 86% · Without context: 31%

| Criteria | Without context | With context |
| --- | --- | --- |
| Zod .strict() schemas | 0% | 0% |
| server.registerTool usage | 0% | 100% |
| TypeScript strict mode | 100% | 100% |
| No any types | 100% | 100% |
| Explicit Promise<T> returns | 22% | 55% |
| Build passes | 100% | 100% |
| Tool annotations | 0% | 100% |
| Comprehensive tool descriptions | 100% | 100% |
| Shared utility functions | 100% | 100% |
| Input validation constraints | 100% | 100% |
| Response format option | 0% | 100% |

Without context: $0.3561 · 1m 33s · 19 turns · 23 in / 6,122 out tokens

With context: $0.7161 · 2m 43s · 31 turns · 286 in / 9,670 out tokens
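The "Response format option" criterion (scored 0% without context here) refers to tools that let the caller choose between human-readable and machine-readable output. A minimal sketch of the idea, shown in Python to match the other examples even though this scenario is TypeScript; the function and field names are illustrative, not taken from the evaluated server:

```python
import json
from typing import Literal

def format_country(
    record: dict,
    response_format: Literal["markdown", "json"] = "markdown",
) -> str:
    """Render the same record as markdown (for reading) or JSON (for parsing)."""
    if response_format == "json":
        return json.dumps(record, indent=2)
    return "\n".join(f"- **{key}**: {value}" for key, value in record.items())
```

Exposing `response_format` as an enum-constrained tool parameter lets the model pick compact JSON when it intends to post-process the data, and markdown when it will quote the result to a user.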

Improve an MCP Server and Write Its Evaluation Suite
Workflow tool design and evaluation creation
With context: 88% · Without context: 31%

| Criteria | Without context | With context |
| --- | --- | --- |
| Workflow consolidation | 100% | 100% |
| Task-based tool names | 100% | 100% |
| Consistent tool name prefixes | 50% | 75% |
| Actionable error messages | 100% | 100% |
| High-signal responses | 62% | 87% |
| Evaluation XML structure | 22% | 100% |
| 10 evaluation questions | 11% | 100% |
| Complex multi-tool questions | 55% | 22% |
| Verifiable answers | 0% | 75% |
| Async I/O in refactored server | 0% | 100% |
| Design notes present | 100% | 100% |
| Syntax check passes | 100% | 100% |

Without context: $0.4741 · 2m 35s · 19 turns · 24 in / 9,440 out tokens

With context: $1.0855 · 5m 5s · 30 turns · 286 in / 18,699 out tokens
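The "Evaluation XML structure", "10 evaluation questions", and "Verifiable answers" criteria concern serializing question/answer pairs into an eval file the harness can check. A hedged stdlib sketch of the shape; the element names (`evaluation`, `qa_pair`, `question`, `answer`) are assumptions for illustration, not the skill's confirmed schema:

```python
import xml.etree.ElementTree as ET

def build_eval_suite(qa_pairs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs into a simple XML evaluation file.

    Element names here are illustrative assumptions, not a confirmed schema.
    Answers should be short, objectively verifiable strings so a harness can
    compare them exactly, per the "verifiable answers" criterion.
    """
    root = ET.Element("evaluation")
    for question, answer in qa_pairs:
        qa = ET.SubElement(root, "qa_pair")
        ET.SubElement(qa, "question").text = question
        ET.SubElement(qa, "answer").text = answer
    return ET.tostring(root, encoding="unicode")
```

Building the file with an XML library rather than string concatenation guarantees well-formed output even when questions contain characters like `<` or `&`.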

Repository: majiayu000/claude-skill-registry
Evaluated with agent: Claude Code
Model: Claude Sonnet 4.6
