
fastmcp-python-tests

Write and evaluate effective Python tests using pytest. Use when writing tests, reviewing test code, debugging test failures, or improving test coverage. Covers test design, fixtures, parameterization, mocking, and async testing.
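As a hedged illustration of two techniques the description names, parameterization and fixtures, here is a minimal pytest sketch; the fixture name and values are illustrative, not taken from the skill itself:

```python
import pytest

@pytest.fixture
def sample_numbers():
    # Shared test data provided via a fixture (illustrative values)
    return [1, 2, 3]

def test_sum(sample_numbers):
    assert sum(sample_numbers) == 6

# One test function, three generated cases
@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (-1, 1)])
def test_square(value, expected):
    assert value ** 2 == expected
```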


Quality: 82% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security (by Snyk): Passed (No known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that hits all the key criteria. It uses third person voice, provides specific capabilities, includes an explicit 'Use when...' clause with natural trigger terms, and is clearly scoped to Python/pytest testing. The description is concise yet comprehensive, covering both high-level purpose and specific sub-topics.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions and concepts: 'write and evaluate effective Python tests', 'test design, fixtures, parameterization, mocking, and async testing'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both what ('Write and evaluate effective Python tests using pytest') and when ('Use when writing tests, reviewing test code, debugging test failures, or improving test coverage') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'tests', 'pytest', 'test failures', 'test coverage', 'fixtures', 'parameterization', 'mocking', 'async testing'. These cover common variations of how users would describe testing needs. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to Python testing with pytest specifically, which creates a distinct niche. The mention of pytest, fixtures, parameterization, and mocking makes it unlikely to conflict with general coding skills or testing skills for other languages. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with excellent concrete code examples and project-specific constraints that add real value (e.g., the v3 API accessor warning, asyncio_mode auto). Its main weaknesses are moderate verbosity in explanatory text that Claude doesn't need, and the lack of a clear end-to-end workflow for the test-writing process. The content would benefit from trimming philosophical explanations and potentially splitting project-specific rules into a separate reference file.
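The `asyncio_mode` auto constraint the review singles out can be sketched as follows; this assumes pytest-asyncio with `asyncio_mode = "auto"` in the pytest config, under which async test functions are collected without an explicit marker (the coroutine names here are illustrative):

```python
import asyncio

async def fetch_value() -> int:
    # Stand-in for real async I/O (hypothetical, not from the skill)
    await asyncio.sleep(0)
    return 42

# With asyncio_mode = "auto", pytest-asyncio runs this without a marker
async def test_fetch_value():
    assert await fetch_value() == 42
```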

Suggestions

- Trim explanatory sentences Claude already knows (e.g., 'A test that tests multiple things is harder to debug and maintain', 'Test your code with real implementations when possible') to improve conciseness.

- Add an explicit end-to-end workflow section (e.g., 1. Write test → 2. Run with pytest -x → 3. Check failures → 4. Fix → 5. Run full suite) with validation checkpoints to improve workflow clarity.

- Consider splitting project-specific rules (FastMCP transport, inline-snapshot, asyncio_mode) into a separate referenced file to improve progressive disclosure.
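The suggested write → run → fix → full-suite loop could be driven from pytest's own entry point; `pytest.main` is a real API, while the helper names and flag choices below are illustrative, not from the skill:

```python
import pytest

def run_single(test_path: str) -> int:
    # Inner loop: run one test file, stopping at the first failure (-x)
    return pytest.main(["-x", test_path])

def run_full_suite() -> int:
    # Final checkpoint: quiet run of the whole suite once the new test passes
    return pytest.main(["-q"])
```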

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Generally efficient, but includes some guidance Claude already knows (e.g., 'A test that tests multiple things is harder to debug and maintain', the 'Don't mock what you own' philosophy explanations). The 'Bad' examples with comments explaining why they're bad add some verbosity, though they do serve a teaching purpose. | 2 / 3 |
| Actionability | Excellent executable examples throughout: parameterization, fixtures, mocking, error testing, async patterns, and inline snapshots all have copy-paste-ready code. The running-commands section provides specific CLI invocations. The API version constraint with concrete accessor names is highly actionable. | 3 / 3 |
| Workflow Clarity | The checklist at the end provides a good validation checkpoint, and the inline-snapshot commands are clearly sequenced. However, there is no explicit workflow for the overall test-writing process (write → run → validate → fix cycle). For a skill covering test writing and debugging, a clearer end-to-end workflow with feedback loops would strengthen this. | 2 / 3 |
| Progressive Disclosure | The content is well organized with clear section headers and logical grouping, but it is a fairly long monolithic document (~150 lines of content). Some sections, like the project-specific rules (FastMCP transport, inline snapshots), could be split into referenced files, especially since they are project-specific rather than general pytest guidance. | 2 / 3 |
| Total | | 9 / 12 |

Passed
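The mocking patterns credited in the Actionability row above follow the 'patch the collaborator, not your own logic' idea; a minimal sketch using the standard library's `unittest.mock.patch` (all names here are hypothetical, not the skill's actual accessors):

```python
from unittest import mock

def get_api_version() -> str:
    # Hypothetical accessor standing in for a real dependency
    return "v3"

def describe_api() -> str:
    # Code under test: owns its own logic, calls the accessor
    return f"API {get_api_version()}"

def test_describe_api():
    # Patch the collaborator at its use site; the owned logic stays unmocked
    with mock.patch(f"{__name__}.get_api_version", return_value="v2"):
        assert describe_api() == "API v2"
```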

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: Jamie-BitFlight/claude_skills (Reviewed)
