
fastmcp-python-tests

Write and evaluate effective Python tests using pytest. Use when writing tests, reviewing test code, debugging test failures, or improving test coverage. Covers test design, fixtures, parameterization, mocking, and async testing.


Quality

86%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly communicates what the skill does, when to use it, and covers specific pytest-related concepts. It uses third person voice, includes natural trigger terms, and has an explicit 'Use when...' clause with multiple trigger scenarios. The description is concise yet comprehensive.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions and concepts: 'write and evaluate effective Python tests', 'test design, fixtures, parameterization, mocking, and async testing'. These are concrete, actionable capabilities.

3 / 3

Completeness

Clearly answers both what ('Write and evaluate effective Python tests using pytest') and when ('Use when writing tests, reviewing test code, debugging test failures, or improving test coverage') with explicit trigger guidance.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'tests', 'pytest', 'test failures', 'test coverage', 'fixtures', 'mocking', 'parameterization', 'async testing'. These cover a wide range of natural user queries about Python testing.

3 / 3

Distinctiveness / Conflict Risk

Clearly scoped to Python testing with pytest specifically, which is a distinct niche. The mention of pytest, fixtures, parameterization, and mocking makes it unlikely to conflict with general coding or other language testing skills.

3 / 3

Total

12 / 12

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, well-structured testing skill with excellent actionability — nearly every section includes executable code examples with clear good/bad comparisons. The project-specific rules (asyncio_mode auto, in-memory transport, inline-snapshot, result.data API constraint) are high-value additions that Claude wouldn't know otherwise. Minor weaknesses include some unnecessary explanatory prose and a lack of explicit end-to-end workflow sequencing for the test-writing process.
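The "asyncio_mode auto" rule mentioned above refers to pytest-asyncio configuration, which lets plain `async def` tests run without per-test markers. A minimal sketch of what that looks like in practice (the `add` coroutine is invented for illustration; actually running this under pytest would additionally require pytest-asyncio with `asyncio_mode = "auto"` set in the project config):

```python
import asyncio


# Invented stand-in for a tool call over an in-memory transport.
async def add(a: int, b: int) -> int:
    await asyncio.sleep(0)  # simulate awaiting the server
    return a + b


# With pytest-asyncio's asyncio_mode = "auto", this plain async test is
# collected and executed without an @pytest.mark.asyncio decorator.
async def test_add_tool():
    assert await add(2, 3) == 5
```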

Suggestions

Trim explanatory sentences Claude already knows (e.g., 'A test that tests multiple things is harder to debug and maintain') to improve conciseness.

Add a brief end-to-end workflow section showing the sequence: write test → run with pytest → check output → fix failures → re-run, to improve workflow clarity.
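To make the patterns the review highlights concrete — parameterization, mocking with AsyncMock, and error testing — here is a minimal self-contained sketch; all names (`fetch_greeting`, the client, the routes) are hypothetical and not taken from the reviewed skill:

```python
import asyncio
from unittest.mock import AsyncMock

import pytest


# Hypothetical function under test -- not from the reviewed skill.
async def fetch_greeting(client, name: str) -> str:
    if not name:
        raise ValueError("name must be non-empty")
    response = await client.get(f"/greet/{name}")
    return response.upper()


# Parameterization: one test body, several input/expected pairs.
@pytest.mark.parametrize(
    "name, expected",
    [("ada", "HELLO ADA"), ("bob", "HELLO BOB")],
)
async def test_fetch_greeting(name, expected):
    # AsyncMock stands in for the async HTTP client.
    client = AsyncMock()
    client.get.return_value = f"hello {name}"
    assert await fetch_greeting(client, name) == expected
    client.get.assert_awaited_once_with(f"/greet/{name}")


# Error testing: assert that invalid input raises.
def test_fetch_greeting_rejects_empty_name():
    with pytest.raises(ValueError):
        asyncio.run(fetch_greeting(AsyncMock(), ""))
```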

Dimension / Reasoning / Score

Conciseness

Generally efficient but includes some unnecessary guidance Claude already knows (e.g., 'A test that tests multiple things is harder to debug and maintain', 'Don't mock what you own' philosophy explanations). The good/bad code comparisons add value but some commentary could be trimmed.

2 / 3

Actionability

Provides fully executable, copy-paste ready code examples throughout — parameterization, fixtures, mocking with AsyncMock, error testing, inline snapshots, and specific CLI commands. The API version constraint with concrete accessor patterns (result.data vs result[0].text) is highly actionable.

3 / 3

Workflow Clarity

The checklist at the end provides a good summary, and the inline-snapshot commands are clearly sequenced. However, there's no explicit workflow for writing a test from scratch (discover → write → run → validate), and the running tests section is just a list of commands without guidance on when to use each. For a skill covering test writing, a clearer end-to-end sequence would help.

2 / 3

Progressive Disclosure

For a standalone skill with no bundle files, the content is well-organized into logical sections (Core Principles, Test Structure, Project-Specific Rules, Fixtures, Mocking, Error Testing, Running Tests, Checklist). Each section is appropriately sized and the structure supports easy scanning and discovery.

3 / 3

Total

10 / 12

Passed
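As an illustration of the "one behavior per test" guidance the Conciseness row alludes to, a small hedged sketch (the `User` class is invented for this example and is not from the reviewed skill):

```python
from dataclasses import dataclass


# Invented example class; not part of the reviewed skill.
@dataclass
class User:
    name: str
    is_active: bool = True

    def greeting(self) -> str:
        return f"Hello, {self.name}!"


# Focused tests: each asserts a single behavior, so a failure
# points directly at the behavior that broke.
def test_greeting_uses_name():
    assert User("ada").greeting() == "Hello, ada!"


def test_new_user_is_active_by_default():
    assert User("ada").is_active
```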

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository
Jamie-BitFlight/claude_skills
Reviewed

