
testing-patterns

Cross-language testing strategies and patterns. Triggers on: test pyramid, unit test, integration test, e2e test, TDD, BDD, test coverage, mocking strategy, test doubles, test isolation.

Quality: 61% (Does it follow best practices?)

Impact: 92% (1.17x average score across 3 eval scenarios)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./data/skills-md/0xdarkmatter/claude-mods/testing-patterns/SKILL.md

Quality

Discovery: 64%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger term coverage with a well-curated list of testing-related keywords users would naturally use. However, it falls short on specificity of capabilities—it describes a topic area rather than concrete actions the skill performs. The 'what' portion needs more detail about specific outputs or guidance the skill provides.

Suggestions:

- Replace the vague 'strategies and patterns' with concrete actions like 'Guides test architecture decisions, recommends mocking approaches, designs test suites across languages, and advises on coverage targets'.
- Reframe 'Triggers on:' as a 'Use when...' clause that describes user scenarios, e.g., 'Use when the user asks about structuring tests, choosing between testing approaches, or improving test coverage across codebases'.
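Taken together, those two suggestions imply a rewrite along these lines; a hypothetical sketch of the SKILL.md frontmatter (the `name`/`description` field names follow the common skill-file convention, and the exact wording is illustrative, not the maintainer's):

```yaml
---
name: testing-patterns
description: >
  Guides test architecture decisions, recommends mocking approaches,
  designs test suites across languages, and advises on coverage targets.
  Use when the user asks about structuring tests, choosing between
  testing approaches (unit, integration, e2e, TDD, BDD), or improving
  test coverage, test isolation, or mocking strategy across codebases.
---
```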

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain ('cross-language testing strategies and patterns') and implies actions like advising on test strategies, but it doesn't list concrete actions such as 'generate unit tests', 'configure test runners', or 'create mock objects'. It stays at the level of topic coverage rather than specific capabilities. | 2 / 3 |
| Completeness | The 'what' is partially addressed ('cross-language testing strategies and patterns') though vaguely, and the 'when' is addressed via 'Triggers on:' with explicit trigger terms. However, the 'what' lacks specificity about concrete actions the skill performs, and the trigger guidance uses 'Triggers on:' rather than a clear 'Use when...' clause explaining the scenario. This is close to a 3 but the weak 'what' portion holds it back. | 2 / 3 |
| Trigger Term Quality | The description includes a strong set of natural trigger terms that users would actually say: 'test pyramid', 'unit test', 'integration test', 'e2e test', 'TDD', 'BDD', 'test coverage', 'mocking strategy', 'test doubles', 'test isolation'. These are well-chosen and cover common variations of testing-related queries. | 3 / 3 |
| Distinctiveness / Conflict Risk | While the testing domain is reasonably specific, 'cross-language testing strategies and patterns' is broad enough to potentially overlap with language-specific testing skills or general software development skills. Terms like 'unit test' and 'test coverage' could easily trigger alongside other coding or quality assurance skills. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a textbook summary of testing concepts than actionable guidance for Claude. Most of the content (test pyramid, test doubles definitions, AAA pattern) is knowledge Claude already possesses, making the token investment inefficient. The progressive disclosure structure is good, but the core content would benefit from focusing on project-specific conventions and decision-making guidance rather than restating universal testing knowledge.

Suggestions:

- Remove or drastically condense sections covering concepts Claude already knows (test pyramid, AAA pattern, test doubles definitions) and focus on project-specific conventions or non-obvious decision criteria.
- Add executable code examples for more sections—particularly database isolation and external service isolation—instead of pseudocode descriptions.
- Include a brief inline workflow for the TDD cycle rather than only deferring to a reference file, so the skill provides a clear step-by-step process for writing tests.
- Replace the generic 'What to Test' lists with concrete examples tied to common patterns Claude would encounter in this project.
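One of the suggestions above asks for executable database-isolation examples rather than pseudocode. A minimal sketch of what such an example could look like, using only the standard library's sqlite3; the `users` schema and test names are hypothetical, not taken from the skill:

```python
import sqlite3
from contextlib import contextmanager

# Hypothetical schema for illustration only.
SCHEMA = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"

@contextmanager
def isolated_db():
    """Hand each test its own in-memory database, discarded afterwards."""
    conn = sqlite3.connect(":memory:")  # nothing is shared between tests
    conn.execute(SCHEMA)
    try:
        yield conn
    finally:
        conn.close()  # state cannot leak into the next test

def test_insert_adds_a_row():
    with isolated_db() as db:
        db.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
        assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

def test_each_test_starts_empty():
    # Even when run after the insert test, this sees a fresh, empty database.
    with isolated_db() as db:
        assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```

The same shape transfers directly to a pytest fixture by replacing the context manager with a `@pytest.fixture` that yields the connection.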

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is reasonably organized but includes information Claude already knows well—the test pyramid concept, definitions of test types, AAA pattern, and test doubles are all standard knowledge. The ASCII art pyramid and verbose type descriptions add tokens without adding novel value. | 2 / 3 |
| Actionability | The skill provides some concrete guidance (naming conventions, AAA example, test doubles table) but most content is descriptive rather than executable. The code examples are limited to one Python snippet; database isolation and service isolation sections use pseudocode/descriptions rather than executable commands. The checklist is useful but generic. | 2 / 3 |
| Workflow Clarity | There's no clear multi-step workflow for how to approach testing a new feature or codebase. The content presents categories and options but doesn't sequence them into a decision-making or execution flow. The TDD workflow is deferred to a reference file rather than summarized inline. | 2 / 3 |
| Progressive Disclosure | The skill provides a clear overview with well-signaled references to deeper content (tdd-workflow.md, mocking-strategies.md, test-data-patterns.md, ci-testing.md) and a script reference. References are one level deep and clearly labeled by topic. | 3 / 3 |
| Total | | 9 / 12 |

Passed
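The Actionability critique also applies to external service isolation, which the skill describes only in prose. A hedged sketch of an executable alternative using the standard library's unittest.mock; `PricingClient`, `fetch_price`, and `price_with_tax` are hypothetical names, not taken from the skill:

```python
from unittest.mock import MagicMock

class PricingClient:
    """Hypothetical boundary object that would call a real HTTP API."""
    def fetch_price(self, symbol: str) -> float:
        raise RuntimeError("would hit the network in production")

def price_with_tax(client: PricingClient, symbol: str, rate: float = 0.25) -> float:
    return client.fetch_price(symbol) * (1 + rate)

def test_price_with_tax_never_touches_the_network():
    # spec= keeps the stub honest: only real PricingClient methods exist on it.
    stub = MagicMock(spec=PricingClient)
    stub.fetch_price.return_value = 100.0  # canned response, no network
    assert price_with_tax(stub, "ACME") == 125.0
    stub.fetch_price.assert_called_once_with("ACME")
```

Injecting the client as a parameter (rather than patching a module global) is what makes the substitution trivial here; that design note is the kind of non-obvious, actionable guidance the review says the skill lacks.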

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: NeverSight/skills_feed (Reviewed)
