
fix-tests

Systematically fix all failing tests after business logic changes or refactoring


Quality: 56% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/tdd/skills/fix-tests/SKILL.md

Quality

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description communicates a clear purpose—fixing failing tests after code changes—but lacks the specificity and explicit trigger guidance needed for reliable skill selection. It would benefit from listing concrete actions and adding an explicit 'Use when...' clause with natural user trigger terms.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when tests are failing after refactoring, business logic changes, or code modifications and the user needs to systematically update or fix the test suite.'

Include more natural trigger terms users would say, such as 'broken tests', 'test failures', 'tests are red', 'make tests pass', 'fix test suite'.

List specific concrete actions the skill performs, e.g., 'Analyzes test output, identifies root causes of failures, updates assertions, refactors test fixtures, and verifies all tests pass.'
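Taken together, these suggestions point toward a frontmatter description along the following lines. This is a hypothetical sketch rather than the skill's actual metadata; only the `name` and the first sentence of the description come from the page above.

```markdown
---
name: fix-tests
description: >
  Systematically fix all failing tests after business logic changes or
  refactoring. Analyzes test output, identifies root causes of failures,
  updates assertions, refactors test fixtures, and verifies all tests pass.
  Use when tests are failing, broken, or red after refactoring or code
  modifications and the user wants to make the test suite pass again.
---
```

The explicit 'Use when...' sentence, the concrete verbs, and the added trigger terms address the three suggestions directly.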

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (fixing failing tests) and a general action (systematically fix), but doesn't list specific concrete actions like 'analyze test output', 'update assertions', 'refactor test fixtures', etc. | 2 / 3 |
| Completeness | Describes what it does (fix failing tests) and implies when (after business logic changes or refactoring), but lacks an explicit 'Use when...' clause with clear trigger guidance, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'failing tests', 'refactoring', and 'business logic changes', but misses common user phrases like 'broken tests', 'test failures', 'tests are red', 'make tests pass', or 'test suite'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to test-fixing after code changes, but could overlap with general testing skills, debugging skills, or refactoring skills since the scope isn't tightly bounded. | 2 / 3 |
| Total | | 8 / 12 (Passed) |

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured orchestration workflow for fixing failing tests with clear sequencing and iteration loops, which is its strongest aspect. However, it suffers from moderate verbosity (explaining concepts Claude already understands), vague references to external skills without concrete paths, and a lack of specific executable commands, relying on placeholders instead. The agent template is a useful addition but would benefit from more concrete examples.

Suggestions

Remove the 'Context' section entirely—Claude already understands why tests fail after refactoring; this saves tokens without losing clarity.

Provide concrete example commands for common test frameworks (e.g., `npx jest --testPathPattern={FILE}`, `pytest {FILE}`) instead of only using {TEST_COMMAND} placeholders; a sketch follows these suggestions.

Specify exact file paths for referenced skills (sadd skill, TDD skill) or remove the 'if available' hedging and instead instruct Claude to search for them in a specific location.

Trim the 'Success Criteria' section which largely restates constraints already mentioned in the workflow steps.
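As a rough illustration of the command and skill-path suggestions above, a workflow step could pair the project-level placeholder with framework defaults and reference a skill by an explicit path. The step numbering, file paths, and skill location below are illustrative assumptions, not excerpts from the actual SKILL.md.

```markdown
4. Run only the affected tests first. Use the project's configured test
   command if one exists; otherwise fall back to a framework default:

   - JavaScript/TypeScript (Jest): `npx jest --testPathPattern={FILE}`
   - Python (pytest): `pytest {FILE}`

5. If the failures point to deeper test-design problems, follow the TDD
   skill at `./plugins/tdd/skills/tdd/SKILL.md` instead of the vague
   'if available' reference.
```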

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill has some unnecessary context (explaining why tests fail after business logic changes is something Claude already knows) and the 'Context' section is redundant. However, the workflow steps and agent template are reasonably efficient. The success criteria section restates what's already implied. | 2 / 3 |
| Actionability | The workflow provides a structured process and includes an agent instructions template, but lacks concrete executable commands (test commands are placeholders like {TEST_COMMAND}), doesn't specify exact tool invocations for launching agents, and references the 'sadd skill' and 'TDD skill' without clear paths. The guidance is more procedural than copy-paste ready. | 2 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced with numbered steps, includes explicit validation checkpoints (step 7: verify all fixes by running the full test suite), and has a feedback loop (step 8: iterate if needed, return to step 5). The preparation → analysis → fixing → verification flow is logical and well-structured. | 3 / 3 |
| Progressive Disclosure | The skill references external resources like the 'sadd skill', 'TDD skill', and '@README.md', but these references are vague ('if available') and lack clear file paths. There are no bundle files to support the references. The content is reasonably organized with clear sections but could benefit from explicit links to supporting materials. | 2 / 3 |
| Total | | 9 / 12 (Passed) |

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
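The single warning concerns unknown frontmatter keys. Assuming, purely for illustration, that the offending key is something like `category`, the fix the validator suggests would look roughly like this:

```markdown
---
name: fix-tests
description: Systematically fix all failing tests after business logic changes or refactoring
# 'category' is a hypothetical unknown top-level key that would trigger
# frontmatter_unknown_keys; nest it under metadata or remove it.
metadata:
  category: testing
---
```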

Repository: NeoLabHQ/context-engineering-kit (Reviewed)

