tdg-personal/click-path-audit

"Trace every user-facing button/touchpoint through its full state change sequence to find bugs where functions individually work but cancel each other out, produce wrong final state, or leave the UI in an inconsistent state. Use when: systematic debugging found no bugs but users report broken buttons, or after any major refactor touching shared state stores."

Score: 84

Quality: 84% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Passed (No known issues)


Quality

Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines a specific debugging methodology focused on UI state interaction bugs. It excels at completeness with an explicit 'Use when' clause and has a distinctive niche. The main weakness is that trigger terms could be broader to capture more natural user phrasings for this type of problem.

Suggestions

Add more natural trigger terms users might say, such as 'button not working', 'click does nothing', 'state management bug', 'UI glitch', or 'race condition' to improve discoverability.

Dimension scores:

Specificity (3 / 3)
Lists multiple specific, concrete actions: tracing button/touchpoint state-change sequences and finding bugs where functions cancel each other out, produce the wrong final state, or leave the UI inconsistent. These are concrete, well-defined debugging activities.

Completeness (3 / 3)
Clearly answers both what (trace user-facing buttons through state-change sequences to find interaction bugs) and when (an explicit 'Use when' clause specifying two triggers: systematic debugging found no bugs but users report broken buttons, or after a major refactor touching shared state stores).

Trigger Term Quality (2 / 3)
Includes some natural terms like 'broken buttons', 'shared state stores', 'major refactor', 'UI inconsistent state', and 'debugging'. However, it misses common variations users might say, like 'state management bug', 'click does nothing', 'button not working', 'race condition', or 'UI glitch'.

Distinctiveness / Conflict Risk (3 / 3)
Highly distinctive niche: it specifically targets interaction-level state bugs where individual functions work but the combined behavior fails. This is clearly distinguishable from general debugging, UI testing, or state management skills because of its focus on cross-cutting state interaction bugs.

Total: 11 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill that teaches a genuinely novel debugging methodology Claude wouldn't know by default. Its main weakness is length — the motivating example is repeated multiple times and some sections (like the problem statement) explain things Claude already understands. The workflow is exceptionally clear with explicit sequencing, output formats, and validation built into each step.

Suggestions

Remove or drastically shorten the 'Problem This Solves' section — Claude understands what traditional debugging covers; jump straight to the methodology.

Extract the bug patterns catalog (Patterns 1-6) into a separate PATTERNS.md reference file to reduce the main skill's token footprint.

Consolidate the motivating 'New Email' example to appear only once (in the Example section at the end) rather than being referenced three separate times.

Dimension scores:

Conciseness (2 / 3)
The skill is mostly efficient and the content is genuinely useful, but it's verbose in places: the 'Problem This Solves' section over-explains what traditional debugging does (Claude knows this), and the motivating example is repeated three times (intro, example section, and referenced throughout). The bug pattern examples are valuable but could be tighter.

Actionability (3 / 3)
Highly actionable with concrete, executable patterns. The step-by-step trace methodology is specific (identify handler → trace calls → check state reads/writes/resets → verify final state). Code examples are real and illustrative, bug patterns include concrete code snippets, and the output format is clearly specified with templates.
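The trace step described in that methodology can be sketched as a small harness. This is an illustrative assumption about how such an audit might be mechanized, not the skill's actual implementation; all names and shapes are hypothetical:

```typescript
// Record every state write a handler makes, then compare the final
// state to the expected state for that touchpoint.
type State = Record<string, unknown>;
type Setter = (patch: State) => void;

function auditHandler(
  initial: State,
  handler: (set: Setter) => void,
  expected: State,
): { writes: State[]; finalState: State; verdict: "pass" | "fail" } {
  let current = { ...initial };
  const writes: State[] = [];
  handler((patch) => {
    writes.push(patch);                 // trace each write in sequence
    current = { ...current, ...patch }; // apply it as the store would
  });
  const ok = Object.keys(expected).every((k) => current[k] === expected[k]);
  return { writes, finalState: current, verdict: ok ? "pass" : "fail" };
}

// A handler that sets a flag and then resets it: the trace exposes the
// canceling write even though each individual call "worked".
const result = auditHandler(
  { open: false },
  (set) => {
    set({ open: true });
    set({ open: false }); // stray reset
  },
  { open: true },
);
console.log(result.verdict, result.writes.length); // "fail" 2
```

The write log is what makes the failure diagnosable: the verdict says the final state is wrong, and the trace shows which write cancelled which.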

Workflow Clarity (3 / 3)
Excellent multi-step workflow: Step 1 (map stores) → Step 2 (audit touchpoints) → Step 3 (report). Each step has an explicit output format. The dependency is clear (Step 1 must complete before the others). Validation is built into the process: each touchpoint check includes an expected-vs-actual comparison and a verdict. The agent parallelization guidance includes ordering constraints.
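A per-touchpoint report entry of the kind this workflow produces might look like the following. The field names are assumptions for illustration, not the skill's actual template:

```typescript
// One report entry per touchpoint: an expected-vs-actual comparison
// plus a verdict, linked back to the Step 1 store map.
type TouchpointAudit = {
  touchpoint: string;              // e.g. a button label
  storesTouched: string[];         // from the Step 1 store map
  expected: Record<string, unknown>;
  actual: Record<string, unknown>;
  verdict: "pass" | "fail";
};

const entry: TouchpointAudit = {
  touchpoint: "Close modal button",
  storesTouched: ["uiStore"],
  expected: { modalOpen: false, panelOpen: true },
  actual: { modalOpen: false, panelOpen: false }, // panel flag was reset
  verdict: "fail",
};

console.log(entry.verdict); // "fail"
```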

Progressive Disclosure (2 / 3)
The content is well structured with clear headers and sections, but it's a long monolithic document (~200 lines) that could benefit from splitting. The bug patterns catalog, the agent split recommendations, and the store mapping methodology could each be separate referenced files. References to other skills are present, but the skill itself doesn't offload any detail.

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed

Validation for skill structure

frontmatter_unknown_keys (Warning)
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
