
root-cause-tracing

Use when errors occur deep in execution and you need to trace back to find the original trigger - systematically traces bugs backward through call stack, adding instrumentation when needed, to identify source of invalid data or incorrect behavior

78

Quality

72%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/kaizen/skills/root-cause-tracing/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description does well at answering both 'what' and 'when' with an explicit trigger clause, and conveys a clear debugging methodology. However, it could be more specific about concrete actions and include more natural trigger terms users would use when encountering this type of problem. The description also uses second person ('you need to'), which slightly detracts from the expected third-person voice.

Suggestions

Add more natural trigger terms users would say, such as 'stack trace', 'root cause analysis', 'debugging', 'exception', 'unexpected value', or 'crash'.

List more specific concrete actions beyond 'traces backward' and 'adding instrumentation' - e.g., 'inspects variable state at each frame, adds logging/print statements, identifies where data first becomes invalid, examines function inputs and outputs'.

Use third-person voice instead of second person - replace 'you need to trace back' with something like 'the developer needs to trace back' or rephrase to 'Systematically traces bugs backward...'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (debugging/tracing) and some actions ('traces bugs backward through call stack', 'adding instrumentation'), but the actions are somewhat general rather than listing multiple distinct concrete operations like specific debugging techniques or tools. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (systematically traces bugs backward through call stack, adding instrumentation to identify source of invalid data or incorrect behavior) and 'when' (when errors occur deep in execution and you need to trace back to find the original trigger), with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'errors', 'call stack', 'bugs', 'invalid data', 'incorrect behavior', and 'trace back', but misses common user-facing variations like 'stack trace', 'root cause', 'debugging', 'exception', 'crash', or 'breakpoint'. | 2 / 3 |
| Distinctiveness Conflict Risk | The focus on backward tracing through call stacks is somewhat distinctive, but it could overlap with general debugging skills, error handling skills, or logging/instrumentation skills. The niche of 'reverse debugging' is implied but not sharply delineated. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill that teaches a clear debugging methodology with concrete examples and a well-sequenced workflow. Its main weaknesses are the unrenderable dot/graphviz diagrams that consume tokens without clear benefit, a referenced bundle file (find-polluter.sh) that is missing, and some padding in the closing sections. The core tracing process and real example are excellent and highly practical.
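The skill's own instrumentation examples are not reproduced on this page. As a rough illustration of the kind of TypeScript debug logging the review is referring to (the function, values, and `[trace]` prefix here are hypothetical, not taken from the skill):

```typescript
// Hypothetical sketch: instrument a suspect function so both its inputs and
// its output are visible, making the first frame with invalid data obvious.
function applyDiscount(price: number, rate: number): number {
  console.error(`[trace] applyDiscount(price=${price}, rate=${rate})`);
  const result = price * (1 - rate);
  console.error(`[trace] applyDiscount -> ${result}`);
  return result;
}

// If `result` is NaN but `price` looks fine, the bad value entered via
// `rate`, so the trace moves one frame up to whatever computed `rate`.
const total = applyDiscount(100, Number("not-a-rate"));
```

Logging inputs and outputs at each suspect frame is what lets the backward trace decide, at every step, whether corruption originated here or further up the call chain.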

Suggestions

Replace dot/graphviz diagrams with simple markdown lists or ASCII art — dot syntax is not renderable in this context and wastes tokens

Include the referenced find-polluter.sh script in the bundle, or inline its key logic, since it's referenced but missing
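Since find-polluter.sh is absent from the bundle, its actual contents are unknown; the key logic of such a bisection script might look like the following sketch (all names and the runner callback are assumptions for illustration):

```typescript
// Hypothetical sketch: bisect an ordered test list to find the single test
// that pollutes shared state and makes a later "victim" test fail.
type FailsWith = (subset: string[]) => boolean;

function findPolluter(tests: string[], failsWith: FailsWith): string | null {
  let candidates = tests;
  if (!failsWith(candidates)) return null; // failure doesn't reproduce at all
  while (candidates.length > 1) {
    const half = Math.floor(candidates.length / 2);
    const first = candidates.slice(0, half);
    // Keep whichever half still reproduces the failure.
    candidates = failsWith(first) ? first : candidates.slice(half);
  }
  return candidates[0];
}

// Demo with a fake runner: pretend "test-c" is the polluter.
const polluter = findPolluter(
  ["test-a", "test-b", "test-c", "test-d"],
  (subset) => subset.includes("test-c"),
);
```

A real runner would execute the victim test after each candidate subset instead of the fake `includes` check; the halving loop is the same either way, and assumes a single polluter rather than an interaction between several tests.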

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient but includes some unnecessary elements: the dot/graphviz diagrams add visual noise without being renderable in most contexts, and the 'Real-World Impact' section at the end is padding. The real example walkthrough is valuable but could be tighter. | 2 / 3 |
| Actionability | Provides concrete, executable code examples for instrumentation (TypeScript debug logging, bash grep commands, bisection script usage), specific tracing steps with real values, and a complete worked example showing the full trace chain from symptom to fix with defense-in-depth layers. | 3 / 3 |
| Workflow Clarity | The 5-step tracing process is clearly sequenced (Observe → Find Immediate Cause → Ask What Called This → Keep Tracing Up → Find Original Trigger), with explicit validation through the defense-in-depth pattern. The feedback loop of 'Is this the source? → no → keep tracing' is well articulated. | 3 / 3 |
| Progressive Disclosure | References the bisection script '@find-polluter.sh' which is not provided in the bundle, making that reference a dead end. The content is reasonably structured with clear sections, but the dot diagrams and inline example could benefit from being split out. For a skill of this length (~120 lines), some content could be externalized. | 2 / 3 |
| Total | | 10 / 12 |

Passed
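The 5-step loop the review praises (Observe → Find Immediate Cause → Ask What Called This → Keep Tracing Up → Find Original Trigger) can be sketched as a walk up a chain of callers; the `Frame` model below is an assumed simplification for illustration, not the skill's own code:

```typescript
// Hypothetical sketch of the backward-tracing loop: starting from the frame
// where the symptom appears, keep moving toward callers while the current
// frame *received* bad inputs. The first frame whose inputs were still valid
// is where the data went bad - the original trigger.
interface Frame {
  name: string;
  inputsValid: boolean;  // did this frame receive good data?
  caller: Frame | null;  // one step up the call chain
}

function findOriginalTrigger(symptomFrame: Frame): Frame {
  let frame = symptomFrame;
  while (!frame.inputsValid && frame.caller !== null) {
    frame = frame.caller; // corruption came from above: keep tracing up
  }
  return frame;
}

// Demo chain: loadConfig received valid inputs but produced a bad value
// that flowed down through parsePort into connect, where the crash surfaced.
const main: Frame = { name: "main", inputsValid: true, caller: null };
const loadConfig: Frame = { name: "loadConfig", inputsValid: true, caller: main };
const parsePort: Frame = { name: "parsePort", inputsValid: false, caller: loadConfig };
const connect: Frame = { name: "connect", inputsValid: false, caller: parsePort };

const trigger = findOriginalTrigger(connect);
```

In practice `inputsValid` is established per frame with the kind of instrumentation the Actionability row describes, rather than being known in advance; the loop structure is the same.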

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: NeoLabHQ/context-engineering-kit (Reviewed)
