
analyze-error

Error Stack Trace Analysis for dd-trace-dotnet

Quality: 36% (Does it follow best practices?)

Impact: 94%, 1.56x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/analyze-error/SKILL.md

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads more like a title than a functional skill description. It identifies the domain (stack trace analysis for dd-trace-dotnet) but fails to enumerate specific capabilities or provide any guidance on when Claude should select this skill. The lack of concrete actions and explicit trigger conditions makes it insufficient for reliable skill selection among a large pool of skills.

Suggestions

Add specific concrete actions such as 'Parses .NET exception stack traces, identifies root cause errors, maps failures to dd-trace-dotnet instrumentation issues, and suggests fixes.'

Add an explicit 'Use when...' clause, e.g., 'Use when the user shares a .NET stack trace, exception, or error log related to Datadog tracing (dd-trace-dotnet), or asks about debugging Datadog .NET APM issues.'

Include additional natural trigger terms like 'exception', 'Datadog', '.NET tracing', 'APM', 'crash', 'debugging', and 'traceback' to improve keyword coverage.
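Taken together, the suggestions above might yield frontmatter along these lines (a hypothetical sketch; the skill's actual frontmatter is not reproduced in this review):

```yaml
# Hypothetical SKILL.md frontmatter sketch; field values are invented
# to illustrate the suggested description pattern.
name: analyze-error
description: >
  Parses .NET exception stack traces, identifies root cause errors,
  maps failures to dd-trace-dotnet instrumentation issues, and
  suggests fixes. Use when the user shares a .NET stack trace,
  exception, crash, or error log related to Datadog tracing
  (dd-trace-dotnet), or asks about debugging Datadog .NET APM issues.
```

The pattern is: concrete verbs first (what the skill does), then an explicit "Use when..." clause carrying the natural trigger terms a user would actually type.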


Specificity: 1 / 3

The description names a domain ('Error Stack Trace Analysis') and a specific project ('dd-trace-dotnet'), but does not list any concrete actions such as 'parses stack traces', 'identifies root causes', or 'suggests fixes'. It is essentially a title, not a description of capabilities.

Completeness: 1 / 3

The description only weakly addresses 'what' (analysis of error stack traces) and completely lacks any 'when' clause or explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also too vague to merit even a 2.

Trigger Term Quality: 2 / 3

Contains some useful trigger terms like 'stack trace', 'error', and 'dd-trace-dotnet' that users might naturally mention. However, it misses common variations like 'exception', 'crash', 'debugging', 'Datadog', '.NET tracing', or 'traceback'.

Distinctiveness / Conflict Risk: 2 / 3

The mention of 'dd-trace-dotnet' provides some specificity to a particular project, which helps distinguish it. However, 'Error Stack Trace Analysis' is broad enough to potentially overlap with general debugging or error analysis skills.

Total: 6 / 12 (Passed)

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid structural framework for exception analysis in dd-trace-dotnet with clear output formatting requirements and useful constraints. However, it lacks concrete examples (sample stack traces, example analyses, sample git diffs) that would make it truly actionable, and the workflow could benefit from explicit validation steps and feedback loops for ambiguous cases.

Suggestions

Add a concrete example showing a sample redacted stack trace input and the expected full analysis output (all 5 sections), so Claude has a clear reference for format and depth.

Include a validation step in the workflow, e.g., 'Before suggesting a fix, verify the identified code path still exists in the referenced version' to handle the noted constraint about outdated stack traces.

Provide a sample git diff in the output format section to make the 'Suggested Fix' expectations concrete and copy-paste ready.

Consider adding a bundle file with duck typing reference material or CODEOWNER team mappings, since these are referenced but not provided.
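To make the third suggestion concrete, a 'Suggested Fix' section could present a small unified diff like the following (an invented example: the file path, line numbers, and code are all hypothetical, not taken from the dd-trace-dotnet codebase):

```diff
--- a/tracer/src/Datadog.Trace/SpanFactory.cs
+++ b/tracer/src/Datadog.Trace/SpanFactory.cs
@@ -42,5 +42,8 @@
     public Span GetSpan(string key)
     {
-        var span = _spans[key];
+        if (!_spans.TryGetValue(key, out var span))
+        {
+            return null;
+        }
         return span;
     }
```

Showing one such diff in the skill's output-format section gives Claude a copy-paste-ready template for depth and formatting.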


Conciseness: 2 / 3

The skill is reasonably concise but includes some unnecessary elaboration (e.g., explaining what redacted exceptions are, restating that exceptions are 'gracefully caught'). Some constraints could be tightened, but overall it doesn't over-explain concepts Claude already knows.

Actionability: 2 / 3

The skill provides a structured workflow and output format, but lacks concrete examples of actual stack traces, example analyses, or sample git diffs. There are no executable code snippets or copy-paste-ready templates; guidance is descriptive rather than demonstrative.

Workflow Clarity: 2 / 3

The workflow has a clear sequence (parse input → consider duck typing → analyze → output), but lacks explicit validation checkpoints or feedback loops. For instance, there is no step to verify the root cause analysis against the codebase before suggesting a fix, and no guidance on what to do if the stack trace is ambiguous or incomplete.

Progressive Disclosure: 2 / 3

The content is reasonably organized with clear sections, but everything is inline in a single file with no references to supporting documents. The duck typing consideration references a source path but doesn't link to any bundle documentation. For a skill of this complexity (multi-step analysis with domain-specific knowledge), external references for duck typing internals, CODEOWNER mappings, or example analyses would improve navigation.

Total: 8 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure


frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing them or moving them under metadata.
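The fix usually looks like the following (a sketch with an invented key name; the actual offending key in this SKILL.md is not named by the validator output shown here):

```yaml
# Before: "category" is not a recognized top-level frontmatter key
name: analyze-error
category: debugging

# After: unrecognized keys moved under metadata
name: analyze-error
metadata:
  category: debugging
```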

Total: 10 / 11 (Passed)

Repository: DataDog/dd-trace-dotnet (Reviewed)
