**Skill description**

> Troubleshoot Golang programs systematically - find and fix the root cause. Use when encountering bugs, crashes, deadlocks, or unexpected behavior in Go code. Covers debugging methodology, common Go pitfalls, test-driven debugging, pprof setup and capture, Delve debugger, race detection, GODEBUG tracing, and production debugging. Start here for any 'something is wrong' situation. Not for interpreting profiles or benchmarking (see golang-benchmark skill) or applying optimization patterns (see golang-performance skill).
**Overall score: 86**

| Check | Result | Notes |
|---|---|---|
| Quality (85%) — does it follow best practices? | Passed | No known issues |
| Impact | Pending | No eval scenarios have been run |
## Discovery — 100%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
This is an excellent skill description that hits all the marks. It provides specific concrete capabilities, uses natural trigger terms a developer would use when encountering Go issues, clearly states both what it does and when to use it, and explicitly differentiates itself from related skills by cross-referencing them. The description is comprehensive yet concise.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and tools: debugging methodology, common Go pitfalls, test-driven debugging, pprof setup and capture, Delve debugger, race detection, GODEBUG tracing, and production debugging. | 3 / 3 |
| Completeness | Clearly answers both 'what' (troubleshoot Go programs, find and fix root cause, covers specific tools and methodologies) and 'when' (encountering bugs, crashes, deadlocks, unexpected behavior). Also explicitly delineates boundaries by referencing related skills for benchmarking and performance optimization. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'bugs', 'crashes', 'deadlocks', 'unexpected behavior', 'Go code', 'Golang', 'something is wrong', 'debugging', 'race detection'. These are highly natural phrases a user would use when encountering issues. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with explicit boundary-setting: 'Not for interpreting profiles or benchmarking (see golang-benchmark skill) or applying optimization patterns (see golang-performance skill).' This cross-referencing to related skills significantly reduces conflict risk and makes the niche crystal clear. | 3 / 3 |
| **Total** | | **12 / 12 — Passed** |
## Implementation — 70%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This is a well-structured debugging skill with excellent workflow clarity and progressive disclosure. The decision tree is a standout feature that quickly routes to the right diagnostic path. The main weaknesses are moderate verbosity in the philosophical/methodology sections (some of which Claude already knows) and limited executable code examples in the main file itself — the actionable content is largely deferred to reference files.
### Suggestions

- Trim the Golden Rules and Red Flags sections — several points restate debugging fundamentals Claude already knows (e.g., 'one hypothesis at a time', 'it's almost never a Go bug'). Keep the Go-specific guidance, cut the generic debugging philosophy.
- Add 1-2 minimal executable code snippets directly in the main file (e.g., a quick race detector invocation with expected output, or a minimal pprof setup) so the skill body itself has copy-paste ready content rather than deferring all code to references.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and well-structured, but includes some unnecessary elaboration — e.g., the Red Flags section is somewhat verbose, and some Golden Rules explain debugging philosophy Claude already understands (like 'one hypothesis at a time'). The decision tree and reference links are lean and effective, but the overall document could be tightened by ~20-30%. | 2 / 3 |
| Actionability | The decision tree provides concrete commands (go test -race, curl pprof endpoint, GOTRACEBACK=all), and the methodology gives clear steps. However, the main SKILL.md itself contains no executable code examples — it's mostly procedural guidance and philosophy. The actual executable content is deferred to reference files, so the skill body itself is more instructional than copy-paste ready. | 2 / 3 |
| Workflow Clarity | The workflow is exceptionally clear: a decision tree routes to the right section, the Golden Rules enforce a strict sequence (read error → reproduce → one hypothesis → root cause → fix → verify), red flags provide self-correction checkpoints, and the escalation path (fmt.Println → logging → pprof → Delve) is explicit. The 'NO FIXES WITHOUT ROOT CAUSE' constraint and feedback loops (3+ fix attempts → re-read code) are strong validation checkpoints. | 3 / 3 |
| Progressive Disclosure | Excellent progressive disclosure. The main file is a clear overview with a decision tree and golden rules, then 10 well-described reference files are linked with one-sentence summaries explaining exactly what each contains. References are one level deep and clearly signaled. Cross-references to related skills are also provided. | 3 / 3 |
| **Total** | | **10 / 12 — Passed** |
## Validation — 81%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

**Validation for skill structure — 9 / 11 passed**
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 — Passed** |