Go testing patterns including table-driven tests, subtests, benchmarks, fuzzing, and test coverage. Follows TDD methodology with idiomatic Go practices.
Overall score: 71

Quality: 62% (Does it follow best practices?)
Impact: 80% (1.12x average score across 3 eval scenarios)
Status: Passed, no known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./docs/zh-TW/skills/golang-testing/SKILL.md

Quality
Discovery: 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does well at listing specific Go testing capabilities and is clearly distinguishable as a Go-specific testing skill. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. Adding common user-facing trigger terms would also improve discoverability.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to write, run, or improve Go tests, or mentions go test, _test.go files, or test coverage.'
Include more natural user trigger terms such as 'unit tests', 'write tests', 'go test', '_test.go', and 'testing package' to improve keyword coverage.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions/patterns: table-driven tests, subtests, benchmarks, fuzzing, test coverage, and TDD methodology. These are all concrete, identifiable testing techniques. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific Go testing patterns and TDD methodology, but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes good Go-specific testing terms like 'table-driven tests', 'subtests', 'benchmarks', 'fuzzing', 'test coverage', and 'TDD'. However, it misses common natural user phrases like 'write tests', 'unit tests', '_test.go', 'go test', or 'testing package'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to Go testing specifically, with Go-idiomatic terminology (table-driven tests, subtests, fuzzing). Unlikely to conflict with general testing skills or other language-specific testing skills due to the explicit Go focus. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 57%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides excellent, actionable Go testing examples that are idiomatic and copy-paste ready, which is its greatest strength. However, it suffers from being a monolithic reference document (~400+ lines) that tries to cover everything inline without progressive disclosure to sub-files. Some content like the basic TDD walkthrough and the 'When to activate' section could be trimmed to improve conciseness.
Suggestions
Split the monolithic content into focused sub-files (e.g., BENCHMARKS.md, FUZZING.md, MOCKING.md, HTTP_TESTING.md) and keep SKILL.md as a concise overview with links to each pattern.
Remove the 'When to activate' section and the basic TDD step-by-step walkthrough — Claude already understands TDD and when to write tests; keep only the RED-GREEN-REFACTOR summary box.
Add a brief workflow section that sequences how to approach testing a new feature end-to-end: write test → run with -race → check coverage threshold → fix gaps → integrate into CI, with explicit validation checkpoints.
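The suggested end-to-end sequence could be sketched as a short shell script. The 80% threshold, file names, and gating logic here are assumptions for illustration, not part of the skill:

```shell
#!/usr/bin/env sh
# Sketch of the suggested test workflow; threshold and paths are assumptions.
set -e

# 1. Run the full suite with the race detector enabled.
go test -race ./...

# 2. Collect a coverage profile.
go test -coverprofile=coverage.out ./...

# 3. Extract the total coverage percentage from the profile summary.
total=$(go tool cover -func=coverage.out | awk '/^total:/ {gsub("%", "", $3); print $3}')

# 4. Gate on an (assumed) 80% threshold; a failure here means going back
#    to step 1 and filling the coverage gaps before integrating into CI.
awk -v t="$total" 'BEGIN { exit (t >= 80) ? 0 : 1 }' || {
  echo "coverage ${total}% is below 80%" >&2
  exit 1
}
echo "coverage ${total}% OK; ready for CI"
```

A script like this doubles as the CI step, so the local loop and the pipeline enforce the same checkpoint.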
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long (~400+ lines) and covers many patterns comprehensively, but includes some unnecessary verbosity like the 'When to activate' section, explanatory comments Claude already knows (e.g., '// placeholder'), and the TDD step-by-step walkthrough which is basic knowledge. The coverage targets table and best practices list add value but the overall document could be tightened significantly. | 2 / 3 |
| Actionability | Every section provides fully executable, copy-paste ready Go code examples with concrete test cases, bash commands, and even CI/CD YAML configuration. The code is complete and idiomatic, covering table-driven tests, subtests, benchmarks, fuzzing, HTTP handler testing, mocking, and golden files. | 3 / 3 |
| Workflow Clarity | The TDD RED-GREEN-REFACTOR cycle is clearly sequenced with explicit verification steps (run test, verify failure, implement, run test, verify pass). However, the overall document is more of a pattern catalog than a workflow guide. There are no validation checkpoints for the testing process itself (e.g., verifying coverage thresholds are met before proceeding, or handling flaky test scenarios). | 2 / 3 |
| Progressive Disclosure | The entire skill is a monolithic document with all content inline: table-driven tests, subtests, benchmarks, fuzzing, HTTP testing, mocking, golden files, CI/CD, and best practices are all in one file with no references to separate documents. This is a wall of text that would benefit greatly from splitting into focused sub-files with a concise overview in the main SKILL.md. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation checks: 10 / 11 passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (711 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |