
zig-docs

Fetches Zig language and standard library documentation via CLI. Activates when needing Zig API details, std lib function signatures, or language reference content that isn't covered in zig-best-practices.

Quality: 83%
Does it follow best practices?

Impact: 72% (1.50x)
Average score across 3 eval scenarios

Security (by Snyk)

Advisory: Suggest reviewing before use


Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted description that clearly identifies its niche (Zig documentation retrieval), provides explicit activation triggers, and proactively distinguishes itself from a related skill. The main weakness is that the 'what' portion could be slightly more specific about the concrete actions beyond 'fetches documentation'.

Dimension / Reasoning / Score

Specificity

Names the domain (Zig documentation) and a general action (fetches documentation via CLI), and mentions some specifics like 'API details, std lib function signatures, language reference content', but doesn't list multiple distinct concrete actions beyond fetching.

2 / 3

Completeness

Clearly answers both what ('Fetches Zig language and standard library documentation via CLI') and when ('Activates when needing Zig API details, std lib function signatures, or language reference content'), with an explicit boundary distinguishing it from zig-best-practices.

3 / 3

Trigger Term Quality

Includes strong natural trigger terms: 'Zig', 'standard library', 'std lib', 'function signatures', 'language reference', 'API details'. These are terms a user would naturally use when seeking Zig documentation help.

3 / 3

Distinctiveness & Conflict Risk

Clearly scoped to Zig documentation fetching specifically, and explicitly differentiates itself from the related 'zig-best-practices' skill, making it highly distinctive and unlikely to conflict.

3 / 3

Total: 11 / 12

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, highly actionable reference skill for fetching Zig documentation. Its greatest strength is the concrete, executable commands covering multiple access patterns (raw source, pandoc, API listing, grep searching). Minor weaknesses include some redundancy between sections and the content being slightly longer than necessary due to overlapping information between the instructions, examples, and tables.
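As a rough illustration of the "raw source" access pattern the review praises, here is a minimal sketch of how such a skill might build fetchable URLs for standard-library modules. The Codeberg path layout below is an assumption for illustration, not quoted from the skill itself:

```shell
# Build a raw-source URL for a Zig std-lib module.
# The repository path layout here is an assumption, used only as an example.
zig_std_url() {
  printf 'https://codeberg.org/ziglang/zig/raw/branch/master/lib/std/%s.zig\n' "$1"
}

zig_std_url json   # prints the assumed URL for std.json's source

# A fetch-and-search pattern would then look like:
#   curl -s "$(zig_std_url fmt)" | grep -n 'pub fn format'
```

Keeping URL construction in one helper function means a layout change only has to be fixed in one place, which is in the same spirit as the review's suggestion to avoid listing the same modules twice.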

Suggestions

Remove redundancy between the 'Instructions' bullet about JS-rendered docs and the identical note in 'Troubleshooting' — keep it in one place only.

Consider consolidating the 'Common Module Paths' table and the curl examples in 'Quick Reference' to avoid listing the same modules twice.

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient but includes some redundancy: the common modules table partially duplicates the curl examples, and the documentation sources table repeats information already conveyed in the instructions. The troubleshooting section also restates the JS-rendering caveat mentioned at the top. The content could be tightened by roughly 20-30%.

2 / 3

Actionability

Every section provides concrete, copy-paste-ready bash commands with specific URLs. The curl, pandoc, grep, and jq commands are fully executable. Module paths are specific and the search patterns show real usage with grep flags.

3 / 3

Workflow Clarity

This is essentially a reference/lookup skill rather than a multi-step destructive workflow, so complex validation checkpoints aren't needed. The single-task nature (fetch documentation) is unambiguous, with clear fallback paths (JS-rendered → use Codeberg raw, pandoc fails → use curl). The troubleshooting section provides error recovery guidance.
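The "pandoc fails → use curl" fallback described above can be sketched as a small shell helper. This is a hedged example rather than the skill's actual code, and the ziglang.org URL in the usage comment is an assumption:

```shell
# Convert HTML documentation to plain text when pandoc is installed;
# otherwise pass the raw HTML through unchanged as a last resort.
html_to_text() {
  if command -v pandoc >/dev/null 2>&1; then
    pandoc -f html -t plain
  else
    cat
  fi
}

# Usage (requires network; URL is an assumed example):
#   curl -s https://ziglang.org/documentation/master/ | html_to_text
```

Gating on `command -v pandoc` makes the degraded path explicit, which is the kind of clear fallback behavior the Workflow Clarity dimension rewards.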

3 / 3

Progressive Disclosure

The content is well-structured with clear section headers and tables for quick scanning, but it's all inline in one file. The common module paths table and documentation sources table are useful reference material that could be split into a separate reference file. For its length (~120 lines of content), it's borderline but slightly long for a single skill file.

2 / 3

Total: 10 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: NeverSight/skills_feed (Reviewed)

