readme-doctor

Audit a GitHub repo's README against best-practice patterns and produce a prioritized punch list of fixes. Runs a structured review covering hero presence, install-to-first-success length, "what is this in one sentence" clarity, audience-jargon match, scannability, and drift signals (stale versions, dead links, badge sprawl). Read-only diagnostic; opens a PR only when the user explicitly asks.

Quality: 65% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/repo-doctor/skills/readme-doctor/SKILL.md
Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted description with excellent specificity and a clear niche, listing concrete review dimensions that make the skill's purpose unmistakable. Its main weaknesses are the lack of an explicit 'Use when...' clause and limited natural trigger terms that users would actually say when requesting this kind of review. Adding explicit trigger guidance and more user-facing keywords would elevate this from good to excellent.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to review, audit, or improve a README, or mentions README quality, documentation review, or repo documentation.'

Include more natural user-facing trigger terms such as 'review my README', 'improve README', 'documentation quality', 'README best practices', or 'repo docs'.

Specificity (3 / 3)

Lists multiple specific, concrete actions: auditing the README against best-practice patterns, producing a prioritized punch list, reviewing hero presence, install-to-first-success length, one-sentence clarity, audience-jargon match, scannability, and drift signals (stale versions, dead links, badge sprawl). Very detailed and concrete.

Completeness (2 / 3)

The 'what' is thoroughly covered with specific review dimensions and output format. However, there is no explicit 'Use when...' clause or equivalent trigger guidance — the description only implies when it should be used. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2.

Trigger Term Quality (2 / 3)

Includes some natural keywords like 'README', 'GitHub repo', 'dead links', 'badge', and 'PR', but misses common user phrasings like 'review my README', 'improve documentation', 'README feedback', or 'docs quality'. The terms used are more diagnostic/technical than what users would naturally say.

Distinctiveness / Conflict Risk (3 / 3)

Very clear niche: specifically auditing GitHub repo READMEs against best-practice patterns. The detailed review dimensions (hero presence, install-to-first-success, badge sprawl, etc.) make this highly distinctive and unlikely to conflict with general documentation or code review skills.

Total: 10 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a thoughtfully designed audit skill with an excellent workflow structure and well-defined rubric criteria. Its main weaknesses are verbosity (the rubric and taxonomy sections could be more compact or split into reference files) and a lack of concrete executable examples — no sample output format, no example punch list, no specific commands for the discovery phase. The philosophical framing in the intro and some explanatory prose assume Claude needs convincing rather than instructing.

Suggestions

Add a concrete example of a completed punch list entry (showing the exact format with line numbers, evidence, and fix suggestion) so Claude knows the expected output shape.
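For illustration, one possible shape for such an entry (the format, priority label, and contents here are assumptions, not taken from the skill itself):

```
[P0] README.md:12 - Broken install command
  Evidence: line 12 runs `yarn install`, but the repo ships only package-lock.json
  Fix: change to `npm install`, or commit a yarn.lock
```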

Move the detailed rubric scoring anchors (§2.1–§2.9) into a separate RUBRIC.md reference file, keeping only a summary table in the main SKILL.md.

Replace the prose description of the scan phase (§1.3) with specific executable commands (e.g., `gh repo view --json stargazerCount,description`, `find . -maxdepth 2 -type f`).
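As a sketch of what a command-driven scan phase could look like, the following runs only local checks from the repo root (the `gh repo view` call from the suggestion above is left out because it needs network access and authentication; all paths and patterns here are assumptions):

```shell
# Hypothetical local scan commands for the scan phase; run from the repo root.

# Inventory markdown files in the top two directory levels.
find . -maxdepth 2 -type f -name '*.md' | sort

# Pull out every inline markdown link for a later dead-link pass.
grep -oE '\[[^]]*\]\([^)]+\)' README.md || true

# Flag version-looking strings that may have drifted.
grep -nE 'v[0-9]+\.[0-9]+(\.[0-9]+)?' README.md || true
```
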

Trim the opening paragraph — Claude doesn't need to be persuaded about why README shape matters; jump straight to the phases.

Conciseness (2 / 3)

The skill is well-written but verbose in places — the opening paragraph philosophizes about README problems rather than jumping to instructions, and some rubric descriptions explain concepts Claude already understands (e.g., what scannability means, what badges are). The audience/category taxonomy sections are thorough but could be tightened into tables.

Actionability (2 / 3)

The rubric criteria are concrete and well-defined with clear scoring anchors, but there are no executable code examples or specific commands beyond a mention of `gh repo view`. The skill describes what to do conceptually (score criteria, produce punch list) but doesn't show example output formats, example punch list entries, or exact CLI commands for the scan phase.

Workflow Clarity (3 / 3)

The four-phase workflow is clearly sequenced with explicit decision points (operating mode selection, user confirmation before PRs, category override in Semi-auto mode). Phase 4 includes a proper confirmation gate before destructive operations, and the punch list prioritization scheme (P0/P1/P2) provides clear decision criteria for what matters most.

Progressive Disclosure (2 / 3)

The content is well-structured with clear H2/H3 headings and numbered sections, but it's a long monolithic document (~200+ lines) with detailed rubric criteria that could be split into a separate RUBRIC.md reference file. The cross-references to `repo-visuals` are helpful but the inline rubric detail bloats the main skill file.

Total: 9 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: livlign/claude-skills (Reviewed)
