Audit a GitHub repo's README against best-practice patterns and produce a prioritized punch list of fixes. Runs a structured review covering hero presence, install-to-first-success length, "what is this in one sentence" clarity, audience-jargon match, scannability, and drift signals (stale versions, dead links, badge sprawl). Read-only diagnostic; opens a PR only when the user explicitly asks.
Most README problems aren't typos or missing sections — they're shape problems: the hero never lands, install-to-first-success buries the lede, the "what is this" sentence assumes prior context, the badge row is louder than the content. This skill audits a README against patterns that consistently correlate with maintainer outcomes (stars, contributor onboarding, issue quality) and produces a punch list a maintainer can act on in an afternoon.
The skill's quality comes from the rubric being grounded in the repo's actual audience and category, not from a generic checklist. A README for a 50K-star framework needs different things than a personal-project utility.
Same three modes as repo-visuals — Auto, Semi-auto (recommended), Manual. Use AskUserQuestion. Mode affects how many decisions are silent vs surfaced; it does not skip rubric checks.
User may provide:
- A repo URL: fetch with `gh repo view` + clone shallow
- A pasted README only: checks that need repo context can't run (`examples/` dir, etc.). Flag these as "skipped — paste-only mode."

Then scan:

- The manifest: `package.json`, `Cargo.toml`, `pyproject.toml`, etc. — version, description, keywords
- Layout: does `examples/` exist? `docs/`? Image assets?
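A minimal sketch of that scan step, assuming a Node runtime with `git` on PATH; the function name and return shape are illustrative, not the skill's actual interface:

```ts
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

// Shallow-clone and pull out the metadata the rubric needs.
// Sketch only: a real run would handle more manifest formats and missing files.
function scanRepo(repoUrl: string, dir = "./_audit"): Record<string, unknown> {
  execSync(`git clone --depth 1 ${repoUrl} ${dir}`, { stdio: "ignore" });

  const manifestPath = `${dir}/package.json`;
  const manifest = existsSync(manifestPath)
    ? JSON.parse(readFileSync(manifestPath, "utf8"))
    : null;

  return {
    readme: existsSync(`${dir}/README.md`)
      ? readFileSync(`${dir}/README.md`, "utf8")
      : null,
    version: manifest?.version ?? null,
    description: manifest?.description ?? null,
    keywords: manifest?.keywords ?? [],
    hasExamples: existsSync(`${dir}/examples`),
    hasDocs: existsSync(`${dir}/docs`),
  };
}
```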
Categorize the repo from the scan — different categories get different rubric weights. State the inferred category back to the user with one-line evidence ("inferred CLI tool — `bin/` entry in `package.json`, README opens with a `$ npx` line"). In Auto mode proceed silently; Semi-auto/Manual let the user override.
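The weighting might be represented like this; the category names and numbers are assumptions for illustration, not the skill's actual tables:

```ts
// Illustrative per-category rubric weights: a CLI tool lives or dies on
// install-to-first-success; a framework leans harder on the hero.
const rubricWeights: Record<string, Record<string, number>> = {
  "cli-tool": { hero: 1.0, installToFirstSuccess: 1.5, exampleFirst: 1.0 },
  "library": { hero: 0.8, installToFirstSuccess: 1.2, exampleFirst: 1.5 },
  "framework": { hero: 1.2, installToFirstSuccess: 1.0, exampleFirst: 1.2 },
  "personal-project": { hero: 0.5, installToFirstSuccess: 1.0, exampleFirst: 0.8 },
};
```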
Who is this README written for? Infer from jargon density, claimed prerequisites, comparison points named.
A README that's pitched at the wrong audience for its category is the single most common shape problem. Catch it here.
Each criterion: score 1–5 with one-line evidence and (for any score ≤3) one-line fix. Default 3, evidence required to move.
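One way to represent a single rubric row (a sketch; field names are illustrative):

```ts
// A score moves off the default of 3 only with citable evidence,
// and any score of 3 or below must carry a concrete fix.
interface CriterionResult {
  criterion: string;
  score: 1 | 2 | 3 | 4 | 5; // defaults to 3
  evidence: string;         // one line, citing README line numbers
  fix?: string;             // required when score <= 3
}
```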
The first sentence after the title should answer "what is this thing" without prior context. Test: imagine a stranger landing here from a Hacker News link. Do they know what it is in 10 seconds?
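To apply that test mechanically, one could isolate the sentence first. A sketch, assuming conventional markdown structure:

```ts
// Pull the first prose sentence after the H1, skipping badges, images,
// and sub-headings, so the "stranger from Hacker News" test can be
// applied to it in isolation.
function firstSentenceAfterTitle(readme: string): string | null {
  const lines = readme.split("\n");
  const titleIdx = lines.findIndex((l) => /^#\s/.test(l));
  if (titleIdx === -1) return null;
  const prose = lines
    .slice(titleIdx + 1)
    .filter((l) => l.trim() && !/^[#!<\[]/.test(l.trim()))
    .join(" ");
  const match = prose.match(/[^.!?]+[.!?]/);
  return match ? match[0].trim() : null;
}
```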
Above-the-fold = first ~25 lines, before any heading deeper than H2. Does the README open with an image, GIF, or visible-output block that makes the project's value legible without reading?
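A rough automated version of the hero check; treating any fenced block as a visible-output block is an approximation:

```ts
// Is there an image, GIF embed, or fenced block in the first ~25 lines,
// before anything deeper than an H2?
function hasHero(readme: string): boolean {
  const fold: string[] = [];
  for (const line of readme.split("\n").slice(0, 25)) {
    if (/^#{3,}\s/.test(line)) break; // H3 or deeper ends the fold
    fold.push(line);
  }
  return fold.some(
    (l) => /!\[.*\]\(.*\)/.test(l) || /<img\s/i.test(l) || /^```/.test(l)
  );
}
```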
Count the lines (or clicks) from "I want to try this" to "I see the thing working." Less is more. Measure as: line of first install command → line of first runnable example → line of first observable output.
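A crude way to locate those three lines; the patterns are heuristics, not the skill's definition:

```ts
// Line indices of the first install command, first runnable example, and
// first output-looking block; the gap between them is the distance a
// newcomer has to travel.
function installToFirstSuccess(readme: string) {
  const lines = readme.split("\n");
  const find = (re: RegExp) => lines.findIndex((l) => re.test(l));
  return {
    install: find(/\b(npm i(nstall)?|npx|pip install|cargo (add|install)|brew install)\b/),
    firstExample: find(/^```[a-z]/i), // first language-tagged fence
    firstOutput: find(/^```(console|text|output)/), // crude "visible output" signal
  };
}
```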
Does a runnable, real-world-shaped example appear before the reference docs / option list / config schema?
A qualifying example:

- uses real names (not `foo`/`bar`/`baz`), runs end-to-end as shown
- doesn't open on a placeholder like `import foo from 'foo'` (detection sketch below)

Does the jargon density match the audience inferred in §1.5?
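For the placeholder test above, a crude detector might look like this; the signal list is an assumption:

```ts
// Metasyntactic names and imports that couldn't run against the real package.
const placeholderSignals: RegExp[] = [
  /\b(foo|bar|baz|qux)\b/,
  /import\s+\w+\s+from\s+['"](foo|bar|baz|your-package)['"]/,
];

function looksLikePlaceholder(exampleCode: string): boolean {
  return placeholderSignals.some((re) => re.test(exampleCode));
}
```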
Headings, paragraphs, lists, tables — can a skim-reader find what they need in 30 seconds?
Things that should not be in a README a year later (drift signals):

- version claims that no longer match the manifest
- dead links
- badge sprawl (a badge row louder than the content)

Score: 1–5 as with the other criteria; evidence required, one-line fix for any score ≤3.
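A sketch of an automated drift sweep, assuming Node 18+ for `fetch` and network access; the badge threshold is arbitrary:

```ts
// Version strings that disagree with the manifest, dead links, badge count.
async function driftSignals(readme: string, manifestVersion: string) {
  const staleVersions = [...readme.matchAll(/\bv?(\d+\.\d+\.\d+)\b/g)]
    .map((m) => m[1])
    .filter((v) => v !== manifestVersion);

  const links = [...readme.matchAll(/\]\((https?:\/\/[^)\s]+)\)/g)].map((m) => m[1]);
  const deadLinks: string[] = [];
  for (const url of links) {
    const res = await fetch(url, { method: "HEAD" }).catch(() => null);
    if (!res || res.status >= 400) deadLinks.push(url);
  }

  const badgeCount = (readme.match(/img\.shields\.io/g) ?? []).length;
  return { staleVersions, deadLinks, badgeSprawl: badgeCount > 6 };
}
```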
For non-1.0 / personal / experimental projects: does the README set scope honestly? "This is a weekend project. It works for X. It will not handle Y."
(Skip this check for repos clearly past 1.0 with active maintenance.)
For repos that want contributors: is there a low-friction on-ramp? CONTRIBUTING.md, "good first issue" labels, dev-setup section?
Watch for: CONTRIBUTING.md exists but is generic boilerplate.

(Skip for repos that explicitly don't accept contributions.)
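A sketch of the on-ramp check, assuming an authenticated `gh` CLI; using raw length as the boilerplate heuristic is a stand-in for real judgment:

```ts
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

// CONTRIBUTING.md present and non-trivial, plus "good first issue" labels.
function contributorOnRamp(dir: string, repo: string) {
  const path = `${dir}/CONTRIBUTING.md`;
  const contributing = existsSync(path) ? readFileSync(path, "utf8") : null;
  const labelNames = execSync(`gh api repos/${repo}/labels --jq ".[].name"`, {
    encoding: "utf8",
  }).split("\n");
  return {
    hasContributing: contributing !== null,
    looksBoilerplate: contributing !== null && contributing.length < 400,
    hasGoodFirstIssue: labelNames.some((n) => /good first issue/i.test(n)),
  };
}
```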
Convert scored criteria into a prioritized list (P0 first, then P1, and so on):
For each item: what's wrong (one sentence, citing line numbers), why it matters (one sentence tied to the inferred audience), suggested fix (one sentence, concrete enough to act on).
Display the punch list grouped by priority. In Auto mode, also output an overall README health score (simple average of the rubric × 20 = /100).
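That scoring rule in code, as a trivial sketch:

```ts
// Mean of the 1-5 rubric scores, scaled to a /100 health score.
function healthScore(results: { score: number }[]): number {
  const mean = results.reduce((sum, r) => sum + r.score, 0) / results.length;
  return Math.round(mean * 20); // e.g. straight 4s -> 80/100
}
```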
By default this skill is read-only. The punch list lives in the chat.
If the user asks ("write the fixes," "open a PR," "apply P0/P1"):
- Edit README.md locally — minimal diffs, one item per commit ideally
- Follow repo-visuals Phase 5: detect ownership, fork if needed, open PR with the punch list as the PR description, alt-text where relevant

Never auto-apply edits without explicit user confirmation. This skill's value is the diagnosis; the surgery is the user's call.
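A sketch of that apply flow, assuming an authenticated `gh` CLI; branch name and commit format are illustrative, and nothing here runs without the user's explicit go-ahead:

```ts
import { execSync } from "node:child_process";

// One punch-list item per commit, then a PR whose body is the punch list.
function applyFixes(items: { id: string; summary: string }[], punchList: string) {
  execSync("git checkout -b readme-doctor-fixes");
  for (const item of items) {
    // ...apply this item's edit to README.md, then commit it alone:
    execSync(`git commit -am "readme: ${item.id} ${item.summary}"`);
  }
  execSync('gh pr create --title "README fixes from readme-doctor" --body-file -', {
    input: punchList,
  });
}
```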
Out of scope:

- Creating hero images or GIFs (that's repo-visuals's job)
- Markdown style linting (there's markdownlint for that)