Scan a directory or workspace for SKILL.md files across all agents and repos, capture supporting files (references, scripts, linked docs), dedupe vendored copies, enrich each Tessl tile with registry signals, and emit a canonical JSON inventory validated by JSON Schema. Then run four analytical phases in parallel against the inventory:

- staleness + git provenance (history, broken refs, contributors)
- quality (Tessl `skill review`)
- duplicates (similarity + LLM judgement)
- registry search (per-standalone-skill registry suggestions, HTTP only)

Finally, render a self-contained interactive HTML report with a top-of-report health overview, a top-issues panel, a recently-changed list, and a per-tessl.json manifests view.
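To make the discovery phase concrete, here is a minimal Python sketch of the scan-and-dedupe step. The function name, the content-hash dedupe strategy, and the output shape are illustrative assumptions; the real phase also enriches entries with registry signals and validates the inventory against a JSON Schema, which this sketch omits.

```python
import hashlib
import json
from pathlib import Path

def discover_skills(root: str) -> list[dict]:
    """Walk `root` for SKILL.md files; treat identical content as vendored copies."""
    seen: dict[str, dict] = {}
    for path in sorted(Path(root).rglob("SKILL.md")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            # Same bytes found elsewhere: record a vendored copy, not a new skill.
            seen[digest]["vendored_copies"].append(str(path))
            continue
        seen[digest] = {
            "path": str(path),
            "sha256": digest,
            # Supporting files (references, scripts, linked docs) sit beside the skill.
            "supporting_files": [
                str(p) for p in path.parent.iterdir()
                if p.is_file() and p.name != "SKILL.md"
            ],
            "vendored_copies": [],
        }
    return list(seen.values())

if __name__ == "__main__":
    inventory = discover_skills("./resources/myrepo")
    out = Path("./resources/myrepo/.skill-insights")
    out.mkdir(parents=True, exist_ok=True)
    (out / "discovery.json").write_text(json.dumps({"skills": inventory}, indent=2))
```

Each of the four analytical phases then reads the inventory independently, which is what lets them run in parallel.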
[Scorecard: 84 · "Does it follow best practices?" 90% · Impact 97% · 1.44x average score across 2 eval scenarios · Advisory: suggest reviewing before use]
A team-mate has dropped off a small repository and asked you to give them a one-page summary of every skill it contains: how many there are, how stale each one looks, whether any of them duplicate each other, and how their content quality stacks up. They specifically want a single self-contained HTML report they can share in Slack, and the underlying JSON files so they can grep through the raw data afterwards.
The repository lives at ./resources/myrepo in your working directory. It is a real git checkout (the setup script you were handed has already initialised it and made one commit so that the history is non-empty).
Run the full skill-insights pipeline against ./resources/myrepo and produce, in the repo's .skill-insights/ directory, all of:
- discovery.json — the canonical inventory
- staleness.json — per-skill staleness scores
- quality.json — per-skill quality scores (or a graceful failure record if the Tessl CLI is unavailable in this sandbox)
- duplicates.json — duplicate clusters and overlap pairs
- report.html — the rendered self-contained report

Once the pipeline has run, write a short summary to ./pipeline-log.md (in the working directory, NOT inside the repo) that records:
- discovery.json
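As a rough sketch of the post-run bookkeeping, assuming the artifact names listed above and assuming discovery.json exposes a top-level "skills" array (an illustrative guess, not the pipeline's actual schema):

```python
import json
from pathlib import Path

OUT = Path("./resources/myrepo/.skill-insights")
EXPECTED = ["discovery.json", "staleness.json", "quality.json",
            "duplicates.json", "report.html"]

lines = ["# Pipeline log", ""]
for name in EXPECTED:
    # Record which artifacts the run actually produced.
    status = "present" if (OUT / name).exists() else "MISSING"
    lines.append(f"- `{name}`: {status}")

discovery = OUT / "discovery.json"
if discovery.exists():
    # Assumes a top-level "skills" array in the inventory (illustrative only).
    skills = json.loads(discovery.read_text()).get("skills", [])
    lines.append(f"- skills discovered: {len(skills)}")

# The log belongs in the working directory, NOT inside the repo.
Path("./pipeline-log.md").write_text("\n".join(lines) + "\n")
```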