tessleng/skill-insights

Scan a directory or workspace for SKILL.md files across all agents and repos, capture supporting files (references, scripts, linked docs), dedupe vendored copies, enrich each Tessl tile with registry signals, and emit a canonical JSON inventory validated by JSON Schema. Then run four analytical phases in parallel against the inventory:

  • staleness + git provenance (history, broken refs, contributors)
  • quality (Tessl `skill review`)
  • duplicates (similarity + LLM judgement)
  • registry-search (per-standalone-skill registry suggestions, HTTP only)

Finally, render a self-contained interactive HTML report with a top-of-report health overview, top-issues panel, recently-changed list, and per-tessl.json manifests view.
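The fan-out step is the heart of the pipeline: one discovery pass, then four independent analyses over the same inventory. Below is a minimal Python sketch of that orchestration, assuming the inventory exposes a "skills" array with "name" and "path" fields; that shape, the exact `tessl skill review` invocation, and the registry-search.json file name are all assumptions, and the non-quality phase bodies are stubs standing in for the real analyses.

```python
import json
import shutil
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

OUT = Path("resources/myrepo/.skill-insights")
inventory = json.loads((OUT / "discovery.json").read_text())

def run_staleness(inv):
    # Real phase: walk git history per skill, flag broken refs, list contributors.
    return [{"skill": s["name"], "staleness": None} for s in inv["skills"]]

def run_quality(inv):
    # Shell out to the Tessl CLI; record a degraded result instead of
    # crashing when the CLI is missing from the sandbox.
    if shutil.which("tessl") is None:
        return {"degraded": True, "reason": "tessl CLI not on PATH"}
    out = []
    for s in inv["skills"]:
        # Invocation form is an assumption; the source only names `skill review`.
        proc = subprocess.run(["tessl", "skill", "review", s["path"]],
                              capture_output=True, text=True)
        out.append({"skill": s["name"], "ok": proc.returncode == 0,
                    "review": proc.stdout})
    return out

def run_duplicates(inv):
    # Real phase: pairwise similarity plus an LLM judgement pass.
    return {"clusters": [], "overlap_pairs": []}

def run_registry_search(inv):
    # Real phase: HTTP-only registry lookups per standalone skill.
    return {"suggestions": []}

phases = {"staleness": run_staleness, "quality": run_quality,
          "duplicates": run_duplicates, "registry-search": run_registry_search}

with ThreadPoolExecutor(max_workers=len(phases)) as pool:
    futures = {name: pool.submit(fn, inventory) for name, fn in phases.items()}
    for name, fut in futures.items():
        (OUT / f"{name}.json").write_text(json.dumps(fut.result(), indent=2))
```

Threads are sufficient here because each phase is dominated by subprocess and HTTP waits rather than CPU work.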

Audit the Skills in a Repository

Problem/Feature Description

A team-mate has dropped off a small repository and asked you to give them a one-page summary of every skill it contains: how many there are, how stale each one looks, whether any of them duplicate each other, and how their content quality stacks up. They specifically want a single self-contained HTML report they can share in Slack, and the underlying JSON files so they can grep through the raw data afterwards.

The repository lives at ./resources/myrepo in your working directory. It is a real git checkout (the setup script you were handed has already initialised it and made one commit so that the history is non-empty).

Output Specification

Run the full skill-insights pipeline against ./resources/myrepo and produce, in the repo's .skill-insights/ directory, all of the following (a verification sketch appears after the list):

  • discovery.json — the canonical inventory
  • staleness.json — per-skill staleness scores
  • quality.json — per-skill quality scores (or a graceful failure record if the Tessl CLI is unavailable in this sandbox)
  • duplicates.json — duplicate clusters and overlap pairs
  • report.html — the rendered self-contained report
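Once the run completes, a short check like this (a sketch; the file names are exactly the ones required above) confirms every artifact exists and that the inventory parses as JSON:

```python
import json
from pathlib import Path

out = Path("resources/myrepo/.skill-insights")
expected = ["discovery.json", "staleness.json", "quality.json",
            "duplicates.json", "report.html"]

# Fail loudly if any required artifact is missing.
missing = [name for name in expected if not (out / name).exists()]
assert not missing, f"pipeline outputs missing: {missing}"

# discovery.json must at minimum be well-formed JSON.
discovery = json.loads((out / "discovery.json").read_text())
```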

Once the pipeline has run, write a short summary to ./pipeline-log.md (in the working directory, NOT inside the repo) that records the following (a generation sketch appears after the list):

  1. The order in which you ran the phases (which were sequential, which were parallel)
  2. The final skill count and repo count from discovery.json
  3. Whether any phase degraded gracefully and why
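A sketch of producing that log from the pipeline outputs is below. The "skills" and "repos" keys and the "degraded" flag are assumptions about the JSON schemas; adjust them to whatever discovery.json and quality.json actually contain.

```python
import json
from pathlib import Path

out = Path("resources/myrepo/.skill-insights")
discovery = json.loads((out / "discovery.json").read_text())
quality = json.loads((out / "quality.json").read_text())

# Key names below are assumed; verify against the real inventory schema.
skill_count = len(discovery.get("skills", []))
repo_count = len(discovery.get("repos", []))
degraded = ("quality (Tessl CLI unavailable in sandbox)"
            if isinstance(quality, dict) and quality.get("degraded")
            else "none")

Path("pipeline-log.md").write_text(f"""# Pipeline log

1. Phase order: discovery first, then staleness / quality / duplicates /
   registry-search in parallel, then report rendering.
2. Final counts: {skill_count} skills across {repo_count} repos.
3. Degraded phases: {degraded}.
""")
```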
