
repo-visuals

Create hero visuals — animated GIF or static PNG — for GitHub repositories. Runs a structured discovery conversation (scan repo → recommend format → propose creative scenarios → agree on a brief), then designs bespoke HTML, previews it in the browser, and exports.


Quality: 57% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Critical (Do not install without reviewing)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/repo-visuals/skills/repo-visuals/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted description with excellent specificity and a clearly defined niche. The structured workflow is well-articulated, making it easy to understand the skill's capabilities. However, it lacks an explicit 'Use when...' clause and could benefit from additional natural trigger terms that users might employ when requesting this type of work.

Suggestions

Add a 'Use when...' clause, e.g., 'Use when the user wants a banner, hero image, social preview, or visual header for a GitHub repository or README.'

Include additional natural trigger terms like 'banner', 'readme image', 'repo header', 'social preview', or 'open graph image' to improve discoverability.
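Combining both suggestions, the frontmatter might read roughly like this (a sketch only; the field names follow common SKILL.md conventions and the wording is illustrative, not the skill's actual metadata):

```yaml
# Illustrative only: a revised description with trigger terms and a 'Use when...' clause.
name: repo-visuals
description: >
  Create hero visuals (animated GIF or static PNG) for GitHub repositories:
  banners, readme images, repo headers, social previews, open graph images.
  Runs a structured discovery conversation, designs bespoke HTML, previews it
  in the browser, and exports. Use when the user wants a banner, hero image,
  social preview, or visual header for a GitHub repository or README.
```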

Dimension: Specificity
Reasoning: Lists multiple specific concrete actions: create hero visuals (animated GIF or static PNG), run a structured discovery conversation (scan repo, recommend format, propose creative scenarios, agree on a brief), design bespoke HTML, preview in browser, and export.
Score: 3 / 3

Dimension: Completeness
Reasoning: Clearly answers 'what does this do' with detailed actions and workflow, but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this dimension at 2 per the rubric guidelines.
Score: 2 / 3

Dimension: Trigger Term Quality
Reasoning: Includes some natural keywords like 'hero visuals', 'animated GIF', 'static PNG', 'GitHub repositories', but misses common user terms like 'banner', 'readme image', 'repo header', 'social preview', or 'open graph image' that users might naturally say.
Score: 2 / 3

Dimension: Distinctiveness / Conflict Risk
Reasoning: Very clear niche — hero visuals specifically for GitHub repositories with a defined workflow (discovery conversation → HTML design → browser preview → export). Unlikely to conflict with general image generation or web design skills.
Score: 3 / 3

Total: 10 / 12 (Passed)

Implementation

47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a deeply thorough, well-structured workflow with excellent phase sequencing, validation gates, and mode-aware behavior — the workflow clarity is genuinely strong. However, it suffers significantly from verbosity: the main file tries to be both an overview and a comprehensive reference, inlining extensive rationale, incident histories, and edge-case handling that should live in the referenced craft/ files. The actionability is moderate — concrete in places (AskUserQuestion shapes, CLI commands) but abstract in others (HTML structure described rather than templated).

Suggestions

Move the detailed craft rules in §2.4 (layout discipline bullets, fake-app-UI rules, headline voice rules, asset embedding rules) into craft/rules.md and craft/headlines.md where they're already referenced — keep only a 1-2 line summary with a pointer in the main file.

Strip rationale paragraphs that explain *why* a rule exists (e.g., 'Why one retry, not many', 'Why on HTML not the exported artifact') — Claude can follow the rule without the design justification.

Add a concrete, minimal working example of the index.html structure described in §2.3 — even a 30-line skeleton with stage, timeline object, and one scene function would make the build phase far more actionable.

Reduce the Gate A and Gate B sections by ~60% — the critique prompts and pass/fail rules are useful, but the surrounding explanation of scope narrowing and incident references is excessive.
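The index.html skeleton suggested above could be sketched roughly as follows. All names here (`timeline`, `play`, the scene callbacks) are hypothetical and not taken from the skill; a real index.html would append DOM nodes to a stage element, but the stage is modeled as a plain array so the timeline/scene shape stays visible and testable:

```javascript
// Hypothetical sketch of the stage + timeline + scene structure the
// suggestion describes. Each scene declares a start time (ms) and a
// render callback that draws onto the stage.
const timeline = [
  { at: 0,    run: (stage) => stage.push('intro headline') },  // scene 1
  { at: 1500, run: (stage) => stage.push('feature card') },    // scene 2
];

// Play every scene in time order and return what landed on the stage.
function play(scenes) {
  const stage = [];  // stand-in for a #stage container element
  for (const scene of [...scenes].sort((a, b) => a.at - b.at)) {
    scene.run(stage);
  }
  return stage;
}
```

Calling `play(timeline)` returns `['intro headline', 'feature card']`; in a browser build, `at` would drive `setTimeout` or `requestAnimationFrame` scheduling instead of an eager loop.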

Dimension: Conciseness
Reasoning: Extremely verbose — the skill is thousands of tokens long with extensive rationale, incident references, edge-case commentary, and explanations of why rules exist. Much of this (e.g., explaining what sycophantic loops are, why one retry not many, detailed mode-interaction tables for gates) is context Claude can infer. The document reads more like an internal design doc than a lean skill instruction.
Score: 1 / 3

Dimension: Actionability
Reasoning: Contains some concrete, executable elements (the AskUserQuestion JSON shape, ffmpeg commands, directory layouts, specific CLI commands for browser opening) but much of the guidance is procedural prose rather than copy-paste-ready code. Key export recipes and shipping mechanics are deferred to external files (craft/export.md, craft/ship.md) rather than included inline, and the HTML structure in §2.3 is described abstractly rather than with a working template.
Score: 2 / 3

Dimension: Workflow Clarity
Reasoning: The multi-phase workflow is exceptionally well-sequenced with explicit gates (Gate A, Gate B), clear entry/exit conditions, convergence checklists (§1.6), mode-dependent behavior tables, and feedback loops (fail → fix → re-validate, with a hard cap at one retry). Validation checkpoints are explicit and well-reasoned for each phase transition.
Score: 3 / 3

Dimension: Progressive Disclosure
Reasoning: References to external files are well-signaled and one level deep (craft/export.md, craft/ship.md, craft/headlines.md, craft/rules.md, craft/evaluate.md, craft/redesign.md, craft/reference-gallery.md), which is good. However, the main SKILL.md itself is a monolithic wall of text with enormous inline detail that should be in those referenced files instead — the §2.4 rules-of-thumb section alone contains multiple paragraphs of incident-driven craft rules that belong in craft/rules.md.
Score: 2 / 3

Total: 8 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: livlign/claude-skills (Reviewed)

