Run Lighthouse audits locally via CLI or Node API, parse and interpret reports, and set performance budgets. Use when measuring site performance, understanding Lighthouse scores, setting up budgets, or integrating audits into CI. Triggers on: lighthouse, run lighthouse, lighthouse score, performance audit, performance budget. Do NOT use for fixing specific performance issues (use perf-web-optimization or core-web-vitals) or Astro-specific optimization (use perf-astro).
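The "set performance budgets" capability above can be grounded with a small example. A minimal `budget.json` sketch in Lighthouse's LightWallet budget format; the specific paths and numbers here are illustrative assumptions, not recommendations from the skill:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Timing budgets are in milliseconds and size budgets in kilobytes; the CLI can pick the file up via `lighthouse https://example.com --budget-path=budget.json`, after which budget results appear in the report's performance section.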
**Score:** 92 (89%). Does it follow best practices?

- **Impact:** Pending. No eval scenarios have been run.
- **Advisory:** Suggest reviewing before use.

## Quality

### Discovery (100%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an exemplary skill description that excels across all dimensions. It provides specific capabilities, comprehensive trigger terms, clear 'what' and 'when' guidance, and explicitly delineates boundaries with other related skills to minimize conflict risk. The inclusion of 'Do NOT use' clauses with alternative skill references is a best practice that sets this apart.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Run Lighthouse audits locally via CLI or Node API, parse and interpret reports, and set performance budgets.' These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (run audits, parse reports, set budgets) and 'when' (measuring site performance, understanding scores, setting up budgets, integrating into CI), with explicit trigger terms and even negative boundaries for when NOT to use it. | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger terms users would say: 'lighthouse, run lighthouse, lighthouse score, performance audit, performance budget.' These cover common variations of how users phrase requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Excellent distinctiveness, with explicit negative boundaries ('Do NOT use for fixing specific performance issues... or Astro-specific optimization') and references to alternative skills, making it very clear when this skill should and should not be selected. | 3 / 3 |
| **Total** | | **12 / 12 (Passed)** |
### Implementation (79%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with excellent conciseness and fully executable examples across CLI, Node API, CI, and report parsing. Its main weaknesses are the lack of explicit validation/feedback loops in multi-step workflows (such as the build-comparison process) and the monolithic structure, which could benefit from splitting advanced topics into referenced files.
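The build-comparison workflow mentioned above amounts to plain post-processing of Lighthouse JSON reports (LHR objects). A minimal sketch, assuming two reports have already been saved to disk; the `baseline` and `current` objects below are trimmed stand-ins for real reports:

```javascript
// Compare category scores between two Lighthouse runs (LHR JSON objects).
// Scores in an LHR are 0-1; deltas are reported in points (0-100).
function compareRuns(baseline, current) {
  const deltas = {};
  for (const [id, cat] of Object.entries(current.categories)) {
    const before = baseline.categories[id]?.score ?? null;
    const after = cat.score;
    deltas[id] = {
      before: before === null ? null : Math.round(before * 100),
      after: Math.round(after * 100),
      delta: before === null ? null : Math.round((after - before) * 100),
    };
  }
  return deltas;
}

// Trimmed stand-ins for reports loaded via JSON.parse(fs.readFileSync(...)).
const baseline = { categories: { performance: { score: 0.83 } } };
const current = { categories: { performance: { score: 0.92 } } };

console.log(compareRuns(baseline, current).performance.delta); // → 9
```

Rounding at the end rather than per-report avoids reporting a one-point delta for a sub-point floating-point difference.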
#### Suggestions

- Add validation checkpoints to multi-step workflows: after running Lighthouse in the Compare Builds section, for example, verify that the JSON output exists and is valid before proceeding to the comparison.
- Consider splitting CI integration (GitHub Actions + LHCI) and report parsing/comparison into separate referenced files, keeping the main SKILL.md as a concise overview with quick-start content.
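The first suggestion can be sketched as a small guard function. A minimal sketch, assuming the report was written with `--output=json`; `validateReport` is a hypothetical helper for illustration, not part of Lighthouse:

```javascript
// Validate a raw Lighthouse JSON report before any downstream comparison step.
// Fails fast with a descriptive message instead of crashing later on undefined reads.
function validateReport(raw) {
  let lhr;
  try {
    lhr = JSON.parse(raw);
  } catch (e) {
    throw new Error(`Report is not valid JSON: ${e.message}`);
  }
  if (!lhr.categories || !lhr.categories.performance) {
    throw new Error("Report has no performance category; the audit may have failed");
  }
  if (lhr.runtimeError) {
    throw new Error(`Lighthouse runtime error: ${lhr.runtimeError.message}`);
  }
  return lhr;
}

// In a real workflow the string would come from fs.readFileSync('./report.json', 'utf8').
const lhr = validateReport('{"categories":{"performance":{"score":0.92}}}');
console.log(lhr.categories.performance.score); // → 0.92
```

Checking `lhr.runtimeError` matters because Lighthouse can emit a structurally valid report even when the page load itself failed.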
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient throughout. No unnecessary explanations of what Lighthouse is or how it works conceptually. Every section jumps straight into executable commands and code. Comments in code are minimal and useful. | 3 / 3 |
| Actionability | Every section provides fully executable, copy-paste-ready code: CLI commands with real flags, complete Node API scripts, full GitHub Actions YAML, a complete budget.json, and working comparison scripts. Nothing is pseudocode. | 3 / 3 |
| Workflow Clarity | The sections are well organized and sequenced logically (CLI → budgets → Node API → CI → parsing → comparing), but there are no explicit validation checkpoints or feedback loops. For example, the Compare Builds section lists steps but doesn't include verification of results or error handling for failed audits. | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear section headers, but it's a fairly long monolithic file (~180 lines of content) with no references to external files. The LHCI configuration, GitHub Actions setup, and comparison scripts could be split into separate reference files to keep the main skill leaner. | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
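The CI integration the table refers to can be sketched as a single workflow file. A hedged sketch using Lighthouse CI's `lhci autorun`; the build commands and Node version are assumptions about the project, not taken from the skill:

```yaml
name: Lighthouse CI
on: [push]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build            # assumes the project defines a build script
      - run: npm install -g @lhci/cli
      - run: lhci autorun             # reads collect/assert settings from lighthouserc.json
```

`lhci autorun` collects runs, applies any configured assertions, and fails the job when they are not met, which is what turns Lighthouse scores into a CI gate.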
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed, with no warnings or errors.