Audit a Studio-backed WordPress site for performance, accessibility, and visible frontend quality issues, then recommend or validate improvements.
Quality: 60% (Does it follow best practices?)
Impact: — (No eval scenarios have been run)
Validation: Passed (No known issues)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/codex/plugins/wordpress-studio/skills/auditing/SKILL.md`

Quality
Discovery: 57%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (auditing Studio-backed WordPress sites) and covers three audit dimensions, giving it reasonable distinctiveness. However, it lacks an explicit 'Use when...' clause, misses common user trigger terms like 'page speed', 'Lighthouse', or 'a11y', and could be more specific about the concrete actions performed during the audit.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to audit, review, or optimize a WordPress or Studio-backed site for speed, accessibility, or visual quality.'
- Include natural trigger term variations users would say, such as 'page speed', 'Lighthouse', 'WCAG', 'a11y', 'Core Web Vitals', 'load time', 'site review'.
- List more specific concrete actions, e.g., 'Run Lighthouse audits, check WCAG compliance, analyze Core Web Vitals, review image optimization, inspect layout and visual regressions.'
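Putting the three suggestions together, a revised description might read as follows (a sketch only; the `name` value and the frontmatter layout are assumed from typical SKILL.md conventions, not taken from the skill under review):

```markdown
---
name: auditing
description: >-
  Audit a Studio-backed WordPress site for performance, accessibility, and
  visible frontend quality issues, then recommend or validate improvements.
  Runs Lighthouse-style audits, checks WCAG/a11y compliance, analyzes Core
  Web Vitals and page speed, reviews image optimization, and inspects layout
  and visual regressions. Use when the user asks to audit, review, or
  optimize a WordPress or Studio-backed site for speed, accessibility, or
  visual quality.
---
```

Note how the rewrite front-loads the niche, enumerates concrete actions, embeds the trigger-term variations, and closes with the explicit 'Use when...' clause.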
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Studio-backed WordPress site) and lists areas of concern (performance, accessibility, frontend quality), but doesn't enumerate specific concrete actions like 'run Lighthouse audits, check WCAG compliance, analyze Core Web Vitals, review image optimization'. | 2 / 3 |
| Completeness | The 'what' is reasonably covered (audit for performance, accessibility, frontend quality; recommend/validate improvements), but there is no explicit 'Use when...' clause or trigger guidance telling Claude when to select this skill. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'WordPress', 'performance', 'accessibility', 'frontend', and 'Studio-backed', but misses common user variations like 'page speed', 'Lighthouse', 'WCAG', 'a11y', 'SEO', 'Core Web Vitals', 'load time', or 'site audit'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'Studio-backed WordPress site' with the specific audit focus areas (performance, accessibility, visible frontend quality) creates a clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured audit workflow skill with clear sequencing and good domain-specific thresholds. Its main weaknesses are the lack of concrete tool invocation examples (what does an `audit_performance` call look like? what does the output look like?) and some verbosity in sections where Claude's existing knowledge could be leveraged. The workflow clarity is strong with explicit re-testing and before/after comparison steps.
Suggestions
- Add a concrete example of an `audit_performance` tool invocation with sample parameters and a snippet of expected output to improve actionability.
- Include a sample audit report format (even abbreviated) so Claude knows the expected output structure.
- Trim the accessibility and visual QA bullet lists to focus on non-obvious, WordPress-specific guidance rather than general review criteria Claude already knows.
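To illustrate the first two suggestions, an added SKILL.md section might take a shape like this. All parameter names and values are hypothetical, since the real `audit_performance` schema is not shown in this review; the LCP and CLS thresholds, however, are the standard Core Web Vitals targets:

```markdown
## Example: performance audit

<!-- Hypothetical invocation; parameter names are illustrative,
     not taken from the actual MCP tool schema. -->
Call `audit_performance` for the resolved site, e.g. with:

- site: my-studio-site
- pages: ["/", "/shop"]

Report the results in a table such as:

| Metric | Value | Threshold | Status |
|---|---|---|---|
| LCP | 2.1 s | < 2.5 s | Pass |
| CLS | 0.04 | < 0.1 | Pass |
```

Even an abbreviated skeleton like this gives the agent a copy-paste-ready target for both the call and the report structure.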
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary framing (e.g., the 'Ownership' section listing what it owns, the 'Principle' section explaining when to use which workflow). The threshold tables are valuable and dense, but some prose could be tightened—e.g., the accessibility and visual QA bullet lists describe what to look for in ways Claude would already understand. | 2 / 3 |
| Actionability | The skill references specific MCP tools like `audit_performance`, `record_workflow_event`, and `studio`, and provides concrete threshold tables and metric names. However, there are no executable code examples, no example tool invocations with parameters, and no sample report output format. The guidance is specific but not copy-paste ready. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced (resolve site → pick scope → audit → report → re-test) with explicit validation in step 7 (re-test after changes with before/after comparison). The workflow events with stage markers provide checkpoints, and the numbered steps make the sequence unambiguous. | 3 / 3 |
| Progressive Disclosure | The skill references `studio` for site resolution and MCP tool details, which is appropriate delegation. However, there are no bundle files provided, so we can't verify the reference. The content is moderately long (~120 lines of substantive content) and the threshold tables plus detailed metric breakdowns could potentially be split into a reference file, but it's not egregiously monolithic. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
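If the threshold tables were split out as the Progressive Disclosure row suggests, SKILL.md could keep the workflow inline and link out to a bundled reference file. The file name and bundle layout below are assumed for illustration, not verified against the actual skill bundle:

```markdown
<!-- SKILL.md keeps the numbered workflow; dense tables move out -->
For metric thresholds and detailed per-metric breakdowns, see
[references/thresholds.md](references/thresholds.md).
```

This keeps the always-loaded instructions short while the detail remains one hop away when the agent needs it.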
Validation: 100% (11 / 11 Passed)
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: no warnings or errors.