When the user wants to audit, review, or diagnose SEO issues on their site. Also use when the user mentions "SEO audit," "technical SEO," "why am I not ranking," "SEO issues," "on-page SEO," "meta tags review," "SEO health check," "my traffic dropped," "lost rankings," "not showing up in Google," "site isn't ranking," "Google update hit me," "page speed," "core web vitals," "crawl errors," or "indexing issues." Use this even if the user just says something vague like "my SEO is bad" or "help with SEO" — start with an audit. For building pages at scale to target keywords, see programmatic-seo. For adding structured data, see schema-markup. For AI search optimization, see ai-seo.
Score: 70

Quality: 62% — does it follow best practices?

Impact: Pending — no eval scenarios have been run.

Advisory: suggest reviewing before use.
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.agents/skills/seo-audit/SKILL.md

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent trigger term coverage and completeness, including helpful cross-references to related skills that reduce conflict risk. Its main weakness is that the 'what it does' portion could be more specific about the concrete actions performed during an SEO audit (e.g., checking meta tags, analyzing page speed, reviewing internal links) rather than staying at the level of 'audit, review, diagnose.'
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Checks meta tags, analyzes page speed and core web vitals scores, identifies crawl errors, reviews heading hierarchy, evaluates internal linking structure, and assesses mobile-friendliness.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (SEO auditing) and some actions like 'audit, review, or diagnose SEO issues,' but doesn't list specific concrete actions such as 'check meta tags, analyze page speed scores, identify crawl errors, review heading structure.' The actions remain somewhat high-level. | 2 / 3 |
| Completeness | Clearly answers both 'what' (audit, review, diagnose SEO issues) and 'when' with an extensive explicit trigger list. It also helpfully distinguishes itself from related skills (programmatic-seo, schema-markup, ai-seo) with cross-references. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would actually say, including conversational phrases like 'why am I not ranking,' 'my traffic dropped,' 'not showing up in Google,' 'my SEO is bad,' alongside technical terms like 'core web vitals,' 'crawl errors,' and 'indexing issues.' | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly differentiates itself from related skills (programmatic-seo, schema-markup, ai-seo) with clear boundary statements. The focus on auditing/diagnosing creates a distinct niche, and the cross-references reduce conflict risk significantly. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a comprehensive SEO audit textbook rather than a concise instruction set for Claude. Its main strength is thoroughness — it covers all major SEO audit areas and provides a clear report output format. However, it significantly over-explains concepts Claude already knows, lacks executable code/commands for actually performing checks, and would benefit greatly from splitting detailed checklists into separate reference files while keeping the main skill as a lean workflow.
Suggestions
- Cut 60-70% of the content by removing explanations of concepts Claude already knows (E-E-A-T definitions, what title tags are, what HTTPS is, etc.) and keep only the specific decision rules and thresholds (e.g., 'LCP < 2.5s', 'titles 50-60 chars').
- Add executable commands and code snippets for performing actual checks — e.g., curl commands for robots.txt/sitemap validation, specific web_fetch patterns for checking meta tags, and shell commands for common audit tasks.
- Move the detailed checklists (Technical SEO, On-Page SEO, Content Quality, Common Issues by Site Type) into separate reference files and keep the main SKILL.md as a workflow overview with links to those references.
- Add explicit validation steps to the workflow — e.g., after identifying issues, verify findings before reporting; after recommending fixes, describe how to confirm the fix worked.
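As an illustration of the kind of executable check the suggestions above call for, here is a minimal Python sketch that encodes the cited thresholds (titles 50-60 chars, LCP < 2.5s) as decision rules. The meta-description range and all function names are assumptions for illustration, not part of the reviewed skill.

```python
# Threshold-based on-page checks; TITLE_RANGE and LCP_GOOD_S follow the
# thresholds cited above, META_DESC_RANGE is an assumed guideline.
from html.parser import HTMLParser

TITLE_RANGE = (50, 60)        # chars
META_DESC_RANGE = (120, 160)  # chars (assumption)
LCP_GOOD_S = 2.5              # Core Web Vitals "good" boundary, seconds

class HeadParser(HTMLParser):
    """Collect the <title> text and meta description from a page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def audit_head(html, lcp_seconds=None):
    """Return a list of human-readable issues found in the page head."""
    p = HeadParser()
    p.feed(html)
    issues = []
    if not TITLE_RANGE[0] <= len(p.title) <= TITLE_RANGE[1]:
        issues.append(f"title length {len(p.title)} outside {TITLE_RANGE}")
    if not META_DESC_RANGE[0] <= len(p.meta_description) <= META_DESC_RANGE[1]:
        issues.append("meta description length out of range")
    if lcp_seconds is not None and lcp_seconds >= LCP_GOOD_S:
        issues.append(f"LCP {lcp_seconds}s is not 'good' (< {LCP_GOOD_S}s)")
    return issues
```

In a lean SKILL.md, a rule table plus a snippet like this replaces paragraphs of textbook explanation: the agent runs the check and reports the returned issues directly.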
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~350+ lines, with extensive lists of things Claude already knows (what E-E-A-T stands for, what meta descriptions are, what HTTPS is, basic image optimization concepts). Much of this is textbook SEO knowledge that doesn't need to be spelled out — Claude already knows what title tags, heading hierarchy, and canonical tags are. The 'Common Issues by Site Type' section alone is largely general knowledge. | 1 / 3 |
| Actionability | The skill provides checklists and structured categories but lacks executable code or commands. There are no concrete examples of how to actually perform checks (e.g., curl commands to check robots.txt, scripts to validate sitemaps, specific Search Console API calls). The Schema Markup Detection Limitation section is one of the few genuinely actionable parts with a specific JS snippet. Most guidance is 'check for X' without showing how. | 2 / 3 |
| Workflow Clarity | The audit framework provides a priority order (Crawlability → Technical → On-Page → Content → Authority) and the output format section gives a clear report structure with a prioritized action plan. However, there are no validation checkpoints, no feedback loops for verifying fixes, and no explicit sequencing of how to actually conduct the audit step-by-step. The 'Initial Assessment' section is a good starting point, but the rest reads more like a reference checklist than a workflow. | 2 / 3 |
| Progressive Disclosure | The skill references two external files (references/ai-writing-detection.md and related skills like ai-seo, programmatic-seo, etc.), which is good. However, the massive amount of inline content (E-E-A-T details, common issues by site type, full on-page audit checklists) could easily be split into separate reference files. The skill tries to be both an overview and a comprehensive reference, resulting in a monolithic document. | 2 / 3 |
| Total | | 7 / 12 — Passed |
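The crawlability checks the Actionability row asks for can likewise be scripted. The sketch below assumes the robots.txt content has already been fetched (e.g., with the curl command the suggestions mention) and checks it offline with the standard library; the function name and sample rules are illustrative only.

```python
# Offline crawlability check: given robots.txt text, report which URLs a
# crawler would be blocked from fetching. No network access required.
from urllib.robotparser import RobotFileParser

def blocked_urls(robots_txt, urls, agent="Googlebot"):
    """Return the subset of urls the given user agent may not crawl."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [u for u in urls if not rp.can_fetch(agent, u)]

# Sample rules for illustration.
robots = """\
User-agent: *
Disallow: /admin/
Disallow: /search
"""

blocked = blocked_urls(robots, [
    "https://example.com/",
    "https://example.com/admin/users",
    "https://example.com/search?q=seo",
])
```

Pairing a step like this with a validation checkpoint ("report any important URL that appears in `blocked`") turns the checklist item 'check robots.txt' into a verifiable workflow step.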
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Skill-structure validation reported no warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.