Use this skill whenever a user wants to improve existing pages on their website to get cited more by AI models — whether they say "our pages aren't getting cited", "improve this page for AI visibility", "which of our pages should we update", "make this article more cite-worthy", "our competitors are getting cited instead of us", "update our content for AI search", or any variation where the goal is improving an existing asset rather than creating something new. This skill pulls owned pages from AI Visibility, identifies which ones have citation potential but are underperforming, compares them against the external pages that are winning citations on the same topics, and produces section-level rewrites or a full-page update — then pushes the revision to the CMS as a draft. Trigger even if the user just says "help me get cited more" or "why is [competitor] getting cited instead of us".
90 · 88%

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Advisory: Suggest reviewing before use
## Quality

### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that excels across all dimensions. It provides specific concrete actions, abundant natural trigger terms that users would realistically say, clearly answers both what and when, and carves out a distinct niche (improving existing pages for AI citations) that differentiates it from related skills. The only minor note is that it's somewhat verbose, but the verbosity serves a purpose by covering many trigger variations.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: pulls owned pages from AI Visibility, identifies underperforming pages with citation potential, compares against external pages winning citations, produces section-level rewrites or full-page updates, and pushes revisions to the CMS as a draft. | 3 / 3 |
| Completeness | Clearly answers both 'what' (pulls pages from AI Visibility, identifies underperforming ones, compares against competitors, produces rewrites, pushes to CMS) and 'when' (explicit 'Use this skill whenever...' clause with multiple trigger scenarios and a final 'Trigger even if...' catch-all). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger phrases users would say: 'our pages aren't getting cited', 'improve this page for AI visibility', 'make this article more cite-worthy', 'our competitors are getting cited instead of us', 'update our content for AI search', 'help me get cited more', 'why is [competitor] getting cited instead of us'. These are realistic user utterances. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to improving existing pages for AI citation visibility, distinct from content creation skills or general SEO skills. The explicit distinction 'improving an existing asset rather than creating something new' and the specific AI citation/visibility niche make it unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable skill with excellent workflow clarity and specific API/tool guidance throughout. Its main weakness is length — at ~300+ lines with all CMS variants and diagnostic criteria inline, it could benefit from splitting reference material into separate files. The closing 'principles' section is somewhat redundant with guidance already woven into the workflow steps.
**Suggestions**

- Extract the CMS-specific connection and push instructions (Steps 0 and 7) into a shared CMS_REFERENCE.md file, since this pattern appears in multiple skills (the skill itself references `prompt-gap-to-publish`).
- Remove or significantly trim the 'What makes a page AI-cite-worthy' section: its principles are already operationalized in Steps 4 and 5, making the summary redundant.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is generally well written but includes some unnecessary explanation, particularly the 'What makes a page AI-cite-worthy' section at the end, which restates principles already embedded in the workflow steps. The CMS discovery table and per-CMS instructions in Step 7 are useful reference material but add significant length. Some prose could be tightened (e.g., the opening paragraph explains logic Claude could infer). | 2 / 3 |
| Actionability | The skill provides specific API calls (e.g., `get_ai_visibility_pages` with exact parameters like `sortBy: "citationCount"` and `mentionsBrandId`), concrete output formats (tables with specific columns), exact CMS tool names and parameters (e.g., `state: DRAFT` for HubSpot), and a clear section-level patch format with BEFORE/AFTER/WHY structure. The guidance is highly executable. | 3 / 3 |
| Workflow Clarity | The 8-step workflow (Steps 0-7) is clearly sequenced with explicit checkpoints: Step 0 gates CMS discovery before analysis begins, Step 3 pauses for user selection, Step 4 requires user agreement on the diagnosis before rewriting, Step 6 offers simulation validation before publishing, and Step 7 always saves as a draft (never auto-publishes). The feedback loops and safety gates are well placed throughout. | 3 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with no bundle files to offload detail into. The CMS-specific instructions in Steps 0 and 7, the diagnosis dimensions in Step 4, and the 'What makes a page AI-cite-worthy' section could all be split into referenced files. There is one reference to the `prompt-gap-to-publish` skill for CMS setup guidance, but otherwise everything is inline. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
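The Actionability row above cites concrete tool parameters (`get_ai_visibility_pages` with `sortBy: "citationCount"` and `mentionsBrandId`). As a rough illustration of the shape of that page-selection step, here is a minimal Python sketch; the payload structure, field names such as `citationPotential`, the threshold, and the helper names are all illustrative assumptions, not the skill's actual implementation.

```python
# Hypothetical sketch of the page-selection step the review describes.
# Only `get_ai_visibility_pages`, `sortBy: "citationCount"`, and
# `mentionsBrandId` come from the skill text; everything else
# (payload shape, field names, threshold) is invented for illustration.

def build_pages_request(brand_id: str) -> dict:
    """Assemble the query the skill reportedly sends to AI Visibility."""
    return {
        "tool": "get_ai_visibility_pages",
        "params": {
            "sortBy": "citationCount",
            "mentionsBrandId": brand_id,
        },
    }

def underperforming(pages: list[dict], min_potential: int = 50) -> list[dict]:
    """Illustrative filter: pages with citation potential but no citations."""
    return [
        p for p in pages
        if p["citationPotential"] >= min_potential and p["citationCount"] == 0
    ]

pages = [
    {"url": "/guide", "citationPotential": 80, "citationCount": 0},
    {"url": "/blog", "citationPotential": 20, "citationCount": 0},
    {"url": "/docs", "citationPotential": 90, "citationCount": 12},
]
req = build_pages_request("brand-123")
print(req["params"]["sortBy"])                      # -> citationCount
print([p["url"] for p in underperforming(pages)])   # -> ['/guide']
```

The filter is the interesting part: the skill targets pages that *should* earn citations (high potential) but currently earn none, which is exactly the gap a section-level rewrite can close.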
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 passed, with no warnings or errors.