
source-outranker

Use this skill whenever a user wants to understand which external sources are being cited by AI models on topics relevant to their brand, and wants to create content that will outrank those sources — whether they say "what sources are AI models citing", "why is [third-party site] being cited instead of us", "we want to be the definitive source on X", "build something that gets cited more than G2 or TechRadar", "create an authoritative asset", or any variation where the goal is producing a new reference asset (definition page, benchmark, methodology, glossary, comparison hub) designed to beat existing top-cited sources. This skill analyzes AI Visibility source data, reverse-engineers what makes top-cited pages authoritative, and produces a superior source asset — then pushes it to CMS as a draft. Trigger on any mention of "sources", "third-party citations", "authoritative content", "definitional pages", or "outrank".

Overall score: 90

Quality

88%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that excels across all dimensions. It provides specific actions, abundant natural trigger terms with example user phrases, clear 'what' and 'when' guidance, and occupies a distinctive niche. The only minor weakness is that it uses second-person framing ('Use this skill whenever a user wants...') rather than pure third-person voice, though this is a common pattern and the description otherwise follows best practices closely.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: analyzes AI Visibility source data, reverse-engineers what makes top-cited pages authoritative, produces a superior source asset, and pushes it to CMS as a draft. Also specifies concrete output types like definition page, benchmark, methodology, glossary, comparison hub.

3 / 3

Completeness

Clearly answers both 'what' (analyzes AI Visibility source data, reverse-engineers authority factors, produces superior source assets, pushes to CMS) and 'when' (explicit 'Use this skill whenever...' clause with detailed trigger scenarios and a final 'Trigger on any mention of...' list).

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms including quoted user phrases ('what sources are AI models citing', 'why is [third-party site] being cited instead of us', 'we want to be the definitive source on X') and explicit trigger keywords ('sources', 'third-party citations', 'authoritative content', 'definitional pages', 'outrank'). These are terms users would naturally say.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive niche — focuses specifically on competitive source citation analysis for AI visibility and creating content to outrank third-party cited sources. The combination of AI citation analysis, authority reverse-engineering, and CMS publishing is unlikely to conflict with other skills.

3 / 3

Total: 12 / 12 (Passed)

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with a clear multi-step workflow that covers the full pipeline from source analysis to CMS publishing. Its main weaknesses are length (could be more concise by removing explanatory sections Claude doesn't need) and monolithic structure (the asset templates and CMS instructions could be split into referenced files for better progressive disclosure).

Suggestions

Move the 'What makes a source authoritative to AI models' section to a separate reference file or remove it — Claude can infer these principles from the audit criteria in Step 2.

Extract the three asset type templates (Definitive Guide, Benchmark Report, Comparison Hub) into a separate TEMPLATES.md file referenced from Step 4 to reduce the main skill's token footprint.

Trim editorial commentary like 'Not a skeleton — complete, publish-ready content' and 'This avoids a copy-paste dead end' which explain rationale Claude doesn't need.
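The template-extraction suggestion above implies a referenced-file pattern in the main SKILL.md. A minimal sketch of what that might look like (file name, anchors, and wording are illustrative, not taken from the skill itself):

```markdown
## Step 4: Draft the asset

Pick the template that matches the chosen asset type and follow it exactly:

- Definitive Guide: see [TEMPLATES.md](TEMPLATES.md#definitive-guide)
- Benchmark Report: see [TEMPLATES.md](TEMPLATES.md#benchmark-report)
- Comparison Hub: see [TEMPLATES.md](TEMPLATES.md#comparison-hub)
```

This keeps the main file's token footprint small while letting the agent load a template only when that step is reached.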

Dimension / Reasoning / Score

Conciseness

The skill is fairly long (over 300 lines) and includes some sections that explain concepts Claude already knows (e.g., the 'What makes a source authoritative to AI models' section at the end is largely general knowledge). The CMS mapping tables and asset type templates are useful but could be tighter, and editorial commentary ('Not a skeleton — complete, publish-ready content') adds unnecessary tokens.

2 / 3

Actionability

The skill provides specific API calls (e.g., `get_ai_visibility_sources`, `list_ai_visibility_org_brands`), concrete CMS tool patterns with exact function names, detailed asset templates with specific structural guidance (H1/H2 counts, word counts per section, meta field character limits), and clear output formats. The guidance is highly executable.

3 / 3
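To make the tool-call sequence the review praises concrete, here is a hedged sketch. The two function names come from the review itself; their signatures, arguments, and return shapes are stand-ins, since the real MCP tools may differ:

```python
# Stand-ins for the MCP tools named in the review; real signatures may differ.
def list_ai_visibility_org_brands(org_id):
    # Would return the brands tracked for this organization.
    return [{"id": "brand-1", "name": "ExampleBrand"}]

def get_ai_visibility_sources(brand_id, topic):
    # Would return the external domains AI models cite on this topic,
    # with citation counts. The data below is made up for illustration.
    return [
        {"domain": "g2.com", "citations": 42},
        {"domain": "techradar.com", "citations": 31},
    ]

# The skill's analysis phase: find the most-cited third-party source
# for a brand's topic, i.e. the page the new asset must outrank.
brands = list_ai_visibility_org_brands("org-123")
sources = get_ai_visibility_sources(brands[0]["id"], "product analytics")
top_source = max(sources, key=lambda s: s["citations"])
print(top_source["domain"])
```

The point of the sketch is the chaining: brand discovery feeds source retrieval, and the top-cited domain becomes the target for the reverse-engineering step.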

Workflow Clarity

The six-step workflow (Steps 0-5) is clearly sequenced with logical dependencies. Step 0 handles CMS discovery before analysis begins, preventing dead ends. Each step has clear inputs and outputs, there are explicit checkpoints (asking the user to pick an asset in Step 3, confirming the CMS push in Step 5), and the workflow includes a fallback (Markdown output is produced even when the CMS push succeeds).

3 / 3
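The fallback behavior described above can be sketched in a few lines. This is an illustration of the pattern, not the skill's actual implementation; the CMS client and its `create_draft` method are hypothetical:

```python
def publish_asset(markdown_draft, cms_client=None):
    """Always emit Markdown; add a CMS draft only when a client is available.

    Sketch of the fallback pattern the review describes. The CMS call is a
    stand-in, not a real API.
    """
    result = {"markdown": markdown_draft}  # fallback output, always produced
    if cms_client is not None:
        result["cms_draft_id"] = cms_client.create_draft(markdown_draft)
    return result

class FakeCMS:
    """Hypothetical CMS client used only to exercise the sketch."""
    def create_draft(self, body):
        return "draft-001"

print(publish_asset("# My Asset"))           # Markdown only
print(publish_asset("# My Asset", FakeCMS()))  # Markdown plus CMS draft id
```

Because the Markdown output is unconditional, a failed or absent CMS connection never leaves the user empty-handed.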

Progressive Disclosure

The content is entirely monolithic — everything lives in one long file with no references to supporting documents. The asset type templates (Definitive Guide, Benchmark Report, Comparison Hub) could each be separate reference files. There's a reference to 'prompt-gap-to-publish' guidance but no other external file structure. For a skill this long, splitting would improve navigability.

2 / 3

Total: 10 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: amplitude/builder-skills (Reviewed)
