
# competitor-prompt-hijacker

Use this skill whenever a user wants to win AI citations on prompts that competitors currently dominate — whether they say "competitors are getting cited instead of us", "we're losing on these prompts", "how do I outrank [competitor] in AI answers", "find prompts where we should be winning", "create content to beat [competitor]", or any variation where the goal is capturing AI share on prompts a competitor currently owns. This skill pulls competitor visibility data from AI Visibility, identifies the specific prompts where competitors win and Amplitude is absent, clusters them by intent, and produces targeted comparison pages, alternatives content, or rebuttal assets — then pushes drafts to CMS. Trigger on any mention of competitor, prompt hijack, outrank, or "why is [competitor] getting cited instead of us".
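The pipeline this description names (pull visibility data, find prompts a competitor dominates while Amplitude is absent, cluster by intent) could be sketched roughly as below. The data shape, helper names, and bucket rules are assumptions for illustration, not the skill's actual API or logic.

```python
# Hypothetical sketch of the prompt-hijack pipeline described above.
# Every function name and data shape here is an assumption.

def find_hijackable(prompts, competitor, comp_min=0.5, self_max=0.3):
    """Prompts the competitor dominates while Amplitude is largely absent."""
    return [
        p for p in prompts
        if p["visibility"].get(competitor, 0.0) > comp_min
        and p["visibility"].get("Amplitude", 0.0) < self_max
    ]

def bucket_for(prompt_text):
    """Rough intent bucket: comparison, alternatives, or category capture."""
    t = prompt_text.lower()
    if " vs " in t or "compare" in t:
        return "comparison"
    if "alternative" in t:
        return "alternatives"
    return "category"

def cluster_by_intent(prompts):
    """Group hijackable prompts into content buckets."""
    buckets = {}
    for p in prompts:
        buckets.setdefault(bucket_for(p["prompt"]), []).append(p)
    return buckets

# Toy data standing in for AI Visibility results.
prompts = [
    {"prompt": "mixpanel vs amplitude",
     "visibility": {"Mixpanel": 0.8, "Amplitude": 0.1}},
    {"prompt": "best mixpanel alternatives",
     "visibility": {"Mixpanel": 0.7, "Amplitude": 0.2}},
    {"prompt": "what is product analytics",
     "visibility": {"Mixpanel": 0.2, "Amplitude": 0.6}},
]

targets = find_hijackable(prompts, "Mixpanel")
print(cluster_by_intent(targets))
```

Each bucket then maps to a content type: comparison pages, alternatives pages, or category-capture content.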

**Overall score: 90**

- **Quality:** 88% (does it follow best practices?)
- **Impact:** Pending (no eval scenarios have been run)
- **Security (by Snyk):** Advisory (suggest reviewing before use)


## Quality

### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that excels across all dimensions. It provides specific concrete actions, abundant natural trigger terms, explicit when/what guidance, and occupies a clearly distinct niche. The only minor weakness is the use of second-person voice ('Use this skill whenever a user wants to...') which borders on instructional rather than third-person descriptive, though it doesn't fully violate the guideline since it's directed at Claude rather than the user.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: pulls competitor visibility data, identifies prompts where competitors win, clusters by intent, produces comparison pages/alternatives content/rebuttal assets, and pushes drafts to CMS. | 3 / 3 |
| Completeness | Clearly answers both what (pulls competitor data, identifies prompts, clusters by intent, produces content, pushes to CMS) and when (an explicit "Use this skill whenever..." clause with detailed trigger scenarios and a final "Trigger on..." sentence). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: "competitors are getting cited instead of us", "we're losing on these prompts", "outrank [competitor]", "find prompts where we should be winning", "prompt hijack", "why is [competitor] getting cited instead of us". These are realistic user phrases. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused on AI citation competition and prompt hijacking. The specific domain of competitor AI visibility analysis with CMS integration is unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **12 / 12** |

**Passed**

### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with a clear multi-step workflow that takes the user from competitor identification through content creation to CMS publishing. Its main weakness is that it is monolithic: the three content templates and six CMS-specific push instructions inflate the document significantly and would benefit from being split into referenced files. The content is mostly efficient but includes some unnecessary explanatory sections.

#### Suggestions

- Extract the three content templates (comparison page, alternatives page, category capture page) into a separate TEMPLATES.md file referenced from Step 5 to reduce the main skill's length.
- Move the CMS-specific push instructions into a shared CMS_PUSH.md reference file, since this guidance appears to overlap with the "prompt-gap-to-publish" skill already mentioned.
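One way the suggested split might look, with file names taken from the suggestions above (the layout itself is illustrative, not part of the reviewed skill):

```
competitor-prompt-hijacker/
├── SKILL.md        # core workflow; references the files below
├── TEMPLATES.md    # comparison, alternatives, and category-capture templates
└── CMS_PUSH.md     # shared CMS-specific push instructions
```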

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly long (~300 lines) but most content earns its place: the CMS table, bucket definitions, and content templates are all necessary. However, some sections are verbose (e.g., the "What makes competitor content get cited" section explains SEO/citation principles Claude likely already knows), and the CMS discovery step repeats guidance from another skill. | 2 / 3 |
| Actionability | Highly actionable throughout: specific API calls (get_ai_visibility_competitors, get_ai_visibility_prompts), concrete filtering criteria (competitor visibility >50%, Amplitude <30%), detailed content templates with exact H1 formats, section structures, and word counts, plus CMS-specific push commands with exact field mappings and status settings. | 3 / 3 |
| Workflow Clarity | The seven-step workflow (Steps 0-6) is clearly sequenced with logical dependencies. Each step has explicit outputs and decision points (e.g., "Which bucket do you want to attack first?"). Step 4's diagnosis before writing serves as a validation checkpoint, and Step 6 includes a fallback (Markdown copy) and a confirmation message. The workflow handles branching well across the three content buckets. | 3 / 3 |
| Progressive Disclosure | The content is well structured with clear headers and tables, but it is entirely monolithic: all content templates (Buckets 1-3) and all CMS instructions are inline rather than split into referenced files. The comparison page template, alternatives page template, and CMS push instructions could each be a separate file. There is a reference to "prompt-gap-to-publish" but no other file references. | 2 / 3 |
| **Total** | | **10 / 12** |

**Passed**
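The "exact field mappings and status settings" credited under Actionability might, for a generic headless CMS, look something like the sketch below. The field names, status value, and helper are invented for illustration; the reviewed skill's real mappings are CMS-specific.

```python
# Hypothetical CMS draft payload; every field name here is an assumption.
def build_draft_payload(title, slug, body_markdown):
    """Map a drafted page onto generic CMS fields, always as a draft."""
    return {
        "title": title,
        "slug": slug,
        "body": body_markdown,
        "status": "draft",  # push as draft, never publish directly
    }

payload = build_draft_payload(
    "Amplitude vs Mixpanel: An Honest Comparison",
    "amplitude-vs-mixpanel",
    "## Overview\n...",
)
print(payload["status"])
```

Keeping `status` pinned to `"draft"` mirrors the skill's Step 6 behavior of pushing drafts for human review rather than publishing automatically.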

### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**11 / 11 checks passed.** Validation for skill structure reported no warnings or errors.

**Repository:** amplitude/builder-skills (reviewed)
