
simulation-backed-publisher

Use this skill whenever a user wants to test content variants before publishing to find which one will get cited most by AI models — whether they say "which version of this content will perform better", "test this article before we publish", "simulate how AI will respond to this content", "which angle should we use", "generate content variants and pick the winner", "run a simulation before publishing", or any variation where the goal is data-driven content selection rather than gut-feel publishing. This skill takes an identified content opportunity, generates 2–3 distinct variants with different angles or structures, scores them against actual AI model responses from AI Visibility, references the Simulate Changes feature for pre-publish validation, and produces a clear recommendation on which variant to publish — then pushes the winner to CMS. Trigger on any mention of "simulate", "test variants", "which performs better", "A/B content", or "before we publish".
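The description above outlines a multi-step flow: generate 2-3 variants, score them against captured AI model responses, and pick a winner to publish. A minimal sketch of that loop is below; the function names, the toy scoring rule, and the data are all hypothetical stand-ins, not the skill's actual implementation (which scores against AI Visibility data):

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    body: str
    score: float = 0.0

def score_variant(variant: Variant, ai_responses: list[str]) -> float:
    """Toy scorer: fraction of captured AI responses that mention the
    variant's angle. Stands in for scoring against real AI Visibility data."""
    key = variant.name.lower()
    hits = sum(1 for r in ai_responses if key in r.lower())
    return hits / len(ai_responses) if ai_responses else 0.0

def pick_winner(variants: list[Variant], ai_responses: list[str]) -> Variant:
    # Score every variant, then return the highest-scoring one.
    for v in variants:
        v.score = score_variant(v, ai_responses)
    return max(variants, key=lambda v: v.score)

variants = [
    Variant("how-to", "Step-by-step guide..."),
    Variant("comparison", "X vs Y side by side..."),
    Variant("faq", "Common questions answered..."),
]
responses = ["Here is a how-to answer", "A how-to works best", "See the comparison"]
winner = pick_winner(variants, responses)
print(winner.name)  # → how-to
```

In the real skill, the winner would then pass through the Simulate Changes checkpoint before being pushed to the CMS.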

90

Quality: 88%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security by Snyk: Passed
No known issues


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that clearly communicates what the skill does and when to use it, and it includes abundant natural trigger terms. The description is comprehensive, with specific actions and explicit trigger guidance. Its only minor weaknesses are occasional second-person voice ('Use this skill whenever a user wants...') where it otherwise describes actions in third person, and some verbosity: the same information could be conveyed more concisely.

Dimension scores:

Specificity: 3 / 3
Lists multiple specific, concrete actions: generates 2-3 distinct variants, scores them against AI model responses, references the Simulate Changes feature, produces a recommendation, and pushes the winner to CMS.

Completeness: 3 / 3
Clearly answers both 'what' (generates variants, scores against AI model responses, recommends a winner, pushes to CMS) and 'when' (an explicit 'Use this skill whenever...' clause and a dedicated 'Trigger on...' sentence with specific trigger terms).

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms, including 'which version will perform better', 'test this article before we publish', 'simulate how AI will respond', 'A/B content', 'test variants', 'before we publish', and many natural variations users would actually say.

Distinctiveness / Conflict Risk: 3 / 3
Occupies a clear niche: pre-publish AI visibility variant testing and simulation. The combination of AI citation optimization, variant generation, simulation scoring, and CMS publishing makes it highly distinctive and unlikely to conflict with generic content or A/B testing skills.

Total: 12 / 12

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with a clear multi-step workflow and strong validation checkpoints. Its main weaknesses are verbosity (the motivational 'Why simulate' section and explanatory text Claude does not need) and a monolithic structure that would benefit from splitting CMS-specific details and scoring rubrics into separate reference files. The workflow design is excellent, with explicit decision points and feedback loops throughout.

Suggestions

Remove or drastically shorten the 'Why simulate before publishing' section — Claude doesn't need motivation to follow instructions, and this adds ~15 lines of pure rationale.

Extract CMS-specific push instructions (Step 7 details) into a separate reference file like CMS_PUSH_REFERENCE.md to reduce the main skill's length and improve progressive disclosure.

Trim explanatory phrases like 'This is the ground truth — it's exactly what AI models currently say' and 'Small rewrites don't help you learn anything; genuine alternatives do' which explain reasoning Claude doesn't need.

Dimension scores:

Conciseness: 2 / 3
The skill is fairly long (~300 lines) and includes unnecessary explanation, particularly the 'Why simulate before publishing' section at the end, which supplies rationale Claude does not need. The CMS table and per-CMS instructions are useful but could be more compact, and some sections, such as Step 2's extraction bullet points, explain analytical concepts Claude already understands.

Actionability: 3 / 3
The skill provides highly concrete, executable guidance throughout: specific API tool names to call (e.g., `get_ai_visibility_prompts`, `create_documents_from_markdown`), exact scoring rubrics with 1-5 scales, specific CMS field mappings, example table output for scoring, and exact phrasing for user-facing messages. Every step has clear, copy-paste-ready instructions.

Workflow Clarity: 3 / 3
The 8-step workflow (Steps 0-7) is clearly sequenced with explicit checkpoints: Step 0 discovers the CMS before starting, Step 5 is an explicit simulation checkpoint where the user decides whether to proceed or pause, and the workflow includes feedback loops (e.g., 'If the simulation comes back and changes their mind... generate the revised variant in full before pushing'). Validation is built into the scoring step and the simulate-before-publish checkpoint.

Progressive Disclosure: 2 / 3
The content is entirely monolithic: all instructions are inline in a single long file with no references to supporting files. The CMS-specific instructions, variant differentiation axes, and scoring rubric could be split into separate reference files. There are references to other skills ('prompt-gap-to-publish', 'competitor-prompt-hijacker') but no structured navigation to supporting materials.

Total: 10 / 12

Passed
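The Actionability row above credits the skill with exact 1-5 scoring rubrics and example table output. A sketch of what such rubric scoring and comparison-table rendering could look like is below; the criteria names, scores, and variant labels are hypothetical illustrations, not the skill's actual rubric:

```python
# Hypothetical per-variant rubric on a 1-5 scale; criteria names are
# illustrative only.
RUBRIC = ["relevance", "citability", "clarity", "freshness"]

scores = {
    "variant-a": {"relevance": 5, "citability": 4, "clarity": 4, "freshness": 3},
    "variant-b": {"relevance": 4, "citability": 5, "clarity": 3, "freshness": 3},
}

def total(row: dict[str, int]) -> int:
    """Sum a variant's criterion scores into a single comparable total."""
    return sum(row[c] for c in RUBRIC)

# Render a comparison table, highest total first, so the recommendation
# ("publish variant-a") falls out of the data.
header = "Variant    " + "".join(f"{c:<12}" for c in RUBRIC) + "Total"
print(header)
for name, row in sorted(scores.items(), key=lambda kv: -total(kv[1])):
    cells = "".join(f"{row[c]:<12}" for c in RUBRIC)
    print(f"{name:<11}{cells}{total(row)}")
```

A weighted sum (e.g., weighting citability higher for AI-visibility goals) would be a natural refinement, but the unweighted total keeps the sketch minimal.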

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: amplitude/builder-skills (Reviewed)
