
paid-creative-ai

When the user wants to create AI-generated ad creative, test performance creative, manage creative fatigue, or optimize paid media with AI tools. Also use when the user mentions 'ad creative,' 'performance creative,' 'creative testing,' 'creative fatigue,' 'Meta ads,' 'Google ads,' 'TikTok ads,' 'AI ads,' 'ad budget,' 'ROAS,' 'Advantage+,' or 'Performance Max.' This skill covers AI-powered paid creative from generation through performance optimization. Do NOT use for technical implementation, code review, or software architecture.

Overall score: 75

Quality: 68% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/paid-creative-ai/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger-term coverage and clear completeness. The explicit 'when to use' and 'when NOT to use' clauses make it highly functional for skill selection. The main weakness is that the capability descriptions could be more concrete: listing specific actions rather than broad categories would improve specificity.

Suggestions

Add more concrete specific actions such as 'generate ad copy variants, create headline tests, analyze creative performance metrics, recommend budget allocation' to improve specificity.
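As a sketch of what that suggestion might look like in practice, the sharpened action list could be folded into the skill's frontmatter description. The field names below assume the common `name`/`description` frontmatter convention for SKILL.md files; the exact wording is illustrative, not the skill's actual content:

```yaml
# Hypothetical sketch of a more concrete description (not the skill's
# current frontmatter). Discrete actions lead, trigger terms follow.
name: paid-creative-ai
description: >
  Generate ad copy variants, create headline tests, analyze creative
  performance metrics, and recommend budget allocation for paid media.
  Use when the user mentions 'ad creative,' 'creative testing,'
  'creative fatigue,' 'Meta ads,' 'ROAS,' 'Advantage+,' or
  'Performance Max.' Do NOT use for technical implementation,
  code review, or software architecture.
```

Leading with discrete verbs ("generate," "create," "analyze," "recommend") gives the selecting agent concrete actions to match against, while the trigger-term list is preserved.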

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (AI ad creative, paid media) and mentions some actions like 'create AI-generated ad creative,' 'test performance creative,' 'manage creative fatigue,' and 'optimize paid media,' but these are somewhat high-level and not as concrete as listing specific discrete actions like 'generate ad copy, resize images, A/B test headlines.' | 2 / 3 |
| Completeness | Clearly answers both 'what' (AI-powered paid creative from generation through performance optimization) and 'when' (explicit 'Use when' triggers with a comprehensive list of keywords). Also includes a helpful exclusion clause for what NOT to use it for. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would actually say: 'ad creative,' 'performance creative,' 'creative testing,' 'creative fatigue,' 'Meta ads,' 'Google ads,' 'TikTok ads,' 'AI ads,' 'ad budget,' 'ROAS,' 'Advantage+,' 'Performance Max.' These are highly relevant platform-specific and domain-specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche in AI ad creative and paid media optimization. The explicit exclusion of technical implementation/code review further reduces conflict risk. Platform-specific terms like 'Advantage+' and 'Performance Max' make it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive paid creative strategy guide with strong workflow clarity and well-structured testing frameworks that have explicit decision criteria. However, it is severely over-long for a SKILL.md file: the extensive benchmark tables, platform pricing data, and tool comparisons should live in reference files rather than inline. The content is strategic and advisory rather than technically executable, which is appropriate for the domain but limits actionability to decision frameworks rather than copy-paste artifacts.

Suggestions

Move benchmark tables (CPM, CPC, ROAS, CPA by industry) and tool comparison tables to `references/quick-reference.md` or separate reference files, keeping only the most critical thresholds inline.

Remove explanatory content about what platform AI tools do (e.g., 'Advantage+ takes a single product image and generates multiple variants...') since Claude already knows these tools; keep only the non-obvious setup best practices and gotchas.

Trim the cross-platform budget split and creative adaptation matrices to the top 2-3 most common scenarios, linking to a reference file for the full matrix.

Consider reducing the 'Before Starting' questions from 7 to 3-4 essential ones (platform, budget, KPI) since Claude can ask follow-ups as needed.
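To make the progressive-disclosure suggestions concrete, a trimmed section of the skill might keep only the critical threshold inline and link the full data out. This is an illustrative sketch: the heading, the threshold wording, and the link text are hypothetical, though `references/quick-reference.md` is the reference file the skill already points to:

```markdown
<!-- Hypothetical sketch of a slimmed-down SKILL.md section -->
## Benchmarks

Kill any ad set whose ROAS stays below break-even after the learning
phase. For the full CPM / CPC / ROAS / CPA benchmark tables by
industry, see [references/quick-reference.md](references/quick-reference.md).
```

The inline file carries only the decision rule the agent needs at selection time; the agent loads the reference file on demand, keeping the main context window small.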

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | This skill is extremely verbose at 400+ lines with extensive tables of benchmarks, platform comparisons, and budget allocation frameworks that Claude could derive or look up. Much of this is reference data (CPM/CPC/ROAS benchmarks, industry CPA benchmarks, tool pricing) that bloats the context window significantly. The content explains concepts like what Advantage+ does and what Performance Max is, which Claude already knows. | 1 / 3 |
| Actionability | The skill provides structured frameworks (70/20/10 rule, hook/body/CTA matrix, kill criteria tables) with specific thresholds and timelines, which is concrete guidance. However, there is no executable code or copy-paste ready commands; it is all strategic advice and decision tables. The testing workflow and production workflow are reasonably specific but remain at the instructional level rather than providing executable artifacts. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with explicit phases (Concept Testing → Element Isolation → Winner Scaling), specific timelines, budget thresholds, and clear kill/advance criteria at each stage. The AI Creative Production Workflow has day-by-day steps with validation (Step 4 quality filter) before deployment. The creative fatigue management section includes clear early warning signals with thresholds and corresponding actions. | 3 / 3 |
| Progressive Disclosure | The skill references `references/quick-reference.md` and related skills at the end, which is good. However, the massive amount of inline content (benchmark tables, platform comparisons, budget allocation matrices) should be split into reference files rather than included in the main SKILL.md. The 10-section monolithic structure with extensive tables is a wall of text that would benefit greatly from offloading reference data. | 2 / 3 |
| Total | | 8 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: tech-leads-club/agent-skills (reviewed)
