
paid-creative-ai

When the user wants to create AI-generated ad creative, test performance creative, manage creative fatigue, or optimize paid media with AI tools. Also use when the user mentions 'ad creative,' 'performance creative,' 'creative testing,' 'creative fatigue,' 'Meta ads,' 'Google ads,' 'TikTok ads,' 'AI ads,' 'ad budget,' 'ROAS,' 'Advantage+,' or 'Performance Max.' This skill covers AI-powered paid creative from generation through performance optimization. Do NOT use for technical implementation, code review, or software architecture.

75

Quality: 68% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/paid-creative-ai/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger term coverage and completeness. It clearly defines both what the skill does and when to use it, with explicit inclusion and exclusion criteria. The main area for improvement is adding more specific concrete actions beyond the high-level capabilities mentioned.

Suggestions

Add more granular concrete actions such as 'generate ad copy variants,' 'recommend creative refresh schedules,' 'analyze ad performance metrics,' or 'suggest audience-creative pairings' to improve specificity.
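
As a concrete illustration, the description could fold those actions in along these lines (a hedged sketch only, assuming the usual SKILL.md frontmatter with name and description fields; the wording is illustrative, not the skill's actual text):

    ---
    name: paid-creative-ai
    description: >
      Use when the user wants to generate ad copy variants, recommend creative
      refresh schedules, analyze ad performance metrics such as CTR, CPA, and
      ROAS by creative, or suggest audience-creative pairings, as well as for
      creating AI-generated ad creative, testing performance creative, managing
      creative fatigue, and optimizing paid media across Meta, Google, and
      TikTok ads (Advantage+, Performance Max). Do NOT use for technical
      implementation, code review, or software architecture.
    ---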

Dimension / Reasoning / Score

Specificity

The description names the domain (AI ad creative, paid media) and mentions some actions like 'create AI-generated ad creative,' 'test performance creative,' 'manage creative fatigue,' and 'optimize paid media,' but these are somewhat high-level and not as concrete as listing specific discrete actions (e.g., 'generate ad copy variants,' 'analyze CTR by creative,' 'recommend budget allocation').

2 / 3

Completeness

Clearly answers both 'what' (AI-powered paid creative from generation through performance optimization) and 'when' (explicit 'Use when' triggers with a comprehensive list of keywords). It also includes a helpful exclusion clause ('Do NOT use for technical implementation, code review, or software architecture').

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms users would actually say: 'ad creative,' 'performance creative,' 'creative testing,' 'creative fatigue,' 'Meta ads,' 'Google ads,' 'TikTok ads,' 'AI ads,' 'ad budget,' 'ROAS,' 'Advantage+,' 'Performance Max.' These are highly relevant, platform-specific terms that real users would naturally use.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with platform-specific terms (Meta ads, Google ads, TikTok ads, Advantage+, Performance Max) and domain-specific jargon (ROAS, creative fatigue). The explicit exclusion of technical/code tasks further reduces conflict risk with engineering-oriented skills.

3 / 3

Total: 11 / 12 (Passed)

Implementation: 47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill demonstrates strong workflow clarity with well-structured testing frameworks, clear kill/scale criteria, and phased processes with validation checkpoints. However, it is severely over-long and verbose, containing extensive benchmark tables and platform reference data that should be in separate files rather than inline. The content is strategically actionable but lacks executable commands or code, functioning more as a comprehensive reference guide than a lean, efficient skill.

Suggestions

Move the benchmark tables (Section 7 and Section 3's budget tables) and platform comparison matrices to separate reference files (e.g., references/benchmarks.md, references/platform-specs.md) and link to them from the main skill; a sketch of this layout follows the list below.

Remove explanatory descriptions of what platform AI tools do (e.g., 'Advantage+ takes a single product image and generates multiple variants...') since Claude already knows these tools—keep only the non-obvious best practices and gotchas.

Trim the 'Before Starting' questions from 7 to 3-4 essential ones, as Claude can naturally ask follow-ups based on context.

Consolidate the creative adaptation matrix and repurposing workflow into a compact checklist rather than a full table with every platform permutation.
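
For the file-splitting suggestion above, the layout could look roughly like this (a sketch only; references/quick-reference.md already exists per the Progressive Disclosure note below, while benchmarks.md and platform-specs.md are the suggested example names):

    paid-creative-ai/
      SKILL.md                   # lean workflows, kill/scale criteria, links out
      references/
        quick-reference.md       # already linked at the bottom of the skill
        benchmarks.md            # CPM/CPC/ROAS and industry CPA benchmark tables
        platform-specs.md        # platform comparison matrices and ad specs

In SKILL.md, the inline tables would then be replaced by one-line pointers such as "For benchmark tables, see references/benchmarks.md."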

Dimension / Reasoning / Score

Conciseness

This skill is extremely verbose at 400+ lines with extensive tables of benchmarks, platform comparisons, and budget breakdowns that Claude could derive or look up. Much of this is reference data (CPM/CPC/ROAS benchmarks, industry CPA tables) that bloats the context window significantly. The content explains concepts like what Advantage+ does and what Performance Max is, which Claude already knows.

1 / 3

Actionability

The skill provides concrete frameworks (70/20/10 rule, kill criteria thresholds, testing phases with timelines) and specific decision tables, which is useful. However, there is no executable code or commands—everything is strategic guidance and benchmarks. The modular testing framework and creative production workflow are reasonably specific but still high-level process descriptions rather than copy-paste executable steps.

2 / 3

Workflow Clarity

The multi-step workflows are clearly sequenced with explicit validation checkpoints. The creative testing framework has clear phases (Concept Testing → Element Isolation → Winner Scaling) with specific metrics, budgets, and kill/advance criteria at each stage. The creative production workflow has a day-by-day timeline with quality filter steps. The kill criteria table provides explicit thresholds with minimum data requirements before decisions.

3 / 3

Progressive Disclosure

There is a reference to `references/quick-reference.md` at the bottom and related skills are listed, which is good. However, the massive amount of inline content (benchmark tables, platform comparisons, budget allocation matrices) should be split into separate reference files rather than included in the main SKILL.md. The 10 sections with extensive tables create a monolithic document that would benefit greatly from offloading reference data.

2 / 3

Total: 8 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: tech-leads-club/agent-skills (Reviewed)

