Cross-platform creative quality audit covering ad copy, video, image, and format diversity across all platforms. Detects creative fatigue, evaluates platform-native compliance, and provides production priorities. Use when user says creative audit, ad creative, creative fatigue, ad copy, ad design, or creative review.
Score: 79
Quality: 75% — Does it follow best practices?
Impact: Pending — no eval scenarios have been run
Advisory — suggest reviewing before use

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/ads-creative/SKILL.md`

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly defines its scope (cross-platform ad creative auditing), lists specific capabilities (fatigue detection, compliance evaluation, production priorities), and includes an explicit 'Use when' clause with natural trigger terms. It is concise, uses third-person voice throughout, and would be easily distinguishable from other skills in a large skill library.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'cross-platform creative quality audit', 'ad copy, video, image, and format diversity', 'detects creative fatigue', 'evaluates platform-native compliance', 'provides production priorities'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (cross-platform creative quality audit covering ad copy, video, image, format diversity, fatigue detection, compliance evaluation, production priorities) and 'when' (explicit 'Use when user says...' clause with specific trigger terms). | 3 / 3 |
| Trigger Term Quality | Includes a strong set of natural trigger terms users would say: 'creative audit', 'ad creative', 'creative fatigue', 'ad copy', 'ad design', 'creative review'. These cover common variations of how users would phrase requests in this domain. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche focused on advertising creative auditing across platforms. The combination of 'creative fatigue', 'platform-native compliance', and 'production priorities' makes it highly distinctive and unlikely to conflict with general marketing or content skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive creative audit skill with strong platform-specific thresholds and well-structured tables, but it's overly long for a SKILL.md overview. It mixes high-value platform-specific intelligence (Andromeda similarity scores, TikTok safe zones, fatigue thresholds) with generic marketing knowledge that Claude already knows. The workflow has validation steps but lacks concrete error recovery paths and executable examples.
Suggestions
- Move universal best practices (ad copy principles, video production standards) out of the skill entirely or into a reference file — Claude already knows these fundamentals.
- Add a concrete worked example showing one platform's audit from data input to scored output, making the process more actionable and copy-paste ready.
- Split per-platform assessment details into separate reference files (e.g., ads/references/meta-creative.md) and keep SKILL.md as a concise overview with navigation links.
- Strengthen workflow step 8 with specific validation criteria (e.g., 'compare current CTR to 30-day rolling average; flag only if decline is statistically significant across 3+ data points').
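The last suggestion can be made concrete with a small sketch. This is not the skill's actual logic — the function name, input shape (a list of daily CTR values), and the consecutive-day rule are assumptions; only the 30-day rolling window, the decline threshold, and the 3-data-point requirement come from the suggestion itself.

```python
from statistics import mean

def flag_ctr_fatigue(daily_ctr, window=30, decline_threshold=0.20, min_points=3):
    """Flag fatigue only if CTR falls more than `decline_threshold`
    below the rolling average on `min_points` consecutive days.
    Illustrative sketch of the suggested validation criterion."""
    if len(daily_ctr) < window + min_points:
        return False  # not enough history to judge
    declines = 0
    for i in range(window, len(daily_ctr)):
        rolling_avg = mean(daily_ctr[i - window:i])
        if daily_ctr[i] < rolling_avg * (1 - decline_threshold):
            declines += 1
            if declines >= min_points:
                return True
        else:
            declines = 0  # a recovery day resets the streak
    return False
```

Resetting the counter on any recovery day keeps a one-off dip from triggering a flag, which is the point of requiring multiple data points.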
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly dense with useful platform-specific thresholds and tables, but includes some content Claude would already know (ad copy principles like 'lead with benefit, not feature', video codec standards like H.264/AAC). The universal best practices section is largely generic marketing knowledge that adds token cost without unique value. | 2 / 3 |
| Actionability | Provides concrete thresholds, check IDs, and specific metrics (e.g., CTR declining >20% over 14 days = FAIL, frequency >5.0), which is strong. However, it lacks executable code or commands, and the references to external files (platform-specs.md, benchmarks.md, scoring-system.md) contain the actual scoring algorithm rather than providing it inline. The output section shows a template but not a fully worked example. | 2 / 3 |
| Workflow Clarity | The process section has a numbered sequence with two validation checkpoints (confirm data exists, verify fatigue signals reference actual trends). However, the validation steps are somewhat vague — step 8 says 'verify fatigue signals reference actual performance trends, not assumptions' without specifying how to verify this. Missing feedback loops for error recovery (e.g., what to do if validation at step 5 fails). | 2 / 3 |
| Progressive Disclosure | References external files (platform-specs.md, benchmarks.md, scoring-system.md), which is good progressive disclosure, but the main skill file itself is quite long (~200+ lines) with substantial inline detail that could be split into per-platform reference files. The format diversity matrix, fatigue detection tables, and universal best practices could be separate references to keep the main skill leaner. | 2 / 3 |
| Total | | 8 / 12 Passed |
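The two pass/fail thresholds the Actionability row credits (CTR declining more than 20% over 14 days, frequency above 5.0) could be expressed as a runnable check roughly like this. The function name, argument names, and the PASS/FAIL dictionary format are assumptions for illustration; only the two thresholds come from the review.

```python
def audit_fatigue(ctr_14_days_ago, ctr_now, frequency):
    """Apply the two fatigue thresholds cited in the review:
    >20% CTR decline over 14 days, or ad frequency above 5.0."""
    if ctr_14_days_ago > 0:
        decline = (ctr_14_days_ago - ctr_now) / ctr_14_days_ago
    else:
        decline = 0.0  # no baseline CTR to compare against
    return {
        "ctr_decline": "FAIL" if decline > 0.20 else "PASS",
        "frequency": "FAIL" if frequency > 5.0 else "PASS",
    }
```

Inlining checks like this in SKILL.md (or a referenced script) is what the Actionability row means by "executable code or commands".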
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
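The one warning concerns unknown frontmatter keys. A minimal sketch of the fix, assuming a hypothetical `platforms` key is the offender (the report does not name the actual key), is to nest the extra key under `metadata` as the warning suggests:

```yaml
# Before: unknown top-level key triggers frontmatter_unknown_keys
name: ads-creative
description: Cross-platform creative quality audit ...
platforms: [meta, tiktok, google]   # hypothetical unknown key

# After: unrecognized keys moved under metadata
name: ads-creative
description: Cross-platform creative quality audit ...
metadata:
  platforms: [meta, tiktok, google]
```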