
ads

Multi-platform paid advertising audit and optimization skill. Analyzes Google, Meta, YouTube, LinkedIn, TikTok, Microsoft, and Apple Ads. 250+ checks with scoring, parallel agents, industry templates, and AI creative generation.

63

Quality: 55% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory (suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./ads/SKILL.md

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear niche (multi-platform paid ad auditing) and lists specific platforms, giving it good distinctiveness. However, it reads more like a feature list or marketing copy ('250+ checks', 'parallel agents', 'AI creative generation') than a functional skill description. It critically lacks a 'Use when...' clause and misses common user-facing trigger terms like 'PPC', 'ad spend', 'ROAS', or 'campaign performance'.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to audit, review, or optimize paid ad campaigns, PPC performance, ad spend, or ROAS across any major ad platform.'

Replace buzzwordy feature claims ('250+ checks', 'parallel agents', 'industry templates') with concrete actions like 'identifies wasted ad spend, recommends bid adjustments, audits targeting settings, generates ad copy variations'.

Include common user-facing synonyms and trigger terms: 'PPC', 'ad spend', 'ROAS', 'campaign performance', 'Facebook Ads', 'ad account review', 'SEM'.
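Pulling the three suggestions together, a revised frontmatter description might look like the sketch below. The wording is illustrative only, assembled from the trigger terms and concrete actions listed above; it is not taken from the skill itself.

```yaml
---
name: ads
description: >
  Audits and optimizes paid advertising campaigns across Google, Meta,
  YouTube, LinkedIn, TikTok, Microsoft, and Apple Ads. Identifies wasted
  ad spend, recommends bid adjustments, audits targeting settings, and
  generates ad copy variations. Use when the user asks to audit, review,
  or optimize paid ad campaigns, PPC performance, ad spend, ROAS, SEM,
  Facebook Ads, or campaign performance on any major ad platform.
---
```

Note how the description leads with concrete actions a user would request and ends with an explicit "Use when..." clause containing the common synonyms, which addresses all three suggestions at once.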

Dimension | Reasoning | Score

Specificity

Names the domain (paid advertising audit/optimization) and lists platforms, but the actual actions are vague — 'analyzes', '250+ checks', 'scoring', 'AI creative generation' are more feature claims than concrete actions a user would request. It doesn't specify what concrete outputs or tasks it performs (e.g., 'identifies wasted spend', 'recommends bid adjustments', 'generates ad copy').

2 / 3

Completeness

Describes what it does (analyzes ads across platforms) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' portion is also somewhat vague with buzzwordy feature lists rather than clear capabilities, pushing this to 1.

1 / 3

Trigger Term Quality

Includes good platform names (Google, Meta, YouTube, LinkedIn, TikTok, Microsoft, Apple Ads) which are natural trigger terms, and 'paid advertising audit' is reasonable. However, it misses common user phrasings like 'PPC', 'ad spend', 'ROAS', 'ad performance', 'campaign review', 'ad account', 'SEM', or 'Facebook Ads' (vs Meta).

2 / 3

Distinctiveness Conflict Risk

The combination of multi-platform paid advertising audit with specific platform names (Google, Meta, YouTube, LinkedIn, TikTok, Microsoft, Apple Ads) creates a very clear niche that is unlikely to conflict with other skills. This is highly distinctive.

3 / 3

Total: 8 / 12 (Passed)

Implementation

70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured orchestration skill that excels at progressive disclosure and workflow clarity, with clear command references, quality gates, and a logical delegation model. Its main weaknesses are moderate redundancy (sub-skills list duplicates the quick reference table, verbose footer rules) and a lack of executable code examples for the orchestration and subagent invocation patterns. The skill would benefit from trimming duplicate content and adding concrete Task tool invocation examples.

Suggestions

Remove the Sub-Skills numbered list section since it duplicates the Quick Reference table, or consolidate them into a single reference

Add a concrete executable example of subagent spawning via the Task tool (showing exact syntax with context: fork) in the Orchestration Logic section

Condense the Community Footer section—the 'when to show' and 'when to skip' lists could be simplified to a single rule like 'Show after all major deliverable commands; skip for utilities, intermediate steps, and non-analysis outputs'

Dimension | Reasoning | Score

Conciseness

The skill is fairly well-organized but includes redundancy—the sub-skills list at the bottom largely duplicates the Quick Reference table, and the community footer section with its detailed 'when to show' and 'when to skip' lists is verbose. The industry detection section and quality gates are efficient, but overall there's room to tighten.

2 / 3

Actionability

The skill provides concrete command references, specific quality gate rules (e.g., '3x Kill Rule', budget sufficiency formulas), and clear file paths. However, it lacks executable code examples—the orchestration logic describes what to do conceptually ('spawn subagents via Task tool') without showing exact invocation syntax, and the scoring formula is simple pseudocode rather than a complete implementation.

2 / 3

Workflow Clarity

The orchestration logic has a clear numbered sequence with an explicit validation step (step 5: verify subagent JSON before aggregating). The creative workflow is a well-defined sequential pipeline with clear inputs/outputs at each step. Quality gates serve as validation checkpoints, and the PDF report quality gate explicitly requires running `--check` before `--output`.

3 / 3

Progressive Disclosure

Excellent progressive disclosure—the SKILL.md serves as a clear overview/orchestrator, with 22+ reference files listed with descriptions and explicit path resolution instructions. References are one level deep, clearly signaled, and the instruction to load on-demand rather than at startup is a smart design choice.

3 / 3

Total: 10 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
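The single warning above (frontmatter_unknown_keys) is typically resolved by nesting non-standard keys under `metadata`, as the warning itself suggests. A sketch of the fix is shown below; the flagged keys are not listed in the report, so the key names here are hypothetical.

```yaml
---
name: ads
description: Multi-platform paid advertising audit and optimization skill.
metadata:
  # hypothetical examples of custom keys moved out of the top level
  version: 1.0.0
  author: AgriciDaniel
---
```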

Repository: AgriciDaniel/claude-ads (Reviewed)
