Understand audience demographics, preferences, behavior patterns, and engagement quality across Facebook, Instagram, YouTube, and TikTok.
48%
Does it follow best practices?

Impact: Pending (no eval scenarios have been run)
Advisory: suggest reviewing before use
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/apify-audience-analysis/SKILL.md

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (social media audience analysis) and names specific platforms, which is helpful for differentiation. However, it lacks concrete actions (listing categories rather than specific operations), misses a 'Use when...' clause entirely, and could benefit from more natural trigger terms that users would actually say when requesting this type of analysis.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about social media audience insights, follower demographics, engagement rates, or platform analytics for Facebook, Instagram, YouTube, or TikTok.'
Replace vague category terms with specific concrete actions, e.g., 'Analyze follower demographics, segment audiences by behavior, measure engagement rates, compare cross-platform performance, and identify top-performing content types.'
Include additional natural trigger terms users might say, such as 'social media analytics', 'follower insights', 'audience report', 'social metrics', or 'platform performance'.
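Taken together, the suggestions above could produce a description like the following hypothetical SKILL.md frontmatter sketch (the field names and wording here are assumptions, not the skill's actual metadata):

```yaml
---
name: apify-audience-analysis
description: >
  Analyze follower demographics, segment audiences by behavior, measure
  engagement rates, and compare cross-platform performance on Facebook,
  Instagram, YouTube, and TikTok. Use when the user asks about social media
  audience insights, follower demographics, engagement rates, audience
  reports, or platform analytics.
---
```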
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (social media audience analysis) and some actions (understand demographics, preferences, behavior patterns, engagement quality) across specific platforms, but these are more like categories than concrete actions. It doesn't list specific operations like 'generate demographic reports', 'segment audiences', or 'track engagement metrics'. | 2 / 3 |
| Completeness | Describes what the skill does (understand audience demographics, preferences, etc.) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also somewhat vague, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant platform names (Facebook, Instagram, YouTube, TikTok) and terms like 'audience demographics', 'engagement', and 'behavior patterns' that users might naturally use. However, it's missing common variations like 'social media analytics', 'follower insights', 'audience analysis', or 'social metrics'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of specific platforms (Facebook, Instagram, YouTube, TikTok) and focus on audience understanding provides some distinctiveness, but it could overlap with general social media analytics skills, marketing skills, or platform-specific skills. The scope is broad enough to potentially conflict with other social media-related skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a functional, actionable skill with concrete commands and a well-organized workflow structure. Its main weaknesses are the lack of validation checkpoints in the workflow (e.g., verifying Actor run success before summarizing) and some verbosity from the large inline Actor table and generic boilerplate sections. The error handling exists but should be woven into the workflow as explicit feedback loops.
Suggestions
Add an explicit validation step after Step 4 (e.g., check exit code or output file existence before proceeding to summarize) to create a feedback loop for failed runs.
Move the large Actor lookup table to a separate reference file (e.g., ACTORS.md) and link to it from the main skill to improve progressive disclosure and reduce inline bulk.
Remove the generic 'Limitations' boilerplate at the end—these are general agent instructions, not skill-specific constraints.
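The first suggestion above can be sketched as a shell checkpoint. This is a minimal, hypothetical example: the output path and the `run_actor` stub are placeholders, not the skill's actual commands.

```shell
#!/bin/sh
# Hypothetical validation checkpoint after Step 4 (the Actor run),
# before proceeding to summarize results.
OUTPUT_FILE="./output/dataset.json"   # assumed output location

run_actor() {
  # Stands in for the real invocation, e.g. `node run_actor.js ...`,
  # which is expected to write results to OUTPUT_FILE.
  mkdir -p ./output
  printf '[{"followers": 1200}]\n' > "$OUTPUT_FILE"
}

run_actor
status=$?

# Feedback loop: only summarize when the run exited cleanly AND produced output.
if [ "$status" -ne 0 ]; then
  echo "Actor run failed (exit $status); skipping summary." >&2
  exit 1
fi
if [ ! -s "$OUTPUT_FILE" ]; then
  echo "No output at $OUTPUT_FILE; skipping summary." >&2
  exit 1
fi
echo "validation passed"
```

Checking both the exit code and the output file catches the two common failure modes: a crashed run and a run that "succeeded" but returned an empty dataset.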
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The large Actor lookup table is useful reference but could be more compact. Some sections like 'Limitations' contain generic boilerplate that doesn't add value. The 'When to Use' section partially restates the obvious. Overall mostly efficient but has room for tightening. | 2 / 3 |
| Actionability | Provides fully executable bash commands for each step, concrete Actor IDs, specific CLI flags, and clear input/output patterns. The commands are copy-paste ready with clear placeholder substitution points. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced with a progress checklist, but there are no validation checkpoints between steps. After running the Actor (Step 4), there's no verification that the run succeeded before proceeding to summarize. The error handling section exists but isn't integrated into the workflow as feedback loops. | 2 / 3 |
| Progressive Disclosure | The skill references external scripts (`run_actor.js`) appropriately, but the large Actor table could be split into a separate reference file. The content is reasonably structured with clear sections, but the inline table makes the document quite long without clear navigation to separate reference materials. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
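The one failing check concerns unknown frontmatter keys. A minimal sketch of the fix, assuming the offending key is something like `author` (the report does not name the actual key), is to nest custom keys under `metadata` rather than leaving them at the top level:

```yaml
---
name: apify-audience-analysis
# before: a top-level `author:` key here would trigger the warning
metadata:
  author: jane   # assumed key name; move custom keys under metadata
---
```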