Scrape reviews, ratings, and brand mentions from multiple platforms using Apify Actors.
Score: 72
Does it follow best practices? 66%
Impact: Pending — no eval scenarios have been run
Advisory: suggest reviewing before use
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/apify-brand-reputation-monitoring/SKILL.md

Quality
Discovery — 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is concise and specific about what the skill does, naming concrete data types and the tool used. Its main weakness is the absence of an explicit 'Use when...' clause, which limits Claude's ability to know exactly when to select this skill. Adding trigger guidance and a few more natural user terms would strengthen it significantly.
Suggestions
Add a 'Use when...' clause, e.g., 'Use when the user wants to scrape or collect reviews, ratings, or brand mentions from websites, or mentions Apify.'
Include common platform names or variations users might mention, such as 'Yelp', 'Amazon reviews', 'Google Reviews', 'Trustpilot', 'social media monitoring', or 'web scraping'.
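The two suggestions above could be folded into a single revised frontmatter description, sketched below. This is illustrative only: the exact wording and platform list are an assumption, not text taken from the skill, and the `description` field name assumes standard SKILL.md frontmatter.

```yaml
description: >
  Scrape reviews, ratings, and brand mentions from multiple platforms
  (e.g., Yelp, Amazon reviews, Google Reviews, Trustpilot) using Apify
  Actors. Use when the user wants to scrape or collect reviews, ratings,
  or brand mentions from websites, monitor brand sentiment or customer
  feedback, or mentions Apify or web scraping.
```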
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'scrape reviews, ratings, and brand mentions from multiple platforms using Apify Actors.' This names the domain (web scraping), specific data types (reviews, ratings, brand mentions), and the tool (Apify Actors). | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (scrape reviews, ratings, brand mentions using Apify Actors), but lacks an explicit 'Use when...' clause or equivalent trigger guidance for when Claude should select this skill. | 2 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'reviews', 'ratings', 'brand mentions', 'scrape', and 'Apify', but misses common variations users might say such as 'sentiment', 'feedback', 'crawl', 'monitor', or specific platform names (e.g., 'Yelp', 'Amazon', 'Google Reviews'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of scraping reviews/ratings/brand mentions specifically via Apify Actors creates a clear niche that is unlikely to conflict with other skills. The tool specificity (Apify) and data type specificity (reviews, ratings, brand mentions) make it highly distinguishable. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides strong actionability with executable commands and a comprehensive Actor lookup table, making it practical for brand reputation monitoring tasks. Its main weaknesses are the lack of validation checkpoints integrated into the workflow (error handling is separate and not looped back) and the overly long inline Actor table that would benefit from being in a reference file. The generic limitations section adds no value and wastes tokens.
Suggestions
Integrate error handling into the workflow as explicit validation checkpoints (e.g., after Step 4, check for run success/failure before proceeding to Step 5, with a retry loop).
Move the 18-row Actor lookup table to a separate reference file (e.g., ACTORS.md) and keep only the most common 4-5 entries inline with a link to the full list.
Remove the generic 'Limitations' section—these are boilerplate instructions Claude already follows and waste tokens.
Consolidate the three nearly identical command blocks in Step 4 into a single template with parameter notes for format/output flags.
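The first suggestion above (a validation checkpoint with a retry loop between Step 4 and Step 5) could be sketched in bash roughly as follows. Note that `run_actor` and `summarize_results` are hypothetical stand-ins for the skill's actual commands, not names from the skill itself:

```shell
# Retry wrapper: only let the workflow proceed if the wrapped command succeeds.
run_with_retry() {
  local attempts=0 max_attempts=3
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max_attempts" ]; then
      echo "Actor run failed after $max_attempts attempts; stopping before summarization." >&2
      return 1
    fi
    echo "Actor run failed (attempt $attempts); retrying..." >&2
    sleep 1
  done
}

# Step 4 -> checkpoint -> Step 5: summarize only after a verified successful run.
# run_with_retry run_actor && summarize_results
```

The `&&` gate is the checkpoint: Step 5 never executes on a failed run, which addresses the "no verification before summarizing" gap noted in Workflow Clarity.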
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The Actor table is comprehensive but quite long—many entries could be consolidated or moved to a reference file. The limitations section contains generic boilerplate that doesn't add value. However, the workflow steps themselves are reasonably lean. | 2 / 3 |
| Actionability | Provides fully executable bash commands for each step, concrete Actor IDs in a lookup table, specific CLI flags and options, and clear command patterns for all three output formats. The mcpc command for fetching schemas is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with a progress checklist, but there are no validation checkpoints between steps. After running the Actor (Step 4), there's no verification that the run succeeded before summarizing (Step 5). Error handling is listed separately but not integrated into the workflow as feedback loops. | 2 / 3 |
| Progressive Disclosure | The large Actor lookup table (18 rows) and three nearly identical command blocks inflate the main file. The Actor table and detailed command variants would be better placed in reference files. However, the overall structure with clear sections and the checklist is well-organized. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |