Skill description under review: "AI-driven data extraction from 55+ Actors across all major platforms. This skill automatically selects the best Actor for your task."
Overall quality score: 21%

Impact ("Does it follow best practices?"): Pending. No eval scenarios have been run.
Advisory: suggest reviewing before use.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/apify-ultimate-scraper/SKILL.md`

Quality
Discovery
Score: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and relies on insider jargon ('Actors') without explaining what it means. It fails to specify concrete actions, natural trigger terms, or explicit usage conditions. The description would be nearly impossible for Claude to reliably distinguish from other data-related skills in a large skill library.
Suggestions:

- Add a 'Use when...' clause with explicit trigger conditions, e.g., 'Use when the user asks to scrape data from websites, extract product listings, or collect social media posts.'
- Replace vague terms like 'all major platforms' and '55+ Actors' with specific platform names and actions, e.g., 'Scrapes product data from Amazon, extracts tweets from Twitter/X, collects reviews from Google Maps using Apify Actors.'
- Include natural keywords users would actually say, such as 'web scraping', 'scrape', 'crawl', 'extract data from website', and specific platform names to improve trigger term quality.
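Taken together, these suggestions point toward a frontmatter description along these lines. The platform names and trigger phrases below are illustrative, not drawn from the skill itself:

```yaml
---
name: apify-ultimate-scraper
description: >
  Scrapes structured data from websites and social platforms using Apify
  Actors, e.g. product listings from Amazon, posts and profiles from
  Instagram or TikTok, and reviews from Google Maps. Use when the user
  asks to scrape, crawl, or extract data from a website or platform.
---
```

Note how the rewrite leads with concrete actions, names the platforms, and closes with an explicit 'Use when' clause containing the words a user would actually type.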
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description says 'data extraction from 55+ Actors across all major platforms' but doesn't list any concrete actions beyond the vague 'data extraction.' It doesn't specify what kind of data, what platforms, or what specific operations are performed. 'Automatically selects the best Actor' is also vague. | 1 / 3 |
| Completeness | The 'what' is vaguely stated as 'data extraction' without specifics, and there is no explicit 'when' clause or trigger guidance. There is no 'Use when...' statement or equivalent, and the description doesn't clarify when Claude should select this skill over others. | 1 / 3 |
| Trigger Term Quality | The description uses domain-specific jargon ('Actors', '55+ Actors') that users would not naturally say. It lacks natural trigger terms such as common user phrases ('scrape', 'web scraping'), specific platform names ('Amazon', 'Twitter'), or file types. 'All major platforms' is too generic to serve as a trigger. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Data extraction' and 'all major platforms' are extremely generic phrases that could overlap with many other skills involving data processing, web scraping, API calls, or platform integrations. Nothing in the description carves out a clear, distinct niche. | 1 / 3 |
| Total | | 4 / 12 |
Implementation
Score: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides excellent actionability with concrete, executable commands and a clear workflow structure, but is severely undermined by poor token efficiency. The massive inline Actor catalog (55+ entries across multiple tables) should be extracted to a reference file, as it dominates the skill and buries the actual workflow. Adding validation checkpoints (e.g., verifying schema fetch success, validating input JSON) would strengthen the workflow.
Suggestions:

- Move all Actor lookup tables (Instagram, Facebook, TikTok, YouTube, Google Maps, Other) and the use-case mapping tables to a separate ACTORS_REFERENCE.md file, keeping only a brief summary and link in the main skill.
- Add a validation step after fetching the actor schema (Step 2) to verify the response contains expected fields before proceeding to build the input JSON.
- Add a feedback loop in Step 4: if the run fails, check the error output, adjust input parameters, and retry before escalating to the user.
- Remove the 'When to Use' section's third bullet and the 'Prerequisites' parenthetical note; these add tokens without actionable value.
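The validation and retry suggestions could be sketched as a small guard around the run step. This is a minimal illustration, not the skill's actual code: `run_fn` is a hypothetical stand-in for whatever invokes the Actor, and the required-field set would come from the fetched schema.

```python
import json


def validate_input(raw: str, required_fields: set) -> dict:
    """Parse the input JSON and confirm the schema's required fields are present."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = required_fields - data.keys()
    if missing:
        raise ValueError(f"input is missing required fields: {sorted(missing)}")
    return data


def run_with_retry(run_fn, payload: dict, max_attempts: int = 3):
    """Call run_fn, retrying on failure before escalating to the user."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return run_fn(payload)
        except RuntimeError as err:
            last_error = err  # inspect the error and adjust payload here if needed
    raise RuntimeError(f"run failed after {max_attempts} attempts: {last_error}")
```

Checking the input before the run catches malformed JSON cheaply, and bounding the retries keeps a persistently failing Actor from looping forever before the agent reports back.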
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose: massive Actor lookup tables (55+ actors across 6 categories) consume an enormous token budget and would be far better placed in a separate file. The use-case mapping tables and multi-actor workflow tables add further bulk, and the core workflow is buried under hundreds of lines of catalog data. | 1 / 3 |
| Actionability | The skill provides fully executable bash commands for every step: searching actors, fetching schemas, and running scripts with concrete flags (--actor, --input, --output, --format). The commands are copy-paste ready with clear placeholder substitution patterns. | 3 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with a progress checklist, but it lacks validation checkpoints: there is no step to verify the actor schema was fetched correctly, no check that the JSON_INPUT is well-formed before running, and no feedback loop for retrying failed runs beyond the basic error-handling section at the end. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with all 55+ Actor tables inline rather than in a separate reference file. The Actor catalog, use-case mapping, and multi-actor workflows should be split into referenced files, leaving only the workflow and a brief overview in the main SKILL.md. | 1 / 3 |
| Total | | 7 / 12 |
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing them or moving them under `metadata` | Warning |
| Total | | 10 / 11 Passed |
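The single warning concerns unknown frontmatter keys. Following the check's own advice ("removing or moving to metadata"), a fix might look like the sketch below; `author` and `version` are hypothetical stand-ins for whatever keys actually triggered the warning:

```yaml
# Before: unrecognized top-level keys trigger frontmatter_unknown_keys
name: apify-ultimate-scraper
description: ...
author: jane-doe     # unknown key
version: 1.2.0       # unknown key

# After: non-spec keys moved under metadata
name: apify-ultimate-scraper
description: ...
metadata:
  author: jane-doe
  version: 1.2.0
```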