Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to understanding selectors, waiting strategies, and anti-detection patterns. This skill covers Playwright (recommended) and Puppeteer, with patterns for testing, scraping, and agentic browser control. Key insight: Playwright won the framework war. Unless you need Puppeteer's stealth ecosystem or are Chrome-only, Playwright is the better choice in 202
Quality: 27%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

npx tessl skill review --optimize ./docs/v19.7/configuration/agent/skills_external/antigravity-awesome-skills-main/skills/browser-automation/SKILL.md

Quality
Discovery: 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description reads like introductory documentation rather than a skill selection guide. While it establishes the domain (browser automation with Playwright/Puppeteer) and provides context about framework choice, it lacks explicit trigger conditions and concrete action verbs. The description is also truncated (ends mid-word '202'), suggesting incomplete content.
Suggestions
- Add an explicit 'Use when...' clause with trigger terms like 'automate browser', 'web scraping', 'Playwright', 'Puppeteer', 'headless', 'click buttons', 'fill web forms'
- Replace conceptual language ('understanding selectors, waiting strategies') with concrete actions ('navigate pages, click elements, extract data, handle authentication, capture screenshots')
- Remove the editorial commentary ('Playwright won the framework war', 'Key insight') and focus on actionable capability descriptions
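Combining these suggestions, a rewritten description might look like the following frontmatter sketch. The wording and field names here are illustrative only, not the skill's actual frontmatter:

```yaml
---
name: browser-automation
description: >
  Automate browsers with Playwright or Puppeteer: navigate pages, click
  elements, fill web forms, extract data, handle authentication, and capture
  screenshots. Use when the user asks to automate a browser, scrape a
  website, run headless browser tests, or control a page from an AI agent.
---
```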
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (browser automation) and mentions some actions (web testing, scraping, AI agent interactions) but lacks concrete specific actions like 'click buttons', 'fill forms', 'capture screenshots'. The description focuses more on concepts (selectors, waiting strategies, anti-detection) than actionable capabilities. | 2 / 3 |
| Completeness | Describes what the skill covers (browser automation frameworks and patterns) but completely lacks a 'Use when...' clause or any explicit trigger guidance. The description reads more like documentation than selection criteria. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Playwright', 'Puppeteer', 'browser automation', 'web testing', 'scraping', but misses common user terms like 'headless browser', 'web crawler', 'automate website', 'click', 'navigate', or file extensions. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of Playwright and Puppeteer specifically helps distinguish it, but 'web testing' and 'scraping' could overlap with other testing or data extraction skills. The framework-specific focus provides some distinctiveness. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation: 22%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content appears to be severely truncated or incomplete. It establishes a persona and lists pattern names but provides no actual executable code, concrete examples, or complete guidance. The Sharp Edges table has placeholder 'Issue' text instead of actual issues, and solutions are just code comments without implementation. The skill fails to deliver on its promise of teaching browser automation.
Suggestions
- Add complete, executable Playwright code examples for each pattern (e.g., show actual test isolation setup, user-facing locator usage, auto-wait implementation)
- Fix the Sharp Edges table with actual issue descriptions and complete solution code, not just comment fragments
- Remove the persona introduction paragraph - it adds no actionable value and wastes tokens
- Add a Quick Start section with a minimal working example that demonstrates the core recommended approach
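The auto-wait idea these suggestions refer to can be sketched without any browser dependency. The following is a framework-agnostic polling helper, not Playwright's actual API; `wait_for`, `finish_loading`, and the `state` dict are illustrative names:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the principle behind auto-waiting: instead of a fixed sleep,
    keep re-checking the condition and fail fast with a clear error.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Anti-pattern: time.sleep(3) and hope the page is ready.
# Pattern: wait exactly as long as needed, no longer.
state = {"loaded": False}

def finish_loading():
    state["loaded"] = True

finish_loading()
print(wait_for(lambda: state["loaded"]))  # → True
```

In real Playwright code the same principle is what makes fixed `waitForTimeout` calls unnecessary: actions retry until their target is actionable or a timeout fires.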
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content has some unnecessary persona framing ('You are a browser automation expert who has debugged thousands...') that doesn't add actionable value. The capabilities list is verbose. However, the patterns/anti-patterns sections are reasonably concise. | 2 / 3 |
| Actionability | The skill is severely lacking in concrete, executable guidance. Pattern names are listed but no actual code examples are provided. The Sharp Edges table has solutions that appear truncated (just comments like '# REMOVE all waitForTimeout calls' with no actual code). No copy-paste ready examples exist. | 1 / 3 |
| Workflow Clarity | There is no clear workflow or sequence of steps. The content describes concepts (Test Isolation Pattern, Auto-Wait Pattern) but doesn't explain how to implement them. No validation checkpoints or multi-step processes are defined. | 1 / 3 |
| Progressive Disclosure | The content has some structure with sections (Patterns, Anti-Patterns, Sharp Edges, Related Skills), but the Sharp Edges table appears malformed/truncated. No references to external files for detailed content. The organization exists but content within sections is incomplete. | 2 / 3 |
| Total | | 6 / 12 Passed |
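The Test Isolation Pattern the table references can likewise be sketched in plain Python. `isolated_session` is a hypothetical stand-in for what Playwright achieves with a fresh `browser.new_context()` per test:

```python
from contextlib import contextmanager

@contextmanager
def isolated_session():
    """Each test gets a fresh session, and teardown always runs,
    so state never leaks between tests."""
    session = {"cookies": {}, "pages": []}  # stands in for a browser context
    try:
        yield session
    finally:
        session["cookies"].clear()          # teardown runs even on failure
        session["pages"].clear()

# Each test opens its own session; nothing carries over.
with isolated_session() as s1:
    s1["cookies"]["auth"] = "token-a"
with isolated_session() as s2:
    assert "auth" not in s2["cookies"]      # fresh state, no leakage
```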
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |