This skill enables cross-browser compatibility testing for web applications using BrowserStack, Selenium Grid, or Playwright. It tests across Chrome, Firefox, Safari, and Edge, identifying browser-specific bugs and ensuring consistent functionality. It is used when a user requests to "test browser compatibility", "run cross-browser tests", or uses the `/browser-test` or `/bt` command to assess web application behavior across different browsers and devices. The skill generates a report detailing compatibility issues and screenshots for visual verification.
## Quality (53%)

Does it follow best practices?

Evals: Pending. No eval scenarios have been run. Advisory: suggest reviewing before use.
### Discovery (100%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities (cross-browser testing with named tools and browsers), provides explicit trigger guidance with natural user phrases and slash commands, and occupies a distinct niche unlikely to conflict with other skills. The description is well-structured, uses third person voice correctly, and balances detail with conciseness.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: cross-browser compatibility testing, testing across Chrome/Firefox/Safari/Edge, identifying browser-specific bugs, and generating reports with compatibility issues and screenshots for visual verification. Names specific tools (BrowserStack, Selenium Grid, Playwright). | 3 / 3 |
| Completeness | Clearly answers both "what" (cross-browser compatibility testing, identifying bugs, generating reports with screenshots) and "when" (an explicit "It is used when..." clause with trigger phrases and commands). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would say: "test browser compatibility", "run cross-browser tests", `/browser-test`, `/bt`, plus browser names (Chrome, Firefox, Safari, Edge) and tool names (BrowserStack, Selenium Grid, Playwright). Good coverage of natural-language variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on cross-browser compatibility testing with named tools and browsers. The specific commands (`/browser-test`, `/bt`) and domain (BrowserStack, Selenium Grid, Playwright) make it very unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **12 / 12 Passed** |
### Implementation (7%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a product marketing description rather than actionable technical guidance. It lacks any executable code, concrete configuration examples, or specific commands for BrowserStack/Selenium/Playwright. The content explains concepts Claude already understands while failing to provide the implementation details that would actually enable cross-browser testing.
#### Suggestions

- Add executable code examples for at least one testing framework (e.g., a complete Playwright test script with browser-matrix configuration) that Claude can adapt and run.
- Replace the abstract workflow description with concrete commands and file paths, e.g., "Run `npx playwright test --config=cross-browser.config.ts`", together with an actual config template.
- Add a validation/verification step showing how to check test results and handle failures, such as parsing the test output or comparing screenshots.
- Remove the "When to Use This Skill", "Best Practices", and "Integration" sections; they add no actionable information and waste tokens on things Claude already knows.
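To illustrate the first two suggestions, a browser-matrix template the skill could ship might look like the sketch below. This is a minimal Playwright config, assuming `@playwright/test` is installed; the file name `cross-browser.config.ts`, the `./tests` directory, and the reporter settings are illustrative assumptions, not part of the reviewed skill:

```typescript
// cross-browser.config.ts (hypothetical file name): a minimal Playwright
// browser-matrix configuration, assuming @playwright/test is installed.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  testDir: "./tests",
  // One project per target engine; Playwright covers Chrome and Edge via
  // Chromium and Safari via WebKit.
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
    { name: "edge", use: { ...devices["Desktop Edge"], channel: "msedge" } },
  ],
  // Screenshots on failure feed the visual-verification report.
  use: { screenshot: "only-on-failure" },
  reporter: [["html"], ["json", { outputFile: "results.json" }]],
});
```

With a template like this inline, the skill could instruct the agent to run `npx playwright test --config=cross-browser.config.ts` and then read `results.json`.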
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is verbose and explains concepts Claude already knows (what cross-browser testing is, what browsers exist, what CI/CD is). The "When to Use This Skill" section repeats the description, the "Best Practices" are generic platitudes, and the examples describe what the skill will do rather than providing actionable instructions. Nearly every section could be cut or condensed significantly. | 1 / 3 |
| Actionability | There is no executable code, no concrete commands, no configuration examples, no test templates, and no actual implementation guidance. The entire skill describes what should happen at a high level without showing how to do any of it: no Playwright/Selenium/BrowserStack code snippets, no browser-matrix configuration format, no report template. | 1 / 3 |
| Workflow Clarity | The four-step workflow is purely descriptive, with no concrete commands, no validation checkpoints, and no error-recovery steps. Steps like "Generating Cross-Browser Tests" and "Executing Tests" give no indication of what tools to invoke, what files to create, or how to verify success. There are no feedback loops for handling test failures. | 1 / 3 |
| Progressive Disclosure | The content is organized into logical sections with clear headings, which provides some structure. However, there are no references to supporting files (no bundle files exist), and content that should be in separate files (e.g., browser-matrix configuration templates, example test scripts, report templates) is neither inline nor referenced; it is simply missing. | 2 / 3 |
| **Total** | | **5 / 12 Passed** |
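As a sketch of the verification step the suggestions call for, the JSON written by Playwright's `json` reporter (e.g. `npx playwright test --reporter=json > results.json`) can be summarized per browser project. The helper name below is hypothetical, and the interfaces cover only the fields used here from the reporter's `suites → specs → tests → results` structure:

```typescript
// Summarize Playwright JSON-reporter output by browser project.
// Only the fields needed for this check are modeled.
interface PWTestResult { status: string }
interface PWTest { projectName: string; results: PWTestResult[] }
interface PWSpec { title: string; tests: PWTest[] }
interface PWSuite { title: string; specs: PWSpec[]; suites?: PWSuite[] }
interface PWReport { suites: PWSuite[] }

// Collect failing spec titles grouped by project (chromium, firefox, webkit, ...).
function failuresByProject(report: PWReport): Map<string, string[]> {
  const failures = new Map<string, string[]>();
  const walk = (suite: PWSuite): void => {
    for (const spec of suite.specs) {
      for (const test of spec.tests) {
        const failed = test.results.some(
          (r) => r.status === "failed" || r.status === "timedOut",
        );
        if (failed) {
          const list = failures.get(test.projectName) ?? [];
          list.push(spec.title);
          failures.set(test.projectName, list);
        }
      }
    }
    for (const child of suite.suites ?? []) walk(child);
  };
  for (const suite of report.suites) walk(suite);
  return failures;
}
```

A non-empty map for, say, `webkit` but not `chromium` is exactly the browser-specific-bug signal the skill promises to report, so a check like this closes the missing feedback loop.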
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed: validation for skill structure reported no warnings or errors.