Performance Test Agent. 성능 테스트 계획 및 실행을 담당합니다. Load Test, Lighthouse, Core Web Vitals 측정을 수행합니다. (Translation: plans and executes performance tests; performs Load Test, Lighthouse, and Core Web Vitals measurement.)
Install with the Tessl CLI:

```
npx tessl i github:shaul1991/shaul-agents-plugin --skill performance-test56
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description gives a reasonable overview of performance-testing capabilities and names specific tools (Lighthouse, Core Web Vitals), but it lacks the explicit trigger guidance ('Use when...') that is critical for skill selection. The Korean-language description limits discoverability for English-speaking users, and the absence of concrete output descriptions weakens specificity.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios like 'Use when the user asks about page load times, web performance optimization, Lighthouse scores, or Core Web Vitals metrics'
Include more natural trigger terms users would say: 'page speed', 'slow website', 'performance audit', 'LCP', 'FID', 'CLS', 'web vitals'
Specify concrete outputs: 'generates performance reports', 'identifies bottlenecks', 'provides optimization recommendations'
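Taken together, the suggestions above could yield a description like the following sketch. The frontmatter field names and wording here are illustrative assumptions, not the skill's actual metadata:

```yaml
---
name: performance-test56
description: >
  Plans and executes performance tests using Lighthouse, k6, and Artillery.
  Measures Core Web Vitals (LCP, INP/FID, CLS), generates performance reports,
  identifies bottlenecks, and provides optimization recommendations.
  Use when the user asks about page load times, slow websites, page speed,
  performance audits, Lighthouse scores, load testing, or web vitals.
---
```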
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (performance testing) and lists some specific actions (Load Test, Lighthouse, Core Web Vitals measurement), but lacks comprehensive detail about the concrete outputs or capabilities provided beyond measurement. | 2 / 3 |
| Completeness | Describes what it does (performance test planning and execution) but entirely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes relevant technical terms like 'Load Test', 'Lighthouse', 'Core Web Vitals', and '성능 테스트' (Korean for 'performance test') that users might search for, but misses common variations like 'page speed', 'performance audit', 'web performance', or 'LCP/FID/CLS'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific mention of Lighthouse and Core Web Vitals provides some distinctiveness for web performance testing, but 'Load Test' is generic and could overlap with other testing or infrastructure skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, executable examples for performance-testing tools (Lighthouse, k6, Artillery) with good actionability. However, it reads as a monolithic reference document rather than a structured skill guide: it lacks clear workflow sequencing, validation checkpoints, and progressive disclosure through external files. The content would benefit from restructuring into a concise overview that links to detailed tool-specific guides.
Suggestions
Split tool-specific configurations (k6, Artillery, Lighthouse CI) into separate reference files and link from a concise overview
Add an explicit workflow section that guides when to use each test type and in what order (e.g., 'Run Lighthouse first, then k6 for APIs identified as slow')
Include validation checkpoints such as 'Review Lighthouse scores before proceeding to load testing' or 'If P95 > threshold, investigate before increasing load'
Remove the ASCII pyramid diagram and verbose report template - link to a template file instead
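As a sketch of the "validation checkpoint" idea, a small Node helper could gate load testing on a prior Lighthouse run. Lighthouse JSON reports expose a 0–1 performance score at `categories.performance.score`; the 0.8 threshold below is an assumed value, not one taken from the skill:

```javascript
// Decide whether to proceed from a Lighthouse audit to k6/Artillery load tests.
// `report` is a parsed Lighthouse JSON report; `threshold` is an assumed cutoff.
function shouldRunLoadTest(report, threshold = 0.8) {
  const score = report.categories?.performance?.score;
  if (typeof score !== "number") {
    throw new Error("Lighthouse report has no performance score");
  }
  return score >= threshold;
}

// Example with a trimmed-down report object:
const report = { categories: { performance: { score: 0.92 } } };
console.log(shouldRunLoadTest(report)); // true: safe to proceed to load testing
```

A CI job could run this between the Lighthouse step and the load-test step, failing fast when the score check does not pass.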
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains useful reference material but includes some redundant content, such as the ASCII pyramid diagram and verbose report templates. The Core Web Vitals table and performance-requirements sections are helpful references, but the overall document could be tightened. | 2 / 3 |
| Actionability | Provides fully executable code examples for the Lighthouse CLI, k6 load tests, Artillery configs, web-vitals measurement, and CI workflows. Commands are copy-paste ready with specific flags and options. | 3 / 3 |
| Workflow Clarity | While individual tools have clear usage examples, there is no explicit workflow for when to use which test type, no validation checkpoints between steps, and no guidance on interpreting results before proceeding. The skill reads more like a reference than a guided process. | 2 / 3 |
| Progressive Disclosure | This is a monolithic document with over 300 lines of inline content. Report templates, CI configurations, and detailed tool configs should be split into separate reference files. No external file references are provided despite the complexity warranting them. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
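The Core Web Vitals reference table mentioned above can be condensed into a small helper. The thresholds below are Google's published "good"/"poor" limits (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1), and the rating labels mirror those used by the web-vitals library; this is an illustrative sketch, not code from the skill:

```javascript
// Classify a Core Web Vitals metric against Google's published thresholds.
// Each entry is [goodLimit, poorLimit]: values at or below goodLimit rate
// "good", above poorLimit rate "poor", and everything between rates
// "needs-improvement".
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

function rateMetric(name, value) {
  const [good, poor] = THRESHOLDS[name] ?? [];
  if (good === undefined) throw new Error(`Unknown metric: ${name}`);
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rateMetric("LCP", 1800)); // "good"
console.log(rateMetric("CLS", 0.3));  // "poor"
```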
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.