`tessl i github:sickn33/antigravity-awesome-skills --skill performance-engineer`

Expert performance engineer specializing in modern observability, application optimization, and scalable system performance. Masters OpenTelemetry, distributed tracing, load testing, multi-tier caching, Core Web Vitals, and performance monitoring. Handles end-to-end optimization, real user monitoring, and scalability patterns. Use PROACTIVELY for performance optimization, observability, or scalability challenges.
## Validation

81%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| license_field | 'license' field is missing | Warning |
| **Total** | 13 / 16 Passed | |
## Implementation

13%

This skill is essentially a capabilities catalog rather than actionable guidance. It exhaustively lists technologies and concepts Claude already knows without providing any concrete code, commands, or specific procedures. The content violates the core principle that skills should add only what Claude doesn't already know: instead it explains what distributed tracing, caching, and profiling are rather than showing how to implement them in this specific context.
### Suggestions

- Replace the extensive capability lists with 2-3 concrete, executable code examples showing actual performance optimization workflows (e.g., a complete k6 load test script, an OpenTelemetry setup snippet)
- Move the technology catalogs to a separate REFERENCE.md file and keep SKILL.md focused on actionable procedures
- Add specific validation checkpoints to the workflow, such as 'Run baseline load test before changes: `k6 run baseline.js` - record p95 latency'
- Remove explanations of what tools are (Claude knows Redis, Prometheus, etc.) and instead document project-specific configurations or patterns
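To illustrate the kind of executable example the first suggestion calls for, here is a minimal k6 baseline load test sketch. The target URL, virtual-user count, and 500ms p95 threshold are placeholder assumptions for illustration, not values taken from the reviewed skill; the script runs under the k6 runtime (`k6 run baseline.js`), not Node.js.

```javascript
// baseline.js - hypothetical k6 baseline load test sketch
// Run with: k6 run baseline.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,            // 10 concurrent virtual users (placeholder)
  duration: '30s',    // sustained for 30 seconds (placeholder)
  thresholds: {
    // Fail the run if p95 request latency exceeds 500ms,
    // implementing the "record p95 latency" checkpoint
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  // Placeholder endpoint; substitute the service under test
  const res = http.get('https://example.com/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // pacing between iterations per virtual user
}
```

Running this before and after a change and comparing the `http_req_duration` p95 in k6's end-of-run summary gives the concrete baseline-vs-after verification step the workflow currently lacks.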
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive lists of technologies Claude already knows. The 'Capabilities' section is a massive enumeration of tools and concepts (OpenTelemetry, Redis, k6, etc.) that adds no actionable value - Claude knows what these are. The content could be reduced by 80%+ without losing utility. | 1 / 3 |
| Actionability | No concrete code examples, commands, or executable guidance anywhere. The entire skill is abstract descriptions like 'Query optimization: Execution plan analysis, index optimization' without showing HOW to do any of it. The 'Instructions' section has only 4 vague steps with no specifics. | 1 / 3 |
| Workflow Clarity | The 4-step instruction workflow and 9-step 'Response Approach' provide a sequence, but lack validation checkpoints, specific commands, or feedback loops. For performance work involving production systems, the absence of concrete verification steps is a significant gap. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline in one massive document. The extensive capability lists should be in separate reference files, with SKILL.md providing a concise overview and navigation. | 1 / 3 |
| **Total** | | 5 / 12 Passed |
## Activation

67%

The description covers a clear domain with explicit 'Use when' guidance, which is good for completeness. However, it relies heavily on listing technologies rather than describing concrete actions Claude would take, and uses first-person framing ('Expert performance engineer', 'Masters') which violates the third-person voice requirement. The trigger terms are technical but miss common user language for performance problems.
### Suggestions

- Replace vague verbs like 'Masters' and 'Handles' with concrete actions: 'Configures OpenTelemetry instrumentation, analyzes distributed traces, implements caching strategies, optimizes Core Web Vitals'
- Add natural user trigger terms: 'slow application', 'latency', 'bottleneck', 'memory issues', 'profiling', 'response time'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names domain (performance engineering, observability) and lists technologies (OpenTelemetry, distributed tracing, load testing, caching, Core Web Vitals), but uses vague action verbs like 'Masters', 'Handles' rather than concrete actions like 'configures', 'implements', 'analyzes'. | 2 / 3 |
| Completeness | Clearly answers both what (performance optimization, observability, scalability with specific technologies) and when ('Use PROACTIVELY for performance optimization, observability, or scalability challenges'), providing explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes relevant technical terms (OpenTelemetry, distributed tracing, Core Web Vitals, load testing, caching) but missing common user phrases like 'slow app', 'latency issues', 'memory leak', 'profiling', 'bottleneck', or file extensions users might mention. | 2 / 3 |
| Distinctiveness Conflict Risk | Performance and optimization are fairly specific domains, but 'scalable system performance' and 'application optimization' could overlap with general backend development or DevOps skills. The technology list helps but isn't fully distinctive. | 2 / 3 |
| **Total** | | 9 / 12 Passed |
Reviewed