Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack.
Install with Tessl CLI:

```
npx tessl i github:duclm1x1/Dive-Ai --skill application-performance-performance-optimization72
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```
npx tessl skill review --optimize ./path/to/skill
```
Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structure with explicit 'what' and 'when' clauses, earning full marks for completeness. However, it relies on somewhat abstract terminology (profiling, observability, tuning) rather than concrete actions, and lacks the natural trigger terms users would actually say when experiencing performance issues. The scope is broad enough that it could conflict with more specialized performance skills.
Suggestions

- Add concrete actions such as 'analyze flame graphs, identify memory leaks, reduce bundle sizes, optimize database queries, implement caching strategies'.
- Include natural user trigger terms in the 'Use when' clause, such as 'slow', 'latency', 'bottleneck', 'speed up', 'load time', or 'memory issues'.
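Applied together, those two suggestions might yield something like the following sketch of a revised skill description (hypothetical SKILL.md frontmatter; the exact field names and wording are illustrative, not taken from the skill itself):

```yaml
# Hypothetical rewrite -- phrasing is illustrative, not the canonical description
name: application-performance-performance-optimization
description: >
  Optimize end-to-end application performance: analyze flame graphs,
  identify memory leaks, reduce bundle sizes, optimize database queries,
  and implement caching strategies. Use when an app feels slow, has high
  latency or long load times, or you need to find bottlenecks, fix memory
  issues, or speed it up across the stack.
```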
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (performance optimization) and mentions some actions (profiling, observability, backend/frontend tuning), but these are high-level categories rather than concrete, specific actions like 'analyze flame graphs' or 'reduce bundle size'. | 2 / 3 |
| Completeness | Clearly answers both what ('Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning') and when ('Use when coordinating performance optimization across the stack') with an explicit trigger clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'performance optimization', 'profiling', and 'observability', but misses common natural variations users might say, like 'slow app', 'latency', 'bottleneck', 'speed up', 'load time', or 'memory leak'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'across the stack' and 'end-to-end' framing provides some distinction, but 'performance optimization' is broad enough to potentially conflict with more specialized skills for frontend-only or backend-only optimization. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive, well-structured workflow for end-to-end performance optimization with excellent phase sequencing and validation checkpoints. However, it suffers from being overly verbose (especially the extended thinking block), lacks concrete executable examples, and could benefit from better progressive disclosure by splitting detailed phase content into separate files.
Suggestions

- Remove the extended thinking block: Claude doesn't need meta-explanations about the workflow's purpose.
- Add concrete code examples for at least one optimization technique (e.g., an actual k6 load-test script, sample Grafana dashboard JSON, or a specific query-optimization example).
- Split detailed phase content into separate files (e.g., PHASE1-PROFILING.md, PHASE4-LOAD-TESTING.md) and keep SKILL.md as a concise overview with navigation.
- Replace placeholder syntax like '{context_from_phase_1}' with more specific guidance on what data should be passed between phases.
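To illustrate the kind of concrete, copy-paste-ready example the second suggestion asks for, here is a minimal sketch of one technique the skill already names (implementing a caching strategy), using Python's standard-library `functools.lru_cache`. The `expensive_lookup` function is hypothetical; it stands in for any hot, repeatable computation or query:

```python
from functools import lru_cache

CALLS = {"count": 0}  # track how often the underlying work actually runs

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    """Hypothetical hot path: pretend this hits a database or remote API."""
    CALLS["count"] += 1
    return key.upper()

# Repeated requests for the same key hit the cache, not the backend.
for _ in range(1000):
    expensive_lookup("user:42")

print(CALLS["count"])                      # the real work ran only once
print(expensive_lookup.cache_info().hits)  # the other 999 calls were cache hits
```

A skill-embedded example at this level of concreteness gives an agent something to adapt directly, rather than a placeholder to interpret.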
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly the extended thinking block, which explains the workflow's purpose to Claude (who doesn't need this meta-explanation). The phase descriptions are reasonably efficient but could be tighter. | 2 / 3 |
| Actionability | Provides structured prompts for subagents and clear phase organization, but lacks executable code examples. The guidance is template-based, with placeholders like '$ARGUMENTS' and '{context_from_phase_1}' rather than concrete, copy-paste-ready commands or scripts. | 2 / 3 |
| Workflow Clarity | Excellent multi-phase workflow with clear sequencing across 5 phases and 13 steps. Each step specifies inputs, outputs, and context dependencies. Includes a validation phase (Phase 4) with load testing and regression testing, plus continuous monitoring in Phase 5. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and phases, but everything is inline in one large document. The detailed 13-step process, with configuration options and success criteria, could benefit from splitting into separate reference files per phase or topic area. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.