Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack.
Install with Tessl CLI
npx tessl i github:boisenoise/skills-collections --skill application-performance-performance-optimization71
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structure with an explicit 'Use when' clause and covers the general domain well. However, it relies on high-level buzzwords (profiling, observability, tuning) rather than concrete actions, and lacks the natural trigger terms users would actually type when experiencing performance issues.
Suggestions
Add specific concrete actions like 'analyze flame graphs, identify memory leaks, reduce API latency, optimize database queries, minimize bundle size'
Include natural user trigger terms like 'slow', 'latency', 'bottleneck', 'speed up', 'load time', 'memory usage', 'response time'
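Applying both suggestions, the skill's frontmatter description might read something like this (a sketch only; exact wording is the maintainer's call):

```yaml
description: >
  Optimize end-to-end application performance across the stack: analyze flame
  graphs, identify memory leaks, reduce API latency, optimize database queries,
  and minimize bundle size. Use when an app is slow, has a bottleneck, high
  response or load times, or excessive memory usage and you need to speed it up.
```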
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (performance optimization) and mentions some actions (profiling, observability, backend/frontend tuning), but these are high-level categories rather than concrete specific actions like 'analyze flame graphs' or 'reduce bundle size'. | 2 / 3 |
| Completeness | Clearly answers both what (optimize performance with profiling, observability, tuning) and when (coordinating performance optimization across the stack) with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'performance optimization', 'profiling', 'observability', but misses common natural variations users might say like 'slow app', 'latency', 'bottleneck', 'speed up', 'load time', or 'memory leak'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'across the stack' and 'end-to-end' framing provides some distinction, but 'performance optimization' and 'profiling' could overlap with more specific frontend-only or backend-only performance skills. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation — 62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive orchestration framework for performance optimization with clear phasing and validation checkpoints. However, it lacks concrete executable examples (actual profiling commands, k6 scripts, Grafana dashboard configs) and relies heavily on delegating to subagents with templated prompts. The extended thinking block and repetitive prompt structures add unnecessary tokens.
Suggestions
Replace the extended thinking block with a brief 1-2 sentence purpose statement - Claude doesn't need the rationale explained
Add at least one concrete, executable example per phase (e.g., actual k6 load test script, sample Grafana dashboard JSON, specific profiling commands)
Extract detailed phase instructions into separate files (e.g., PHASE1-PROFILING.md, PHASE4-LOAD-TESTING.md) and keep SKILL.md as a concise overview with navigation
Consolidate the repeated 'Use Task tool with subagent_type=' pattern into a single instruction at the top, then just reference step numbers and prompts
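To illustrate the second suggestion, here is a minimal k6 load-test script of the kind the skill could ship verbatim for its load-testing phase (endpoint, virtual-user count, and thresholds are placeholders to tune per service):

```javascript
// Sketch of a k6 load test (run with: k6 run load-test.js)
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 20,           // 20 concurrent virtual users
  duration: '1m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95th percentile latency under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% failed requests
  },
};

export default function () {
  // Hypothetical endpoint; replace with the service under test
  const res = http.get('https://example.com/api/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```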
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly the extended thinking block which explains obvious workflow rationale. The phase descriptions are reasonably efficient but could be tightened; phrases like 'Use Task tool with subagent_type=' are repeated 13 times when a single instruction could establish the pattern. | 2 / 3 |
| Actionability | The skill provides structured prompts for subagents but lacks concrete, executable code examples. Instructions are template-based with placeholders like '$ARGUMENTS' and '{context_from_phase_1}' rather than showing actual commands, scripts, or copy-paste-ready configurations for profiling tools, load testing, or monitoring setup. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced across 5 phases with 13 numbered steps. Each phase builds on previous outputs, validation is included via load testing (Phase 4) and regression testing (step 11), and there are explicit success criteria with measurable thresholds for validation. | 3 / 3 |
| Progressive Disclosure | The content is well-organized with clear sections and phases, but it's a monolithic document (~200 lines) that could benefit from splitting detailed phase instructions into separate files. Configuration options and success criteria are appropriately placed, but keeping the 13-step process inline makes the skill lengthy. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 (Passed) | |
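To clear the remaining warning, unrecognized top-level frontmatter keys can be removed or nested under a `metadata` block, as the check suggests. A sketch (the `x_owner` and `x_tags` keys are hypothetical stand-ins for whatever keys triggered the warning):

```yaml
name: performance-optimization
description: Optimize end-to-end application performance ...
metadata:
  x_owner: boisenoise   # hypothetical custom key, moved out of top level
  x_tags: [performance] # hypothetical custom key, moved out of top level
```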
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.