
application-performance-performance-optimization

Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack.

Install with Tessl CLI

npx tessl i github:sickn33/antigravity-awesome-skills --skill application-performance-performance-optimization

Best practices score: 71


Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has good structure with explicit 'what' and 'when' clauses, earning full marks for completeness. However, it relies on somewhat abstract category terms rather than concrete actions, and lacks the natural trigger terms users would actually say when experiencing performance issues. The cross-stack coordination angle provides some distinctiveness but could be sharper.

Suggestions

Add concrete, specific actions such as 'analyze flame graphs', 'reduce bundle size', 'optimize database queries', and 'identify memory leaks' to improve specificity.

Include natural user trigger terms like 'slow', 'latency', 'bottleneck', 'speed up', 'loading time' that users would actually say when they have performance problems.
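Combining the two suggestions, a sharpened description might read like the sketch below. This is a hypothetical rewrite, not the skill's actual frontmatter, and the field names simply follow the common SKILL.md convention:

```yaml
# Hypothetical SKILL.md frontmatter -- wording is illustrative only.
name: application-performance-performance-optimization
description: >
  Optimize end-to-end application performance: analyze flame graphs,
  reduce bundle size, optimize database queries, and identify memory
  leaks across backend and frontend. Use when an app is slow, has high
  latency or long loading times, or you need to find and fix a
  bottleneck anywhere in the stack.
```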

Dimension scores

Specificity (2 / 3): Names the domain (performance optimization) and mentions some actions (profiling, observability, backend/frontend tuning), but these are high-level categories rather than concrete, specific actions like 'analyze flame graphs' or 'reduce bundle size'.

Completeness (3 / 3): Clearly answers both what ('Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning') and when ('Use when coordinating performance optimization across the stack') with an explicit trigger clause.

Trigger Term Quality (2 / 3): Includes some relevant terms like 'performance optimization', 'profiling', and 'observability', but misses common natural variations users might say, like 'slow', 'latency', 'bottleneck', 'speed up', 'optimize queries', or 'memory leak'.

Distinctiveness / Conflict Risk (2 / 3): The 'across the stack' qualifier helps distinguish this skill from single-layer optimization skills, but terms like 'profiling' and 'observability' could overlap with more specialized monitoring or debugging skills.

Total: 9 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive orchestration framework for performance optimization with excellent workflow structure and clear phase progression. However, it lacks concrete executable examples (actual profiling commands, k6 scripts, monitoring configurations) and relies heavily on delegating to subagents without showing what those implementations look like. The extended thinking block adds unnecessary tokens explaining rationale Claude can infer.

Suggestions

Remove the extended thinking block: Claude doesn't need the workflow rationale explained; the structure speaks for itself.

Add concrete code examples for at least one tool per phase (e.g., actual k6 load test script, sample Grafana dashboard JSON, OpenTelemetry instrumentation snippet)

Extract configuration options, success criteria, and tool-specific guides into separate reference files with clear links from the main skill

Replace descriptive prompts with executable templates showing actual Task tool invocation syntax
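As an illustration of what a "concrete executable example" for the profiling phase could look like, here is a minimal Python sketch using the standard library's cProfile. It is a generic example written for this review, not code taken from the skill itself, and `slow_handler` is an invented stand-in for real application code:

```python
import cProfile
import io
import pstats


def slow_handler(n: int) -> int:
    """Stand-in for an application code path suspected of being slow."""
    total = 0
    for i in range(n):
        total += sum(range(i % 100))
    return total


def profile_top_functions(limit: int = 5) -> str:
    """Profile the handler and return the top functions by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    slow_handler(50_000)
    profiler.disable()

    # Render the stats report to a string instead of stdout.
    buffer = io.StringIO()
    stats = pstats.Stats(profiler, stream=buffer)
    stats.sort_stats("cumulative").print_stats(limit)
    return buffer.getvalue()


if __name__ == "__main__":
    print(profile_top_functions())
```

A skill snippet in this style gives the agent something copy-paste runnable, which is what the Actionability critique below is asking for; equivalent k6 or OpenTelemetry snippets would serve the load-testing and monitoring phases.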

Dimension scores

Conciseness (2 / 3): The skill contains some unnecessary verbosity, particularly the extended thinking block, which explains obvious workflow rationale. The phase descriptions are reasonably efficient but could be tighter; some prompts repeat context that Claude would understand implicitly.

Actionability (2 / 3): The skill provides structured prompts for subagents but lacks concrete, executable code examples. Instructions are descriptive ('Use Task tool with subagent_type=...') rather than showing actual implementation, and there are no copy-paste-ready commands or code snippets for profiling, load testing, or monitoring setup.

Workflow Clarity (3 / 3): The five-phase workflow is clearly sequenced, progressing logically from profiling through optimization to validation and monitoring. Each step explicitly references context from previous steps, and Phase 4 provides validation checkpoints before production deployment.

Progressive Disclosure (2 / 3): Content is well organized into clear sections and phases, but it is a monolithic document with no references to external files for detailed guidance. The configuration options and success criteria could be separate reference documents, and specific tool guides (DataDog setup, k6 scripts) would benefit from linked resources.

Total: 9 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them under metadata.

Total: 10 / 11 (Passed)
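The frontmatter_unknown_keys warning can typically be resolved by nesting non-standard keys under a metadata block, as the check itself suggests. A sketch, where the key names are invented for illustration and are not the skill's actual frontmatter:

```yaml
# Before: a non-standard top-level key triggers the warning.
name: my-skill
team: platform        # unknown frontmatter key

# After: nest the custom key under metadata.
name: my-skill
metadata:
  team: platform
```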
