performance-engineer

Expert performance engineer specializing in modern observability,

Install with Tessl CLI

npx tessl i github:sickn33/antigravity-awesome-skills --skill performance-engineer


Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is severely underdeveloped and appears truncated. It provides no concrete actions, no trigger terms users would naturally use, and no guidance on when Claude should select this skill. The use of 'Expert' framing violates the third-person action-oriented voice expected in skill descriptions.

Suggestions

- Complete the description with specific actions like 'Analyzes distributed traces, creates monitoring dashboards, debugs latency issues, configures alerting rules'
- Add a 'Use when...' clause with natural trigger terms: 'Use when the user mentions metrics, monitoring, traces, logs, APM tools, latency debugging, or observability platforms like Datadog, Grafana, or Prometheus'
- Replace the 'Expert performance engineer' framing with action-oriented third-person voice describing what the skill does, not what it is
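Taken together, the suggestions above could produce a frontmatter description along these lines. This is a hypothetical sketch: the field layout is illustrative, and the action list and trigger terms are taken directly from the suggestions rather than from the skill itself.

```yaml
---
name: performance-engineer
description: >
  Analyzes distributed traces, creates monitoring dashboards, debugs latency
  issues, and configures alerting rules. Use when the user mentions metrics,
  monitoring, traces, logs, APM tools, latency debugging, or observability
  platforms such as Datadog, Grafana, or Prometheus.
---
```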

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language ('Expert performance engineer') without listing any concrete actions. It mentions 'modern observability' as a domain but provides no specific capabilities like 'analyze traces', 'create dashboards', or 'debug latency issues'. | 1 / 3 |
| Completeness | Missing both 'what' (no concrete actions listed) and 'when' (no trigger guidance or 'Use when...' clause). The description is incomplete and appears truncated. | 1 / 3 |
| Trigger Term Quality | Contains only technical jargon ('performance engineer', 'observability') without natural keywords users would say. Missing terms like 'metrics', 'monitoring', 'traces', 'logs', 'APM', 'latency', or 'debugging'. | 1 / 3 |
| Distinctiveness / Conflict Risk | Very generic phrasing that could overlap with any performance, monitoring, or DevOps-related skill. 'Observability' is too broad without specifying tools, platforms, or specific use cases. | 1 / 3 |

Total: 4 / 12 (Passed)

Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a verbose capability catalog rather than actionable guidance. It extensively lists technologies and concepts Claude already knows without providing any concrete code, commands, or executable examples. The content would benefit from dramatic reduction and replacement of abstract descriptions with specific, copy-paste-ready implementations.

Suggestions

- Replace the extensive 'Capabilities' enumeration with 2-3 concrete, executable examples showing actual profiling commands, load test scripts, or monitoring setup code
- Remove 'Behavioral Traits' and 'Knowledge Base' sections entirely - these describe generic good practices Claude already knows
- Add specific code examples for common tasks like setting up OpenTelemetry tracing, writing a k6 load test, or configuring Prometheus alerts
- Split detailed reference material (tool comparisons, platform-specific guides) into separate linked files and keep SKILL.md as a concise overview
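As an illustration of the kind of copy-paste-ready content these suggestions call for, a Prometheus alerting rule might look like the sketch below. This is an assumption-laden example, not material from the skill: the metric name, thresholds, and labels are hypothetical.

```yaml
groups:
  - name: latency-alerts
    rules:
      - alert: HighRequestLatency
        # http_request_duration_seconds_bucket is a hypothetical histogram metric
        expr: >
          histogram_quantile(0.99,
            sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p99 request latency above 500ms for 10 minutes"
```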

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive lists of technologies Claude already knows. The 'Capabilities' section is a massive enumeration of tools and concepts (OpenTelemetry, Redis, k6, etc.) that adds no actionable value - Claude knows what these are. The 'Behavioral Traits' and 'Knowledge Base' sections describe generic good practices rather than specific instructions. | 1 / 3 |
| Actionability | No concrete code examples, commands, or executable guidance anywhere. The entire skill is abstract descriptions like 'Collect traces, profiles, and load tests' and 'Implement optimizations with proper testing' without showing HOW to do any of these things. The 'Example Interactions' are just prompts, not actual examples with outputs. | 1 / 3 |
| Workflow Clarity | The 'Instructions' and 'Response Approach' sections provide numbered steps with a logical sequence, but lack validation checkpoints and concrete verification steps. For risky operations like load testing production, there's a safety note but no explicit feedback loop for error recovery. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline in one massive document. The extensive capability lists could be split into separate reference files, but instead everything is dumped into a single skill file with poor organization for discovery. | 1 / 3 |

Total: 5 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)
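Per the warning's own wording, unknown frontmatter keys can typically be resolved by removing them or nesting them under a `metadata` key. A hypothetical before/after sketch (the key names `author` and `version` are illustrative, not taken from the skill):

```yaml
# Before: unrecognized top-level keys trigger the warning
# author: sickn33
# version: "1.0.0"

# After: non-standard keys moved under metadata
metadata:
  author: sickn33
  version: "1.0.0"
```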

