
performance-engineer

Expert performance engineer specializing in modern observability,


Quality

0%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/performance-engineer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely weak across all dimensions. It appears to be an incomplete fragment (ending with a comma) that uses vague, title-like language rather than describing concrete capabilities or trigger conditions. It provides no actionable information for Claude to determine when to select this skill.

Suggestions

Complete the description and list specific concrete actions the skill performs, e.g., 'Analyzes application performance metrics, configures distributed tracing, creates monitoring dashboards, and diagnoses latency bottlenecks.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about application performance, latency issues, tracing, metrics, observability pipelines, or monitoring setup.'

Replace the persona-style opening ('Expert performance engineer') with third-person action verbs describing what the skill does, not what it is.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language ('expert performance engineer') without listing any concrete actions. It does not describe what the skill actually does (no verbs like 'analyze', 'monitor', or 'diagnose'). | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause, the 'what' is extremely vague, and the description appears truncated (it ends with a comma). | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keywords are 'performance' and 'observability', which are broad and jargon-heavy. It lacks natural user terms like 'latency', 'tracing', 'metrics', 'dashboards', 'APM', and 'logs'. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Performance engineer' and 'observability' are very broad terms that could overlap with many skills related to monitoring, debugging, infrastructure, DevOps, or general performance optimization. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a persona description and technology catalog rather than actionable instructions. It lists hundreds of tools and concepts Claude already knows without providing any concrete code, commands, or executable workflows. The content would need to be fundamentally restructured from a 'what I know' format to a 'how to do it' format with specific, executable guidance.

Suggestions

Replace the massive 'Capabilities' enumeration with a concise quick-start section containing executable code examples (e.g., a k6 load test script, a Prometheus query, an OpenTelemetry setup snippet).
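As a sketch of the kind of quick-start example this suggestion calls for, a minimal k6 smoke test might look like the following. The endpoint, load profile, and threshold are illustrative placeholders rather than values from the skill, and the script runs under the k6 binary (`k6 run script.js`), not under Node:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users (placeholder)
  duration: '30s',  // short smoke-test window (placeholder)
  thresholds: {
    // Fail the run if the 95th-percentile request duration reaches 300ms
    http_req_duration: ['p(95)<300'],
  },
};

export default function () {
  // Placeholder endpoint; substitute the service under test
  const res = http.get('https://example.com/api/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

Even a single snippet like this would give the skill something copy-paste runnable, which the review notes is currently absent.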

Add concrete, step-by-step workflows with validation checkpoints for common tasks like 'diagnose API latency' or 'set up distributed tracing', including specific commands and expected outputs.
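To illustrate what a validation checkpoint in such a workflow could look like, here is a small self-contained sketch. The sample durations and the 300 ms SLO are invented for illustration; it computes a nearest-rank p95 latency from collected request timings and compares it against the SLO before a 'diagnose API latency' step would be marked complete:

```javascript
// Nearest-rank percentile over a sample of request durations (ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical measurements gathered during the diagnosis step.
const durationsMs = [120, 95, 110, 480, 130, 105, 98, 515, 125, 101];
const p95 = percentile(durationsMs, 95);
const sloMs = 300; // assumed SLO, not from the skill under review

console.log(`p95=${p95}ms, SLO=${sloMs}ms, pass=${p95 <= sloMs}`);
// → p95=515ms, SLO=300ms, pass=false
```

Here the checkpoint fails (the two outliers push p95 to 515 ms), which is exactly the kind of explicit go/no-go signal the suggested workflows should emit at each step.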

Move detailed tool-specific guidance into separate reference files (e.g., LOAD_TESTING.md, OBSERVABILITY.md) and keep SKILL.md as a concise overview with clear navigation links.

Remove the 'Behavioral Traits', 'Knowledge Base', and 'Example Interactions' sections entirely—they describe Claude's persona rather than providing actionable instructions.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose, with extensive lists of tools, platforms, and concepts that Claude already knows. The 'Capabilities' section alone is a massive enumeration of technologies that adds no actionable value; it reads like a resume rather than instructions. The 'Behavioral Traits', 'Knowledge Base', and 'Example Interactions' sections are similarly padded with information Claude doesn't need to be told. | 1 / 3 |
| Actionability | No concrete code, commands, or executable examples anywhere. The entire skill is abstract descriptions and bullet-point lists of technologies. The 'Instructions' section has four vague steps like 'Collect traces, profiles, and load tests to isolate bottlenecks' with no specifics on how to do any of it. Nothing is copy-paste ready or directly executable. | 1 / 3 |
| Workflow Clarity | The four-step 'Instructions' workflow is extremely high-level and lacks any validation checkpoints, specific commands, or feedback loops. For a skill involving potentially destructive operations like load testing production systems, there are no concrete safeguards, verification steps, or error-recovery procedures beyond a vague 'Avoid load testing production without approvals.' | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files. Massive sections, such as the 12 capability categories with dozens of bullet points each, should be split into separate reference documents. There is no navigation structure or clear hierarchy; everything is dumped into a single file at the same level of detail. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed

Repository: sickn33/antigravity-awesome-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.