
common-performance-engineering

Enforce universal standards for high-performance development. Use when profiling bottlenecks, reducing latency, fixing memory leaks, improving throughput, or optimizing algorithm complexity in any language. (triggers: **/*.ts, **/*.tsx, **/*.go, **/*.dart, **/*.java, **/*.kt, **/*.swift, **/*.py, performance, optimize, profile, scalability, latency, throughput, memory leak, bottleneck)

Quality

62%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agent/skills/common/common-performance-engineering/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description with strong trigger term coverage and good completeness, explicitly stating both what the skill does and when to use it. Its main weaknesses are the somewhat vague opening claim ('Enforce universal standards for high-performance development') which reads as buzzwordy, and the very broad language coverage which increases conflict risk with language-specific skills.

Suggestions

Replace the vague opening 'Enforce universal standards for high-performance development' with more concrete actions, e.g., 'Profiles code hotspots, reduces latency, fixes memory leaks, and optimizes algorithm complexity across multiple languages.'

Consider narrowing or clarifying the file extension triggers to reduce conflict risk with language-specific skills, or add a note about when this skill should be preferred over language-specific optimization guidance.

Specificity — 2 / 3

The description names the domain (performance optimization) and lists several actions like 'profiling bottlenecks, reducing latency, fixing memory leaks, improving throughput, optimizing algorithm complexity,' but the opening phrase 'Enforce universal standards for high-performance development' is vague and buzzwordy. It doesn't list concrete deliverables or specific techniques.

Completeness — 3 / 3

Clearly answers both 'what' (enforce performance standards, profile bottlenecks, reduce latency, fix memory leaks, improve throughput, optimize complexity) and 'when' with an explicit 'Use when...' clause listing specific scenarios and trigger terms.

Trigger Term Quality — 3 / 3

Excellent coverage of natural trigger terms users would say: 'performance', 'optimize', 'profile', 'latency', 'throughput', 'memory leak', 'bottleneck', 'scalability'. Also includes file extension patterns for multiple languages. These are terms users would naturally use when seeking performance help.

Distinctiveness / Conflict Risk — 2 / 3

The performance optimization niche is fairly distinct, but the broad file extension triggers (covering TS, Go, Dart, Java, Kotlin, Swift, Python) mean this skill could conflict with language-specific skills. The phrase 'universal standards' and 'any language' makes it very broad, increasing overlap risk with other development-related skills.

Total: 10 / 12 — Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a checklist of performance engineering principles than actionable guidance for Claude. Its main weakness is the complete absence of concrete code examples, profiling commands, or executable patterns — everything is abstract advice that Claude likely already knows. The structure and progressive disclosure are good, but the content needs to earn its place by providing specific, copy-paste-ready patterns rather than general best practices.

Suggestions

Add concrete, executable code examples for at least the most common patterns (e.g., a memoization decorator in Python, a batching pattern in TypeScript, a profiling command for each supported language).
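To make the suggestion concrete: the memoization decorator it asks for could look like the sketch below. This is an illustrative hand-rolled version (in practice, the standard library's `functools.lru_cache` does the same job); the `memoize` name and the Fibonacci example are our own, not from the skill.

```python
import functools

def memoize(fn):
    """Cache results keyed by positional arguments (hashable args only)."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoize
def fib(n):
    # Without the cache this recursion is exponential; with it, linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))  # completes instantly thanks to the cache
```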

Replace generic advice like 'implement multi-level caching with appropriate TTL' with specific implementation snippets or at minimum concrete tool/library recommendations per language.
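As one possible shape for such a snippet, here is a minimal single-level TTL cache in Python. The class name, API, and TTL value are illustrative assumptions, not anything the skill prescribes; a production setup would more likely reach for a library or an external store like Redis.

```python
import time

class TTLCache:
    """Single-level in-process cache whose entries expire after ttl seconds."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict the stale entry on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=0.5)
cache.set("user:1", {"name": "Ada"})
print(cache.get("user:1"))  # cached value until the TTL elapses
```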

Add specific profiling tool commands to the workflow section (e.g., `py-spy top --pid <PID>`, `go tool pprof`, Chrome DevTools Performance tab) so the Baseline and Verify steps are actionable.
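For the Python case, a Baseline step does not even need an external tool: the standard library's `cProfile` and `pstats` can be invoked inline. The `workload` function below is a hypothetical stand-in for real application code.

```python
import cProfile
import io
import pstats

def workload():
    # Hypothetical hot path standing in for real application code.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the ten most expensive entries, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```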

Remove explanations of concepts Claude already knows (what SLIs/SLOs are, what tree shaking is, what lazy initialization means) to improve token efficiency.

Conciseness — 2 / 3

Mostly efficient but includes some unnecessary explanation that Claude already knows (e.g., explaining what SLIs/SLOs are, what tree shaking is, what lazy initialization means). Some bullet points like 'Optimize data structures: Set for lookups, List for iteration' are things Claude inherently understands. However, it's not egregiously verbose.

Actionability — 1 / 3

The skill provides no concrete code, commands, or executable examples. Every item is a high-level directive ('use efficient serialization', 'implement multi-level caching', 'write micro-benchmarks') without showing how. It describes rather than instructs, and defers all concrete implementation to a referenced file.

Workflow Clarity — 2 / 3

The 4-step workflow (Baseline → Identify → Fix → Verify) is a reasonable sequence with an implicit validation step (re-profile to confirm). However, it lacks specific profiling commands/tools, doesn't specify what 'confirm improvement' means concretely, and has no explicit feedback loop for when verification fails.
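The 'confirm improvement' gap could be closed with something as small as a `timeit` comparison carrying an explicit pass/fail threshold. The two functions and the 10% target below are illustrative stand-ins, not part of the skill.

```python
import timeit

def before():
    # Baseline implementation: filter even numbers with a comprehension.
    return [x for x in range(10_000) if x % 2 == 0]

def after():
    # Candidate optimization: step directly over the even numbers.
    return list(range(0, 10_000, 2))

assert before() == after()  # behavior must be unchanged before timing

# Take the best of several runs to reduce scheduling noise.
t_before = min(timeit.repeat(before, number=200, repeat=5))
t_after = min(timeit.repeat(after, number=200, repeat=5))

improvement = (t_before - t_after) / t_before
print(f"improvement: {improvement:.1%}")
assert improvement > 0.10, "optimization did not meet the 10% target"
```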

Progressive Disclosure — 3 / 3

The skill is well-structured with clear sections, keeps the overview concise, and appropriately references implementation details in a single-level-deep external file (references/implementation.md). Navigation is clear and signaled.

Total: 8 / 12 — Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: HoangNguyen0403/agent-skills-standard (Reviewed)
