
team-perf-opt

Unified team skill for performance optimization. Coordinator orchestrates pipeline, workers are team-worker agents. Supports single/fan-out/independent parallel modes. Triggers on "team perf-opt".


Quality: 55%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/team-perf-opt/SKILL.md

Quality

Discovery: 25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads more like an internal architecture document than a skill description. It focuses on implementation details (coordinator, workers, fan-out modes) rather than user-facing capabilities, and relies on an artificial trigger phrase instead of natural language keywords. The domain of 'performance optimization' is too broad without specifying what kind of performance is being optimized.

Suggestions

Replace the artificial trigger 'team perf-opt' with natural user language describing when this skill should activate, e.g., 'Use when the user asks about optimizing application performance, reducing latency, or improving throughput'.

Specify concrete actions the skill performs, e.g., 'Profiles code bottlenecks, analyzes memory usage, benchmarks API response times' instead of abstract architecture terms like 'orchestrates pipeline'.

Clarify the specific domain of performance optimization (web apps, databases, APIs, etc.) to reduce overlap with other potential optimization-related skills.
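Applied together, these suggestions might yield front matter along the following lines. This is a hypothetical rewrite, not the maintainer's wording; only the skill name is taken from the published skill, and the concrete actions and domain are illustrative placeholders:

```yaml
---
name: team-perf-opt
description: >
  Profiles application code to find performance bottlenecks, then coordinates
  a team of worker agents to benchmark, apply optimizations, and validate the
  results against a baseline. Use when the user asks about optimizing
  application performance, reducing latency, improving throughput, or
  diagnosing slow code paths.
---
```

Note that the rewrite keeps the 'what' (profiling, benchmarking, validation) in concrete user-facing terms and replaces the artificial 'team perf-opt' trigger with natural usage phrases, per the suggestions above.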

Dimension scores

Specificity (1 / 3): The description uses vague language like 'performance optimization' and 'orchestrates pipeline' without specifying concrete actions. It describes architecture (coordinator, workers, fan-out modes) rather than what the skill actually does for the user.

Completeness (2 / 3): It partially answers 'what' (performance optimization with parallel modes) and has a 'when' trigger ('Triggers on team perf-opt'), but the trigger is an artificial command rather than a meaningful usage context. The 'when' clause exists but is not useful for natural skill selection.

Trigger Term Quality (1 / 3): The trigger term 'team perf-opt' is artificial jargon that no user would naturally say. Terms like 'coordinator', 'fan-out', and 'pipeline' are internal implementation details, not natural user language. It lacks any terms a user would actually use when they need performance optimization.

Distinctiveness / Conflict Risk (2 / 3): 'Performance optimization' is broad and could overlap with many skills (database optimization, code profiling, web performance, etc.). The artificial trigger 'team perf-opt' provides some distinctiveness, but only through a contrived keyword rather than a naturally distinct domain.

Total: 6 / 12 (Passed)

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-architected team orchestration skill with strong actionability and workflow clarity. The architecture diagram, delegation lock, spawn templates, and error handling tables provide concrete, executable guidance. Minor verbosity in some sections (model selection rationale, agent coordination examples) could be trimmed, but overall the content earns its token budget by conveying complex multi-agent coordination patterns that Claude wouldn't inherently know.

Dimension scores

Conciseness (2 / 3): The skill is fairly well structured but includes some sections that could be tightened: the Model Selection Guide rationale column, the verbose spawn template, and the agent coordination examples add bulk. However, most content is domain-specific configuration that Claude wouldn't inherently know, so it isn't explaining basic concepts.

Actionability (3 / 3): The skill provides concrete spawn templates with exact parameter structures, specific tool-call allowlists and blocklists, precise file paths, exact CLI commands, and copy-paste-ready code blocks for agent spawning and completion actions. The delegation lock table is immediately usable as a decision matrix.

Workflow Clarity (3 / 3): The pipeline modes (single, fan-out, independent) are clearly diagrammed with explicit sequencing. The baseline-to-result pipeline has numbered steps with validation (regression detection triggers a FIX task). The error handling table covers failure modes with specific resolutions, and the review-fix cycle has an explicit escalation threshold (3 iterations).

Progressive Disclosure (3 / 3): The SKILL.md serves as a clear router/overview with well-signaled, one-level-deep references to role files (roles/coordinator/role.md, etc.) and spec files (specs/pipelines.md, specs/team-config.json). The role registry table provides a clean navigation index, and detailed domain instructions are appropriately delegated to role-specific files.

Total: 11 / 12 (Passed)
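The one-level-deep layout the review describes would look roughly like this. Only SKILL.md, roles/coordinator/role.md, specs/pipelines.md, and specs/team-config.json are named in the review; any additional role directories are implied by the 'etc.' and shown here only as an assumption:

```
team-perf-opt/
├── SKILL.md                  # router/overview with the role registry table
├── roles/
│   └── coordinator/
│       └── role.md           # coordinator instructions (named in the review)
└── specs/
    ├── pipelines.md          # pipeline mode definitions
    └── team-config.json      # team configuration
```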

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

allowed_tools_field: 'allowed-tools' contains unusual tool name(s). Result: Warning

Total: 10 / 11 (Passed)
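The report does not show which tool names triggered the warning, so the following is only a sketch of how an allowed-tools field with conventional tool names is typically written in skill front matter. The tool list here is an assumption; any custom or team-specific tool would need to be spelled exactly as the host agent registers it to avoid this warning:

```yaml
---
name: team-perf-opt
allowed-tools: Read, Grep, Glob, Bash, Task
---
```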

Repository: catlog22/Claude-Code-Workflow (Reviewed)

