
team-perf-opt

Unified team skill for performance optimization. Coordinator orchestrates pipeline, workers are team-worker agents. Supports single/fan-out/independent parallel modes. Triggers on "team perf-opt".


Quality

55%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/team-perf-opt/SKILL.md

Quality

Discovery

25%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads more like an internal architecture document than a skill description meant to help Claude select the right tool. It focuses on implementation details (coordinator, workers, fan-out modes) rather than user-facing capabilities, and its trigger term is an artificial command rather than natural language a user would employ. The domain of 'performance optimization' is too broad without specifying what kind of performance is being optimized.

Suggestions

Replace the artificial trigger 'team perf-opt' with natural language triggers describing when users would need this, e.g., 'Use when the user asks to optimize application performance, reduce latency, or improve throughput'.

Specify concrete actions the skill performs, such as 'Profiles code hotspots, identifies memory leaks, benchmarks API response times' instead of abstract terms like 'orchestrates pipeline'.

Clarify the domain of performance optimization (e.g., web apps, databases, backend services) to reduce overlap with other optimization-related skills.
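Applying these suggestions, a revised SKILL.md frontmatter might look like the sketch below. The wording, trigger phrases, and domain scoping are illustrative examples only, not the maintainer's actual text:

```yaml
---
name: team-perf-opt
description: >
  Coordinates a team of worker agents to profile code hotspots, find
  memory leaks, and benchmark API response times in backend services.
  Use when the user asks to optimize application performance, reduce
  latency, or improve throughput.
---
```

A description in this shape answers "what" with concrete actions, answers "when" with natural user language instead of a command token, and scopes the domain to reduce overlap with other optimization skills.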

Dimension | Reasoning | Score

Specificity

The description uses vague language like 'performance optimization' and 'orchestrates pipeline' without specifying concrete actions. It mentions architectural patterns (fan-out, parallel modes) but doesn't describe what the skill actually does in terms of user-facing capabilities.

1 / 3

Completeness

It attempts to answer both 'what' (performance optimization with parallel modes) and 'when' (triggers on 'team perf-opt'), but the 'what' is vague and the 'when' is an artificial command rather than a natural use-case trigger. The explicit trigger clause prevents a score of 1, but it's not meaningful guidance.

2 / 3

Trigger Term Quality

The trigger term 'team perf-opt' is technical jargon that no user would naturally say. Terms like 'coordinator', 'workers', 'fan-out', and 'pipeline' are internal implementation details, not natural user language.

1 / 3

Distinctiveness / Conflict Risk

The specific trigger phrase 'team perf-opt' provides some distinctiveness, but 'performance optimization' is extremely broad and could overlap with many other skills related to code optimization, database tuning, web performance, etc.

2 / 3

Total

6 / 12

Passed

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured orchestration skill that provides highly actionable, concrete guidance for coordinating a multi-agent performance optimization pipeline. The workflow clarity is strong with explicit validation checkpoints, error recovery paths, and clear pipeline sequencing across three parallel modes. Minor verbosity in some sections (delegation lock rationale, model selection guide) prevents a perfect conciseness score, but overall token efficiency is reasonable given the complexity of the orchestration being described.

Dimension | Reasoning | Score

Conciseness

The skill is fairly dense and information-rich, but includes some sections that could be tightened—e.g., the Model Selection Guide rationale column restates obvious points, and the Delegation Lock table is somewhat verbose. However, most content is non-trivial orchestration logic that Claude wouldn't inherently know.

2 / 3

Actionability

Provides concrete spawn_agent templates with exact parameters, specific tool call allowlists, named agent targeting examples, timeout handling sequences, and executable completion action code. The guidance is copy-paste ready for orchestration tasks.

3 / 3

Workflow Clarity

The pipeline modes (single, fan-out, independent) are clearly diagrammed with explicit sequencing. Validation checkpoints are present: benchmark regression triggers auto-FIX tasks, review-fix cycles have iteration limits with user escalation, agent health checks reconcile state, and timeout handling has a clear 3-step escalation (STATUS_CHECK → FINALIZE → close).

3 / 3
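The three-step timeout escalation noted above (STATUS_CHECK → FINALIZE → close) can be sketched roughly as follows. The agent-handle class and its `send`/`close` methods are hypothetical stand-ins, not the skill's actual interface:

```python
class AgentHandle:
    """Minimal stand-in for a worker-agent handle (hypothetical API)."""

    def __init__(self, responds_to=()):
        self.responds_to = set(responds_to)
        self.closed = False

    def send(self, message):
        # True if the (simulated) worker acknowledges the message.
        return message in self.responds_to

    def close(self):
        self.closed = True


def handle_timeout(agent):
    """Sketch of the 3-step escalation: STATUS_CHECK -> FINALIZE -> close."""
    if agent.send("STATUS_CHECK"):   # step 1: probe the worker for liveness
        return "recovered"
    if agent.send("FINALIZE"):       # step 2: ask for partial results
        return "finalized"
    agent.close()                    # step 3: give up and reclaim the slot
    return "closed"
```

Each step only fires if the previous one got no acknowledgement, so a live-but-slow worker is never force-closed.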

Progressive Disclosure

SKILL.md serves as a clear router/overview with well-signaled one-level-deep references to role files (roles/coordinator/role.md, etc.) and spec files (specs/pipelines.md, specs/team-config.json). The Role Registry table provides a clean navigation index. Content is appropriately split between the overview and referenced files.

3 / 3
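The one-level-deep layout described above corresponds to a structure along these lines. The file names are taken from the review itself; the exact tree shape is inferred, not confirmed:

```
team-perf-opt/
├── SKILL.md                # router/overview with the Role Registry table
├── roles/
│   └── coordinator/
│       └── role.md         # other role files follow the same pattern
└── specs/
    ├── pipelines.md
    └── team-config.json
```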

Total

11 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

Total

10 / 11

Passed

Repository: catlog22/Claude-Code-Workflow (Reviewed)
