
# team-ultra-analyze

> Deep collaborative analysis team skill. All roles route via this SKILL.md. Beat model is coordinator-only (monitor.md). Structure is roles/ + specs/. Triggers on "team ultra-analyze", "team analyze".

Overall score: 63

- **Quality:** 55%. Does it follow best practices?
- **Impact:** Pending. No eval scenarios have been run.
- **Security (by Snyk):** Passed. No known issues.

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./.codex/skills/team-ultra-analyze/SKILL.md
```

## Quality

### Discovery: 25%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*

This description focuses heavily on internal implementation details (roles, specs, coordinator patterns) rather than communicating what the skill does for the user. It lacks concrete actions, natural trigger terms, and a clear explanation of when and why Claude should select this skill. The artificial command-style triggers suggest it's designed for explicit invocation rather than intelligent skill selection.

**Suggestions**

- Replace internal architecture details with concrete actions the skill performs (e.g., 'Conducts multi-perspective analysis of complex topics by examining evidence, counterarguments, and synthesizing findings').
- Add natural trigger terms users would actually say, such as 'deep analysis', 'thorough research', 'examine from multiple angles', 'comprehensive review'.
- Add a clear 'Use when...' clause describing scenarios in natural language, e.g., 'Use when the user needs in-depth, multi-faceted analysis of complex problems or wants multiple perspectives on a topic.'
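To make the suggestions above concrete, a rewritten description might look like the frontmatter sketch below. The field names follow the common SKILL.md convention, and the wording is only an illustration, not the skill's actual metadata:

```yaml
---
name: team-ultra-analyze
description: >-
  Conducts multi-perspective analysis of complex topics by examining
  evidence and counterarguments and synthesizing findings across roles.
  Use when the user asks for deep analysis, thorough research, a
  comprehensive review, or wants a problem examined from multiple angles.
---
```

Note how this version leads with what the skill does and closes with a natural-language "Use when" clause, leaving internal structure (roles, specs, coordinator) out of the description entirely.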

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language like 'deep collaborative analysis' without listing any concrete actions. It describes internal architecture ('roles/ + specs/', 'coordinator-only') rather than what the skill actually does for the user. | 1 / 3 |
| Completeness | It vaguely addresses 'what' (collaborative analysis) and includes explicit trigger phrases, but the 'what' is too abstract to be useful and the 'when' relies on artificial command terms rather than natural use-case triggers. The trigger phrases partially satisfy the 'when' requirement. | 2 / 3 |
| Trigger Term Quality | The trigger terms 'team ultra-analyze' and 'team analyze' are artificial command phrases, not natural language a user would say. No natural keywords like 'analyze data', 'research', or domain-specific terms are included. | 1 / 3 |
| Distinctiveness / Conflict Risk | The artificial trigger phrases 'team ultra-analyze' and 'team analyze' reduce conflict risk since they're unlikely to match other skills accidentally, but the core concept of 'analysis' is extremely broad and could overlap with many analytical skills. | 2 / 3 |
| **Total** | | **6 / 12** |

Passed

### Implementation: 85%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*

This is a well-structured orchestration skill that clearly defines the coordinator's responsibilities, worker delegation patterns, and pipeline modes. Its actionability and workflow clarity are strong, with concrete templates, explicit tool allowlists, and timeout cascades. The main weakness is moderate verbosity—some sections (model selection, JS pseudocode examples, agent coordination details) could be tightened without losing clarity.

**Suggestions**

- Tighten the 'v4 Agent Coordination' section by removing the JavaScript-style pseudocode comments and consolidating the parallel phase coordination into a more compact reference table or shorter example.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly detailed and well-structured, but includes some sections that could be tightened, e.g., the JavaScript-style pseudocode for parallel coordination, the lengthy delegation lock table, and the model selection guide add bulk. Some of this is genuinely novel configuration, but the overall document is verbose for what is essentially a routing/orchestration spec. | 2 / 3 |
| Actionability | The skill provides concrete spawn_agent templates with exact parameter names, specific tool call allowlists/blocklists, precise session directory structures, timeout values, and copy-paste-ready coordination patterns. The delegation lock table is immediately executable as a decision matrix. | 3 / 3 |
| Workflow Clarity | The pipeline modes (Quick/Standard/Deep) are clearly diagrammed with explicit sequencing. The timeout cascade (wait 30min → STATUS_CHECK 3min → FINALIZE 3min → close) provides a clear error recovery loop. Agent health checks, resume/continue commands, and completion actions all have explicit validation and feedback steps. | 3 / 3 |
| Progressive Disclosure | The SKILL.md acts as a clear router/overview with a well-organized role registry table linking to one-level-deep role files (`roles/<name>/role.md`). Specs are referenced separately. The architecture diagram, role router logic, and section organization make navigation straightforward. Content is appropriately split between this overview and the referenced role files. | 3 / 3 |
| **Total** | | **11 / 12** |

Passed
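The timeout cascade praised under Workflow Clarity (wait 30min → STATUS_CHECK 3min → FINALIZE 3min → close) is a simple escalation loop. A minimal sketch of that pattern, assuming hypothetical `poll`, `send`, and `close` callbacks (the skill's actual agent API is not shown in this review):

```python
import time

def run_cascade(poll, send, close, wait_s=1800, status_s=180,
                finalize_s=180, interval=5):
    """Escalation loop: quiet wait, then STATUS_CHECK, then FINALIZE, then close.

    Returns the phase in which the worker completed, or 'closed' if it never
    responded. poll/send/close are caller-supplied callbacks; their names and
    signatures are illustrative assumptions, not the skill's real interface.
    """
    phases = [
        ("wait", wait_s, None),                      # initial quiet wait
        ("status_check", status_s, "STATUS_CHECK"),  # nudge for a status report
        ("finalize", finalize_s, "FINALIZE"),        # last call to wrap up
    ]
    for name, budget, command in phases:
        if command is not None:
            send(command)                            # escalate before waiting
        deadline = time.monotonic() + budget
        while time.monotonic() < deadline:
            if poll():                               # worker reported completion
                return name
            time.sleep(interval)
    close()                                          # exhausted every phase
    return "closed"
```

The point of the structure is that each escalation step has its own bounded budget, so a silent worker can never stall the coordinator indefinitely.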

### Validation: 90%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

Validation: 10 / 11 checks passed

**Validation for skill structure**

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| **Total** | | **10 / 11** |

Passed
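The single warning above concerns the `allowed-tools` frontmatter key in SKILL.md. A conventional entry lists standard tool names; the values below are illustrative, not the skill's actual configuration:

```yaml
---
name: team-ultra-analyze
allowed-tools: Read, Grep, Glob, Bash
---
```

An "unusual tool name" warning typically means one of the listed entries does not match a tool the agent runtime recognizes, so each name is worth checking against the host agent's tool list.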

**Repository:** catlog22/Claude-Code-Workflow (Reviewed)
