
team-ultra-analyze

Deep collaborative analysis team skill. All roles route via this SKILL.md. Beat model is coordinator-only (monitor.md). Structure is roles/ + specs/. Triggers on "team ultra-analyze", "team analyze".

- Quality: 53% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.codex/skills/team-ultra-analyze/SKILL.md

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description focuses heavily on internal implementation details (roles, specs, coordinator patterns) rather than communicating what the skill does for the user or when it should be selected. It lacks concrete actions, natural trigger terms, and explicit use-case guidance, making it very difficult for Claude to appropriately select this skill from a pool of alternatives.

Suggestions

- Replace internal architecture details with concrete actions the skill performs (e.g., 'Performs multi-perspective analysis of complex problems by coordinating specialized analytical roles including X, Y, Z').
- Add an explicit 'Use when...' clause describing real user scenarios and needs (e.g., 'Use when the user needs in-depth analysis from multiple angles, such as strategic planning, risk assessment, or complex decision-making').
- Include natural keywords users would actually say when needing this capability (e.g., 'deep analysis', 'multi-angle review', 'comprehensive evaluation', 'team analysis') rather than relying solely on artificial command triggers.
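Taken together, these suggestions point toward a rewritten frontmatter description. The sketch below is hypothetical: the role names and wording are illustrative examples, not taken from the actual skill.

```yaml
# Hypothetical revision of the SKILL.md frontmatter (illustrative only).
name: team-ultra-analyze
description: >
  Performs multi-perspective analysis of complex problems by coordinating
  specialized analytical roles (e.g. risk, architecture, strategy).
  Use when the user needs in-depth analysis from multiple angles, such as
  strategic planning, risk assessment, or complex decision-making.
  Triggers on phrases like "deep analysis", "multi-angle review",
  "comprehensive evaluation", "team analyze", "team ultra-analyze".
```

Note how this version leads with what the skill does and when to use it, keeping the command-style triggers as a fallback rather than the primary discovery signal.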

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language like 'deep collaborative analysis' without listing any concrete actions. It describes internal architecture ('roles/ + specs/', 'coordinator-only') rather than what the skill actually does for the user. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('deep collaborative analysis') and the 'when' is limited to artificial trigger commands rather than describing real use cases. There is no explicit 'Use when...' clause describing scenarios or user needs. | 1 / 3 |
| Trigger Term Quality | It includes explicit trigger phrases ('team ultra-analyze', 'team analyze'), but these are artificial command-style triggers rather than natural keywords a user would organically say. The term 'analyze' is relevant but overly generic without domain context. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific trigger phrases 'team ultra-analyze' and 'team analyze' provide some distinctiveness, but the core concept of 'analysis' is extremely broad and could overlap with many other analytical skills. The internal architecture details don't help with disambiguation from a user perspective. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured orchestration skill that provides highly actionable, concrete guidance for a complex multi-agent analysis pipeline. The progressive disclosure is excellent, with SKILL.md serving as a clear router to role-specific files. The main weakness is moderate verbosity — some sections (agent coordination examples, ASCII diagrams) could be tightened without losing clarity, though the complexity of the multi-agent system partially justifies the length.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly long and includes some sections that could be tightened (e.g., the architecture ASCII diagram is helpful but verbose, the agent coordination section repeats patterns). However, most content is genuinely instructive for a complex multi-agent orchestration skill, so the verbosity is partially justified by complexity. | 2 / 3 |
| Actionability | Provides concrete spawn_agent templates with exact parameter structures, specific tool call allowlists/blocklists, executable coordination patterns with wait_agent/close_agent, session directory layouts, and completion action flows with request_user_input schemas. Highly copy-paste ready. | 3 / 3 |
| Workflow Clarity | The pipeline modes (Quick/Standard/Deep) are clearly sequenced with explicit phase ordering. The Delegation Lock table provides a validation checkpoint before every tool call. Agent health checks, error handling with specific resolutions, and the parallel phase coordination pattern with batch spawn + wait provide clear feedback loops. | 3 / 3 |
| Progressive Disclosure | SKILL.md acts as a router/overview, with role-specific details properly delegated to roles/&lt;name&gt;/role.md files (one level deep, clearly linked in the Role Registry table). Specs are referenced via a single link. The structure cleanly separates coordination logic (here) from domain instructions (role files). | 3 / 3 |
| Total | | 11 / 12 |

Passed
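The "batch spawn + wait" coordination pattern praised in the review can be sketched in miniature. In this sketch, `spawn_agent`, `wait_agent`, and `close_agent` are stand-in stubs, not the skill's actual tool calls; the point is only the shape of the pattern, i.e. spawning all role agents before waiting on any of them.

```python
# Hypothetical sketch of the "batch spawn + wait" phase pattern.
# spawn_agent / wait_agent / close_agent are illustrative stubs.

def spawn_agent(role, task):
    # Stub: in the real skill this would launch a sub-agent for one role.
    return {"role": role, "task": task, "status": "running"}

def wait_agent(agent):
    # Stub: block until the sub-agent finishes and return its result.
    agent["status"] = "done"
    return f"{agent['role']} analysis of {agent['task']}"

def close_agent(agent):
    # Stub: release the finished sub-agent.
    agent["status"] = "closed"

def run_parallel_phase(roles, task):
    # Spawn the whole batch first, then wait, so roles can run
    # concurrently instead of serially (spawn, wait, spawn, wait...).
    agents = [spawn_agent(role, task) for role in roles]
    results = [wait_agent(a) for a in agents]
    for a in agents:
        close_agent(a)
    return results

results = run_parallel_phase(["risk", "architecture"], "migration plan")
```

The key design choice the review credits is the ordering: separating the spawn loop from the wait loop is what makes the phase parallel.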

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 |

Passed

Repository: catlog22/Claude-Code-Workflow (Reviewed)

