Check workflow delegation prompts against agent role definitions for content separation violations. Detects conflicts, duplication, boundary leaks, and missing contracts. Triggers on "check delegation", "delegation conflict", "prompt vs role check".
Invoked when user requests "check delegation", "delegation conflict", "prompt vs role check", or when reviewing workflow skill quality. </purpose>
<required_reading>
Parse $ARGUMENTS to identify what to check.
| Signal | Scope |
|---|---|
| File path to command .md | Single command + its agents |
| File path to agent .md | Single agent + commands that spawn it |
| Directory path (e.g., .claude/skills/team-*/) | All commands + agents in that skill |
| "all" or no args | Scan all .claude/commands/, .claude/skills/*/, .claude/agents/ |
If ambiguous, ask:
AskUserQuestion(
header: "Scan Scope",
question: "What should I check for delegation conflicts?",
options: [
{ label: "Specific skill", description: "Check one skill directory" },
{ label: "Specific command+agent pair", description: "Check one command and its spawned agents" },
{ label: "Full scan", description: "Scan all commands, skills, and agents" }
]
)

For each command file in scope:
2a. Extract Agent() calls from commands:
# Search both Agent() (current) and Task() (legacy GSD) patterns
grep -n "Agent(\|Task(" "$COMMAND_FILE"
grep -n "subagent_type" "$COMMAND_FILE"

For each Agent() call, extract:
- subagent_type → agent name
- the prompt content (inside prompt=)

2b. Locate agent definitions:
For each subagent_type found:
# Check standard locations
ls .claude/agents/${AGENT_NAME}.md 2>/dev/null
ls .claude/skills/*/agents/${AGENT_NAME}.md 2>/dev/null

2c. Build pair map:
$PAIRS = [
{
command: { path, agent_calls: [{ line, subagent_type, prompt_content }] },
agent: { path, role, sections, quality_gate, output_contract }
}
]

If an agent file cannot be found, record it as MISSING_AGENT — this is itself a finding.
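Steps 2a-2c can be sketched as follows. This is a minimal sketch, not the skill's implementation: the single regex for `Agent()`/`Task()` calls and the assumption that `subagent_type="..."` appears inside each call are simplifications (real prompts may nest parentheses across many lines).

```python
import re
from pathlib import Path

def build_pairs(command_file):
    """Sketch of 2a-2c: find Agent()/Task() calls, then locate each agent's .md file."""
    text = Path(command_file).read_text()
    pairs = []
    # Naive match: assumes subagent_type="..." occurs inside the call's argument list.
    for m in re.finditer(r'(?:Agent|Task)\(.*?subagent_type\s*=\s*"([^"]+)"', text, re.S):
        name = m.group(1)
        line = text[:m.start()].count("\n") + 1
        # 2b: check the two standard agent locations
        candidates = [Path(f".claude/agents/{name}.md"),
                      *Path(".claude/skills").glob(f"*/agents/{name}.md")]
        agent_path = next((p for p in candidates if p.exists()), None)
        # 2c: one pair-map entry per call; a missing agent file is itself a finding
        pairs.append({"line": line, "subagent_type": name,
                      "agent": agent_path, "missing": agent_path is None})
    return pairs
```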
For each Agent() call, extract structured blocks from the prompt content:
| Block | What It Contains |
|---|---|
| <objective> | What to accomplish |
| <files_to_read> | Input file paths |
| <additional_context> / <planning_context> / <verification_context> | Runtime parameters |
| <output> / <expected_output> | Output format/location expectations |
| <quality_gate> | Per-invocation quality checklist |
| <deep_work_rules> / <instructions> | Cross-cutting policy or revision instructions |
| <downstream_consumer> | Who consumes the output |
| <success_criteria> | Success conditions |
| Free-form text | Unstructured instructions |
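Block extraction from a prompt string can be sketched as below — an assumption-laden sketch that treats every `<tag>...</tag>` pair as a block and whatever remains as free-form text (it does not handle nested same-name tags):

```python
import re

def extract_blocks(prompt_text):
    """Pull <tag>...</tag> blocks out of a delegation prompt; leftover text is free-form."""
    blocks = {}
    remainder = prompt_text
    # \1 backreference ensures the closing tag matches the opening tag's name
    for m in re.finditer(r"<(\w+)>(.*?)</\1>", prompt_text, re.S):
        blocks[m.group(1)] = m.group(2).strip()
        remainder = remainder.replace(m.group(0), "")
    blocks["_freeform"] = remainder.strip()
    return blocks
```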
Also detect ANTI-PATTERNS in prompt content:
For each agent file, extract:
| Section | Key Content |
|---|---|
| <role> | Identity, spawner, responsibilities, mandatory read |
| <philosophy> | Guiding principles |
| <upstream_input> | How the agent interprets input |
| <output_contract> | Return markers (COMPLETE/BLOCKED/CHECKPOINT) |
| <quality_gate> | Self-check criteria |
| Domain sections | All <section_name> tags with their content |
| YAML frontmatter | name, description, tools |
Question: Does the delegation prompt redefine the agent's identity?
Check: Scan prompt content for:
- Identity statements that restate or redefine the agent's <role> section

Allowed: References to mode ("standard mode", "revision mode") that the agent's <role> already lists in "Spawned by:".
Severity: error if prompt redefines role; warning if prompt adds responsibilities not in agent's <role>.
Question: Does the delegation prompt embed domain knowledge that belongs in the agent?
Check: Scan prompt content for:
- Decision tables (e.g., | Condition | Action |)
- Calibration examples (e.g., | TOO VAGUE | JUST RIGHT |)

Exception: <deep_work_rules> is an acceptable cross-cutting policy pattern from GSD — flag as info only.
Severity: error if prompt contains domain tables/examples that duplicate agent content; warning if prompt contains heuristics not in agent.
Question: Do the prompt's quality checks overlap or conflict with the agent's own <quality_gate>?
Check: Compare prompt <quality_gate> / <success_criteria> items against the agent's <quality_gate> items:
- Duplicate items → warning (redundant, may diverge)
- Contradictory items → error
- Items in the agent's gate missing from the prompt → info

Severity: error for contradictions; warning for duplicates; info for gaps.
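The mechanical part of this comparison can be sketched as a set comparison over normalized items — a sketch under the assumption that duplicates and gaps are detectable by text match, while contradictions need semantic judgment and are left out:

```python
def compare_gates(prompt_items, agent_items):
    """Classify overlap between a prompt's quality checks and the agent's own gate.

    Duplicates -> warning (may diverge over time); agent-only items -> info (gap).
    Contradictions require semantic judgment and are not detected here.
    """
    norm = lambda s: " ".join(s.lower().split())  # case- and whitespace-insensitive
    prompt_set = {norm(i) for i in prompt_items}
    agent_set = {norm(i) for i in agent_items}
    return {
        "warning": sorted(prompt_set & agent_set),  # duplicated checks
        "info": sorted(agent_set - prompt_set),     # agent-only gaps
    }
```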
Question: Does the prompt's expected output format conflict with the agent's <output_contract>?
Check:
- Compare prompt <expected_output> markers against the agent's <output_contract> return markers
- Flag mismatches (e.g., prompt expects ## DONE, agent returns ## TASK COMPLETE)

Severity: error if return markers conflict; warning if format expectations unspecified on either side.
Question: Does the delegation prompt dictate HOW the agent should work?
Check: Scan prompt for:
- Step-by-step process instructions beyond the <objective> scope

Allowed: <instructions> block for revision mode (telling the agent what changed, not how to work).
Severity: error if prompt overrides agent's process; warning if prompt suggests process hints.
Question: Does the prompt make decisions that belong to the agent's domain?
Check:
- Decisions the agent's <philosophy> or domain sections own
- Decisions the agent's <context_fidelity> says are "Claude's Discretion"

Allowed: Passing through user-locked decisions from CONTEXT.md — this is proper delegation, not authority conflict.
Severity: error if prompt makes domain decisions agent should own; info if prompt passes through user decisions (correct behavior).
Question: Are the delegation handoff points properly defined?
Check:
- Agent defines <output_contract> with return markers → does the command handle all markers?
- Agent expects <files_to_read> — does the prompt provide it?
- Agent defines <upstream_input> — does the prompt provide a matching input structure?

Severity: error if return marker handling is missing; warning if agent expects input the prompt doesn't provide.
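The return-marker half of this check can be sketched as below. It assumes markers are the three names listed in the agent-section table (COMPLETE/BLOCKED/CHECKPOINT) and that a plain substring search of the command text is enough to count as "handled" — both simplifications:

```python
import re

def check_marker_handling(command_text, agent_text):
    """D7 sketch: every return marker the agent can emit should be handled by the command."""
    m = re.search(r"<output_contract>(.*?)</output_contract>", agent_text, re.S)
    if not m:
        return ["MISSING_CONTRACT"]
    # Assumption: markers are the conventional COMPLETE/BLOCKED/CHECKPOINT names
    markers = re.findall(r"\b(COMPLETE|BLOCKED|CHECKPOINT)\b", m.group(1))
    return [f"UNHANDLED:{mk}" for mk in sorted(set(markers)) if mk not in command_text]
```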
For each command-agent pair, aggregate findings:
{command_path} → {agent_name}
Agent() at line {N}:
D1 (Role Re-def): {PASS|WARN|ERROR} — {detail}
D2 (Domain Leak): {PASS|WARN|ERROR} — {detail}
D3 (Quality Gate): {PASS|WARN|ERROR} — {detail}
D4 (Output Format): {PASS|WARN|ERROR} — {detail}
D5 (Process Override): {PASS|WARN|ERROR} — {detail}
D6 (Scope Authority): {PASS|WARN|ERROR} — {detail}
  D7 (Missing Contract): {PASS|WARN|ERROR} — {detail}

| Verdict | Condition |
|---|---|
| CLEAN | 0 errors, 0-2 warnings |
| REVIEW | 0 errors, 3+ warnings |
| CONFLICT | 1+ errors |
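The verdict table above reduces to a two-branch rule (a direct transcription, shown here only to make the thresholds explicit):

```python
def verdict(errors, warnings):
    """Map finding counts to the verdict table: errors dominate, then warning count."""
    if errors >= 1:
        return "CONFLICT"
    return "REVIEW" if warnings >= 3 else "CLEAN"
```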
For each finding, provide:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DELEGATION-CHECK ► SCAN COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Scope: {description}
Pairs checked: {N} command-agent pairs
Findings: {E} errors, {W} warnings, {I} info
Verdict: {CLEAN | REVIEW | CONFLICT}
| Pair | D1 | D2 | D3 | D4 | D5 | D6 | D7 |
|------|----|----|----|----|----|----|-----|
| {cmd} → {agent} | ✅ | ⚠️ | ✅ | ✅ | ❌ | ✅ | ✅ |
| ... | | | | | | | |
{If CONFLICT: detailed findings with fix recommendations}
───────────────────────────────────────────────────────
## Fix Priority
1. {Highest severity fix}
2. {Next fix}
...
───────────────────────────────────────────────────────

<success_criteria>
- Every Agent() call in scope checked against its agent definition on D1-D7
- Allowed patterns (mode references, <deep_work_rules>, passed-through user decisions) reported as info, not errors
- Verdict and fix priority reported for each command-agent pair
</success_criteria>