Scientific delegation framework for orchestrators — provide observations and success criteria while preserving agent autonomy. Use when assigning work to sub-agents, before invoking the Agent tool, or when preparing delegation prompts for specialist agents.
Workflow Reference: See the agent-orchestration skill for the complete delegation flow, DONE/BLOCKED signaling protocol, and agent selection guide.
CRITICAL: You are an orchestrator. Complete ALL steps before invoking the Agent tool. Incomplete preparation causes failed delegations, wasted agent context, and poor outcomes.
<user_instructions>$ARGUMENTS</user_instructions>
```mermaid
flowchart TD
    START[Skill loaded] --> CHECK{user_instructions content?}
    CHECK -->|Empty or literal $ARGUMENTS| WAIT[Complete Step 1 only\nState: 'Delegation framework loaded. Awaiting task.'\nSTOP — wait for user task]
    CHECK -->|Contains a task| FULL[Complete ALL steps 1–10\nFill every section with actual content\nDo NOT skip or summarize steps]
```

Your role as orchestrator — apply these throughout all 10 steps:
Reason: Agents have 200k context windows and specialized expertise. Pre-gathering causes context rot and duplicates work. Prescribing HOW limits agents from discovering better solutions.
ACTION: Activate the agent-orchestration skill NOW:
```
Skill(skill: "agent-orchestration:agent-orchestration")
```

Then load domain-specific skills based on task type:
```mermaid
flowchart TD
    START[Task Received] --> ORCH[Load agent-orchestration]
    ORCH --> CHECK{Task domain?}
    CHECK -->|Python code| PY[python3-development]
    CHECK -->|Linting/code quality| LINT[holistic-linting]
    CHECK -->|GitLab CI| GL[gitlab-skill]
    CHECK -->|Git commits| CC[conventional-commits]
    CHECK -->|Package management| UV[uv]
    CHECK -->|Documentation| DOCS[mkdocs]
    CHECK -->|Pre-commit hooks| PRE[pre-commit]
    CHECK -->|Other domain| OTHER[Search available_skills list]
    PY & LINT & GL & CC & UV & DOCS & PRE & OTHER --> PROCEED[Proceed to Step 2]
```

Why: Domain skills contain specialized knowledge agents need. Loading before delegation ensures agents have access to project conventions and best practices.
ACTION: Select the task type that best matches this delegation:
Why: Task type determines context depth. Focused tasks need precise location. Investigative tasks need all observations. Architectural tasks need system-wide context.
TASK SUMMARY (write one clear sentence):
[Example: "Fix authentication failing for OAuth2 users" or "Investigate why CI pipeline times out on large PRs"]

ACTION: List ONLY data already in your context. Use "observed", "measured", "reported" language.
CRITICAL: Do NOT run commands to pre-gather data. Agents gather their own comprehensive data.
Why: Pre-gathering wastes your context, duplicates agent work, and causes context rot. Pass through existing observations; let agents collect fresh data.
OBSERVATIONS FROM USER:
[Example: "User reported: 'OAuth login redirects to 404'" or "User stated build fails on Python 3.12"]

OBSERVATIONS FROM PRIOR AGENTS (if any):
[Example: "context-gathering agent found 3 instances of deprecated auth.login() in src/handlers/"]

ERRORS ALREADY IN CONTEXT (verbatim, if any):
[Example: Exact error text already received — not pre-gathered by running commands now]

KNOWN LOCATIONS (file:line references already in context):
[Example: "src/auth/oauth.py:127 — where user reported issue occurs"]

ACTION: Define specific, measurable outcomes and verification methods.
Why: Clear success criteria prevent scope creep and tell agents exactly when they are done.
WHAT must be true when done (measurable outcome):
[Example: "OAuth login completes successfully for all providers" or "All pytest tests in test_auth.py pass"]

HOW will completion be verified:
[Example: "Run `pytest test_auth.py -v` — all tests pass" or "Manual test: log in with Google/GitHub/Microsoft accounts"]

ACTION: Define WHERE to look, WHAT to achieve, and WHY it matters. Focus on context, not implementation.
Why: World-building enables agents to understand the problem space and make informed decisions about HOW to solve it.
WHERE (problem location, scope boundaries):
[Example: "Authentication module at src/auth/ — OAuth handlers specifically" or "CI pipeline .github/workflows/test.yml"]

WHAT (identification criteria, acceptance criteria):
[Example: "OAuth redirect must return 200 status with valid session token" or "Pipeline must complete within 10 minutes"]

WHY (expected outcomes, user requirements):
[Example: "Users cannot log in with enterprise SSO accounts, blocking customer onboarding" or "Slow CI blocks PRs, reducing team velocity"]

ACTION: Describe the ecosystem and available tools. List capabilities — do not prescribe which to use.
Why: Agents choose tools based on their expertise. Prescribing tools limits discovery.
PROJECT ECOSYSTEM (language, package manager, build system):
[Example: "Python project using uv for all operations — activate uv skill" or "Node.js with pnpm workspaces"]

AUTHENTICATED CLIS (gh, glab, aws, etc.):
[Example: "gh CLI pre-authenticated for GitHub operations" or "glab configured for GitLab access"]

MCP TOOLS AVAILABLE (check your functions list):
[Example: "Excellent MCP servers installed — check <functions> list and prefer MCP tools (Ref, context7, exa) over built-in alternatives"]

PROJECT-SPECIFIC RESOURCES (scripts, reports, docs):
[Example: "Validation scripts in ./scripts/ — check README.md" or "Previous fixes documented in .claude/reports/"]

ACTION: Choose the agent type that best matches the task domain and requirements.
Why: Specialized agents have domain expertise and optimized workflows for their task types.
AGENT TYPE (from available subagent_types):
[Example: "python-cli-architect" or "linting-root-cause-resolver" or "context-gathering"]

RATIONALE (why this agent matches the task):
[Example: "Task involves Python code changes — python-cli-architect has Python expertise and best practices" or "Need comprehensive context without polluting orchestrator context — context-gathering agent is optimized for this"]

ACTION: Review your filled sections against each criterion. Mark pass or fail.
Why: This checklist catches anti-patterns that limit agent effectiveness before they reach the agent.
Verification checklist:
Prompt includes "Your ROLE_TYPE is sub-agent." — if not, add it to the Step 9 prompt

ACTION: Use your filled sections to construct the delegation prompt following this template.
Why: This structure ensures agents receive observations, success criteria, and autonomy to apply their expertise.
Copy this template and fill in from your worksheet:
```
Your ROLE_TYPE is sub-agent.

[Task summary from Step 2]

OBSERVATIONS:
[From Step 3 — verbatim, not paraphrased]

DEFINITION OF SUCCESS:
[From Step 4]

CONTEXT:
[From Step 5 — WHERE, WHAT, WHY]

YOUR TASK:
1. Perform comprehensive context gathering using available tools, skills, and resources
2. Form hypothesis based on evidence
3. Design and execute experiments
4. Verify findings against authoritative sources
5. Implement solution following best practices
6. Verify `/am-i-complete` criteria satisfied with evidence

AVAILABLE RESOURCES:
[From Step 6 — describe ecosystem, do not prescribe tools]
```

ACTION: Final check, then delegate.
READY TO DELEGATE? Mark pass or fail:
If all pass, invoke the Agent tool:
```
Agent(
  agent="[agent type from Step 7]",
  prompt="[your constructed prompt from Step 9]"
)
```

Why final check matters: One incomplete step causes agent confusion, wasted context, or failed delegation. Thirty seconds of verification saves ten minutes of back-and-forth.
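As an illustration only, a completed invocation for the OAuth scenario used in the worksheet examples might look like the sketch below. The agent name, file paths, and commands are hypothetical placeholders drawn from the example text, not verified project details:

```
Agent(
  agent="python-cli-architect",
  prompt="Your ROLE_TYPE is sub-agent.

Fix authentication failing for OAuth2 users.

OBSERVATIONS:
User reported: 'OAuth login redirects to 404'
context-gathering agent found 3 instances of deprecated auth.login() in src/handlers/

DEFINITION OF SUCCESS:
All pytest tests in test_auth.py pass, verified by running `pytest test_auth.py -v`

CONTEXT:
WHERE: Authentication module at src/auth/ — OAuth handlers specifically
WHAT: OAuth redirect must return 200 status with valid session token
WHY: Users cannot log in with enterprise SSO accounts, blocking customer onboarding

AVAILABLE RESOURCES:
Python project using uv for all operations; gh CLI pre-authenticated for GitHub operations"
)
```

Note that the prompt passes through observations verbatim and defines success criteria, but never prescribes HOW the agent should fix the issue.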