
orchestration-protocols

Coordinates multiple agents with parallel task spawning, health monitoring, circuit breakers, and escalation paths. Use when managing parallel agents, handling agent timeouts, orchestrating agents, running tasks in parallel, executing agents concurrently, or fanning out tasks. Use when coordinating multi-agent task delegation.


Orchestration Protocols

Runtime patterns for managing delegated agents.

Active Steering

Intervene early when you spot:

| Signal | Action |
| --- | --- |
| Failing tests/builds | Check dependency resolution; revert if builds break |
| Unexpected file changes | Revert; enforce partition |
| Scope creep | Redirect to scoped files only |
| Circular behavior | Halt; switch approach |
| Intent misunderstanding | Clarify prompt; re-delegate |

When redirecting, explain why and how:

"Don't modify libs/data/src/lib/product.ts — shared across features. Add the new query in libs/data/src/lib/reviews.ts."

Sub-agents: Catch problems early (5 min in can save an hour). Background agents: Steer post-hoc — invest in prompt specificity and partition constraints upfront.

Background Agents

Run autonomously in isolated Git worktrees. Reserve for well-scoped tasks >5 min with clear acceptance criteria.

  • Spawn: Delegate Session → Background → Select agent → Enter prompt
  • Auto-compaction: At 95% token limit; use --resume to continue
  • No real-time monitoring: Invest in specific prompts, strict partition constraints, and acceptance criteria checklists upfront
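
The worktree isolation itself can be sketched with plain Git; the agent invocation step is whatever your CLI provides and is omitted here. The repo path and branch name are illustrative:

```shell
# Create a throwaway repo, then give the background task its own worktree.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=bot@example.com -c user.name=bot commit -q --allow-empty -m "base"

# One isolated worktree per background task, on its own branch,
# so the agent's changes never touch the primary checkout.
git worktree add -q "$repo-task" -b agent/task
git worktree list
```

Because each task gets its own branch and checkout, a misbehaving background agent can be discarded with `git worktree remove` without disturbing other work.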

Parallel Research Protocol

Spawn multiple research sub-agents in parallel only when ≥3 independent questions must be answered before implementation AND the answers span multiple codebase areas; otherwise handle the questions sequentially.

Spawn Strategy

| Rule | Detail |
| --- | --- |
| Divide by topic/area | Each researcher owns a coherent domain |
| Max 3–5 researchers | More creates diminishing returns and token waste |
| Focused scope per agent | Explicit dirs, file patterns, or questions |
| Economy/Standard tier | Manage cost for research sub-agents |

Prompt template:

```
Research: [specific question]
Scope: [files/directories to search]
Return: key findings, relevant file paths (with line numbers), patterns, unanswered questions
```

Result Merge Protocol

  1. Collect all results into single context
  2. Checkpoint: verify every researcher returned a result (no timeout or error); re-run any that failed before proceeding.
  3. Deduplicate (same file/pattern counts once)
  4. Resolve conflicts — specific evidence beats general observations
  5. Synthesize into concise context block for implementation prompts
  6. Checkpoint: confirm synthesized block covers every original question; mark any unanswered questions as blockers.
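
Steps 1–3 can be sketched with `jq`, which the document already assumes for verification. The results file layout here is an illustrative assumption, not a documented schema:

```shell
# Hypothetical collected-results file; the schema is illustrative only.
cat > /tmp/results.json <<'EOF'
{"agents":[
  {"name":"A","findings":[{"file":"libs/cart/src/lib/total.ts","note":"defines calculateTotal"}]},
  {"name":"B","findings":[{"file":"libs/cart/src/lib/total.ts","note":"defines calculateTotal"},
                          {"file":"libs/cart/src/lib/discounts.ts","note":"applies coupons"}]}
]}
EOF

# Steps 1 and 3: flatten every researcher's findings into one list,
# counting identical file/note pairs only once.
jq '[.agents[].findings[]] | unique_by(.file + .note)' /tmp/results.json
```

Conflict resolution (step 4) stays manual: when two researchers disagree, prefer the finding that cites a concrete file and line.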

Batch Reviews

  • Group by domain (UI, data); run fast reviews in parallel for independent outputs
  • Review sequentially when outputs share the same partition boundary
  • Combine related artifacts into one panel question when they share acceptance criteria
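
The parallel half of this split can be sketched in shell; `run_review` below is a placeholder for whatever review command you actually use:

```shell
# Placeholder review command; substitute your real reviewer.
run_review() { sleep 0.1; echo "review($1): ok"; }

# Independent domains: fan out, then gate on every exit status.
pids=()
for domain in ui data; do
  run_review "$domain" > "/tmp/review-$domain.log" &
  pids+=("$!")
done
fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1
done
[ "$fail" -eq 0 ] && echo "all reviews passed"
```

Outputs that share a partition boundary should skip the fan-out and run one after another instead.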

Context Compaction

Summarize prior phase output before passing to the next agent. Extract: files changed, key decisions, verification (pass/fail), blockers. Discard: raw tool output, reasoning traces, failed attempts.

Template:

```markdown
### Prior Phase Output
**Phase [N] — [Agent Name] — [Task Title]**
- Files changed: [list]
- Decisions: [key decisions affecting downstream work]
- Verification: [lint ✅ | types ✅ | tests ✅]
- Blockers: [none | list]
```

Concrete example:

```markdown
### Prior Phase Output
**Phase 2 — Researcher A — Find usages of `calculateTotal()`**
- Files changed: none (read-only research)
- Decisions: `calculateTotal` lives in `libs/cart/src/lib/total.ts`; new logic should live in `libs/cart/src/lib/discounts.ts`
- Verification: lint ✅ | types ✅ | unit smoke test (cart total) ✅
- Blockers: design question on rounding behavior (see `docs/rounding.md`)
```

Health & Recovery Reference

Detailed Agent Health Monitoring, Error Recovery Playbook, and Agent Circuit Breaker tables have been moved to REFERENCE.md to keep this skill concise. See REFERENCE.md in this directory for thresholds, recovery steps, and escalation flows.

CLI examples (spawn & monitor)

These use the OpenCastle CLI (`npx opencastle` or `bin/cli.mjs`):

```shell
opencastle run --file convoy.yml --dry-run
opencastle run --file convoy.yml --verbose
opencastle run --resume
opencastle run --status
opencastle run --retry-failed
```

Post-run verification (copy-paste checks):

```shell
if [ $? -ne 0 ]; then
  echo "opencastle run failed — inspect .opencastle/convoy.log" \
    && tail -n 200 .opencastle/convoy.log && exit 1
fi

npx opencastle run --status

grep -i "error\|failed" .opencastle/convoy.log || echo "no obvious errors in logs"
```

Validation & Verification Checkpoints

| Phase | Check | Command / Action |
| --- | --- | --- |
| Pre-spawn | Inputs present (task, scope, acceptance criteria) | `test -s convoy.yml \|\| exit 1` |
| During-run | Tail for fatal errors | `tail -F .opencastle/convoy.log \| grep -iE "fatal\|error"` |
| Pre-merge | All agents exited 0 | `jq -e 'all(.agents[]; .exit_code == 0)' .opencastle/results.json` |
| Output schema | Required fields present | `jq -e 'all(.agents[]; .findings and .file_paths)' .opencastle/results.json` |
| Post-merge | Lint + smoke tests pass | `npm run lint && npm test -- -t "smoke"` |
| Blocker | Any failure | Block merge; reopen to original researcher(s) |
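
The pre-merge and schema checks can be wired into one gate function. The `results.json` layout is an assumption, demonstrated against a minimal sample file:

```shell
# Gate: refuse to merge unless every agent exited 0 and returned
# the required fields. The schema is illustrative, not documented.
gate() {
  local results="$1"
  jq -e 'all(.agents[]; .exit_code == 0)' "$results" >/dev/null \
    || { echo "an agent exited non-zero"; return 1; }
  jq -e 'all(.agents[]; .findings and .file_paths)' "$results" >/dev/null \
    || { echo "result schema incomplete"; return 1; }
  echo "gate passed"
}

# Demo against a minimal passing sample:
cat > /tmp/sample-results.json <<'EOF'
{"agents":[{"name":"A","exit_code":0,"findings":["x"],"file_paths":["a.ts"]}]}
EOF
gate /tmp/sample-results.json
```

Using `all()` matters here: `jq -e '.agents[] | .exit_code == 0'` only reflects the last agent's status, so one early failure could slip through.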
Repository: monkilabs/opencastle