Cancel any active OMC mode (autopilot, ralph, ultrawork, ultraqa, swarm, ultrapilot, pipeline, team)
Intelligent cancellation that detects and cancels the active OMC mode.
The cancel skill is the standard way to complete and exit any OMC mode.
When the stop hook detects work is complete, it instructs the LLM to invoke
this skill for proper state cleanup. If cancel fails or is interrupted,
retry with the `--force` flag, or wait for the 2-hour staleness timeout as
a last resort.
Automatically detects which mode is active and cancels it:
`/oh-my-claudecode:cancel`

Or say: "cancelomc", "stopomc"
/oh-my-claudecode:cancel follows the session-aware state contract:
- It calls `state_list_active` and `state_get_status`, navigating `.omc/state/sessions/{sessionId}/…` to discover which mode is active.
- Legacy `.omc/state/*.json` files are consulted only as a compatibility fallback if the session id is missing or empty.
- Swarm uses a shared SQLite database and marker (`.omc/state/swarm.db` / `.omc/state/swarm-active.marker`) and is not session-scoped.
- It calls `state_clear` with the session id to remove only the matching session files; modes stay bound to their originating session.

Active modes are still cancelled in dependency order.
Use --force or --all when you need to erase every session plus legacy artifacts, e.g., to reset the workspace entirely.
`/oh-my-claudecode:cancel --force`
`/oh-my-claudecode:cancel --all`

Steps under the hood:

- `state_list_active` enumerates `.omc/state/sessions/{sessionId}/…` to find every known session.
- `state_clear` runs once per session to drop that session's files.
- `state_clear` without `session_id` removes legacy files under `.omc/state/*.json`, `.omc/state/swarm*.db`, and compatibility artifacts (see list).
- Team artifacts (`~/.claude/teams/*/`, `~/.claude/tasks/*/`, `.omc/state/team-state.json`) are best-effort cleared as part of the legacy fallback.
Every state_clear command honors the session_id argument, so even force mode still uses the session-aware paths first before deleting legacy files.
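As a rough on-disk illustration of that sweep order, the fragment below mirrors what force mode does with the documented paths. The real mechanism is the `state_clear` tool; `force_sweep` is a hypothetical helper, not part of the skill.

```shell
# Sketch only: mirrors the documented force-mode order on disk.
# Session-scoped files go first, legacy compatibility files second.
force_sweep() {
  STATE_DIR=".omc/state"
  # Pass 1: per-session cleanup (state_clear with session_id)
  for session in "$STATE_DIR"/sessions/*/; do
    if [ -d "$session" ]; then rm -rf "$session"; fi
  done
  rmdir "$STATE_DIR/sessions" 2>/dev/null || true
  # Pass 2: legacy fallback (state_clear without session_id)
  rm -f "$STATE_DIR"/*.json \
        "$STATE_DIR"/swarm.db "$STATE_DIR"/swarm.db-wal "$STATE_DIR"/swarm.db-shm \
        "$STATE_DIR"/swarm-tasks.db "$STATE_DIR"/swarm-active.marker
}
```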
Legacy compatibility list (removed only under --force/--all):
- `.omc/state/autopilot-state.json`
- `.omc/state/ralph-state.json`
- `.omc/state/ralph-plan-state.json`
- `.omc/state/ralph-verification.json`
- `.omc/state/ultrawork-state.json`
- `.omc/state/ultraqa-state.json`
- `.omc/state/swarm.db`
- `.omc/state/swarm.db-wal`
- `.omc/state/swarm.db-shm`
- `.omc/state/swarm-active.marker`
- `.omc/state/swarm-tasks.db`
- `.omc/state/ultrapilot-state.json`
- `.omc/state/ultrapilot-ownership.json`
- `.omc/state/pipeline-state.json`
- `.omc/state/omc-teams-state.json`
- `.omc/state/plan-consensus.json`
- `.omc/state/ralplan-state.json`
- `.omc/state/boulder.json`
- `.omc/state/hud-state.json`
- `.omc/state/subagent-tracking.json`
- `.omc/state/subagent-tracker.lock`
- `.omc/state/rate-limit-daemon.pid`
- `.omc/state/rate-limit-daemon.log`
- `.omc/state/checkpoints/` (directory)
- `.omc/state/sessions/` (empty directory cleanup after clearing sessions)

When you invoke this skill:
```shell
# Check for --force or --all flags
FORCE_MODE=false
if [[ "$*" == *"--force"* ]] || [[ "$*" == *"--all"* ]]; then
  FORCE_MODE=true
fi
```

The skill now relies on the session-aware state contract rather than hard-coded file paths:
- Call `state_list_active` to enumerate `.omc/state/sessions/{sessionId}/…` and discover every active session.
- Call `state_get_status` to learn which mode is running (autopilot, ralph, ultrawork, etc.) and whether dependent modes exist.
- If an explicit `session_id` was supplied to `/oh-my-claudecode:cancel`, skip the legacy fallback entirely and operate solely within that session path; otherwise, consult legacy files in `.omc/state/*.json` only if the state tools report no active session. Swarm remains a shared SQLite/marker mode outside session scoping.
- Use force mode to clear every session plus legacy artifacts via `state_clear`. Direct file removal is reserved for legacy cleanup when the state tools report no active sessions.
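As a rough illustration of that detection order (session files, then the shared swarm marker, then the legacy fallback), the helper below walks the documented paths directly. `detect_active_mode` is hypothetical; the real skill goes through `state_list_active` / `state_get_status`.

```shell
# Hypothetical sketch of the detection order using the documented
# on-disk layout; the state_* tools are the real interface.
detect_active_mode() {
  sid="$1"
  if [ -n "$sid" ] && [ -d ".omc/state/sessions/$sid" ]; then
    # Session-scoped: report whatever mode files live in the session dir
    ls ".omc/state/sessions/$sid"
    return 0
  fi
  # Swarm is shared (SQLite + marker), never session-scoped
  if [ -f ".omc/state/swarm-active.marker" ]; then
    echo "swarm"
    return 0
  fi
  # Legacy fallback: only when no session id is available
  ls .omc/state/*-state.json 2>/dev/null || true
}
```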
Teams are detected by checking for config files in ~/.claude/teams/:
```shell
# Check for active teams
TEAM_CONFIGS=$(find ~/.claude/teams -maxdepth 2 -name config.json 2>/dev/null)
```

Two-pass cancellation protocol:
Pass 1: Graceful Shutdown
For each team found in ~/.claude/teams/:
1. Read config.json to get team_name and members list
2. For each non-lead member:
a. Send shutdown_request via SendMessage
b. Wait up to 15 seconds for shutdown_response
c. If response received: member terminates and is auto-removed
d. If timeout: mark member as unresponsive, continue to next
3. Log: "Graceful pass: X/Y members responded"

Pass 2: Reconciliation
After graceful pass:
1. Re-read config.json to check remaining members
2. If only lead remains (or config is empty): proceed to TeamDelete
3. If unresponsive members remain:
a. Wait 5 more seconds (they may still be processing)
b. Re-read config.json again
c. If still stuck: attempt TeamDelete anyway
   d. If TeamDelete fails: report manual cleanup path

TeamDelete + Cleanup:
1. Call TeamDelete() — removes ~/.claude/teams/{name}/ and ~/.claude/tasks/{name}/
2. Clear team state: state_clear(mode="team")
3. Check for linked ralph: state_read(mode="ralph") — if linked_team is true:
a. Clear ralph state: state_clear(mode="ralph")
b. Clear linked ultrawork if present: state_clear(mode="ultrawork")
4. Run orphan scan (see below)
5. Emit structured cancel report

Orphan Detection (Post-Cleanup):
After TeamDelete, verify no agent processes remain:
```shell
node "${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphans.mjs" --team-name "{team_name}"
```

The orphan scanner:

- Scans `ps aux` (Unix) or `tasklist` (Windows) for processes with a `--team-name` argument matching the deleted team.

Use `--dry-run` to inspect without killing. The scanner is safe to run multiple times.
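A minimal Unix-side equivalent of that scan might look like the fragment below; the real implementation lives in `cleanup-orphans.mjs`, and `list_orphans` here is a hypothetical stand-in that only lists candidates, never kills them.

```shell
# Hypothetical stand-in for the scanner's Unix process check: list (but
# do not kill) processes whose command line carries the deleted team's
# --team-name flag, excluding the grep process itself.
list_orphans() {
  team="$1"
  ps aux 2>/dev/null | grep -- "--team-name $team" | grep -v grep || true
}
```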
Structured Cancel Report:

```
Team "{team_name}" cancelled:
- Members signaled: N
- Responses received: M
- Unresponsive: K (list names if any)
- TeamDelete: success/failed
- Manual cleanup needed: yes/no
Path: ~/.claude/teams/{name}/ and ~/.claude/tasks/{name}/
```

Implementation note: The cancel skill is executed by the LLM, not as a bash script. When you detect an active team:
- Read `~/.claude/teams/*/config.json` to find active teams (checking `createdAt`).
- Send `SendMessage(type: "shutdown_request", recipient: member-name, content: "Cancelling")` to each member.
- Call `TeamDelete()` to clean up.
- Call `state_clear(mode="team", session_id)`.

Autopilot handles its own cleanup including linked ralph and ultraqa.
Autopilot:
1. `state_read(mode="autopilot", session_id)` to get the current phase.
2. `state_read(mode="ralph", session_id)`:
   - If `linked_ultrawork: true`, clear ultrawork first: `state_clear(mode="ultrawork", session_id)`
   - Then `state_clear(mode="ralph", session_id)`
3. `state_read(mode="ultraqa", session_id)`:
   - If active: `state_clear(mode="ultraqa", session_id)`
4. `state_write(mode="autopilot", session_id, state={active: false, ...existing})`

Ralph:
1. `state_read(mode="ralph", session_id)` to check for linked ultrawork.
2. If `linked_ultrawork: true` (the ultrawork state carries `linked_to_ralph: true`):
   - `state_clear(mode="ultrawork", session_id)`
   - `state_clear(mode="ralph", session_id)`

Ultrawork:
1. `state_read(mode="ultrawork", session_id)`
2. If `linked_to_ralph: true`, warn the user to cancel ralph instead (which cascades).
3. Otherwise: `state_clear(mode="ultrawork", session_id)`

UltraQA: clear directly with `state_clear(mode="ultraqa", session_id)`.
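The autopilot dependency cascade described above can be sketched as follows. `clear_mode` and `mark_inactive` are hypothetical stand-ins for the real `state_clear` and `state_write` tool calls; only the ordering (ultrawork, then ralph, then ultraqa, then autopilot) is taken from the protocol.

```shell
# Sketch of the autopilot cancel cascade. clear_mode / mark_inactive
# are hypothetical stand-ins for state_clear / state_write.
clear_mode()    { echo "state_clear mode=$1 session=$2"; }
mark_inactive() { echo "state_write mode=$1 session=$2 active=false"; }

cancel_autopilot_cascade() {
  sid="$1"
  clear_mode ultrawork "$sid"     # linked ultrawork first
  clear_mode ralph "$sid"         # then the linked ralph
  clear_mode ultraqa "$sid"       # then ultraqa, if present
  mark_inactive autopilot "$sid"  # autopilot state preserved for resume
}
```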
Report: "No active OMC modes detected. Use --force to clear all state files anyway."
The cancel skill runs as follows:
1. Parse the `--force` / `--all` flags, tracking whether cleanup should span every session or stay scoped to the current session id.
2. Call `state_list_active` to enumerate known session ids and `state_get_status` to learn the active mode (autopilot, ralph, ultrawork, etc.) for each session.
3. Call `state_clear` with that `session_id` to remove only the session's files, then run mode-specific cleanup (autopilot → ralph → …) based on the state tool signals.
4. In force mode, run `state_clear` per session, then a global `state_clear` without `session_id` to drop legacy files (`.omc/state/*.json`, compatibility artifacts) and report success. Swarm remains a shared SQLite/marker mode outside session scoping.
5. Team artifacts (`~/.claude/teams/*/`, `~/.claude/tasks/*/`, `.omc/state/team-state.json`) remain best-effort cleanup items invoked during the legacy/global pass.

State tools always honor the `session_id` argument, so even force mode still clears the session-scoped paths before deleting compatibility-only legacy state.
Mode-specific subsections below describe what extra cleanup each handler performs after the state-wide operations finish.
| Mode | Success Message |
|---|---|
| Autopilot | "Autopilot cancelled at phase: {phase}. Progress preserved for resume." |
| Ralph | "Ralph cancelled. Persistent mode deactivated." |
| Ultrawork | "Ultrawork cancelled. Parallel execution mode deactivated." |
| UltraQA | "UltraQA cancelled. QA cycling workflow stopped." |
| Swarm | "Swarm cancelled. Coordinated agents stopped." |
| Ultrapilot | "Ultrapilot cancelled. Parallel autopilot workers stopped." |
| Pipeline | "Pipeline cancelled. Sequential agent chain stopped." |
| Team | "Team cancelled. Teammates shut down and cleaned up." |
| Plan Consensus | "Plan Consensus cancelled. Planning session ended." |
| Force | "All OMC modes cleared. You are free to start fresh." |
| None | "No active OMC modes detected." |
| Mode | State Preserved | Resume Command |
|---|---|---|
| Autopilot | Yes (phase, files, spec, plan, verdicts) | /oh-my-claudecode:autopilot |
| Ralph | No | N/A |
| Ultrawork | No | N/A |
| UltraQA | No | N/A |
| Swarm | No | N/A |
| Ultrapilot | No | N/A |
| Pipeline | No | N/A |
| Plan Consensus | Yes (plan file path preserved) | N/A |
State files live under the `.omc/state/` directory.

When cancelling modes that may have spawned MCP workers (team bridge daemons), the cancel skill should also:

- Check heartbeat files under `.omc/state/team-bridge/{team}/*.heartbeat.json`
- Run `tmux kill-session -t omc-team-{team}-{worker}` for each worker
- Remove the shadow registry `.omc/state/team-mcp-workers.json`

When `--force` is used, also clean up:
```shell
rm -rf .omc/state/team-bridge/          # Heartbeat files
rm -f .omc/state/team-mcp-workers.json  # Shadow registry
# Kill all omc-team-* tmux sessions
tmux list-sessions -F '#{session_name}' 2>/dev/null | grep '^omc-team-' | while read s; do tmux kill-session -t "$s" 2>/dev/null; done
```