Self-referential loop until task completion with configurable verification reviewer
[RALPH + ULTRAWORK - ITERATION {{ITERATION}}/{{MAX}}]
Your previous attempt did not output the completion promise. Continue working on the task.
<Purpose>
Ralph is a PRD-driven persistence loop that keeps working on a task until ALL user stories in prd.json have passes: true and are reviewer-verified. It wraps ultrawork's parallel execution with session persistence, automatic retry on failure, structured story tracking, and mandatory verification before completion.
</Purpose>
<Use_When>
The task is complex or multi-story and must be driven to verified completion rather than finished in a single pass.
</Use_When>
<Do_Not_Use_When>
- The task calls for autopilot instead
- The task calls for the plan skill instead
- The task calls for ultrawork directly
</Do_Not_Use_When>
<Why_This_Exists>
Complex tasks often fail silently: partial implementations get declared "done", tests get skipped, edge cases get forgotten. Ralph prevents this by tracking every user story in prd.json, demanding fresh verification evidence for each acceptance criterion, and gating completion behind a mandatory reviewer pass.
</Why_This_Exists>
<PRD_Mode>
By default, ralph operates in PRD mode. A scaffold prd.json is auto-generated when ralph starts if none exists.
Opt-out: If {{PROMPT}} contains --no-prd, skip PRD generation and work in legacy mode (no story tracking, generic verification). Use this for trivial quick fixes.
Deslop opt-out: If {{PROMPT}} contains --no-deslop, skip the mandatory post-review deslop pass entirely. Use this only when the cleanup pass is intentionally out of scope for the run.
Reviewer selection: Pass --critic=architect, --critic=critic, or --critic=codex in the Ralph prompt to choose the completion reviewer for that run. architect remains the default.
</PRD_Mode>
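The opt-out and reviewer flags above can be detected with plain string checks. A minimal sketch in TypeScript — the names detectNoPrdFlag and stripNoPrdFlag mirror the acceptance-criteria examples later in this document, but the exact implementation here is an illustrative assumption:

```typescript
// Hypothetical helpers for parsing Ralph's prompt flags.
// Names mirror this skill's acceptance-criteria examples; details are a sketch.

export function detectNoPrdFlag(prompt: string): boolean {
  return /(^|\s)--no-prd(\s|$)/.test(prompt);
}

export function stripNoPrdFlag(prompt: string): string {
  // Remove the flag, then collapse and trim leftover whitespace.
  return prompt
    .replace(/(^|\s)--no-prd(?=\s|$)/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

export function detectNoDeslopFlag(prompt: string): boolean {
  return /(^|\s)--no-deslop(\s|$)/.test(prompt);
}

// Reviewer selection: --critic=architect|critic|codex, architect by default.
export function selectCritic(prompt: string): "architect" | "critic" | "codex" {
  const m = prompt.match(/--critic=(architect|critic|codex)/);
  return m ? (m[1] as "architect" | "critic" | "codex") : "architect";
}
```

The regexes anchor on whitespace so that `--no-prd` inside a quoted string or a longer token is not misread as the flag.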
<Execution_Policy>
- Use run_in_background: true for long operations (installs, builds, test suites)
- Set the model parameter explicitly when delegating to agents
- Read docs/shared/agent-tiers.md before first delegation to select correct agent tiers
</Execution_Policy><Steps>
1. Load or generate prd.json (see PRD_Mode), refining any scaffold criteria into task-specific, testable acceptance criteria.
2. Pick next story: Read prd.json and select the highest-priority story with passes: false. This is your current focus.
3. Implement the current story:
   a. Make the changes the story's acceptance criteria require, keeping prd.json up to date
   b. Use run_in_background: true for long-running commands
4. Verify the current story's acceptance criteria:
   a. For EACH acceptance criterion in the story, verify it is met with fresh evidence
   b. Run relevant checks (test, build, lint, typecheck) and read the output
   c. If any criterion is NOT met, continue working -- do NOT mark the story as complete
5. Mark story complete:
a. When ALL acceptance criteria are verified, set passes: true for this story in prd.json
b. Record progress in progress.txt: what was implemented, files changed, learnings for future iterations
c. Add any discovered codebase patterns to progress.txt
6. Check PRD completion:
a. Read prd.json -- are ALL stories marked passes: true?
b. If NOT all complete, loop back to Step 2 (pick next story)
c. If ALL complete, proceed to Step 7 (architect verification)
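The story-selection and completion checks in Steps 2 and 6 can be sketched against a minimal prd.json shape. Only passes and acceptanceCriteria appear in this document; the other field names are illustrative assumptions:

```typescript
// Minimal sketch of a prd.json story and the loop checks in Steps 2 and 6.
// `passes` and `acceptanceCriteria` come from this skill; `id` and
// `priority` are assumed field names for illustration.
interface Story {
  id: string;
  priority: number; // lower number = higher priority (assumed convention)
  acceptanceCriteria: string[];
  passes: boolean;
}

// Step 2: the highest-priority story still failing becomes the current focus.
function pickNextStory(stories: Story[]): Story | undefined {
  return stories
    .filter((s) => !s.passes)
    .sort((a, b) => a.priority - b.priority)[0];
}

// Step 6: every story must have passes: true before reviewer verification.
function allStoriesPass(stories: Story[]): boolean {
  return stories.every((s) => s.passes);
}
```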
7. Reviewer verification (tiered, against acceptance criteria):
   - 20+ files changed, or security/architectural changes: THOROUGH tier (architect / Opus)
   - If --critic=critic, use the Claude critic agent for the approval pass
   - If --critic=codex, run omc ask codex --agent-prompt critic "..." for the approval pass
7.5 Mandatory Deslop Pass:
   Unless {{PROMPT}} contains --no-deslop, run oh-my-claudecode:ai-slop-cleaner in standard mode (not --review) on the files changed during the current Ralph session only.
7.6 Regression Re-verification:
   Re-run tests and build after the deslop pass to confirm no regressions (skipped only when --no-deslop was explicitly specified).
On approval: After Step 7.6 passes (with Step 7.5 completed, or skipped via --no-deslop), run /oh-my-claudecode:cancel to cleanly exit and clean up all state files
On rejection: Fix the issues raised, re-verify with the same reviewer, then loop back to check if the story needs to be marked incomplete
</Steps><Tool_Usage>
- Task(subagent_type="oh-my-claudecode:architect", ...) for architect verification cross-checks when changes are security-sensitive, architectural, or involve complex multi-system integration
- Task(subagent_type="oh-my-claudecode:critic", ...) when --critic=critic
- omc ask codex --agent-prompt critic "..." when --critic=codex
- state_write / state_read for ralph mode state persistence between iterations
</Tool_Usage>
<Examples>
<Good>
Refining scaffold acceptance criteria into task-specific ones. After refinement: acceptanceCriteria: [ "detectNoPrdFlag('ralph --no-prd fix') returns true", "detectNoPrdFlag('ralph fix this') returns false", "stripNoPrdFlag removes --no-prd and trims whitespace", "TypeScript compiles with no errors (npm run build)" ]
Why good: Generic criteria replaced with specific, testable criteria.
</Good>
<Good>
Correct parallel delegation:
Task(subagent_type="oh-my-claudecode:executor", model="haiku", prompt="Add type export for UserConfig")
Task(subagent_type="oh-my-claudecode:executor", model="sonnet", prompt="Implement the caching layer for API responses")
Task(subagent_type="oh-my-claudecode:executor", model="opus", prompt="Refactor auth module to support OAuth2 flow")
Why good: Three independent tasks fired simultaneously at appropriate tiers.
</Good>
<Good>
Story-by-story verification: each story's acceptance criteria checked with fresh evidence before setting passes: true.
Why good: Each story verified against its own acceptance criteria before marking complete.
</Good>
<Bad>
Claiming completion without PRD verification:
"All the changes look good, the implementation should work correctly. Task complete."
Why bad: Uses "should" and "look good" -- no fresh evidence, no story-by-story verification, no architect review.
</Bad>
<Bad>
Sequential execution of independent tasks:
Task(executor, "Add type export") → wait → Task(executor, "Implement caching") → wait → Task(executor, "Refactor auth")
Why bad: These are independent tasks that should run in parallel, not sequentially.
</Bad>
<Bad>
Keeping generic acceptance criteria:
"prd.json created with criteria: Implementation is complete, Code compiles. Moving on to coding."
Why bad: Did not refine scaffold criteria into task-specific ones. This is PRD theater.
</Bad>
</Examples>
<Escalation_And_Stop_Conditions>
- Stop and report when a fundamental blocker requires user input (missing credentials, unclear requirements, external service down)
- Stop when the user says "stop", "cancel", or "abort" -- run `/oh-my-claudecode:cancel`
- Continue working when the hook system sends "The boulder never stops" -- this means the iteration continues
- If the selected reviewer rejects verification, fix the issues and re-verify (do not stop)
- If the same issue recurs across 3+ iterations, report it as a potential fundamental problem
</Escalation_And_Stop_Conditions>
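The "same issue recurs across 3+ iterations" rule can be tracked with a counter keyed by an issue signature. A sketch — in practice the state_write/state_read persistence mentioned in Tool_Usage would back the map between iterations:

```typescript
// Sketch: count recurrences of the same issue signature across iterations;
// at 3+ occurrences, flag it as a potential fundamental problem.
function recordIssue(counts: Map<string, number>, signature: string): boolean {
  const n = (counts.get(signature) ?? 0) + 1;
  counts.set(signature, n);
  return n >= 3; // true => report to the user as a potential blocker
}
```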
<Final_Checklist>
- [ ] All prd.json stories have `passes: true` (no incomplete stories)
- [ ] prd.json acceptance criteria are task-specific (not generic boilerplate)
- [ ] All requirements from the original task are met (no scope reduction)
- [ ] Zero pending or in_progress TODO items
- [ ] Fresh test run output shows all tests pass
- [ ] Fresh build output shows success
- [ ] lsp_diagnostics shows 0 errors on affected files
- [ ] progress.txt records implementation details and learnings
- [ ] Selected reviewer verification passed against specific acceptance criteria
- [ ] ai-slop-cleaner pass completed on changed files (or `--no-deslop` specified)
- [ ] Post-deslop regression tests pass
- [ ] `/oh-my-claudecode:cancel` run for clean state cleanup
</Final_Checklist>
<Advanced>
## Background Execution Rules
**Run in background** (`run_in_background: true`):
- Package installation (npm install, pip install, cargo build)
- Build processes (make, project build commands)
- Test suites
- Docker operations (docker build, docker pull)
**Run blocking** (foreground):
- Quick status checks (git status, ls, pwd)
- File reads and edits
- Simple commands
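The background/blocking split above amounts to a command classifier. A minimal sketch — the pattern list is an illustrative assumption drawn from the examples above, not an exhaustive rule set:

```typescript
// Sketch: decide run_in_background from the command string, following the
// rule of thumb above. Patterns are illustrative, not exhaustive.
const LONG_RUNNING: RegExp[] = [
  /^npm (install|ci)\b/,       // package installation
  /^pip install\b/,
  /^cargo build\b/,
  /^make\b/,                   // build processes
  /\b(test|jest|pytest|vitest)\b/, // test suites
  /^docker (build|pull)\b/,    // Docker operations
];

function shouldRunInBackground(command: string): boolean {
  return LONG_RUNNING.some((re) => re.test(command));
}
```

Quick status checks like `git status` or `ls` match none of the patterns and therefore run blocking in the foreground.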
</Advanced>
Original task:
{{PROMPT}}