Orchestrates a dual-AI engineering loop where Claude Code plans and implements, while Codex validates and reviews, with continuous feedback for optimal code quality
This skill implements a balanced engineering loop: Claude Code plans and implements, Codex independently validates each step, and feedback flows back into the next iteration.

## Setup

Before starting, ask the user (AskUserQuestion) for:

- **Model**: `gpt-5` or `gpt-5-codex`
- **Reasoning effort**: `low`, `medium`, or `high`

## Validate the plan

Send Claude's plan to Codex for review:

```bash
echo "Review this implementation plan and identify any issues:
[Claude's plan here]

Check for:
- Logic errors
- Missing edge cases
- Architecture flaws
- Security concerns" | codex exec -m <model> --config model_reasoning_effort="<effort>" --sandbox read-only
```

If Codex finds issues, ask the user (AskUserQuestion): "Should I revise the plan and re-validate, or proceed with fixes?"

Once the plan is validated, Claude implements it using its Edit/Write/Read tools.
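The validation command above can be parameterized with the user's answers from the setup questions. A dry-run sketch (the variable values and the idea of printing the command before running it are illustrative, not part of the skill):

```shell
#!/bin/sh
# Assemble the plan-validation command from the AskUserQuestion answers.
# MODEL and EFFORT below are example choices, not defaults.
MODEL="gpt-5-codex"   # or gpt-5
EFFORT="medium"       # low, medium, or high

CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox read-only"
echo "$CMD"
```

In the real loop, the review prompt is piped into this command; printing it first lets you confirm the flags before anything is executed.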
## Review after every change

After every change, use `codex exec resume --last` to continue the validation session:

```bash
echo "Review the updated implementation" | codex exec resume --last
```

Note: `resume` inherits all settings (model, reasoning effort, sandbox) from the original session.
When Codex identifies problems, Claude applies fixes with its Edit/Write tools and asks Codex to re-validate in the same session. When implementation errors occur, Claude debugs and re-runs the review before moving on.
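The fix-and-re-validate cycle above can be sketched as a bounded loop. In this sketch, `run_review` stands in for `echo "verify fixes" | codex exec resume --last` and is stubbed (it approves on round 2) so the control flow runs end to end; the "LGTM" approval sentinel is an assumed convention, not something the codex CLI defines:

```shell
#!/bin/sh
# Illustrative fix → re-validate loop with a stubbed reviewer.
run_review() {
  if [ "$1" -ge 2 ]; then
    echo "LGTM"                      # assumed approval sentinel, not a codex convention
  else
    echo "issues: missing edge case" # stand-in review feedback
  fi
}

round=1
status="unresolved"
while [ "$round" -le 3 ]; do
  review=$(run_review "$round")
  case "$review" in
    *LGTM*) status="validated"; break ;;
    *)      echo "round $round: $review (Claude applies fixes)" ;;
  esac
  round=$((round + 1))
done
echo "$status after $round round(s)"
```

Bounding the loop (three rounds here) keeps a disagreement between the two models from cycling forever; on exhaustion, escalate to the user instead.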
| Phase | Command Pattern | Purpose |
|---|---|---|
| Validate plan | `echo "plan" \| codex exec --sandbox read-only` | Check logic before coding |
| Implement | Claude uses Edit/Write/Read tools | Claude implements the validated plan |
| Review code | `echo "review changes" \| codex exec --sandbox read-only` | Codex validates Claude's implementation |
| Continue review | `echo "next step" \| codex exec resume --last` | Continue the validation session |
| Apply fixes | Claude uses Edit/Write tools | Claude fixes issues found by Codex |
| Re-validate | `echo "verify fixes" \| codex exec resume --last` | Codex re-checks after fixes |
## Workflow

```
Plan (Claude) → Validate Plan (Codex) → Feedback →
Implement (Claude) → Review Code (Codex) →
Fix Issues (Claude) → Re-validate (Codex) → Repeat until perfect
```

This creates a self-correcting, high-quality engineering system: Claude implements, Codex independently reviews, and the loop repeats until the reviewer finds no remaining issues.