Ensures alignment between user and Claude during feature/spec planning through a structured interview process. Use this skill when the user invokes /plan-interview before implementing a new feature, refactoring, or any non-trivial implementation task. The skill runs an upfront interview to gather requirements across technical constraints, scope boundaries, risk tolerance, and success criteria before any codebase exploration. Do NOT use this skill for: pure research/exploration tasks, simple bug fixes, or when the user just wants standard planning without the interview process.
Eval summary:
- Score: 80
- Best practices: 83%
- Impact: 56%
- Average score across 3 eval scenarios: 1.03x
- Advisory: suggest reviewing before use
Install with `npx skills add pskoett/pskoett-ai-skills/skills/plan-interview`.

Every skill in this collection is built around a core philosophy — a principle that agents struggle to internalize on their own.
This skill's philosophy: "Make the change easy, then make the change."
Agents default to plowing straight through implementation, no matter how tangled the path. They rarely pause to ask: "Would a preparatory refactor make this change simple instead of hard?" Planning is where that question gets asked. During codebase exploration and plan generation, actively look for structural friction — code that makes the target change awkward, brittle, or overly complex. When you find it, the plan should propose a preparatory step: make the change easy first, then make the change itself. Two clean steps beat one heroic slog.
Run a structured requirements interview before planning implementation. This ensures alignment between you and the user by gathering explicit requirements rather than making assumptions.
User calls /plan-interview <task description>.
Skip this skill if the task is purely research/exploration (not implementation).
Check your available tools for AskUserQuestion. If it exists, use it to interview the user in thematic batches of 2-3 questions — this is the preferred method as it creates a structured prompt for the user to respond to. If AskUserQuestion is not available (e.g., GitHub Copilot or other providers without it), ask the same questions directly in chat and pause for responses before continuing.
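The tool-detection fallback above can be sketched as follows. This is an illustrative sketch, not a real provider API: the tool name `AskUserQuestion` comes from the skill, but the function, dict fields, and detection mechanism are assumptions.

```python
def interview_batch(available_tools, questions):
    """Deliver one thematic batch of 2-3 interview questions.

    Prefers the AskUserQuestion tool when the provider exposes it;
    otherwise falls back to plain chat text. The return shape here is
    an illustrative assumption, not a real provider contract.
    """
    if "AskUserQuestion" in available_tools:
        # Preferred path: structured prompt the user responds to.
        return {"tool": "AskUserQuestion", "questions": list(questions)}
    # Fallback (e.g. GitHub Copilot): render the questions as numbered
    # chat text and pause for the user's replies before continuing.
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return {"tool": "chat", "message": numbered}
```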
Cover ALL four domains before proceeding:
Technical Constraints
Scope Boundaries
Risk Tolerance
Success Criteria
Before leaving the interview phase, classify the task and choose a planning depth:
Let the user override this (fast vs deep) if they have a clear preference.
| Scenario | Action |
|---|---|
| Contradictory requirements | Make a recommendation with rationale, ask for confirmation |
| User pivots requirements | Restart interview fresh with new direction |
| Interrupted session | Ask user: continue where we left off or restart? |
After interview completes, explore the codebase to understand:
For complex or unfamiliar projects, do a brief context refresh before deep planning:
- AGENTS.md and README.md, if present and relevant

Before writing the plan, explicitly ask: "Does the knowledge needed to complete this task exist somewhere I can reach?"
For each significant implementation step, classify where the required knowledge lives:
When a step falls into "nowhere reachable":
This audit prevents the most common agent failure: confidently proceeding when the knowledge simply isn't there, producing plausible but wrong output.
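The audit above amounts to classifying each step's knowledge source and flagging anything unreachable. A minimal sketch, assuming a made-up classification scheme (the source labels and data shape are hypothetical, not part of the skill):

```python
# Hypothetical taxonomy of places the needed knowledge can live.
REACHABLE_SOURCES = {"codebase", "training_data", "docs", "user_provided"}

def audit_knowledge(step_sources):
    """Pre-plan knowledge audit: flag steps whose knowledge lives nowhere reachable.

    `step_sources` maps step name -> claimed knowledge source. Returns the
    knowledge-map rows plus the list of blocked steps that must go back to
    the user before planning continues.
    """
    rows, blocked = [], []
    for step, source in step_sources.items():
        status = "OK" if source in REACHABLE_SOURCES else "Blocked"
        rows.append((step, source, status))
        if status == "Blocked":
            blocked.append(step)
    return rows, blocked
```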
Write plan to docs/plans/plan-NNN-<slug>.md where NNN is sequential.
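Picking the next sequential NNN can be done by scanning the plans directory. A sketch (the helper name is mine; only the `docs/plans/plan-NNN-<slug>.md` convention comes from the skill):

```python
import re
from pathlib import Path

def next_plan_path(plans_dir, slug):
    """Return the next sequential docs/plans/plan-NNN-<slug>.md path."""
    pattern = re.compile(r"^plan-(\d{3})-.*\.md$")
    numbers = [
        int(m.group(1))
        for p in Path(plans_dir).glob("plan-*.md")
        if (m := pattern.match(p.name))
    ]
    nnn = max(numbers, default=0) + 1  # start at 001 in an empty directory
    return Path(plans_dir) / f"plan-{nnn:03d}-{slug}.md"
```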
Use a draft -> refine workflow. Stay in plan space while you are still finding material improvements. Planning tokens are usually much cheaper than implementation tokens for non-trivial work.
Run 1..N refinement passes depending on complexity. For each pass:
Stop iterating when any of the following is true:
If the user provides multiple competing plans (from different models or prior iterations):
Reusable prompt templates for the refinement loop and multi-plan synthesis live in references/iterative-plan-refinement-prompts.md.
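The draft -> refine loop described above can be sketched as a simple convergence loop. `review_pass` is a hypothetical callable standing in for one fresh-eyes refinement pass; the signature is an assumption for illustration:

```python
def refine_plan(draft, review_pass, max_passes=5):
    """Run the draft -> refine loop.

    Keep iterating while a refinement pass still finds material
    improvements; stop on convergence or after max_passes.
    `review_pass(plan)` is assumed to return (revised_plan, improved).
    """
    plan = draft
    for _ in range(max_passes):
        plan, improved = review_pass(plan)
        if not improved:  # converged: no material improvement found
            break
    return plan
```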
Every plan MUST include:
## Success Criteria
[Clear definition of done from interview]
## Risk Assessment
[What could go wrong + mitigations]
## Affected Files/Areas
[Which parts of codebase will be touched]
## Test Strategy
[Unit tests, integration tests, and e2e tests/scripts where applicable; include key scenarios, failure modes, and fixtures/mocks]
## Validation and Diagnostics
[How to verify the feature works after implementation; include detailed logging/diagnostics expectations in tests/scripts when useful for debugging]
## Knowledge Map
[For each major step, where does the required knowledge live?]
| Step | Knowledge Source | Confidence |
|------|-----------------|------------|
| Step 1 | Codebase (existing pattern in src/auth/) | High |
| Step 2 | Training data (standard OAuth2 flow) | High |
| Step 3 | Nowhere — need user to provide API spec | Blocked |
## Open Questions
[Uncertainties to resolve during implementation]
- [ ] Question 1 - [Blocks implementation / Can proceed]
- [ ] Question 2 - [Blocks implementation / Can proceed]
## Implementation Checklist
- [ ] Step 1
- [ ] Step 2
...

Include when relevant:
When user approves the plan:
- TodoWrite with checklist items (if TodoWrite is not available, track progress via structured comments in your output)

If the user wants quick planning, use draft + refine:
If a partial plan exists in docs/plans/:
AskUserQuestion: "I found an existing partial plan. Would you like to:"
- "Continue the interview"
- "Resume plan drafting/refinement"
- "Review current plan and finalize"
- "Start fresh with a new plan"

If resuming refinement, first summarize the current plan state and the most recent refinement changes, then continue with the fresh-eyes refinement loop.
.context-surfing/ provides the re-entry point and remaining work for replanning.

Output: the plan file (docs/plans/plan-NNN-<slug>.md) — consumed by intent-framed-agent as context for the intent frame, and by context-surfing as part of the wave anchor. Copied verbatim into handoff files on drift exit.

Skill pipeline:
- plan-interview (requirements and plan generation — you are here)
- intent-framed-agent (execution contract + scope drift monitoring)
- context-surfing (context quality monitoring — runs concurrently with intent-framed-agent)
- simplify-and-harden (post-completion quality/security pass)
- self-improvement (capture recurring patterns and promote durable rules)

Example:

User: /plan-interview Add OAuth authentication to the API
Claude: I'll help you plan the OAuth authentication feature. Let me gather
requirements first.
[AskUserQuestion - Scope batch]
1. Which OAuth providers need to be supported (Google, GitHub, etc.)?
2. Is this replacing existing auth or adding alongside it?
[Copilot guidance: ask the same two questions in chat and wait for responses]
User: Google and GitHub. Adding alongside existing JWT auth.
[AskUserQuestion - Technical batch]
1. Should OAuth tokens be stored in the existing session table or separately?
2. Any specific callback URL constraints from your hosting setup?
User: Separate table. We're on Vercel so callback URLs need to be configured per environment.
[AskUserQuestion - Risk/Success batch]
1. What's the acceptable degradation if an OAuth provider is down?
2. How should we handle users who already exist with the same email?
User: Fall back to standard login. Link accounts by email if user confirms.
Claude: [Explores codebase for auth patterns, session handling, database schema]
Claude: [Writes docs/plans/plan-003-oauth-authentication.md]
Plan created. The main decision point is whether to use Passport.js
or implement OAuth flows directly - I've documented both approaches
with tradeoffs. Ready to review?
User: Looks good, go with Passport.js
Claude: [Auto-starts implementation with TodoWrite tracking]