Use when docs/solutions/ learnings may be stale — after refactors, migrations, or dependency upgrades, when a retrieved learning feels outdated or contradicts a recently solved problem, when pattern docs no longer reflect current code, or when reviewing docs/solutions/ for accuracy.
Maintain the quality of docs/solutions/ over time. This workflow reviews existing learnings against the current codebase, then refreshes any derived pattern docs that depend on them.
Check if $ARGUMENTS contains mode:autonomous. If present, strip it from arguments (use the remainder as a scope hint) and run in autonomous mode.
| Mode | When | Behavior |
|---|---|---|
| Interactive (default) | User is present and can answer questions | Ask for decisions on ambiguous cases, confirm actions |
| Autonomous | mode:autonomous in arguments | No user interaction. Apply all unambiguous actions (Keep, Update, auto-Archive, Replace with sufficient evidence). Mark ambiguous cases as stale. Generate a summary report at the end. |
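As an illustration of the argument handling described above (the scope value is hypothetical):

```yaml
# $ARGUMENTS: "mode:autonomous database-issues"
mode: autonomous              # mode:autonomous was present and is stripped
scope_hint: database-issues   # the remainder is used to narrow scope during discovery
```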
Mark ambiguous cases by adding status: stale, stale_reason, and stale_date to the frontmatter. If even the stale-marking write fails, include it as a recommendation in the report. These principles apply to interactive mode only; in autonomous mode, skip all user questions and apply the autonomous mode rules above.
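A minimal sketch of that stale marking, using the field names from this skill (the reason text is illustrative):

```yaml
status: stale
stale_reason: "References an AuthToken flow that no longer matches current session handling"
stale_date: 2025-06-12
```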
Follow this interaction style:
Use the platform's blocking question tool when available (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini). Otherwise, present numbered options in plain text and wait for the user's reply before continuing. The goal is not to force the user through a checklist; it is to help them make a good maintenance decision with the smallest amount of friction.
Refresh learnings first, then the pattern docs derived from them. Why this order: pattern docs generalize from their supporting learnings, so a pattern can only be evaluated once those learnings are known to be accurate.
If the user starts by naming a pattern doc, you may begin there to understand the concern, but inspect the supporting learning docs before changing the pattern.
For each candidate artifact, classify it into one of four outcomes:
| Outcome | Meaning | Default action |
|---|---|---|
| Keep | Still accurate and still useful | No file edit by default; report that it was reviewed and remains trustworthy |
| Update | Core solution is still correct, but references drifted | Apply evidence-backed in-place edits |
| Replace | The old artifact is now misleading, but there is a known better replacement | Create a trustworthy successor or revised pattern, then mark/archive the old artifact as needed |
| Archive | No longer useful or applicable | Move the obsolete artifact to docs/solutions/_archived/ with archive metadata when appropriate |
Start by discovering learnings and pattern docs under docs/solutions/.
Exclude:
- README.md
- anything under docs/solutions/_archived/

Find all .md files under docs/solutions/, excluding README.md files and anything under _archived/.
If $ARGUMENTS is provided, use it to narrow scope before proceeding. Try these matching strategies in order, stopping at the first that produces results:
- Subdirectory names under docs/solutions/ (e.g., performance-issues, database-issues)
- The module, component, or tags fields in learning frontmatter

If no matches are found, report that and ask the user to clarify. In autonomous mode, report the miss and stop — do not guess at scope.
If no candidate docs are found, report:
No candidate docs found in docs/solutions/.
Run `superpowers:compound` after solving problems to start building your knowledge base.

Before asking the user to classify anything:
| Scope | When to use it | Interaction style |
|---|---|---|
| Focused | 1-2 likely files or user named a specific doc | Investigate directly, then present a recommendation |
| Batch | Up to ~8 mostly independent docs | Investigate first, then present grouped recommendations |
| Broad | 9+ docs, ambiguous, or repo-wide stale-doc sweep | Triage first, then investigate in batches |
When scope is broad (9+ candidate docs), do a lightweight triage before deep investigation:
Example:
Found 24 learnings across 5 areas.
The auth module has 5 learnings and 2 pattern docs that cross-reference
each other — and 3 of those reference files that no longer exist.
I'd start there.
1. Start with auth (recommended)
2. Pick a different area
3. Review everything

Do not ask action-selection questions yet. First gather evidence.
For each learning in scope, read it, cross-reference its claims against the current codebase, and form a recommendation.
A learning has several dimensions that can independently go stale. Surface-level checks catch the obvious drift, but staleness often runs deeper.
Match investigation depth to the learning's specificity — a learning referencing exact file paths and code snippets needs more verification than one describing a general principle.
The critical distinction is whether the drift is cosmetic (references moved but the solution is the same) or substantive (the solution itself changed):
Cosmetic drift is an Update: superpowers:compound-refresh fixes these directly. Substantive drift calls for a Replace, with a subagent writing the successor in superpowers:compound's document format (frontmatter, problem, root cause, solution, prevention), using the investigation evidence already gathered. The orchestrator does not rewrite learnings inline — it delegates to a subagent for context isolation.

The boundary: if you find yourself rewriting the solution section or changing what the learning recommends, stop — that is Replace, not Update.
Memory-sourced drift signals are supplementary, not primary. A memory note describing a different approach does not alone justify Replace or Archive. Use memory signals to decide where to look more closely, not to decide the outcome on their own.
In autonomous mode, memory-only drift (no codebase corroboration) should result in stale-marking, not action.
Three guidelines that are easy to get wrong:
After reviewing the underlying learning docs, investigate any relevant pattern docs under docs/solutions/patterns/.
Pattern docs are high-leverage — a stale pattern is more dangerous than a stale individual learning because future work may treat it as broadly applicable guidance. Evaluate whether the generalized rule still holds given the refreshed state of the learnings it depends on.
A pattern doc with no clear supporting learnings is a stale signal — investigate carefully before keeping it unchanged.
Use subagents for context isolation when investigating multiple artifacts — not just because the task sounds complex. Choose the lightest approach that fits:
| Approach | When to use |
|---|---|
| Main thread only | Small scope, short docs |
| Sequential subagents | 1-2 artifacts with many supporting files to read |
| Parallel subagents | 3+ truly independent artifacts with low overlap |
| Batched subagents | Broad sweeps — narrow scope first, then investigate in batches |
When spawning any subagent, include this instruction in its task prompt:
Use dedicated file search and read tools (Glob, Grep, Read) for all investigation. Do NOT use shell commands (ls, find, cat, grep, test, bash) for file operations. This avoids permission prompts and is more reliable.
Also read MEMORY.md from the auto memory directory if it exists. Check for notes related to the learning's problem domain. Report any memory-sourced drift signals separately from codebase-sourced evidence, tagged with "(auto memory [claude])" in the evidence section. If MEMORY.md does not exist or is empty, skip this check.
There are two subagent roles: investigation subagents, which gather evidence about an artifact and report their findings, and replacement subagents, which write successor learnings for Replace outcomes.
The orchestrator merges investigation results, detects contradictions, coordinates replacement subagents, and performs all archival/metadata edits centrally. In interactive mode, it asks the user questions on ambiguous cases. In autonomous mode, it marks ambiguous cases as stale instead. If two artifacts overlap or discuss the same root issue, investigate them together rather than parallelizing.
After gathering evidence, assign one recommended action.
The learning is still accurate and useful. Do not edit the file — report that it was reviewed and remains trustworthy. Only add last_refreshed if you are already making a meaningful update for another reason.
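If such an update is already happening, last_refreshed is a single dated frontmatter field, e.g.:

```yaml
last_refreshed: 2025-06-12   # add only alongside a substantive edit, never as the sole change
```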
The core solution is still valid but references have drifted (paths, class names, links, code snippets, metadata). Apply the fixes directly.
Choose Replace when the learning's core guidance is now misleading — the recommended fix changed materially, the root cause or architecture shifted, or the preferred pattern is different.
The user may have invoked the refresh months after the original learning was written. Do not ask them for replacement context they are unlikely to have — use agent intelligence to investigate the codebase and synthesize the replacement.
Evidence assessment:
By the time you identify a Replace candidate, Phase 1 investigation has already gathered significant evidence: the old learning's claims, what the current code actually does, and where the drift occurred. Assess whether this evidence is sufficient to write a trustworthy replacement.

If it is not, add status: stale, stale_reason: [what you found], and stale_date: YYYY-MM-DD to the frontmatter, and recommend superpowers:compound after the user's next encounter with that area, when they have fresh problem-solving context.

Choose Archive when the learning is no longer useful or applicable: both the implementation it documents and the problem it addressed are gone from the codebase.
Action:
- Move the file to docs/solutions/_archived/, preserving directory structure when helpful
- Add archived_date: YYYY-MM-DD and archive_reason: [why it was archived] (sketched at the end of this section)

When a learning's referenced files are gone, that is strong evidence — but only that the implementation is gone. Before archiving, reason about whether the problem the learning solves is still a concern in the codebase:
auth_token.rb is gone — does the application still handle session tokens? If so, the concept persists under a new implementation. That is Replace, not Archive.

Do not search mechanically for keywords from the old learning. Instead, understand what problem the learning addresses, then investigate whether that problem domain still exists in the codebase. The agent understands concepts — use that understanding to look for where the problem lives now, not where the old code used to be.
Auto-archive only when both the implementation AND the problem domain are gone:
If the implementation is gone but the problem domain persists (the app still does auth, still processes payments, still handles migrations), classify as Replace — the problem still matters and the current approach should be documented.
Do not keep a learning just because its general advice is "still sound" — if the specific code it references is gone, the learning misleads readers. But do not archive a learning whose problem domain is still active — that knowledge gap should be filled with a replacement.
If there is a clearly better successor, strongly consider Replace before Archive so the old artifact points readers toward the newer guidance.
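A minimal sketch of the archive metadata, assuming it lives in frontmatter like the other fields (values illustrative):

```yaml
archived_date: 2025-06-12
archive_reason: "Legacy AuthToken flow was removed and the problem domain no longer exists"
```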
Apply the same four outcomes (Keep, Update, Replace, Archive) to pattern docs, but evaluate them as derived guidance rather than incident-level learnings. Key differences:
If "archive" feels too strong but the pattern should no longer be elevated, reduce its prominence in place if the docs structure supports that.
Skip this entire phase. Do not ask any questions. Do not present options. Do not wait for input. Proceed directly to Phase 4 and execute all actions based on the classifications from Phase 2.
Most Updates should be applied directly without asking. Only ask the user when the right action is genuinely ambiguous, for example when the better move may be to recapture the learning from scratch with superpowers:compound. Do not ask questions about whether code changes were intentional, whether the user wants to fix bugs in the code, or other concerns outside doc maintenance. Stay in your lane — doc accuracy.
Always present choices using the platform's blocking question tool when available (AskUserQuestion in Claude Code, request_user_input in Codex, ask_user in Gemini). Otherwise, present numbered options in plain text and wait for the user's reply before proceeding.
Question rules:
For a single artifact, present the evidence you gathered and your recommended outcome, then ask:
This [learning/pattern] looks like a [Update/Keep/Replace/Archive].
Why: [one-sentence rationale based on the evidence]
What would you like to do?
1. [Recommended action]
2. [Second plausible action]
3. Skip for now

Do not list all four actions unless all four are genuinely plausible.
For several learnings, ask for confirmation in stages rather than all at once. If the user asked for a sweeping refresh, keep the interaction incremental: do not front-load the user with a full maintenance queue.
No file edit by default. Summarize why the learning remains trustworthy.
Apply in-place edits only when the solution is still substantively correct.
Examples of valid in-place updates:
- An app/models/auth_token.rb reference updated to app/models/session_token.rb
- module: AuthToken updated to module: SessionToken

Examples that should not be in-place updates: rewriting the solution section, changing the root-cause analysis, or changing what the learning recommends.
Those cases require Replace, not Update.
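To make the distinction concrete, a sketch of a valid in-place Update (file and module names are the hypothetical ones used above):

```yaml
# Frontmatter reference refreshed in place; the solution text is left untouched
module: SessionToken   # was: module: AuthToken
# Body reference updated: app/models/auth_token.rb -> app/models/session_token.rb
```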
Process Replace candidates one at a time, sequentially. Each replacement is written by a subagent to protect the main context window.
When evidence is sufficient:
- Spawn a replacement subagent to write the successor following superpowers:compound's document format: YAML frontmatter (title, category, date, module, component, tags), problem description, root cause, current solution with code examples, and prevention tips (frontmatter sketched below). It should use dedicated file search and read tools if it needs additional context beyond what was passed.
- Add superseded_by: [new learning path] to the old learning's frontmatter
- Move the old learning to docs/solutions/_archived/

When evidence is insufficient:
- Add status: stale, stale_reason: [what you found], stale_date: YYYY-MM-DD to the frontmatter
- Recommend superpowers:compound after the user's next encounter with that area

Archive only when a learning is clearly obsolete or redundant. Do not archive a document just because it is old.
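To make the Replace bookkeeping concrete, a sketch of the frontmatter involved (field names from superpowers:compound's format; all values illustrative):

```yaml
# Successor learning
title: "Session tokens rotate through SessionToken, not AuthToken"
category: auth-issues
date: 2025-06-12
module: SessionToken
component: session-management
tags: [auth, tokens]

# Added to the old learning before it moves to _archived/
superseded_by: docs/solutions/auth-issues/session-token-rotation.md
```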
The full report MUST be printed as markdown output. Do not summarize findings internally and then output a one-liner. The report is the deliverable — print every section in full, formatted as readable markdown with headers, tables, and bullet points.
After processing the selected scope, output the following report:
Compound Refresh Summary
========================
Scanned: N learnings
Kept: X
Updated: Y
Replaced: Z
Archived: W
Skipped: V
Marked stale: S

Then, for EVERY file processed, list the file path, the outcome, and a one-line rationale.
For Keep outcomes, list them under a reviewed-without-edits section so the result is visible without creating git churn.
In autonomous mode, the report is the sole deliverable — there is no user present to ask follow-up questions, so the report must be self-contained and complete. Print the full report. Do not abbreviate, summarize, or skip sections.
Split actions into two sections:
Applied (writes that succeeded):
Recommended (actions that could not be written — e.g., permission denied):
If all writes succeed, the Recommended section is empty. If no writes succeed (e.g., read-only invocation), all actions appear under Recommended — the report becomes a maintenance plan.
After all actions are executed and the report is generated, handle committing the changes. Skip this phase if no files were modified (all Keep, or all writes failed).
Before offering options, check the current branch and whether the working tree has other uncommitted changes.
In autonomous mode, use sensible defaults — there is no user to ask:
| Context | Default action |
|---|---|
| On main/master | Create a branch named for what was refreshed (e.g., docs/refresh-auth-and-ci-learnings), commit, attempt to open a PR. If PR creation fails, report the branch name. |
| On a feature branch | Commit as a separate commit on the current branch |
| Git operations fail | Include the recommended git commands in the report and continue |
Stage only the files that compound-refresh modified — not other dirty files in the working tree.
First, run git branch --show-current to determine the current branch. Then present the correct options based on the result. Stage only compound-refresh files regardless of which option the user picks.
If the current branch is main, master, or the repo's default branch, offer:
1. Create a branch named for what was refreshed (e.g., docs/refresh-auth-learnings, not docs/compound-refresh) and commit there (recommended)
2. Commit directly to {current branch name}

If the current branch is a feature branch with a clean working tree, offer to commit to {current branch name} as a separate commit (recommended).

If the current branch is a feature branch with a dirty working tree (other uncommitted changes), offer to commit only the compound-refresh files to {current branch name} (selective staging — other dirty files stay untouched).

Write a descriptive commit message that names the scope that was refreshed and summarizes the outcomes.
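For example (scope and counts illustrative):

```
docs: refresh auth learnings via compound-refresh

Reviewed 5 learnings and 2 pattern docs: 2 updated, 1 replaced, 1 archived, 3 kept.
```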
| Mistake | Reality |
|---|---|
| Rewriting the solution section and calling it Update | If you're changing what the learning recommends, that's Replace, not Update. Stop. |
| Archiving when referenced files are gone | Check whether the problem domain is still active first. Missing implementation ≠ missing problem. |
| Running Replace subagents in parallel | Replace subagents run sequentially. Parallel execution risks context exhaustion and conflicting writes. |
| Editing a Keep doc to add last_refreshed | Only add last_refreshed when making a meaningful update for another reason. Prefer no-write Keep. |
| Asking the user whether code changes were intentional | That's code review, not doc maintenance. Stay in your lane — doc accuracy only. |
| Treating memory-only drift as sufficient Replace evidence | Memory notes are supplementary. Without codebase corroboration, mark stale instead. |
| Skipping the report when all outcomes are Keep | Always output the report. Keep outcomes are listed under "reviewed without edits" — not invisible. |
- superpowers:compound captures a newly solved, verified problem
- superpowers:compound-refresh maintains older learnings as the codebase evolves

Use Replace only when the refresh process has enough real evidence to write a trustworthy successor. When evidence is insufficient, mark as stale and recommend superpowers:compound for when the user next encounters that problem area.