Manage research entries in ./research/ — create, refresh, and validate. Use when asked to add a tool, "document this", "research this", "refresh this research", "validate research entries", or given a tool URL. Modes: default (single URL), --batch (multiple URLs in parallel), --rerun (refresh stale entries), --validate (structural check and auto-fix).
<mode_args>$ARGUMENTS</mode_args>
> [!IMPORTANT]
> When provided a process map or Mermaid diagram, treat it as the authoritative procedure: execute steps in the exact order shown, respecting sequence, branches, decision points, loops, parallel paths, and terminal states. Do not improvise, reorder, or skip steps. If any node is ambiguous or missing required detail, pause and ask a clarifying question before continuing. When interacting with a user, report the interpreted path you will follow from the diagram before acting, then execute.
Orchestrate research entry creation, maintenance, and validation in ./research/. Spawns @research-curator agents for content work; handles coordination, README updates, and post-actions.
Parse <mode_args/> to select operating mode. Optional --layer 0|1|2 filters discovery by SDLC layer when used with knowledge-explorer or refresh-research.
The following diagram is the authoritative procedure for mode routing. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
```mermaid
flowchart TD
Start(["Parse <mode_args/>"]) --> Q1{"Does <mode_args/> contain --batch?"}
Q1 -->|"Yes — batch flag present"| Q1Layer{"Does <mode_args/> also contain --layer 0, 1, or 2?"}
Q1 -->|"No — batch flag absent"| Q2{"Does <mode_args/> contain --rerun?"}
Q1Layer -->|"Yes — layer filter present"| BatchLayer(["Execute Batch Mode with layer filter applied"])
Q1Layer -->|"No — no layer filter"| Batch(["Execute Batch Mode"])
Q2 -->|"Yes — rerun flag present"| Q2Layer{"Does <mode_args/> also contain --layer 0, 1, or 2?"}
Q2 -->|"No — rerun flag absent"| Q3{"Does <mode_args/> contain --validate?"}
Q2Layer -->|"Yes — layer filter present"| RerunLayer(["Execute Rerun Mode with layer filter applied"])
Q2Layer -->|"No — no layer filter"| Rerun(["Execute Rerun Mode"])
Q3 -->|"Yes — validate flag present"| Validate(["Execute Validate Mode"])
Q3 -->|"No — no flags matched — <mode_args/> contains a URL only"| Default(["Execute Default Mode — single URL"])
```

Single source of truth: ./research/ (repo-root relative).
Structure:
```
./research/
  README.md            # Category tables with all entries
  {category}/          # One directory per category
    {resource-name}.md # Individual research entries
```

Category selection follows the flowchart in Entry Template. Create directories as needed.
These rules apply whenever this orchestrator receives results from any @research-curator agent. Violating them corrupts information before it reaches the user.
Rule 1 — Preserve exact counts. When an agent reports numbers, relay those exact numbers.
| Agent says | Relay as | Never relay as |
|---|---|---|
| "7 of 10 found" | "7 of 10 found" | "most found" |
| "3 errors, 2 warnings" | "3 errors, 2 warnings" | "several issues" |
| "0 results" | "0 results" | "nothing relevant" |
Rule 2 — Preserve failure reasons. Relay the specific reason; do not generalize.
| Agent says | Relay as | Never relay as |
|---|---|---|
| "HTTP 403 Forbidden" | "access denied (HTTP 403)" | "not available" |
| "Connection timeout" | "connection timed out" | "doesn't exist" |
| "File not found at path X" | "file not found at X" | "no such file" |
| "Rate limited" | "rate limited" | "unavailable" |
Rule 3 — Reference files instead of re-summarizing. When an agent wrote a file, include its path in the relay.
Rule 4 — Relay structure, not interpretation. When an agent returns a STATUS/ARTIFACTS/WARNINGS block, preserve that structure. Do not flatten it into a single sentence.
Rule 5 — Distinguish observations from conclusions. "Config has no timeout field" (observation) is different from "timeout defaults to 30s" (agent's conclusion). Keep them distinct.
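A minimal sketch of the relay discipline the five rules describe, assuming agent results arrive as a dict with `status`, `artifacts`, and `warnings` keys (the field names are an assumption for illustration, not a fixed schema):

```python
def relay(result: dict) -> str:
    """Relay an agent result verbatim: exact counts, exact failure
    reasons, and file paths, preserving the STATUS/ARTIFACTS/WARNINGS
    structure instead of flattening it into one sentence."""
    lines = [f"STATUS: {result['status']}"]       # exact status, e.g. "7 of 10 found"
    for path in result.get("artifacts", []):
        lines.append(f"ARTIFACT: {path}")         # reference files, don't re-summarize
    for warning in result.get("warnings", []):
        lines.append(f"WARNING: {warning}")       # exact reason, e.g. "HTTP 403 Forbidden"
    return "\n".join(lines)
```

The point is that nothing is paraphrased: every string passes through unchanged, which is exactly what Rules 1, 2, and 4 require.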
Before reporting results to the user after any mode completes, verify Rules 1-5 above are satisfied: exact counts relayed, exact failure reasons relayed, written files referenced by path, structured blocks preserved, and observations kept distinct from conclusions.
<default_mode>
Trigger: <mode_args/> contains a URL with no flags.
Parse -- extract the URL from <mode_args/>
Spawn agent -- invoke @research-curator via Agent tool with the URL
Agent tool parameters:
agent: .claude/agents/research-curator.md
prompt: "Research and create an entry for: {URL}"
Wait for structured result (status, file path, category, key findings)
Apply relay rules -- verify pre-relay checklist before proceeding
Spawn four tasks concurrently -- if research status is not failed:
a. Agent tool parameters:
agent: .claude/agents/research-insight-extractor.md
prompt: "Extract improvements from {file-path-from-agent-result}"
b. Agent tool parameters:
agent: .claude/agents/research-utilization-assessor.md
prompt: "Assess utilization opportunities from {file-path-from-agent-result}"
c. Agent tool parameters:
agent: .claude/agents/research-cross-referencer.md
prompt: "Add cross-references to {file-path-from-agent-result}"
d. Update ./research/README.md -- add new entry to category table
Wait for all four tasks and surface results -- collect structured return blocks from all three agents and confirm README updated:
- If the insight result contains an IMMEDIATE_ATTENTION: section, report each item with #{issue} {title} and the one-sentence reason. If no IMMEDIATE_ATTENTION section: report "N improvements added to backlog from {resource-name}."
- Report the PROPOSALS_WRITTEN count and FILE path. If STATUS: no_utilization_surface, report "No direct utilization surface found."
- Report the CROSS_REFERENCES_ADDED count.
Post-actions -- lint, commit, push (see Post-Actions)
If status: failed, relay the exact failure reason to the user and stop</default_mode>
<batch_mode>
Trigger: <mode_args/> contains --batch.
Full workflow defined in Batch Mode reference. Summary below.
Extract all tokens after --batch matching https?:// as target URLs. Non-URL tokens are ignored with a warning.
Spawn up to 5 @research-curator agents per wave via Agent tool. Wait for all agents in the current wave before spawning the next. After all waves complete, for each successful entry spawn three concurrent agents: @research-insight-extractor, @research-utilization-assessor, and @research-cross-referencer (up to 5 entries processed concurrently — 3 agents each). See Batch Mode reference for the complete wave spawning diagram.
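The wave discipline above can be sketched as follows. `spawn_curator` is a hypothetical stand-in for the Agent tool call; only the chunking and wait-per-wave pattern is the point:

```python
import asyncio
import re

async def spawn_curator(url: str) -> dict:
    """Hypothetical stand-in for invoking @research-curator via the Agent tool."""
    return {"url": url, "status": "created"}

async def run_batch(mode_args: str, wave_size: int = 5) -> list[dict]:
    # Extract https?:// tokens as target URLs; non-URL tokens are skipped.
    urls = re.findall(r"https?://\S+", mode_args)
    results: list[dict] = []
    for i in range(0, len(urls), wave_size):
        wave = urls[i : i + wave_size]
        # Wait for every agent in the current wave before spawning the next.
        results += await asyncio.gather(*(spawn_curator(u) for u in wave))
    return results
```

`asyncio.gather` blocks until the whole wave resolves, which enforces the "wait for all agents in the current wave" rule before the next slice is dispatched.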
Before spawning, check if ./research/ already contains an entry for the URL's resource.
If found:
- Emit: "Entry is N days old (last verified: YYYY-MM-DD, vX.Y.Z). Proceeding with refresh."
- Pass --rerun ./research/{category}/{name}.md to the agent instead of skipping.
If the Freshness Tracking section is absent or Last Verified is unreadable, emit:
Entry exists but freshness data unavailable. Proceeding with refresh.
and pass --rerun ./research/{category}/{name}.md to the agent.
After each wave, relay exact counts and exact failure reasons from agent output:
Wave N complete: M/N succeeded
created -- category/resource-name.md
refreshed -- category/resource-name.md (was N days old)
failed -- https://url.com -- {exact reason from agent}
After all waves:
Batch complete: X/Y total succeeded
Files created: [list]
README updated: Yes</batch_mode>
<rerun_mode>
Trigger: <mode_args/> contains --rerun.
Re-research existing entries to refresh stale data.
The following diagram is the authoritative procedure for rerun mode. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
```mermaid
flowchart TD
Start(["Parse --rerun argument value"]) --> Q{"What is the --rerun target value?"}
Q -->|"category/name — single entry path"| VerifyFile{"Does ./research/category/name.md exist?"}
Q -->|"all — re-research every entry"| FindAll["Glob ./research/**/*.md<br>excluding README.md — collect all entry paths"]
VerifyFile -->|"No — file not found"| Missing(["Report error: entry not found at path. Stop."])
VerifyFile -->|"Yes — file exists"| ReadFile["Read ./research/category/name.md<br>extract current content and metadata"]
ReadFile --> Spawn1["Spawn @research-curator via Agent tool<br>prompt: --rerun ./research/category/name.md"]
Spawn1 --> RelayCheck1["Apply pre-relay quality checklist"]
RelayCheck1 --> UpdateDate["Update ./research/README.md<br>refresh freshness date for this entry"]
FindAll --> WaveSpawn["Spawn @research-curator agents in waves of 5<br>each receives --rerun ./research/category/name.md<br>wait for each wave before spawning next"]
WaveSpawn --> RelayCheck2["Apply pre-relay quality checklist<br>to all wave results"]
RelayCheck2 --> UpdateDates["Update ./research/README.md<br>refresh freshness dates for all re-researched entries"]
UpdateDate --> SpawnAnalysis1["Concurrently spawn 3 agents:<br>@research-insight-extractor 'Extract improvements from ./research/category/name.md'<br>@research-utilization-assessor 'Assess utilization opportunities from ./research/category/name.md'<br>@research-cross-referencer 'Add cross-references to ./research/category/name.md'"]
SpawnAnalysis1 --> WaitAnalysis1["Wait for all 3 agents<br>Surface IMMEDIATE_ATTENTION items from insight result<br>Report utilization proposal count<br>Report cross-references added count"]
WaitAnalysis1 --> PostActions(["Execute Post-Actions — lint, commit, push"])
UpdateDates --> SpawnAnalysisN["For each updated entry (concurrent, up to 5 entries)<br>spawn 3 agents per entry:<br>@research-insight-extractor<br>@research-utilization-assessor<br>@research-cross-referencer"]
SpawnAnalysisN --> WaitAnalysisN["Wait for all analysis agents<br>Collect IMMEDIATE_ATTENTION items<br>Report total utilization proposals and cross-references added"]
WaitAnalysisN --> PostActions
```

Verify ./research/{category}/{name}.md exists
Spawn @research-curator via Agent tool:
prompt: "--rerun ./research/{category}/{name}.md"
Agent reads existing entry, re-gathers fresh data, updates content and freshness tracking
Apply pre-relay quality checklist to agent result
Update README with refreshed date
Concurrently spawn three analysis agents:
- @research-insight-extractor — "Extract improvements from ./research/{category}/{name}.md"
- @research-utilization-assessor — "Assess utilization opportunities from ./research/{category}/{name}.md"
- @research-cross-referencer — "Add cross-references to ./research/{category}/{name}.md"
Wait for all three; surface IMMEDIATE_ATTENTION items from insight result; report utilization proposal count; report cross-references added count
For all entries: glob ./research/**/*.md excluding README.md; pass --rerun ./research/{category}/{name}.md to each agent</rerun_mode>
<validate_mode>
Trigger: <mode_args/> contains --validate.
Run structural validation and fix error-severity issues.
The validator script (validate_research.py) checks each entry file against the rules in Validation Rules. It emits JSON with three severity levels: error, warning, and info. Error-severity issues are auto-fixed by spawning @research-curator with --fix and the specific issue list.
The following diagram is the authoritative procedure for validate mode. Execute steps in the exact order shown, including branches, decision points, and stop conditions.
```mermaid
flowchart TD
Start(["Parse --validate argument value"]) --> Q{"What is the --validate target value?"}
Q -->|"category/name — single entry path"| RunScript["Run validate_research.py --json<br>on ./research/category/name.md"]
Q -->|"all — validate every entry"| RunScriptAll["Run validate_research.py --json<br>on ./research/ directory"]
RunScript --> ParseJSON["Parse JSON output<br>Extract issues keyed by severity: error, warning, info<br>Count totals per severity"]
RunScriptAll --> ParseJSON
ParseJSON --> HasErrors{"Does parsed output contain<br>any error-severity issues?"}
HasErrors -->|"Yes — N error-severity issues found"| SpawnFix["Spawn @research-curator agents in waves of 5<br>Each agent receives --fix flag<br>PLUS the exact error list for that entry from JSON output<br>(not a summary — the raw issue text)"]
HasErrors -->|"No — zero error-severity issues"| ReportClean(["Report: all entries passed. Include exact warning and info counts. Stop."])
SpawnFix --> RelayCheck["Apply pre-relay quality checklist<br>to all fix-agent results"]
RelayCheck --> ReportSummary["Report validation summary with exact counts<br>(total scanned, passed, errors fixed, warnings noted, info items)"]
ReportSummary --> PostActions(["Execute Post-Actions — lint, commit, push"])
```

Validator invocation:
uv run .claude/skills/research-curator/scripts/validate_research.py --json ./research/{target}
When spawning a fix agent, pass the exact error text from the JSON output — not a paraphrase. The agent receives:
prompt: "--fix ./research/{category}/{name}.md
Issues to fix (from validator JSON):
- {exact issue text from JSON}
- {exact issue text from JSON}"
Severity handling per Validation Rules:
- Error: spawn @research-curator with --fix flag and the exact issue list extracted from JSON
- Warning and info: report exact counts; no fix agent is spawned
For error-severity fixes, spawn agents in waves of 5 (same pattern as Batch Mode).
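Grouping the validator output and building a fix prompt might look like this. The JSON shape (a top-level `issues` list with `severity`, `file`, and `message` fields) is an assumption for illustration, not the script's documented schema:

```python
import json

def group_by_severity(validator_json: str) -> dict[str, list[dict]]:
    """Bucket validator issues by severity so error-severity items can
    be passed verbatim to fix agents and the rest reported as counts."""
    grouped: dict[str, list[dict]] = {"error": [], "warning": [], "info": []}
    for issue in json.loads(validator_json).get("issues", []):
        grouped.setdefault(issue["severity"], []).append(issue)
    return grouped

def fix_prompt(path: str, errors: list[dict]) -> str:
    """Build the --fix prompt carrying the raw issue text, never a summary."""
    lines = [f"--fix {path}", "Issues to fix (from validator JSON):"]
    lines += [f"- {e['message']}" for e in errors]  # exact text from JSON
    return "\n".join(lines)
```

Keeping the raw `message` strings intact is what lets the relay rules hold end to end: the fix agent sees exactly what the validator emitted.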
Report exact counts from the validator JSON output — do not paraphrase:
Validation complete:
Total scanned: N
Passed: N
Errors found: N (M auto-fixed)
Warnings noted: N
Info items: N</validate_mode>
<post_actions>
Shared by all modes. Execute after any mode completes successfully.
README Update -- add or update entries in ./research/README.md category tables
Lint -- run formatting checks on all modified files:
uv run prek run --files ./research/README.md [new-or-modified-files]
Commit -- stage and commit all research and insight changes:
git add ./research/
git commit -m "docs(research): [action] [resource names]"
Push -- push to current branch:
git push -u origin HEAD
Commit message actions by mode:
- default: add {resource-name} research entry
- batch: add {N} research entries
- rerun: refresh {resource-name|N entries}
- validate: fix validation issues in {resource-name|N entries}</post_actions>
<output_format>
Report to user after any mode completes. All counts and failure reasons MUST be relayed exactly as received from agents — apply the pre-relay quality checklist before writing this output.
## Research Entry Created
**Resource**: {name}
**Category**: {category}
**File**: ./research/{category}/{name}.md
**README Updated**: Yes
**Cross-References Added**: N
**Utilization Proposals**: N (file: ./research/insights/YYYY-MM-DD-{name}-utilization.md)
### Key Findings
- Finding 1
- Finding 2
- Finding 3
### Next Review
YYYY-MM-DD

## Batch Research Complete
**Total**: X URLs processed
**Created**: Y new entries
**Refreshed**: Z existing entries
**Failed**: W
### Entries Created
- ./research/{category}/{name}.md
### Entries Refreshed
- ./research/{category}/{name}.md (was N days old, last: YYYY-MM-DD, vX.Y.Z)
### Failures
- {URL} -- {exact reason from agent output}

## Research Entries Refreshed
**Refreshed**: N entries
**Changes Detected**: M entries had updated data
### Updated Entries
- ./research/{category}/{name}.md -- {what changed}

## Validation Results
**Scanned**: N entries
**Passed**: N
**Errors Fixed**: N
**Warnings**: N
**Info**: N
### Fixes Applied
- ./research/{category}/{name}.md -- {exact issue fixed, from validator JSON}
### Warnings (manual review recommended)
- ./research/{category}/{name}.md -- {exact warning text}</output_format>
Reference material:
- --validate mode
- --batch mode
Agents:
- @research-curator at .claude/agents/research-curator.md -- single-entry research executor
- @research-insight-extractor at .claude/agents/research-insight-extractor.md -- extracts backlog improvements from research entries
- @research-utilization-assessor at .claude/agents/research-utilization-assessor.md -- assesses direct API/service utilization opportunities
- @research-cross-referencer at .claude/agents/research-cross-referencer.md -- appends Cross-References section to research entries

SOURCE: Agent result relay rules and pre-relay checklist adapted from plugins/summarizer/skills/agent-result-relay/SKILL.md (accessed 2026-03-06).