Generate format-controlled research reports with evidence tracking, citations, source governance, and multi-pass synthesis. This skill should be used when users request a research report, literature review, market or industry analysis, competitive landscape, or policy/technical brief. Triggers: "帮我调研一下", "深度研究", "综述报告", "深入分析", "research this topic", "write a report on", "survey the literature on", "competitive analysis of", "技术选型分析", "竞品研究", "政策分析", "行业报告". V6 adds: source-type governance, AS_OF freshness checks, mandatory counter-review, and citation registry. V6.1 adds: source accessibility rules (circular verification forbidden, exclusive advantage encouraged).
Create high-fidelity research reports with strict format control, evidence mapping, source governance, and multi-pass synthesis.
Lead Agent (coordinator — minimizes raw search context)
|
P0: Environment + source policy setup
|
P1: Research Task Board (roles, queries, parallel groups)
|
P2: Dispatch ──→ Subagent A ──→ writes task-a.md ──┐
             ──→ Subagent B ──→ writes task-b.md ──┤ (parallel)
             ──→ Subagent C ──→ writes task-c.md ──┘
 |                                                 |
 |         research-notes/ <───────────────────────┘
 |
P3: Build citation registry with source_type + as_of + authority
P4: Evidence-mapped outline with counter-claim flags
P5: Draft from notes (never from raw search results)
P6: Counter-review (claims, confidence, alternatives)
P7: Verify + polish (every [n] in registry, traceability check) → final report with confidence markers

Context efficiency: Subagents' raw search results stay in their context and are discarded. Lead agent sees only distilled notes (~60-70% context reduction).
Determine the research mode before starting:
| Dimension | Options |
|---|---|
| Topic Mode | Enterprise Research (company/corporation) OR General Research (industry/policy/tech) |
| Depth Mode | Standard (5-6 tasks, 3000-8000 words) OR Lightweight (3-4 tasks, 2000-4000 words) |
CRITICAL RULE: Every source must be classified by accessibility:
| Accessibility | Definition | Examples | Usage Rule |
|---|---|---|---|
| public | Available to any external researcher without authentication | Public websites, news articles, WHOIS (without privacy), academic papers | ✅ Always allowed |
| semi-public | Requires registration or limited access | LinkedIn profiles, Crunchbase basic, industry reports (free tier) | ✅ Allowed with disclosure |
| exclusive-user-provided | User's paid subscriptions, private APIs, proprietary databases | Crunchbase Pro, PitchBook, private data feeds, internal databases | ✅ ALLOWED for third-party research |
| private-user-owned | User's own accounts when researching themselves | User's registrar for user's own company, user's bank for user's own finances | ❌ FORBIDDEN - circular verification |
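The accessibility rules above reduce to a small lookup; a minimal sketch, where the category names follow the table and the function itself is illustrative:

```python
# Usage rulings per accessibility class, mirroring the table above.
ACCESSIBILITY_RULES = {
    "public": "allowed",
    "semi-public": "allowed-with-disclosure",
    "exclusive-user-provided": "allowed",   # user-provided paid tools, third-party research
    "private-user-owned": "forbidden",      # circular verification
}

def check_source(accessibility: str) -> str:
    """Return the usage ruling for a source's accessibility class."""
    ruling = ACCESSIBILITY_RULES.get(accessibility)
    if ruling is None:
        raise ValueError(f"unknown accessibility class: {accessibility}")
    return ruling
```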
⚠️ CIRCULAR VERIFICATION BAN: You must NOT verify claims about the user's own company or assets through the user's own private accounts (e.g., confirming domain ownership via the user's registrar login). Such evidence is circular; re-verify from public sources or report the claim as unverifiable.
✅ EXCLUSIVE INFORMATION ADVANTAGE: You SHOULD use paid subscriptions and private APIs the user explicitly provides for third-party research, tagging each resulting source as exclusive-user-provided and disclosing the access in the citation registry.
Every source MUST also be tagged with:
| Label | Definition | Examples |
|---|---|---|
| official | Primary source, official documentation | Company SEC filings, government reports, official blog |
| academic | Peer-reviewed research | Journal articles, conference papers, dissertations |
| secondary-industry | Professional analysis | Industry reports, analyst coverage, trade publications |
| journalism | News reporting | Reputable media outlets, investigative journalism |
| community | User-generated content | Forums, reviews, social media, Q&A sites |
| other | Uncategorized or mixed | Aggregators, unverified sources |
Quality Gates:
Set AS_OF date explicitly at P0. For all time-sensitive claims, record each source's publication date, compare it against AS_OF, and flag anything outside the topic's freshness window.
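The freshness check can be sketched as a date comparison; the 180-day window below is an assumed example, not a value this skill prescribes:

```python
from datetime import date

def is_stale(source_date: date, as_of: date, max_age_days: int = 180) -> bool:
    """Flag a time-sensitive source as stale relative to AS_OF.

    max_age_days is an illustrative threshold; choose it per topic.
    Sources dated after AS_OF are also flagged (future-dated data).
    """
    if source_date > as_of:
        return True
    return (as_of - source_date).days > max_age_days
```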
Check capabilities before starting:
| Check | Requirement | Impact if Missing |
|---|---|---|
| web_search available | Required | Stop - cannot proceed |
| web_fetch available | Required for DEEP tasks | SCAN-only mode |
| Subagent dispatch | Preferred | Degrade to sequential |
| Filesystem writable | Required | In-memory notes only |
Set policy variables:
- AS_OF: Today's date (YYYY-MM-DD), mandatory for time-sensitive topics
- MODE: Standard (default) or Lightweight
- SOURCE_TYPE_POLICY: Enforce official/academic/secondary-industry/journalism/community/other labels
- COUNTER_REVIEW_PLAN: What opposing interpretation to test

Report: [P0 complete] Subagent: {yes/no}. Mode: {standard/lightweight}. AS_OF: {YYYY-MM-DD}.
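The P0 policy block can be captured as a small config object; a minimal sketch, with the date and plan text as placeholder values:

```python
# Illustrative P0 policy variables; AS_OF and the plan text are placeholders.
P0_CONFIG = {
    "AS_OF": "2026-03-01",              # mandatory for time-sensitive topics
    "MODE": "standard",                 # or "lightweight"
    "SOURCE_TYPE_POLICY": ["official", "academic", "secondary-industry",
                           "journalism", "community", "other"],
    "COUNTER_REVIEW_PLAN": "test the opposing interpretation of each major claim",
}
```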
When researching a specific company/enterprise, follow this specialized workflow that ensures six-dimension coverage, quantified analysis frameworks, and three-level quality control.
Enterprise Research Progress:
- [ ] E1: Intake — confirm company entity, research depth, format contract
- [ ] E2: Six-dimension data collection (parallel where possible)
- [ ] D1: Company fundamentals (entity, founding, funding, ownership)
- [ ] D2: Business & products (segments, products, revenue structure)
- [ ] D3: Competitive position (industry rank, competitors, barriers)
- [ ] D4: Financial & operations (3-year financials, efficiency metrics)
- [ ] D5: Recent developments (6-month events, strategic signals)
- [ ] D6: Internal/proprietary sources (or note limitation)
- [ ] E3: Structured analysis frameworks
- [ ] SWOT analysis (evidence-backed, 4 quadrants × 3-5 entries)
- [ ] Competitive barrier quantification (7 dimensions, weighted score)
- [ ] Risk matrix (8 categories, probability × impact)
- [ ] Comprehensive scorecard (6 dimensions, weighted total)
- [ ] E4: L1/L2/L3 quality checks at each stage transition
- [ ] E5: Draft report using 7-chapter enterprise template
- [ ] E6: Multi-pass drafting + UNION merge (same as general Step 6-7)
- [ ] E7: Present draft for human review and iterate

Decompose the research question into 4-6 investigation tasks (Standard) or 3-4 tasks (Lightweight).
Each task assignment includes:
When in Enterprise Research Mode, task board maps to six dimensions:
Report: [P1 complete] {N} tasks in {M} groups. Dispatching Group A.
Same as P0/P1 above, plus:
Subagents execute tasks using references/subagent_prompt.md and write notes in the format defined by references/research_notes_format.md.
Each task-{id}.md must record Source-Type and As-Of tags for every source, and must contain:
If subagent dispatch is unavailable, the lead agent executes tasks sequentially, acting as each specialist. Raw search results are discarded after writing notes.
Follow references/enterprise_research_methodology.md for:
Key principles:
Run L1 quality check after completing each dimension (see enterprise_quality_checklist.md).
Status per task: [P2 task-{id} complete] {N} sources, {M} findings.
Status all: [P2 complete] {N} tasks done, {M} total sources. Building registry.
Apply frameworks from references/enterprise_analysis_frameworks.md in order:
Run L2 quality check after analysis is complete.
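The weighted frameworks (competitive barrier quantification, comprehensive scorecard) reduce to the same computation; a generic sketch, where the dimension names and weights are hypothetical examples, not values from the frameworks file:

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted total for a scoring framework.

    scores: per-dimension raw scores (e.g. 0-10).
    weights: per-dimension weights; must cover the same dimensions and sum to 1.
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(scores[d] * weights[d] for d in scores)

# Hypothetical 6-dimension scorecard entries:
example = weighted_score(
    {"fundamentals": 7, "products": 8, "competition": 6,
     "financials": 7, "momentum": 8, "data_quality": 5},
    {"fundamentals": 0.2, "products": 0.2, "competition": 0.15,
     "financials": 0.2, "momentum": 0.15, "data_quality": 0.1},
)
```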
Three-level checks from references/enterprise_quality_checklist.md:
Use the 7-chapter enterprise report template from enterprise_quality_checklist.md:
Plus appendices: Data Source Index, Glossary, Disclaimer.
Lead agent reads all task notes and builds unified registry.
## Sources section

CITATION REGISTRY
Approved:
[1] Author/Org — Title | URL | Source-Type: official | Accessibility: public | Date: 2026-03-01 | Auth: 8 | task-a
[2] ...
Dropped:
x Source | URL | Source-Type: community | Accessibility: private-user-owned | Auth: 3 | Reason: PRIVATE-USER-OWNED SOURCE - NOT ALLOWED
Stats: {approved}/{total}, {N} domains, official_share {xx}%
Privileged sources rejected: {N}

Critical rule: These [n] are FINAL. P5 may only cite from the Approved list. Dropped sources never reappear.
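The registry-build step can be sketched as a single pass: numbering is assigned once, and dropped sources never re-enter the pool. Structure and field names below are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Source:
    title: str
    url: str
    source_type: str    # official / academic / secondary-industry / ...
    accessibility: str  # public / semi-public / exclusive-user-provided / private-user-owned
    authority: int      # 0-10

def build_registry(candidates: list[Source]) -> tuple[dict[int, Source], list[Source]]:
    """Split candidates into an approved, numbered registry and a dropped list.

    The [n] numbers assigned here are final: later phases may cite only
    keys of the approved dict, and dropped sources never reappear.
    """
    approved: dict[int, Source] = {}
    dropped: list[Source] = []
    n = 0
    for src in candidates:
        if src.accessibility == "private-user-owned":
            dropped.append(src)  # circular verification - forbidden
        else:
            n += 1
            approved[n] = src
    return approved, dropped
```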
Circular verification handling: When researching the user's own company/assets, if you discover data in user's private accounts (e.g., user's domain registrar showing they own domains), you MUST treat it as a lead only: exclude the private-account evidence from the registry and re-verify the fact through public sources; if no public confirmation exists, report the claim as unverifiable.
Exclusive source handling: When user EXPLICITLY PROVIDES their paid subscriptions or private APIs for third-party research (e.g., "Use my Crunchbase Pro to research competitors"), you SHOULD use them, tag each resulting source as exclusive-user-provided, disclose the access in the citation registry, and note that findings may not be reproducible without the same access.
Report: [P3 complete] {approved}/{total} sources. {N} domains. Official share: {xx}%. Privileged rejected: {N}.
When researching entities with no public footprint (like the "字节跳动子公司" (ByteDance subsidiary) example):
What an external researcher would find:
Correct response:
Findings: NO PUBLIC INFORMATION AVAILABLE
Sources checked:
- WHOIS (public): Privacy protected [failed]
- Company registry (public): Access denied/No API [failed]
- News media: No coverage [failed]
- Corporate website: Placeholder only [minimal]
Verdict: UNABLE TO VERIFY COMPANY EXISTENCE from external perspective
Sources found: 0 (or minimal, e.g., only WHOIS showing domain exists)
Confidence: N/A - Insufficient evidence

DO NOT: fabricate findings, inflate thin evidence into conclusions, or fall back to the user's private accounts to fill the gap.
DO: report the absence of public evidence as a finding in itself, state confidence as N/A, and name what access or disclosure would be needed to verify further.
Lead agent reads notes + registry to build outline.
Outline format:
## N. {Section Title}
Sources: [1][3][7] from tasks a, b
Claims: {claim from task-a finding 3}, {claim from task-b finding 1}
Counter-claim candidates: {alternative explanations}
Recency checks: {source dates + AS_OF}
Gaps: {limited official evidence}

Write section by section using references/report_template_v6.md.
Rules:
Anti-hallucination:
Status: [P5 in progress] {N}/{M} sections, ~{words} words.
For each major conclusion, perform opposite-view checks:
For comprehensive parallel review, use the Counter-Review Team:
# 1. Prepare inputs
counter-review-inputs/
├── draft_report.md
├── citation_registry.md
├── task-notes/
└── p0_config.md
# 2. Dispatch to 4 specialist agents in parallel
SendMessage to: claim-validator
SendMessage to: source-diversity-checker
SendMessage to: recency-validator
SendMessage to: contradiction-finder
# 3. Wait for all specialists to complete
# 4. Send to coordinator for synthesis
SendMessage to: counter-review-coordinator
inputs: [4 specialist reports]
# 5. Receive final P6 Counter-Review Report

See references/counter_review_team_guide.md for detailed usage.
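The dispatch pattern above is fan-out/fan-in; a runnable sketch using a thread pool in place of the SendMessage mechanism, with the specialist names from the steps above and a hypothetical stub standing in for each agent:

```python
from concurrent.futures import ThreadPoolExecutor

SPECIALISTS = ["claim-validator", "source-diversity-checker",
               "recency-validator", "contradiction-finder"]

def run_specialist(name: str, draft: str) -> dict:
    """Stand-in for dispatching one specialist agent (hypothetical stub)."""
    return {"specialist": name, "issues": []}

def counter_review(draft: str) -> dict:
    """Fan out to the four specialists in parallel, then synthesize."""
    with ThreadPoolExecutor(max_workers=len(SPECIALISTS)) as pool:
        reports = list(pool.map(lambda n: run_specialist(n, draft), SPECIALISTS))
    # Coordinator synthesis: merge all specialist findings into one report.
    return {"reports": reports,
            "total_issues": sum(len(r["issues"]) for r in reports)}
```

`ThreadPoolExecutor.map` preserves input order, so the coordinator can match each report back to its specialist.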
If Counter-Review Team is unavailable, perform manual checks:
Include in final report:
## 核心争议 / Key Controversies
- **争议 1 / Controversy 1:** [claim A contrasted with counter-evidence B] [n][m]
- **争议 2 / Controversy 2:** ...

Report: [P6 complete] {N} issues found: {critical} critical, {high} high, {medium} medium.
Cross-check before finalization:
Report: [P7 complete] {N} spot-checks, {M} violations fixed.
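The "every [n] in registry" spot-check is a set comparison between markers found in the draft and approved registry numbers; a minimal sketch, with the regex and report shape as illustrative choices:

```python
import re

def check_citations(draft: str, registry: set[int]) -> dict:
    """Verify every [n] marker in the draft resolves to a registry entry,
    and report approved sources that were never cited."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", draft)}
    return {"unresolved": sorted(cited - registry),  # cited but not approved
            "uncited": sorted(registry - cited)}     # approved but unused
```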
| File | When to Load |
|---|---|
| source_accessibility_policy.md | P0 (CRITICAL): Source classification rules - read first |
| subagent_prompt.md | P2: Task dispatch to subagents |
| research_notes_format.md | P2: Subagent output format |
| report_template_v6.md | P5: Draft with confidence markers and counter-review |
| quality_gates.md | All phases: Quality thresholds and anti-hallucination checks |
| File | When to Load |
|---|---|
| research_report_template.md | Build outline and draft structure |
| formatting_rules.md | Enforce section formatting and citation rules |
| source_quality_rubric.md | Score and triage sources |
| research_plan_checklist.md | Build research plan and query set |
| completeness_review_checklist.md | Review for coverage, citations, and compliance |
| File | When to Load |
|---|---|
| enterprise_research_methodology.md | Six-dimension data collection workflow, source priority, cross-validation rules |
| enterprise_analysis_frameworks.md | SWOT template, competitive barrier quantification, risk matrix, comprehensive scoring |
| enterprise_quality_checklist.md | L1/L2/L3 quality checks, per-dimension checklists, 7-chapter report template |
After completing research, suggest verification and output:
Research report complete: [N] sources cited, [M] claims made.
Options:
A) Verify facts — run /fact-checker on the report (Recommended)
B) Create slides — run /ppt-creator from the findings
C) Export as PDF — run /pdf-creator for formal delivery
D) No thanks — the report is ready as-is