End-to-end AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) agent for documentation sites. Use when someone wants to optimize their docs for AI citation, improve AI discoverability, audit their site against AEO best practices, create llms.txt, add AI-friendly structured data, or work through a full 29-item AEO/GEO checklist. Also use for phrases like "optimize my docs for AI", "make my site discoverable by AI", "AEO audit", "GEO optimization", or "help me rank in AI search results".
A five-phase optimization agent that audits your documentation site against a 29-item AEO/GEO checklist, lets you choose which items to tackle, creates research-backed implementation plans, executes them one at a time with your approval at each step, and verifies results.
Before starting: Ask the user for their docs project path (local directory). If they haven't provided one, ask now.
Phase 1: Assess → Scan project, score all 29 items, identify quick wins
Phase 2: Select → Human-in-the-loop item selection via multi-select
Phase 3: Plan → Research-backed implementation plan per selected item
Phase 4: Execute → Implement one item at a time, checkpoint after each
Phase 5: Verify → Re-scan and compare before/after, produce final report

Goal: Produce a scored assessment of all 29 AEO/GEO items for the user's project.
Steps:
1. Read references/checklist.md — this is your master reference for all 29 items.
2. Run `python3 scripts/assess_project.py --project-root <path> --output /tmp/aeo-assessment.json`.
3. Use agents/assessor.md for qualitative analysis of content-quality items.
4. Run `python3 scripts/generate_report.py --assessment /tmp/aeo-assessment.json --output /tmp/aeo-report.md`.

Rating scale: ✅ Implemented · 🟡 Partial · ❌ Not Implemented · ⬜ N/A
Save the assessment JSON to /tmp/aeo-assessment-baseline.json for later comparison in Phase 5.
Goal: Let the user choose which items to implement using AskUserQuestion multi-select.
Approach: Present items in priority-ordered batches of ≤4 (tool cap). Group by tier:
- Quick Wins: Show the not-implemented items that are High Impact + Low/Medium Difficulty. Ask: "Which Quick Win items do you want to tackle? (select all that apply)". Format each option as: #N — Item Name (⏱ time estimate).
- High Impact: Show remaining High Impact items not yet selected. Same format.
- Medium Impact: Continue with medium-impact items. Keep batches to 4.
After all rounds: Present the complete selected list, sorted by dependency order (see Dependency Map below). Ask the user to confirm or reorder before planning begins.
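The batching rule above (priority-ordered rounds of at most 4 options, the tool cap) can be sketched as follows. The item names shown are illustrative placeholders, not an authoritative selection:

```python
# Illustrative sketch of the multi-select batching rule: a priority-sorted
# list is split into rounds of at most 4 options each (the tool cap).

def batch_items(items, batch_size=4):
    """Split a priority-ordered list into consecutive batches of <= batch_size."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# Hypothetical Quick Win candidates; five items yield two rounds (4 + 1).
quick_wins = ["#1 llms.txt", "#2 llms-full.txt", "#5 Organization JSON-LD",
              "#7 robots.txt", "#18 Sitemap.xml"]
rounds = batch_items(quick_wins)
```

Each resulting batch becomes one AskUserQuestion round, presented in priority order.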
Dependency Map:
Goal: Create a research-backed implementation plan for each selected item.
Follow the planner subagent instructions in agents/planner.md.
Present all plans to the user as a single document. They can:
Goal: Implement items one at a time, in dependency order, with user approval between each.
For each item, follow the executor subagent instructions in agents/executor.md.
| Item(s) | Strategy | Action |
|---|---|---|
| #1, #2 | custom: llms-txt-creator | Invoke the llms-txt-creator skill (same tile) |
| #3, #4, #7, #8, #10, #15, #16, #18, #19, #21, #23, #28 | delegate: seo-geo | Phrase the task to naturally invoke the seo-geo skill (GEO content optimization, technical SEO, meta tags, etc.) |
| #5, #6, #11, #14 | delegate: schema-markup | Phrase the task to invoke the schema-markup skill (JSON-LD schema generation) |
| #17, #22 | delegate: seo-audit | Phrase the task to invoke the seo-audit skill (meta audit, Core Web Vitals) |
| #12 | custom: ai-prompt-files | Invoke the ai-prompt-files skill (same tile) |
| #13 | delegate: mcp-builder | Phrase the task to invoke the mcp-builder skill (MCP server scaffolding) |
| #25, #26 | custom: ai-discoverability | Invoke the ai-discoverability skill (same tile) |
| #27 | delegate: i18n-manager | Phrase the task to invoke the i18n-manager skill |
| #9, #20, #24, #29 | direct | Implement directly per the plan (code examples, semantic search, changelog, OpenAPI spec) |
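The routing table above can be read as a simple lookup from item number to (strategy, skill). A minimal sketch showing a few rows; the fallback to direct implementation for unlisted items is an assumption, not stated by the table:

```python
# Partial sketch of the execution routing table (only a few rows shown).
# Skill names mirror the table above; the "direct" fallback is an assumption.

ROUTES = {
    1: ("custom", "llms-txt-creator"),
    2: ("custom", "llms-txt-creator"),
    13: ("delegate", "mcp-builder"),
    27: ("delegate", "i18n-manager"),
    9: ("direct", None),
}

def route(item_number):
    """Return (strategy, skill) for a checklist item; unknown items go direct."""
    return ROUTES.get(item_number, ("direct", None))
```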
After each item is complete, present:
✅ Item #N (Item Name) complete.
Changes made:
• [list files created/modified]
How would you like to proceed?
(a) Approve and continue to next item
(b) Request changes to this item
(c) Skip to next item
(d) Stop and review all changes so far

Write a brief summary of each completed item to .aeo-work-log.md in the project root.
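The work-log entry format is not specified by the skill; one plausible shape for an entry in .aeo-work-log.md (item, files, and summary below are hypothetical):

```
## Item #1: llms.txt

- Created: llms.txt
- Summary: Added site overview and curated docs index at the site root.
```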
Goal: Confirm what was implemented, flag any issues, produce a final report.
Follow the verifier subagent instructions in agents/verifier.md.
Steps:
```
python3 scripts/verify_implementations.py \
  --baseline /tmp/aeo-assessment-baseline.json \
  --project-root <path> \
  --output /tmp/aeo-verification.md
```

The three custom subskills in skills/ handle items with no equivalent on the registry:
| Subskill | Items | Load When |
|---|---|---|
| skills/llms-txt-creator/SKILL.md | #1, #2 | Executor routes to custom: llms-txt-creator |
| skills/ai-prompt-files/SKILL.md | #12 | Executor routes to custom: ai-prompt-files |
| skills/ai-discoverability/SKILL.md | #25, #26 | Executor routes to custom: ai-discoverability |
Load on-demand as needed (don't pre-load all):
| File | Load When |
|---|---|
| references/checklist.md | Phase 1 (always) — 29-item master reference |
| references/assessment-rubric.md | Phase 1 — qualitative scoring criteria |
| references/geo-research.md | Phase 3 — planning GEO content items (#3, #4, #15, #16, #19) |
| references/llms-txt-spec.md | Phase 3 — planning items #1, #2, #25 |
| # | Item | Category | Impact | Time |
|---|---|---|---|---|
| 1 | llms.txt | AI Files | High | 2h |
| 2 | llms-full.txt | AI Files | High | 1h |
| 3 | GEO content optimization | Content | High | 8h |
| 4 | Question-based headings | Content | High | 4h |
| 5 | Organization JSON-LD | Structured Data | High | 1h |
| 6 | SoftwareApplication JSON-LD | Structured Data | High | 1h |
| 7 | robots.txt AI crawler access | Technical | High | 30m |
| 8 | Meta tags (title/description) | Technical | High | 2h |
| 9 | Multi-language code examples | Content | High | 8h |
| 10 | OG / social meta tags | Technical | Med | 2h |
| 11 | FAQPage JSON-LD | Structured Data | High | 3h |
| 12 | AI editor prompt files | AI Files | Med | 2h |
| 13 | MCP server | AI Integration | High | 16h |
| 14 | BreadcrumbList JSON-LD | Structured Data | Med | 1h |
| 15 | Comparison guides | Content | High | 8h |
| 16 | Glossary / terminology | Content | Med | 4h |
| 17 | Core Web Vitals | Technical | Med | 8h |
| 18 | Sitemap.xml | Technical | Med | 1h |
| 19 | Long-tail FAQ content | Content | High | 6h |
| 20 | Semantic search | Technical | Med | 16h |
| 21 | Canonical URLs | Technical | Med | 1h |
| 22 | Title/description optimization | Technical | Med | 4h |
| 23 | TechArticle JSON-LD | Structured Data | Med | 2h |
| 24 | Changelog | Content | Low | 2h |
| 25 | llms.txt directory registration | AI Discoverability | High | 1h |
| 26 | External backlinks / authority | AI Discoverability | High | 8h |
| 27 | i18n / multilingual | Content | Med | 16h |
| 28 | Internal linking | Technical | Med | 4h |
| 29 | OpenAPI spec | AI Integration | Med | 8h |
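Items #1 and #2 in the table refer to the llms.txt convention: a Markdown file served at the site root that gives AI crawlers a curated index of the documentation. A minimal sketch of the expected shape (site name, URLs, and descriptions below are hypothetical):

```
# Acme Docs

> Documentation for the Acme SDK: installation, API reference, and guides.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): install and make a first API call
- [API Reference](https://docs.example.com/api.md): full endpoint reference

## Optional

- [Changelog](https://docs.example.com/changelog.md): release history
```

llms-full.txt (item #2) follows the same idea but inlines the full page content rather than linking to it.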