aeo-geo-agent

End-to-end AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) agent for documentation sites. Use when someone wants to optimize their docs for AI citation, improve AI discoverability, audit their site against AEO best practices, create llms.txt, add AI-friendly structured data, or work through a full 29-item AEO/GEO checklist. Also use for phrases like "optimize my docs for AI", "make my site discoverable by AI", "AEO audit", "GEO optimization", or "help me rank in AI search results".

AEO/GEO Optimization Agent

A five-phase optimization agent that audits your documentation site against a 29-item AEO/GEO checklist, lets you choose which items to tackle, creates research-backed implementation plans, executes them one at a time with your approval at each step, and verifies results.

Before starting: you need the user's docs project path (a local directory). If they haven't provided one, ask for it now.


Five-Phase Workflow

Phase 1: Assess   → Scan project, score all 29 items, identify quick wins
Phase 2: Select   → Human-in-the-loop item selection via multi-select
Phase 3: Plan     → Research-backed implementation plan per selected item
Phase 4: Execute  → Implement one item at a time, checkpoint after each
Phase 5: Verify   → Re-scan and compare before/after, produce final report

Phase 1 — Assessment

Goal: Produce a scored assessment of all 29 AEO/GEO items for the user's project.

Steps:

  1. Load references/checklist.md — this is your master reference for all 29 items
  2. Run the scanner script:
    python3 scripts/assess_project.py --project-root <path> --output /tmp/aeo-assessment.json
  3. Follow the assessor subagent instructions in agents/assessor.md for qualitative analysis of content-quality items
  4. Run the report generator:
    python3 scripts/generate_report.py --assessment /tmp/aeo-assessment.json --output /tmp/aeo-report.md
  5. Present the report to the user. Highlight the Quick Wins section (high impact + low difficulty + not implemented).

Rating scale: ✅ Implemented · 🟡 Partial · ❌ Not Implemented · ⬜ N/A

Save the assessment JSON to /tmp/aeo-assessment-baseline.json for later comparison in Phase 5.
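
For orientation, a per-item record in the assessment JSON might look like the sketch below. This is illustrative only; the real schema is whatever scripts/assess_project.py emits, and status mirrors the rating scale above (implemented / partial / not_implemented / na):

```json
{
  "project_root": "/path/to/docs",
  "items": [
    {
      "id": 1,
      "name": "llms.txt",
      "category": "AI Files",
      "status": "not_implemented",
      "impact": "High",
      "difficulty": "Low",
      "notes": "No llms.txt found at the site root"
    }
  ]
}
```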


Phase 2 — Selection (Human-in-the-Loop)

Goal: Let the user choose which items to implement using AskUserQuestion multi-select.

Approach: Present items in priority-ordered batches of ≤4 (the AskUserQuestion option cap). Group by tier:

Round 1 — Quick Wins

Show the not-implemented items that are High Impact + Low/Medium Difficulty. Ask:

"Which Quick Win items do you want to tackle? (select all that apply)"

Format each option as: #N — Item Name (⏱ time estimate)
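
For example, a Round 1 question built from the Item Reference table below might look like this (the items shown are plausible Quick Wins; the real list comes from the Phase 1 assessment):

```
Which Quick Win items do you want to tackle? (select all that apply)

  [ ] #1 — llms.txt (⏱ 2h)
  [ ] #2 — llms-full.txt (⏱ 1h)
  [ ] #5 — Organization JSON-LD (⏱ 1h)
  [ ] #7 — robots.txt AI crawler access (⏱ 30m)
```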

Round 2 — High Impact (remaining)

Show remaining High Impact items not yet selected. Same format.

Round 3 — Medium Impact

Continue with medium-impact items. Keep batches to 4.

After all rounds: Present the complete selected list, sorted by dependency order (see Dependency Map below). Ask the user to confirm or reorder before planning begins.

Dependency Map:

  • #2 requires #1 (llms-full.txt requires llms.txt)
  • #25 requires #1 (directory registration requires llms.txt)
  • #6 benefits from #5 (SoftwareApplication JSON-LD after Organization)
  • #11 benefits from #5 and #6 (FAQPage after base schemas)
  • Execute in dependency order automatically
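
Ordering by hand is fine for small selections; as a mechanical alternative, here is a minimal Python sketch (a hypothetical helper, not one of this skill's scripts) that sorts selected items against the dependency map:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Edges from the Dependency Map: item -> items it depends on.
# The soft "benefits from" edges for #6 and #11 are included so that
# related schema items land in a sensible order when all are selected.
DEPS = {2: {1}, 25: {1}, 6: {5}, 11: {5, 6}}

def execution_order(selected: set[int]) -> list[int]:
    """Return the selected items in dependency order, ties by item number."""
    graph = {n: DEPS.get(n, set()) & selected for n in selected}
    ts = TopologicalSorter(graph)
    ts.prepare()
    order: list[int] = []
    while ts.is_active():
        ready = sorted(ts.get_ready())
        order.extend(ready)
        ts.done(*ready)
    return order

print(execution_order({11, 1, 2, 5, 25}))  # -> [1, 5, 2, 11, 25]
```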

Phase 3 — Planning

Goal: Create a research-backed implementation plan for each selected item.

Follow the planner subagent instructions in agents/planner.md.

Present all plans to the user as a single document. They can:

  • Approve all plans → begin execution
  • Request changes to specific plans → revise and re-present
  • Remove items from the list → update execution queue

Phase 4 — Execution (Sequential with Checkpoints)

Goal: Implement items one at a time, in dependency order, with user approval between each.

For each item, follow the executor subagent instructions in agents/executor.md.

Delegation Routing Table

| Item(s) | Strategy | Action |
|---|---|---|
| #1, #2 | custom: llms-txt-creator | Invoke the llms-txt-creator skill (same tile) |
| #3, #4, #7, #8, #10, #15, #16, #18, #19, #21, #23, #28 | delegate: seo-geo | Phrase the task to naturally invoke the seo-geo skill (GEO content optimization, technical SEO, meta tags, etc.) |
| #5, #6, #11, #14 | delegate: schema-markup | Phrase the task to invoke the schema-markup skill (JSON-LD schema generation) |
| #17, #22 | delegate: seo-audit | Phrase the task to invoke the seo-audit skill (meta audit, Core Web Vitals) |
| #12 | custom: ai-prompt-files | Invoke the ai-prompt-files skill (same tile) |
| #13 | delegate: mcp-builder | Phrase the task to invoke the mcp-builder skill (MCP server scaffolding) |
| #25, #26 | custom: ai-discoverability | Invoke the ai-discoverability skill (same tile) |
| #27 | delegate: i18n-manager | Phrase the task to invoke the i18n-manager skill |
| #9, #20, #24, #29 | direct | Implement directly per the plan (code examples, semantic search, changelog, OpenAPI spec) |
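
For a sense of the artifacts involved: item #5 boils down to a small JSON-LD block in the site's <head>. A minimal sketch with placeholder values (adapt to the actual organization):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://github.com/example",
    "https://x.com/example"
  ]
}
</script>
```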

After Each Item — Checkpoint

After each item is complete, present:

✅ Item #N (Item Name) complete.

Changes made:
  • [list files created/modified]

How would you like to proceed?
(a) Approve and continue to next item
(b) Request changes to this item
(c) Skip to next item
(d) Stop and review all changes so far
Handle the response:

  • Response (a): continue to next item
  • Response (b): loop executor with feedback, re-implement, re-checkpoint
  • Response (c): mark as skipped, continue to next
  • Response (d): stop execution, summarize all changes, offer to resume or go to Phase 5

Write a brief summary of each completed item to .aeo-work-log.md in the project root.
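
A work-log entry can stay short; one illustrative format (date and details are placeholders):

```markdown
## Item #1: llms.txt (completed 2025-01-15)
- Created: llms.txt at the site root
- Notes: links curated from the existing sidebar nav; review section ordering
```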


Phase 5 — Verification

Goal: Confirm what was implemented, flag any issues, produce a final report.

Follow the verifier subagent instructions in agents/verifier.md.

Steps:

  1. Re-run the scanner and compare to baseline:
    python3 scripts/verify_implementations.py \
      --baseline /tmp/aeo-assessment-baseline.json \
      --project-root <path> \
      --output /tmp/aeo-verification.md
  2. Do qualitative spot checks on implemented items
  3. Present final report: items improved, items with issues, remaining items

Custom Subskills

The three custom subskills in skills/ cover items that have no equivalent skill on the registry:

| Subskill | Items | Load When |
|---|---|---|
| skills/llms-txt-creator/SKILL.md | #1, #2 | Executor routes to custom: llms-txt-creator |
| skills/ai-prompt-files/SKILL.md | #12 | Executor routes to custom: ai-prompt-files |
| skills/ai-discoverability/SKILL.md | #25, #26 | Executor routes to custom: ai-discoverability |
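
For context on what the llms-txt-creator subskill produces: per the llms.txt convention (detailed in references/llms-txt-spec.md), the file is plain Markdown served at the site root, with an H1 title, a one-line blockquote summary, and H2 sections of annotated links. A minimal sketch with placeholder URLs:

```markdown
# Example Docs

> One-line summary of what the product does and who it is for.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first run
- [API Reference](https://example.com/docs/api.md): endpoints and authentication

## Optional

- [Changelog](https://example.com/changelog.md): release history
```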

Reference Files

Load on-demand as needed (don't pre-load all):

| File | Load When |
|---|---|
| references/checklist.md | Phase 1 (always) — 29-item master reference |
| references/assessment-rubric.md | Phase 1 — qualitative scoring criteria |
| references/geo-research.md | Phase 3 — planning GEO content items (#3, #4, #15, #16, #19) |
| references/llms-txt-spec.md | Phase 3 — planning items #1, #2, #25 |

Item Reference (Quick Lookup)

| # | Item | Category | Impact | Time |
|---|---|---|---|---|
| 1 | llms.txt | AI Files | High | 2h |
| 2 | llms-full.txt | AI Files | High | 1h |
| 3 | GEO content optimization | Content | High | 8h |
| 4 | Question-based headings | Content | High | 4h |
| 5 | Organization JSON-LD | Structured Data | High | 1h |
| 6 | SoftwareApplication JSON-LD | Structured Data | High | 1h |
| 7 | robots.txt AI crawler access | Technical | High | 30m |
| 8 | Meta tags (title/description) | Technical | High | 2h |
| 9 | Multi-language code examples | Content | High | 8h |
| 10 | OG / social meta tags | Technical | Med | 2h |
| 11 | FAQPage JSON-LD | Structured Data | High | 3h |
| 12 | AI editor prompt files | AI Files | Med | 2h |
| 13 | MCP server | AI Integration | High | 16h |
| 14 | BreadcrumbList JSON-LD | Structured Data | Med | 1h |
| 15 | Comparison guides | Content | High | 8h |
| 16 | Glossary / terminology | Content | Med | 4h |
| 17 | Core Web Vitals | Technical | Med | 8h |
| 18 | Sitemap.xml | Technical | Med | 1h |
| 19 | Long-tail FAQ content | Content | High | 6h |
| 20 | Semantic search | Technical | Med | 16h |
| 21 | Canonical URLs | Technical | Med | 1h |
| 22 | Title/description optimization | Technical | Med | 4h |
| 23 | TechArticle JSON-LD | Structured Data | Med | 2h |
| 24 | Changelog | Content | Low | 2h |
| 25 | llms.txt directory registration | AI Discoverability | High | 1h |
| 26 | External backlinks / authority | AI Discoverability | High | 8h |
| 27 | i18n / multilingual | Content | Med | 16h |
| 28 | Internal linking | Technical | Med | 4h |
| 29 | OpenAPI spec | AI Integration | Med | 8h |
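
As a concrete illustration of the quickest item above, #7 usually amounts to making sure robots.txt explicitly admits AI crawlers. A minimal sketch (the crawler list is illustrative; verify current user-agent names before shipping):

```
# Allow common AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```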
Repository: kubical-ai/aeo-geo-optimization