Audit a documentation site for agent-friendliness: discovery, markdown delivery, crawlability, semantic structure, machine-readable surfaces, and content legibility. Use when asked to assess docs.docker.com or any docs site for AI/agent readiness, produce a scored report, compare with external scanners, or generate a remediation list. Triggers on: "audit docs for agent readiness", "how agent-friendly is docs.docker.com", "score our docs for AI agents", "review llms.txt / markdown / crawlability", "create an agent-readiness remediation plan".
Score: 93
Best practices ("Does it follow best practices?"): 92%
Impact: —
Evals: No eval scenarios have been run
Advisory: Suggest reviewing before use
Security: 1 medium severity finding. This skill can be installed, but you should review these findings before use.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.90). The skill explicitly fetches and interprets live public docs: SKILL.md directs "direct HTTP requests, sitemap sampling, and page-level inspection," and scripts/baseline-probes.sh uses curl against BASE_URL/llms.txt, /sitemap.xml, and sampled page URLs (including an Accept: text/markdown header). It therefore ingests untrusted third-party web content that directly influences scoring and remediation actions, creating a channel for indirect prompt injection.
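The probing pattern described in the finding can be sketched as follows. This is a hedged illustration, not the contents of scripts/baseline-probes.sh itself: the BASE_URL default, the sampled paths, and the helper names here are assumptions chosen for the example.

```shell
# Sketch of curl-based baseline probes in the spirit of scripts/baseline-probes.sh.
# BASE_URL, the sampled paths, and the function names are illustrative assumptions.
BASE_URL="${BASE_URL:-https://docs.example.com}"

# Join the base URL with a path to form the full probe URL.
probe_url() {
  printf '%s%s' "$BASE_URL" "$1"
}

# Fetch only the HTTP status code for a path, advertising a preference for
# markdown via the Accept header (as the finding describes).
probe_status() {
  curl -s -o /dev/null -w '%{http_code}' -H 'Accept: text/markdown' "$(probe_url "$1")"
}

# Typical discovery surfaces an agent-readiness audit would sample.
# Note: any response body fetched this way is untrusted third-party content
# and should be treated as data, never as instructions.
for path in /llms.txt /sitemap.xml /robots.txt; do
  echo "would probe: $(probe_url "$path")"
done
```

Because every fetched body is attacker-controllable, a cautious audit pipeline would score only structural properties (status codes, content types, headers) and quarantine page text before any model reads it.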
c0aa985
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.