Audit any website for AI/agent-friendliness using AgentLint. Run `npx @cjavdev/agent-lint` with a URL to scan a site across 17 rules in 5 categories (transport, structure, tokens, discoverability, agent), get a 0-100 AgentScore with letter grade, and receive a prioritized remediation plan. Use this skill when: auditing a site for AI readiness, checking if a site has llms.txt or markdown support, improving a website's agent-friendliness score, fixing AgentLint violations, or understanding what makes a site AI-friendly. Trigger phrases: 'run agentlint', 'audit site for AI', 'check agent-friendliness', 'agentlint scan', 'AI-friendly audit', 'check llms.txt', 'agent readiness'.
Audit websites for AI/agent-friendliness. Runs 17 rules across 5 categories, produces a 0-100 AgentScore, and guides remediation.
```
npx @cjavdev/agent-lint <url> --agent
```

The `--agent` flag outputs a structured markdown report optimized for parsing. If the user wants raw JSON, use `--json` instead.
Common flags:
| Flag | Default | Description |
|---|---|---|
| `--max-depth <n>` | 3 | Maximum crawl depth |
| `--max-pages <n>` | 30 | Maximum pages to crawl |
| `--json` | — | Output as JSON |
| `--agent` | — | Output agent-friendly markdown |
| `--config <path>` | — | Path to config file |
Exit codes: 0 = no errors found, 1 = errors found, 2 = invalid input/system error.
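These exit codes make the tool straightforward to wire into CI. A minimal sketch (the `classify` and `run_audit` names are illustrative, not part of AgentLint; actually running the audit requires npx and network access):

```python
import subprocess
import sys

def classify(code: int) -> str:
    # Map agent-lint's documented exit codes to CI outcomes:
    # 0 = no errors, 1 = errors found, anything else = invalid input/system error.
    return {0: "pass", 1: "violations"}.get(code, "error")

def run_audit(url: str) -> str:
    # Hypothetical wrapper: invoke the CLI and classify its exit code.
    proc = subprocess.run(
        ["npx", "@cjavdev/agent-lint", url, "--json"],
        capture_output=True, text=True,
    )
    return classify(proc.returncode)

if __name__ == "__main__":
    print(run_audit(sys.argv[1]))
```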
Extract from the CLI output: the overall AgentScore, the letter grade, and the list of violations by severity.
Prioritize fixes by impact: errors recover 10 points each, warnings 4, info findings 1.
For each violation, provide: the rule ID, what it checks, and a concrete fix (see references/remediation-guide.md for detailed instructions).

| Grade | Score | Meaning |
|---|---|---|
| A | 90-100 | Excellent. Site is highly agent-friendly. |
| B | 80-89 | Good. Minor improvements possible. |
| C | 70-79 | Fair. Several gaps in agent-friendliness. |
| D | 60-69 | Poor. Significant barriers for AI agents. |
| F | 0-59 | Failing. Major issues across multiple categories. |
Scoring formula: Start at 100. Subtract 10 per error, 4 per warning, 1 per info. Clamped to 0-100.
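The scoring formula can be sketched as a small function (a restatement of the formula above, not AgentLint's source):

```python
def agent_score(errors: int, warnings: int, infos: int) -> tuple[int, str]:
    # Start at 100; subtract 10 per error, 4 per warning, 1 per info;
    # clamp to 0-100, then map to the letter-grade bands.
    score = max(0, min(100, 100 - 10 * errors - 4 * warnings - infos))
    for grade, floor in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= floor:
            return score, grade
    return score, "F"
```

For example, a site with 1 error, 2 warnings, and 3 info findings scores 79, a C.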
Errors (10 points each):

| Rule ID | What It Checks |
|---|---|
| `transport/accept-markdown` | Returns markdown for `Accept: text/markdown` |
| `discoverability/llms-txt` | `/llms.txt` exists |
Warnings (4 points each):

| Rule ID | What It Checks |
|---|---|
| `transport/content-type-valid` | Valid `Content-Type` header on responses |
| `transport/robots-txt` | `/robots.txt` exists (AI agent blocks are info) |
| `structure/heading-hierarchy` | H1 exists, no skipped heading levels |
| `structure/anchor-ids` | Headings have anchor IDs for deep linking |
| `tokens/page-token-count` | Page under 4,000 tokens (configurable) |
| `tokens/boilerplate-duplication` | <30% repeated nav/header/footer content |
| `agent/agent-usage-guide` | Pages mention AI/agent keywords |
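To illustrate what `structure/heading-hierarchy` looks for, here is a rough standalone checker using only the standard library (an approximation of the rule for illustration, not AgentLint's implementation):

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collects h1-h6 levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.levels.append(int(tag[1]))

def heading_violations(html: str) -> list[str]:
    parser = HeadingAudit()
    parser.feed(html)
    problems = []
    if 1 not in parser.levels:
        problems.append("missing <h1>")
    prev = 0
    for level in parser.levels:
        # Flag jumps like h1 -> h3 that skip a heading level.
        if prev and level > prev + 1:
            problems.append(f"skipped from h{prev} to h{level}")
        prev = level
    return problems
```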
Info (1 point each):

| Rule ID | What It Checks |
|---|---|
| `structure/semantic-html` | Uses `<main>`, `<article>`, or `<section>` |
| `structure/meta-description` | Has `<meta name="description">` |
| `structure/lang-attribute` | `<html lang="...">` attribute present |
| `tokens/nav-ratio` | Nav tokens <20% of page tokens |
| `agent/mcp-detect` | `/.well-known/mcp.json` exists |
| `discoverability/sitemap` | `/sitemap.xml` exists |
| `discoverability/openapi-detect` | OpenAPI spec at common paths |
| `discoverability/structured-data` | JSON-LD structured data present |
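To give a sense of how the token-based rules (`tokens/nav-ratio`, `tokens/page-token-count`) can be approximated, here is a sketch assuming a rough 4-characters-per-token heuristic (an assumption for illustration; AgentLint's actual tokenizer may count differently):

```python
def approx_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. Real tokenizers differ,
    # but this is close enough to reason about thresholds.
    return max(1, len(text) // 4)

def nav_ratio(nav_text: str, page_text: str) -> float:
    # tokens/nav-ratio flags pages where navigation exceeds 20% of page tokens.
    return approx_tokens(nav_text) / approx_tokens(page_text)
```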
When presenting a remediation plan, order fixes by points recoverable per unit of effort.

Quick wins (fix first):

- `discoverability/llms-txt` — Create a single file, recover 10 pts
- `structure/lang-attribute` — One-line HTML change, recover 1 pt
- `structure/meta-description` — Add meta tags, recover 1 pt per page
- `discoverability/sitemap` — Most frameworks auto-generate this

Medium effort:

- `transport/content-type-valid` — Usually a server config fix
- `structure/heading-hierarchy` — HTML structure fixes
- `structure/anchor-ids` — Add a rehype/markdown plugin
- `agent/agent-usage-guide` — Write a dedicated docs page
- `transport/robots-txt` — Create/update a text file

High effort, high impact:

- `transport/accept-markdown` — Requires server-side content negotiation (10 pts)
- `tokens/page-token-count` — May require content restructuring
- `tokens/boilerplate-duplication` — Requires template/layout changes

Sites can customize behavior via `agent-lint.config.json`:
```json
{
  "maxDepth": 3,
  "maxPages": 30,
  "tokenThreshold": 4000,
  "ignorePatterns": ["/blog/*"],
  "rules": {
    "tokens/page-token-count": {
      "severity": "info",
      "ignorePaths": ["/docs/changelog"]
    }
  }
}
```

For step-by-step fix instructions with code examples for each rule (Nginx, Cloudflare Workers, Next.js, Express, static HTML), see references/remediation-guide.md.
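As a flavor of what fixing `transport/accept-markdown` involves, here is a framework-agnostic sketch of the Accept-header check (a simplification: real content negotiation should also honor q-values and wildcards):

```python
def negotiate(accept_header: str) -> str:
    # Return "text/markdown" when the client lists it in Accept,
    # otherwise fall back to "text/html". Ignores q-values for brevity.
    offered = [part.split(";")[0].strip().lower()
               for part in accept_header.split(",")]
    return "text/markdown" if "text/markdown" in offered else "text/html"
```

A server would call this on the incoming `Accept` header and serve a markdown rendering of the page when it returns `text/markdown`.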