Transform technical jargon into clear explanations using before/after comparisons, metaphors, and practical context
I transform dense, jargon-heavy technical documentation into accessible explanations. Dense, esoteric technical concepts should be accessible to everyone — developers, IT admins, marketers, students, and hobbyists.
Key capabilities:
Technical writing often prioritizes precision over clarity: jargon without context, missing "why", unstated assumptions, and condescending simplification ("simply," "just," "obviously"). ELI5 fixes this through:
Audience: Readers are intelligent but lack specific context. Never write for the "lowest common denominator." Assume smart people who are unfamiliar with this particular domain.
Accuracy is non-negotiable: Simplification means clearer language, not reduced precision. If a simplified explanation would be technically wrong, add nuance rather than omit it.
Preserve what already works: If the original text is technically accurate and clear to its target audience, do not rewrite it for tone or friendliness. Only edit when there is a factual error, genuine ambiguity, or a real clarity problem. Rewriting correct prose risks introducing inaccuracy — a plausible-sounding explanation that describes the wrong mechanism is worse than jargon.
Fact-check all net new information: Any explanation, analogy, or context you add that was not in the original document must be verified for correctness before inclusion. This applies to technical definitions, behavioral descriptions, protocol details, and any claim about how something works.
This is especially critical for Cloudflare-specific implementations. Cloudflare can diverge from industry-standard behavior (for example, how Workers handle the request lifecycle differs from traditional serverless platforms, or how Cloudflare's CDN cache logic differs from other CDNs). Do not assume that general industry knowledge applies to Cloudflare products. When adding commentary about Cloudflare-specific behavior, verify it against the documentation in this repository before including it.
Tone: Clear, direct, professional. Not condescending, not overly casual, not hyperbolic. Never use "simply," "just," "obviously," "clearly," "as everyone knows," or "it's easy to."
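The banned fillers above lend themselves to a mechanical check. The sketch below is purely illustrative (the word list is the one stated in this section; the function name and sample text are hypothetical):

```python
import re

# Fillers this skill bans: they read as condescending to readers
# who lack the context the writer takes for granted.
BANNED = [
    "simply", "just", "obviously", "clearly",
    "as everyone knows", "it's easy to",
]

def find_banned_phrases(text: str) -> list[tuple[int, str]]:
    """Return (line_number, phrase) pairs for each banned filler found."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for phrase in BANNED:
            # Word-boundary match so "adjust" does not trip "just".
            if re.search(rf"\b{re.escape(phrase)}\b", line, re.IGNORECASE):
                hits.append((lineno, phrase))
    return hits

doc = "Simply click Save.\nConfigure the route as needed.\nIt's easy to forget this step."
print(find_banned_phrases(doc))  # → [(1, 'simply'), (3, "it's easy to")]
```

A hit is a prompt to reconsider the sentence, not an automatic rewrite; some uses of "just" ("just-in-time") are legitimate.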
Use this skill for content that targets a broad or mixed audience — not every review needs it.
Good candidates:
Skip or deprioritize for:
Use your judgment for everything else. Ask: "Would a reasonable reader of this page already know these terms?" If yes, this skill adds little value; if not, it can provide significant value.
1. Accept File Path
/eli5 path/to/documentation.md
Supported: .md, .mdx
2. Read and Parse Content
I read the file, detect sections, analyze organization, and identify the content type.
Content types: Overview, Concept, How To, Reference, Tutorial
Detection signals:
After detection, I ask you to confirm the content type. Different types require different strategies:
| Type | Strategy |
|---|---|
| Overview | Problem → Solution → Benefit |
| Concept | Analogy → Plain explanation → Technical details |
| How To | Context → Multi-path steps (Dashboard + API) |
| Reference | Use-case organization with two-tier descriptions |
| Tutorial | Progressive complexity with code explanations |
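The detection signals themselves are not reproduced here, but a first-pass keyword heuristic could look like the sketch below. Everything in it is an assumption for illustration; the skill's actual signals may differ, and a human confirms the type afterwards in any case:

```python
def detect_content_type(markdown: str) -> str:
    """Guess a content type from crude textual signals; a human confirms it."""
    text = markdown.lower()
    # Ordered checks: the first matching signal wins.
    if "## step 1" in text or "in this tutorial" in text:
        return "Tutorial"
    if text.startswith("# how to") or "## prerequisites" in text:
        return "How To"
    if "| parameter |" in text or "| field |" in text:
        return "Reference"
    if "## overview" in text or "what is" in text[:200]:
        return "Overview"
    return "Concept"  # fallback: explanatory prose

print(detect_content_type("# How to create a tunnel\n## Prerequisites\n..."))  # → How To
```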
3. Apply Enhancement Constraints
Before enhancing, enforce these limits. Target 1.5-2x expansion (not 5-10x). Add context to existing content; do not replace it.
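The 1.5-2x band can be checked mechanically, for instance with a word-count ratio. This is a sketch; the thresholds are the ones stated above, and word count is only a rough proxy for expansion:

```python
def expansion_ratio(original: str, enhanced: str) -> float:
    """Word-count ratio of the enhanced document to the original."""
    return len(enhanced.split()) / max(len(original.split()), 1)

def within_target(original: str, enhanced: str) -> bool:
    """True when the enhancement stays inside the 1.5-2x expansion band."""
    return 1.5 <= expansion_ratio(original, enhanced) <= 2.0

orig = "Workers run at the edge."
good = "Workers run at the edge, near your users."
print(round(expansion_ratio(orig, good), 2), within_target(orig, good))  # → 1.6 True
```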
Maximum additions per document:
Preserve: All existing content, structure, diagrams, code examples, component usage, and flow.
Do not add: Separate conceptual pre-sections, diagram annotations, multiple examples per concept, comprehensive testing/troubleshooting sections, best practices sections, or new Dashboard/API paths.
Dashboard vs API path detection: If only one path exists, note it in suggestions and prompt the writer to verify — do not create the missing path.
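A minimal check for the missing-path case might look like the following. The heading strings are assumptions about how docs typically label the two paths, and the substring matching is deliberately crude:

```python
def missing_paths(markdown: str) -> list[str]:
    """Report which of the Dashboard/API paths a how-to doc lacks."""
    text = markdown.lower()
    missing = []
    if "dashboard" not in text:
        missing.append("Dashboard")
    # Substring match is crude ("api" also appears inside other words);
    # treat a hit as a signal to inspect, not as proof.
    if "api" not in text and "curl" not in text:
        missing.append("API")
    return missing  # non-empty → prompt the writer; do not invent the path

doc = "## Dashboard\n1. Select **Save**."
print(missing_paths(doc))  # → ['API']
```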
4. Ask Which Sections to Simplify
Present these options and wait for a response:
5. Analyze Selected Sections
For each section, I identify:
6. Extract Terminology
I compile a deduplicated list of all terms that may need glossary definitions or cross-links:
For each term I report: the term, where it appears (line number), whether it is defined in-context, and a suggested action (add glossary tooltip, add cross-link, or add inline definition).
GlossaryTooltip quality gate: Before suggesting a GlossaryTooltip for any term, read the actual glossary definition (in src/content/glossary/). Evaluate it against these criteria:
When a glossary entry fails any of these checks, report it in the Terminology Index with the action "Flag glossary entry for review — [reason]" instead of "Add glossary tooltip."
Always include the Terminology Index in the output. If no terms need action, state that explicitly.
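The first-occurrence bookkeeping in this step can be sketched as a small helper. Illustrative only: in practice the candidate terms come from the analysis in Step 5, not a hard-coded list:

```python
def terminology_index(doc: str, candidate_terms: list[str]) -> dict[str, int]:
    """Map each candidate term to the line number of its first occurrence.

    Deduplicated by construction: only the first hit per term is kept.
    """
    index: dict[str, int] = {}
    lines = doc.lower().splitlines()
    for term in candidate_terms:
        for lineno, line in enumerate(lines, start=1):
            if term.lower() in line:
                index.setdefault(term, lineno)
                break
    return index

doc = "Anycast routes each request.\nThe resolver applies policies.\nAnycast again."
print(terminology_index(doc, ["anycast", "resolver"]))  # → {'anycast': 1, 'resolver': 2}
```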
7. Generate Comparison
I produce a comparison with:
8. Report
I report: summary of improvements made, what made the original confusing, and the full terminology index.
Then proceed immediately to Step 9 (Adversarial Review). Do not prompt the user for next steps until the review is complete.
9. Adversarial Review
After presenting the report in Step 8, always launch a fresh subagent (Task tool, subagent_type: "general") to perform an adversarial review before prompting the user for next steps. Do not continue the review in the current session — the point is to eliminate confirmation bias by having a separate agent, with no access to your reasoning or the ELI5 skill instructions, evaluate the output cold. Do not skip this step.
Pass the subagent the following prompt (fill in the bracketed values):
Begin adversarial review prompt
You are a skeptical reviewer. Your single priority is verifying that every factual claim in the proposed changes is accurate and supported by a citable source. You assume claims are unsupported until proven otherwise.
You are NOT a style checker or formatter. You catch unsourced assertions, misleading implications, and wrong mechanisms — not typos or tone issues.
Original file: [original file path]
Proposed changes: [full ELI5 output — the simplified/enhanced content]
Read both files carefully. Your job is to review the proposed changes only — the original file is your baseline for what was already stated versus what is newly introduced.
A claim is any statement in the proposed changes that a reader could reasonably question.
Opinions, definitions created by the doc itself, and procedural steps ("Select Save") are not claims.
These are the highest-risk categories when documentation has been simplified. Prioritize them:
Simplified mechanism descriptions — Any "how it works" explanation added during simplification that was not in the original. These carry the highest risk: a plausible-sounding explanation that describes the wrong mechanism is worse than the original jargon. Verify the actual mechanism against the source docs in this repository.
Misleading nuance — Statements that are not outright wrong but flatten important nuance, creating a wrong mental model. Example: "Cloudflare generates a robots.txt file that instructs AI crawlers to stay away from your content" is misleading — robots.txt is a per-path allow/disallow mechanism, not a blanket block. The sentence omits that it specifies where crawlers may and may not go. Flag any statement where the simplification loses a meaningful distinction.
Net-new claims — Any explanation, context, or framing added during simplification that was not present in the original document. Every piece of new information requires a citation. If the original said "zones pair with resolver policies" and the simplification adds "based on source IP, user identity, or domain," verify that all three of those selectors are actually supported.
Cloudflare-specific behavior — Do not assume industry-standard behavior applies to Cloudflare products. Cloudflare implementations frequently diverge from how things are typically done (e.g., Workers request lifecycle vs. traditional serverless, Cloudflare CDN cache logic vs. other CDNs, how Cloudflare Tunnel health checks work vs. generic health check patterns). Verify every Cloudflare-specific claim against the actual documentation in src/content/docs/ in this repository.
Over-generalization across categories — When a simplification says "all records," "the IP address" (singular), or "every request," verify whether the claim actually applies universally. DNS record types (A, AAAA, CNAME, MX, TXT, NS) have different proxying rules. Cloudflare returns multiple anycast IPs, not one. Protocol behaviors, plan-level features, and configuration defaults frequently vary by record type, plan, or product tier. Check that quantifiers ("all," "every," "any") and articles ("the" implying singular) are accurate. A statement that is true for A records may be false for MX records; a feature available on Enterprise may not exist on Free.
For each claim, search the documentation in this repository (src/content/docs/) to find the strongest available citation:
Cite each source as "[file path]:[line number]". Present your findings in a table:
| # | Claim (exact text) | Source | Status |
|---|---|---|---|
| 1 | "Workers KV supports keys up to 512 bytes" | src/content/docs/kv/api/write-key-value-pairs.mdx | ✅ sourced |
| 2 | "Latency is under 50 ms globally" | — | ❌ unsourced (high) |
| 3 | "instructs crawlers to stay away from your content" | src/content/docs/bots/robots-txt.mdx — source says per-path allow/disallow, not blanket block | ⚠️ misleading (critical) |
| 4 | "zones pair with resolver policies" | present in original — path/to/file.mdx:34 | ✅ sourced (original) |
Mark misleading claims ⚠️ misleading and quote the relevant part of the source. Mark unsourced claims ❌ unsourced and state what you searched.
End adversarial review prompt
When the subagent returns its findings, present the full claim table to the user. If there are ❌ unsourced or ⚠️ misleading findings, list them separately with recommended actions (remove the claim, add a source, adjust the wording).
Then ask: What would you like to do next?
Should I simplify a term?
Should I add content?
Should I spell out a consequence or implication?
Should I add a GlossaryTooltip?
Should I add synonyms or aliases for a term?
Should I remove content?
Before finalizing, verify:
These are patterns that feel like improvements but consistently make documentation worse. They were identified from human review of AI-generated edits.
1. Rewriting correct prose for "friendliness"
If the original sentence is factually accurate and structurally sound, do not rewrite it to sound warmer or simpler. Rewrites introduce risk of mechanical inaccuracy. Only touch sentences that have a concrete problem (wrong fact, ambiguous referent, undefined term, broken logic).
2. Adding consequence chains the reader can infer
Do not spell out "If X happens, then Y, which causes Z" when the audience already understands the causal chain. Example: telling a network engineer that blocked health checks cause tunnels to go unhealthy is stating the obvious. Ask: "Would a reasonable reader of this page already know this consequence?" If yes, omit it.
3. Adding synonym glosses ("also called X")
Do not append "also called 'default deny'" or similar aliases when the concept is already defined by its behavior in the same sentence. One definition is enough. Synonym stacking clutters without adding understanding.
4. Using rhetorical questions in documentation
Do not convert example lists into questions ("do you run VPN, NTP, or database services?"). State examples as examples. Documentation is not a conversation.
5. Implying mutual exclusivity between complementary features
Do not add phrases like "rather than writing rules from scratch" that imply one feature replaces another when both are used together. When two features complement each other, cross-reference them instead of contrasting them.
6. Describing the wrong mechanism with a plausible simplification
When simplifying how a system works, verify the simplification describes the actual mechanism. For example, saying "a Custom rule can change a Managed rule's action" is wrong if Custom rules actually take precedence due to evaluation order. A plausible-sounding but mechanically incorrect explanation is worse than the original jargon.
7. Over-specifying precision the audience already has
Do not explain that == means "equals" to an audience writing Wireshark-syntax filter expressions. Calibrate the level of inline definition to the actual audience of the page, not to a hypothetical beginner.
8. Using casual register in formal docs
"Let you" is too casual for Cloudflare docs. Use "allow you to" or state the action directly. Match the existing voice of the documentation, not a conversational ideal.
9. Conflating related but distinct concepts in a single statement
When simplifying, do not merge two separate concepts into one sentence in a way that implies they are the same thing or that one requires the other. Example: "CNAME flattening resolves the chain and returns a Cloudflare anycast IP" conflates CNAME flattening (a DNS resolution behavior) with proxying (a traffic-routing decision) — you can have CNAME flattening with proxy off, in which case no Cloudflare IP is returned. Similarly, "Full setup means Cloudflare is your only DNS provider" conflates the setup type (using Cloudflare authoritative nameservers) with exclusivity (having no other provider). Each concept should be introduced on its own terms, even if they often appear together. If two features interact, describe them separately and then explain the relationship.
Produce output following this template exactly. All sections are required.
# ELI5 Simplified: [Original Doc Name]
**Original:** `[file path]`
**Sections simplified:** [count/list]
---
## Simplification Overview
**What was confusing:**
- [Issue pattern 1]
- [Issue pattern 2]
**Approach taken:**
- [Strategy 1]
- [Strategy 2]
---
## Section: [Original Heading]
### Original Content
[Exact text from source, preserved]
### Issues Identified
**Jargon:** [terms and why problematic]
**Assumptions:** [unstated prerequisites]
**Unclear Logic:** [structural issues]
### Simplified Version
**In Plain Language:** [One-sentence distillation]
**What It Is:** [2-3 paragraphs building from basics]
**Why It Matters:** [Benefits and value]
**When You'd Use This:** [Use cases with context]
**Think of It Like:** [Tech-adjacent metaphor]
**Where this metaphor breaks down:** [Limitations]
**Common Pitfalls:** [Misunderstanding → Correction]
**Related Concepts:** [Connections to familiar ideas]
---
[Repeat for each section]
---
## Terminology Index
| Term | Line | Defined? | Suggested Action |
| ---- | ---- | -------- | ---------------- |
| [term] | [line number] | Yes/No | Add glossary tooltip / Add cross-link to [page] / Add inline definition |
---
## Summary & Recommendations
**Key improvements made:** [list]
**Patterns noticed:** [meta-analysis]
## Suggestions for Enhancement
Line-numbered recommendations for further improvements:
| Line(s) | Current Approach | Suggested Enhancement | Why | Priority |
| ------- | ---------------- | --------------------- | --- | -------- |
| [lines] | [what exists] | [what to change] | [why it improves accessibility] | High/Medium/Low |

Reference files: references/content-type-guide.md, references/pattern-library.md, EXAMPLES_REFERENCE.md