Five-skill presentation system: ingest talks into a rhetoric vault, run interactive clarification, generate a speaker profile, create presentations that match your documented patterns, and produce the deck illustrations + thumbnail visual layer. Includes a 102-entry Presentation Patterns taxonomy (91 observable, 11 unobservable go-live items) for scoring, brainstorming, and go-live preparation.
Phase 4 runs two complementary checkers against outline.yaml:
| Script | Surface | Output |
|---|---|---|
| scripts/check-rhetorical.py outline.yaml | Closed pattern taxonomy — opening PUNCH, big-idea singleton, thesis preview/payoff, sparkline elements, master-story threading, callback ledger, inoculation count, progressive-list contiguity, running gags, duration accounting | rhetorical-review.md |
| scripts/guardrail-check.py outline.yaml <speaker-profile.json> | Profile-aware rules — slide budget, Act 1 ratio, branding, profanity, data attribution, closing completeness, cut-line availability (conditional on modular_design) | stdout report |
Anti-pattern frequency from profile.pattern_profile.antipattern_frequency and
illustration coverage (checks 9 and 10 below) are not currently wired into
guardrail-check.py — the agent should surface them as additional manual
checks alongside the script output during the Phase 4 audit.
The two scripts are independent — run both. check-rhetorical.py needs no
profile and emits a deterministic report regardless of the speaker. guardrail-check.py
fuses outline data with profile thresholds and venue context.
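To make the "run both" step concrete, here is a minimal sketch of invoking the two checkers in sequence from the repo root. The invocations mirror the table above; the wrapper itself is illustrative and not part of the skill.

```python
import subprocess
import sys

def run_phase4_checks(outline: str = "outline.yaml",
                      profile: str = "speaker-profile.json") -> None:
    """Run both Phase 4 checkers.

    check-rhetorical.py writes rhetorical-review.md on its own;
    guardrail-check.py prints its report to stdout.
    """
    subprocess.run([sys.executable, "scripts/check-rhetorical.py", outline], check=True)
    subprocess.run([sys.executable, "scripts/guardrail-check.py", outline, profile], check=True)
```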
This file defines the guardrail check structure — what each check covers, how it's wired to schema fields, and how results are reported.
Thresholds and speaker-specific rules come from the vault at runtime:
- speaker-profile.json → guardrail_sources.slide_budgets[]
- speaker-profile.json → guardrail_sources.act1_ratio_limits[]
- speaker-profile.json → guardrail_sources.recurring_issues[]
- speaker-profile.json → design_rules.footer
- speaker-profile.json → rhetoric_defaults
- speaker-profile.json → confirmed_intents[]

If the speaker profile is not available, fall back to the rhetoric vault summary Sections 15 (Areas for Improvement) and 16 (Speaker-Confirmed Intent) for prose rules.
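For orientation, a minimal sketch of the profile fields this file references, expressed as a Python dict. The top-level field names come from the list above; the nested structure and values are assumptions for illustration only.

```python
# Hypothetical speaker-profile.json excerpt.
# Top-level keys follow the list above; entry shapes and values are illustrative.
speaker_profile = {
    "guardrail_sources": {
        "slide_budgets": [
            {"duration_minutes": 20, "max_slides": 25},
            {"duration_minutes": 45, "max_slides": 50},
        ],
        "act1_ratio_limits": [
            {"duration_minutes": 20, "max_ratio": 0.25},
        ],
        "recurring_issues": [
            {"name": "Shortchanged", "guardrail": "verify cut lines exist", "severity": "WARN"},
        ],
    },
    "design_rules": {"footer": {"elements": ["conference hashtag", "speaker handle"]}},
    "rhetoric_defaults": {
        "default_duration_minutes": 45,
        "modular_design": True,
        "profanity_calibration": "mild",
        "on_slide_profanity": False,
        "three_part_close": True,
    },
    "confirmed_intents": [],
}
```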
Run these checks after Phase 3 delivery and after each Phase 4 revision.
Read guardrail_sources.slide_budgets[] from the speaker profile for the thresholds.
Match the talk's duration to the closest budget entry.
Progressive-reveal slides count toward the budget. If you show the same chart 3 times with different bars highlighted, that's 3 slides, not 1. Flag progressive reveals as budget-expensive and ask if the emphasis is worth the cost.
Demo-driven talks are the exception. If the selected mode is demo-driven, apply a much lower slide budget (the live demo IS the content).
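A minimal sketch of the budget arithmetic under stated assumptions: the outline exposes duration_minutes and a per-slide reveal_steps count, and each slide_budgets entry carries duration_minutes and max_slides (all hypothetical field names).

```python
def check_slide_budget(outline: dict, profile: dict) -> str:
    """Sketch of the slide budget check: count vs. the closest duration budget."""
    duration = outline["duration_minutes"]
    budgets = profile["guardrail_sources"]["slide_budgets"]
    # Match the talk's duration to the closest budget entry.
    budget = min(budgets, key=lambda b: abs(b["duration_minutes"] - duration))
    # Progressive reveals count once per reveal step, not once per chart.
    actual = sum(slide.get("reveal_steps", 1) for slide in outline["slides"])
    status = "PASS" if actual <= budget["max_slides"] else "FAIL"
    return f"[{status}] Slide count: {actual}/{budget['max_slides']} for {duration}-minute slot"
```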
[PASS/FAIL] Slide count: {actual}/{budget} for {duration}-minute slot

If over budget, suggest specific cuts. Prioritize cutting:
Read guardrail_sources.act1_ratio_limits[] from the speaker profile for the limits.
[PASS/WARN/FAIL] Act 1 ratio: {act1_slides}/{total_slides} = {percentage}%
(limit: {max}% for {duration}-min slot)

WARN if within 5% of limit. FAIL if over.
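A sketch of the ratio check under similar assumptions (slides carry an act tag; limit entries carry duration_minutes and max_ratio, both hypothetical names).

```python
def check_act1_ratio(outline: dict, profile: dict) -> str:
    """Sketch of the Act 1 ratio check against the profile limit."""
    duration = outline["duration_minutes"]
    limits = profile["guardrail_sources"]["act1_ratio_limits"]
    limit = min(limits, key=lambda e: abs(e["duration_minutes"] - duration))["max_ratio"]
    total = len(outline["slides"])
    act1 = sum(1 for s in outline["slides"] if s.get("act") == 1)
    ratio = act1 / total
    if ratio > limit:
        status = "FAIL"
    elif ratio > limit - 0.05:  # WARN when within 5 percentage points of the limit
        status = "WARN"
    else:
        status = "PASS"
    return (f"[{status}] Act 1 ratio: {act1}/{total} = {ratio:.0%} "
            f"(limit: {limit:.0%} for {duration}-min slot)")
```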
Read design_rules.footer from the speaker profile for the footer elements.
Generate the checklist dynamically from footer.elements[]:
(co-presented talks also include footer.co_presented_extra)

[PASS/FAIL] Branding: Footer elements specified for {conference}
[WARN] Branding: Conference hashtag not yet confirmed — flag for author

Read rhetoric_defaults.profanity_calibration and rhetoric_defaults.on_slide_profanity
from the speaker profile.
The key rule (common across speakers): keep profanity verbal-only by default. On-slide profanity limits deck reuse across venues.
[PASS/FAIL] Profanity register: {spec_register} applied consistently
[WARN/FAIL] On-slide profanity: {count} instances found — {approved/not approved}

If on-slide profanity is present without explicit approval, flag it: "Slide {N} has baked-in profanity: '{text}'. This limits reuse at corporate/family-friendly events. Keep it verbal-only, or explicitly approve for this talk?"
Survey data without visible source attribution creates credibility risk.
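A sketch of the attribution scan, assuming each slide carries a data flag, a slide number n, and an optional source field (all hypothetical names).

```python
def check_data_attribution(outline: dict) -> str:
    """Sketch of the data attribution check: every data slide should name its source."""
    data_slides = [s for s in outline["slides"] if s.get("data")]
    missing = [s["n"] for s in data_slides if not s.get("source")]
    status = "PASS" if not missing else "FAIL"
    report = (f"[{status}] Data attribution: {len(data_slides)} data slides checked, "
              f"{len(missing)} missing sources")
    if missing:
        report += f"\nSlides needing sources: {missing}"
    return report
```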
[PASS/FAIL] Data attribution: {N} data slides checked, {M} missing sources
Slides needing sources: {list}

Expired dates, deadlines, and promotional material appear on reused slides.
[PASS/FAIL] Time-sensitive content: {count} items found
{list with slide numbers and content}

Even compressed formats need a structured close. Read rhetoric_defaults.three_part_close
from the speaker profile for whether the speaker defaults to a full three-part close.
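A sketch of the closing check, assuming the outline exposes a closing section with summary, cta, and social entries (hypothetical names).

```python
def check_closing(outline: dict) -> str:
    """Sketch of the closing check: minimum viable close = summary + CTA + social."""
    closing = outline.get("closing", {})  # hypothetical closing section of outline.yaml
    parts = {k: bool(closing.get(k)) for k in ("summary", "cta", "social")}
    status = "PASS" if all(parts.values()) else "FAIL"
    lines = [f"[{status}] Closing: summary={parts['summary']} | "
             f"CTA={parts['cta']} | social={parts['social']}"]
    for missing in (k for k, present in parts.items() if not present):
        lines.append(f"[WARN] Closing: Missing {missing} — intentional or oversight?")
    return "\n".join(lines)
```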
Minimum viable close: a summary, a call to action (CTA), and social/contact info.
[PASS/FAIL] Closing: {summary present?} | {CTA present?} | {social present?}
[WARN] Closing: Missing {component} — intentional or oversight?

Always check this. Read rhetoric_defaults.default_duration_minutes and
rhetoric_defaults.modular_design from the speaker profile.
In outline.yaml, cuttable chapters and slides carry cuttable: true. The
guardrail counts cuttable minutes and verifies the talk can compress to
shorter slots.
[PASS/FAIL] Cut lines: {cuttable_min} min of cuttable content
(cuttable chapters: {ids}; cuttable slides: {ns})

FAIL when the profile's rhetoric_defaults.modular_design is true and no
chapter or slide carries cuttable: true. PASS-and-skipped when
modular_design is false (speakers who opt out don't get penalized for
inflexible decks). The talk-duration-shorter-than-default heuristic is no
longer used — the modular_design flag is the single gate.
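A sketch of the gate logic, assuming chapters carry id, minutes, cuttable, and nested slides (hypothetical field names).

```python
def check_cut_lines(outline: dict, profile: dict) -> str:
    """Sketch of the cut-line check: cuttable minutes, gated on modular_design."""
    if not profile["rhetoric_defaults"].get("modular_design", False):
        return "[PASS] Cut lines: skipped (modular_design is false)"
    cuttable_chapters = [c for c in outline["chapters"] if c.get("cuttable")]
    cuttable_minutes = sum(c.get("minutes", 0) for c in cuttable_chapters)
    cuttable_slides = [s["n"] for c in outline["chapters"]
                       for s in c.get("slides", []) if s.get("cuttable")]
    status = "PASS" if (cuttable_chapters or cuttable_slides) else "FAIL"
    return (f"[{status}] Cut lines: {cuttable_minutes} min of cuttable content "
            f"(cuttable chapters: {[c['id'] for c in cuttable_chapters]}; "
            f"cuttable slides: {cuttable_slides})")
```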
This check has two layers: speaker-specific recurring issues from the vault, and taxonomy-based antipattern scanning from the Presentation Patterns reference.
Read guardrail_sources.recurring_issues[] from the speaker profile. Each entry
describes a known weakness and its specific guardrail check.
For each recurring_issues entry, run the check described in its guardrail field
and report at the severity level in its severity field.
Common anti-patterns (may or may not apply to a given speaker):
Read references/patterns/_index.md (Phase 4 section of the phase-grouped lookup table)
and profile → pattern_profile.antipattern_frequency if available.
Speaker-specific antipatterns — scan pattern_profile.antipattern_frequency for
patterns with severity: "recurring". These are flagged as [RECURRING] with the
speaker's historical frequency and trend.
Contextual antipatterns — scan the outline against ALL antipatterns from the taxonomy.
For each match, read the individual pattern file for detection heuristics and scoring
criteria. These are flagged as [CONTEXTUAL] (new detection, not historically tracked).
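A sketch of the two-layer scan. The antipattern_frequency entry fields (frequency, trend) and the per-pattern detect() heuristic are assumptions standing in for the real taxonomy files.

```python
def check_antipatterns(outline: dict, profile: dict, taxonomy: list) -> list:
    """Sketch of the antipattern check: recurring (historical) + contextual (new) flags."""
    flags = []
    history = profile.get("pattern_profile", {}).get("antipattern_frequency", {})
    # Layer 1: speaker-specific recurring antipatterns from the profile.
    for name, entry in history.items():
        if entry.get("severity") == "recurring":
            flags.append(f"[RECURRING] {name} ({entry['frequency']}, {entry['trend']})")
    # Layer 2: scan the outline against every taxonomy antipattern.
    for pattern in taxonomy:
        hits = pattern.detect(outline)  # hypothetical per-pattern detection heuristic
        if hits and pattern.name not in history:
            flags.append(f"[CONTEXTUAL] {pattern.name} — {hits}")
    return flags
```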
Contextual detection rules:
Report format:
[RECURRING] Shortchanged (8/24, decreasing) — plan cut lines for the 20-min slot
[RECURRING] Meme accretion (5/24, stable) — Act 1 meme ratio at 55%
[CONTEXTUAL] Bullet-Riddled Corpse — slides 14, 22 have 6+ bullet points
[CONTEXTUAL] Dual-Headed Monster — co-presented talk, handoff points not defined

Only runs when the outline includes an Illustration Style Anchor section.
When no illustration strategy exists, output [SKIP] Illustrations: no illustration strategy defined in the guardrail report and skip all sub-checks below.
Visual coverage ratio — what percentage of non-EXCEPTION slides have image prompts. Not every slide needs an illustration (text-only slides, transition slides), but the ratio should reflect the author's intent for the deck's visual density.
Format consistency — every slide has a Format field (FULL, IMG+TXT, EXCEPTION,
or a custom format from the format vocabulary).
EXCEPTION justification — every EXCEPTION slide has a reason explaining why it uses a real asset instead of a generated illustration.
Style anchor reference — every image prompt starts with [STYLE ANCHOR] (the token
the generation script replaces with the actual anchor text).
Prompt quality — no prompts that are just the Illustration description copy-pasted. The image prompt should be richer and more specific than the human-readable description (it includes composition details, labeling, specific visual elements).
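A sketch of the coverage and anchor sub-checks, assuming slides carry format and image_prompt fields and the outline exposes an illustration_style_anchor flag (all hypothetical names).

```python
def check_illustrations(outline: dict) -> list:
    """Sketch of the illustration checks: coverage, format tags, style-anchor prefix."""
    if not outline.get("illustration_style_anchor"):
        return ["[SKIP] Illustrations: no illustration strategy defined"]
    slides = outline["slides"]
    non_exception = [s for s in slides if s.get("format") != "EXCEPTION"]
    with_prompts = [s for s in non_exception if s.get("image_prompt")]
    tagged = [s for s in slides if s.get("format")]
    anchored = [s for s in with_prompts if s["image_prompt"].startswith("[STYLE ANCHOR]")]
    # The coverage threshold is the author's call; full coverage is assumed here.
    return [
        f"[{'PASS' if len(with_prompts) == len(non_exception) else 'FAIL'}] Illustration coverage: "
        f"{len(with_prompts)}/{len(non_exception)} illustrated slides have image prompts",
        f"[{'PASS' if len(tagged) == len(slides) else 'FAIL'}] Format tags: "
        f"{len(tagged)}/{len(slides)} slides have format specified",
        f"[{'PASS' if len(anchored) == len(with_prompts) else 'FAIL'}] Style anchor reference: "
        f"{len(anchored)}/{len(with_prompts)} prompts start with [STYLE ANCHOR]",
    ]
```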
[PASS/FAIL] Illustration coverage: {N}/{M} illustrated slides have image prompts
[PASS/FAIL] Format tags: {N}/{total} slides have format specified
[WARN] EXCEPTION slides: {N} without justification
[PASS/FAIL] Style anchor reference: {N}/{M} prompts start with [STYLE ANCHOR]
[PASS/WARN] Prompt quality: {N} prompts appear to be copy-paste of description

Build coverage check — when slides define "- Builds:" sections:
[PASS/SKIP] Build coverage: {N} slides have builds defined, {M} build-step images generated

SKIP if no slides define builds. PASS if all defined build steps have corresponding
images in illustrations/builds/. Report missing steps if any.
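A sketch of the build coverage check. The slide-{n}-step-{k}.png naming convention under illustrations/builds/ is an assumption for illustration, not a documented contract.

```python
from pathlib import Path

def check_build_coverage(outline: dict,
                         builds_dir: Path = Path("illustrations/builds")) -> str:
    """Sketch of the build coverage check: every declared build step has an image."""
    declared = 0
    missing = []
    for slide in outline["slides"]:
        steps = slide.get("builds", [])  # hypothetical per-slide build-step list
        if not steps:
            continue
        declared += 1
        for k, _ in enumerate(steps, start=1):
            # Assumed naming convention: slide-{n}-step-{k}.png
            if not (builds_dir / f"slide-{slide['n']}-step-{k}.png").exists():
                missing.append(f"slide {slide['n']} step {k}")
    if declared == 0:
        return "[SKIP] Build coverage: no slides define builds"
    status = "PASS" if not missing else "WARN"  # WARN for gaps is an assumption
    report = f"[{status}] Build coverage: {declared} slides have builds defined"
    if missing:
        report += f"; missing build-step images: {missing}"
    return report
```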
Prompt quality anti-patterns — flag prompts likely to produce poor results:
[WARN] Prompt anti-patterns: {N} prompts with potential issues

Detection rules (check all edit/fix/generation prompts):
Always runs. This is the convergent-thinking cut pass that asks of every section in the outline: does this directly support the Big Idea? Per the "Murder Your Darlings — The Pre-Delivery Cut Pass" subsection in patterns/prepare/crucible.md.
For each top-level section in the outline, ask whether it traces directly to a component of the Big Idea. Watch especially for sections built around a [CALLBACK: master-story reference], sections labeled with a personal anecdote that doesn't tie back to the thesis, and sections containing data-rich material that took significant effort to gather. These are the most-loved-by-author and most-likely-to-be-darlings.

[PASS/WARN/FAIL] Big Idea alignment: {N}/{total} sections traceable to Big Idea components
[WARN] Candidate darlings: {section names} — flagged for review

The pass produces flags, not automatic deletions. The author makes the actual cut decisions; the guardrail surfaces the candidates.
Always runs. Per patterns/build/sparkline.md (the "Three Contrast Types" subsection) and Duarte's kairos principle, the analytical/emotional ratio of the outline must match the audience type.
Categorize each content section as analytical (data, logic, argument, technical detail) or emotional (story, image, anecdote, evocative language). Compare the ratio against the audience profile from the spec:
[PASS/WARN] Emotion balance: {N}% analytical / {M}% emotional vs. {target}% target for {audience type}

WARN if the ratio is more than 15 percentage points off the target in either direction. The check is descriptive, not prescriptive — the author decides whether the imbalance is intentional (some talks deliberately skew). Surface the imbalance so it's a conscious choice.
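A sketch of the balance math, assuming each content section has already been labeled analytical or emotional and the spec supplies an analytical-percentage target for the audience (the kind label and target parameter are hypothetical).

```python
def check_emotion_balance(sections: list, target_analytical_pct: float) -> str:
    """Sketch of the emotion-balance check: compare section mix to the audience target."""
    analytical = sum(1 for s in sections if s["kind"] == "analytical")  # "kind" is assumed
    total = len(sections)
    analytical_pct = 100 * analytical / total
    emotional_pct = 100 - analytical_pct
    # WARN when more than 15 percentage points off the target, in either direction.
    status = "WARN" if abs(analytical_pct - target_analytical_pct) > 15 else "PASS"
    return (f"[{status}] Emotion balance: {analytical_pct:.0f}% analytical / "
            f"{emotional_pct:.0f}% emotional vs. {target_analytical_pct:.0f}% target")
```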
Runs before final spec lock for high-stakes talks (keynotes, sales pitches, fundraising presentations, executive briefings). Per the "Screening with Critics — Beyond Copyediting" subsection in patterns/build/peer-review.md.
Has the author scheduled (or executed) a formal screening session with the properties described in that subsection?
[PASS/WARN/SKIP] Screening: {scheduled / completed / not applicable for this stakes level}

SKIP for low-stakes talks (internal demos, small-group presentations, tutorial sessions). For everything else, WARN if no screening is scheduled or completed.
Use this template after each check:
GUARDRAIL CHECK — {talk title} — {date}
================================================
[PASS] Slide budget: {actual}/{max} for {duration}-min slot
[PASS/WARN/FAIL] Act 1 ratio: {%} (limit: {max}%)
[PASS/FAIL] Branding: {status}
[PASS/FAIL] Profanity: {register} applied, {on-slide count} on-slide
[PASS/FAIL] Data attribution: {N} slides checked, {M} issues
[PASS/FAIL] Time-sensitive: {count} items
[PASS/FAIL] Closing: summary={y/n} CTA={y/n} social={y/n}
[PASS/FAIL] Cut lines: {present/missing}
[INFO] Anti-patterns: {any flags from recurring_issues}
[RECURRING/CONTEXTUAL] Presentation Patterns: {taxonomy-based antipattern flags}
[PASS/FAIL/SKIP] Illustrations: {coverage ratio} | {format tags} | {prompt quality}
[PASS/SKIP] Builds: {N} defined, {M} images generated
[WARN] Prompt anti-patterns: {N} issues found
[PASS/WARN/FAIL] Big Idea alignment: {N}/{total} sections traceable to Big Idea
[PASS/WARN] Emotion balance: {N}% analytical / {M}% emotional vs. {target}%
[PASS/WARN/SKIP] Screening with critics: {scheduled / completed / not applicable}
================================================

The Illustrations, Builds, and Prompt anti-patterns lines show [SKIP] when no
illustration strategy is defined.