Four-skill presentation system: ingest talks into a rhetoric vault, run interactive clarification, generate a speaker profile, then create new presentations that match your documented patterns. Includes a 102-entry Presentation Patterns taxonomy (91 observable, 11 unobservable go-live items) for scoring, brainstorming, and go-live preparation.
This file defines the guardrail check structure — what to check, how to check it, and how to report results.
Thresholds and speaker-specific rules come from the vault at runtime:
- speaker-profile.json → guardrail_sources.slide_budgets[]
- speaker-profile.json → guardrail_sources.act1_ratio_limits[]
- speaker-profile.json → guardrail_sources.recurring_issues[]
- speaker-profile.json → design_rules.footer
- speaker-profile.json → rhetoric_defaults
- speaker-profile.json → confirmed_intents[]

If the speaker profile is not available, fall back to the rhetoric vault summary, Sections 15 (Areas for Improvement) and 16 (Speaker-Confirmed Intent), for prose rules.
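A minimal loading sketch for this fallback logic (the JSON key names follow the source list above; the return shape is an assumption):

```python
import json
from pathlib import Path

def load_guardrail_config(profile_path="speaker-profile.json"):
    """Load guardrail thresholds from the speaker profile, if present.

    Returns (config, source) where source is "profile" or "vault-summary".
    When the profile is missing, callers fall back to the prose rules in
    vault summary Sections 15 and 16.
    """
    path = Path(profile_path)
    if not path.exists():
        return None, "vault-summary"
    profile = json.loads(path.read_text())
    config = {
        "slide_budgets": profile["guardrail_sources"]["slide_budgets"],
        "act1_ratio_limits": profile["guardrail_sources"]["act1_ratio_limits"],
        "recurring_issues": profile["guardrail_sources"]["recurring_issues"],
        "footer": profile["design_rules"]["footer"],
        "rhetoric_defaults": profile["rhetoric_defaults"],
        "confirmed_intents": profile["confirmed_intents"],
    }
    return config, "profile"
```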
Run these checks after Phase 3 delivery and after each Phase 4 revision.
Read guardrail_sources.slide_budgets[] from the speaker profile for the thresholds.
Match the talk's duration to the closest budget entry.
Progressive-reveal slides count toward the budget. If you show the same chart 3 times with different bars highlighted, that's 3 slides, not 1. Flag progressive reveals as budget-expensive and ask if the emphasis is worth the cost.
Demo-driven talks are the exception. If the selected mode is demo-driven, apply a much lower slide budget (the live demo IS the content).
[PASS/FAIL] Slide count: {actual}/{budget} for {duration}-minute slot

If over budget, suggest specific cuts. Prioritize cutting:
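The budget-matching step can be sketched as follows (budget field names and the demo-driven halving heuristic are assumptions, not the profile's actual schema):

```python
def check_slide_budget(duration_min, slide_count, slide_budgets, demo_driven=False):
    """Match the talk duration to the closest budget entry and compare.

    slide_budgets: list of {"duration_minutes": int, "max_slides": int}
    dicts (assumed field names; adjust to the real profile schema).
    """
    budget = min(slide_budgets,
                 key=lambda b: abs(b["duration_minutes"] - duration_min))
    max_slides = budget["max_slides"]
    if demo_driven:
        # Demo-driven exception: the live demo is the content, so apply a
        # much lower ceiling (half the budget here, an assumed heuristic).
        max_slides //= 2
    status = "PASS" if slide_count <= max_slides else "FAIL"
    return f"[{status}] Slide count: {slide_count}/{max_slides} for {duration_min}-minute slot"
```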
Read guardrail_sources.act1_ratio_limits[] from the speaker profile for the limits.
[PASS/WARN/FAIL] Act 1 ratio: {act1_slides}/{total_slides} = {percentage}%
(limit: {max}% for {duration}-min slot)

WARN if within 5% of limit. FAIL if over.
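A minimal sketch of the ratio check, reading "within 5% of limit" as five percentage points (an interpretation, not stated explicitly in the source):

```python
def check_act1_ratio(act1_slides, total_slides, max_pct):
    """Compare Act 1's share of the deck against the profile limit.

    WARN if within 5 percentage points of the limit, FAIL if over it.
    """
    pct = 100.0 * act1_slides / total_slides
    if pct > max_pct:
        status = "FAIL"
    elif pct > max_pct - 5:
        status = "WARN"
    else:
        status = "PASS"
    return (f"[{status}] Act 1 ratio: {act1_slides}/{total_slides} "
            f"= {pct:.0f}% (limit: {max_pct}%)")
```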
Read design_rules.footer from the speaker profile for the footer elements.
Generate the checklist dynamically from footer.elements[]:
(plus footer.co_presented_extra)

[PASS/FAIL] Branding: Footer elements specified for {conference}
[WARN] Branding: Conference hashtag not yet confirmed — flag for author

Read rhetoric_defaults.profanity_calibration and rhetoric_defaults.on_slide_profanity
from the speaker profile.
The key rule (common across speakers): keep profanity verbal-only by default. On-slide profanity limits deck reuse across venues.
[PASS/FAIL] Profanity register: {spec_register} applied consistently
[WARN/FAIL] On-slide profanity: {count} instances found — {approved/not approved}

If on-slide profanity is present without explicit approval, flag it: "Slide {N} has baked-in profanity: '{text}'. This limits reuse at corporate/family-friendly events. Keep it verbal-only, or explicitly approve for this talk?"
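The scan itself is a straightforward text match; a sketch, assuming the register supplies a term list (the tuple shape and severity mapping are assumptions):

```python
def check_on_slide_profanity(slides, profanity_terms, approved=False):
    """Scan slide text for baked-in profanity.

    slides: list of (slide_number, text) tuples.
    profanity_terms: lowercase terms from the speaker's register
    (the exact source of this list is an assumption).
    """
    hits = [(n, text) for n, text in slides
            if any(term in text.lower() for term in profanity_terms)]
    if not hits:
        return "[PASS] On-slide profanity: 0 instances found"
    status = "WARN" if approved else "FAIL"
    label = "approved" if approved else "not approved"
    return f"[{status}] On-slide profanity: {len(hits)} instances found — {label}"
```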
Survey data without visible source attribution creates credibility risk.
[PASS/FAIL] Data attribution: {N} data slides checked, {M} missing sources
Slides needing sources: {list}

Expired dates, deadlines, and promotional material appear on reused slides.
[PASS/FAIL] Time-sensitive content: {count} items found
{list with slide numbers and content}

Even compressed formats need a structured close. Read rhetoric_defaults.three_part_close
from the speaker profile for whether the speaker defaults to a full three-part close.
Minimum viable close:
[PASS/FAIL] Closing: {summary present?} | {CTA present?} | {social present?}
[WARN] Closing: Missing {component} — intentional or oversight?

Always check this. Read rhetoric_defaults.default_duration_minutes and
rhetoric_defaults.modular_design from the speaker profile.
Scan the outline for literal [CUT LINE] strings marking where to truncate for shorter
slots, and [EXPAND ZONE] strings marking sections that can grow.
[PASS/FAIL] Cut lines: {present/missing} — scan for [CUT LINE] markers in the outline
[PASS/FAIL] Expand zones: {present/missing} for longer adaptation

FAIL if no [CUT LINE] marker exists and the talk duration is shorter than the
speaker's default.
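The marker scan and its FAIL condition can be sketched as follows (the expand-zone pass/fail rule based on simple presence is an assumption, since the source only specifies the cut-line FAIL condition):

```python
def check_cut_lines(outline_text, talk_duration, default_duration):
    """Scan an outline for [CUT LINE] and [EXPAND ZONE] markers."""
    has_cut = "[CUT LINE]" in outline_text
    has_expand = "[EXPAND ZONE]" in outline_text
    # FAIL only when the talk is shorter than the speaker's default
    # and no cut line exists to truncate it.
    cut_status = "FAIL" if (not has_cut and talk_duration < default_duration) else "PASS"
    expand_status = "PASS" if has_expand else "FAIL"
    return [
        f"[{cut_status}] Cut lines: {'present' if has_cut else 'missing'}",
        f"[{expand_status}] Expand zones: "
        f"{'present' if has_expand else 'missing'} for longer adaptation",
    ]
```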
This check has two layers: speaker-specific recurring issues from the vault, and taxonomy-based antipattern scanning from the Presentation Patterns reference.
Read guardrail_sources.recurring_issues[] from the speaker profile. Each entry
describes a known weakness and its specific guardrail check.
For each recurring_issues entry, run the check described in its guardrail field
and report at the severity level in its severity field.
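The iteration over vault entries is simple; a sketch, with the entry field names ("issue", "guardrail", "severity") assumed from the prose above and the detection itself delegated to a callable:

```python
def run_recurring_issue_checks(recurring_issues, detect):
    """Run each vault-documented recurring-issue check at its own severity.

    recurring_issues: entries with "issue", "guardrail", and "severity"
    fields (assumed names). detect is a callable that evaluates the
    guardrail description against the outline and returns True on a hit.
    """
    reports = []
    for entry in recurring_issues:
        if detect(entry["guardrail"]):
            reports.append(f"[{entry['severity'].upper()}] {entry['issue']}")
    return reports
```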
Common anti-patterns (may or may not apply to a given speaker):
Read references/patterns/_index.md (Phase 4 section of the phase-grouped lookup table)
and profile → pattern_profile.antipattern_frequency if available.
Speaker-specific antipatterns — scan pattern_profile.antipattern_frequency for
patterns with severity: "recurring". These are flagged as [RECURRING] with the
speaker's historical frequency and trend.
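A minimal sketch of the recurring scan (the record fields "severity", "count", "total_talks", and "trend" are assumptions inferred from the report format):

```python
def flag_recurring_antipatterns(antipattern_frequency):
    """Flag patterns marked severity "recurring" in the speaker profile.

    antipattern_frequency: dict of pattern name -> record with "severity",
    "count", "total_talks", and "trend" fields (assumed names).
    """
    flags = []
    for name, rec in antipattern_frequency.items():
        if rec.get("severity") == "recurring":
            flags.append(
                f"[RECURRING] {name} ({rec['count']}/{rec['total_talks']}, {rec['trend']})"
            )
    return flags
```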
Contextual antipatterns — scan the outline against ALL antipatterns from the taxonomy.
For each match, read the individual pattern file for detection heuristics and scoring
criteria. These are flagged as [CONTEXTUAL] (new detection, not historically tracked).
Contextual detection rules:
Report format:
[RECURRING] Shortchanged (8/24, decreasing) — plan cut lines for the 20-min slot
[RECURRING] Meme accretion (5/24, stable) — Act 1 meme ratio at 55%
[CONTEXTUAL] Bullet-Riddled Corpse — slides 14, 22 have 6+ bullet points
[CONTEXTUAL] Dual-Headed Monster — co-presented talk, handoff points not defined

Only runs when the outline includes an Illustration Style Anchor section.
When no illustration strategy exists, output [SKIP] Illustrations: no illustration strategy defined in the guardrail report and skip all sub-checks below.
Visual coverage ratio — what percentage of non-EXCEPTION slides have image prompts. Not every slide needs an illustration (text-only slides, transition slides), but the ratio should reflect the author's intent for the deck's visual density.
Format consistency — every slide has a Format field (FULL, IMG+TXT, EXCEPTION,
or a custom format from the format vocabulary).
EXCEPTION justification — every EXCEPTION slide has a reason explaining why it uses a real asset instead of a generated illustration.
Style anchor reference — every image prompt starts with [STYLE ANCHOR] (the token
the generation script replaces with the actual anchor text).
Prompt quality — no prompts that are just the Illustration description copy-pasted. The image prompt should be richer and more specific than the human-readable description (it includes composition details, labeling, specific visual elements).
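The style-anchor sub-check is mechanical enough to sketch directly (the tuple shape for prompts is an assumption):

```python
def check_style_anchor(prompts):
    """Verify every image prompt starts with the [STYLE ANCHOR] token.

    prompts: list of (slide_number, prompt_text) tuples (assumed shape).
    Returns the report line plus the slide numbers missing the token.
    """
    missing = [n for n, p in prompts if not p.startswith("[STYLE ANCHOR]")]
    ok = len(prompts) - len(missing)
    status = "PASS" if not missing else "FAIL"
    report = (f"[{status}] Style anchor reference: "
              f"{ok}/{len(prompts)} prompts start with [STYLE ANCHOR]")
    return report, missing
```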
[PASS/FAIL] Illustration coverage: {N}/{M} illustrated slides have image prompts
[PASS/FAIL] Format tags: {N}/{total} slides have format specified
[WARN] EXCEPTION slides: {N} without justification
[PASS/FAIL] Style anchor reference: {N}/{M} prompts start with [STYLE ANCHOR]
[PASS/WARN] Prompt quality: {N} prompts appear to be copy-paste of description

Build coverage check — when slides define - Builds: sections:
[PASS/SKIP] Build coverage: {N} slides have builds defined, {M} build-step images generated

SKIP if no slides define builds. PASS if all defined build steps have corresponding
images in illustrations/builds/. Report missing steps if any.
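A sketch of the coverage comparison, assuming build-step images are named {slide}-{step}.png (the naming scheme and FAIL-on-missing severity are assumptions; the source only specifies PASS/SKIP):

```python
from pathlib import Path

def check_build_coverage(slides_with_builds, builds_dir="illustrations/builds"):
    """Check that every defined build step has a generated image.

    slides_with_builds: dict of slide number -> list of build-step names.
    """
    if not slides_with_builds:
        return "[SKIP] Build coverage: no slides define builds"
    existing = {p.stem for p in Path(builds_dir).glob("*.png")}
    missing = [f"{slide}-{step}"
               for slide, steps in slides_with_builds.items()
               for step in steps
               if f"{slide}-{step}" not in existing]
    total = sum(len(steps) for steps in slides_with_builds.values())
    status = "PASS" if not missing else "FAIL"
    report = (f"[{status}] Build coverage: {len(slides_with_builds)} slides "
              f"have builds, {total - len(missing)}/{total} build-step images generated")
    if missing:
        report += f"; missing: {missing}"
    return report
```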
Prompt quality anti-patterns — flag prompts likely to produce poor results:
[WARN] Prompt anti-patterns: {N} prompts with potential issues

Detection rules (check all edit/fix/generation prompts):
Always runs. This is the convergent-thinking cut pass that asks of every section in the outline: does this directly support the Big Idea? Per the "Murder Your Darlings — The Pre-Delivery Cut Pass" subsection in patterns/prepare/crucible.md.
For each top-level section in the outline:
Pay particular attention to sections marked [CALLBACK: master-story reference], sections built around a personal anecdote that doesn't tie back to the thesis, and sections containing data-rich material that took significant effort to gather. These are the most-loved-by-author and most-likely-to-be-darlings.

[PASS/WARN/FAIL] Big Idea alignment: {N}/{total} sections traceable to Big Idea components
[WARN] Candidate darlings: {section names} — flagged for review

The pass produces flags, not automatic deletions. The author makes the actual cut decisions; the guardrail surfaces the candidates.
Always runs. Per patterns/build/sparkline.md (the "Three Contrast Types" subsection) and Duarte's kairos principle, the analytical/emotional ratio of the outline must match the audience type.
Categorize each content section as analytical (data, logic, argument, technical detail) or emotional (story, image, anecdote, evocative language). Compare the ratio against the audience profile from the spec:
[PASS/WARN] Emotion balance: {N}% analytical / {M}% emotional vs. {target}% target for {audience type}

WARN if the ratio is more than 15 percentage points off the target in either direction. The check is descriptive, not prescriptive — the author decides whether the imbalance is intentional (some talks deliberately skew). Surface the imbalance so it's a conscious choice.
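Once sections are categorized, the comparison reduces to a percentage-point delta; a sketch (the list-of-labels input shape is an assumption):

```python
def check_emotion_balance(section_labels, target_analytical_pct):
    """Compare the outline's analytical/emotional split to the audience target.

    section_labels: one "analytical" or "emotional" label per content
    section. WARN when more than 15 percentage points off target.
    """
    analytical = section_labels.count("analytical")
    pct = 100.0 * analytical / len(section_labels)
    status = "WARN" if abs(pct - target_analytical_pct) > 15 else "PASS"
    return (f"[{status}] Emotion balance: {pct:.0f}% analytical / "
            f"{100 - pct:.0f}% emotional vs. {target_analytical_pct}% target")
```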
Runs before final spec lock for high-stakes talks (keynotes, sales pitches, fundraising presentations, executive briefings). Per the "Screening with Critics — Beyond Copyediting" subsection in patterns/build/peer-review.md.
Has the author scheduled (or executed) a formal screening session with the following properties?
[PASS/WARN/SKIP] Screening: {scheduled / completed / not applicable for this stakes level}

SKIP for low-stakes talks (internal demos, small-group presentations, tutorial sessions). For everything else, WARN if no screening is scheduled or completed.
Use this template after each check:
GUARDRAIL CHECK — {talk title} — {date}
================================================
[PASS] Slide budget: {actual}/{max} for {duration}-min slot
[PASS/WARN/FAIL] Act 1 ratio: {%} (limit: {max}%)
[PASS/FAIL] Branding: {status}
[PASS/FAIL] Profanity: {register} applied, {on-slide count} on-slide
[PASS/FAIL] Data attribution: {N} slides checked, {M} issues
[PASS/FAIL] Time-sensitive: {count} items
[PASS/FAIL] Closing: summary={y/n} CTA={y/n} social={y/n}
[PASS/FAIL] Cut lines: {present/missing}
[INFO] Anti-patterns: {any flags from recurring_issues}
[RECURRING/CONTEXTUAL] Presentation Patterns: {taxonomy-based antipattern flags}
[PASS/FAIL/SKIP] Illustrations: {coverage ratio} | {format tags} | {prompt quality}
[PASS/SKIP] Builds: {N} defined, {M} images generated
[WARN] Prompt anti-patterns: {N} issues found
[PASS/WARN/FAIL] Big Idea alignment: {N}/{total} sections traceable to Big Idea
[PASS/WARN] Emotion balance: {N}% analytical / {M}% emotional vs. {target}%
[PASS/WARN/SKIP] Screening with critics: {scheduled / completed / not applicable}
================================================

The Illustrations, Builds, and Prompt anti-patterns lines show [SKIP] when no
illustration strategy is defined.