
jbaruch/speaker-toolkit

Two-skill presentation system: analyze your speaking style into a rhetoric knowledge vault, then create new presentations that match your documented patterns. Includes an 88-entry Presentation Patterns taxonomy for scoring, brainstorming, and go-live preparation.


skills/presentation-creator/references/patterns/deliver/red-yellow-green.md

id: red-yellow-green
name: Red Yellow Green
type: pattern
part: deliver
phase_relevance: publishing
vault_dimensions: 4
detection_signals: audience feedback mechanism, exit polling, colored card voting
related_patterns: crucible, know-your-audience
inverse_of:
difficulty: foundational
observable: No

Red Yellow Green

Summary

Use colored cards near the room entrance for instant audience feedback — attendees drop a card in a bucket on the way out. Green means great, yellow means okay, red means poor. Simple, immediate, and honest.

The Pattern in Detail

Most conference feedback systems are broken. Online surveys have abysmal completion rates. Written evaluations are filled out hours later when memories have faded and the emotional response has cooled. The Red Yellow Green pattern provides an elegantly simple alternative: place three stacks of colored cards (red, yellow, green) near the room entrance and a bucket or box by the exit. As attendees leave, they drop one card — green for "great talk," yellow for "decent but room for improvement," red for "this did not work for me." No forms, no logins, no writing required.

The simplicity of the system is its greatest strength. The barrier to participation is almost zero — picking up a card and dropping it in a bucket takes less than two seconds. This means you get feedback from a much higher percentage of the audience than any other method. And because the response is physical and immediate, it captures the audience's genuine in-the-moment reaction rather than a post-hoc rationalization. People vote with their gut, which is often more honest than what they would write in a considered review.

Tally the cards immediately after the session: count each color, calculate the ratios, and you have an instant, quantitative read on how the audience received your talk. Over time, tracking these ratios across venues and audiences reveals patterns: maybe your talk works better for smaller audiences, maybe the afternoon slot hurts your numbers, or maybe a specific section consistently correlates with yellow and red cards.
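The tallying and cross-delivery tracking can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the toolkit; the `Tally` class and the venue names are invented for the example:

```python
from dataclasses import dataclass


@dataclass
class Tally:
    """Card counts for one delivery of a talk (illustrative structure)."""
    venue: str
    green: int
    yellow: int
    red: int

    @property
    def total(self) -> int:
        return self.green + self.yellow + self.red

    def ratios(self) -> dict:
        """Per-color fractions, so deliveries of different sizes compare fairly."""
        return {
            "green": self.green / self.total,
            "yellow": self.yellow / self.total,
            "red": self.red / self.total,
        }


# Tracking ratios rather than raw counts makes trends across venues visible.
history = [
    Tally("Conference A", green=41, yellow=12, red=3),
    Tally("Conference B", green=55, yellow=30, red=15),
]
for t in history:
    print(t.venue, {color: round(r, 2) for color, r in t.ratios().items()})
```

Comparing the green ratio across entries in `history` is enough to spot whether a change to the talk moved the needle.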

The system also provides psychological safety for the audience. Dropping a red card is anonymous and takes two seconds; writing a negative review requires composing criticism and attaching your name (or at least your identity to the feedback platform). Many people who would never write a negative review will drop a red card, giving you more honest feedback. This is valuable — the criticism you do not hear is the criticism you cannot act on.

One refinement is to add a small comment card for anyone who wants to provide written feedback alongside their color card. This captures the detail that the color system lacks without requiring it of everyone. You might also experiment with a fourth color or a numbered scale, but simplicity is the pattern's core virtue — resist the temptation to complicate it.

When to Use / When to Avoid

Use this pattern whenever you want honest, high-participation feedback and have logistical control over the room setup (you need to place cards and a collection bucket). It works best at conferences and meetups where you will present the same talk multiple times and want to track improvement. Avoid it in very small settings (under ten people) where anonymity is impossible and direct conversation is better, or in venues where you cannot control the room setup.

Detection Heuristics

  • Physical feedback mechanism visible near the room exit
  • Speaker mentions the feedback system at the start or end of the talk
  • Evidence of systematic feedback collection and tracking
  • Speaker references feedback trends across multiple deliveries

Scoring Criteria

  • Strong signal (2 pts): Structured, low-barrier feedback mechanism in place, evidence that feedback is collected and acted upon across deliveries
  • Moderate signal (1 pt): Some feedback collection but using higher-barrier methods (online surveys, written forms) with lower participation rates
  • Absent (0 pts): No feedback mechanism beyond whatever the conference provides by default
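The three-tier rubric above maps mechanically to points. A minimal sketch, assuming boolean signals derived from the detection heuristics (the flag names are hypothetical, not part of the toolkit's analyzer):

```python
def score_feedback_pattern(low_barrier: bool,
                           tracked_across_deliveries: bool,
                           any_collection: bool) -> int:
    """Map detected signals onto the 0/1/2 scoring criteria.

    low_barrier: a structured, near-zero-effort mechanism (e.g. colored cards)
    tracked_across_deliveries: evidence the speaker acts on trends over time
    any_collection: any feedback gathering at all, even high-barrier surveys
    """
    if low_barrier and tracked_across_deliveries:
        return 2  # strong signal
    if any_collection:
        return 1  # moderate signal: higher-barrier methods only
    return 0      # absent: nothing beyond the conference default


print(score_feedback_pattern(True, True, True))    # strong signal
print(score_feedback_pattern(False, False, True))  # moderate signal
```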

Relationship to Vault Dimensions

This pattern maps to Vault Dimension 4 (Audience Engagement). While it operates at the end of the talk rather than during it, the feedback loop it creates ultimately improves engagement in future deliveries. The audience also feels valued when they see a speaker actively seeking their input.

Combinatorics

Red Yellow Green feeds directly into Crucible (feedback drives iterative improvement), supports Know Your Audience (feedback patterns reveal audience preferences), and pairs with Seeding Satisfaction (positive pre-talk interactions often correlate with more generous post-talk feedback). It can also inform Emotional State adjustments for future deliveries at similar venues.

Install with Tessl CLI

npx tessl i jbaruch/speaker-toolkit@0.5.1
