
scenario-generator

Generates concrete scenarios from a requirement — happy paths, edge cases, and error conditions — expressed as Given/When/Then or equivalent structured narratives. Use when turning a requirement into acceptance tests, when exploring what could go wrong, or when the requirement is abstract and needs grounding.

Install with Tessl CLI

npx tessl i github:santosomar/general-secure-coding-agent-skills --skill scenario-generator
Scenario Generator

A requirement says "users can reset their password." Scenarios say: this user, in this state, does this, and this happens. Scenarios are requirements made concrete enough to execute.

Scenario structure — Given/When/Then

Given  <initial state / preconditions>
When   <action the actor takes>
Then   <observable outcome>

Every scenario is one path through the system. A requirement usually generates 3–10 scenarios: one happy path, several edges, several errors.

Systematic scenario enumeration

For each requirement, walk the dimensions:

| Dimension | Questions |
| --- | --- |
| Actor variants | Logged in? Anonymous? Admin? Suspended? |
| Input boundaries | Empty? Max length? Just over max? Unicode? Null? |
| State variants | First time? Repeat? Concurrent? After a failure? |
| Timing | Immediately? After timeout? During maintenance? |
| Failure injection | DB down? Third-party 500? Network partition mid-flow? |
| Sequence | Out of order? Duplicate? Replayed? |

Cross the dimensions that matter. Not all N×M — most combinations are uninteresting. Pick the ones where the outcome differs.
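The dimension walk can be sketched mechanically. A minimal Python sketch, with made-up dimension values for the password-reset example; the "at most one deviation from the happy path" filter is a toy stand-in for the real judgment call of "does the outcome differ?":

```python
from itertools import product

# Hypothetical dimension values for the password-reset requirement.
dimensions = {
    "actor": ["known user", "unknown email", "suspended user"],
    "timing": ["within 1 hour", "at exactly T+60min", "after expiry"],
    "sequence": ["first use", "reuse of spent link"],
}

# Full cross product: most combinations are uninteresting.
all_combos = list(product(*dimensions.values()))

def outcome_differs(combo):
    """Keep the happy-path baseline plus combinations that deviate from it
    on exactly one axis -- varying two dimensions at once rarely changes
    the outcome beyond what the single-deviation scenarios already cover."""
    baseline = ("known user", "within 1 hour", "first use")
    deviations = sum(1 for a, b in zip(combo, baseline) if a != b)
    return deviations <= 1

scenarios = [c for c in all_combos if outcome_differs(c)]
print(len(all_combos), "combinations ->", len(scenarios), "scenarios worth writing")
# prints: 18 combinations -> 6 scenarios worth writing
```

Note how fast the cross product grows relative to the scenarios worth keeping: 18 combinations collapse to 6 here, and the ratio only gets worse with more dimensions.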

Worked example

Requirement: "A user can reset their password via email link. The link expires after 1 hour."

Happy path:

Scenario: Successful password reset
  Given a user with email alice@example.com and a known password
  When  Alice requests a password reset for alice@example.com
  Then  a reset email is sent to alice@example.com
  And   the email contains a link valid for 1 hour
  When  Alice clicks the link within 1 hour
  And   submits a new password meeting the policy
  Then  Alice can log in with the new password
  And   cannot log in with the old password

Edge — timing boundary:

Scenario: Link used at exactly T+60min
  Given a reset link issued at time T
  When  the link is used at exactly T + 60 minutes
  Then  <ASK: is the boundary inclusive or exclusive? — stakeholder decision>

Error — expired:

Scenario: Expired link rejected
  Given a reset link issued at time T
  When  the link is used at T + 61 minutes
  Then  an "expired link" error is shown
  And   no password change occurs
  And   the user can request a new link

Error — wrong actor:

Scenario: Reset requested for nonexistent email
  Given no user with email eve@nowhere.com exists
  When  a reset is requested for eve@nowhere.com
  Then  the response is identical to the success case (no enumeration leak)
  And   no email is sent
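The "no enumeration leak" rule means the handler's observable response must not depend on whether the account exists. A minimal sketch, with hypothetical names (`users` stands in for the real account store):

```python
# Hypothetical handler: the response is identical whether or not the email
# exists, so an attacker cannot probe which accounts are registered.
users = {"alice@example.com"}

def request_reset(email: str) -> dict:
    if email in users:
        pass  # queue the reset email (a side effect only; never reflected in the response)
    # Same status and body either way -- no enumeration leak.
    return {"status": 200, "message": "If that account exists, a reset email was sent."}

assert request_reset("alice@example.com") == request_reset("eve@nowhere.com")
```

In a real system, timing also matters: if the existing-user branch is measurably slower, the response is not truly identical.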

Sequence — reuse:

Scenario: Link is single-use
  Given a reset link that has already been used successfully
  When  the same link is used again
  Then  an "invalid link" error is shown
  And   no password change occurs
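The scenario itself stays implementation-agnostic; as an illustration only, here is one toy way the single-use semantics could be realized, with hypothetical `issue_link`/`use_link` helpers and an in-memory token store:

```python
import secrets

# Toy in-memory token store: token -> "fresh" | "spent".
issued: dict[str, str] = {}

def issue_link(email: str) -> str:
    token = secrets.token_urlsafe(16)
    issued[token] = "fresh"
    return token

def use_link(token: str) -> str:
    if issued.get(token) != "fresh":
        return "invalid link"       # reuse or unknown token is rejected
    issued[token] = "spent"         # mark consumed before changing the password
    return "password changed"

t = issue_link("alice@example.com")
assert use_link(t) == "password changed"
assert use_link(t) == "invalid link"   # second use of the same link fails
```

Marking the token spent before applying the password change is the detail that makes reuse (and replay) fail safely.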

Concurrency:

Scenario: Two reset requests in flight
  Given a user requests reset, receiving link L1
  And   requests reset again, receiving link L2
  When  L1 is used
  Then  <ASK: does L2 stay valid? does issuing L2 invalidate L1? — stakeholder decision>

Found by this exercise: two <ASK> placeholders — boundary inclusivity and concurrent-link policy. Neither was in the original requirement. Scenarios surface gaps.

Prioritizing scenarios

Not all scenarios are worth automating:

| Priority | Scenarios | Automate? |
| --- | --- | --- |
| P0 | Happy path, security-relevant errors (enumeration, expiry) | Yes — blocking |
| P1 | Boundaries (exact expiry moment, max password length) | Yes |
| P2 | Unlikely-but-bad (concurrent links, partial failures) | Yes if cheap; manual test otherwise |
| P3 | Cosmetic (error message wording) | Spot-check |

Do not

  • Do not generate only happy paths. The happy path is one scenario. The interesting scenarios are where things vary.
  • Do not leave <ASK> items unresolved. They're requirements gaps — escalate to the stakeholder before the scenario becomes a test with a guessed answer.
  • Do not conflate scenarios with test implementation. `Given a user with email alice@example.com` is a scenario. `INSERT INTO users VALUES ...` is a test. Keep scenarios implementation-agnostic.
  • Do not generate every combination. 5 dimensions × 4 values each = 1024 scenarios. Pick the ones where the outcome differs. The rest are redundant.

Output format

## Requirement
<verbatim>

## Dimensions explored
<actor variants, input boundaries, state variants, timing, failures, sequence>

## Scenarios

### Happy path
<Given/When/Then>

### Edge: <name>
<Given/When/Then>
...

### Error: <name>
<Given/When/Then>
...

## Gaps surfaced
<ASK items — requirement decisions the scenarios need that the requirement doesn't provide>

## Coverage note
<which dimension combinations were skipped and why they're uninteresting>
Repository
santosomar/general-secure-coding-agent-skills