
pre-mortem

Identify what could go wrong before launch by imagining failure and working backward. Use after a spec is approved but before you ship — surfaces risks the team isn't talking about.

Overall score: 75

Quality: 70% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./product-skills/skills/pre-mortem/SKILL.md
```

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a clear and well-defined 'when' clause that specifies the exact workflow moment to use this skill, which is a notable strength. However, it lacks specificity in the concrete actions performed (e.g., does it produce a ranked risk list, a mitigation plan, a structured pre-mortem document?) and could benefit from more natural trigger terms that users would actually type. The description also uses second person ('before you ship') which slightly detracts from the expected third-person voice.

Suggestions

Add more specific concrete actions, e.g., 'Generates a ranked list of risks with likelihood, impact, and mitigation strategies by imagining failure scenarios and working backward.'

Include natural trigger terms users would say, such as 'pre-mortem', 'risk assessment', 'failure modes', 'risk analysis', or 'what could go wrong'.
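Applying both suggestions, the frontmatter description might read as follows. This is a sketch only, assuming the conventional SKILL.md `name`/`description` frontmatter keys, not the maintainer's actual wording:

```yaml
---
name: pre-mortem
description: >
  Generates a ranked list of launch risks with likelihood, impact,
  and mitigation strategies by imagining the launch failed and
  working backward. Use after a spec is approved but before shipping.
  Relevant terms: pre-mortem, risk assessment, risk analysis,
  failure modes, what could go wrong.
---
```

Note the third-person voice ("Generates…") and the explicit trigger vocabulary, addressing both suggestions at once.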

Dimension / Reasoning / Score

Specificity

The description names a domain (pre-launch risk analysis) and describes the core action ('imagining failure and working backward'), but doesn't list multiple specific concrete actions like generating risk matrices, categorizing risks, or producing mitigation plans.

2 / 3

Completeness

Clearly answers both 'what' (identify what could go wrong by imagining failure and working backward) and 'when' (after a spec is approved but before you ship), with an explicit temporal trigger for usage.

3 / 3

Trigger Term Quality

Includes some relevant terms like 'failure', 'risks', 'launch', and 'ship', but misses common natural variations users might say such as 'pre-mortem', 'risk assessment', 'what could go wrong', 'risk analysis', or 'failure modes'.

2 / 3

Distinctiveness Conflict Risk

The pre-mortem / failure analysis niche is fairly specific, but the description could overlap with general risk assessment, code review, or QA-related skills since 'surfaces risks' and 'before launch' are somewhat broad.

2 / 3

Total: 9 / 12 (Passed)

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured prompt-template skill with a clear, actionable framework (Tigers/Paper Tigers/Elephants) that provides genuine value beyond what Claude would generate unprompted. Its main weaknesses are minor verbosity in the introduction and a lack of explicit validation/iteration steps — there's no guidance on how to assess whether the pre-mortem output is thorough enough or what to do if it surfaces only obvious risks beyond the vague tip to 'push harder.'

Suggestions

Add a brief validation step or feedback loop, e.g., 'If the output contains fewer than 2 Elephants, re-prompt with: What assumptions is the team treating as facts? What would a skeptical outsider question?'

Trim the introductory paragraph — the one-line bold description ('Imagine your launch failed. Now figure out why.') is sufficient context; the follow-up paragraph restates it.
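The first suggestion could be folded into the skill as a short checkpoint after the prompt template. A hypothetical sketch, in which the section name and exact wording are assumptions rather than the skill's actual content:

```markdown
## Validate the output

If the result lists fewer than 2 Elephants, re-prompt with:

> What assumptions is the team treating as facts?
> What would a skeptical outsider question?

Repeat until at least one surfaced risk is not already on the
team's radar, then assign each Tiger a mitigation, owner, and deadline.
```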

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient but includes some unnecessary framing ('The spec is done, the team is building, and everyone's optimistic') and explanatory text that Claude doesn't need. The prompt template itself has some redundancy in explaining concepts like what Tigers/Paper Tigers/Elephants are — though since these are domain-specific categorizations, most of that earns its place. The intro paragraph could be trimmed.

2 / 3

Actionability

The prompt template is fully copy-paste ready with a clear variable ($ARGUMENTS), specific categorization framework (Tigers/Paper Tigers/Elephants), concrete urgency levels with definitions, and explicit deliverables for each Tiger (mitigation action, owner role, deadline). This is an instruction-only skill with highly specific, actionable guidance.

3 / 3

Workflow Clarity

The skill describes a single-step process (run the prompt with context), but for a risk analysis workflow there's no validation checkpoint — no step to verify completeness of the analysis, no feedback loop to push for deeper Elephants if initial output is shallow, and no explicit sequence for what to do with the output (e.g., create tickets, share with team, re-run with updated context). The Tips section partially compensates but doesn't constitute a structured workflow.

2 / 3

Progressive Disclosure

For a simple, single-purpose skill with no bundle files, the content is well-organized with clear sections (intro, prompt template, tips). The cross-reference to 'craft-experiment-design' is a clean one-level-deep pointer. The length is appropriate for inline content with no need for separate files.

3 / 3

Total: 10 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
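Per the warning's own suggestion, the fix is to remove the unknown key or nest it under `metadata`. A hypothetical before/after, where `team` is an invented placeholder since the actual offending key is not shown:

```yaml
# Before: unknown top-level key triggers the warning
team: growth          # hypothetical placeholder key

# After: moved under the metadata block
metadata:
  team: growth
```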

Total: 10 / 11 (Passed)

Repository: amplitude/builder-skills (Reviewed)

