RAID log methodology with decision extraction from emails and meeting notes. Use when asked to identify risks, log assumptions, track issues, extract decisions from correspondence, create or update a RAID log, escalate a risk to an issue, assess project risks, validate or challenge assumptions, or capture scope-relevant decisions. Also triggers when the user pastes email chains and asks what decisions were made, or needs to find buried decisions and untested assumptions in correspondence. Trigger on: 'risk report', 'RAID log', 'what are the risks', 'what decisions were made', 'update the risk register', 'what assumptions are we making', 'what could go wrong', 'flag any issues', 'extract decisions from these emails'.
Overall score: 71

- Quality: 63% (Does it follow best practices?)
- Impact: — (no eval scenarios have been run)
- Validation: Passed, no known issues

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/risk-and-issues-manager/SKILL.md
```

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that thoroughly covers capabilities, trigger conditions, and natural user language. It lists specific concrete actions, provides an explicit 'Use when...' clause with diverse scenarios, and includes a dedicated 'Trigger on:' section with natural phrases. The description is well-structured, uses third person voice throughout, and carves out a distinct niche combining RAID methodology with decision extraction from correspondence.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: identify risks, log assumptions, track issues, extract decisions from correspondence, create/update RAID log, escalate risk to issue, assess project risks, validate/challenge assumptions, capture scope-relevant decisions. | 3 / 3 |
| Completeness | Clearly answers both 'what' (RAID log methodology with decision extraction from emails and meeting notes) and 'when' (explicit 'Use when...' clause with detailed trigger scenarios, plus a 'Trigger on:' list of specific phrases). Both dimensions are thoroughly covered. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms, including 'risk report', 'RAID log', 'what are the risks', 'what decisions were made', 'update the risk register', 'what assumptions are we making', 'what could go wrong', 'flag any issues', and 'extract decisions from these emails'. These are highly natural phrases users would actually say. | 3 / 3 |
| Distinctiveness Conflict Risk | Occupies a clear niche around RAID log methodology and decision extraction from correspondence. The combination of project risk management terminology and email/meeting-note analysis is highly distinctive and unlikely to conflict with generic document processing or general project management skills. | 3 / 3 |
| **Total** | | **12 / 12 Passed** |
Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill contains genuinely valuable domain knowledge about RAID log management and decision extraction from correspondence, with well-structured field definitions for each RAID component. However, it is severely over-length, spending thousands of tokens explaining concepts and rationale that Claude already understands, while lacking concrete worked examples of the core decision-extraction function. The content would benefit enormously from being split into a concise SKILL.md with references to detailed methodology and field-structure files.
Suggestions
- Reduce the main skill to ~200-300 lines covering the step-by-step process, field structures, and output format. Move the RAID methodology section (why logs fail, risk identification categories, assumption lifecycle, decision philosophy) to a separate METHODOLOGY.md reference file.
- Add a concrete worked example showing an actual email snippet being processed into specific RAID log entries; this is the highest-value function but has zero examples of input-to-output transformation.
- Add explicit validation checkpoints: after extracting decisions/risks/assumptions, present a summary to the user for confirmation before producing the full IRAD report, especially for implied decisions.
- Move format-specific guidance (docx, SharePoint, Excel) to a separate FORMATS.md file, keeping only a brief summary with links in the main skill.
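To illustrate the kind of worked example the second suggestion calls for, here is a minimal sketch of an email snippet mapped to RAID entries. The snippet, field names, and `RaidEntry` structure are all invented for illustration; they are not taken from the skill itself:

```python
from dataclasses import dataclass

# Hypothetical email snippet (invented for illustration)
email = """From: Priya
Given the vendor delay, we agreed to push UAT to March 10.
We're assuming the API contract is frozen, but nobody has confirmed that."""

@dataclass
class RaidEntry:
    kind: str     # Risk, Assumption, Issue, or Decision
    summary: str
    source: str
    status: str

# The input-to-output transformation an agent would be expected to produce
entries = [
    RaidEntry("Decision", "UAT moved to March 10 due to vendor delay",
              "Email from Priya", "Implied; needs confirmation"),
    RaidEntry("Assumption", "API contract is frozen",
              "Email from Priya", "Unvalidated"),
]

for e in entries:
    print(f"[{e.kind}] {e.summary} ({e.status})")
```

Even one or two pairs like this in the skill would pin down what "extract decisions" means in practice, including how implied decisions are flagged.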
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This skill is extremely verbose at ~4,500+ words. It extensively explains concepts Claude already understands (what RAID logs are, why they fail, what assumptions are, the difference between risks and issues), includes lengthy philosophical justifications, and repeats guidance multiple times. The 'Why most RAID logs fail' section, the extended discussion of assumptions, and the detailed explanation of decision types are all knowledge Claude possesses. Much of this reads like a training manual for a human PM rather than concise instructions for an AI. | 1 / 3 |
| Actionability | The skill provides detailed field structures for each RAID component (Issues, Risks, Assumptions, Decisions tables), which are concrete and actionable. However, there is no executable code, no concrete example of processing an actual email chain into RAID entries, and no sample input/output demonstrating the decision extraction process. The step-by-step process is more descriptive than prescriptive, telling Claude what to look for rather than showing a worked example. | 2 / 3 |
| Workflow Clarity | The 4-step process (determine mode, process input, produce output, write summary) provides a clear sequence, and the risk-to-issue escalation has explicit steps. However, there are no validation checkpoints: no step says 'verify extracted decisions with user before finalizing,' no feedback loop for confirming implied decisions were correctly identified, and no explicit verification that the output matches the input's content. For a skill that processes unstructured text into structured data, validating extraction accuracy is critical. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The RAID methodology section, risk identification framework, assumption lifecycle discussion, and format-specific guidance could all be separate reference documents. Everything is inline, making the skill extremely long and difficult to navigate. There are cross-skill handoff mentions but no actual file references for detailed content that should be split out. | 1 / 3 |
| **Total** | | **6 / 12 Passed** |
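The missing validation checkpoint flagged under Workflow Clarity can be as simple as a confirm-before-finalize gate. A minimal sketch follows; the function name, prompt wording, and entry strings are assumptions for illustration, not part of the skill:

```python
def confirm_entries(entries, ask=input):
    """Present each extracted RAID entry for user confirmation before
    the final report is written; drop anything the user rejects."""
    confirmed = []
    for entry in entries:
        answer = ask(f"Keep this entry? {entry} [y/n] ").strip().lower()
        if answer.startswith("y"):
            confirmed.append(entry)
    return confirmed

# Example run with scripted answers to show the flow
entries = ["Decision: UAT moved to March 10", "Assumption: API frozen"]
answers = iter(["y", "n"])
kept = confirm_entries(entries, ask=lambda prompt: next(answers))
print(kept)  # only the confirmed entry survives
```

A checkpoint like this matters most for implied decisions, where the extraction is an inference rather than a quote.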
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure: no warnings or errors.
Commit: 1eb58a1