
adverse-event-narrative

1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work. 2. Validate that the request matches the documented scope and stop early if the task would require unsupported as.


Quality: 17% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/adverse-event-narrative/SKILL.md"

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like generic process instructions rather than a skill description. It lacks any concrete actions, domain specificity, trigger terms, or 'use when' guidance. The text also appears truncated ('unsupported as' cuts off), further reducing its utility for skill selection.

Suggestions

Rewrite to specify the actual domain and concrete actions this skill performs (e.g., 'Validates API requests against OpenAPI schemas' rather than 'validate that the request matches the documented scope').

Add an explicit 'Use when...' clause with natural trigger terms that describe when Claude should select this skill over others.

Remove the numbered process steps and replace with a concise capability summary followed by trigger conditions, following the pattern: '[What it does]. Use when [trigger conditions].'
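Applied to this skill, a frontmatter description following the suggested '[What it does]. Use when [trigger conditions].' pattern might look like the sketch below. The wording is illustrative, drawn from capabilities the review mentions (CIOMS I structure, WHO-UMC causality assessment), not the skill's actual metadata:

```yaml
---
name: adverse-event-narrative
description: >
  Generates regulatory-ready adverse event narratives following CIOMS I
  structure, including patient timeline reconstruction and WHO-UMC
  causality assessment. Use when drafting or reviewing pharmacovigilance
  case narratives, individual case safety reports, or safety report
  narrative sections.
---
```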

Dimension / Reasoning / Score

Specificity

The description contains no concrete actions or domain-specific capabilities. Phrases like 'confirm the user objective' and 'validate that the request matches the documented scope' are abstract process steps, not specific skill actions.

1 / 3

Completeness

The description fails to answer 'what does this do' in any meaningful way and completely lacks a 'when should Claude use it' clause. It reads like internal process instructions rather than a skill description.

1 / 3

Trigger Term Quality

There are no natural keywords a user would say. Terms like 'non-negotiable constraints', 'documented scope', and 'unsupported as' (which appears truncated) are not phrases users would use when seeking help with a task.

1 / 3

Distinctiveness Conflict Risk

The description is extremely generic — confirming objectives and validating scope could apply to virtually any skill. It provides no domain, file type, or task-specific information to distinguish it from other skills.

1 / 3

Total: 4 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill covers a specialized regulatory domain (adverse event narrative generation) with genuinely useful domain-specific content like CIOMS I structure, causality assessment criteria, and quality checklists. However, it suffers from severe verbosity with repeated boilerplate, redundant sections, and extensive inline content that should be in reference files. The generic scaffolding sections (Input Validation, Error Handling, Response Template) consume significant tokens without adding skill-specific value.

Suggestions

Remove the repeated boilerplate text — the scope description appears verbatim in at least 3 sections (When to Use, Key Features, Workflow). Consolidate into a single clear statement.

Move the detailed CIOMS I section descriptions, WHO-UMC categories, and multi-format output examples into reference files (e.g., references/cioms_i_guidelines.md) and keep only a brief summary with links in the main skill file.

Remove or drastically compress the generic sections (Input Validation, Error Handling, Response Template, Output Requirements) which contain no skill-specific information and explain things Claude already knows.

Ensure code examples are either truly executable with complete imports and setup, or clearly labeled as API illustrations showing the intended interface pattern.
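As a standard for "truly executable," a skill's code examples could follow a self-contained shape like the sketch below. The `AdverseEvent` record and `build_timeline` helper are hypothetical, illustrating complete imports and setup rather than the skill's actual `NarrativeGenerator` API:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AdverseEvent:
    """Hypothetical event record; the skill's real schema may differ."""
    onset: date
    term: str


def build_timeline(events: list[AdverseEvent]) -> list[str]:
    """Return chronologically ordered 'YYYY-MM-DD: term' lines."""
    ordered = sorted(events, key=lambda e: e.onset)
    return [f"{e.onset.isoformat()}: {e.term}" for e in ordered]


events = [
    AdverseEvent(date(2024, 3, 5), "nausea"),
    AdverseEvent(date(2024, 3, 1), "headache"),
]
print(build_timeline(events))  # → ['2024-03-01: headache', '2024-03-05: nausea']
```

An example at this level of completeness can be run as-is to verify it, which is the bar the suggestion above sets; anything short of that should be labeled as an API illustration.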

Dimension / Reasoning / Score

Conciseness

The skill is extremely verbose with significant redundancy. The description/scope sentences are repeated verbatim in 'When to Use', 'Key Features', and 'Workflow' sections. There's extensive explanation of CIOMS guidelines, WHO-UMC categories, and regulatory concepts that Claude already knows. The generic boilerplate sections (Error Handling, Input Validation, Response Template, Output Requirements) add substantial token cost with minimal skill-specific value.

1 / 3

Actionability

The skill provides Python code examples with specific imports and method calls (NarrativeGenerator, timeline analysis, causality assessment), but these appear to be pseudocode referencing modules that may not actually exist as shown. The CLI commands (py_compile, --help) are concrete but basic. The quality checklist and common pitfalls sections provide useful specific guidance, but the core code examples lack completeness (missing imports, incomplete setup).

2 / 3

Workflow Clarity

The workflow section provides a 5-step sequence with validation (step 2) and fallback handling (step 5), and the quality checklist provides pre/post validation checkpoints. However, the 'Example run plan' and 'Workflow' sections are somewhat redundant and the validation steps lack concrete commands for checking narrative quality. The critical medical review step is mentioned but not integrated into the workflow sequence itself.

2 / 3

Progressive Disclosure

The skill references external files well (references/ directory, scripts/ directory) and has clear section headers. However, the inline content is far too long — the detailed CIOMS section descriptions, WHO-UMC categories, and multi-format output examples should be in separate reference files rather than inline. The 'Implementation Details' section says 'See ## Workflow above' which is a confusing self-reference.

2 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed
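Since the warning itself suggests "removing or moving to metadata," one way to clear it is to relocate the unrecognized key under the frontmatter's metadata block. The key name `domain` here is hypothetical, standing in for whatever unknown key the validator flagged:

```yaml
---
name: adverse-event-narrative
metadata:
  domain: pharmacovigilance  # formerly a top-level unknown key
---
```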

Repository: aipoch/medical-research-skills (Reviewed)
