1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work. 2. Validate that the request matches the documented scope and stop early if the task would require unsupported as.
Does it follow best practices? Passed (no known issues).

Impact: Pending (no eval scenarios have been run).
Optimize this skill with Tessl:

`npx tessl skill review --optimize "./scientific-skills/Academic Writing/ehr-semantic-compressor/SKILL.md"`

Quality
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description reads like generic meta-instructions for how to handle any request, not a skill description. It lacks any concrete domain, specific actions, trigger terms, or 'use when' guidance. Additionally, the text appears truncated ('unsupported as'), further reducing its utility.
Suggestions
Identify the specific domain and concrete actions this skill performs (e.g., 'Validates API request payloads against OpenAPI schemas') and replace the abstract process language.
Add an explicit 'Use when...' clause with natural trigger terms a user would say, such as specific file types, task names, or tool references.
Fix the truncated text ('unsupported as') and ensure the description clearly distinguishes this skill from others by naming its unique scope or niche.
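Applied together, these suggestions might produce a description along the lines of the sketch below. The clinical domain and trigger phrases are inferred from the skill's name (`ehr-semantic-compressor`) and are assumptions for illustration, not confirmed scope:

```yaml
---
name: ehr-semantic-compressor
# Hypothetical rewrite: concrete actions, domain anchors, and a 'Use when' clause.
description: >
  Compresses electronic health record (EHR) notes into structured JSON
  summaries, extracting requested sections such as medications and
  allergies. Use when the user asks to "summarize a patient chart",
  "compress EHR notes", or provides clinical note files for semantic
  compression.
---
```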
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions or domain-specific capabilities. Phrases like 'confirm the user objective' and 'validate that the request matches the documented scope' are abstract process steps, not specific skill actions. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' in any concrete way and completely lacks a 'when should Claude use it' clause. It reads like generic process instructions rather than a skill description. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. Terms like 'non-negotiable constraints', 'documented scope', and 'unsupported as' (which appears truncated) are internal jargon, not user-facing trigger terms. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic: confirming objectives and validating scope could apply to virtually any skill. It provides no domain, file type, or task-specific anchors to distinguish it from other skills. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is excessively verbose and repetitive, with the scope description copied verbatim into multiple sections and large amounts of boilerplate (security checklists, risk tables, lifecycle status, evaluation criteria) that add little actionable value for Claude. While it does provide some concrete elements—CLI commands, JSON schemas, parameter tables, and external file references—the core workflow is generic and template-like rather than tailored to EHR semantic compression. The document would benefit greatly from cutting its length by 50-60% and focusing on the specific clinical summarization logic.
Suggestions
Remove all duplicated content: the scope description appears verbatim in 'When to Use', 'Key Features', and 'Workflow'—state it once and reference it.
Cut boilerplate sections that don't add skill-specific value: Risk Assessment table, Security Checklist, Evaluation Criteria, Lifecycle Status, and Response Template are generic and waste tokens.
Add explicit validation checkpoints in the workflow, e.g., 'Verify the output JSON contains all requested extract_sections before returning' and 'Check that summary_length is within max_length bounds'.
Remove circular cross-references ('See ## Usage above', 'See ## Workflow above') and reorganize so the document flows linearly: Quick Start → Usage → Workflow → References.
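The validation-checkpoint suggestion could be implemented as a small post-run check. A minimal sketch follows; the field names (`extract_sections`-style section keys, `summary`, `max_length`) echo the parameter names quoted in this review and are otherwise assumptions about the script's output schema:

```python
import json


def validate_output(output_json: str, requested_sections: list[str],
                    max_length: int) -> list[str]:
    """Return a list of problems; an empty list means the output passes."""
    problems = []
    data = json.loads(output_json)
    # Checkpoint 1: every requested extract_section must appear in the output.
    for section in requested_sections:
        if section not in data:
            problems.append(f"missing section: {section}")
    # Checkpoint 2: the summary must respect the max_length (word-count) bound.
    words = len(data.get("summary", "").split())
    if words > max_length:
        problems.append(f"summary exceeds max_length ({words} > {max_length})")
    return problems


# Example: a compliant output produces no problems.
out = json.dumps({"medications": [], "allergies": [],
                  "summary": "Stable on current regimen."})
print(validate_output(out, ["medications", "allergies"], max_length=50))  # → []
```

Running this check between the summarization step and the final response turns the generic workflow into one with an explicit verify-before-return gate.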
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The description/scope statement is repeated verbatim in 'When to Use', 'Key Features', and 'Workflow'. Sections like 'Risk Assessment', 'Security Checklist', 'Evaluation Criteria', 'Lifecycle Status', and 'Response Template' are boilerplate that add little skill-specific value. Many sections reference each other circularly ('See ## Usage above', 'See ## Workflow above'). Claude already knows how to structure responses and handle errors. | 1 / 3 |
| Actionability | There are concrete commands (python scripts/main.py --input, --help), input/output JSON schemas, and a parameter table, which is useful. However, the actual core logic is entirely delegated to scripts/main.py with no visibility into what it does or how to troubleshoot it. The 'Example run plan' is generic and not specific to EHR compression. Much of the content describes rather than instructs. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a 5-step sequence with a fallback path, and the Example Usage section has a 4-step run plan. However, there are no explicit validation checkpoints between steps (e.g., checking output validity after summarization). The workflow is also generic: it reads like a template rather than being tailored to EHR semantic compression. Missing feedback loops for a medical data processing task caps this at 2. | 2 / 3 |
| Progressive Disclosure | References to external files (references/requirements.txt, references/guidelines.md, sample_input.json, sample_output.json) are present and clearly listed. However, the SKILL.md itself is a monolithic wall of text with many sections that could be split out (Security Checklist, Risk Assessment, Evaluation Criteria, Response Template). The inline content is far too long for an overview document, and circular cross-references ('See ## Usage above') indicate poor organization. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 (Passed) | |
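The one warning, `frontmatter_unknown_keys`, is typically resolved by nesting non-standard keys under `metadata` rather than leaving them at the top level. The key names below are invented purely for illustration; the actual offending keys are not named in this report:

```yaml
---
name: ehr-semantic-compressor
description: ...
# Before: hypothetical unknown top-level keys (e.g. author, version)
# would trigger frontmatter_unknown_keys. After: nest them instead.
metadata:
  author: example-maintainer
  version: "0.1.0"
---
```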