Generates FAQ lists from complex medical policies or protocols. Trigger when user provides medical documents, policies, or protocols and requests FAQ generation, patient education materials, or simplified explanations.
Quality — 58%: Does it follow best practices?
Impact — Pending: No eval scenarios have been run
Passed — No known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize "./scientific-skills/Academic Writing/faq-generator/SKILL.md"`

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-constructed skill description with a clear 'what' and explicit 'when' trigger clause. It occupies a distinct niche (medical FAQ generation) with good trigger term coverage. The main weakness is that the specificity of actions could be improved by listing more concrete outputs or steps beyond just 'generates FAQ lists'.
Suggestions
Add more specific concrete actions beyond FAQ generation, such as 'extracts key policy points, converts medical jargon to plain language, organizes questions by topic category' to improve specificity.
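As an illustration of that suggestion, a sharpened `description` field might read as follows (hypothetical wording, not the skill's actual frontmatter):

```yaml
# Hypothetical rewrite of the skill's description frontmatter field.
description: >-
  Generates FAQ lists from complex medical policies or protocols: extracts key
  policy points, converts medical jargon to plain language, and organizes
  questions by topic category. Trigger when the user provides medical
  documents, policies, or protocols and requests FAQ generation, patient
  education materials, or simplified explanations.
```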
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (medical policies/protocols) and one specific action (generates FAQ lists), but doesn't list multiple concrete actions beyond FAQ generation. The mention of 'patient education materials' and 'simplified explanations' hints at broader capabilities but remains somewhat vague. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generates FAQ lists from complex medical policies or protocols) and 'when' (explicit trigger clause specifying when user provides medical documents and requests FAQ generation, patient education materials, or simplified explanations). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'medical documents', 'policies', 'protocols', 'FAQ generation', 'patient education materials', 'simplified explanations'. These cover a good range of terms a user would naturally use when requesting this type of work. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining medical domain with FAQ generation specifically. The combination of 'medical policies/protocols' and 'FAQ generation' creates a clear, narrow scope that is unlikely to conflict with general document summarization or other medical skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with boilerplate content that adds little value for the specific task of generating FAQs from medical documents. The core actionable content (parameters, output format, features) is buried among generic workflow descriptions, security checklists, and lifecycle metadata. The skill would benefit enormously from being reduced to its essential elements: what the script does, how to run it, what the output looks like, and medical-domain-specific guidance.
Suggestions
Remove or drastically reduce boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Response Template, Output Requirements) that don't add FAQ-generation-specific value, and consolidate redundant workflow descriptions into a single clear sequence.
Add a concrete example showing actual medical policy input text and the resulting FAQ output, so Claude understands the expected transformation.
Fix the broken cross-references ('See ## Features above' appears before the Features section) and reorganize so the most actionable content (Parameters, Output Format, Example Usage) appears first.
Add medical-domain-specific guidance such as how to handle medical terminology simplification, accuracy constraints for patient-facing content, and disclaimers that should accompany generated FAQs.
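To make the input/output suggestion concrete, here is a minimal sketch of the kind of example the skill file could include. Everything here is hypothetical: the field names and sample text are assumptions, since the report does not show the actual output schema of `scripts/main.py`.

```python
# Hypothetical illustration of the transformation the review asks the skill
# to document: a medical policy excerpt in, a plain-language FAQ entry out.
# Field names ("question", "answer", "category", "source") are assumptions,
# not the skill's actual output schema.
import json

policy_excerpt = (
    "Prior authorization is required for all non-emergency MRI scans. "
    "Authorization requests are adjudicated within five business days."
)

faq_output = [
    {
        "question": "Do I need approval before getting an MRI?",
        "answer": (
            "Yes, unless it is an emergency. Your provider must request "
            "approval first, and a decision is made within five business days."
        ),
        "category": "Prior Authorization",
        "source": policy_excerpt,
    }
]

print(json.dumps(faq_output, indent=2))
```

A single before/after pair like this would show Claude both the expected reading level and how jargon such as "adjudicated" should be simplified.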
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference the same information (e.g., 'See ## Features above' and 'See ## Workflow above' cross-references to sections that appear later). The description is repeated verbatim in multiple places. Boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, and Response Template add significant token overhead without providing actionable value for FAQ generation. Much of the content explains generic workflow patterns Claude already knows. | 1 / 3 |
| Actionability | The Parameters table, output JSON schema, and CLI commands provide some concrete guidance. However, the actual FAQ generation logic is entirely delegated to `scripts/main.py` without showing what it does or how it works. The workflow steps are generic process descriptions rather than specific instructions for generating FAQs from medical documents. No example of actual input/output for FAQ generation is provided. | 2 / 3 |
| Workflow Clarity | The workflow section provides a numbered sequence but steps are generic and abstract ('Confirm the user objective', 'Validate that the request matches the documented scope'). There's no specific validation for medical content accuracy or FAQ quality. The error handling section mentions fallback paths but doesn't specify concrete validation checkpoints for the FAQ generation process itself. | 2 / 3 |
| Progressive Disclosure | The document is a monolithic wall of text with many sections that could be consolidated or removed. Cross-references like 'See ## Features above' and 'See ## Workflow above' point to sections within the same file and appear before those sections, creating confusion. The content is poorly organized with redundant sections (e.g., 'Example Usage' run plan overlaps with 'Workflow', 'Output Requirements' overlaps with 'Response Template'). References to `references/` directory are vague with no specifics about what's there. | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
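A typical fix for that warning is to move unrecognized top-level keys under `metadata`. The key names below are hypothetical, since the report does not list the offending keys:

```yaml
# Before: an unrecognized top-level key (e.g. version) triggers
# frontmatter_unknown_keys.
#   version: 1.0.0
#
# After: keep only recognized keys at the top level and nest the rest.
name: faq-generator
description: Generates FAQ lists from complex medical policies or protocols.
metadata:
  version: 1.0.0
```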