
# medical-device-mdr-auditor

Audit medical device technical files against EU MDR 2017/745 regulations.


- Quality: 27% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/medical-device-mdr-auditor/SKILL.md"

## Quality

### Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive regulatory niche (EU MDR medical device auditing), which is its strongest aspect. However, it lacks a 'Use when...' clause to guide skill selection, and the single action verb 'audit' is too broad—it would benefit from listing specific audit activities. The trigger terms are adequate but could be expanded with common synonyms and related regulatory concepts.

Suggestions:

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about EU MDR compliance, CE marking, medical device classification, or technical documentation review.'
- List specific concrete actions beyond 'audit', such as 'Reviews clinical evaluation reports, checks essential requirements checklists, validates risk management files, and assesses labeling compliance.'
- Include additional natural trigger terms like 'CE marking', 'conformity assessment', 'STED', 'notified body', 'clinical evaluation', and 'essential requirements'.
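Putting these suggestions together, a rewritten description might look like the sketch below. This is illustrative only: the field names follow common SKILL.md frontmatter conventions, and the wording is assembled from the suggestions above rather than taken from the skill's actual file.

```yaml
# Hypothetical SKILL.md frontmatter sketch, not the skill's real content.
name: medical-device-mdr-auditor
description: >
  Audits medical device technical files against EU MDR 2017/745: reviews
  clinical evaluation reports, checks essential requirements checklists,
  validates risk management files, and assesses labeling compliance.
  Use when the user asks about EU MDR compliance, CE marking, conformity
  assessment, medical device classification, notified body submissions,
  STED, or technical documentation review.
```

Note how the first sentence lists concrete actions, the 'Use when...' sentence guides skill selection, and the trigger terms cover the common synonyms the rubric flags as missing.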

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (medical device technical files) and one action (audit), but does not list multiple specific concrete actions like checking classification, reviewing clinical evidence, validating labeling, etc. | 2 / 3 |
| Completeness | Describes what it does (audit technical files against EU MDR) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric should cap completeness at 2, and since the 'when' is entirely missing, it scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'medical device', 'technical files', 'EU MDR', and '2017/745' which are terms a user might use, but misses common variations like 'CE marking', 'conformity assessment', 'essential requirements', 'STED', or 'notified body'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'medical device technical files' and 'EU MDR 2017/745' is a very specific regulatory niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 8 / 12 |

Passed

### Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from severe verbosity and poor organization, with extensive boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that consume tokens without adding MDR-specific value. The MDR check points and JSON output format are the strongest elements, providing genuinely useful domain-specific content. However, the generic workflow, circular cross-references, and repetitive content significantly undermine usability.

Suggestions:

- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements) and focus exclusively on MDR audit-specific content.
- Replace the generic workflow with an MDR-specific audit sequence: e.g., 1) Identify device class → 2) Check class-specific required documents → 3) Validate each document against MDR checklist → 4) Generate findings → 5) Classify compliance level.
- Fix the document structure: put Overview first, remove circular cross-references ('See ## X above/below'), and consolidate the three separate usage/example sections into one.
- Move the detailed MDR check points and output format into a referenced file (e.g., references/mdr-checklist.md) and keep SKILL.md as a concise overview with clear navigation to detailed content.
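The suggested audit sequence can be sketched in code to show why it is more actionable than a generic workflow. The sketch below is a minimal illustration under stated assumptions: the class names and required-document lists are placeholders, not taken from the MDR text or from this skill's actual checklist.

```python
# Minimal sketch of the suggested MDR audit sequence:
# identify device class -> look up class-specific required documents ->
# validate submissions -> generate findings -> classify compliance.
# Document lists below are illustrative placeholders only.

REQUIRED_DOCS = {
    "Class I": ["technical documentation", "declaration of conformity"],
    "Class IIa": ["technical documentation", "declaration of conformity",
                  "clinical evaluation report"],
    "Class III": ["technical documentation", "declaration of conformity",
                  "clinical evaluation report", "periodic safety update report"],
}

def audit(device_class: str, submitted_docs: list[str]) -> dict:
    """Run the five-step sequence and return structured findings."""
    required = REQUIRED_DOCS.get(device_class)
    if required is None:
        raise ValueError(f"unknown device class: {device_class!r}")
    # Validation checkpoint: completeness is checked before classification.
    findings = [f"Missing: {doc}" for doc in required if doc not in submitted_docs]
    return {
        "class": device_class,
        "findings": findings,
        "compliance": "compliant" if not findings else "non-compliant",
    }
```

Even as pseudocode in SKILL.md, spelling out the sequence this way gives the agent concrete checkpoints (e.g., verify document completeness before class-specific checks) that the generic 'confirm the user objective' workflow lacks.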

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above', 'See ## Workflow above'). Contains boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that add no actionable value for Claude. The same information (e.g., py_compile command, script path) is repeated across multiple sections. Generic output requirements and response templates pad the content significantly. | 1 / 3 |
| Actionability | The MDR check points with specific regulation references (Annex XIV Part A, Article 83) and the JSON output format example are genuinely useful and concrete. However, much of the 'workflow' is generic boilerplate ('confirm the user objective', 'validate that the request matches documented scope') rather than MDR-specific executable guidance. The usage examples include hardcoded absolute paths which reduce portability. | 2 / 3 |
| Workflow Clarity | The workflow section is entirely generic ('Confirm the user objective, required inputs...') with no MDR-specific audit sequence. There are no validation checkpoints for the audit process itself—e.g., no step to verify document completeness before classification-specific checks, no feedback loop for resolving findings. The 'Example run plan' is also generic and doesn't describe the actual audit workflow. | 1 / 3 |
| Progressive Disclosure | The document is a monolithic wall of text with poor organization. Sections are out of logical order (Overview appears after Implementation Details; Prerequisites appears near the end but is referenced earlier). Circular cross-references ('See ## Prerequisites above' when Prerequisites is actually below) create confusion. The single external reference to 'references/audit-reference.md' is mentioned only at the very end. Content that should be in separate files (security checklist, risk assessment, evaluation criteria) is inlined. | 1 / 3 |
| Total | | 5 / 12 |

Passed

### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed.

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
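The frontmatter warning is typically resolved by nesting non-spec keys under a metadata block. The sketch below is hypothetical: the key name `author` is a stand-in, since the report does not say which key triggered the warning.

```yaml
# Before: a non-spec top-level key triggers frontmatter_unknown_keys.
# author: aipoch        # hypothetical unknown key

# After: nest it under metadata so validation passes.
metadata:
  author: aipoch        # hypothetical; use the actual flagged key(s)
```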

Repository: aipoch/medical-research-skills (Reviewed)
