blind-review-sanitizer

Use blind-review-sanitizer for academic writing workflows that need structured anonymization, explicit assumptions, and clear output boundaries for double-blind submission.


Quality: 46% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/Academic Writing/blind-review-sanitizer/SKILL.md
Quality

Discovery

57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche (academic blind-review anonymization) but lacks specificity about the concrete actions the skill performs. The trigger terms are relevant but could be expanded with more natural user-language variations. The what/when distinction is blurred, with abstract terms like 'explicit assumptions' and 'clear output boundaries' adding little clarity about actual capabilities.

Suggestions

Replace abstract phrases like 'structured anonymization' and 'clear output boundaries' with concrete actions such as 'removes author names, strips institutional affiliations, redacts acknowledgments, and flags self-citations'.

Add natural trigger term variations users might say, such as 'anonymous manuscript', 'de-identify paper', 'remove identifying information', 'peer review preparation'.

Separate the 'what' and 'when' more clearly, e.g., 'Removes author names, affiliations, and self-citations from academic papers. Use when preparing manuscripts for double-blind peer review or anonymizing submissions.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (academic writing, double-blind submission) and some actions (structured anonymization, explicit assumptions, clear output boundaries), but these are somewhat abstract rather than concrete actions like 'removes author names, strips affiliations, redacts acknowledgments'. | 2 / 3 |
| Completeness | The 'when' is partially addressed ('academic writing workflows that need structured anonymization... for double-blind submission'), but the 'what' is weak: it doesn't clearly enumerate what the skill actually does beyond vague terms like 'structured anonymization' and 'clear output boundaries'. The 'Use when' clause exists but is merged with the 'what', making both less clear. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'blind-review', 'anonymization', 'double-blind submission', and 'academic writing', but misses common natural variations users might say, such as 'remove author names', 'de-identify paper', 'anonymous manuscript', 'peer review preparation', or 'blinded submission'. | 2 / 3 |
| Distinctiveness Conflict Risk | The description targets a very specific niche (blind review sanitization for academic double-blind submissions) that is unlikely to conflict with other skills. The combination of 'blind-review', 'anonymization', and 'double-blind submission' creates a distinct trigger profile. | 3 / 3 |
| Total | | 9 / 12 |

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered and verbose for what is essentially a CLI wrapper around a Python anonymization script. It repeats the same commands and concepts across multiple sections, includes extensive boilerplate that Claude doesn't need (security checklists, lifecycle status, response templates), and buries the actionable content under layers of generic process guidance. The parameter table and error handling sections are the strongest parts, but the overall token cost is far too high for the information density.

Suggestions

Consolidate the workflow into a single authoritative section, removing duplicate descriptions from 'Example Usage', 'Implementation Details', and 'Workflow'. Add an explicit post-run validation step (e.g., grep for author names in output).

Remove sections that restate Claude's general capabilities: 'Output Requirements', 'Response Template', 'Input Validation' boilerplate, and 'Key Features' bullet about 'structured execution path'. These add ~40 lines of zero-information content.

Eliminate repeated commands—'python -m py_compile scripts/main.py' appears three times. Keep it only in Quick Check or Audit-Ready Commands, not both plus Example Usage.

Move Risk Assessment, Security Checklist, Evaluation Criteria, and Lifecycle Status into a separate reference file to keep SKILL.md focused on actionable guidance.
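The post-run validation step suggested above could be sketched as follows. This is a minimal illustration, not part of the skill: the identifier list and sample text are assumptions, and a real check would load them from the manuscript's metadata and the script's output file.

```python
import re

# Hypothetical post-run check (not part of the skill): scan the anonymized
# output for identifying strings that must not survive sanitization.
IDENTIFIERS = ["Jane Doe", "Example University", "j.doe@example.edu"]

def leaked_terms(text: str) -> list:
    """Return any listed identifier still present, matched case-insensitively."""
    return [t for t in IDENTIFIERS
            if re.search(re.escape(t), text, re.IGNORECASE)]

sample = "Results were obtained at REDACTED by ANONYMIZED."
print(leaked_terms(sample))  # → []
```

A non-empty result would signal that the workflow should stop and re-run sanitization rather than hand the output to the user.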

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose and repetitive. Multiple sections restate the same information (e.g., 'python -m py_compile scripts/main.py' appears 3 times, and the workflow steps are described in multiple places). Contains self-referential loops like 'See ## Prerequisites above' and 'See ## Workflow above'. Sections like 'Key Features', 'Implementation Details', 'Input Validation', 'Output Requirements', and 'Response Template' add significant bulk with generic guidance Claude already knows. The skill could be reduced to ~30% of its current size without losing actionable content. | 1 / 3 |
| Actionability | Provides concrete CLI commands and a parameter table with specific flags, which is useful. However, much of the guidance is procedural boilerplate rather than executable specifics. The actual anonymization logic is delegated entirely to scripts/main.py with no inline examples of what the script does to content. The 'Example run plan' is generic and not specific to this tool's behavior. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a reasonable 5-step sequence with a stop condition for missing inputs, and error handling is documented. However, there are no explicit validation checkpoints between steps (e.g., no 'verify the output contains no author names' step). The workflow is split across multiple sections (the Example Usage run plan, Workflow, and Implementation Details), making it hard to follow a single authoritative sequence. | 2 / 3 |
| Progressive Disclosure | References a 'references/' directory and links to audit-reference.md, which is good. However, the main file is monolithic, with many sections that could be consolidated or moved to reference files (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status). The document is over-structured, with too many top-level sections rather than being a concise overview pointing to details. | 2 / 3 |
| Total | | 7 / 12 |

Passed
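The Actionability critique notes that the anonymization logic lives entirely in scripts/main.py with no inline example of what it does to content. The kind of inline example the review finds missing might look like the sketch below. This is an assumption about plausible behavior, not the actual scripts/main.py; the author names, affiliations, and placeholder strings are all hypothetical.

```python
import re

# Sketch of plausible sanitizer behavior (an assumption; the real
# scripts/main.py may differ): replace author names and affiliations
# with fixed placeholders, matching case-insensitively.
AUTHORS = ["Jane Doe"]                  # hypothetical manuscript metadata
AFFILIATIONS = ["Example University"]

def sanitize(text: str) -> str:
    """Replace listed identifiers with anonymization placeholders."""
    for name in AUTHORS:
        text = re.sub(re.escape(name), "ANONYMIZED", text, flags=re.IGNORECASE)
    for aff in AFFILIATIONS:
        text = re.sub(re.escape(aff), "REDACTED", text, flags=re.IGNORECASE)
    return text

print(sanitize("Jane Doe (Example University) extends prior work."))
# → ANONYMIZED (REDACTED) extends prior work.
```

Even a few lines like this in SKILL.md would show an agent what a successful run produces, which is cheaper than the ~40 lines of generic process guidance the review recommends cutting.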

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed

Repository: aipoch/medical-research-skills (Reviewed)
