
blind-review-sanitizer

Use blind-review-sanitizer for academic writing workflows that need structured anonymization, explicit assumptions, and clear output boundaries for double-blind submission.


Quality: 46% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/blind-review-sanitizer/SKILL.md"

Quality

Discovery: 57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche (academic blind review anonymization) but lacks specificity about what concrete actions the skill performs. It partially addresses when to use it but would benefit from more explicit trigger terms and a clearer enumeration of capabilities like removing author names, stripping affiliations, or anonymizing self-citations.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Removes author names, strips institutional affiliations, anonymizes self-citations, and redacts acknowledgment sections from academic manuscripts.'

Include more natural trigger term variations users might say, such as 'anonymize paper,' 'remove author info,' 'blind review,' 'peer review preparation,' or 'de-identify manuscript.'

Dimension scores

Specificity (2 / 3): Names the domain (academic writing/double-blind submission) and mentions some actions like 'structured anonymization' and 'explicit assumptions,' but doesn't list concrete specific actions (e.g., remove author names, strip affiliations, redact acknowledgments, anonymize self-citations).

Completeness (2 / 3): The 'when' is partially addressed with 'academic writing workflows that need structured anonymization... for double-blind submission,' but the 'what' is weak—it doesn't clearly explain what concrete actions the skill performs. The 'Use when' equivalent is present but blended with vague capability descriptions.

Trigger Term Quality (2 / 3): Includes some relevant terms like 'double-blind submission,' 'anonymization,' and 'academic writing,' but misses common natural variations users might say such as 'blind review,' 'remove author info,' 'anonymize paper,' 'de-identify manuscript,' or 'peer review preparation.'

Distinctiveness / Conflict Risk (3 / 3): The description targets a very specific niche—blind review sanitization for academic double-blind submissions—which is unlikely to conflict with other skills. The combination of 'anonymization,' 'double-blind,' and 'academic writing' creates a distinct trigger profile.

Total: 9 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered for its purpose, with extensive boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that consume tokens without adding actionable guidance. The core workflow is reasonable but buried in redundant content, and the actual anonymization logic is opaque since it's entirely delegated to an external script without showing concrete input/output examples. Circular cross-references ('See ## X above' pointing to sections that appear later) further hurt usability.

Suggestions

Remove or consolidate redundant sections: merge 'Quick Check' and 'Audit-Ready Commands' (identical content), remove circular cross-references, and move Risk Assessment/Security Checklist/Lifecycle Status/Evaluation Criteria to a separate reference file.

Add a concrete before/after example showing what the anonymization actually does to manuscript text (e.g., input paragraph with author names → output paragraph with redactions).
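A before/after example of that kind could look like the following minimal sketch. Everything in it is hypothetical: the names, the affiliation, and the redaction patterns stand in for whatever `scripts/main.py` actually does, which this review does not show.

```python
import re

# Hypothetical author metadata; a real run would load this from the
# manuscript rather than hard-coding it.
AUTHORS = ["Jane Doe", "John Smith"]
AFFILIATIONS = ["Example University"]

def sanitize(text: str) -> str:
    """Redact author names, affiliations, and self-citations."""
    for name in AUTHORS:
        text = text.replace(name, "[AUTHOR]")
    for aff in AFFILIATIONS:
        text = text.replace(aff, "[AFFILIATION]")
    # Anonymize self-citations such as "Doe et al."; this only covers
    # surnames we already know from the author list.
    for name in AUTHORS:
        surname = re.escape(name.split()[-1])
        text = re.sub(rf"\b{surname} et al\.", "[ANONYMIZED] et al.", text)
    return text

before = "Jane Doe (Example University) extends Doe et al. (2021)."
print(sanitize(before))
# [AUTHOR] ([AFFILIATION]) extends [ANONYMIZED] et al. (2021).
```

Even a toy example like this would make the skill's intended behavior concrete in a way the parameter table alone does not.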

Add an explicit validation checkpoint after sanitization runs, such as 'grep for remaining author names in the output file to verify completeness' before delivering the result.
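Such a checkpoint might be sketched as follows; the file contents and term list are hypothetical stand-ins for the sanitizer's real output and the manuscript's real author metadata.

```python
import tempfile
from pathlib import Path

# Hypothetical identifying terms; a real check would take these from
# the same metadata the sanitizer used.
AUTHOR_NAMES = ["Jane Doe", "Example University"]

def find_leaks(path: Path, terms: list[str]) -> list[str]:
    """Return any identifying terms still present in the sanitized output."""
    text = path.read_text(encoding="utf-8")
    return [t for t in terms if t.lower() in text.lower()]

# Demo on a throwaway file standing in for the sanitizer's output.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("[AUTHOR] ([AFFILIATION]) presents a new method.")
    out = Path(f.name)

leaks = find_leaks(out, AUTHOR_NAMES)
assert not leaks, f"Sanitization incomplete: {leaks}"
print("No identifying terms found; safe to deliver.")
```

Running a check like this between sanitization and delivery would give the workflow the feedback loop the review says it lacks.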

Fix the document ordering so cross-references point forward correctly, or eliminate them entirely since the sections are in the same file.

Dimension scores

Conciseness (1 / 3): Extremely verbose with significant redundancy. Multiple sections repeat the same information (e.g., 'Quick Check' and 'Audit-Ready Commands' contain identical commands, 'See ## Prerequisites above' and 'See ## Workflow above' are circular references). The skill explains concepts Claude already knows (error handling principles, input validation basics) and includes boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, and Evaluation Criteria that add little actionable value for the task.

Actionability (2 / 3): Provides concrete CLI commands and a parameter table with specific flags, which is useful. However, the actual anonymization logic is entirely delegated to `scripts/main.py` without showing what the script does or providing executable examples of the sanitization process itself. The 'Example run plan' is procedural but generic, and there's no example showing actual input/output of the anonymization.

Workflow Clarity (2 / 3): The Workflow section provides a reasonable 5-step sequence with a stop condition for missing inputs (step 5) and a fallback path (step 3). However, there are no explicit validation checkpoints between steps (e.g., verifying the sanitization output before delivery), and the workflow lacks a feedback loop for checking whether anonymization was complete. The circular 'See ## Workflow above' reference in Implementation Details is confusing.

Progressive Disclosure (2 / 3): References `references/audit-reference.md` for additional guidance, which is appropriate. However, the main file is a monolithic wall of text with many sections that could be consolidated or moved to reference files (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria). The document structure has poor internal navigation with broken cross-references ('See ## Prerequisites above' appears before Prerequisites).

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

