
anki-card-creator

Use anki-card-creator for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries for study-card generation.

49

Quality: 37%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/anki-card-creator/SKILL.md"

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is heavy on abstract process language ('structured execution', 'explicit assumptions', 'clear output boundaries') but light on concrete capabilities. It fails to specify what the skill actually does with Anki cards and uses jargon that users are unlikely to naturally say. The trigger terms are partially relevant but miss common user vocabulary like 'flashcards' or 'Anki deck'.

Suggestions

Replace abstract phrases like 'structured execution' and 'clear output boundaries' with concrete actions such as 'Creates Anki flashcards from notes, generates Q&A pairs, exports to .apkg format'.

Add natural trigger terms users would actually say: 'flashcards', 'Anki deck', 'spaced repetition', 'study cards', 'memorization'.

Strengthen the 'Use when' clause with specific triggers: 'Use when the user wants to create Anki flashcards, generate study cards from lecture notes, or convert academic content into spaced repetition decks'.
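Applying these suggestions together might produce a description like the following. This is a hedged sketch: the frontmatter layout is assumed from common SKILL.md conventions, and the capability list is drawn from the suggestions above rather than from the skill itself.

```yaml
---
name: anki-card-creator
description: >
  Creates Anki flashcards from academic notes: generates Q&A pairs,
  formats them as study cards, and exports a tab-separated file for
  import into an Anki deck. Use when the user wants to create Anki
  flashcards, generate study cards from lecture notes, or convert
  academic content into spaced repetition decks.
---
```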

Dimension scores

Specificity: 1 / 3
The description uses vague, abstract language like 'structured execution', 'explicit assumptions', and 'clear output boundaries' without listing any concrete actions. It does not specify what the skill actually does (e.g., create flashcards, parse notes, format Q&A pairs).

Completeness: 2 / 3
There is a 'Use when' clause ('Use anki-card-creator for academic writing workflows...'), but the 'what does this do' part is extremely weak—it only vaguely references 'study-card generation' without describing concrete capabilities. The 'when' is present but overly abstract.

Trigger Term Quality: 2 / 3
Contains some relevant keywords like 'anki', 'study-card', and 'academic writing', but misses common natural terms users would say, such as 'flashcards', 'Anki deck', 'spaced repetition', 'study notes', or 'memorization'.

Distinctiveness / Conflict Risk: 2 / 3
The mention of 'anki' and 'study-card generation' provides some distinctiveness, but the phrase 'academic writing workflows' is broad enough to overlap with general writing or academic assistance skills.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered for its purpose, with extensive boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that consume tokens without adding proportional value. The core actionable content—how to actually generate Anki cards—is diluted by repetitive sections and generic process guidance. The skill would benefit greatly from aggressive trimming, consolidation of duplicate content, and addition of concrete input/output examples.

Suggestions

Remove or consolidate duplicate content: 'py_compile' command appears 3 times, workflow is described in both 'Example Usage' and 'Workflow' sections, and multiple sections cross-reference each other circularly.

Move boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) to a separate reference file and link to it, keeping SKILL.md focused on actionable guidance.

Add a concrete end-to-end example showing actual input content and the expected TSV output, rather than just the single-line 'Q: What is the mechanism of metformin?' with no corresponding output.

Remove explanatory filler like 'See ## Prerequisites above for related details' (which points to a section that just says 'No additional packages required') and the self-referential feature description that quotes the skill's own description verbatim.
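The end-to-end example asked for above could be as small as the following sketch. The answer text and the `cards.tsv` file name are illustrative assumptions, not taken from the skill; only the metformin question appears in the skill itself.

```python
import csv

# Hypothetical Q&A pairs extracted from study notes.
# The answer text here is illustrative, not from the skill.
cards = [
    ("What is the mechanism of metformin?",
     "It activates AMPK and reduces hepatic gluconeogenesis."),
]

# Anki imports tab-separated text: one card per line, front<TAB>back.
with open("cards.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)
```

Showing both the input pair and the resulting TSV line would give an agent an unambiguous target for the output format.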

Dimension scores

Conciseness: 1 / 3
Extremely verbose and repetitive. Multiple sections restate the same information (e.g., 'python -m py_compile scripts/main.py' appears 3 times, workflow steps are repeated across sections). Contains extensive boilerplate (Risk Assessment, Security Checklist, Lifecycle Status) that adds little actionable value. Cross-references to non-existent sections ('See ## Prerequisites above') and self-referential descriptions waste tokens.

Actionability: 2 / 3
Provides concrete CLI commands and a parameter table, which is useful. However, the actual card generation examples are thin—only one trivial example ('Q: What is the mechanism of metformin?') with no corresponding output shown. The workflow steps are procedural but somewhat generic ('confirm the study objective') rather than giving specific executable guidance for common scenarios.

Workflow Clarity: 2 / 3
The Workflow section provides a reasonable 5-step sequence with a stop-and-ask fallback for missing content. However, validation checkpoints are weak—there's no explicit 'validate output before returning' step, and the feedback loop for script failure is buried in a separate Error Handling section rather than integrated into the workflow. The 'Example run plan' duplicates the workflow without adding clarity.

Progressive Disclosure: 2 / 3
References a single external file (references/audit-reference.md), which is appropriate, but the main document itself is a monolithic wall of text with many sections that could be consolidated or split out. The Risk Assessment, Security Checklist, and Lifecycle Status sections bloat the main file and would be better as separate references. Section organization is present but poorly prioritized—quick-start information is buried among boilerplate.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria results

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.
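A typical fix for this warning is to nest nonstandard keys under a `metadata` block. The key names below are hypothetical, since the report does not say which keys triggered the warning:

```yaml
---
name: anki-card-creator
description: ...
# Hypothetical unknown top-level keys that would trigger the warning
# if left at the top level; nesting them under `metadata` resolves it.
metadata:
  lifecycle: stable
  risk-level: low
---
```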

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

