Precision editing tool that reduces abstract word count through intelligent compression techniques, maintaining scientific rigor while meeting strict journal and conference requirements.
Does it follow best practices?
Impact: Pending (no eval scenarios have been run)
Passed (no known issues)
Optimize this skill with Tessl
npx tessl skill review --optimize "./scientific-skills/Academic Writing/abstract-trimmer/SKILL.md"

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a reasonably specific niche—reducing word count in scientific abstracts—but relies on vague buzzwords like 'intelligent compression techniques' and 'precision editing tool' rather than listing concrete actions. It critically lacks a 'Use when...' clause, making it harder for Claude to know when to select this skill, and misses common natural trigger terms users would employ.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to shorten an academic abstract, reduce word count for a journal submission, or meet a conference word limit.'
Replace vague phrases like 'intelligent compression techniques' with concrete actions such as 'removes redundant phrases, consolidates sentences, eliminates filler words, and tightens phrasing.'
Include more natural trigger terms users would say: 'shorten,' 'condense,' 'trim,' 'word limit,' 'too long,' 'over the word count,' 'submission guidelines.'
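Applied together, these suggestions might yield a frontmatter description along the following lines. This is a hypothetical sketch, not the skill's actual metadata:

```yaml
---
name: abstract-trimmer
description: >
  Reduces the word count of scientific abstracts by removing redundant
  phrases, consolidating sentences, eliminating filler words, and
  tightening phrasing, while preserving scientific accuracy. Use when
  the user needs to shorten, condense, or trim an abstract, is over a
  journal or conference word limit, or must meet submission guidelines.
---
```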
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (abstract word count reduction) and mentions 'intelligent compression techniques' and 'maintaining scientific rigor,' but doesn't list specific concrete actions like 'removes redundant phrases, shortens sentences, eliminates filler words.' The language is somewhat vague, with buzzwords like 'intelligent compression techniques.' | 2 / 3 |
| Completeness | It describes what the skill does (reduces abstract word count) but has no explicit 'Use when...' clause or equivalent trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' itself is also somewhat vague, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'abstract,' 'word count,' 'journal,' 'conference,' but misses common natural terms users would say, such as 'shorten abstract,' 'reduce word count,' 'trim,' 'condense,' 'word limit,' or 'submission requirements.' | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on academic abstract word count reduction is a fairly specific niche, but the phrases 'precision editing tool' and 'compression techniques' could overlap with general editing or writing-improvement skills. It's somewhat specific but not sharply delineated. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is excessively verbose and poorly organized, with significant repetition (the description appears 3+ times, commands appear in multiple sections, and circular cross-references add confusion). While it does provide some concrete CLI examples and a useful parameter table, the bulk of the content is generic boilerplate about input validation, risk assessment, security checklists, and response templates that don't add value for this specific task. The actual domain-specific guidance on abstract trimming strategies is minimal compared to the process overhead.
Suggestions
Remove all circular self-references ('See ## Features above') and consolidate duplicated content into single authoritative sections. The skill description should appear once.
Cut boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Response Template, Output Requirements, Input Validation) that don't provide abstract-trimming-specific value — these are generic patterns Claude already knows.
Add a concrete before/after example showing an actual abstract being trimmed with each strategy, so Claude understands the expected transformation quality.
Move detailed parameter tables, advanced usage, and evaluation criteria to separate reference files, keeping SKILL.md as a concise overview with clear pointers.
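As an illustration of the before/after example suggested above, a trimmed pair might look like this (the abstract text is invented for demonstration):

```
Before (35 words):
In this paper, we present a novel approach that is capable of reducing
the overall word count of scientific abstracts while at the same time
maintaining the scientific rigor and accuracy of the original text.

After (18 words):
We present an approach that reduces the word count of scientific
abstracts while preserving their rigor and accuracy.
```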
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Features above', 'See ## Usage above', 'See ## Workflow above'). The skill description is repeated verbatim in 'When to Use'. Boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, Response Template, and Output Requirements add significant token bloat without providing value Claude doesn't already know. The same commands (py_compile, --help) appear in multiple places. | 1 / 3 |
| Actionability | The skill does provide concrete CLI commands with specific flags and parameters, a parameter table, and example JSON output. However, much of the 'Implementation Details' and 'Workflow' sections are generic process descriptions rather than specific executable guidance for abstract trimming. The actual trimming logic is delegated entirely to scripts/main.py with no insight into what the script does or how to troubleshoot compression quality. | 2 / 3 |
| Workflow Clarity | There are numbered workflow steps, but they are generic project-management steps ('Confirm the user objective', 'Validate that the request matches documented scope') rather than specific abstract-trimming workflows. The 'Example run plan' is closer to useful but lacks validation checkpoints for the trimming output quality. There is no feedback loop for checking whether the trimmed abstract preserves scientific accuracy before finalizing. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with many sections that could be separate files. Circular self-references ('See ## Features above') indicate poor organization. The document mixes quick-start usage, detailed parameter tables, risk assessments, security checklists, evaluation criteria, lifecycle status, response templates, and error handling in one file. References to a 'references/' directory exist but are vague about what is actually there. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
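One common way to clear the frontmatter_unknown_keys warning, given that the check suggests moving unknown keys to metadata, is to nest the nonstandard keys under a metadata block. The key names below are illustrative, not taken from the skill itself:

```yaml
---
name: abstract-trimmer
description: Precision editing tool that reduces abstract word count ...
metadata:
  # nonstandard top-level keys moved here
  lifecycle: active
  owner: scientific-skills
---
```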