Adapt abstracts to meet specific conference word limits and formats.
Quality: 27% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Checks: Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/conference-abstract-adaptor/SKILL.md"

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear niche (conference abstract adaptation) but is too terse to be effective for skill selection. It lacks explicit trigger guidance ('Use when...'), misses common user phrasings, and doesn't enumerate the specific actions it can perform beyond adapting to word limits and formats.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to shorten, reformat, or tailor a research abstract for a specific conference submission.'
Include more natural trigger terms users would say, such as 'submission', 'shorten abstract', 'paper abstract', 'character limit', 'rewrite', 'academic conference'.
List more specific concrete actions, e.g., 'Trims, restructures, and reformats research abstracts to meet conference-specific word counts, section requirements, and formatting guidelines.'
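Taken together, these suggestions might produce a description like the sketch below. The frontmatter shape is an assumption based on typical SKILL.md files, not the skill's actual metadata:

```yaml
# Hypothetical SKILL.md frontmatter; field names assumed, wording drawn
# from the suggestions above.
name: conference-abstract-adaptor
description: >
  Trims, restructures, and reformats research abstracts to meet
  conference-specific word counts, character limits, section requirements,
  and formatting guidelines. Use when the user needs to shorten, reformat,
  or tailor a paper abstract for a specific conference submission.
```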
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (abstracts, conferences) and a couple of actions (adapting to word limits and formats), but doesn't list multiple concrete actions such as trimming content, restructuring sections, or reformatting citations. | 2 / 3 |
| Completeness | Describes what the skill does (adapt abstracts to word limits and formats) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' caps completeness at 2, and the 'what' is also only moderately detailed, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'abstracts', 'conference', 'word limits', and 'formats', but misses common variations users might say, such as 'submission', 'paper abstract', 'character limit', 'shorten abstract', 'rewrite for conference', or specific conference names/types. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'abstracts' and 'conference' provides some specificity, but could overlap with general writing/editing skills or academic writing skills. Somewhat distinctive, yet not sharply delineated. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from extreme verbosity with generic boilerplate that overwhelms the small amount of task-specific content. The supported conferences table and CLI parameters are genuinely useful, but the actual domain knowledge for adapting abstracts (trimming strategies, section requirements, character vs word counting) is absent. The workflow is entirely generic and lacks any validation checkpoints specific to abstract formatting.
Suggestions
Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements) that add no task-specific value and consume significant token budget.
Add concrete, task-specific workflow steps: e.g., 1) Parse abstract and count words, 2) Identify required sections for target conference, 3) Trim/restructure to meet word limit, 4) Verify final word count is within limit, 5) Validate all required sections are present.
Remove circular self-references ('See ## Prerequisites above', 'See ## Usage above') that add confusion without value.
Include domain-specific guidance on how to actually adapt abstracts—e.g., strategies for trimming (remove hedging language, compress methods), how to restructure a single-paragraph abstract into structured sections, and how character limits differ from word limits.
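The validation checkpoints suggested above can be sketched as a small checking pass. Everything here is illustrative: the `CONFERENCE_LIMITS` table, its entries, and the section-matching heuristic are assumptions for the sketch, not real conference requirements or part of the skill's `scripts/main.py`.

```python
import re

# Hypothetical limits table; real conference requirements vary.
CONFERENCE_LIMITS = {
    "neurips": {"max_words": 250, "sections": []},
    "chi": {"max_words": 150, "sections": ["Background", "Methods", "Results"]},
}

def word_count(text: str) -> int:
    """Count whitespace-separated tokens, the usual basis for word limits."""
    return len(text.split())

def check_abstract(text: str, conference: str) -> list[str]:
    """Return a list of problems; an empty list means the abstract conforms."""
    limits = CONFERENCE_LIMITS[conference]
    problems = []
    if word_count(text) > limits["max_words"]:
        problems.append(
            f"over limit: {word_count(text)} words > {limits['max_words']}"
        )
    for section in limits["sections"]:
        # Structured abstracts typically label sections, e.g. "Methods: ...".
        if not re.search(rf"\b{section}\b", text):
            problems.append(f"missing required section: {section}")
    return problems
```

A checker like this gives the iterative trim-and-verify loop a concrete stopping condition: re-run it after each trimming pass until it returns no problems.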
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Contains numerous sections that explain generic concepts Claude already knows (error-handling patterns, security checklists, risk assessments, lifecycle status, evaluation criteria). Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above', 'See ## Workflow above'). The actual task-specific content (supported conferences table, parameters, usage examples) is buried under boilerplate. Easily 60%+ of the content is generic filler that adds no task-specific value. | 1 / 3 |
| Actionability | The parameters table, usage examples, and supported conferences table provide concrete, executable guidance. However, the workflow steps are generic and abstract ('Confirm the user objective', 'Validate that the request matches the documented scope'), and the actual logic of how to adapt an abstract (trimming strategies, section restructuring) is entirely absent, delegated to an opaque scripts/main.py with no visibility into what it does. | 2 / 3 |
| Workflow Clarity | The workflow section is entirely generic boilerplate ('Confirm the user objective, required inputs, and non-negotiable constraints') with no task-specific sequencing. There are no validation checkpoints specific to abstract adaptation (e.g., verify word count after trimming, check that required sections are present). The 'Example run plan' is slightly better but still vague. There are no feedback loops for the core task of iteratively trimming/restructuring to meet word limits. | 1 / 3 |
| Progressive Disclosure | There is a reference to references/audit-reference.md, and the content is organized into sections with headers. However, the skill is monolithic, with far too much inline generic boilerplate (security checklist, risk assessment, lifecycle status, evaluation criteria, response template) rather than content being appropriately split out or simply omitted. The useful content is hard to find amid the noise. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 of 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |