Use when determining author order on research manuscripts, assigning CRediT contributor roles for transparency, documenting individual contributions to collaborative projects, or resolving authorship disputes in multi-institutional research. Generates fair and transparent authorship assignments following ICMJE guidelines and CRediT taxonomy. Helps research teams document contributions, resolve disputes, and ensure equitable credit distribution in academic publications.
Score: 62
Quality: 53% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Status: Passed (No known issues)
Optimize this skill with Tessl
npx tessl skill review --optimize "./scientific-skills/Academic Writing/authorship-credit-gen/SKILL.md"

Quality
Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its niche in academic authorship management. It leads with an explicit 'Use when...' clause containing natural trigger terms, follows with concrete capabilities, and uses domain-specific terminology that makes it highly distinguishable. The description is well-structured, concise, and covers both the what and when comprehensively.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: determining author order, assigning CRediT contributor roles, documenting individual contributions, resolving authorship disputes, and generating fair authorship assignments following ICMJE guidelines and the CRediT taxonomy. | 3 / 3 |
| Completeness | Clearly answers both what ('Generates fair and transparent authorship assignments following ICMJE guidelines and CRediT taxonomy') and when ('Use when determining author order on research manuscripts, assigning CRediT contributor roles, documenting individual contributions, or resolving authorship disputes'). The 'Use when...' clause is explicit and detailed. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'author order', 'research manuscripts', 'CRediT', 'contributor roles', 'authorship disputes', 'multi-institutional research', 'ICMJE guidelines', 'academic publications', 'contributions'. These are the terms researchers reach for when seeking this kind of help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a very clear niche around academic authorship, the CRediT taxonomy, and ICMJE guidelines. The domain-specific terminology (author order, CRediT roles, authorship disputes, multi-institutional research) makes it highly unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 7%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate that applies to any skill, not specifically to authorship credit generation. The code examples are fictional API calls that won't execute, and the description is copy-pasted multiple times across sections. The actual domain expertise—ICMJE guidelines, CRediT taxonomy specifics, concrete authorship dispute resolution logic—is almost entirely absent, replaced by vague references to a script that may not implement any of the shown interfaces.
Suggestions
Replace fictional API examples with actual executable code or concrete step-by-step instructions for determining authorship order (e.g., a real scoring algorithm, actual CRediT role definitions, an ICMJE criteria checklist); a first sketch of what that decision logic could look like follows this list.
Remove all duplicate content—the description appears 3 times verbatim. Consolidate 'When to Use' and 'When to Use This Skill' into a single brief section.
Strip generic boilerplate sections (Output Requirements, Error Handling, Input Validation, Response Template, Quality Checklist) that contain no authorship-specific guidance, or replace them with domain-specific content (e.g., what constitutes valid contribution data, how to handle ghost authorship); the second sketch below shows one possible shape for such checks.
Add concrete domain knowledge: list the 14 CRediT roles, summarize the 4 ICMJE criteria for authorship, and provide a real worked example showing how contributions map to author order with actual decision logic; the third sketch below collects that reference data.
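To make the first suggestion concrete, an executable replacement for the fictional AuthorshipCreditGenerator calls could be as small as the sketch below. The weights, the Contributor record, and the tie-breaking rule are illustrative assumptions, not the skill's actual implementation; the point is that the SKILL.md can ship real, runnable decision logic instead of aspirational API design.

```python
# Illustrative sketch only: ranks ICMJE-eligible contributors by weighted CRediT roles.
from dataclasses import dataclass, field

# Assumed weights for a few CRediT roles; a real skill would define all 14 explicitly.
ROLE_WEIGHTS = {
    "Conceptualization": 3.0,
    "Methodology": 2.5,
    "Investigation": 2.0,
    "Formal analysis": 2.0,
    "Writing - original draft": 3.0,
    "Writing - review & editing": 1.0,
    "Supervision": 1.5,
}

@dataclass
class Contributor:
    name: str
    roles: list[str] = field(default_factory=list)
    meets_icmje_criteria: bool = False  # all four ICMJE criteria confirmed

def contribution_score(c: Contributor) -> float:
    """Sum the weights of the CRediT roles this contributor performed."""
    return sum(ROLE_WEIGHTS.get(role, 0.0) for role in c.roles)

def propose_author_order(contributors: list[Contributor]) -> list[str]:
    """Byline order: eligible contributors ranked by score, highest first.

    Contributors who do not meet all four ICMJE criteria are excluded from the
    byline and belong in the Acknowledgements. Ties are broken alphabetically
    here only for determinism; a real tool should surface ties for discussion.
    """
    eligible = [c for c in contributors if c.meets_icmje_criteria]
    ranked = sorted(eligible, key=lambda c: (-contribution_score(c), c.name))
    return [c.name for c in ranked]

if __name__ == "__main__":
    team = [
        Contributor("Alice", ["Conceptualization", "Writing - original draft"], True),
        Contributor("Bob", ["Formal analysis", "Investigation"], True),
        Contributor("Carol", ["Supervision"], False),  # acknowledge rather than list
    ]
    print(propose_author_order(team))  # ['Alice', 'Bob']
```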
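In the same spirit, the generic Input Validation boilerplate could be swapped for checks that encode authorship policy. The rules below (flagging listed authors with no declared roles, and substantive contributors missing from the byline) are an assumed, minimal shape for such validation, not an established standard.

```python
# Assumed shape for authorship-specific input validation (names are hypothetical).

def validate_contributions(byline: list[str], contributions: dict[str, list[str]]) -> list[str]:
    """Return warnings about possible gift or ghost authorship.

    `byline` is the proposed author list; `contributions` maps each person to
    their declared CRediT roles.
    """
    warnings = []
    for author in byline:
        if not contributions.get(author):
            warnings.append(f"Possible gift authorship: {author} is listed but declares no roles.")
    for person, roles in contributions.items():
        if roles and person not in byline:
            warnings.append(f"Possible ghost authorship: {person} declares {roles} but is not listed.")
    return warnings

print(validate_contributions(
    byline=["Alice", "Dan"],
    contributions={"Alice": ["Conceptualization"], "Eve": ["Software"], "Dan": []},
))
# Flags Dan (listed, no roles) and Eve (declared roles, not listed).
```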
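Finally, the missing domain knowledge can simply be embedded as data. The 14 roles and 4 criteria below are the published CRediT taxonomy and ICMJE authorship criteria; the constant names, the checklist helper, and the worked example are hypothetical.

```python
# Reference data the SKILL.md could include verbatim (constant names are hypothetical).

# The 14 contributor roles of the CRediT taxonomy.
CREDIT_ROLES = [
    "Conceptualization", "Data curation", "Formal analysis", "Funding acquisition",
    "Investigation", "Methodology", "Project administration", "Resources",
    "Software", "Supervision", "Validation", "Visualization",
    "Writing - original draft", "Writing - review & editing",
]

# The four ICMJE authorship criteria; authorship requires meeting ALL four.
ICMJE_CRITERIA = [
    "Substantial contributions to the conception or design of the work, or to "
    "the acquisition, analysis, or interpretation of data",
    "Drafting the work or reviewing it critically for important intellectual content",
    "Final approval of the version to be published",
    "Agreement to be accountable for all aspects of the work",
]

def icmje_checklist(answers: list[bool]) -> str:
    """Classify a contributor from yes/no answers to the four criteria."""
    if len(answers) != len(ICMJE_CRITERIA):
        raise ValueError("Expected one answer per ICMJE criterion")
    return "author" if all(answers) else "acknowledged contributor"

# Worked example: a statistician who analysed the data and approved the final
# manuscript but did not help draft or revise it does not qualify as an author.
print(icmje_checklist([True, False, True, True]))  # acknowledged contributor
```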
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The description is copy-pasted verbatim into 'When to Use', 'Key Features', and 'When to Use This Skill' sections. Generic boilerplate sections (Output Requirements, Error Handling, Input Validation, Response Template, Quality Checklist) pad the content significantly without adding domain-specific value. Much of this content explains things Claude already knows how to do. | 1 / 3 |
| Actionability | The code examples are entirely pseudocode calling fictional APIs (AuthorshipCreditGenerator, tool.calculate_contribution_scores, etc.) that almost certainly don't exist in the referenced scripts/main.py. None of the code is executable or copy-paste ready; it is aspirational API design, not working code. The Quick Start shows two different import paths, suggesting confusion about the actual implementation. | 1 / 3 |
| Workflow Clarity | The workflow section is entirely generic ('Confirm the user objective', 'Validate that the request matches the documented scope') with no authorship-specific steps. There are no validation checkpoints specific to authorship determination, no concrete decision criteria for resolving disputes, and no feedback loops for verifying correctness of authorship assignments. The 'Implementation Details' section references '## Workflow above' but appears before the Workflow section. | 1 / 3 |
| Progressive Disclosure | There are references to external files (references/guide.md, references/examples/, references/api-docs/, references/audit-reference.md), which is good structure. However, the main file itself is a monolithic wall of text with massive amounts of inline content that could be split out, and the references feel like boilerplate rather than genuinely curated pointers to real supporting material. | 2 / 3 |
| Total | | 5 / 12 Passed |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |