
authorship-credit-gen

Use when determining author order on research manuscripts, assigning CRediT contributor roles for transparency, documenting individual contributions to collaborative projects, or resolving authorship disputes in multi-institutional research. Generates fair and transparent authorship assignments following ICMJE guidelines and CRediT taxonomy. Helps research teams document contributions, resolve disputes, and ensure equitable credit distribution in academic publications.

73

Quality

67%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/Academic Writing/authorship-credit-gen/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a specific academic niche. It uses third person voice correctly, provides comprehensive trigger terms that researchers would naturally use, and explicitly states both what the skill does and when to use it. The domain-specific terminology (CRediT, ICMJE) makes it highly distinctive.

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: 'determining author order', 'assigning CRediT contributor roles', 'documenting individual contributions', 'resolving authorship disputes', 'Generates fair and transparent authorship assignments'. These are concrete, actionable capabilities.

3 / 3

Completeness

Explicitly answers both what ('Generates fair and transparent authorship assignments following ICMJE guidelines and CRediT taxonomy') and when ('Use when determining author order on research manuscripts, assigning CRediT contributor roles...'). The 'Use when' clause is present and comprehensive.

3 / 3

Trigger Term Quality

Excellent coverage of natural terms users would say: 'author order', 'research manuscripts', 'CRediT', 'contributor roles', 'authorship disputes', 'multi-institutional research', 'ICMJE guidelines', 'academic publications'. These are terms researchers would naturally use.

3 / 3

Distinctiveness Conflict Risk

Highly distinctive niche focused specifically on academic authorship and contribution tracking. Terms like 'CRediT taxonomy', 'ICMJE guidelines', 'authorship disputes', and 'author order' are unique to this domain and unlikely to conflict with other skills.

3 / 3

Total: 12 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from severe verbosity and redundancy, with the same information repeated across multiple sections (When to Use appears 3 times, workflow guidance is duplicated). While it provides concrete code examples for authorship determination, the examples show inconsistent API patterns and the workflow steps are generic boilerplate rather than domain-specific guidance for resolving authorship disputes or applying ICMJE guidelines.

Suggestions

Consolidate all 'When to Use' content into a single 2-3 line section and remove the redundant descriptions from 'Key Features' and other sections

Unify the Quick Start code to show one consistent initialization pattern and verify the module imports match actual file structure

Replace generic workflow steps with authorship-specific validation checkpoints (e.g., 'Verify all contributors meet ICMJE criteria', 'Confirm contribution weights sum to 1.0')

Remove duplicate sections (two Workflow sections, two References sections) and consolidate the Response Template/Output Requirements into a single output format section
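The authorship-specific checkpoints suggested above could be sketched as small validation helpers. This is a minimal illustration, not the skill's actual code: the field names are assumptions, while the four required criteria mirror the standard ICMJE authorship requirements (substantial contribution, drafting or revising, final approval, accountability).

```python
# Hypothetical validation checkpoints for an authorship workflow.
# Dictionary field names are illustrative, not taken from the skill.

def weights_sum_to_one(weights, tol=1e-9):
    """Checkpoint: contribution weights must sum to 1.0."""
    return abs(sum(weights.values()) - 1.0) < tol

def meets_icmje_criteria(author):
    """Checkpoint: ICMJE requires all four authorship criteria to be met."""
    required = (
        "substantial_contribution",  # design, data acquisition, or analysis
        "drafting_or_revising",      # drafted or critically revised the work
        "final_approval",            # approved the version to be published
        "accountability",            # accountable for all aspects of the work
    )
    return all(author.get(key, False) for key in required)

authors = [
    {"name": "A. Researcher", "substantial_contribution": True,
     "drafting_or_revising": True, "final_approval": True,
     "accountability": True},
    {"name": "B. Collaborator", "substantial_contribution": True,
     "drafting_or_revising": True, "final_approval": True,
     "accountability": False},  # fails the accountability criterion
]
weights = {"A. Researcher": 0.6, "B. Collaborator": 0.4}

print(weights_sum_to_one(weights))                 # True
print([meets_icmje_criteria(a) for a in authors])  # [True, False]
```

Checkpoints like these make the workflow fail loudly on incomplete contributor records instead of silently producing an authorship list that violates the guidelines.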

Dimension | Reasoning | Score

Conciseness

Extremely verbose with massive redundancy: the 'When to Use' section repeats the description verbatim, 'Key Features' restates it again, and there's a separate 'When to Use This Skill' section. Multiple overlapping sections (Workflow appears twice, Quick Start has duplicate initialization code, references are listed multiple times).

1 / 3

Actionability

Provides concrete Python code examples with specific method calls and parameters, but the code references modules (scripts.authorship_credit, AuthorshipCreditGenerator) that may not exist or match the actual implementation. The Quick Start shows two different initialization patterns suggesting inconsistency.

2 / 3

Workflow Clarity

Contains workflow steps, but they are generic boilerplate ('Confirm the user objective', 'Validate that the request matches') rather than specific to authorship determination. There are no validation checkpoints for the actual authorship assignment process: missing steps like 'verify all contributors are listed' or 'confirm contribution percentages sum correctly'.

2 / 3

Progressive Disclosure

References external files (references/guide.md, references/examples/, references/api-docs/) appropriately, but the main document is bloated with redundant sections that should be consolidated. The structure exists but content organization is poor with duplicate information scattered throughout.

2 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Repository
aipoch/medical-research-skills
Reviewed

