
authorship-credit-gen

Use when determining author order on research manuscripts, assigning CRediT contributor roles for transparency, documenting individual contributions to collaborative projects, or resolving authorship disputes in multi-institutional research. Generates fair and transparent authorship assignments following ICMJE guidelines and CRediT taxonomy. Helps research teams document contributions, resolve disputes, and ensure equitable credit distribution in academic publications.


Quality: 53% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/authorship-credit-gen/SKILL.md"

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its niche in academic authorship management. It leads with an explicit 'Use when' clause containing natural trigger terms, follows with specific capabilities, and references domain-specific standards (ICMJE, CRediT) that make it highly distinctive. The description is well-structured, concise, and covers both the what and when comprehensively.

Specificity: 3 / 3
Lists multiple specific concrete actions: determining author order, assigning CRediT contributor roles, documenting individual contributions, resolving authorship disputes, and generating fair authorship assignments following ICMJE guidelines and CRediT taxonomy.

Completeness: 3 / 3
Clearly answers both 'what' (generates fair authorship assignments, documents contributions, resolves disputes, ensures equitable credit distribution) and 'when' (explicit 'Use when' clause covering author order determination, CRediT role assignment, documenting contributions, and resolving authorship disputes).

Trigger Term Quality: 3 / 3
Excellent coverage of natural terms users would say: 'author order', 'research manuscripts', 'CRediT', 'contributor roles', 'authorship disputes', 'multi-institutional research', 'ICMJE guidelines', 'academic publications', 'contributions'. These are terms researchers would naturally use when seeking this kind of help.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive niche focused specifically on academic authorship, CRediT taxonomy, and ICMJE guidelines. Very unlikely to conflict with other skills given the specialized domain of research authorship and contribution documentation.

Total: 12 / 12 (Passed)

Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate, redundant sections, and fabricated API examples that are not executable. The actual domain-specific guidance for authorship determination (ICMJE criteria, CRediT taxonomy specifics, dispute resolution procedures) is almost entirely absent—replaced by vague workflow steps and invented Python method calls. The content reads as template-generated rather than crafted for the specific task.

Suggestions

Remove all duplicated content (the description appears 3 times, 'When to Use' appears twice, 'References' appears twice) and generic boilerplate sections (Quality Checklist, Response Template, Output Requirements) that don't add domain-specific value.

Replace the fabricated Python API examples with either actual executable code from scripts/main.py or concrete, specific instructions for how to structure contribution data and apply ICMJE criteria—the actual domain knowledge Claude needs.

Add a concrete workflow with real validation steps specific to authorship determination, e.g., 'Verify each listed author meets all 4 ICMJE criteria: (1) substantial contributions to conception/design/acquisition/analysis, (2) drafting or critical revision, (3) final approval, (4) accountability agreement.'

Include actual CRediT taxonomy role definitions and concrete examples of how to map real contributions to roles, rather than assuming a tool API handles everything.
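The last two suggestions could be made concrete with a small executable sketch. The data shapes and function names below (`meets_icmje`, `validate_roles`, the `author` dict) are illustrative assumptions, not part of the skill's actual script; only the four ICMJE criteria and the 14 CRediT role names come from the published standards.

```python
# Illustrative sketch: checking the four ICMJE authorship criteria and
# validating claimed contributions against the CRediT taxonomy.
# All identifiers here are hypothetical; the skill's real script may differ.

# The 14 CRediT taxonomy roles (published role names; hyphens used for dashes).
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software",
    "Supervision", "Validation", "Visualization",
    "Writing - original draft", "Writing - review & editing",
}

# The four ICMJE authorship criteria; an author must satisfy ALL of them.
ICMJE_CRITERIA = (
    "substantial_contribution",  # conception/design or acquisition/analysis
    "drafting_or_revision",      # drafting the work or revising it critically
    "final_approval",            # final approval of the version to be published
    "accountability",            # agreement to be accountable for the work
)

def meets_icmje(author: dict) -> bool:
    """An author qualifies only if all four ICMJE criteria are met."""
    return all(author.get(c, False) for c in ICMJE_CRITERIA)

def validate_roles(roles: list[str]) -> list[str]:
    """Return any claimed roles that are not valid CRediT taxonomy roles."""
    return [r for r in roles if r not in CREDIT_ROLES]

author = {
    "name": "A. Researcher",
    "substantial_contribution": True,
    "drafting_or_revision": True,
    "final_approval": True,
    "accountability": True,
    "roles": ["Conceptualization", "Formal analysis", "Writing - original draft"],
}

print(meets_icmje(author))              # True: all four criteria met
print(validate_roles(author["roles"]))  # []: every claimed role is a CRediT role
```

Concrete checks like these would replace the invented `tool.analyze_equity()`-style calls with logic an agent can actually run and reason about.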

Conciseness: 1 / 3
Extremely verbose and repetitive. The description is copy-pasted verbatim into 'When to Use' and 'Key Features' sections. There are redundant sections (two 'When to Use' sections, two 'References' sections), generic boilerplate that adds no value (Quality Checklist, Output Requirements, Response Template), and extensive explanation of concepts Claude already knows. The skill could be reduced to a fraction of its size.

Actionability: 1 / 3
The code examples are not executable—they reference classes and methods (AuthorshipCreditGenerator, AuthorshipCreditGen) that appear to be fabricated API surfaces with no evidence they exist in the actual script. The Quick Start imports from two different modules inconsistently. The examples are essentially pseudocode dressed up as Python, with invented method signatures like `tool.analyze_equity()` and `tool.generate_contributor_statement(style='Nature')` that cannot be verified or run.

Workflow Clarity: 1 / 3
The workflow section is entirely generic ('Confirm the user objective', 'Validate that the request matches the documented scope') with no specifics about authorship determination. There are no concrete validation checkpoints for the actual domain task (e.g., verifying ICMJE criteria are met, checking that all contributors have been accounted for). The 'Example run plan' is also generic boilerplate.

Progressive Disclosure: 2 / 3
There is some structure with references to external files (references/guide.md, references/examples/, references/api-docs/) and the content is organized into sections. However, the main file is bloated with redundant sections that should have been consolidated or removed, and the references feel like placeholders rather than well-signaled navigation points.

Total: 5 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.
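One typical fix for this warning is to keep only spec-defined keys at the top level of the frontmatter and nest everything else under `metadata`. A hypothetical sketch (the exact allowed key set depends on the skill spec, and `category` here is an invented example key):

```yaml
---
name: authorship-credit-gen
description: Use when determining author order on research manuscripts...
# Unknown top-level keys (e.g. a custom `category`) move under metadata:
metadata:
  category: academic-writing
---
```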

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)
