Generate standardized author contribution statements following CRediT (Contributor Roles Taxonomy) standards. Creates formal contribution declarations for manuscripts with support for 14 contribution roles, co-first authors, corresponding authors, and multiple output formats for journal submission.
Install with Tessl CLI
npx tessl i github:aipoch/medical-research-skills --skill authorship-credit-gen
Standardized contribution statement generator that creates transparent, machine-readable author attribution following the CRediT taxonomy adopted by major scientific publishers (Nature, Science, Elsevier, PLOS).
Key Capabilities:
✅ Use this skill when:
❌ Do NOT use when:
Integration:
- manuscript-prep-assistant (author list finalization)
- grant-proposal-assistant (contributor documentation)
- blind-review-sanitizer (anonymization for submission)
- conflict-of-interest-checker (ethics compliance)

Map author contributions to 14 standardized roles:
from scripts.credit_generator import CRediTGenerator
generator = CRediTGenerator()
# Define author contributions
authors = [
{
"name": "Dr. Sarah Chen",
"orcid": "0000-0001-2345-6789",
"affiliation": "Stanford University",
"roles": ["Conceptualization", "Methodology", "Writing - Original Draft"]
},
{
"name": "Dr. Michael Rodriguez",
"orcid": "0000-0002-3456-7890",
"affiliation": "MIT",
"roles": ["Data Curation", "Formal Analysis", "Software"]
}
]
# Generate statement
statement = generator.generate(
authors=authors,
format="text",
language="en"
)
print(statement)
# Dr. Sarah Chen: Conceptualization, Methodology, Writing - Original Draft
# Dr. Michael Rodriguez: Data Curation, Formal Analysis, Software

14 CRediT Roles:
| Role | Description | Typical Contributors |
|---|---|---|
| Conceptualization | Ideas, research goals | PI, senior researchers |
| Data Curation | Data management, annotation | Data managers, bioinformaticians |
| Formal Analysis | Statistical analysis | Statisticians, data scientists |
| Funding Acquisition | Grant writing, financial support | PIs, research administrators |
| Investigation | Experiments, data collection | Lab members, research assistants |
| Methodology | Protocol development | Methods specialists, PIs |
| Project Administration | Coordination, logistics | Lab managers, PIs |
| Resources | Materials, reagents, samples | Collaborators, core facilities |
| Software | Programming, code development | Bioinformaticians, programmers |
| Supervision | Mentoring, oversight | PIs, senior scientists |
| Validation | Verification, replication | Independent validators |
| Visualization | Figures, charts, graphics | Graphic designers, authors |
| Writing - Original Draft | Initial manuscript | Lead author, writing committee |
| Writing - Review & Editing | Critical revision | All authors, editor |
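The CLI examples later in this document refer to roles by short codes (C1-C14). The official CRediT taxonomy does not assign numbers, so the mapping below is an assumption based on the order of the table above (it is consistent with the roles Dr. Chen and Dr. Rodriguez are given in both the Python and CLI examples):

```python
# Assumed mapping from the CLI shorthand codes (C1-C14) to the 14 CRediT
# roles, following the order of the table above.
CREDIT_ROLES = {
    "C1": "Conceptualization",
    "C2": "Data Curation",
    "C3": "Formal Analysis",
    "C4": "Funding Acquisition",
    "C5": "Investigation",
    "C6": "Methodology",
    "C7": "Project Administration",
    "C8": "Resources",
    "C9": "Software",
    "C10": "Supervision",
    "C11": "Validation",
    "C12": "Visualization",
    "C13": "Writing - Original Draft",
    "C14": "Writing - Review & Editing",
}

def expand_codes(codes):
    """Translate a list of shorthand codes into full CRediT role names."""
    return [CREDIT_ROLES[c] for c in codes]
```

Under this mapping, `Dr_Chen:C1,C6,C10,C13` expands to Conceptualization, Methodology, Supervision, and Writing - Original Draft.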
Handle co-first and corresponding author situations:
# Complex authorship with special designations
statement = generator.generate(
authors=authors,
co_first_authors=["Dr. Sarah Chen", "Dr. Michael Rodriguez"],
corresponding_authors=["Dr. Sarah Chen"],
co_corresponding=["Prof. James Wilson"], # Multiple corresponding
deceased_authors=["Dr. Robert Brown"], # Posthumous authorship
current_affiliation={
"Dr. Sarah Chen": "Now at Genentech"
}
)

Special Notations:
Generate statements for different journal requirements:
# Text format (most journals)
text_statement = generator.generate(authors=authors, format="text")
# XML format (Elsevier, Springer)
xml_statement = generator.generate(authors=authors, format="xml")
# JSON (machine-readable)
json_statement = generator.generate(authors=authors, format="json")
# YAML (human + machine readable)
yaml_statement = generator.generate(authors=authors, format="yaml")
# LaTeX (direct manuscript insertion)
latex_statement = generator.generate(authors=authors, format="latex")

Format Comparison:
| Format | Best For | Example Journals |
|---|---|---|
| Text | General use, readability | Nature, Science, PLOS |
| XML | Structured data, submission systems | Elsevier, Springer |
| JSON | API integration, databases | PubMed, ORCID |
| YAML | Human editing + processing | GitHub, preprints |
| LaTeX | Direct manuscript insertion | arXiv, Overleaf |
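A submission often needs several of these formats at once. Assuming the `generate(authors=..., format=...)` signature shown above, a small helper (a sketch, not part of the documented API) can produce all of them in one pass:

```python
# Sketch: generate every supported output format in one pass.
# Assumes the CRediTGenerator.generate(authors=..., format=...) signature
# shown earlier in this document.
FORMATS = ["text", "xml", "json", "yaml", "latex"]

def generate_all(generator, authors):
    """Return a {format_name: statement} dict covering every supported format."""
    return {fmt: generator.generate(authors=authors, format=fmt) for fmt in FORMATS}
```

The caller can then write each statement to the file its target journal expects.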
Check compliance with authorship standards:
# Validate against ICMJE criteria
validation = generator.validate(
authors=authors,
criteria="icmje" # ICMJE, CRediT, or institutional
)
if validation.issues:
print("⚠️ Authorship concerns:")
for issue in validation.issues:
print(f" - {issue.author}: {issue.concern}")
        print(f"    Recommendation: {issue.recommendation}")

Validation Checks:
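The checks themselves live in `validator.py`. As one illustrative example of the kind of rule such validation might apply (an assumption, not the skill's documented rule set), an ICMJE-style check could flag authors whose only listed roles carry no substantive intellectual contribution:

```python
# Sketch of one plausible ICMJE-style check: flag authors whose declared
# roles include no substantive intellectual contribution. The role split
# below is illustrative, not the skill's actual rule set.
NON_SUBSTANTIVE = {"Funding Acquisition", "Resources", "Project Administration"}

def flag_icmje_concerns(authors):
    """Return human-readable concerns for authors failing the sketch check."""
    concerns = []
    for author in authors:
        roles = set(author.get("roles", []))
        if not roles:
            concerns.append(f"{author['name']}: no roles declared")
        elif roles <= NON_SUBSTANTIVE:
            concerns.append(f"{author['name']}: no substantive contribution listed")
    return concerns
```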
Scenario: 4-author research paper with clear roles.
# Interactive mode for clarity
python scripts/main.py --interactive
# Or command line
python scripts/main.py \
--authors "Dr_Chen:C1,C6,C10,C13|Dr_Rodriguez:C2,C3,C9,C14|Dr_Kim:C5,C7,C11|Student_Wang:C5,C12,C14" \
--corresponding "Dr_Chen" \
--format text \
  --output contribution.txt

Typical Distribution:
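The `--authors` argument packs one author per `|`-separated segment, each in `name:code,code,…` form. A minimal parser for that syntax (an illustration of the format, not the skill's own `parser.py`) might look like:

```python
def parse_authors_arg(spec):
    """Parse a '--authors' string like 'Dr_Chen:C1,C6|Dr_Kim:C5'.

    Returns a list of {'name': ..., 'codes': [...]} dicts, one per author.
    """
    authors = []
    for segment in spec.split("|"):
        name, codes = segment.split(":", 1)
        authors.append({"name": name, "codes": codes.split(",")})
    return authors
```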
Scenario: 50+ author consortium paper (e.g., GWAS study).
# Batch processing for large groups
contributions = generator.process_consortium(
members_file="consortium_members.csv",
working_groups={
"Analysis": ["Formal Analysis", "Software"],
"Writing": ["Writing - Original Draft", "Writing - Review & Editing"],
"Steering": ["Conceptualization", "Supervision", "Project Administration"]
},
group_authors=True # Group by contribution type
)

Consortium Best Practices:
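Conceptually, `process_consortium` expands each member's working group into that group's fixed role set. A sketch of that expansion (assumed behavior, not the actual implementation; the CSV is assumed to carry `name` and `group` columns) is:

```python
import csv
import io

def assign_group_roles(members_csv, working_groups):
    """Expand each member's working group into its CRediT roles.

    members_csv: CSV text with (assumed) 'name' and 'group' columns.
    working_groups: {group_name: [role, ...]} as in the example above.
    """
    members = []
    for row in csv.DictReader(io.StringIO(members_csv)):
        members.append({
            "name": row["name"],
            "roles": working_groups.get(row["group"], []),
        })
    return members
```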
Scenario: Pharma company + university collaboration.
# Handle sensitive industry contributions
statement = generator.generate(
authors=authors,
industry_partners=["Pfizer", "Genentech"],
funding_disclosure="This work was funded by Pfizer Inc. (grant #12345)",
employee_authors=["Dr_Smith@Pfizer"],
conflict_notes="Dr. Smith is an employee of Pfizer Inc."
)

Industry Considerations:
Scenario: Update contribution statement from preprint to final submission.
# Read preprint version
python scripts/main.py \
--input preprint_contribution.txt \
--format json \
--output contribution_structure.json
# Modify for journal (add new analyses, revision roles)
python scripts/main.py \
--input contribution_structure.json \
--add-roles "Dr_Chen:Validation|Reviewer_A:C14" \
--format xml \
  --output journal_credit.xml

From raw contributions to journal submission:
# Step 1: Collect author inputs via form/survey
python scripts/main.py \
--collect-contributions \
--template survey_template.json \
--output raw_responses.json
# Step 2: Validate and flag issues
python scripts/main.py \
--input raw_responses.json \
--validate \
--criteria icmje \
--output validation_report.txt
# Step 3: Generate consensus version
python scripts/main.py \
--input raw_responses.json \
--resolve-conflicts \
--output consensus_contributions.json
# Step 4: Create multiple format outputs
python scripts/main.py \
--input consensus_contributions.json \
--format text --output credit_statement.txt
python scripts/main.py \
--input consensus_contributions.json \
--format xml --output credit.xml
python scripts/main.py \
--input consensus_contributions.json \
--format json --output credit.json
# Step 5: Generate visual summary
python scripts/main.py \
--input consensus_contributions.json \
--visualize \
--type matrix \
  --output contribution_matrix.png

Python API:
from scripts.credit_generator import CRediTGenerator
from scripts.validator import AuthorshipValidator
from scripts.visualizer import ContributionVisualizer
# Initialize
generator = CRediTGenerator()
validator = AuthorshipValidator()
visualizer = ContributionVisualizer()
# Step 1: Define author contributions
authors = [
{
"name": "Dr. Sarah Chen",
"orcid": "0000-0001-2345-6789",
"affiliation": "Stanford University",
"roles": ["Conceptualization", "Methodology", "Supervision"]
},
# ... more authors
]
# Step 2: Validate
validation = validator.validate_icmje(authors)
if validation.concerns:
print("⚠️ Authorship issues detected:")
for concern in validation.concerns:
print(f" {concern}")
# Step 3: Generate statements
text = generator.generate_text(authors)
xml = generator.generate_xml(authors)
json_data = generator.generate_json(authors)
# Step 4: Create visual summary
matrix = visualizer.create_contribution_matrix(authors)
matrix.save("contribution_matrix.png")
# Step 5: Export complete package
generator.export_package(
authors=authors,
formats=["text", "xml", "json"],
visuals=["matrix", "venn"],
output_dir="contribution_package/"
)
print("✅ Contribution package generated")
print(f" Text statement: contribution_package/statement.txt")
print(f" XML for submission: contribution_package/credit.xml")
print(f"   Visual matrix: contribution_package/matrix.png")

Content Accuracy:
CRediT Compliance:
ICMJE Compliance:
Before Submission:
Role Assignment Issues:
❌ Role inflation → Everyone has every role
❌ Ghost authors → Significant contributors not listed
❌ Gift authorship → Minimal contributors included
❌ Vague roles → "Writing" without specifying which type
Communication Issues:
❌ No author discussion → PI decides unilaterally
❌ Last-minute changes → Adding authors after submission
❌ Disputes unresolved → Disagreements left lingering
Technical Issues:
❌ Inconsistent formatting → Mixed styles in same paper
❌ Missing ORCIDs → Incomplete author metadata
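Some of these anti-patterns can be caught mechanically. For instance, "role inflation" shows up as every author claiming nearly all 14 roles; a simple heuristic (an illustration, with an arbitrary threshold to tune per field and team size) flags such authors:

```python
def detect_role_inflation(authors, max_roles=8):
    """Flag authors claiming an implausibly large share of the 14 CRediT roles.

    The max_roles threshold is an arbitrary illustration, not a standard.
    """
    return [a["name"] for a in authors if len(a.get("roles", [])) > max_roles]
```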
Available in references/ directory:
- credit_taxonomy_official.md - Official CRediT taxonomy documentation
- icmje_criteria.md - ICMJE authorship recommendations
- journal_requirements.md - Specific requirements by publisher
- authorship_ethics.md - COPE and institutional guidelines
- consensus_templates.md - Templates for author discussions
- dispute_resolution.md - Handling authorship conflicts

Located in scripts/ directory:
- main.py - CLI interface for contribution generation
- credit_generator.py - Core CRediT statement creation
- validator.py - ICMJE and ethical compliance checking
- parser.py - Parse various input formats
- exporter.py - Multi-format output generation
- visualizer.py - Contribution matrices and charts
- consensus.py - Facilitate author agreement

⚖️ Ethical Note: Transparent authorship is fundamental to scientific integrity. This tool facilitates documentation, but it cannot replace honest discussion among collaborators about who contributed what. When in doubt, err on the side of inclusion and transparency.