Transforms research code into publication-ready, reproducible workflows. Adds documentation, implements error handling, creates environment specifications, and ensures computational reproducibility for scientific publications.
Refactor research code for publication by adding documentation, parameterizing hardcoded values, pinning dependencies, and validating deterministic outputs.
python -m py_compile scripts/main.py
python scripts/main.py --help

Fallback template: If scripts/main.py fails or inputs are missing, report: (a) which step failed, (b) what partial output is still valid, (c) the manual equivalent command or reasoning path.
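The three-part fallback template above can be sketched as a small helper. This is a minimal illustration, not part of the skill's actual API; the function name, signature, and example values are all hypothetical.

```python
# Hypothetical sketch of the fallback template: when a pipeline step fails,
# report (a) the failed step, (b) partial output still valid, and
# (c) the manual equivalent command. Names are illustrative only.

def fallback_report(failed_step: str, valid_outputs: list[str], manual_command: str) -> str:
    """Build the three-part failure report described in the fallback template."""
    lines = [
        f"(a) Failed step: {failed_step}",
        "(b) Partial output still valid: " + (", ".join(valid_outputs) or "none"),
        f"(c) Manual equivalent: {manual_command}",
    ]
    return "\n".join(lines)

# Example values are made up for illustration.
print(fallback_report(
    failed_step="Step 3 - Environment",
    valid_outputs=["refactored/analysis.py"],
    manual_command="pip freeze > requirements.txt",
))
```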
| Parameter | Type | Required | Description |
|---|---|---|---|
| --input | string | Yes | Source code file or directory to refactor |
| --output | string | Yes | Output directory for refactored code |
| --language | string | No | Language hint: python or r (default: auto-detect) |
| --template | string | No | Journal template: nature, science, elife |
| --project-name | string | No | Project name for README and environment files |
python scripts/main.py --input analysis.py --output refactored/ --language python
python scripts/main.py --input src/ --output pub_ready/ --template nature --project-name my-study

Step 1 — Analyze: Identify reproducibility issues (missing docstrings, hardcoded paths, missing seeds, bare except:, unpinned imports, magic numbers).
Step 2 — Refactor: Apply docstrings, parameterize paths via argparse, set random seeds (SEED = 42), add structured error handling with logging.
Step 3 — Environment: Generate requirements.txt via pipreqs or environment.yml with pinned versions. Verify clean install in a fresh venv.
Step 4 — Validate: Run pipeline twice, diff outputs, confirm checksums match pre-refactor baseline. Run pytest tests/ -v.
→ Full patterns and templates: references/guide.md
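The Step 4 determinism check (run twice, compare checksums) can be sketched as follows; the helper names and directory layout are assumptions for illustration, not functions the skill actually ships.

```python
# Sketch of the Step 4 validation: hash every output file from two pipeline
# runs and confirm the checksums match. Helper names are hypothetical.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file's bytes, used to compare runs byte-for-byte."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def outputs_match(run_a: Path, run_b: Path) -> bool:
    """True if both run directories hold identical files under identical names."""
    hashes_a = {p.name: checksum(p) for p in sorted(run_a.glob("*")) if p.is_file()}
    hashes_b = {p.name: checksum(p) for p in sorted(run_b.glob("*")) if p.is_file()}
    return hashes_a == hashes_b
```

Comparing checksums rather than diffing file contents makes the check cheap even for large outputs, and a mismatch pinpoints exactly which file diverged.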
Every response must make the following explicit: what the skill accepts, what is out of scope, and how failures are reported.
This skill accepts: research code files (Python/R scripts, notebooks, analysis pipelines) submitted for reproducibility improvement.
If the request does not involve refactoring existing research code — for example, asking to write new code from scratch, debug unrelated software, or perform statistical analysis — do not proceed. Instead respond:
"code-refactor-for-reproducibility is designed to improve reproducibility of existing research code. Your request appears to be outside this scope. Please provide source code files to refactor, or use a more appropriate skill for your task."

If scripts/main.py fails, report the failure point, summarize what can still be completed safely, and provide a manual fallback.