Generates the "Results" section for meta-analysis sensitivity analysis based on statistical tables and titles. Use when the user wants to describe sensitivity analysis results or format sensitivity tables for a meta-analysis paper.
npx tessl skill review --optimize ./scientific-skills/Academic Writing/meta-results-sensitivity-analysis/SKILL.md

Use this skill when:

- The user wants to describe sensitivity analysis results for a meta-analysis paper.
- The user needs to format a sensitivity table for the "Results" section.
Script: scripts/validate_skill.py.

- Python: 3.10+ (repository baseline for current packaged skills).
- Third-party packages: not explicitly version-pinned in this skill package. Add pinned versions if this skill needs stricter environment control.

See ## Usage above for related details.
cd "20260316/scientific-skills/Academic Writing/meta-results-sensitivity-analysis"
python -m py_compile scripts/validate_skill.py
python scripts/validate_skill.py --help

Example run plan:
1. Check the CONFIG block or documented parameters if the script uses fixed settings.
2. Run python scripts/validate_skill.py with the validated inputs.

See ## Workflow above for related details.
The entry point is scripts/validate_skill.py. Run this minimal command first to verify the supported execution path:
python scripts/validate_skill.py --help

This skill generates a descriptive "Results" section for meta-analysis sensitivity analysis. It processes statistical tables (Leave-One-Out method), generates a textual description using an LLM, and formats the output with proper table citations and legends.
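For context, the Leave-One-Out method re-pools the effect size with each study omitted in turn, so the input table typically has one row per excluded study. A minimal illustration of how such a table is produced, using a fixed-effect inverse-variance pool and hypothetical study data (the study names and values are invented for the sketch):

```python
# Hypothetical study effect sizes (log odds ratios) and their variances.
studies = {
    "Smith 2019":  (0.42, 0.04),
    "Lee 2020":    (0.35, 0.09),
    "Chen 2021":   (0.58, 0.05),
    "Garcia 2022": (0.10, 0.06),
}

def pooled_effect(data):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = {k: 1.0 / var for k, (_, var) in data.items()}
    total = sum(weights.values())
    return sum(weights[k] * data[k][0] for k in data) / total

# Leave-One-Out: re-pool the estimate with each study removed in turn.
loo_rows = []
for omitted in studies:
    subset = {k: v for k, v in studies.items() if k != omitted}
    loo_rows.append((omitted, round(pooled_effect(subset), 3)))

for name, est in loo_rows:
    print(f"Omitting {name}: pooled estimate = {est}")
```

If no single omission moves the pooled estimate materially, the generated "Results" text can report that the findings are robust.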
The generated text cites the table (e.g., (Table 5)), and the table is formatted with a standard legend.

Inputs:

- title (optional): Title of the meta-analysis.
- sensitivity_table (optional): The raw statistical table data.
- language (required): Output language (Chinese or English).
- outcome_name (optional): Name of the outcome indicator.

```python
from scripts.format_result import format_sensitivity_result

# 1. LLM generates the description (simulated)
# description = llm.generate(prompt="Describe the sensitivity table...", context=inputs)

# 2. Script formats the final result
# final_output = format_sensitivity_result(
#     text=description,
#     table_data=inputs['sensitivity_table'],
#     language=inputs['language'],
# )
```

Write the result to meta_results_sensitivity_analysis_result.md unless the skill documentation defines a better convention.

Run this minimal verification path before full execution when possible:
No local script validation step is required for this skill.

Expected output format:
- Result file: meta_results_sensitivity_analysis_result.md
- Validation summary: PASS/FAIL with brief notes
- Assumptions: explicit list, if any
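The workflow calls format_sensitivity_result from scripts/format_result, which is not reproduced here. As a rough sketch of what such a formatter might do (the signature, the fixed "Table 5" label, and the legend wording are assumptions for illustration, not the packaged implementation):

```python
def format_sensitivity_result(text, table_data, language="English"):
    """Hypothetical formatter: append a cited table with a standard legend.

    Sketch of the behavior described above, NOT the actual
    scripts/format_result implementation.
    """
    table_label = "表5" if language == "Chinese" else "Table 5"
    legend = ("图例:留一法敏感性分析。" if language == "Chinese"
              else "Legend: Leave-One-Out sensitivity analysis.")
    # Render the table rows as a simple Markdown table.
    header, *rows = table_data
    md = "| " + " | ".join(header) + " |\n"
    md += "|" + "---|" * len(header) + "\n"
    for row in rows:
        md += "| " + " | ".join(str(cell) for cell in row) + " |\n"
    return f"{text} ({table_label})\n\n{md}\n{legend}\n"

example = format_sensitivity_result(
    text="Omitting any single study did not materially change the pooled estimate",
    table_data=[("Omitted study", "Estimate", "95% CI"),
                ("Smith 2019", 0.34, "0.21-0.47")],
    language="English",
)
print(example)
```

The real script may take different parameters or produce a different layout; check scripts/format_result before relying on this shape.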