labarchive-integration

Converts LabArchives notebook data, entry metadata, and authorized ELN exports into manuscript-ready academic writing outputs such as Methods sections, data-availability statements, reproducibility appendices, experiment timelines, and submission support notes. Optional bundled scripts can be used to collect or validate source notebook data before writing.

Overall score: 77

Quality

72%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/Academic Writing/labarchive-integration/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and distinctiveness, clearly naming the input sources (LabArchives, ELN exports) and multiple concrete output types (Methods sections, data-availability statements, etc.). The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. The trigger terms are naturally phrased and domain-appropriate.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user mentions LabArchives, electronic lab notebooks, ELN exports, or needs to convert notebook data into manuscript sections.'
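Applied to this skill's frontmatter, the suggested clause might look like the sketch below. The field names follow the common SKILL.md frontmatter convention and the wording is illustrative, not the skill's actual frontmatter:

```yaml
---
name: labarchive-integration
description: >
  Converts LabArchives notebook data, entry metadata, and authorized ELN
  exports into manuscript-ready academic writing outputs such as Methods
  sections and data-availability statements. Use when the user mentions
  LabArchives, electronic lab notebooks, ELN exports, or needs to convert
  notebook data into manuscript sections.
---
```

Keeping the 'Use when...' sentence inside the same description field (rather than a separate key) avoids adding nonstandard frontmatter keys.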

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: converting notebook data into Methods sections, data-availability statements, reproducibility appendices, experiment timelines, and submission support notes. Also mentions bundled scripts for collecting/validating source data.

3 / 3

Completeness

The 'what' is thoroughly covered with specific outputs and input types. However, there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill, which per the rubric caps completeness at 2.

2 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'LabArchives', 'notebook', 'ELN', 'manuscript', 'Methods sections', 'data-availability statements', 'reproducibility', 'experiment timelines', 'submission'. These cover the domain well and match how researchers would phrase requests.

3 / 3

Distinctiveness Conflict Risk

Highly distinctive due to the specific combination of LabArchives/ELN as input source and manuscript-ready academic writing as output. This is a clear niche unlikely to conflict with general writing or general data processing skills.

3 / 3

Total: 11 / 12

Passed

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with a clear workflow, strong safety boundaries, and explicit validation checkpoints. Its main weaknesses are the lack of concrete input/output examples for the writing deliverables and moderate verbosity in the output contract specifications that could be condensed or offloaded to a referenced file. The refusal contract and completion checklist are strong additions.

Suggestions

Add at least one concrete example showing a sample notebook input and the corresponding Methods Draft or Data Availability Statement output, so Claude has a clear model to follow.

Consider moving the detailed output contracts (Outputs A–D) into a referenced file like assets/output_contracts.md to reduce the main skill's token footprint while keeping the overview lean.
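In the main SKILL.md, the referenced-file approach could be as short as the sketch below. The assets/output_contracts.md path comes from the suggestion above; the section heading and surrounding wording are hypothetical:

```markdown
## Output contracts

Detailed structural contracts for Outputs A–D are defined in
[assets/output_contracts.md](assets/output_contracts.md). Load that file
only when generating one of those deliverables; the main workflow below
does not depend on its contents.
```

This keeps the per-deliverable detail out of the always-loaded context while preserving a single authoritative definition of each contract.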

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some sections that could be tightened. The 'When to Use' and 'When Not to Use' sections have some overlap with the refusal contract, and the output contracts repeat structural patterns that could be condensed. However, it avoids explaining concepts Claude already knows.

2 / 3

Actionability

The skill provides concrete script commands and a clear refusal template, but the core writing guidance remains somewhat abstract—it describes what outputs 'must include' without providing concrete examples of actual generated text. No example input/output pairs are given for any of the five deliverables.

2 / 3

Workflow Clarity

The five-step workflow is clearly sequenced with explicit validation checkpoints: authorization check at step 1, dry-run before live execution at step 2, a final safety pass at step 5, and a well-defined refusal/recovery contract for when the workflow cannot proceed. The feedback loop for script failures is also addressed.

3 / 3

Progressive Disclosure

The skill references external files (assets/writing_outputs_template.md, bundled scripts) which is good progressive disclosure, but the main file itself is quite long with detailed output contracts that could potentially be split into a referenced template or appendix. The inline detail for all five output types makes the document heavier than necessary.

2 / 3

Total: 9 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
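A common fix for this warning is to nest nonstandard keys under a metadata block instead of leaving them at the top level of the frontmatter. The report does not show which keys triggered the warning, so the key and value below are hypothetical:

```yaml
---
name: labarchive-integration
description: Converts LabArchives notebook data into manuscript-ready outputs.
metadata:
  # hypothetical key, shown only to illustrate the relocation
  repository: aipoch/medical-research-skills
---
```

Only keys defined by the skill spec should remain at the top level; everything else moves under metadata, which validators treat as free-form.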

Total: 10 / 11

Passed

Repository: aipoch/medical-research-skills (Reviewed)
