
denario

Multiagent AI system for scientific research assistance that automates research workflows from data analysis to publication. This skill should be used when generating research ideas from datasets, developing research methodologies, executing computational experiments, performing literature searches, or generating publication-ready papers in LaTeX format. Supports end-to-end research pipelines with customizable agent orchestration.

83

Quality: 75% (Does it follow best practices?)

Impact: 100%

2.77x average score across 3 eval scenarios

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/denario/SKILL.md

Quality

Discovery

77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is fairly strong in specifying concrete capabilities and providing explicit 'when to use' guidance. Its main weaknesses are in trigger term coverage, where it could include more natural user phrasings, and in distinctiveness, where several of its listed capabilities could overlap with other specialized skills. The description is well-structured but could benefit from more natural language keywords that users would actually type.

Suggestions

Add more natural user-facing trigger terms like 'write a paper', 'scientific paper', 'academic writing', '.tex files', 'run experiments', 'analyze my dataset' to improve matching with how users actually phrase requests.

Sharpen distinctiveness by emphasizing the end-to-end pipeline aspect more clearly and differentiating from standalone data analysis or writing skills, e.g., 'Use this skill specifically when the user needs a multi-step research pipeline rather than standalone analysis or writing assistance.'
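As a sketch, the suggested trigger terms and the pipeline-vs-standalone distinction could be folded into the skill's description field roughly like this. The frontmatter field names (`name`, `description`) follow common SKILL.md conventions and have not been verified against this repository; the wording is illustrative only:

```yaml
---
name: denario
description: >
  End-to-end multiagent research pipeline: generate research ideas from
  datasets, develop methodologies, run computational experiments, search
  the literature, and write publication-ready LaTeX (.tex) papers. Use
  when the user asks to "analyze my dataset", "run experiments", or
  "write a scientific paper" as one multi-step research workflow, rather
  than for standalone data analysis or writing assistance.
---
```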

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: generating research ideas from datasets, developing research methodologies, executing computational experiments, performing literature searches, generating publication-ready papers in LaTeX format, and customizable agent orchestration.

3 / 3

Completeness

Clearly answers both 'what' (automates research workflows from data analysis to publication with specific capabilities listed) and 'when' (explicitly states 'should be used when generating research ideas from datasets, developing research methodologies, executing computational experiments, performing literature searches, or generating publication-ready papers').

3 / 3

Trigger Term Quality

Includes some relevant keywords like 'research ideas', 'data analysis', 'literature searches', 'LaTeX', and 'publication', but misses common user variations like 'write a paper', 'analyze my data', 'run experiments', 'scientific paper', 'academic writing', or file extensions like '.tex'.

2 / 3

Distinctiveness Conflict Risk

While the scientific research focus and LaTeX output are somewhat distinctive, terms like 'data analysis', 'literature searches', and 'research methodologies' could overlap with general data analysis skills, literature review tools, or writing assistance skills. The 'multiagent AI system' framing adds some distinction but the individual capabilities could conflict with more specialized skills.

2 / 3

Total: 10 / 12

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides good actionable guidance with executable code examples and a well-structured progressive disclosure pattern. Its main weaknesses are moderate verbosity (redundant workflow examples, unnecessary 'When to Use' section, vague feature descriptions) and lack of validation checkpoints between pipeline stages, which is important for a multi-step research automation system.

Suggestions

Remove the 'When to Use This Skill' section and the 'Advanced Features' bullet list—these are either inferrable from context or too vague to be actionable. Consolidate the end-to-end example with the step-by-step sections to eliminate duplication.

Add validation/verification checkpoints between pipeline stages (e.g., 'Review den.idea before proceeding to get_method()', 'Check generated figures in ./research_project/results/ before paper generation') to catch errors early in this multi-step workflow.
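The checkpoint suggestion above can be sketched as a small helper. The names `den.idea`, `get_method()`, and the `./research_project/results/` path are taken from the suggestion itself; the `checkpoint` function and the specific failure conditions are hypothetical, not part of the Denario API:

```python
from pathlib import Path


def checkpoint(name, condition, detail=""):
    """Stop the pipeline early if a stage produced unusable output."""
    if not condition:
        raise RuntimeError(f"checkpoint '{name}' failed: {detail}")
    return True


# Hypothetical placement between Denario stages (den, den.idea, and
# get_method() come from the suggestion above; not verified against
# the real API):
#
#   den.get_idea()
#   checkpoint("idea", bool(den.idea.strip()), "empty idea text")
#   den.get_method()
#   ...
#   figures = list(Path("./research_project/results").glob("*.png"))
#   checkpoint("results", len(figures) > 0, "no figures generated")
```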

Dimension | Reasoning | Score

Conciseness

The skill includes some unnecessary explanations (e.g., 'Overview' section restating what Denario is, 'When to Use This Skill' section listing things Claude could infer, 'Advanced Features' bullet points that are vague marketing-speak). The end-to-end workflow example largely duplicates the step-by-step sections above it. Could be tightened significantly.

2 / 3

Actionability

Provides fully executable Python code for each pipeline stage, concrete CLI commands for installation and GUI launch, and complete end-to-end workflow examples that are copy-paste ready. The API usage is specific with real method calls and parameters.

3 / 3

Workflow Clarity

The five-stage pipeline is clearly sequenced and easy to follow, but there are no validation checkpoints or error recovery steps between stages. For a system that executes computational experiments and generates papers, there should be verification steps (e.g., checking idea quality before proceeding to methodology, validating results before paper generation).

2 / 3

Progressive Disclosure

Well-structured with a clear overview in the main file and one-level-deep references to installation.md, llm_configuration.md, research_pipeline.md, and examples.md. Content is appropriately split between the main skill and reference files, with clear signaling of what each reference contains.

3 / 3

Total: 10 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

metadata_version

'metadata.version' is missing

Warning

Total: 10 / 11

Passed
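A minimal sketch of clearing the `metadata_version` warning, assuming SKILL.md uses YAML frontmatter and that the key path matches the validation message above; the version value is hypothetical:

```yaml
---
metadata:
  version: 0.1.0  # hypothetical value; the validation message only says the key is missing
---
```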

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
