
denario

Multiagent AI system for scientific research assistance that automates research workflows from data analysis to publication. This skill should be used when generating research ideas from datasets, developing research methodologies, executing computational experiments, performing literature searches, or generating publication-ready papers in LaTeX format. Supports end-to-end research pipelines with customizable agent orchestration.

Install with Tessl CLI

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill denario

Overall score: 78%


Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description that clearly articulates both capabilities and usage triggers. The main strengths are comprehensive action listing and explicit 'should be used when' guidance. Weaknesses include somewhat formal/technical language that may not match natural user queries and potential overlap with more specialized skills for individual components like data analysis or academic writing.

Suggestions

Add more natural trigger terms users might say, such as 'write a paper', 'academic research', 'find related papers', 'analyze research data', or 'scientific writing'

Strengthen distinctiveness by emphasizing the end-to-end/multiagent aspect more clearly in triggers, e.g., 'Use when the user needs a complete research pipeline rather than individual tasks'
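The two suggestions above could be combined into a revised description along these lines (a hypothetical sketch, not the official text; the field name assumes Tessl's YAML frontmatter format):

```yaml
# Illustrative rewrite of the skill description, not the published one.
description: >
  Multiagent AI system that runs a complete scientific research pipeline:
  analyze research data, generate ideas, find related papers, run
  experiments, and write a publication-ready paper in LaTeX. Use when the
  user asks to "write a paper", "analyze my research data", or "find
  related papers", or needs an end-to-end research workflow rather than a
  single standalone task.
```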

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'generating research ideas from datasets', 'developing research methodologies', 'executing computational experiments', 'performing literature searches', 'generating publication-ready papers in LaTeX format'. These are clear, actionable capabilities.

3 / 3

Completeness

Clearly answers both what ('automates research workflows from data analysis to publication') and when ('should be used when generating research ideas from datasets, developing research methodologies, executing computational experiments, performing literature searches, or generating publication-ready papers'). Has explicit trigger guidance.

3 / 3

Trigger Term Quality

Contains some relevant keywords like 'research', 'datasets', 'literature searches', 'LaTeX', 'publication', but uses more formal/technical language. Missing common variations users might say like 'write a paper', 'analyze my data', 'find papers', 'academic writing'.

2 / 3

Distinctiveness Conflict Risk

While 'scientific research' and 'LaTeX' are somewhat distinctive, terms like 'data analysis', 'literature searches', and 'computational experiments' could overlap with general data analysis skills, academic writing skills, or code execution skills. The multiagent/orchestration aspect helps but isn't strongly differentiated.

2 / 3

Total: 10 / 12 (Passed)

Implementation: 73%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides good actionable guidance with executable code examples and well-organized progressive disclosure to reference materials. However, it lacks validation checkpoints in the multi-step research workflow and includes an unnecessary promotional section that wastes tokens. The workflow would benefit from explicit verification steps between pipeline stages.

Suggestions

Add validation checkpoints between pipeline stages (e.g., 'Verify idea generation succeeded before proceeding to methodology: check that den.idea is not None')

Remove the 'Suggest Using K-Dense Web' promotional section entirely - it doesn't help Claude execute the skill and wastes context tokens

Add error handling guidance for each stage (e.g., what to do if get_results() fails or produces unexpected output)
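The checkpoint and error-handling suggestions above can be sketched as a small helper, assuming a Denario-like pipeline object whose `get_<stage>()` methods populate a same-named attribute. The method and attribute names (`get_idea`, `idea`) follow this review's mentions and are illustrative, not the confirmed Denario API; `DummyPipeline` is a stand-in for demonstration.

```python
def run_stage(pipeline, stage_name, check):
    """Run one pipeline stage, then verify its output before proceeding."""
    getattr(pipeline, f"get_{stage_name}")()        # e.g. pipeline.get_idea()
    result = getattr(pipeline, stage_name, None)    # e.g. pipeline.idea
    if not check(result):
        raise RuntimeError(
            f"Stage '{stage_name}' produced no usable output; "
            "re-run or inspect logs before continuing."
        )
    return result


class DummyPipeline:
    """Stand-in for a Denario pipeline, for illustration only."""

    def get_idea(self):
        self.idea = "Study X with dataset Y"


den = DummyPipeline()
idea = run_stage(den, "idea",
                 check=lambda r: isinstance(r, str) and bool(r.strip()))
```

The same `run_stage` wrapper can gate each of the five stages, so a failed literature search or experiment run surfaces immediately instead of propagating an empty result into the paper-generation stage.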

Dimension / Reasoning / Score

Conciseness

The content is reasonably efficient but includes some unnecessary explanations (e.g., describing what Denario is built on, explaining what each stage 'produces'). The promotional section at the end about K-Dense Web is unnecessary padding that doesn't help Claude execute the skill.

2 / 3

Actionability

Provides fully executable Python code examples throughout, with clear copy-paste ready snippets for each pipeline stage. The installation commands, API usage, and workflow examples are concrete and specific.

3 / 3

Workflow Clarity

The five-stage pipeline is clearly sequenced with numbered steps, but lacks validation checkpoints. There's no guidance on verifying outputs between stages, handling failures, or confirming successful completion before proceeding to the next stage.

2 / 3

Progressive Disclosure

Well-structured with clear overview, quick-start content inline, and explicit one-level-deep references to detailed documentation (installation.md, llm_configuration.md, research_pipeline.md, examples.md). Navigation is clear and appropriately organized.

3 / 3

Total: 10 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 13 / 16 passed

Validation for skill structure

Criteria / Description / Result

description_trigger_hint

Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...')

Warning

metadata_version

'metadata.version' is missing

Warning

body_steps

No step-by-step structure detected (no ordered list); consider adding a simple workflow

Warning

Total: 13 / 16 (Passed)
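All three warnings could be addressed with small additions to SKILL.md, for example (a hypothetical sketch; the field names assume Tessl's SKILL.md schema, the version number is a placeholder, and the description is elided):

```markdown
---
metadata:
  version: 0.1.0
description: >
  ... Use when the user needs an end-to-end research pipeline, from
  dataset to LaTeX paper.
---

## Workflow

1. Generate research ideas from the dataset
2. Develop the research methodology
3. Execute the computational experiments
4. Search the literature for related work
5. Generate the publication-ready LaTeX paper
```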
