
graph-interpretation

Use when interpreting scientific graphs and charts, explaining data visualizations for research presentations, writing figure captions for publications, or analyzing trends in clinical research data. Converts complex visual data into clear, accurate explanations for academic papers, clinical reports, and public presentations.


Quality: 67% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/graph-interpretation/SKILL.md"

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its niche in scientific data visualization interpretation and academic communication. It opens with an explicit 'Use when' clause containing natural trigger terms, lists specific concrete actions, and identifies distinct output contexts (academic papers, clinical reports, presentations). The description is concise yet comprehensive, making it easy for Claude to select appropriately.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: interpreting scientific graphs/charts, explaining data visualizations, writing figure captions, analyzing trends in clinical research data, and converting visual data into explanations for academic papers/clinical reports/presentations. | 3 / 3 |
| Completeness | Explicitly answers both 'what' (converts complex visual data into clear explanations for academic papers, clinical reports, presentations) and 'when' (starts with 'Use when interpreting scientific graphs and charts, explaining data visualizations...'). The 'Use when' clause is explicit and detailed. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'scientific graphs', 'charts', 'data visualizations', 'research presentations', 'figure captions', 'publications', 'clinical research data', 'academic papers', 'clinical reports'. These cover a good range of terms a researcher would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly carved-out niche at the intersection of scientific/clinical data visualization interpretation and academic writing. The combination of scientific graphs, figure captions, clinical research data, and academic publications makes this highly distinctive and unlikely to conflict with generic data analysis or general writing skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate (error handling, input validation, response template sections) that add no graph-interpretation-specific value and consume significant tokens. While it contains useful domain knowledge (graph types table, audience-specific explanation templates, common pitfalls), the core code examples appear non-executable and the workflow is abstract rather than concrete. The skill would benefit greatly from cutting boilerplate, fixing code examples, and splitting reference material into separate files.

Suggestions

- Remove or drastically reduce generic boilerplate sections (Error Handling, Input Validation, Response Template, Output Requirements, Implementation Details) that repeat standard practices Claude already knows; this could cut 40%+ of the content.
- Fix the code examples to be actually executable, or clearly mark them as illustrative API designs; the syntax error `clinical Utility=True` and references to non-existent modules undermine credibility.
- Eliminate the repeated description text that appears verbatim in the 'When to Use' and 'Key Features' sections.
- Move the detailed graph types table, audience templates, and statistical reporting standards into separate reference files and link to them from a concise overview.
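To illustrate the second suggestion, a minimal sketch of what an executable example could look like follows. The `GraphInterpretation` dataclass and `interpret_trend` function here are hypothetical stand-ins, not the skill's actual API; the point is that the broken keyword `clinical Utility=True` becomes a valid identifier, `clinical_utility=True`, and the snippet runs as written.

```python
from dataclasses import dataclass

@dataclass
class GraphInterpretation:
    """Structured summary of a figure, usable as a caption draft."""
    graph_type: str
    trend: str
    clinical_utility: bool  # valid identifier; `clinical Utility` is a syntax error

def interpret_trend(values: list[float], graph_type: str = "line",
                    clinical_utility: bool = True) -> GraphInterpretation:
    """Classify the overall trend of a plotted numeric series.

    A real skill would parse the underlying data; this sketch just
    compares the first and last points to stay self-contained.
    """
    if values[-1] > values[0]:
        trend = "increasing"
    elif values[-1] < values[0]:
        trend = "decreasing"
    else:
        trend = "stable"
    return GraphInterpretation(graph_type, trend, clinical_utility)

print(interpret_trend([1.2, 1.8, 2.4]).trend)  # increasing
```

Even an intentionally simplified example like this is copy-paste runnable, which is the bar the Actionability review applies below.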

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose and repetitive. The 'When to Use' section repeats the description verbatim. 'Key Features' restates the description again. Generic boilerplate sections (Error Handling, Input Validation, Response Template, Output Requirements) add significant bulk without graph-interpretation-specific value. The skill explains concepts Claude already knows and includes extensive template code that appears non-executable. | 1 / 3 |
| Actionability | Contains code examples with specific API calls and CLI commands, but none appear to be actually executable: they reference modules like `scripts.graph_interpreter` and classes like `GraphInterpreter` that are likely fictional, and the code has syntax errors (e.g., `clinical Utility=True`). The statistical output structures and audience-specific templates provide useful concrete guidance, but the core execution path is not copy-paste ready. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a 5-step sequence, but it is generic and abstract ('Confirm the user objective', 'Validate that the request matches'). The Quality Checklist provides good before/during/after checkpoints for interpretation, but there are no explicit validation steps tied to the code execution path, and the workflow lacks concrete feedback loops for error recovery in the actual graph interpretation process. | 2 / 3 |
| Progressive Disclosure | References a `references/` directory and `scripts/main.py` but doesn't clearly signal what's in those files. The document itself is monolithic: over 250 lines of inline content that could be split into separate files (e.g., graph types reference, audience templates, statistical reporting standards). Some structure exists with headers, but the content is not well layered for progressive discovery. | 2 / 3 |
| Total | | 7 / 12 |

Passed
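The split recommended under Progressive Disclosure could look something like the layout below. The `references/` directory and `scripts/main.py` are named in the skill itself; the individual reference file names are hypothetical.

```
graph-interpretation/
├── SKILL.md                    # concise overview + workflow, linking to the files below
├── references/
│   ├── graph-types.md          # detailed graph types table
│   ├── audience-templates.md   # audience-specific explanation templates
│   └── stats-reporting.md      # statistical reporting standards
└── scripts/
    └── main.py
```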

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
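The lone warning concerns frontmatter keys the skill spec does not recognize. As a hypothetical illustration (this report does not show which keys triggered the warning, and `category` below is invented), the fix is to delete the unknown key or nest it under `metadata`:

```yaml
---
name: graph-interpretation
description: Use when interpreting scientific graphs and charts, ...
# `category` is not a recognized top-level key; nest it instead:
metadata:
  category: academic-writing
---
```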

Repository: aipoch/medical-research-skills (Reviewed)

