
graph-interpretation

Use when interpreting scientific graphs and charts, explaining data visualizations for research presentations, writing figure captions for publications, or analyzing trends in clinical research data. Converts complex visual data into clear, accurate explanations for academic papers, clinical reports, and public presentations.

Quality: 60%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/graph-interpretation/SKILL.md"

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its niche in scientific data visualization interpretation and academic communication. It opens with an explicit 'Use when' clause containing natural trigger terms, lists specific concrete actions, and clearly distinguishes itself from generic data analysis or writing skills. The description is concise yet comprehensive, covering both the 'what' and 'when' effectively.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: interpreting scientific graphs/charts, explaining data visualizations, writing figure captions, analyzing trends in clinical research data, and converting visual data into explanations for academic papers/clinical reports/presentations.

3 / 3

Completeness

Explicitly answers both 'what' (converts complex visual data into clear explanations for academic papers, clinical reports, presentations) and 'when' (starts with 'Use when interpreting scientific graphs and charts, explaining data visualizations...'). The 'Use when' clause is explicit and detailed.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'scientific graphs', 'charts', 'data visualizations', 'research presentations', 'figure captions', 'publications', 'clinical research data', 'academic papers', 'clinical reports'. These cover a good range of terms a researcher would naturally use.

3 / 3

Distinctiveness / Conflict Risk

Clearly carved out niche at the intersection of scientific/clinical data visualization interpretation and academic writing. The combination of scientific graphs, figure captions, clinical research data, and academic publications makes this highly distinctive and unlikely to conflict with generic data analysis or general writing skills.

3 / 3

Total: 12 / 12

Passed

Implementation: 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a bloated, largely non-actionable document that appears to be auto-generated from a template. It contains fictional API code for a non-existent GraphInterpreter library, extensive copy-pasted descriptions, and generic boilerplate sections (error handling, input validation, response template) that add no real value. The actual task—interpreting scientific graphs—receives no concrete, executable guidance; instead, the skill presents an imaginary Python API as if it were real software.

Suggestions

Remove the fictional GraphInterpreter API code and replace with actual actionable guidance on how Claude should interpret different graph types (e.g., what to look for in a Kaplan-Meier curve, how to describe a forest plot).

Eliminate the duplicated description text in 'When to Use' and 'Key Features' sections, and remove generic boilerplate sections (Output Requirements, Error Handling, Input Validation, Response Template) that don't add skill-specific value.

Provide concrete examples of actual graph interpretation: show a description of a graph's visual elements as input and the expected interpretation/caption as output, rather than pseudo-code for a non-existent library; a sketch of such a pair follows these suggestions.

Move the detailed reference tables (graph types, statistical pitfalls, quality checklists) into separate reference files and keep SKILL.md as a concise overview with clear pointers to those files; a possible layout is sketched below.
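
As an illustration of the example-driven suggestion above, a minimal input/output pair might look like the sketch below. The figure, sample sizes, and statistics are invented for illustration and do not come from the reviewed skill:

```markdown
## Example: Kaplan-Meier survival curve

Input (visual description):
  Two survival curves over 60 months of follow-up. The treatment curve
  (n = 210) sits above the control curve (n = 205) from month 6 onward,
  and the curves do not cross. Log-rank p = 0.03.

Expected output (figure caption):
  Kaplan-Meier estimates of overall survival by treatment arm. Median
  survival was longer in the treatment arm (n = 210) than in the control
  arm (n = 205), with separation emerging by month 6 and persisting
  through 60 months of follow-up (log-rank p = 0.03).
```

A handful of pairs like this, one per major graph type, would give the agent a concrete target format to imitate instead of a fictional API to call.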
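
For the restructuring suggestion, one possible split is sketched below; the file names are illustrative, not taken from the repository:

```
graph-interpretation/
├── SKILL.md                      # concise overview, workflow, pointers
└── references/
    ├── graph-types.md            # supported graph types table
    ├── statistical-standards.md  # statistical reporting standards and pitfalls
    └── quality-checklist.md      # before/during/after quality checklist
```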

Dimension / Reasoning / Score

Conciseness

Extremely verbose and repetitive. The description is copy-pasted multiple times (in 'When to Use' and 'Key Features'). There are massive sections of pseudo-API code for a library (GraphInterpreter) that likely doesn't exist, boilerplate sections like 'Output Requirements', 'Error Handling', 'Input Validation', and 'Response Template' that are generic filler, and extensive tables and templates that pad the content enormously. Much of this explains concepts Claude already knows (statistical terms, graph types).

1 / 3

Actionability

Despite the volume of code examples, none of it is actually executable. The 'GraphInterpreter' class appears to be a fictional API with methods like `interpret()`, `generate_multi_audience()`, `generate_regulatory_summary()` etc. that don't correspond to any real library. The CLI examples reference `scripts/graph_interpreter.py` while other sections reference `scripts/main.py`. There's a syntax error in Pattern 3 (`clinical Utility=True`). The skill provides no real, concrete guidance on how to actually interpret graphs.

1 / 3

Workflow Clarity

There is a numbered workflow section and a quality checklist with before/during/after phases, which provides some structure. However, the workflow steps are generic and abstract ('Confirm the user objective', 'Validate that the request matches the documented scope'), and there are no real validation checkpoints tied to the actual task of interpreting scientific graphs. The run plan references `scripts/main.py` but the actual interpretation workflow is unclear.

2 / 3

Progressive Disclosure

There are references to `references/` directory and `scripts/main.py`, suggesting some file structure. However, the skill itself is monolithic with enormous inline content (supported graph types table, statistical reporting standards, audience templates, common patterns, CLI usage, quality checklists, best practices, common pitfalls, output requirements, error handling, input validation, response template) that could be split into separate reference files. The content is organized with headers but is far too long for a SKILL.md overview.

2 / 3

Total: 6 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed
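
The frontmatter_unknown_keys warning can usually be cleared by removing the offending keys or nesting them under a metadata block, as the validator message suggests. A minimal sketch, assuming the standard name and description keys are already present; the page does not show which keys triggered the warning, so author and version below are placeholders:

```yaml
---
name: graph-interpretation
description: Use when interpreting scientific graphs and charts, ...
metadata:
  author: aipoch      # placeholder: unknown keys moved under metadata
  version: "1.0"      # placeholder
---
```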

Repository: aipoch/medical-research-skills (Reviewed)
