Use when working with *.excalidraw or *.excalidraw.json files, user mentions diagrams/flowcharts, or requests architecture visualization - delegates all Excalidraw operations to subagents to prevent context exhaustion from verbose JSON (single files: 4k-22k tokens, can exceed read limits)
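The description's token figures (4k–22k per file) follow from the sheer size of Excalidraw JSON. A minimal sketch of the estimate, assuming the common rough heuristic of ~4 characters per token (the actual count depends on the tokenizer; the scene below is a hypothetical stand-in for a real file):

```python
import json

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    # Rough heuristic: roughly 4 characters per token for JSON-heavy content.
    return int(len(text) / chars_per_token)

# A minimal Excalidraw-style scene; real files hold hundreds of such elements,
# each carrying far more properties than shown here.
scene = {
    "type": "excalidraw",
    "version": 2,
    "elements": [
        {"id": f"el{i}", "type": "rectangle", "x": i * 10, "y": 0,
         "width": 100, "height": 60, "strokeColor": "#000000"}
        for i in range(200)
    ],
}
raw = json.dumps(scene)
print(estimate_tokens(raw))  # already in the thousands for a modest diagram
```

Even this stripped-down 200-element scene lands in the thousands of tokens, which is why the skill insists on delegating file reads to subagents rather than pulling raw JSON into the main context.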
Install with Tessl CLI:

`npx tessl i github:softaworks/agent-toolkit --skill excalidraw86`
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`

Agent success when using this skill
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with excellent trigger terms and clear 'Use when' guidance. The main weakness is that it focuses more on implementation details (subagent delegation, token counts) rather than listing specific user-facing capabilities like creating, editing, or exporting diagrams. The description would benefit from replacing technical implementation notes with concrete actions.
Suggestions
Replace implementation details ('delegates to subagents', 'context exhaustion', token counts) with specific user-facing actions like 'create diagrams', 'edit flowcharts', 'generate architecture diagrams'
Add concrete capabilities such as 'Creates, edits, and exports Excalidraw diagrams' before the 'Use when' clause
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Excalidraw files, diagrams/flowcharts, architecture visualization) and mentions delegation to subagents, but doesn't list specific concrete actions like 'create diagrams', 'edit flowcharts', or 'export visualizations'. | 2 / 3 |
| Completeness | Explicitly answers both what (delegates Excalidraw operations to subagents) and when ('Use when working with *.excalidraw files, user mentions diagrams/flowcharts, or requests architecture visualization') with clear trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger terms: '*.excalidraw', '*.excalidraw.json', 'diagrams', 'flowcharts', 'architecture visualization'; these are terms users would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very distinct niche with specific file extensions (*.excalidraw, *.excalidraw.json) and clear domain focus on Excalidraw specifically, unlikely to conflict with general diagramming or other visualization skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that clearly communicates a critical constraint (never read Excalidraw files directly) with actionable delegation patterns. Its main weakness is repetitiveness: the core message is restated in multiple sections (Overview, Common Rationalizations, Red Flags, Iron Law), which inflates the token count. The actionable templates and concrete examples are excellent, making this immediately usable despite the verbosity.
Suggestions
Consolidate the 'Common Rationalizations', 'Red Flags', and 'Iron Law' sections into a single 'Avoid These Patterns' section to reduce repetition
Move the detailed 'Token Analysis' table and 'Why Straightforward JSON Doesn't Matter' section to a separate ANALYSIS.md reference file, keeping only a brief summary in the main skill
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is somewhat repetitive, restating the core principle ('never read directly, always delegate') multiple times across sections. The 'Common Rationalizations' table and 'Red Flags' section overlap significantly. However, the repetition serves a pedagogical purpose for a critical constraint, and most content is actionable rather than explanatory. | 2 / 3 |
| Actionability | Provides concrete, copy-paste ready task templates for all four operation types (read, modify, create, compare). Includes specific examples of good vs bad patterns, clear tables mapping operations to actions, and explicit subagent return formats. The guidance is immediately executable. | 3 / 3 |
| Workflow Clarity | Each operation type has a clear numbered sequence with explicit steps. The subagent task templates include approach steps and expected return formats. The 'Implementation Example' shows the complete workflow from user request to response. Validation is implicit in the subagent return requirements. | 3 / 3 |
| Progressive Disclosure | The skill is self-contained with good section organization (Overview → Problem → When to Use → Patterns → Templates → Examples). However, at ~200 lines, some content could be split into reference files (e.g., detailed token analysis, full template library). The structure is flat rather than layered with external references. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.