Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek")
Overall score: 78

- Quality: 71% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agent/skills/blockrun/SKILL.md`

## Quality
## Discovery: 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description excels at trigger terms and distinctiveness by naming specific external models and use cases, making it easy to identify when to select this skill. However, it critically fails to explain what the skill actually does: it only describes when to use it, not the actions or capabilities it provides. The description is essentially a 'Use when...' clause without the preceding capability statement.
Suggestions:

- Add a capability statement before the 'Use when' clause explaining what the skill does (e.g., 'Delegates requests to external AI models and services to access capabilities beyond Claude's native abilities.')
- Describe concrete actions the skill performs (e.g., 'Generates images via DALL-E, fetches real-time Twitter/X data via Grok, routes queries to GPT or DeepSeek models')
- Restructure to follow the pattern '[What it does]. Use when [triggers]' rather than starting with 'Use when'
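Putting these suggestions together, the frontmatter description might be restructured along these lines. This is an illustrative sketch only, composed from the example phrasing in the suggestions above, not the skill's actual frontmatter:

```yaml
# Hypothetical rewrite of the skill's description field
description: >
  Delegates requests to external AI models and services to access
  capabilities beyond Claude's native abilities: generates images via
  DALL-E, fetches real-time X/Twitter data via Grok, and routes queries
  to GPT or DeepSeek models. Use when the user needs capabilities Claude
  lacks or explicitly requests external models ("blockrun", "use grok",
  "use gpt", "dall-e", "deepseek").
```

This follows the '[What it does]. Use when [triggers]' pattern while preserving the existing trigger terms, which the review scored highly.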
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names specific capabilities (image generation, real-time X/Twitter data) and mentions external models by name (grok, gpt, dall-e, deepseek), but doesn't describe concrete actions the skill performs, only when to use it. | 2 / 3 |
| Completeness | The description only answers 'when' (use when the user needs X or says Y) but completely omits 'what' the skill actually does. There's no explanation of the skill's capabilities or actions. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'blockrun', 'use grok', 'use gpt', 'dall-e', 'deepseek', 'image generation', 'real-time X/Twitter data'. These are specific phrases users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very distinct niche: external model delegation with specific model names and capability gaps. Unlikely to conflict with other skills due to explicit model names and the unique use case of accessing external services. | 3 / 3 |
| Total | | 9 / 12 Passed |
## Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill that efficiently communicates BlockRun's capabilities through tables and executable code examples. The progressive disclosure to sub-skills is excellent. The main weakness is workflow clarity: while individual operations are clear, the overall flow from initialization through the various use cases would benefit from more explicit sequencing and error handling.
Suggestions:

- Add a numbered quick-start workflow showing the complete flow: 1) initialize the client, 2) check balance, 3) make a first call, 4) verify success
- Include error handling guidance for common failures (insufficient balance, network issues, rate limits)
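The suggested quick-start workflow could be sketched as below. This is a minimal illustration of the pattern only: `BlockRunClient`, its `get_balance` and `chat` methods, and the error types are hypothetical stand-ins (stubbed here so the sketch is self-contained), not BlockRun's actual API.

```python
# Illustrative quick-start workflow: init -> check balance -> call -> verify.
# The client and error types are hypothetical stubs, not the real BlockRun API.

class InsufficientBalanceError(Exception):
    pass

class RateLimitError(Exception):
    pass

class BlockRunClient:
    """Stub client used only to show the workflow shape."""
    def __init__(self, api_key: str):
        self.api_key = api_key

    def get_balance(self) -> float:
        return 1.25  # stubbed balance in USD

    def chat(self, model: str, prompt: str) -> str:
        return f"[{model}] echo: {prompt}"  # stubbed response

def quickstart(api_key: str) -> str:
    # 1) Initialize the client
    client = BlockRunClient(api_key)

    # 2) Check balance before spending
    if client.get_balance() <= 0:
        raise InsufficientBalanceError("Top up before making calls")

    # 3) Make a first call, handling common failures
    try:
        reply = client.chat("gpt", "ping")
    except RateLimitError:
        raise  # in real usage: back off and retry
    except ConnectionError as exc:
        raise RuntimeError("Network issue; retry with backoff") from exc

    # 4) Verify success before relying on the result
    if not reply:
        raise RuntimeError("Empty response from model")
    return reply

print(quickstart("demo-key"))
```

A numbered sequence like this, with the failure branches spelled out, is the kind of explicit first-time-setup flow the Workflow Clarity dimension flags as missing.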
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, using tables for quick reference and avoiding explanations of concepts Claude already knows. Every section serves a clear purpose without padding. | 3 / 3 |
| Actionability | Provides fully executable Python code examples that are copy-paste ready, with clear import statements and method calls. The trigger tables give concrete guidance on when to use each capability. | 3 / 3 |
| Workflow Clarity | The budget control section shows a validation pattern, but the overall workflow for using BlockRun lacks explicit sequencing. Missing a clear step-by-step flow for first-time setup vs. subsequent usage, and no error handling guidance. | 2 / 3 |
| Progressive Disclosure | Excellent structure with a concise overview in the main file and clear one-level-deep references to sub-skills for detailed topics (basic chat, image generation, search parameters, etc.). Navigation is well-signaled. | 3 / 3 |
| Total | | 11 / 12 Passed |
## Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
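The two warnings suggest a frontmatter cleanup along these lines. All key and tool names below are illustrative placeholders, not the skill's actual frontmatter:

```yaml
# Hypothetical cleanup sketch for the two validation warnings
name: blockrun
description: >
  ...
allowed-tools: Bash, Read    # use recognized tool names only
metadata:
  custom-key: value          # unknown top-level keys moved under metadata
```

Keeping only spec-recognized keys at the top level, and moving anything custom under `metadata` as the warning suggests, should clear both items.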