
blockrun

Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek")

Overall score: 74

Quality: 66% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Security (by Snyk): Advisory; suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./docs/v19.7/configuration/agent/skills_external/antigravity-awesome-skills-main/skills/blockrun/SKILL.md

Quality

Discovery: 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger terms and distinctiveness by naming specific external models and use cases, making it easy to identify when to select this skill. However, it critically fails at explaining what the skill actually does—it only describes when to use it, not the actions or capabilities it provides. The description is essentially a 'Use when...' clause without the preceding capability statement.

Suggestions

Add a capability statement before the 'Use when' clause explaining what the skill does (e.g., 'Delegates requests to external AI models and services to access capabilities beyond Claude's native abilities.')

Describe concrete actions the skill performs (e.g., 'Generates images via DALL-E, fetches real-time Twitter/X data via Grok, routes queries to GPT or DeepSeek models')

Restructure to follow the pattern: '[What it does]. Use when [triggers]' rather than starting with 'Use when'
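Applied to this skill, the suggested '[What it does]. Use when [triggers]' pattern might produce SKILL.md frontmatter like the following. This is a hypothetical sketch assembled from the suggestions above, not the skill's actual metadata:

```yaml
---
name: blockrun
description: >-
  Delegates requests to external AI models to access capabilities Claude
  lacks: generates images via DALL-E, fetches real-time X/Twitter data via
  Grok, and routes queries to GPT or DeepSeek. Use when the user needs these
  capabilities or explicitly requests an external model ("blockrun",
  "use grok", "use gpt", "dall-e", "deepseek").
---
```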

Dimension scores:

Specificity: 2 / 3
Names specific capabilities (image generation, real-time X/Twitter data) and mentions external models by name (grok, gpt, dall-e, deepseek), but doesn't describe concrete actions the skill performs, only when to use it.

Completeness: 1 / 3
The description only answers 'when' (use when the user needs X or says Y) but completely omits 'what' the skill actually does. There's no explanation of the skill's capabilities or actions.

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms users would say: 'blockrun', 'use grok', 'use gpt', 'dall-e', 'deepseek', 'image generation', 'real-time X/Twitter data'. These are specific phrases users would naturally use.

Distinctiveness / Conflict Risk: 3 / 3
Very distinct niche: external model delegation with specific model names and capability gaps. Unlikely to conflict with other skills, given the explicit model names and the unique use case of accessing external services.

Total: 9 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, actionable skill with excellent executable code examples and clear decision guidance for when to use BlockRun. The main weaknesses are some content redundancy (wallet setup shown multiple times, cost info duplicated) and the monolithic structure that could benefit from splitting reference material into separate files. The workflow guidance and trigger tables are particularly well done.

Suggestions

Consolidate wallet setup code into a single 'Getting Started' section instead of repeating it across multiple sections

Move detailed reference material (xAI Live Search Reference, Available Models, Cost Reference) to separate linked files to improve progressive disclosure

Remove the duplicate cost information - the 'Cost Reference' section and the pricing column in 'Available Models' table overlap significantly

Dimension scores:

Conciseness: 2 / 3
The skill is reasonably efficient but includes some redundancy (e.g., wallet setup code repeated multiple times, cost tables duplicated). Some sections, like 'Philosophy' and the explanations of what BlockRun does, could be tighter.

Actionability: 3 / 3
Excellent executable code examples throughout: all snippets are copy-paste ready with proper imports, real method calls, and expected outputs. The SDK usage section provides complete, working examples for each use case.

Workflow Clarity: 3 / 3
Clear decision tables for when to use BlockRun vs. handling tasks directly. The budget control section shows an explicit validation loop (check spending before each call, stop when the budget is reached). The 'When to Use' and 'Example User Prompts' tables provide unambiguous guidance.

Progressive Disclosure: 2 / 3
Content is well organized with clear sections, but it's a long monolithic file (~250 lines) that could benefit from splitting detailed reference material (xAI Live Search Reference, Available Models, Cost Reference) into separate files linked from the main skill.

Total: 10 / 12 (Passed)
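The explicit budget-control loop described above (check cumulative spending before each call, stop when the budget is reached) can be sketched as follows. This is a minimal illustration; `call_external_model` and its flat per-call cost are hypothetical stand-ins, not BlockRun's actual API.

```python
# Minimal sketch of a budget-control loop: check cumulative spend
# before each external call and stop once the budget is exhausted.
# call_external_model is a hypothetical stand-in, not BlockRun's API.

BUDGET_USD = 1.00

def call_external_model(prompt: str) -> tuple[str, float]:
    # Stand-in: pretend every call returns a response and costs a flat $0.25.
    return f"response to {prompt!r}", 0.25

spent = 0.0
results = []
for prompt in ["task 1", "task 2", "task 3", "task 4", "task 5"]:
    if spent >= BUDGET_USD:
        break  # budget reached: stop before making another call
    response, cost = call_external_model(prompt)
    spent += cost
    results.append(response)

print(f"made {len(results)} calls, spent ${spent:.2f}")
```

With a $1.00 budget and $0.25 per call, the loop stops after four calls rather than starting a fifth, which is the "stop when budget reached" behavior the review credits.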

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed

allowed_tools_field: Warning
'allowed-tools' contains unusual tool name(s).

Total: 10 / 11 (Passed)

Repository: duclm1x1/Dive-Ai (Reviewed)
