Train and deploy neural networks in distributed E2B sandboxes with Flow Nexus
Overall score: 37% (does it follow best practices?)
Impact: 96%. Average score across 3 eval scenarios: 7.38x.
Advisory: suggest reviewing before use.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./.claude/skills/flow-nexus-neural/SKILL.md`

Quality
Discovery
Score: 32%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific domain (neural network training/deployment) and a specific platform (E2B sandboxes, Flow Nexus), but it is too terse and lacks explicit trigger guidance. It fails to enumerate concrete capabilities beyond 'train and deploy' and does not include a 'Use when...' clause, making it difficult for Claude to reliably select this skill at the right time.
Suggestions
- Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about training ML models in sandboxes, distributed GPU training, or deploying neural networks with Flow Nexus.'
- Expand the capability list with specific actions such as 'configure distributed training jobs, monitor training progress, manage sandbox environments, deploy trained models to endpoints.'
- Include natural user-facing keywords like 'machine learning', 'deep learning', 'model training', 'ML deployment', and 'GPU' to improve trigger-term coverage.
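Pulling these suggestions together, a revised frontmatter description might read as follows. This is a hypothetical sketch for illustration; the exact wording and field layout should match the skill spec:

```yaml
---
name: flow-nexus-neural
description: >-
  Train and deploy neural networks in distributed E2B sandboxes with Flow
  Nexus: configure distributed training jobs, monitor training progress,
  manage sandbox environments, and deploy trained models to endpoints.
  Use when the user asks about machine learning or deep learning model
  training in sandboxes, distributed GPU training, or ML deployment with
  Flow Nexus.
---
```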
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (neural networks, distributed sandboxes) and two actions (train and deploy), but lacks detail about specific capabilities such as model types, monitoring, configuration, or data handling. | 2 / 3 |
| Completeness | Describes what it does (train and deploy neural networks) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when' clause caps completeness at 2, and the 'what' is also thin, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'neural networks', 'train', 'deploy', and 'distributed', but 'E2B sandboxes' and 'Flow Nexus' are product-specific jargon that users may not naturally say. Missing common variations like 'deep learning', 'ML model', 'machine learning', 'GPU training'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'E2B sandboxes' and 'Flow Nexus' adds some distinctiveness as product-specific terms, but 'train and deploy neural networks' is broad enough to overlap with general ML/AI skills. The niche is somewhat defined but not sharply delineated. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
Score: 42%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at actionability with concrete, executable MCP tool calls and realistic examples, but is severely undermined by extreme verbosity and poor progressive disclosure. Content is heavily duplicated (architecture configs appear multiple times), and the entire document reads as an exhaustive API reference rather than a focused skill guide. Workflow clarity is adequate but lacks validation checkpoints for multi-step distributed operations.
Suggestions
- Extract architecture patterns, common use cases, and detailed API response examples into separate reference files (e.g., ARCHITECTURES.md, EXAMPLES.md, API-REFERENCE.md) and link to them from a concise overview.
- Remove duplicate content: architecture configs appear in the examples and again in the Architecture Patterns section; keep one canonical location.
- Add explicit validation checkpoints to the distributed training workflow (e.g., verify the cluster is ready before deploying nodes, and confirm nodes are healthy before starting training).
- Cut response JSON examples to one or two illustrative cases rather than showing expected responses for nearly every call.
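The file-split suggestion implies a layout along these lines. The reference file names come from the suggestion above; the `references/` directory name follows the validation warning later in this report, and the comments are illustrative:

```
.claude/skills/flow-nexus-neural/
├── SKILL.md                  # concise overview with links
└── references/
    ├── ARCHITECTURES.md      # canonical architecture configs
    ├── EXAMPLES.md           # common use cases
    └── API-REFERENCE.md      # full tool-call and response examples
```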
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at roughly 500+ lines, with massive repetition: architecture patterns are shown in full examples and repeated in the 'Architecture Patterns' section, common use cases duplicate earlier examples, and response JSON examples add bulk without teaching Claude anything new. The skill could be 60-70% shorter. | 1 / 3 |
| Actionability | Every capability has concrete, executable MCP tool calls with full parameter examples and expected response JSON. The code is copy-paste ready with realistic configurations and parameters. | 3 / 3 |
| Workflow Clarity | The distributed training section shows a reasonable sequence (init cluster → deploy nodes → connect → train → monitor → terminate), but there are no explicit validation checkpoints or error-recovery feedback loops. The troubleshooting section is separate and reactive rather than integrated into the workflows, and destructive operations such as cluster termination have no confirmation or validation step. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of content with no delegation to sub-files. The architecture patterns, common use cases, and detailed API examples should be split into separate reference files. The 'Related Skills' and 'Resources' sections hint at external content, but SKILL.md itself tries to be a comprehensive reference document rather than an overview. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
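The validation-checkpoint suggestion above (verify the cluster is ready before deploying nodes, confirm node health before training) can be sketched as a generic readiness poll. The helper below is a minimal illustration; `cluster_status`, `node_healthy`, and `start_training` in the usage comment are hypothetical names, not actual Flow Nexus MCP tools:

```python
import time

def wait_until(check, timeout=300, interval=5):
    """Poll check() until it returns True, or raise once timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("resource not ready within timeout")

# Hypothetical usage in a distributed-training workflow:
# wait_until(lambda: cluster_status(cluster_id) == "ready")
# wait_until(lambda: all(node_healthy(n) for n in nodes))
# start_training(cluster_id, config)
```

Inserting a checkpoint like this between each workflow step turns the reactive troubleshooting section into proactive gating, which is what the Workflow Clarity dimension penalizes the skill for lacking.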
Validation
Score: 81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (739 lines); consider splitting content into references/ and linking to it | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 9 / 11 Passed |
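The frontmatter_unknown_keys warning can usually be cleared by moving nonstandard keys under a metadata block, as the check itself suggests. The key names below are hypothetical examples, not the skill's actual frontmatter:

```yaml
---
name: flow-nexus-neural
description: Train and deploy neural networks in distributed E2B sandboxes with Flow Nexus
metadata:
  version: 1.2.0     # hypothetical key, moved down from the top level
  author: example    # hypothetical key, moved down from the top level
---
```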