# Train and deploy neural networks in distributed E2B sandboxes with Flow Nexus

**Does it follow best practices?** 57 (37%)
**Impact:** 96% — 7.38x average score across 3 eval scenarios
**Advisory:** Suggest reviewing before use

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./.claude/skills/flow-nexus-neural/SKILL.md
```

## Single-node architecture configuration and training workflow
| Check | Without skill | With skill |
| --- | --- | --- |
| LSTM architecture type | 28% | 100% |
| LSTM layer types | 0% | 100% |
| Correct training config structure | 0% | 100% |
| Training params present | 62% | 100% |
| Appropriate tier for PoC | 0% | 100% |
| Divergent config block | 8% | 100% |
| Divergent factor value | 0% | 100% |
| Correct train tool name | 0% | 80% |
| Training status monitoring | 8% | 83% |
| Dropout regularization layers | 10% | 100% |
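The checks above score whether the agent emits a well-formed single-node training configuration. A minimal sketch of what such a config might look like, with every field name (`architecture`, `training`, `divergent`, `tier`) and every value inferred from the check labels rather than taken from the official Flow Nexus schema:

```python
# Hypothetical single-node LSTM training config. Field names and values are
# assumptions based on the eval check labels, not the documented schema.
config = {
    "architecture": {
        "type": "lstm",                        # "LSTM architecture type"
        "layers": [
            {"type": "lstm", "units": 64},     # "LSTM layer types"
            {"type": "dropout", "rate": 0.2},  # "Dropout regularization layers"
            {"type": "dense", "units": 1},
        ],
    },
    "training": {                              # "Correct training config structure"
        "epochs": 50,                          # "Training params present"
        "batch_size": 32,
        "learning_rate": 0.001,
    },
    "divergent": {                             # "Divergent config block"
        "factor": 1.5,                         # "Divergent factor value" (illustrative)
    },
    "tier": "nano",                            # "Appropriate tier for PoC" (illustrative)
}

# Sanity checks mirroring what the eval scores.
assert config["architecture"]["type"] == "lstm"
assert any(layer["type"] == "dropout" for layer in config["architecture"]["layers"])
assert {"epochs", "batch_size", "learning_rate"} <= set(config["training"])
```

The low baseline scores on "Divergent config block" and "Correct train tool name" suggest these are the fields an agent is least likely to produce without the skill loaded.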
## Distributed training cluster lifecycle and node deployment

| Check | Without skill | With skill |
| --- | --- | --- |
| Cluster init tool | 0% | 100% |
| wasmOptimization enabled | 100% | 100% |
| Topology specified | 100% | 100% |
| Consensus mechanism | 100% | 100% |
| Parameter server node | 0% | 100% |
| Worker node deployed | 0% | 100% |
| Aggregator node deployed | 0% | 100% |
| Autonomy values set | 100% | 100% |
| Cluster connect called | 0% | 100% |
| Distributed train tool | 0% | 100% |
| Cluster status monitoring | 0% | 100% |
| Cluster terminate called | 0% | 100% |
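This scenario scores the full cluster lifecycle: initialize, deploy nodes in each role, connect, train, monitor, and terminate. A sketch of the call ordering implied by the check labels — the tool names and parameters here mirror those labels and are assumptions, not the authoritative Flow Nexus MCP signatures:

```python
# Hypothetical cluster-lifecycle call sequence. Tool names and parameter
# names are inferred from the eval check labels, not the official API.
calls = [
    ("neural_cluster_init", {"topology": "mesh",            # "Topology specified"
                             "consensus": "proof-of-learning",  # "Consensus mechanism"
                             "wasmOptimization": True}),    # "wasmOptimization enabled"
    ("neural_node_deploy", {"role": "parameter_server", "autonomy": 0.8}),
    ("neural_node_deploy", {"role": "worker", "autonomy": 0.6}),
    ("neural_node_deploy", {"role": "aggregator", "autonomy": 0.7}),
    ("neural_cluster_connect", {}),                          # "Cluster connect called"
    ("neural_train_distributed", {"epochs": 50}),            # "Distributed train tool"
    ("neural_cluster_status", {}),                           # "Cluster status monitoring"
    ("neural_cluster_terminate", {}),                        # "Cluster terminate called"
]

# The lifecycle ordering the checks imply: init before deploys,
# deploys before connect, connect before train, terminate last.
names = [name for name, _ in calls]
assert names[0] == "neural_cluster_init"
assert names.index("neural_cluster_connect") < names.index("neural_train_distributed")
assert names[-1] == "neural_cluster_terminate"
```

Note that the baseline already gets `wasmOptimization`, topology, consensus, and autonomy right; the skill's contribution in this scenario is the lifecycle itself — which tools to call, and in what order.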
## Federated learning, production validation, and model publishing

| Check | Without skill | With skill |
| --- | --- | --- |
| Federated flag enabled | 0% | 100% |
| Aggregation rounds set | 0% | 100% |
| Min nodes per round | 0% | 100% |
| Performance benchmark | 0% | 100% |
| Validation workflow | 0% | 100% |
| Model published | 0% | 100% |
| Distributed inference aggregation | 0% | 0% |
| Cluster init for federated | 0% | 100% |
| Cluster terminated | 0% | 100% |
| Healthcare category used | 0% | 100% |
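The final scenario covers federated training plus the path to production: benchmark, validate, publish, and clean up. A sketch of the parameters and workflow ordering the checks imply — all field names and values are assumptions read off the check labels, not the official schema:

```python
# Hypothetical federated-learning config. Field names and values are
# inferred from the eval check labels, not the documented Flow Nexus schema.
federated_config = {
    "federated": True,           # "Federated flag enabled"
    "aggregation_rounds": 10,    # "Aggregation rounds set" (illustrative value)
    "min_nodes_per_round": 3,    # "Min nodes per round" (illustrative value)
    "category": "healthcare",    # "Healthcare category used"
}

# Production workflow implied by the checks: initialize the cluster for
# federated training, train, benchmark and validate the result, publish
# the model, then terminate the cluster.
workflow = [
    "cluster_init",
    "federated_train",
    "benchmark",
    "validate",
    "publish",
    "cluster_terminate",
]
assert workflow.index("validate") < workflow.index("publish")
assert workflow[-1] == "cluster_terminate"
```

The one check that fails even with the skill ("Distributed inference aggregation", 0% both ways) suggests that aggregating inference results across nodes is not yet covered by the skill's instructions.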
Version: `8db2712`