Skill description under review:

> Use when building ML/AI apps in Rust. Keywords: machine learning, ML, AI, tensor, model, inference, neural network, deep learning, training, prediction, ndarray, tch-rs, burn, candle, 机器学习, 人工智能, 模型推理
Overall score: 72
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`
Discovery
62% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong trigger term coverage and good distinctiveness through its Rust + ML niche, but critically fails to describe what the skill actually does. It reads more like a keyword list than a capability description, leaving Claude unable to understand what specific actions or guidance this skill provides.
Suggestions
- Add specific concrete actions the skill enables, e.g. "Guides tensor operations, model loading, inference pipelines, and training loops using Rust ML frameworks".
- Restructure to lead with capabilities before the 'Use when' clause, e.g. "Build and deploy ML models in Rust using tch-rs, burn, or candle. Covers tensor operations, model inference, and neural network training. Use when...".
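Applying both suggestions, the frontmatter might read as follows. This is illustrative wording only; the `name` value and field layout are placeholders, not the skill's actual frontmatter:

```yaml
---
name: rust-ml  # placeholder name
description: >
  Build and deploy ML models in Rust using tch-rs, burn, or candle. Guides
  tensor operations, model loading, inference pipelines, and neural network
  training. Use when building ML/AI apps in Rust. Keywords: machine learning,
  ML, AI, tensor, model, inference, deep learning, prediction, ndarray.
---
```

Leading with concrete capabilities keeps the strong trigger-term coverage while giving the agent a real "what" to match against.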
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only says 'building ML/AI apps in Rust' without listing any concrete actions like 'train models', 'run inference', 'load tensors', or 'optimize neural networks'. It's vague about what the skill actually does. | 1 / 3 |
| Completeness | Has a 'Use when' clause, but the 'what' is extremely weak: it only says 'building ML/AI apps' without describing specific capabilities. The when is present but the what is essentially missing. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural keywords users would say: 'machine learning', 'ML', 'AI', 'tensor', 'model', 'inference', 'neural network', 'deep learning', 'training', 'prediction', plus specific library names (tch-rs, burn, candle) and even Chinese terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of Rust + the ML/AI domain + specific library names (tch-rs, burn, candle, ndarray) creates a clear niche that's unlikely to conflict with general ML skills or general Rust skills. | 3 / 3 |
| **Total** | | 9 / 12 — Passed |
Implementation
72% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured, token-efficient reference for ML development in Rust with good framework recommendations and design patterns. The main weaknesses are incomplete code examples (placeholder functions, ellipsis in types) and lack of explicit validation/error handling workflows for ML operations that can fail silently or produce incorrect results.
Suggestions
- Complete the batched inference code example by implementing or showing the stack_inputs/unstack_outputs helper functions.
- Fix the get_model() function signature to use the actual type instead of the '...' placeholder.
- Add validation checkpoints for model loading and inference (e.g. input shape validation, output sanity checks).
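The three suggestions above could be addressed together. The sketch below is an assumption-laden illustration, not the skill's actual code: `Model`, its `input_dim` field, the row-sum `forward` pass, and the `OnceLock`-based `get_model()` are all hypothetical stand-ins (a real skill would wrap e.g. a tch-rs or candle model), shown with plain `Vec<f32>` in place of framework tensors:

```rust
use std::sync::OnceLock;

/// Hypothetical model handle; a real skill would wrap a framework model.
struct Model {
    input_dim: usize,
}

impl Model {
    /// Dummy forward pass: one scalar output per input row (row sum).
    fn forward(&self, batch: &[f32]) -> Vec<f32> {
        batch.chunks(self.input_dim).map(|row| row.iter().sum()).collect()
    }
}

/// Typed lazy getter, replacing the `...` placeholder the review flags.
static MODEL: OnceLock<Model> = OnceLock::new();
fn get_model() -> &'static Model {
    MODEL.get_or_init(|| Model { input_dim: 3 })
}

/// Validate every request's shape, then flatten into one contiguous batch.
fn stack_inputs(inputs: &[Vec<f32>], input_dim: usize) -> Result<Vec<f32>, String> {
    let mut batch = Vec::with_capacity(inputs.len() * input_dim);
    for (i, row) in inputs.iter().enumerate() {
        if row.len() != input_dim {
            return Err(format!(
                "input {i}: expected {input_dim} values, got {}",
                row.len()
            ));
        }
        batch.extend_from_slice(row);
    }
    Ok(batch)
}

/// Split a batched output back into per-request results, with a sanity
/// check so a wrong output shape fails loudly instead of silently
/// misassigning rows to requests.
fn unstack_outputs(outputs: Vec<f32>, n_requests: usize) -> Result<Vec<Vec<f32>>, String> {
    if n_requests == 0 || outputs.len() % n_requests != 0 {
        return Err(format!(
            "cannot split {} outputs across {n_requests} requests",
            outputs.len()
        ));
    }
    let per_request = outputs.len() / n_requests;
    Ok(outputs.chunks(per_request).map(|c| c.to_vec()).collect())
}

fn main() {
    let model = get_model();
    let requests = vec![vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0]];

    let batch = stack_inputs(&requests, model.input_dim).expect("shape check");
    let outputs = model.forward(&batch);
    let results = unstack_outputs(outputs, requests.len()).expect("output check");
    assert_eq!(results, vec![vec![6.0], vec![15.0]]);

    // Malformed input is rejected up front rather than corrupting the batch.
    assert!(stack_inputs(&[vec![1.0]], model.input_dim).is_err());
}
```

The point of the two `Result` returns is the "validation checkpoint" pattern: shape errors surface at the stack/unstack boundary, where they are attributable to a specific request, instead of deep inside a framework call.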
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, using tables and code blocks to convey information densely. No unnecessary explanations of concepts Claude would already know; every section adds specific Rust/ML domain knowledge. | 3 / 3 |
| Actionability | Provides executable code patterns for the inference server and batching, but the batched inference example uses undefined helper functions (stack_inputs, unstack_outputs), making it incomplete, and the type annotation in get_model() uses a '...' placeholder. | 2 / 3 |
| Workflow Clarity | The 'Trace Down' section shows decision flows and the tables map use cases to frameworks, but there are no explicit validation checkpoints or error recovery steps for the ML workflows described. | 2 / 3 |
| Progressive Disclosure | Well-organized with clear sections, tables for quick reference, and appropriate cross-references to related skills (m10-performance, m07-concurrency, etc.). Content is appropriately structured without deep nesting. | 3 / 3 |
| **Total** | | 10 / 12 — Passed |
Validation
75% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure — 12 / 16 passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| **Total** | | 12 / 16 — Passed |
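The three frontmatter warnings could be cleared with a structure along these lines; every field value here is a placeholder, not the skill's real metadata. The remaining body_steps warning asks for an ordered-list workflow in the skill body itself (e.g. 1. load model, 2. validate input shape, 3. run inference, 4. check outputs):

```yaml
---
license: MIT            # license_field: declare an explicit license
metadata:               # metadata_version: 'metadata' must be a dictionary
  version: "1.0"
  custom-key: value     # frontmatter_unknown_keys: move unknown keys here
---
```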