Use when building ML/AI apps in Rust. Keywords: machine learning, ML, AI, tensor, model, inference, neural network, deep learning, training, prediction, ndarray, tch-rs, burn, candle, 机器学习, 人工智能, 模型推理
Does it follow best practices? Passed (no known issues).
Impact: Pending (no eval scenarios have been run).
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/domain-ml/SKILL.md`

Quality
Discovery: 62%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong trigger term coverage and clear distinctiveness through its Rust + ML niche, but critically fails to describe what the skill actually does. It reads more like a keyword list than a capability description, leaving Claude unable to understand what specific actions or guidance this skill provides.
Suggestions
- Add specific concrete actions the skill enables, e.g. "Guides tensor operations, model loading, inference pipelines, and training loops using Rust ML frameworks".
- Expand the 'Use when' clause to describe scenarios, e.g. "Use when implementing neural networks, running model inference, or integrating ML frameworks like tch-rs, burn, or candle in Rust projects".
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only says 'building ML/AI apps in Rust' without listing any concrete actions like 'train models', 'run inference', 'load tensors', or 'optimize neural networks'. It's vague about what the skill actually does. | 1 / 3 |
| Completeness | Has a 'Use when' clause, but the 'what' is extremely weak: it only says 'building ML/AI apps' without describing specific capabilities. The 'when' is present, but the 'what' is essentially missing. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural keywords users would say: 'machine learning', 'ML', 'AI', 'tensor', 'model', 'inference', 'neural network', 'deep learning', 'training', 'prediction', plus specific library names (tch-rs, burn, candle) and even Chinese terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of Rust + the ML/AI domain + specific library names (tch-rs, burn, candle, ndarray) creates a clear niche that's unlikely to conflict with general ML skills or general Rust skills. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill effectively communicates ML domain constraints for Rust development with excellent conciseness and organization. The tables provide quick reference for framework selection and common patterns. However, the code examples have completeness issues (placeholder types, undefined helper functions) that reduce immediate actionability.
Suggestions
- Complete the get_model() function with the actual type instead of the '...' placeholder to make it copy-paste ready.
- Implement or provide signatures for the stack_inputs() and unstack_outputs() helper functions in the batched inference example.
- Add validation and error-handling patterns for model-loading failures and inference errors, especially for production deployment scenarios.
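The skill's own example is not reproduced in this report, so as a minimal sketch of what the missing helpers could look like, here is one possible shape for stack_inputs()/unstack_outputs() (the names come from the review; everything else, including the use of plain `Vec<f32>` buffers in place of real framework tensors such as `tch::Tensor` or `candle_core::Tensor`, is an assumption for illustration):

```rust
/// Hypothetical batching helpers over plain `Vec<f32>` buffers.
/// A real implementation would operate on framework tensors instead.

/// Concatenate per-request feature vectors into one contiguous batch buffer.
/// Returns the buffer plus the per-item width, so the output can be split back.
fn stack_inputs(inputs: &[Vec<f32>]) -> (Vec<f32>, usize) {
    let width = inputs.first().map_or(0, |v| v.len());
    let mut batch = Vec::with_capacity(inputs.len() * width);
    for input in inputs {
        assert_eq!(input.len(), width, "all inputs must share a shape");
        batch.extend_from_slice(input);
    }
    (batch, width)
}

/// Split a batched output buffer back into per-request vectors.
fn unstack_outputs(batch: &[f32], per_item: usize) -> Vec<Vec<f32>> {
    batch.chunks(per_item).map(|c| c.to_vec()).collect()
}

fn main() {
    let inputs = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let (batch, width) = stack_inputs(&inputs);
    // An identity "model" stands in for a real forward pass here.
    let outputs = unstack_outputs(&batch, width);
    assert_eq!(outputs, inputs);
    println!("{outputs:?}");
}
```

The round-trip property (unstack of stack returns the original inputs) is what makes a batched path safe to drop into a per-request API.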
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, using tables and code blocks to convey information densely. No unnecessary explanations of concepts Claude would already know; every section adds specific Rust/ML domain knowledge. | 3 / 3 |
| Actionability | Provides executable code patterns for the inference server and batching, but the batched inference example uses undefined helper functions (stack_inputs, unstack_outputs), making it incomplete, and the type annotation in get_model() uses a '...' placeholder. | 2 / 3 |
| Workflow Clarity | The 'Trace Down' section shows decision flows and the tables map use cases to frameworks, but there are no explicit validation checkpoints or error-recovery steps for the ML workflows described. | 2 / 3 |
| Progressive Disclosure | Well organized with clear sections, tables for quick reference, and explicit cross-references to related skills (m10-performance, m12-lifecycle, etc.). Content is appropriately structured for a domain constraints document. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
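On the missing error-recovery point, a hedged sketch of the kind of model-loading validation the review is asking for, using only the standard library (the `ModelError` type, `load_model` function, and file path are illustrative assumptions; a real implementation would wrap the ML framework's own load errors, e.g. via `thiserror` or `anyhow`):

```rust
use std::fmt;
use std::path::Path;

/// Illustrative error type for model-loading failures.
#[derive(Debug)]
enum ModelError {
    NotFound(String),
    // A real server might also have Corrupt(..), ShapeMismatch { .. }, etc.
}

impl fmt::Display for ModelError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ModelError::NotFound(p) => write!(f, "model file not found: {p}"),
        }
    }
}

impl std::error::Error for ModelError {}

/// Placeholder model; stands in for framework weights.
struct Model;

/// Validate at startup and return a Result instead of panicking mid-request.
fn load_model(path: &str) -> Result<Model, ModelError> {
    if !Path::new(path).exists() {
        return Err(ModelError::NotFound(path.to_string()));
    }
    // Real code would deserialize weights here and check tensor shapes.
    Ok(Model)
}

fn main() {
    match load_model("weights/model.safetensors") {
        Ok(_) => println!("model loaded"),
        Err(e) => eprintln!("startup failed: {e}"),
    }
}
```

Failing fast at startup with a typed error keeps deployment failures diagnosable, rather than surfacing as panics on the first inference request.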
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 10 / 11 Passed |