Model Quantization Tool - Auto-activating skill for ML Deployment. Triggers on: model quantization tool, model quantization tool. Part of the ML Deployment skill category.
Install with Tessl CLI:

```shell
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill model-quantization-tool
```

Overall score: 19%
Activation
7%

This description is severely underdeveloped, functioning more as a placeholder than a useful skill description. It lacks any concrete actions, meaningful trigger terms, or guidance on when Claude should select this skill. The redundant trigger term and boilerplate category mention provide no value for skill selection.
Suggestions

- Add specific capabilities like 'Converts neural network models to lower precision formats (INT8, FP16), applies post-training quantization, calibrates quantized models for accuracy'
- Replace redundant triggers with natural user phrases: 'Use when user mentions quantizing models, reducing model size, INT8/FP16 conversion, optimizing for edge deployment, or compressing neural networks'
- Include supported frameworks or model types to improve distinctiveness: 'Works with PyTorch, TensorFlow, and ONNX models'
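Taken together, these suggestions could yield a description along the following lines. This is a hedged sketch, not verified against any skill schema; the `name` and `description` field names follow common SKILL.md frontmatter conventions and are assumptions:

```yaml
---
name: model-quantization-tool
description: >
  Converts neural network models to lower-precision formats (INT8, FP16),
  applies post-training quantization, and calibrates quantized models for
  accuracy. Works with PyTorch, TensorFlow, and ONNX models. Use when the
  user mentions quantizing models, reducing model size, INT8/FP16
  conversion, optimizing for edge deployment, or compressing neural
  networks.
---
```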
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only names the tool ('Model Quantization Tool') without describing any concrete actions. There are no specific capabilities listed like 'converts models to INT8', 'reduces model size', or 'applies post-training quantization'. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and provides no explicit 'when to use' guidance. The 'Triggers on' section is not a proper 'Use when...' clause and only repeats the skill name. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('model quantization tool' listed twice) and overly specific. Missing natural variations users might say, like 'quantize model', 'reduce model size', 'INT8', 'FP16', 'compress neural network', or 'optimize for inference'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'model quantization' is fairly specific to a particular ML task, which provides some distinctiveness. However, the lack of detail about which quantization types or frameworks are supported could cause overlap with other ML optimization skills. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation
0%

This skill is essentially a placeholder with no substantive content. It describes what a model quantization skill would do without providing any actual guidance, code examples, quantization techniques, tool recommendations, or workflows. The content is entirely generic boilerplate that could apply to any skill topic.
Suggestions

- Add concrete code examples for common quantization techniques (e.g., INT8 quantization with PyTorch, TensorFlow Lite conversion, ONNX quantization)
- Include a clear workflow with validation steps: model preparation -> quantization -> accuracy validation -> deployment verification
- Specify actual tools and libraries (e.g., torch.quantization, TensorRT, ONNX Runtime) with executable code snippets
- Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace it with specific quantization parameters, trade-offs, and decision criteria
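As a concrete illustration of the first suggestion, a skill like this could include a minimal post-training dynamic quantization sketch using `torch.quantization.quantize_dynamic`. The model below is a stand-in for the user's network, not anything from the skill itself:

```python
import torch
import torch.nn as nn

# Small stand-in model for the user's network
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Post-training dynamic quantization: Linear weights are stored as INT8,
# activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity-check that the quantized model still runs
x = torch.randn(1, 128)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # torch.Size([1, 10])
```

Dynamic quantization is the lowest-effort entry point since it needs no calibration data; static INT8 quantization and FP16 conversion would follow the same prepare/convert/validate workflow the suggestion describes.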
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that explains nothing specific about model quantization. Phrases like 'provides automated assistance' and 'follows industry best practices' are filler that Claude doesn't need. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it does abstractly ('provides step-by-step guidance') but never actually provides any guidance, examples, or executable instructions for model quantization. | 1 / 3 |
| Workflow Clarity | No workflow is defined. There are no steps, no sequence, and no validation checkpoints. The content only describes trigger phrases and vague capabilities without any actual process. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of generic text with no structure pointing to detailed materials. There are no references to additional documentation, examples, or API references for the complex topic of model quantization. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
69% (11 / 16 checks passed)
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 11 / 16 Passed |