
model-quantization-tool

Model Quantization Tool - Auto-activating skill for ML Deployment. Triggers on: model quantization tool, model quantization tool Part of the ML Deployment skill category.

Quality: 3% — Does it follow best practices?

Impact: 83% (1.05x) — Average score across 3 eval scenarios

Security (by Snyk): Passed — No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/model-quantization-tool/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is a boilerplate template with almost no substantive content. It names the domain (model quantization) but provides zero concrete actions, no meaningful trigger terms beyond a duplicated phrase, and no explicit guidance on when Claude should select this skill. It reads as auto-generated metadata rather than a useful skill description.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Converts ML models to lower precision formats (INT8, FP16), applies post-training quantization, and benchmarks accuracy-performance tradeoffs.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about quantizing models, reducing model size for deployment, INT8/FP16 conversion, or optimizing inference latency.'

Remove the duplicated trigger term and replace with diverse natural language variations users would actually say, such as 'quantize', 'model compression', 'reduce precision', 'deployment optimization'.
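Putting those three suggestions together, a rewritten description might look like the following SKILL.md frontmatter. This is a sketch, not the actual skill's content; the field names follow the common skill-frontmatter convention and should be checked against the spec:

```yaml
---
name: model-quantization-tool
description: >
  Converts ML models to lower-precision formats (INT8, FP16), applies
  post-training quantization, and benchmarks accuracy/latency trade-offs.
  Use when the user asks about quantizing a model, model compression,
  reducing precision, reducing model size for deployment, or optimizing
  inference latency.
---
```

Note that the "Use when..." clause carries the diverse trigger terms ('quantizing', 'model compression', 'reducing precision') rather than repeating the skill name.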

Dimension — Reasoning — Score

Specificity

The description names a domain ('model quantization') but describes no concrete actions. There are no specific capabilities listed like 'converts models to INT8', 'applies post-training quantization', or 'reduces model size'. It only states it is a 'tool' without explaining what it does.

1 / 3

Completeness

The description fails to answer both 'what does this do' (no concrete actions) and 'when should Claude use it' (no explicit 'Use when...' clause). The 'Triggers on' line is just a repeated label, not meaningful trigger guidance.

1 / 3

Trigger Term Quality

The trigger terms are just 'model quantization tool' repeated twice, which is identical and redundant. It misses natural variations users would say like 'quantize model', 'INT8', 'reduce model precision', 'ONNX quantization', 'model compression', or 'inference optimization'.

1 / 3

Distinctiveness Conflict Risk

The term 'model quantization' is fairly specific to a niche ML domain, which provides some distinctiveness. However, the lack of concrete actions and the generic 'ML Deployment' category could cause overlap with other ML deployment skills.

2 / 3

Total: 5 / 12 — Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty shell with no actual content. It consists entirely of auto-generated boilerplate that describes what a model quantization skill would do without providing any actionable information, code, or workflows. It fails on every dimension of the rubric.

Suggestions

Add concrete, executable code examples for common quantization approaches (e.g., using PyTorch's dynamic/static quantization, GPTQ with auto-gptq, or bitsandbytes INT8/INT4 quantization).

Define a clear multi-step workflow: select quantization method → quantize model → validate accuracy/perplexity → benchmark inference speed → deploy, with explicit validation checkpoints.

Remove all generic boilerplate ('This skill provides automated assistance...') and replace with specific technical guidance covering trade-offs between quantization methods (PTQ vs QAT, INT8 vs INT4, etc.).

Add references to detailed sub-documents or sections for advanced topics like calibration datasets, mixed-precision strategies, and framework-specific guides (TensorRT, ONNX Runtime, vLLM).
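As a concrete illustration of the first suggestion, a minimal executable example the skill could include is post-training dynamic quantization with PyTorch. This is a sketch using a stand-in model; a real skill would quantize the user's model and add the accuracy-validation step the workflow suggestion calls for:

```python
import io

import torch
import torch.nn as nn

# Stand-in model (hypothetical); a real skill would load the user's model.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights become INT8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    """Serialized state_dict size, a rough proxy for on-disk model size."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print("fp32 bytes:", size_bytes(model))
print("int8 bytes:", size_bytes(quantized))
```

Dynamic quantization is the lowest-effort entry point; static quantization (with a calibration dataset) or weight-only methods such as GPTQ/bitsandbytes trade more setup for better latency or compression, which is exactly the kind of trade-off guidance the skill should spell out.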

Dimension — Reasoning — Score

Conciseness

The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'model quantization tool' excessively, and provides zero technical substance. Every token is wasted.

1 / 3

Actionability

There is no concrete guidance whatsoever—no code, no commands, no specific quantization techniques (e.g., INT8, GPTQ, AWQ), no library references, no examples. It only describes what the skill claims to do without actually doing it.

1 / 3

Workflow Clarity

No workflow is defined. The skill mentions 'step-by-step guidance' but provides none. There are no steps, no validation checkpoints, and no process to follow.

1 / 3

Progressive Disclosure

The content is a flat, uninformative page with no references to detailed materials, no links to examples or advanced guides, and no meaningful structure beyond generic headings.

1 / 3

Total: 4 / 12 — Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria — Description — Result

allowed_tools_field — 'allowed-tools' contains unusual tool name(s) — Warning

frontmatter_unknown_keys — Unknown frontmatter key(s) found; consider removing or moving to metadata — Warning
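A frontmatter shape that would avoid both warnings might look like the sketch below. The tool names and metadata keys here are hypothetical; valid `allowed-tools` values depend on the host agent, and unrecognized top-level keys should move under `metadata`:

```yaml
---
name: model-quantization-tool
description: Converts ML models to INT8/FP16 and benchmarks the trade-offs.
allowed-tools: Read, Write, Bash
metadata:
  category: ml-deployment
---
```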

Total: 9 / 11 — Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
