model-quantization-tool

Model Quantization Tool - Auto-activating skill for ML Deployment. Triggers on: model quantization tool, model quantization tool Part of the ML Deployment skill category.


Quality: 3% (Does it follow best practices?)

Impact: 83%, 1.05x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/model-quantization-tool/SKILL.md

Quality

Discovery

7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is severely underdeveloped, essentially serving as a placeholder rather than a functional skill description. It lacks any concrete actions, meaningful trigger terms, or guidance on when Claude should select this skill. The only redeeming quality is that 'model quantization' is a somewhat specific domain term.

Suggestions

- Add specific capabilities like 'Quantize neural network models to INT8/FP16 precision, reduce model size for edge deployment, convert between quantization formats'
- Replace redundant trigger terms with natural variations: 'quantize', 'model compression', 'reduce precision', 'INT8', 'FP16', 'optimize model size', 'edge deployment'
- Add an explicit 'Use when...' clause: 'Use when the user needs to reduce model size, convert model precision, or prepare models for resource-constrained deployment environments'
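Putting those suggestions together, an improved description in the skill's frontmatter might look something like this (hypothetical wording assembled from the suggestions above, not the actual skill's content):

```yaml
---
name: model-quantization-tool
description: >
  Quantize neural network models to INT8 or FP16 precision to reduce
  model size and latency for edge deployment, and convert between
  quantization formats. Use when the user needs to reduce model size,
  convert model precision, or prepare models for resource-constrained
  deployment environments. Triggers: quantize, model compression,
  reduce precision, INT8, FP16, optimize model size, edge deployment.
---
```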

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description only names the tool ('Model Quantization Tool') without describing any concrete actions. There are no specific capabilities listed like 'quantize models to INT8', 'reduce model size', or 'convert precision formats'. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and while it mentions triggers, they are just the skill name repeated. There is no 'Use when...' clause or explicit guidance on when to select this skill. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('model quantization tool' repeated twice) and miss natural variations users would say like 'quantize', 'reduce model size', 'INT8', 'FP16', 'compress model', or 'optimize for deployment'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'model quantization' is fairly specific to a particular ML task, which provides some distinctiveness. However, the lack of detail about what types of quantization or frameworks it supports could cause overlap with other ML deployment skills. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an empty template with no actual content about model quantization. It contains only generic boilerplate describing what a skill should do without providing any actionable guidance, code examples, tool recommendations, or workflows for quantizing ML models.

Suggestions

- Add concrete code examples showing how to quantize models using common frameworks (e.g., PyTorch quantization, TensorFlow Lite, ONNX Runtime)
- Include a clear workflow with steps: select quantization method → prepare calibration data → quantize → validate accuracy → export
- Provide specific guidance on quantization types (INT8, FP16, dynamic vs static) with when to use each
- Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with actual technical content
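As a sketch of the kind of concrete content these suggestions call for, here is a minimal PyTorch dynamic-quantization example (the model below is a stand-in for illustration; dynamic quantization stores weights as INT8 and needs no calibration data, which makes it the simplest starting point for Linear-heavy models):

```python
import torch
import torch.nn as nn

# Small stand-in model; replace with the model you want to shrink.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Dynamic quantization: Linear weights are stored as INT8 and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity-check that the quantized model still produces outputs of the
# expected shape before comparing its accuracy against the FP32 original.
x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Static INT8 quantization (which does require a calibration pass over representative data) or FP16 conversion would follow the same validate-before-export pattern.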

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is padded with generic boilerplate that explains nothing specific about model quantization. Phrases like 'provides automated assistance' and 'follows industry best practices' are filler that Claude doesn't need. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it does abstractly ('provides step-by-step guidance') but never actually provides any guidance, examples, or executable content. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains zero actual steps for performing model quantization. | 1 / 3 |
| Progressive Disclosure | No references to detailed materials, no links to examples or documentation, and no structured navigation. The content is a shallow placeholder with no depth or organization. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
