
model-drift-detector

Model Drift Detector - Auto-activating skill for ML Deployment. Triggers on: model drift detector, model drift detector Part of the ML Deployment skill category.


Quality: 3% — Does it follow best practices?

Impact: 99% (1.02x) — Average score across 3 eval scenarios

Security (by Snyk): Passed — No known issues

Optimize this skill with Tessl

```
npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/model-drift-detector/SKILL.md
```

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is severely underdeveloped, essentially providing only a skill name and category without any substantive content. It fails to describe what the skill does, when to use it, or provide meaningful trigger terms. The redundant trigger term ('model drift detector' listed twice) suggests this may be auto-generated placeholder content.

Suggestions

- Add specific actions the skill performs, e.g., 'Detects statistical drift in model inputs and outputs, monitors prediction quality degradation, generates drift reports and alerts'
- Add a proper 'Use when...' clause with explicit triggers, e.g., 'Use when monitoring deployed models, checking for data drift, investigating prediction quality issues, or when users mention drift detection, model monitoring, or feature distribution changes'
- Expand trigger terms to include natural variations users would say: 'data drift', 'concept drift', 'model degradation', 'feature drift', 'distribution shift', 'model monitoring'
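Taken together, the suggestions above imply a description along these lines. This is a hypothetical sketch of what improved SKILL.md frontmatter could look like, not the skill's actual content:

```yaml
# Hypothetical frontmatter illustrating the suggestions above
name: model-drift-detector
description: >
  Detects statistical drift in model inputs and outputs, monitors
  prediction quality degradation, and generates drift reports and alerts.
  Use when monitoring deployed models, checking for data drift or concept
  drift, investigating prediction quality issues, or when users mention
  model degradation, feature drift, distribution shift, or model monitoring.
```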

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description only names the skill ('Model Drift Detector') without describing any concrete actions. There are no verbs indicating what the skill actually does: no mention of detecting, monitoring, alerting, analyzing, or any specific capabilities. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and provides no explicit 'when to use' guidance. The 'Triggers on' field is not a proper 'Use when...' clause and only repeats the skill name. | 1 / 3 |
| Trigger Term Quality | The 'Triggers on' field redundantly lists 'model drift detector' twice, providing no natural keyword variations. Missing common terms users might say, like 'data drift', 'concept drift', 'model degradation', 'prediction quality', 'feature drift', or 'monitoring'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'model drift detector' is fairly specific to ML monitoring, which provides some distinctiveness. However, without describing specific capabilities, it could overlap with other ML monitoring or deployment skills in the same category. | 2 / 3 |

Total: 5 / 12 — Passed

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder template with no substantive content about model drift detection. It lacks any concrete implementation details, code examples, statistical methods, or monitoring approaches. The entire document could be replaced with actual drift detection techniques (PSI, KS tests, feature distribution monitoring) and would provide far more value.

Suggestions

- Add concrete code examples for drift detection methods (e.g., Population Stability Index, Kolmogorov-Smirnov test, feature distribution comparisons)
- Include a clear workflow: data collection → baseline establishment → monitoring setup → drift detection → alerting → remediation steps
- Provide specific configuration examples for drift thresholds and monitoring intervals
- Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with actual technical guidance

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is padded with generic boilerplate that provides no actual information about model drift detection. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler that Claude already understands. | 1 / 3 |
| Actionability | No concrete code, commands, or specific techniques for detecting model drift are provided. The content describes what the skill does in abstract terms but gives zero executable guidance on implementing drift detection (e.g., statistical tests, monitoring metrics, threshold configurations). | 1 / 3 |
| Workflow Clarity | No workflow is defined. There are no steps for setting up drift detection, no validation checkpoints, and no process for responding to detected drift. The skill merely lists vague capabilities without any sequenced instructions. | 1 / 3 |
| Progressive Disclosure | The content is a flat, uninformative document with no structure pointing to detailed resources. There are no references to implementation guides, API documentation, or examples that would provide actual drift detection knowledge. | 1 / 3 |

Total: 4 / 12 — Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 Passed

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 9 / 11 — Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
