
model-drift-detector

Description: Model Drift Detector - Auto-activating skill for ML Deployment. Triggers on: model drift detector, model drift detector Part of the ML Deployment skill category.

Quality: 3% (Does it follow best practices?)

Impact: 99%, 1.02x (average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/model-drift-detector/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a title and category label with no substantive content. It fails to describe what the skill does, provides no meaningful trigger terms beyond its own name repeated, and lacks any 'Use when...' guidance. It gives an agent almost nothing to differentiate this skill from others in a large skill library.

Suggestions

- Add concrete actions the skill performs, e.g., 'Detects statistical drift in model predictions, compares feature distributions against baselines, generates drift reports, and triggers retraining alerts.'
- Add an explicit 'Use when...' clause with natural trigger scenarios, e.g., 'Use when the user mentions data drift, concept drift, model degradation, prediction monitoring, distribution shift, or needs to evaluate whether a deployed model's performance has changed.'
- Include diverse natural trigger terms users might say, such as 'data drift', 'concept drift', 'model monitoring', 'feature drift', 'distribution shift', 'model staleness', 'retraining', and 'production model performance'. A combined sketch follows this list.
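Taken together, the three suggestions might produce frontmatter along these lines. This is a hypothetical sketch, not the maintainer's wording; the name and description keys follow the common SKILL.md convention.

```yaml
---
name: model-drift-detector
description: >
  Detects statistical drift in deployed ML models: compares feature and
  prediction distributions against a training baseline (PSI, KS test,
  KL divergence), generates drift reports, and triggers retraining alerts.
  Use when the user mentions data drift, concept drift, feature drift,
  distribution shift, model degradation, model staleness, prediction
  monitoring, or production model performance.
---
```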

Dimension scores

Specificity (1 / 3): The description names a domain ('ML Deployment') and a concept ('Model Drift Detector') but does not describe any concrete actions. There are no verbs indicating what the skill actually does: no 'detects', 'monitors', 'alerts', 'compares distributions', etc.

Completeness (1 / 3): The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause is essentially just the skill name repeated as a trigger phrase. There is no explicit 'Use when...' guidance with meaningful trigger scenarios.

Trigger Term Quality (1 / 3): The trigger terms listed are just 'model drift detector' repeated twice. There are no natural variations a user might say, such as 'data drift', 'concept drift', 'model degradation', 'prediction drift', 'distribution shift', or 'monitoring model performance'.

Distinctiveness / Conflict Risk (2 / 3): The term 'model drift detector' is somewhat specific to a niche ML concept, which provides some distinctiveness. However, the lack of concrete actions and the vague 'ML Deployment' category could cause overlap with other ML-related skills.

Total: 5 / 12. Passed.

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder with no substantive content. It repeatedly describes itself in abstract terms without providing any actionable guidance on model drift detection—no statistical methods (PSI, KS test, KL divergence), no code examples, no monitoring workflows, no tool recommendations. It fails on every dimension of the rubric.

Suggestions

- Add concrete, executable code examples for drift detection methods (e.g., PSI calculation, KS test, KL divergence) using specific libraries like evidently, alibi-detect, or scipy. (A scipy-based sketch follows this list.)
- Define a clear workflow: data collection → reference/current distribution comparison → threshold evaluation → alerting → retraining trigger, with explicit validation checkpoints.
- Remove all meta-description sections ('Purpose', 'When to Use', 'Example Triggers', 'Capabilities') that describe the skill rather than teaching drift detection, and replace them with actual technical content.
- Add specific configuration examples for production monitoring (e.g., Prometheus metrics, Grafana dashboards, or evidently monitoring setup) with concrete thresholds and alerting rules. (A Prometheus exporter sketch also follows.)
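To make the first two suggestions concrete, here is a minimal sketch using only numpy and scipy. It assumes baseline and live feature values arrive as 1-D arrays; the 0.2 PSI and p < 0.01 cutoffs are common rules of thumb, not thresholds taken from this skill.

```python
import numpy as np
from scipy import stats


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index over quantile bins of the reference data."""
    # Bin edges from reference quantiles, so each bin holds roughly equal
    # reference mass; open-ended outer edges catch out-of-range live values.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    # Clipping avoids log(0) and division by zero in empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))


def check_feature_drift(reference: np.ndarray, current: np.ndarray) -> dict:
    """Run PSI and a two-sample KS test; flag drift if either check trips."""
    psi_value = psi(reference, current)
    ks = stats.ks_2samp(reference, current)
    return {
        "psi": psi_value,
        "ks_statistic": float(ks.statistic),
        "ks_pvalue": float(ks.pvalue),
        # Rule-of-thumb thresholds: PSI > 0.2 or KS p-value < 0.01.
        "drift_detected": psi_value > 0.2 or ks.pvalue < 0.01,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    live = rng.normal(0.4, 1.2, 10_000)      # shifted production distribution
    print(check_feature_drift(baseline, live))
```

Quantile-based bin edges keep each reference bin roughly equally populated, which makes the PSI less sensitive to arbitrary binning choices.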
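For the production-monitoring suggestion, one lightweight option (an assumption here, not something the skill prescribes) is publishing the drift numbers as Prometheus gauges via prometheus-client; the metric and label names below are illustrative.

```python
# Hypothetical exporter: publishes drift metrics for Prometheus to scrape.
# Requires `pip install prometheus-client`.
import time

from prometheus_client import Gauge, start_http_server

PSI_GAUGE = Gauge("model_feature_psi", "PSI vs. training baseline", ["feature"])
KS_PVALUE_GAUGE = Gauge("model_feature_ks_pvalue", "KS-test p-value vs. baseline", ["feature"])


def publish(feature: str, result: dict) -> None:
    """Push the latest drift numbers for one feature into the gauges."""
    PSI_GAUGE.labels(feature=feature).set(result["psi"])
    KS_PVALUE_GAUGE.labels(feature=feature).set(result["ks_pvalue"])


if __name__ == "__main__":
    start_http_server(9109)  # scrape endpoint: http://localhost:9109/metrics
    while True:
        # In practice, recompute check_feature_drift() on a fresh data window here.
        publish("age", {"psi": 0.05, "ks_pvalue": 0.42})
        time.sleep(60)
```

A Prometheus alerting rule could then fire when model_feature_psi stays above 0.2 for a sustained window, feeding Grafana dashboards or Alertmanager.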

Dimension scores

Conciseness (1 / 3): The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague information about 'model drift detector' without adding substance.

Actionability (1 / 3): There is zero concrete guidance: no code, no commands, no specific algorithms, no statistical tests, no thresholds, no library recommendations. The skill describes rather than instructs, offering only vague promises like 'provides step-by-step guidance' without actually providing any.

Workflow Clarity (1 / 3): No workflow is defined at all. There are no steps, no sequence, no validation checkpoints. The content merely states it can provide guidance without laying out any process for detecting model drift.

Progressive Disclosure (1 / 3): The content is a flat, repetitive document with no meaningful structure. There are no references to detailed files, no layered organization, and the sections are redundant rather than progressively informative.

Total: 4 / 12. Passed.

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.

Criteria

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11. Passed.
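A hedged sketch of how both warnings might be resolved in the SKILL.md frontmatter. The metadata grouping follows the validator's own hint; the tool names and category value are illustrative assumptions, not taken from the repository.

```yaml
---
name: model-drift-detector
description: Detects statistical drift in deployed ML models...
# Keep only tool names the runtime actually recognizes.
allowed-tools: Read, Grep, Bash
# Unrecognized top-level keys moved under metadata, per the validator hint.
metadata:
  category: ml-deployment
---
```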

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.