
model-drift-detector

Model Drift Detector - Auto-activating skill for ML Deployment. Triggers on: model drift detector, model drift detector Part of the ML Deployment skill category.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill model-drift-detector

Overall score: 19%

Does it follow best practices?

Validation for skill structure


Activation: 7%

This description is severely underdeveloped, essentially serving as a placeholder rather than a functional skill description. It lacks any concrete actions, meaningful trigger terms, or guidance on when to use the skill. The redundant trigger term and reliance on category membership provide almost no value for skill selection.

Suggestions

Add specific actions the skill performs, e.g., 'Detects data drift, concept drift, and prediction drift in deployed ML models. Monitors feature distributions, generates drift reports, and triggers alerts.'

Include a 'Use when...' clause with natural trigger terms like 'model monitoring', 'production model degradation', 'feature distribution changes', 'model performance decline', or 'retrain trigger'.

Remove the redundant duplicate trigger term and expand with variations users would naturally say when needing drift detection capabilities.

Dimension | Reasoning | Score

Specificity

The description only names the skill ('Model Drift Detector') without describing any concrete actions. There are no verbs indicating what the skill actually does: no mention of detecting, monitoring, alerting, analyzing, or any other specific capability.

1 / 3

Completeness

The description fails to answer both 'what does this do' and 'when should Claude use it'. It only states the skill name and category without explaining functionality or providing explicit usage triggers beyond the redundant trigger phrase.

1 / 3

Trigger Term Quality

The trigger terms listed are just 'model drift detector' repeated twice, which is redundant and lacks natural variations users might say like 'drift detection', 'model monitoring', 'data drift', 'concept drift', or 'production model performance'.

1 / 3

Distinctiveness Conflict Risk

While 'model drift' is a specific ML concept that provides some distinctiveness, the lack of detail about what kind of drift (data drift, concept drift, prediction drift) or what actions it performs could cause overlap with other ML monitoring or deployment skills.

2 / 3

Total: 5 / 12 (Passed)

Implementation: 0%

This skill is essentially a placeholder with no actionable content. It describes what a model drift detector skill would do but provides zero implementation guidance, code examples, or specific techniques. The entire content could be replaced with actual drift detection methods (statistical tests, distribution comparisons, performance monitoring) to be useful.

Suggestions

Add concrete code examples for drift detection methods (e.g., PSI, KS test, KL divergence) with executable Python snippets

Define a clear workflow: collect baseline statistics → monitor incoming data → detect drift → alert/retrain, with specific thresholds and validation steps

Include specific metrics and tools (e.g., Evidently, WhyLabs, custom implementations) with configuration examples

Remove all generic boilerplate ('provides automated assistance', 'follows best practices') and replace with actual technical content
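To illustrate the kind of executable content the suggestions above call for, here is a minimal sketch of two of the named drift tests (PSI and the two-sample KS test) wired into the suggested baseline-vs-production workflow. This is not part of the reviewed skill: the `population_stability_index` helper, the rule-of-thumb PSI thresholds, and the synthetic data are all illustrative assumptions, using only NumPy and SciPy.

```python
import numpy as np
from scipy import stats

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline and a current feature distribution.

    Bin edges are derived from the baseline; the outer edges are
    widened to +/- inf so out-of-range production values still count.
    (Illustrative helper, not part of the reviewed skill.)
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) when a bin is empty
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Workflow sketch: collect baseline stats -> monitor incoming data -> detect -> alert
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
current = rng.normal(loc=0.5, scale=1.0, size=5000)   # shifted production feature values

psi = population_stability_index(baseline, current)
ks_stat, p_value = stats.ks_2samp(baseline, current)

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant drift
drift_detected = psi > 0.25 or p_value < 0.01
```

In a real skill, `drift_detected` would feed the alert/retrain step, and the thresholds would be stated explicitly in the skill body rather than assumed, as the review recommends.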

Dimension | Reasoning | Score

Conciseness

The content is padded with generic boilerplate that provides no actual information about model drift detection. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler that Claude already understands.

1 / 3

Actionability

No concrete code, commands, or specific techniques are provided. The skill describes what it does in abstract terms but never shows how to actually detect model drift - no algorithms, metrics, thresholds, or implementation details.

1 / 3

Workflow Clarity

No workflow is defined. There are no steps for detecting drift, no validation checkpoints, and no guidance on what to do when drift is detected. The content is purely descriptive metadata.

1 / 3

Progressive Disclosure

The content is a flat, uninformative structure with no references to detailed materials, no links to implementation guides, and no organization beyond generic section headers that contain no substantive content.

1 / 3

Total: 4 / 12 (Passed)

Validation: 69%

11 / 16 checks passed

Validation for skill structure

Criteria | Description | Result

description_trigger_hint

Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...')

Warning

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

metadata_version

'metadata' field is not a dictionary

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

body_steps

No step-by-step structure detected (no ordered list); consider adding a simple workflow

Warning

Total: 11 / 16 (Passed)

Reviewed

