This skill enables an AI assistant to track and manage AI/ML model versions using the model-versioning-tracker plugin. It should be used when the user asks to manage model versions, track model lineage, log model performance, or implement version control f...
Track and manage AI/ML model versions using MLflow, DVC, or Weights & Biases. Log model metadata (hyperparameters, training data hash, framework version), record evaluation metrics (accuracy, F1, latency), manage model registry transitions (Staging, Production, Archived), and generate model cards documenting lineage and performance.
Prerequisites:
- An MLflow tracking server (local `mlflow server` or managed MLflow)
- `mlflow`, `pandas`, and the relevant ML framework installed

Workflow:
1. Set `MLFLOW_TRACKING_URI` and verify connectivity with `mlflow experiments list`.
2. Create an experiment with `mlflow experiments create --experiment-name <name>`.
3. Within an active run, log parameters, metrics, and the model artifact with `mlflow.<flavor>.log_model()`.
4. Register the model with `mlflow.register_model()`, passing the run URI and a descriptive model name.
5. Transition stages None -> Staging -> Production using `client.transition_model_version_stage()`. Archive previous production versions.
6. Compare versions by querying `mlflow.search_runs()` and generating comparison tables showing metric improvements between versions.
7. Generate model cards from `${CLAUDE_SKILL_DIR}/assets/model_card_template.md`.

See `${CLAUDE_SKILL_DIR}/assets/example_mlflow_workflow.yaml` for a complete workflow configuration.
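The first two steps amount to pointing the MLflow client at the tracking server and selecting an experiment. A minimal Python sketch, assuming MLflow 2.x; the tracking URI and experiment name below are illustrative:

```python
import mlflow
from mlflow.tracking import MlflowClient

# Step 1: point the client at the tracking server (URI is illustrative).
mlflow.set_tracking_uri("http://localhost:5000")

# Quick connectivity check: listing experiments fails fast if the server
# is unreachable (same spirit as `mlflow experiments list` on the CLI).
client = MlflowClient()
print([e.name for e in client.search_experiments()])

# Step 2: create (or reuse) an experiment for this model family.
mlflow.set_experiment("image-classification")
```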
Tracking a new image classification model version: Log a ResNet-50 fine-tuned on a custom dataset. Record hyperparameters (lr=0.001, epochs=50, optimizer=Adam), metrics (val_accuracy=0.94, val_loss=0.18, inference_latency_ms=12), and the serialized model artifact. Register as version 3 in the model registry and transition to Staging for validation.
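A hedged sketch of what this scenario looks like in code, assuming the MLflow PyTorch flavor. The registered model name `image-classifier` and the stand-in module are illustrative; the actual fine-tuned ResNet-50 and its training loop are not shown:

```python
import mlflow
import mlflow.pytorch
import torch.nn as nn
from mlflow.tracking import MlflowClient

with mlflow.start_run() as run:
    # Hyperparameters and evaluation metrics from the scenario above.
    mlflow.log_params({"lr": 0.001, "epochs": 50, "optimizer": "Adam"})
    mlflow.log_metrics({"val_accuracy": 0.94, "val_loss": 0.18,
                        "inference_latency_ms": 12})

    # Stand-in module; in practice this is the fine-tuned ResNet-50.
    model = nn.Sequential(nn.Flatten(), nn.Linear(512, 10))
    mlflow.pytorch.log_model(model, artifact_path="model")

# Register the logged artifact as a new version and move it to Staging.
mv = mlflow.register_model(f"runs:/{run.info.run_id}/model", "image-classifier")
MlflowClient().transition_model_version_stage(
    name="image-classifier", version=mv.version, stage="Staging"
)
```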
Comparing model versions before production promotion: Query MLflow for all versions of the sentiment-analysis model. Generate a comparison table showing accuracy improved from 0.87 (v2) to 0.91 (v3) while inference latency increased from 8ms to 15ms. Recommend promoting v3 to Production only if latency is acceptable for the use case.
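One way to assemble such a comparison is to walk the registered versions and pull metrics from each version's source run. A sketch, assuming the metric keys `accuracy` and `inference_latency_ms` were logged on each run (the keys are illustrative):

```python
import pandas as pd
from mlflow.tracking import MlflowClient

client = MlflowClient()

rows = []
# Collect every registered version of the model and its run's metrics.
for mv in client.search_model_versions("name='sentiment-analysis'"):
    run = client.get_run(mv.run_id)
    rows.append({
        "version": int(mv.version),
        "stage": mv.current_stage,
        "accuracy": run.data.metrics.get("accuracy"),
        "inference_latency_ms": run.data.metrics.get("inference_latency_ms"),
    })

comparison = pd.DataFrame(rows).sort_values("version")
print(comparison.to_string(index=False))
```

The resulting table makes the accuracy-versus-latency trade-off explicit before any promotion decision.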
Generating a model card for compliance review: Extract metadata from MLflow model registry version 5: training dataset (100K customer reviews), evaluation results (F1=0.89 on held-out test set), known limitations (struggles with sarcasm and multilingual input), and intended use (customer feedback classification). Output a structured Markdown model card.
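A hedged sketch of pulling registry metadata into a Markdown card. The model name below is illustrative, and in practice the limitations and intended-use text would come from the version's registry description or tags, filled into the skill's `model_card_template.md`:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Name is illustrative; version 5 matches the scenario above.
name, version = "customer-feedback-classifier", "5"
mv = client.get_model_version(name=name, version=version)
run = client.get_run(mv.run_id)

params = "\n".join(f"- {k}: {v}" for k, v in run.data.params.items())
metrics = "\n".join(f"- {k}: {v}" for k, v in run.data.metrics.items())

card = f"""# Model Card: {name} v{mv.version}

## Lineage
- Source run: {mv.run_id}

## Training parameters
{params}

## Evaluation results
{metrics}

## Intended use and known limitations
{mv.description or "(fill in from the registry description)"}
"""

with open("MODEL_CARD.md", "w", encoding="utf-8") as f:
    f.write(card)
```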
| Error | Cause | Solution |
|---|---|---|
| MLflow connection refused | Tracking server not running or wrong URI | Verify `MLFLOW_TRACKING_URI` is correct; start the server with `mlflow server --host 0.0.0.0 --port 5000` |
| Artifact upload failed | Insufficient permissions on artifact store | Check S3/GCS bucket permissions; verify the IAM role has write access to the artifact path |
| Model registration conflict | Model name already exists with incompatible schema | Use a versioned model name or delete the conflicting registry entry |
| Metrics not logged | MLflow run ended before logging completed | Ensure all `log_metric()` calls happen within the active run context (`with mlflow.start_run():`) |
| Stage transition denied | Model version already in target stage | Archive the existing version in that stage first, then retry the transition |