
model-evaluation-metrics

Model Evaluation Metrics - Auto-activating skill for ML Training. Triggers on: model evaluation metrics. Part of the ML Training skill category.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill model-evaluation-metrics

Overall score: 19% (best-practices validation for skill structure)


Model Evaluation Metrics

Purpose

This skill provides automated assistance for model evaluation metrics tasks within the ML Training domain.

When to Use

This skill activates automatically when you:

  • Mention "model evaluation metrics" in your request
  • Ask about model evaluation metrics patterns or best practices
  • Need help with the broader ML Training workflow: data preparation, model training, hyperparameter tuning, and experiment tracking

Capabilities

  • Provides step-by-step guidance for model evaluation metrics
  • Follows industry best practices and patterns
  • Generates production-ready code and configurations
  • Validates outputs against common standards
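As a sketch of the kind of evaluation-metric code this skill targets, here is a minimal example using scikit-learn's metrics API (one of the libraries named in the skill's tags). The labels and predictions are made-up toy data, and this is an illustrative example, not output produced by the skill itself:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth labels and model predictions (hypothetical data)
y_true = [0, 1, 1, 0, 1, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

# Core binary-classification metrics
metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}

for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

For multiclass problems the same calls take an `average` argument (e.g. `average="macro"`); choosing the right averaging scheme is exactly the sort of decision this skill is meant to guide.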

Example Triggers

  • "Help me with model evaluation metrics"
  • "Set up model evaluation metrics"
  • "How do I implement model evaluation metrics?"

Related Skills

Part of the ML Training skill category. Tags: ml, training, pytorch, tensorflow, sklearn

Repository
github.com/jeremylongshore/claude-code-plugins-plus-skills
