
gpu-resource-optimizer

An auto-activating skill for ML Deployment. Triggers on: "gpu resource optimizer". Part of the ML Deployment skill category.


Quality: 3% (Does it follow best practices?)
Impact: 99%, 1.05x (average score across 3 eval scenarios)
Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/gpu-resource-optimizer/SKILL.md

Evaluation results

GPU Memory Optimization for a PyTorch Inference Service
Production-ready GPU memory optimization
Overall with context: 100% · uplift: 5%

| Criteria | Without context | With context |
| --- | --- | --- |
| Dynamic batching | 100% | 100% |
| GPU memory limit config | 100% | 100% |
| torch.cuda.empty_cache or memory cleanup | 100% | 100% |
| Half-precision inference | 100% | 100% |
| No-grad context | 100% | 100% |
| Batch size in config | 100% | 100% |
| Error handling for OOM | 50% | 100% |
| Step-by-step deployment instructions | 100% | 100% |
| Hardware-specific guidance | 100% | 100% |
| Production config separation | 100% | 100% |
| Timeout parameter | 100% | 100% |

Without context: $0.4098 · 2m 26s · 14 turns · 15 in / 8,863 out tokens

With context: $0.6724 · 3m 26s · 27 turns · 319 in / 11,921 out tokens
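The largest gap in this scenario was "Error handling for OOM" (50% without context). A minimal sketch of the kind of handling that criterion checks for might look like the following; the function and model names here are illustrative assumptions, not code from the skill itself. In a real PyTorch service the except clause would catch `torch.cuda.OutOfMemoryError` and call `torch.cuda.empty_cache()` before retrying.

```python
def run_with_oom_backoff(infer, batch, min_batch_size=1, oom_error=RuntimeError):
    """Run `infer` over `batch`, halving the chunk size on each OOM error."""
    size = len(batch)
    while size >= min_batch_size:
        try:
            # Process the batch in chunks of the current size.
            return [result
                    for start in range(0, len(batch), size)
                    for result in infer(batch[start:start + size])]
        except oom_error:
            size //= 2  # back off: halve the chunk size and retry

    raise MemoryError("batch does not fit even at min_batch_size")

# Demo with a fake model that "OOMs" on chunks larger than 2 items.
def fake_infer(chunk):
    if len(chunk) > 2:
        raise RuntimeError("CUDA out of memory (simulated)")
    return [x * 10 for x in chunk]

print(run_with_oom_backoff(fake_infer, [1, 2, 3, 4, 5]))  # [10, 20, 30, 40, 50]
```

Backing off to a smaller batch rather than failing the request is what separates a 50% answer from a 100% one here: the service degrades throughput instead of dropping traffic.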

Multi-GPU Training Pipeline for a CV Team
Multi-GPU MLOps pipeline configuration
Overall with context: 100% · uplift: 8%

| Criteria | Without context | With context |
| --- | --- | --- |
| Distributed training setup | 100% | 100% |
| Per-job GPU allocation | 100% | 100% |
| Memory limit per GPU | 100% | 100% |
| Checkpointing | 100% | 100% |
| Job queue configuration | 100% | 100% |
| Hardware spec in config | 100% | 100% |
| Step-by-step setup instructions | 100% | 100% |
| Environment variable GPU control | 100% | 100% |
| No-grad in validation | 0% | 100% |
| Timeout or max runtime config | 100% | 100% |
| Separate config from code | 100% | 100% |

Without context: $0.4778 · 2m 54s · 16 turns · 16 in / 11,267 out tokens

With context: $0.8707 · 4m 3s · 29 turns · 30 in / 16,762 out tokens
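The "Environment variable GPU control" and "Per-job GPU allocation" criteria both come down to pinning each queued job to a device via `CUDA_VISIBLE_DEVICES`, which CUDA uses to restrict which GPUs a process can see. A hypothetical sketch (job names and the round-robin policy are assumptions for illustration):

```python
import os
from itertools import cycle

def build_job_envs(jobs, gpu_ids):
    """Assign jobs to GPUs round-robin; return a {job: env} mapping."""
    assignment = {}
    gpus = cycle(gpu_ids)
    for job in jobs:
        env = dict(os.environ)
        # Pin this job to a single GPU: the training process will only
        # see the one device it was allocated.
        env["CUDA_VISIBLE_DEVICES"] = str(next(gpus))
        assignment[job] = env
    return assignment

envs = build_job_envs(["train-resnet", "train-vit", "train-unet"], [0, 1])
print({job: env["CUDA_VISIBLE_DEVICES"] for job, env in envs.items()})
# {'train-resnet': '0', 'train-vit': '1', 'train-unet': '0'}
```

Each env dict would then be passed to the job launcher (e.g. `subprocess.Popen(..., env=env)`), so GPU assignment lives in the queue configuration rather than in training code.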

GPU Health Monitoring for a Production ML Serving Cluster
GPU monitoring and production alerting
Overall with context: 97% · uplift: 1%

| Criteria | Without context | With context |
| --- | --- | --- |
| nvidia-smi integration | 100% | 100% |
| Zero-utilization detection | 100% | 100% |
| Memory leak / stale process detection | 80% | 70% |
| Structured log output | 75% | 100% |
| Configurable polling interval | 100% | 100% |
| Threshold-based alerting | 100% | 100% |
| Alert severity levels | 100% | 100% |
| Configurable thresholds | 100% | 100% |
| Step-by-step runbook | 100% | 100% |
| Remediation steps | 100% | 100% |
| Output path config | 100% | 100% |

Without context: $0.7309 · 3m 44s · 22 turns · 23 in / 15,056 out tokens

With context: $0.7904 · 4m 19s · 25 turns · 182 in / 16,110 out tokens
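Several of these criteria (threshold-based alerting, severity levels, configurable thresholds, zero-utilization detection) combine into one small classifier over each polling sample. A sketch under assumed names and thresholds, not the skill's actual code; in a real monitor the sample values would be parsed from something like `nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits`:

```python
def classify_gpu_sample(util_pct, mem_used_pct,
                        warn_mem=80.0, crit_mem=95.0):
    """Return (severity, message) for one polling sample.

    warn_mem / crit_mem are the configurable thresholds the criteria call for.
    """
    if mem_used_pct >= crit_mem:
        return ("critical", f"GPU memory at {mem_used_pct:.0f}%")
    if mem_used_pct >= warn_mem:
        return ("warning", f"GPU memory at {mem_used_pct:.0f}%")
    if util_pct == 0.0:
        # Zero-utilization detection: an idle GPU in a serving cluster
        # often means a wedged or stale process.
        return ("warning", "GPU utilization is 0%")
    return ("ok", "GPU healthy")

print(classify_gpu_sample(85.0, 97.0))  # ('critical', 'GPU memory at 97%')
print(classify_gpu_sample(0.0, 40.0))   # ('warning', 'GPU utilization is 0%')
```

The tuple output maps directly onto the "structured log output" criterion: each sample becomes one log record with a severity field, ready for threshold-driven alert routing.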

Repository: jeremylongshore/claude-code-plugins-plus-skills
Evaluated
Agent: Claude Code
Model: Claude Sonnet 4.6


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.