
adapting-transfer-learning-models

This skill automates adapting pre-trained machine learning models via transfer learning. It triggers when the user asks to fine-tune a model, adapt a pre-trained model to a new dataset, or perform transfer learning. The skill analyzes the user's requirements, generates adaptation code with data validation and error handling, reports performance metrics, and saves artifacts with documentation. Use it to leverage existing models for new tasks or datasets while optimizing for performance and efficiency.
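The core pattern the skill automates is freezing a pre-trained feature extractor and training only a new task head. A minimal numpy sketch of that idea, where the "pretrained" backbone is a stand-in random projection rather than a real model (a real run would load an actual framework backbone instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained feature extractor (assumption: a real
# skill run would load an actual backbone; this random projection only
# illustrates the freeze-and-retrain pattern).
W_backbone = rng.normal(size=(20, 8)) * 0.1

def extract_features(x):
    return np.tanh(x @ W_backbone)  # frozen forward pass; W_backbone never updated

# Synthetic binary-classification data standing in for the user's new dataset.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task head (a logistic layer) trained from scratch on the frozen features.
w, b = np.zeros(8), 0.0
feats = extract_features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                                # gradient of log loss w.r.t. logits
    w -= 0.1 * feats.T @ grad / len(y)          # only the head's weights move
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"adapted-head training accuracy: {acc:.2f}")
```

Because the backbone stays fixed, only the small head is optimized, which is what makes transfer learning cheap relative to training from scratch.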

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill adapting-transfer-learning-models

Overall score: 79 · 1.01x
Quality: 43% (does it follow best practices?)
Impact: 86% · 1.01x (average score across 9 eval scenarios)

Optimize this skill with Tessl

npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/ai-ml/transfer-learning-adapter/skills/transfer-learning-adapter/SKILL.md

Evaluation results

Plant Disease Detection Model
Vision model fine-tuning with artifacts
Scenario score (with context): 100%

Criteria (without context / with context):
ML framework used: 100% / 100%
Pre-trained model loaded: 100% / 100%
Architecture modification: 100% / 100%
Data preprocessing: 100% / 100%
Data augmentation: 100% / 100%
Regularization applied: 100% / 100%
Training monitoring: 100% / 100%
Error handling present: 100% / 100%
Performance metrics reported: 100% / 100%
Artifacts saved: 100% / 100%
Documentation produced: 100% / 100%
Hyperparameters specified: 100% / 100%

Without context: $1.5121 · 11m 59s · 56 turns · 98 in / 21,299 out tokens

With context: $0.8524 · 4m 59s · 30 turns · 268 in / 15,656 out tokens
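The "Data augmentation" criterion in this scenario checks for transformations such as random flips and crops applied to training images. A minimal numpy sketch of the idea (the 4x4 array is a placeholder for a real image tensor):

```python
import numpy as np

rng = np.random.default_rng(42)
image = np.arange(16.0).reshape(4, 4)  # 4x4 stand-in for an image tensor

def augment(img, rng, crop=3):
    """Random horizontal flip followed by a random square crop."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # horizontal flip
    top = rng.integers(0, img.shape[0] - crop + 1)
    left = rng.integers(0, img.shape[1] - crop + 1)
    return img[top:top + crop, left:left + crop]  # random crop window

# Each epoch sees a fresh random view of the same underlying image.
batch = [augment(image, rng) for _ in range(4)]
print([a.shape for a in batch])  # [(3, 3), (3, 3), (3, 3), (3, 3)]
```

Production pipelines would use a framework's transform utilities rather than hand-rolled slicing, but the effect is the same: more varied training views from the same labeled data.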

Customer Support Ticket Classifier
NLP model adaptation with validation
Scenario score (with context): 48% · change vs. without context: +5%

Criteria (without context / with context):
ML framework used: 0% / 0%
Pre-trained language model: 0% / 0%
Tokenization present: 20% / 40%
Padding handled: 0% / 0%
Attention masks used: 0% / 0%
Architecture modification: 12% / 37%
Data validation: 100% / 100%
Error handling: 87% / 87%
Performance metrics: 100% / 100%
Artifacts saved: 100% / 100%
Process documentation: 100% / 100%
Training monitoring: 12% / 25%

Without context: $1.9525 · 11m 21s · 43 turns · 41 in / 37,405 out tokens

With context: $1.0450 · 5m 6s · 38 turns · 381 in / 15,198 out tokens
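The low-scoring NLP criteria in this scenario (tokenization, padding, attention masks) refer to standard sequence preparation for a language model. A minimal pure-Python sketch of the concept, independent of any real tokenizer library (the vocabulary and token IDs are invented for illustration):

```python
# Toy vocabulary; real systems use a learned subword tokenizer (illustrative only).
VOCAB = {"<pad>": 0, "reset": 1, "my": 2, "password": 3, "billing": 4, "issue": 5}

def tokenize(text):
    return [VOCAB[w] for w in text.lower().split() if w in VOCAB]

def pad_batch(texts, max_len=5):
    """Tokenize, pad to a fixed length, and build attention masks.

    Attention masks are 1 for real tokens and 0 for padding, so the
    model ignores padded positions.
    """
    input_ids, attention_masks = [], []
    for t in texts:
        ids = tokenize(t)[:max_len]
        mask = [1] * len(ids) + [0] * (max_len - len(ids))
        ids = ids + [VOCAB["<pad>"]] * (max_len - len(ids))
        input_ids.append(ids)
        attention_masks.append(mask)
    return input_ids, attention_masks

ids, masks = pad_batch(["reset my password", "billing issue"])
print(ids)    # [[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]]
print(masks)  # [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]]
```

Padding to a uniform length is what allows batched tensors, and the mask is what keeps the padding from influencing attention scores.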

Medical X-Ray Anomaly Detector
Hyperparameter tuning and regularization
Scenario score (with context): 100% · change vs. without context: +3%

Criteria (without context / with context):
ML framework used: 100% / 100%
Pre-trained model loaded: 100% / 100%
Dropout regularization: 100% / 100%
Weight decay regularization: 100% / 100%
Learning rate specified: 100% / 100%
Batch size specified: 100% / 100%
Data preprocessing: 100% / 100%
Architecture modification: 100% / 100%
Performance metrics reported: 100% / 100%
Hyperparameter documentation: 62% / 100%
Model card produced: 100% / 100%
Training monitoring: 100% / 100%

Without context: $1.0751 · 2s · 1 turns · 3 in / 46 out tokens

With context: $1.1488 · 7s · 1 turns · 3 in / 44 out tokens

Patient Readmission Risk Model
Tabular data transfer learning
Scenario score (with context): 89% · change vs. without context: +2%

Criteria (without context / with context):
ML framework used: 100% / 0%
Pre-trained model loaded: 55% / 66%
Architecture modification: 100% / 100%
Tabular data preprocessing: 100% / 100%
Data validation: 0% / 100%
Error handling: 100% / 100%
Training monitoring: 100% / 100%
Accuracy reported: 100% / 100%
Precision reported: 100% / 100%
Recall reported: 100% / 100%
F1-score reported: 100% / 100%
Model artifacts saved: 100% / 100%
Process documentation: 100% / 100%

Without context: $1.1972 · 13m 51s · 40 turns · 41 in / 18,875 out tokens

With context: $0.8216 · 3m 52s · 27 turns · 321 in / 14,253 out tokens

Technical Support Ticket Categorization
Multi-label text classification
Scenario score (with context): 100% · change vs. without context: +13%

Criteria (without context / with context):
ML framework used: 100% / 100%
Pre-trained language model: 100% / 100%
Tokenization applied: 100% / 100%
Padding handled: 100% / 100%
Attention masks used: 100% / 100%
Multi-label architecture: 100% / 100%
Training monitoring: 71% / 100%
Error handling: 0% / 100%
All four metrics reported: 66% / 100%
Model artifacts saved: 100% / 100%
Process documentation: 100% / 100%
Regularization applied: 100% / 100%

Without context: $1.1961 · 14m 51s · 34 turns · 899 in / 19,273 out tokens

With context: $1.1450 · 8m 20s · 37 turns · 330 in / 18,260 out tokens

Compliance-Ready Model Adaptation for Financial Document Classification
Artifacts saving and documentation
Scenario score (with context): 63% · change vs. without context: -8%

Criteria (without context / with context):
ML framework used: 0% / 0%
Pre-trained model loaded: 0% / 0%
Tokenization applied: 50% / 33%
Model saved to disk: 100% / 100%
Per-epoch training log: 50% / 0%
Accuracy in metrics report: 100% / 100%
Precision in metrics report: 100% / 100%
Recall in metrics report: 100% / 100%
F1-score in metrics report: 100% / 100%
Model card present: 100% / 100%
Hyperparameters documented: 100% / 100%
Error handling present: 100% / 100%
Architecture modification: 20% / 0%

Without context: $0.4995 · 2m 31s · 22 turns · 21 in / 9,227 out tokens

With context: $0.8485 · 4m 14s · 33 turns · 191 in / 13,530 out tokens
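The artifact criteria above (model saved to disk, model card present, hyperparameters documented) come down to writing files alongside the trained model. A minimal standard-library sketch, with illustrative names and hyperparameter values (none are prescribed by the skill):

```python
import json
import os
import tempfile

# Example values only; a real run records whatever the training used.
hyperparams = {"learning_rate": 2e-5, "batch_size": 16, "epochs": 3}
model_card = {
    "model_name": "finetuned-classifier",     # illustrative name
    "base_model": "pretrained-encoder",       # illustrative name
    "task": "financial document classification",
    "hyperparameters": hyperparams,
    "metrics": {"accuracy": None, "precision": None, "recall": None, "f1": None},
}

out_dir = tempfile.mkdtemp()
card_path = os.path.join(out_dir, "MODEL_CARD.json")
with open(card_path, "w") as f:
    json.dump(model_card, f, indent=2)  # persist the card next to the weights

with open(card_path) as f:
    loaded = json.load(f)
print(loaded["hyperparameters"]["batch_size"])  # 16
```

Keeping the card machine-readable (JSON here; Markdown is also common) makes the hyperparameters and metrics auditable after the fact, which is the point of a compliance-oriented scenario.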

Customer Call Emotion Classifier
Audio domain transfer learning
Scenario score (with context): 100%

Criteria (without context / with context):
ML framework used: 100% / 100%
Pre-trained model loaded: 100% / 100%
Audio preprocessing applied: 100% / 100%
Architecture modification: 100% / 100%
Data validation: 100% / 100%
Error handling: 100% / 100%
Training monitoring: 100% / 100%
Accuracy reported: 100% / 100%
Precision reported: 100% / 100%
Recall reported: 100% / 100%
F1-score reported: 100% / 100%
Model saved to disk: 100% / 100%
Per-epoch training log: 100% / 100%
Regularization applied: 100% / 100%
Hyperparameters documented: 100% / 100%

Without context: $1.4276 · 2s · 1 turns · 3 in / 51 out tokens

With context: $1.3996 · 1s · 1 turns · 3 in / 23 out tokens

Wildlife Camera Trap Species Classifier
Requirements analysis and workflow documentation
Scenario score (with context): 100% · change vs. without context: +6%

Criteria (without context / with context):
Requirements analysis present: 100% / 100%
Target task described: 100% / 100%
Dataset characteristics documented: 100% / 100%
Metrics rationale documented: 100% / 100%
ML framework used: 100% / 100%
Pre-trained model loaded: 100% / 100%
Architecture modification: 100% / 100%
Data preprocessing: 100% / 100%
Error handling: 0% / 100%
Training monitoring: 100% / 100%
All four metrics reported: 100% / 100%
Model saved: 100% / 100%
Regularization applied: 100% / 100%
Hyperparameters specified: 100% / 100%

Without context: $3.2665 · 2s · 1 turns · 3 in / 42 out tokens

With context: $0.9456 · 5m 12s · 32 turns · 31 in / 17,323 out tokens

Predictive Maintenance: Remaining Useful Life Estimation
Time-series regression transfer learning
Scenario score (with context): 78% · change vs. without context: +13%

Criteria (without context / with context):
ML framework used: 0% / 100%
Pre-trained model loaded: 0% / 22%
Regression output architecture: 70% / 100%
Sensor data preprocessing: 100% / 100%
Data validation: 0% / 0%
Error handling: 0% / 0%
Training monitoring: 100% / 100%
MAE reported: 100% / 100%
RMSE reported: 100% / 100%
R-squared reported: 100% / 100%
Model saved to disk: 100% / 100%
Per-epoch training log: 100% / 100%
Regularization applied: 100% / 100%
Hyperparameters specified: 60% / 100%

Without context: $2.1806 · 2s · 1 turns · 3 in / 58 out tokens

With context: $1.8184 · 2s · 1 turns · 3 in / 30 out tokens
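The MAE, RMSE, and R-squared criteria in this regression scenario are the standard error metrics. A short sketch of how they are computed (the remaining-useful-life values are arbitrary sample numbers):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, and R-squared for regression predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n             # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # root mean squared error
    mean_y = sum(y_true) / n
    ss_res = sum(e * e for e in errors)               # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)   # total sum of squares
    return mae, rmse, 1 - ss_res / ss_tot             # R^2 = 1 - SS_res/SS_tot

# Arbitrary remaining-useful-life values (e.g. hours) for illustration.
mae, rmse, r2 = regression_metrics([100, 80, 60, 40], [98, 85, 55, 42])
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.3f}")  # MAE=3.50  RMSE=3.81  R2=0.971
```

MAE reports average error in the target's own units, RMSE penalizes large misses more heavily, and R-squared expresses the fraction of target variance the model explains.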

Evaluated with agent: Claude Code · model: Claude Sonnet 4.6

