
fine-tuning-expert

tessl i github:jeffallan/claude-skills --skill fine-tuning-expert

Use when fine-tuning LLMs, training custom models, or optimizing model performance for specific tasks. Invoke for parameter-efficient methods, dataset preparation, or model adaptation.

Overall: 59%


Validation: 75%
| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_examples | No examples detected (no code fences and no 'Example' wording) | Warning |

Total: 12 / 16 (Passed)
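Three of the four warnings point at the SKILL.md frontmatter. A minimal sketch of frontmatter that would clear them, assuming the standard SKILL.md layout (the license value and metadata contents below are placeholders, not taken from the actual skill):

```yaml
---
name: fine-tuning-expert
description: >-
  Use when fine-tuning LLMs, training custom models, or optimizing model
  performance for specific tasks. Invoke for parameter-efficient methods,
  dataset preparation, or model adaptation.
license: MIT        # placeholder: declare whichever license actually applies
metadata:           # must be a mapping, not a bare string or list
  version: "0.1.0"  # placeholder; move any unknown top-level keys under here
---
```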

Implementation: 42%

This skill has good structural organization with clear progressive disclosure through the reference table, but it critically lacks actionability. It reads more like a role description than executable guidance: there are no code examples, no specific commands, and no concrete implementations, despite the output templates promising 'scripts'. The constraints are useful, but the skill fails to show Claude how to actually perform fine-tuning tasks.

Suggestions

- Add executable code examples for at least one complete workflow (e.g., a minimal LoRA fine-tuning script with the PEFT library; a sketch of what this could look like follows this list)
- Replace the 'Output Templates' description with actual template code/configs that Claude can adapt
- Remove the 'Role Definition' section; Claude doesn't need to be told it's acting as an ML engineer
- Add validation checkpoints to the Core Workflow (e.g., 'Validate dataset format before proceeding to training')
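To make the first suggestion (and the validation checkpoint from the last one) concrete, here is a minimal sketch of the kind of script the reviewers are asking for. It assumes the Hugging Face transformers, peft, and datasets libraries; the base model name, hyperparameters, and train.jsonl path are illustrative placeholders, not part of the reviewed skill:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Parameter-efficient adapter: only the low-rank LoRA matrices are trained.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()

# Validation checkpoint (last suggestion): fail fast on malformed data
# before any GPU time is spent.
dataset = load_dataset("json", data_files="train.jsonl")["train"]
assert "text" in dataset.column_names, "each JSONL record needs a 'text' field"

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,  # monitor training loss as it runs
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/lora-adapter")  # saves only the adapter weights
```

A single worked example at this level of detail would let Claude adapt hyperparameters and dataset handling instead of inventing the workflow from scratch.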

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes some unnecessary sections, like 'Role Definition', that restate what Claude already knows about being an ML engineer. The 'Knowledge Reference' section is a keyword dump that adds little value. However, the core workflow and constraints are reasonably tight. | 2 / 3 |
| Actionability | The skill provides no executable code, no concrete commands, and no specific examples. It describes what to do ('Configure hyperparameters, monitor loss') but never shows how. The 'Output Templates' section promises scripts but provides none. | 1 / 3 |
| Workflow Clarity | The 5-step core workflow provides a clear sequence, and the MUST DO/MUST NOT DO constraints add useful guardrails. However, there are no validation checkpoints or feedback loops for error recovery in the workflow steps themselves. | 2 / 3 |
| Progressive Disclosure | The reference table provides clear one-level-deep navigation to specific topics with explicit 'Load When' guidance. The structure appropriately separates overview content from detailed references. | 3 / 3 |

Total: 8 / 12 (Passed)

Activation: 65%

The description has strong trigger-term coverage for the ML fine-tuning domain, but its structure is inverted: it focuses heavily on when to use the skill while remaining vague about what the skill actually does. The lack of concrete capabilities (what outputs or actions it performs) makes it harder to distinguish from related ML skills.

Suggestions

- Add specific concrete actions the skill performs, e.g., 'Configures LoRA/QLoRA adapters, prepares training datasets, sets hyperparameters, and monitors training runs'
- Restructure to lead with capabilities before trigger conditions: 'Configures and executes LLM fine-tuning workflows including [specific actions]. Use when...' (a sketch of a reworded description follows this list)
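One possible rewording that applies both suggestions, with the capability list assembled from the reviewers' own examples rather than from the skill itself:

```yaml
description: >-
  Configures and executes LLM fine-tuning workflows: sets up LoRA/QLoRA
  adapters, prepares JSONL training datasets, selects hyperparameters, and
  monitors training runs. Use when fine-tuning LLMs, training custom models,
  or optimizing model performance for a specific task.
```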

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (LLM fine-tuning, custom models) and mentions some actions (parameter-efficient methods, dataset preparation, model adaptation), but lacks concrete specific actions like 'configure LoRA adapters' or 'prepare training datasets in JSONL format'. | 2 / 3 |
| Completeness | Has 'when' clauses ('Use when...', 'Invoke for...') but the 'what' is weak: it describes scenarios for use but doesn't clearly state what the skill actually does or produces. The description is trigger-focused but capability-light. | 2 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'fine-tuning', 'LLMs', 'training custom models', 'model performance', 'parameter-efficient methods', 'dataset preparation', 'model adaptation' are all terms users naturally use when seeking this type of help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to the ML/LLM domain, which helps, but 'optimizing model performance' and 'dataset preparation' could overlap with general ML skills, data processing skills, or evaluation skills. | 2 / 3 |

Total: 9 / 12 (Passed)

