
fine-tuning-expert

Use when fine-tuning LLMs, training custom models, or adapting foundation models for specific tasks. Invoke for configuring LoRA/QLoRA adapters, preparing JSONL training datasets, setting hyperparameters for fine-tuning runs, adapter training, transfer learning, finetuning with Hugging Face PEFT, OpenAI fine-tuning, instruction tuning, RLHF, DPO, or quantizing and deploying fine-tuned models. Trigger terms include: LoRA, QLoRA, PEFT, finetuning, fine-tuning, adapter tuning, LLM training, model training, custom model.

90

Quality

88%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Review suggested before use


Security

2 findings — 2 medium severity. This skill can be installed but you should review these findings before use.

Medium

W011: Third-party content exposure detected (indirect prompt injection risk)

What this means

The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.

Why it was flagged

Third-party content exposure detected (high risk: 1.00). The skill's core workflow and references (SKILL.md and references/dataset-preparation.md) explicitly call datasets.load_dataset / load_custom_dataset and transformers.from_pretrained (including "try loading from Hugging Face Hub" and hub model IDs). These calls ingest datasets and models from the public Hugging Face Hub and other open sources, i.e. untrusted user-provided content, which is then read and used to drive fine-tuning, evaluation, and deployment decisions. Third-party content can therefore materially influence agent behavior.
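One partial mitigation for the untrusted-dataset risk above is to pin and verify training files before they drive any fine-tuning decisions. A minimal sketch (the file path and expected hash are illustrative placeholders, not values taken from this skill):

```python
import hashlib

def fingerprint_file(path: str) -> str:
    """SHA-256 a local training file so a silently changed upstream dataset is detected."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: str, expected_sha256: str) -> None:
    """Refuse to proceed if the downloaded file does not match the pinned hash."""
    actual = fingerprint_file(path)
    if actual != expected_sha256:
        raise ValueError(f"dataset hash mismatch: {actual} != {expected_sha256}")
```

For Hub-hosted datasets, `datasets.load_dataset` also accepts a `revision` argument, so a dataset can be pinned to an exact commit rather than a mutable branch.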

Report incorrect finding
Medium

W012: Unverifiable external dependency detected (runtime URL that controls agent)

What this means

The skill fetches instructions or code from an external URL at runtime, and the fetched content directly controls the agent’s prompts or executes code. This dynamic dependency allows the external source to modify the agent’s behavior without any changes to the skill itself.

Why it was flagged

Potentially malicious external URL detected (high risk: 0.90). The skill calls AutoModelForCausalLM.from_pretrained with model IDs such as "meta-llama/Llama-3.1-8B" together with trust_remote_code=True (e.g. https://huggingface.co/meta-llama/Llama-3.1-8B). This setting fetches and executes repository code at runtime, and the skill relies on it to load the model.
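If remote code execution is not actually required, the flagged call pattern can be tightened by pinning a revision and disabling trust_remote_code. A sketch of the safer kwargs (the commit hash is a hypothetical placeholder, and the actual load call is left commented out to avoid a multi-gigabyte download):

```python
def safe_load_kwargs(model_id: str, revision: str) -> dict:
    """Build from_pretrained kwargs that refuse to execute repo-shipped code."""
    return {
        "pretrained_model_name_or_path": model_id,
        "revision": revision,        # pin an exact commit hash, not a movable branch
        "trust_remote_code": False,  # do not run Python files fetched from the repo
    }

kwargs = safe_load_kwargs("meta-llama/Llama-3.1-8B", "0123abc")  # hash is a placeholder
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(**kwargs)
```

Architectures that ship in transformers itself, Llama included, generally load fine with trust_remote_code left at its default of False; the flag is only needed for models whose code lives in the Hub repository.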

Repository
Jeffallan/claude-skills
Audited
Security analysis
Snyk

Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.