Use when fine-tuning LLMs, training custom models, or adapting foundation models for specific tasks. Invoke for configuring LoRA/QLoRA adapters, preparing JSONL training datasets, setting hyperparameters for fine-tuning runs, adapter training, transfer learning, finetuning with Hugging Face PEFT, OpenAI fine-tuning, instruction tuning, RLHF, DPO, or quantizing and deploying fine-tuned models. Trigger terms include: LoRA, QLoRA, PEFT, finetuning, fine-tuning, adapter tuning, LLM training, model training, custom model.
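One of the workflows named above is preparing JSONL training datasets. As a minimal sketch (using the common chat-message record shape; the helper name and field layout here are illustrative, not taken from the skill itself), serializing prompt/completion pairs might look like:

```python
import json


def to_jsonl(examples):
    """Serialize (prompt, completion) pairs into chat-format JSONL lines."""
    lines = []
    for prompt, completion in examples:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        # ensure_ascii=False keeps non-ASCII text readable in the output file
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)


jsonl = to_jsonl([("What is LoRA?", "A low-rank adaptation method for fine-tuning.")])
```

Each line of the result is one independent JSON object, which is the structure most fine-tuning APIs expect for training files.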
Evaluation summary:
Overall score: 97
Does it follow best practices? 100%
Impact: 94%
Average score across 6 eval scenarios: 1.01x
Advisory: suggest reviewing before use.
Security: 2 findings — 2 medium severity. This skill can be installed, but you should review these findings before use.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.90). The skill's required dataset-preparation workflow (SKILL.md and references/dataset-preparation.md) explicitly loads public datasets and arbitrary Hugging Face Hub paths via load_dataset(...), and even uses public corpora such as "wikitext" for calibration. Untrusted, user-generated third-party content is therefore ingested and directly influences dataset validation, training, and downstream actions.
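One standard mitigation for the finding above is to validate ingested records before they reach training. The sketch below is a hypothetical stdlib-only filter (the field names follow the common chat-message JSONL shape; the length cap is an assumed value, not from the skill):

```python
import json

MAX_CHARS = 8_000  # assumed per-message cap; tune for your context window


def validate_record(line: str):
    """Parse one JSONL line; return the record, or None if malformed/oversized."""
    try:
        rec = json.loads(line)
    except json.JSONDecodeError:
        return None
    msgs = rec.get("messages")
    if not isinstance(msgs, list) or not msgs:
        return None
    for m in msgs:
        if m.get("role") not in {"system", "user", "assistant"}:
            return None
        if not isinstance(m.get("content"), str) or len(m["content"]) > MAX_CHARS:
            return None
    return rec
```

Schema checks like this do not remove the prompt-injection risk from hostile text, but they do stop malformed or oversized third-party records from silently entering a training run.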
The skill fetches instructions or code from an external URL at runtime, and the fetched content directly controls the agent’s prompts or executes code. This dynamic dependency allows the external source to modify the agent’s behavior without any changes to the skill itself.
Potentially malicious external URL detected (high risk: 0.90). The skill loads models from the Hugging Face Hub at runtime (e.g., "meta-llama/Llama-3.1-8B") with trust_remote_code=True, which fetches and executes code from the remote repository. Because this call is required for model loading, it is a high-confidence runtime external-code-execution risk.
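A common hardening of the flagged pattern is to disable remote code and pin an exact revision. The helper below is a hypothetical sketch (it assumes the transformers library; the revision argument is illustrative), not the skill's own loading code:

```python
def load_pinned_model(model_id: str, revision: str):
    """Load a causal LM without executing repository-hosted code.

    The import is lazy so the function can be defined even where
    transformers is not installed.
    """
    from transformers import AutoModelForCausalLM

    return AutoModelForCausalLM.from_pretrained(
        model_id,
        revision=revision,        # pin an exact commit hash, not a moving branch
        trust_remote_code=False,  # refuse to fetch and run code from the Hub repo
    )
```

With trust_remote_code=False, architectures that genuinely require custom modeling code will fail to load, which is the intended trade-off: the failure is visible instead of arbitrary remote code running silently.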
Version: 5b76101