This skill should be used when the user asks to "fine-tune on books", "create SFT dataset", "train style model", "extract ePub text", or mentions style transfer, LoRA training, book segmentation, or author voice replication.
## Review summary

- Overall score: 72
- Does it follow best practices? 61%
- Impact: 92%
- 1.95x average score across 3 eval scenarios
- Advisory: suggest reviewing before use

Run `npx tessl skill review --optimize ./examples/book-sft-pipeline/SKILL.md` to optimize this skill with Tessl.

## Segmentation pipeline implementation
| Criterion | Before | After |
|---|---|---|
| Min word count bound | 0% | 100% |
| Max word count bound | 0% | 100% |
| Paragraph boundary splitting | 100% | 100% |
| Overlap implementation | 0% | 0% |
| Tier 2 for oversized paragraphs | 100% | 100% |
| Validation function present | 100% | 100% |
| Sentence completeness check | 0% | 100% |
| chunks.json produced | 100% | 100% |
| Chunk count reasonable | 100% | 100% |
| No mid-sentence breaks in output | 100% | 100% |
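The segmentation criteria above (word-count bounds, paragraph-first splitting, a tier-2 sentence-level fallback for oversized paragraphs, a validation pass, and a `chunks.json` artifact) can be sketched as follows. This is a minimal illustration, not the skill's actual code: the function names and the word-count thresholds are assumptions, and the overlap criterion (which scored 0% both before and after) is deliberately not implemented here.

```python
import json
import re

# Illustrative bounds; the report does not state the skill's real thresholds.
MIN_WORDS, MAX_WORDS = 50, 200

def tier2_split(paragraph):
    """Tier 2: break an oversized paragraph on sentence boundaries."""
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    pieces, current = [], []
    for s in sentences:
        current.append(s)
        if sum(len(p.split()) for p in current) >= MAX_WORDS:
            pieces.append(" ".join(current))
            current = []
    if current:
        pieces.append(" ".join(current))
    return pieces

def segment(text):
    """Tier 1: accumulate whole paragraphs into chunks within word bounds."""
    chunks, current = [], []
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        if len(para.split()) > MAX_WORDS:  # oversized paragraph -> tier 2
            if current:
                chunks.append("\n\n".join(current))
                current = []
            chunks.extend(tier2_split(para))
            continue
        current.append(para)
        if sum(len(p.split()) for p in current) >= MIN_WORDS:
            chunks.append("\n\n".join(current))
            current = []
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def validate(chunks):
    """Drop chunks that break mid-sentence (no sentence-final punctuation)."""
    return [c for c in chunks if c.rstrip().endswith((".", "!", "?", '"', "'"))]

def write_chunks(chunks, path="chunks.json"):
    """Persist the validated chunks as the chunks.json artifact."""
    with open(path, "w") as f:
        json.dump({"chunks": chunks}, f, indent=2)
```

Splitting on paragraph boundaries first, and only falling back to sentence splits when a single paragraph exceeds the maximum, is what keeps chunks from ever breaking mid-sentence.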
## Diverse instruction dataset generation

| Criterion | Before | After |
|---|---|---|
| 15+ prompt templates | 40% | 100% |
| 5+ system prompts | 100% | 100% |
| Cheap LLM for instructions | 100% | 100% |
| Instructions don't quote text | 90% | 100% |
| Messages format | 0% | 100% |
| 2 variants per chunk | 83% | 100% |
| Rotating template selection | 100% | 100% |
| Test set of 50+ documented | 0% | 0% |
| dataset_stats.json present | 100% | 100% |
| Assistant content is book text | 33% | 100% |
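A minimal sketch of what these dataset criteria amount to: chat-format records whose assistant turn is the book chunk verbatim, with rotating templates and system prompts, two variants per chunk, and a stats artifact. The template/system-prompt pools and record shape here are illustrative assumptions; in the real pipeline a cheap LLM would turn each (template, chunk) pair into an instruction that describes the passage without quoting it, which is stubbed out below.

```python
import itertools

# Illustrative pools; the criteria call for 15+ templates and 5+ system prompts.
TEMPLATES = [f"Template {i}: write a passage about {{topic}}" for i in range(15)]
SYSTEM_PROMPTS = [f"System prompt {i}" for i in range(5)]

def build_records(chunks, variants_per_chunk=2):
    """Emit chat-format SFT records; the assistant turn is the book text, unedited."""
    template_cycle = itertools.cycle(TEMPLATES)      # rotating template selection
    system_cycle = itertools.cycle(SYSTEM_PROMPTS)
    records = []
    for chunk in chunks:
        for _ in range(variants_per_chunk):          # 2 variants per chunk
            # Stub: a cheap LLM would generate the instruction from the
            # template and chunk, without quoting the chunk itself.
            instruction = next(template_cycle).format(topic="the passage")
            records.append({
                "messages": [
                    {"role": "system", "content": next(system_cycle)},
                    {"role": "user", "content": instruction},
                    {"role": "assistant", "content": chunk},  # book text verbatim
                ]
            })
    return records

def dataset_stats(records):
    """Summary for the dataset_stats.json artifact."""
    return {
        "num_examples": len(records),
        "num_templates": len(TEMPLATES),
        "num_system_prompts": len(SYSTEM_PROMPTS),
    }
```

The held-out test set of 50+ chunks (the one criterion that stayed at 0%) would be carved off before `build_records` runs and documented alongside `dataset_stats.json`.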
## LoRA training configuration and validation

| Criterion | Before | After |
|---|---|---|
| Base model name | 0% | 100% |
| LoRA rank 32 | 0% | 100% |
| Learning rate 5e-4 | 0% | 100% |
| 3 training epochs | 100% | 100% |
| Tinker API used | 50% | 100% |
| cross_entropy loss | 0% | 100% |
| AdamParams optimizer | 0% | 100% |
| save_weights_for_sampler_async | 0% | 100% |
| Modern scenario test prompts | 0% | 100% |
| Originality check via text search | 100% | 100% |
| AI detector integration | 0% | 100% |
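Put together, the training criteria above describe roughly the following Tinker-based loop. Treat this as a sketch, not a verified implementation: the identifiers named in the report (`cross_entropy`, `AdamParams`, `save_weights_for_sampler_async`, rank 32, learning rate 5e-4, 3 epochs) are used as given, while the base model name, batch construction, and exact call signatures are assumptions about the Tinker SDK.

```python
import tinker
from tinker import types

service = tinker.ServiceClient()
training_client = service.create_lora_training_client(
    base_model="meta-llama/Llama-3.1-8B",  # placeholder; criterion only requires a named base model
    rank=32,                               # LoRA rank 32
)

adam = types.AdamParams(learning_rate=5e-4)  # learning rate 5e-4

for epoch in range(3):                       # 3 training epochs
    for batch in batches:                    # batches from the messages-format dataset (not shown)
        training_client.forward_backward(batch, loss_fn="cross_entropy")
        training_client.optim_step(adam)

# Export the adapter for sampling, then run validation against it:
# modern-scenario test prompts, an originality check via text search,
# and an AI-detector pass.
future = training_client.save_weights_for_sampler_async(name="book-style-lora")
sampler_path = future.result().path
```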
Commit: `3ab8c94`