This skill should be used when the user asks to "fine-tune on books", "create SFT dataset", "train style model", "extract ePub text", or mentions style transfer, LoRA training, book segmentation, or author voice replication.
Overall score: 72

Does it follow best practices? 61%
Impact: 92% (1.95x average score across 3 eval scenarios)
Advisory: suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./examples/book-sft-pipeline/SKILL.md`

Quality
Discovery — 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is fundamentally incomplete: it functions as a trigger list rather than a skill description. While it excels at providing natural keywords users might say, it never explains what the skill actually does, so Claude cannot understand the skill's capabilities and users cannot know what to expect.
Suggestions
Add a clear 'what' statement at the beginning describing concrete actions (e.g., 'Extracts and segments text from ePub files, creates supervised fine-tuning datasets, and prepares training data for style transfer models.')
Restructure to follow the pattern: [What it does] + [Use when...] rather than only listing triggers
Include specific outputs or deliverables the skill produces (e.g., 'generates JSONL training files', 'produces segmented text chunks')
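Taken together, the three suggestions point at a description that leads with capabilities and ends with triggers. A rewrite along those lines might look like this (the wording is illustrative, not taken from the skill):

```yaml
---
name: book-sft-pipeline
description: >
  Extracts and segments text from ePub files, builds supervised
  fine-tuning (SFT) datasets as JSONL, and runs LoRA training for
  author-style transfer. Use when the user asks to "fine-tune on
  books", "create SFT dataset", "train style model", or "extract
  ePub text", or mentions style transfer, LoRA training, book
  segmentation, or author voice replication.
---
```

This keeps the strong trigger vocabulary intact while adding the missing 'what' statement and concrete deliverables.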
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions: it only lists trigger phrases without explaining what the skill actually does. There are no verbs describing capabilities like 'extracts', 'processes', or 'creates'. | 1 / 3 |
| Completeness | The description only addresses 'when' (trigger conditions) but completely omits 'what': there is no explanation of what the skill actually does or what capabilities it provides. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'fine-tune on books', 'create SFT dataset', 'train style model', 'extract ePub text', 'style transfer', 'LoRA training', 'book segmentation', 'author voice replication'. | 3 / 3 |
| Distinctiveness / Conflict Risk | The trigger terms are fairly specific to a niche (book-based fine-tuning and style transfer), but without knowing what the skill does, it is unclear how it would be distinguished from other ML/training-related skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
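The 'generates JSONL training files' deliverable suggested above is easy to make concrete. A minimal sketch of what one such record might look like — the `messages`/`role`/`content` field names are an assumption about the pipeline's format, not taken from the skill:

```python
import json

# One hypothetical SFT record; the field layout is an assumption,
# not the skill's documented schema.
record = {
    "messages": [
        {"role": "user",
         "content": "Rewrite this paragraph in the author's voice: ..."},
        {"role": "assistant",
         "content": "The harbour lay grey and silent under the rain."},
    ]
}

# JSONL: one standalone JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Round-trip to confirm each line parses independently.
with open("train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
print(records[0]["messages"][1]["role"])  # prints "assistant"
```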
Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, actionable skill with excellent workflow clarity and progressive disclosure. The code examples are executable and the pipeline phases are well sequenced with validation checkpoints. The main weakness is verbosity in the 'Integration with Context Engineering Skills' section, which adds ~200 lines of meta-commentary that doesn't help Claude execute the pipeline.
Suggestions
Remove or drastically condense the 'Integration with Context Engineering Skills' section: it explains relationships to other skills rather than providing actionable guidance for this pipeline
The 'Skill Metadata' section at the end is unnecessary for Claude's execution and could be moved to YAML frontmatter or removed entirely
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some unnecessary sections like 'Integration with Context Engineering Skills', which adds ~200 lines explaining how this skill relates to other skills — information Claude doesn't need to execute the pipeline. The core pipeline content is reasonably efficient. | 2 / 3 |
| Actionability | Excellent executable code throughout — extraction, segmentation, instruction generation, dataset building, and training loops are all copy-paste ready Python. Specific configurations, expected results tables, and concrete examples make this highly actionable. | 3 / 3 |
| Workflow Clarity | Clear 6-phase pipeline with explicit sequencing, a validation phase with specific tests (modern scenario, originality grep, AI detector), and a 'Known Issues and Solutions' section that provides error recovery guidance. The architecture diagram and numbered phases make the workflow unambiguous. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections progressing from concepts to implementation to validation. References to detailed docs (segmentation-strategies.md, tinker-format.md, tinker.txt) are one level deep and clearly signaled. Content is appropriately split between overview and reference materials. | 3 / 3 |
| Total | | 11 / 12 Passed |
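Of the validation tests named in the Workflow Clarity row, the 'originality grep' is the easiest to sketch: flag generated samples that reuse long verbatim runs from the source book. The 8-word window below is an assumed threshold, not the skill's actual setting:

```python
# Sketch of an originality check: does a generated sample share any
# n-word run verbatim with the source text? (n=8 is an assumption.)
def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(sample: str, source: str, n: int = 8) -> bool:
    """True if the sample copies any n-word run from the source."""
    return bool(ngrams(sample, n) & ngrams(source, n))

source = ("it was the best of times it was the worst of times "
          "it was the age of wisdom")
copied = "he said it was the best of times it was the worst of times indeed"
fresh = "the model wrote an entirely new sentence in a similar voice"

print(verbatim_overlap(copied, source))  # prints True
print(verbatim_overlap(fresh, source))   # prints False
```

In a real pipeline this would run over every generated sample and fail validation on any hit, rather than printing.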
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
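The one warning is usually quick to clear: keys the skill spec does not recognise can be dropped or nested under a `metadata` block, as the message suggests. The key names below are hypothetical, since the review does not say which keys were flagged:

```yaml
---
name: book-sft-pipeline
description: ...
metadata:
  # hypothetical examples of keys the validator flags at top level
  author: your-name
  version: 0.1.0
---
```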