
transformers

This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.

79

Quality

75%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/transformers/SKILL.md

Quality

Discovery

77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong in listing specific capabilities and providing explicit trigger guidance for when to use the skill. However, it lacks mention of the specific library or framework (likely Hugging Face Transformers), which would significantly improve both trigger term quality and distinctiveness. The task terms are comprehensive but read more like a feature list than natural user language.

Suggestions

Add the specific library/framework name (e.g., 'Hugging Face Transformers') and common abbreviations users would mention, such as 'HF', 'transformers library', 'BERT', 'GPT', 'tokenizer'.

Include file/artifact references users might mention, such as 'model checkpoints', '.safetensors', 'pipeline', 'AutoModel', or 'from_pretrained' to improve trigger term coverage and distinctiveness.
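The two suggestions above could be folded into a revised frontmatter description along these lines (a sketch only; the exact wording and frontmatter layout are assumptions, not the skill's actual file):

```yaml
description: >
  Use when working with the Hugging Face Transformers library ("HF",
  "transformers") for NLP, vision, audio, or multimodal tasks: text
  generation, classification, question answering, translation,
  summarization, image classification, object detection, speech
  recognition, and fine-tuning models such as BERT or GPT on custom
  datasets. Relevant artifacts include pipeline, AutoModel,
  from_pretrained, tokenizers, model checkpoints, and .safetensors files.
```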

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.

3 / 3

Completeness

Clearly answers both 'what' (working with pre-trained transformer models for various tasks) and 'when' with an explicit trigger clause ('should be used when working with pre-trained transformer models...' and 'Use for text generation, classification...').

3 / 3

Trigger Term Quality

Includes relevant keywords like 'transformer models', 'NLP', 'text generation', 'classification', 'fine-tuning', but misses common user-facing terms like 'Hugging Face', 'LLM', 'BERT', 'GPT', 'tokenizer', or library names that users would naturally mention. The terms are more task-descriptive than what users would actually say.

2 / 3

Distinctiveness Conflict Risk

While it specifies 'pre-trained transformer models', terms like 'text generation', 'classification', 'summarization', and 'translation' are broad enough to overlap with general ML skills, Python data science skills, or specific framework skills. The lack of a specific library name (e.g., Hugging Face Transformers) reduces distinctiveness.

2 / 3

Total: 10 / 12 (Passed)

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with strong progressive disclosure and actionable code examples. Its main weaknesses are moderate verbosity (redundant descriptions across Quick Start and Common Patterns, unnecessary 'When to use' explanations) and missing validation/error-handling steps in workflows, particularly for fine-tuning. Tightening the content and adding verification checkpoints would elevate this skill significantly.

Suggestions

Remove the 'When to use' lines in Core Capabilities—Claude can infer appropriate use cases—and consolidate Quick Start and Common Patterns to eliminate redundancy.

Add validation checkpoints to the fine-tuning pattern (e.g., verify dataset format before training, evaluate model after training, check for common errors like mismatched label counts).

Remove the Overview paragraph's generic description of the library; the title and structure already convey this.
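As a sketch of the second suggestion, a pre-training validation checkpoint can be framework-agnostic: verify each example's keys and label range before handing the dataset to a trainer, so mismatched label counts fail fast instead of mid-run. The function and field names below are illustrative, not taken from the skill under review; a Hugging Face dataset row has the same dict shape.

```python
def validate_dataset(examples, num_labels, required_keys=("text", "label")):
    """Collect problems in a fine-tuning dataset before training starts.

    `examples` is a list of dicts; returns a list of human-readable
    problem strings (empty when the dataset looks sound).
    """
    problems = []
    seen_labels = set()
    for i, ex in enumerate(examples):
        missing = [k for k in required_keys if k not in ex]
        if missing:
            problems.append(f"row {i}: missing keys {missing}")
            continue
        label = ex["label"]
        if not isinstance(label, int) or not (0 <= label < num_labels):
            problems.append(f"row {i}: label {label!r} outside [0, {num_labels})")
        else:
            seen_labels.add(label)
    if len(seen_labels) < num_labels:
        problems.append(f"only {len(seen_labels)} of {num_labels} labels present")
    return problems

# A mismatched label is caught before any GPU time is spent.
rows = [{"text": "great movie", "label": 0}, {"text": "terrible", "label": 3}]
issues = validate_dataset(rows, num_labels=2)
```

The same checkpoint idea extends to the post-training side of the suggestion: run the trained model on a held-out split and compare against a baseline before accepting the checkpoint.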

Dimension / Reasoning / Score

Conciseness

The skill includes some unnecessary framing (e.g., 'The Hugging Face Transformers library provides access to thousands of pre-trained models') and redundant descriptions in the Core Capabilities section that essentially repeat what the reference files cover. The 'When to use' lines add moderate value but could be tighter. The Common Patterns section partially duplicates the Quick Start section.

2 / 3

Actionability

The skill provides fully executable, copy-paste ready code examples for installation, authentication, pipeline inference, custom model usage, and fine-tuning. Commands are specific with real model names and concrete parameters.

3 / 3

Workflow Clarity

The patterns are presented clearly but there are no validation checkpoints or error recovery steps. For fine-tuning workflows (which involve potentially destructive/expensive operations), there's no mention of validating datasets, checking model outputs, or handling training failures. The progression from simple to advanced is logical but lacks explicit verification steps.

2 / 3

Progressive Disclosure

Excellent structure with a concise overview and quick start at the top, followed by well-signaled one-level-deep references to specific reference files. Each capability section clearly points to its detailed reference document, and the reference documentation section provides a clean navigation index.

3 / 3

Total: 10 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version

'metadata.version' is missing

Warning

Total: 10 / 11 (Passed)
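The single warning above can be resolved by adding a version field to the skill's frontmatter, roughly like this (the nesting is inferred from the validator's 'metadata.version' path; the version number itself is a placeholder):

```yaml
metadata:
  version: 1.0.0
```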

Repository
K-Dense-AI/claude-scientific-skills
Reviewed

