
hugging-face-datasets

Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows.
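The "streaming row updates" capability described above typically boils down to appending records to a JSONL data file, one JSON object per line. A minimal stdlib sketch of that pattern (the file name and row schema here are illustrative assumptions, not the skill's actual interface):

```python
import json
import os
import tempfile

def append_rows(path, rows):
    # Append each row as one JSON line (JSONL), the common pattern
    # for streaming incremental updates into a dataset file.
    with open(path, "a", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Hypothetical split file; a real workflow would target a file
# inside a Hugging Face dataset repo checkout.
tmp = os.path.join(tempfile.mkdtemp(), "train.jsonl")
append_rows(tmp, [{"prompt": "hi", "completion": "hello"}])
append_rows(tmp, [{"prompt": "bye", "completion": "goodbye"}])

with open(tmp, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
# rows now holds both appended records, in insertion order
```

Because each update is a pure append, partial uploads stay valid and rows can be pushed to the Hub incrementally rather than rewriting the whole file.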

Overall: 79
Quality: 72%. Does it follow best practices?
Impact: 82% (1.86x). Average score across 6 eval scenarios.

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/hugging-face-datasets/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at specificity and distinctiveness, clearly defining its Hugging Face dataset management niche with concrete actions. However, it lacks an explicit 'Use when...' clause and could benefit from more natural trigger terms that users would actually say when needing this skill.

Suggestions

- Add a 'Use when...' clause with explicit triggers like 'Use when the user wants to create, upload, or manage datasets on Hugging Face, or mentions HF Hub, dataset repos, or dataset transformations.'
- Include more natural user terms and variations such as 'HF', 'upload dataset', 'dataset repository', 'Hugging Face data' to improve trigger term coverage.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' with specific capabilities, but lacks an explicit 'Use when...' clause. The 'when' is only implied through the capability descriptions rather than explicitly stated with trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'Hugging Face Hub', 'datasets', 'SQL', but missing common user variations like 'HF', 'upload dataset', 'dataset repo', or file extensions. Technical terms like 'streaming row updates' are not natural user language. | 2 / 3 |
| Distinctiveness (Conflict Risk) | Very clear niche targeting Hugging Face Hub datasets specifically. The combination of 'Hugging Face', 'datasets', 'HF MCP server' creates distinct triggers unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with excellent executable examples and clear workflow guidance. The main weakness is its length - it tries to be both a quick reference and comprehensive documentation in one file, leading to some verbosity and missed opportunities for progressive disclosure through linked reference files.

Suggestions

- Move the detailed template JSON schemas and DuckDB SQL functions reference to separate REFERENCE.md or TEMPLATES.md files, keeping only quick examples in the main skill.
- Remove the 'Overview' and 'Core Capabilities' sections that describe what the skill does; the content itself demonstrates this.
- Consolidate the 'HF Path Format' explanation, which appears in multiple places, into a single reference.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is comprehensive but includes some unnecessary verbosity, such as the 'Overview' section explaining what the skill does (which could be inferred from the content itself) and repeated explanations of the hf:// protocol. Some sections like 'Quality Assurance Features' list capabilities without actionable guidance. | 2 / 3 |
| Actionability | Excellent actionability with fully executable bash commands and Python code examples throughout. Commands are copy-paste ready with clear argument syntax, and the Python API usage section provides complete, runnable code snippets. | 3 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced with numbered steps (e.g., 'Recommended Workflow' with Discovery → Creation → Content Management). The 'Combined Workflow Examples' section provides end-to-end processes with explicit steps and validation through the describe/histogram exploration before transformation. | 3 / 3 |
| Progressive Disclosure | The skill is well-organized with clear sections, but it's quite long (~400 lines) and could benefit from splitting detailed reference material (like the DuckDB SQL functions, template schemas, and Python API) into separate files. References to external files like 'training_examples.json' exist but the main content is monolithic. | 2 / 3 |
| Total | | 10 / 12 (Passed) |

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (543 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 (Passed) |

Repository: huggingface/skills (Reviewed)
