
data-quality-checker

Data Quality Checker - Auto-activating skill for Data Pipelines. Triggers on: data quality checker, data quality checker. Part of the Data Pipelines skill category.

33

Quality: 0% (1.00x). Does it follow best practices?

Impact: 96% (1.00x). Average score across 3 eval scenarios.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/11-data-pipelines/data-quality-checker/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a placeholder that provides no useful information beyond the skill's name and category. It lacks any concrete actions, meaningful trigger terms, or explicit guidance on when to use the skill. It would be ineffective for skill selection in a multi-skill environment.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Validates data completeness, checks for null values, detects schema mismatches, identifies duplicate records, and flags statistical outliers in datasets.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about data validation, checking for missing values, detecting duplicates, verifying data integrity, or running quality checks on CSV/database tables.'

Remove the duplicate trigger term ('data quality checker' is listed twice) and replace with diverse natural language variations users would actually say, such as 'validate my data', 'check for nulls', 'data anomalies', 'data profiling', 'clean data'.
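Putting the three suggestions together, an improved description might look like the sketch below. The frontmatter field names follow common SKILL.md conventions and are assumptions here, not taken from this skill's actual file:

```yaml
---
name: data-quality-checker
description: >
  Validates data completeness, checks for null values, detects schema
  mismatches, identifies duplicate records, and flags statistical outliers
  in datasets. Use when the user asks to validate data, check for missing
  values or nulls, find duplicates, profile data, or run quality checks
  on CSV files or database tables.
---
```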

Dimension scores:

Specificity (1 / 3): The description provides no concrete actions whatsoever. It only names itself ('Data Quality Checker') without describing what it actually does—no mention of specific checks, validations, transformations, or outputs.

Completeness (1 / 3): Neither 'what does this do' nor 'when should Claude use it' is meaningfully answered. There is no explicit 'Use when...' clause, and the 'what' is entirely absent beyond the skill name.

Trigger Term Quality (1 / 3): The only trigger terms listed are 'data quality checker' repeated twice, which is the skill's own name rather than natural keywords a user would say. Missing terms like 'validate data', 'null checks', 'schema validation', 'data anomalies', 'missing values', etc.

Distinctiveness / Conflict Risk (1 / 3): The description is extremely generic—'data quality' could overlap with many data-related skills. Without specific actions or distinct triggers, it would be nearly impossible to distinguish from other data pipeline or data validation skills.

Total: 4 / 12


Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially a placeholder with no substantive content. It contains no actionable instructions, no code examples, no concrete data quality check patterns, and no workflow guidance. It reads as a template that was never filled in with actual skill content.

Suggestions

Add concrete, executable code examples for common data quality checks (e.g., null checks, schema validation, duplicate detection, range validation) using specific tools like Great Expectations, dbt tests, or custom Python/SQL patterns.

Define a clear multi-step workflow for implementing data quality checks in a pipeline: e.g., 1) define expectations, 2) run validation, 3) handle failures (quarantine, alert, retry), with explicit validation checkpoints.

Remove all generic filler text ('Provides step-by-step guidance', 'Follows industry best practices') and replace with specific patterns, configurations, and examples that Claude doesn't already know.

Add references to supporting files for advanced topics (e.g., GREAT_EXPECTATIONS_SETUP.md, CUSTOM_VALIDATORS.md, ALERTING_PATTERNS.md) to enable progressive disclosure of complex data quality scenarios.
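The suggestions above call for concrete, executable checks rather than filler text. As a minimal stdlib-only sketch of the first suggestion (the function name, report shape, and sample rows are illustrative assumptions, not part of the skill), null, schema, and duplicate checks might look like:

```python
def run_quality_checks(rows, required_fields):
    """Minimal data quality report over a list of row dicts.

    Checks: required fields present, null counts per field,
    and fully duplicated rows.
    """
    report = {
        "schema_violations": 0,               # rows missing a required field
        "null_counts": {f: 0 for f in required_fields},
        "duplicate_rows": 0,
    }
    seen = set()
    for row in rows:
        # Duplicate detection: identical rows after the first count as dupes
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicate_rows"] += 1
        seen.add(key)
        for f in required_fields:
            if f not in row:
                report["schema_violations"] += 1
            elif row[f] is None:
                report["null_counts"][f] += 1
    return report


rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": None},   # exact duplicate of the row above
    {"id": 3},                   # missing the 'amount' field
]
print(run_quality_checks(rows, ["id", "amount"]))
# → {'schema_violations': 1, 'null_counts': {'id': 0, 'amount': 2}, 'duplicate_rows': 1}
```

In a real skill these checks would more likely be expressed as Great Expectations suites or dbt tests, as the second suggestion's workflow (define expectations, run validation, handle failures) implies.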

Dimension scores:

Conciseness (1 / 3): The content is padded with generic filler that tells Claude nothing useful. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are vague platitudes. The entire file explains what the skill is rather than providing any actual instruction or knowledge.

Actionability (1 / 3): There is zero concrete guidance—no code, no commands, no specific steps, no examples of data quality checks, no schemas, no tool usage. The content only describes the skill abstractly without instructing Claude how to actually perform data quality checking.

Workflow Clarity (1 / 3): No workflow is defined at all. There are no steps, no sequencing, no validation checkpoints. For a data quality checker—which inherently involves multi-step validation processes—this is a critical omission.

Progressive Disclosure (1 / 3): The content is a monolithic block of generic text with no references to supporting files, no structured navigation, and no bundle files to support it. There is no meaningful content to disclose progressively.

Total: 4 / 12


Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

Criteria results:

allowed_tools_field: 'allowed-tools' contains unusual tool name(s). Result: Warning.

frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. Result: Warning.
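As a sketch of how the two warnings might be resolved, the frontmatter could keep 'allowed-tools' to standard tool names and move custom keys under 'metadata', as the second warning suggests. The specific key names and tool list below are assumptions for illustration, not taken from the actual file:

```yaml
---
name: data-quality-checker
description: Validates data completeness and flags quality issues.
allowed-tools: Read, Grep, Bash   # keep to standard tool names
metadata:
  category: data-pipelines        # custom keys moved under metadata
---
```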

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

