Data Quality Checker - Auto-activating skill for Data Pipelines. Triggers on: data quality checker, data quality checker Part of the Data Pipelines skill category.
Score: 0% (does it follow best practices?)
Impact: 96% (1.00x average score across 3 eval scenarios)
Evals: Passed, no known issues
Optimize this skill with Tessl
npx tessl skill review --optimize ./planned-skills/generated/11-data-pipelines/data-quality-checker/SKILL.md

Quality
Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder with no substantive content. It names the skill and its category but provides zero information about what concrete actions it performs, what types of data quality issues it addresses, or when Claude should select it. The trigger terms are just the skill name repeated.
Suggestions:

- Add specific concrete actions the skill performs, e.g., 'Validates data completeness, detects null values, checks schema conformance, identifies duplicate records, and flags statistical outliers in datasets.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about data validation, missing values, data profiling, duplicate detection, schema checks, or pipeline data quality issues.'
- Remove the redundant duplicate trigger term and replace it with varied natural-language terms users would actually say, such as 'data validation', 'check for nulls', 'data integrity', 'clean data', 'data anomalies'.
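Taken together, the suggestions above might yield a description like the following sketch. The field names mirror a typical SKILL.md frontmatter layout, and the exact wording is illustrative rather than prescriptive:

```yaml
---
name: data-quality-checker
description: >
  Validates data completeness, detects null values, checks schema conformance,
  identifies duplicate records, and flags statistical outliers in datasets.
  Use when the user asks about data validation, missing values, data profiling,
  duplicate detection, schema checks, or pipeline data quality issues.
---
```

Note how the rewritten description leads with concrete verbs and ends with an explicit 'Use when...' clause containing varied trigger terms, addressing all three suggestions at once.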
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain ('Data Quality Checker', 'Data Pipelines') but provides no concrete actions. There is no indication of what the skill actually does—no verbs describing specific capabilities like validating, profiling, detecting anomalies, etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and the 'when' clause is limited to a redundant trigger phrase. There is no explicit 'Use when...' guidance or meaningful trigger context. | 1 / 3 |
| Trigger Term Quality | The only trigger terms listed are 'data quality checker' repeated twice. There are no natural user-facing keywords like 'validate data', 'null values', 'data profiling', 'missing data', 'data anomalies', or other terms a user would naturally say. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic—'Data Quality Checker' and 'Data Pipelines' could overlap with many data-related skills. Without specific actions or distinct triggers, it would be difficult to distinguish from other data processing or validation skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty shell with no substantive content. It consists entirely of generic boilerplate that repeats the phrase 'data quality checker' without providing any actual guidance, code, tools, patterns, or workflows for performing data quality checks. It would provide zero value to Claude beyond what it already knows.
Suggestions:

- Add concrete, executable code examples for common data quality checks (e.g., null checks, schema validation, duplicate detection) using specific tools like Great Expectations, dbt tests, or pandas-based validators.
- Define a clear multi-step workflow for implementing data quality checks in a pipeline, including validation checkpoints and error handling/retry logic.
- Remove all boilerplate sections (Purpose, When to Use, Example Triggers) that add no actionable information and replace them with actual technical content.
- Include specific patterns for common data quality scenarios (e.g., freshness checks, referential integrity, statistical anomaly detection) with copy-paste-ready configurations.
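As a concrete illustration of the first suggestion, a minimal pandas-based validator might look like the sketch below. The function name, column names, and report structure are hypothetical; a production pipeline would more likely build on Great Expectations or dbt tests:

```python
import pandas as pd

def check_quality(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Run basic data quality checks and return a report dict."""
    report = {}
    # Schema conformance: are all required columns present?
    report["missing_columns"] = [c for c in required_columns if c not in df.columns]
    # Completeness: null counts per column
    report["null_counts"] = df.isnull().sum().to_dict()
    # Duplicate detection: number of fully duplicated rows
    report["duplicate_rows"] = int(df.duplicated().sum())
    # Overall verdict: no missing columns, no nulls, no duplicate rows
    report["passed"] = (
        not report["missing_columns"]
        and sum(report["null_counts"].values()) == 0
        and report["duplicate_rows"] == 0
    )
    return report

# Example usage with a deliberately flawed dataset:
# one null value, one duplicated row, one missing required column.
df = pd.DataFrame({"id": [1, 2, 2], "value": [None, 3.5, 3.5]})
report = check_quality(df, required_columns=["id", "value", "created_at"])
print(report["missing_columns"])  # ['created_at']
print(report["duplicate_rows"])   # 1
print(report["passed"])           # False
```

A fuller version would wire checks like these into pipeline checkpoints, failing fast (or quarantining bad rows) when `passed` is False.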
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats 'data quality checker' excessively, and provides zero substantive information about how to actually check data quality. | 1 / 3 |
| Actionability | There are no concrete code examples, commands, specific tools, libraries, or executable guidance. Every section is vague and abstract — 'Provides step-by-step guidance' without actually providing any steps. | 1 / 3 |
| Workflow Clarity | No workflow is defined at all. There are no steps, no sequence, no validation checkpoints — just generic claims about capabilities without any actual process described. | 1 / 3 |
| Progressive Disclosure | The content is a flat, uninformative page with no references to detailed materials, no links to examples or advanced guides, and no meaningful structure beyond boilerplate headings. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
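Both warnings above typically resolve by restricting the frontmatter to keys and tool names the spec recognizes. A sketch of what that might look like, where the tool names and the `metadata` nesting are assumptions based on common SKILL.md conventions rather than a verified fix:

```yaml
---
name: data-quality-checker
description: Validates data completeness, nulls, schema conformance, and duplicates.
allowed-tools: Read, Grep, Bash   # illustrative well-known tool names only
metadata:
  category: data-pipelines        # unknown top-level keys moved under metadata
---
```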