
data-quality-frameworks

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.

84 · 1.46x

Quality: 66% - Does it follow best practices?

Impact: 97% (1.46x) - average score across 6 eval scenarios

Security (by Snyk): Passed - no known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/data-engineering/skills/data-quality-frameworks/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly identifies its niche in data quality validation with specific tooling references. It includes an explicit 'Use when' clause with relevant trigger terms. The main weakness is that the specific actions and capabilities could be more granular: listing concrete operations rather than high-level verbs like 'implement' and 'building'.

Suggestions

Expand the 'what' portion with more concrete actions, e.g., 'create expectation suites, write dbt schema tests, define and enforce data contract schemas, set up validation checkpoints'.
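To make the suggestion concrete, the kind of explicit operations it calls for can be sketched as a tiny expectation suite in plain Python. This is an illustrative stand-in, not the Great Expectations API; the column names and checks are invented:

```python
# Illustrative expectation suite: each entry names a concrete check
# to run against one column of tabular data.
expectation_suite = [
    ("order_id", "not_null", lambda v: v is not None),
    ("amount", "non_negative", lambda v: v is not None and v >= 0),
    ("status", "in_set", lambda v: v in {"pending", "shipped", "delivered"}),
]

def run_suite(rows, suite):
    """Apply every expectation to every row; return (row, column, check) failures."""
    failures = []
    for i, row in enumerate(rows):
        for column, name, check in suite:
            if not check(row.get(column)):
                failures.append((i, column, name))
    return failures

rows = [
    {"order_id": 1, "amount": 9.99, "status": "shipped"},
    {"order_id": None, "amount": -5.0, "status": "lost"},
]
print(run_suite(rows, expectation_suite))  # row 1 fails all three checks
```

A description listing operations at this level ('create expectation suites', 'write dbt schema tests') gives an agent far stronger retrieval cues than 'implement' alone.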

Specificity (2/3): Names the domain (data quality validation) and specific tools (Great Expectations, dbt tests, data contracts), but doesn't list multiple concrete actions beyond 'implement', 'building', and 'establishing'. It could be more specific about what actions are performed (e.g., 'create expectation suites, write dbt schema tests, define contract schemas').

Completeness (3/3): Clearly answers both 'what' (implement data quality validation with Great Expectations, dbt tests, and data contracts) and 'when' (explicit 'Use when' clause covering building pipelines, implementing validation rules, or establishing data contracts).

Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'data quality', 'Great Expectations', 'dbt tests', 'data contracts', 'validation rules', 'data quality pipelines'. These are terms a user working in this domain would naturally use.

Distinctiveness / Conflict Risk (3/3): The combination of Great Expectations, dbt tests, and data contracts creates a very specific niche. These are distinct tools and concepts that are unlikely to overlap with other skills, making this clearly distinguishable.

Total: 11/12 - Passed

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides highly actionable, executable code across multiple data quality tools, which is its primary strength. However, it is excessively verbose, dumping hundreds of lines of code inline without progressive disclosure or external file references. The lack of a clear end-to-end workflow and the inclusion of basic concepts Claude already knows significantly reduce its effectiveness as a skill file.

Suggestions

Split the six patterns into separate referenced files (e.g., patterns/great_expectations.md, patterns/dbt_tests.md, patterns/data_contracts.md) and keep SKILL.md as a concise overview with links.

Remove the 'Core Concepts' section (data quality dimensions table and testing pyramid) as Claude already knows these concepts.

Add a clear end-to-end workflow section showing the sequence: setup GE → define expectations → configure checkpoint → integrate with dbt → validate → handle failures, with explicit validation checkpoints.

Trim the 'When to Use This Skill' bullet list; it restates the description and wastes tokens.
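The end-to-end sequence suggested above (setup → define expectations → configure checkpoint → validate → handle failures) could be outlined roughly as follows. This is a hypothetical skeleton in plain Python, not the skill's code or the Great Expectations API; every function name here is invented:

```python
def setup_context():
    """Stand-in for initializing a validation context (e.g., a GE data context)."""
    return {"expectations": [], "results": []}

def define_expectation(ctx, name, check):
    """Register one named check that must hold for every row of a batch."""
    ctx["expectations"].append((name, check))

def run_checkpoint(ctx, batch):
    """Validate a batch against every registered expectation; True if all pass."""
    for name, check in ctx["expectations"]:
        passed = all(check(row) for row in batch)
        ctx["results"].append((name, passed))
    return all(passed for _, passed in ctx["results"])

def handle_failures(ctx):
    """Collect the names of failed expectations for alerting or quarantine."""
    return [name for name, passed in ctx["results"] if not passed]

# Wire the steps together in order: setup, define, checkpoint, handle failures.
ctx = setup_context()
define_expectation(ctx, "id_present", lambda r: r.get("id") is not None)
define_expectation(ctx, "amount_positive", lambda r: r.get("amount", 0) > 0)

batch = [{"id": 1, "amount": 3.5}, {"id": 2, "amount": -1.0}]
if not run_checkpoint(ctx, batch):
    print("failed expectations:", handle_failures(ctx))
```

An explicit sequence like this, with a validation checkpoint and a failure-handling branch, is what the workflow-clarity critique below is asking the skill to state up front.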

Conciseness (1/3): The skill is extremely verbose at 400+ lines. It explains basic concepts like data quality dimensions that Claude already knows, includes a redundant ASCII testing pyramid, and provides exhaustive code examples that could be significantly condensed. The 'When to Use This Skill' and 'Core Concepts' sections add little value for Claude.

Actionability (3/3): The skill provides fully executable code examples across Great Expectations, dbt tests, custom SQL tests, data contracts, and a complete quality pipeline class. Code is copy-paste ready with concrete configurations, specific expectation types, and real YAML/Python/SQL patterns.

Workflow Clarity (2/3): While individual patterns are clear, there's no overarching workflow sequence showing how these pieces fit together in a pipeline. The automated quality pipeline (Pattern 6) includes a failure check but the overall skill lacks explicit validation checkpoints and a clear step-by-step process for setting up data quality from scratch.

Progressive Disclosure (1/3): The entire skill is a monolithic wall of content with no references to external files. All six patterns, the data contract spec, and the full pipeline class are inlined. This would benefit enormously from splitting patterns into separate files with a concise overview linking to them.

Total: 7/12 - Passed
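For illustration, the data-contract enforcement the skill covers can be reduced to a minimal sketch in plain Python. This is not the skill's inlined contract spec; the field names and the validator are invented for this example:

```python
# Minimal data contract: each field declares its expected type and
# whether null/missing values are allowed.
contract = {
    "user_id": {"type": int, "nullable": False},
    "email": {"type": str, "nullable": False},
    "referrer": {"type": str, "nullable": True},
}

def violates_contract(record, contract):
    """Return (field, problem) pairs; an empty list means the record conforms."""
    problems = []
    for field, spec in contract.items():
        value = record.get(field)
        if value is None:
            if not spec["nullable"]:
                problems.append((field, "missing or null"))
        elif not isinstance(value, spec["type"]):
            problems.append((field, "wrong type"))
    return problems

good = {"user_id": 7, "email": "a@example.com", "referrer": None}
bad = {"user_id": "7", "email": None}
print(violates_contract(good, contract))  # conforms: empty list
print(violates_contract(bad, contract))   # string id and null email are flagged
```

Real data contracts typically add versioning, ownership, and SLA metadata on top of this schema check, but the enforcement core is the same shape.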

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10/11 passed

Validation for skill structure:

skill_md_line_count: SKILL.md is long (584 lines); consider splitting into references/ and linking. (Warning)

Total: 10/11 - Passed

Repository: wshobson/agents (Reviewed)

