
data-quality-frameworks

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.

Score: 76 (1.84x impact)

Quality: 66% (Does it follow best practices?)

Impact: 94%, 1.84x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/data-engineering/skills/data-quality-frameworks/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description that clearly identifies its niche in data quality validation with specific tooling references and an explicit 'Use when' clause. Its main weakness is that the capability actions are somewhat high-level ('implement', 'building', 'establishing') rather than listing granular concrete tasks. The trigger terms are strong and domain-appropriate, making it easy for Claude to select this skill when relevant.

Suggestions

Add more specific concrete actions such as 'create expectation suites, configure checkpoints, write dbt schema tests, define column-level data contracts' to improve specificity.
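To make the suggestion concrete, here is a minimal sketch of one of those granular tasks, a column-level data contract check, in plain Python. The `ColumnContract` dataclass and `validate_rows` helper are hypothetical names invented for illustration; they are not taken from the skill or from any framework.

```python
from dataclasses import dataclass

@dataclass
class ColumnContract:
    # Hypothetical column-level contract: name, expected type, nullability.
    name: str
    dtype: type
    nullable: bool = False

def validate_rows(rows, contract):
    """Return violation messages for rows that break the contract."""
    violations = []
    for i, row in enumerate(rows):
        for col in contract:
            value = row.get(col.name)
            if value is None:
                if not col.nullable:
                    violations.append(f"row {i}: {col.name} is null")
            elif not isinstance(value, col.dtype):
                violations.append(f"row {i}: {col.name} is not {col.dtype.__name__}")
    return violations

contract = [ColumnContract("user_id", int), ColumnContract("email", str, nullable=True)]
rows = [{"user_id": 1, "email": "a@b.example"}, {"user_id": None, "email": None}]
print(validate_rows(rows, contract))  # one violation: row 1 user_id is null
```

A real version would live next to the tooling the description names (e.g. as a Great Expectations suite or dbt schema test), but even this sketch shows the kind of specific action the description could advertise.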

Specificity: 2 / 3

Names the domain (data quality validation) and specific tools (Great Expectations, dbt tests, data contracts), but doesn't list multiple concrete actions beyond 'implement', 'building', and 'establishing'. It lacks granular actions like 'create expectation suites', 'configure checkpoints', or 'define schema contracts'.

Completeness: 3 / 3

Clearly answers both 'what' (implement data quality validation with Great Expectations, dbt tests, and data contracts) and 'when' (Use when building data quality pipelines, implementing validation rules, or establishing data contracts) with an explicit 'Use when...' clause.

Trigger Term Quality: 3 / 3

Includes strong natural keywords users would say: 'data quality', 'Great Expectations', 'dbt tests', 'data contracts', 'validation rules', 'data quality pipelines'. These are terms a user working in this domain would naturally use.

Distinctiveness / Conflict Risk: 3 / 3

The combination of Great Expectations, dbt tests, and data contracts creates a very specific niche. This is unlikely to conflict with general data engineering or testing skills due to the highly specific tooling and domain focus.

Total: 11 / 12

Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides highly actionable, executable code examples across multiple data quality frameworks, which is its primary strength. However, it is severely over-long and monolithic—most of the detailed pattern implementations should be extracted into separate reference files. The content also explains concepts Claude already knows (data quality dimensions, what primary keys are) and lacks a clear sequenced workflow tying the patterns together.

Suggestions

Reduce the main SKILL.md to a concise overview (~50-80 lines) with quick-start examples, and move detailed patterns (GE suites, dbt tests, data contracts, pipeline class) into separate referenced files like GREAT_EXPECTATIONS.md, DBT_TESTS.md, DATA_CONTRACTS.md.

Remove the data quality dimensions table and testing pyramid—Claude already understands these concepts. Focus only on tool-specific configuration that Claude wouldn't know.

Add an explicit end-to-end workflow section showing the recommended sequence: e.g., 1) Define contract → 2) Create GE suite → 3) Configure checkpoint → 4) Validate → 5) Handle failures, with validation checkpoints between steps.

Trim code examples to essential differentiating snippets rather than full class implementations—e.g., the DataQualityPipeline class could be reduced to its key method signatures and the validation loop pattern.
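The sequence in the third suggestion (define contract, create suite, configure checkpoint, validate, handle failures) could be sketched with plain Python stand-ins. Every function name below is hypothetical, not a real Great Expectations or dbt call; the assertions act as the validation checkpoints between steps that the review asks for.

```python
# Hypothetical stand-ins for the suggested end-to-end workflow.

def define_contract():
    return {"columns": ["user_id", "email"], "primary_key": "user_id"}

def create_suite(contract):
    # One not-null expectation per contract column.
    return [{"expect": "not_null", "column": c} for c in contract["columns"]]

def configure_checkpoint(suite):
    return {"suite": suite, "action": "report"}

def validate(checkpoint, rows):
    failures = []
    for exp in checkpoint["suite"]:
        for i, row in enumerate(rows):
            if row.get(exp["column"]) is None:
                failures.append((i, exp["column"]))
    return failures

def handle_failures(failures):
    return "quarantine" if failures else "promote"

contract = define_contract()
assert contract["columns"]                     # checkpoint: contract defined
suite = create_suite(contract)
assert len(suite) == len(contract["columns"])  # checkpoint: suite matches contract
checkpoint = configure_checkpoint(suite)
failures = validate(checkpoint, [{"user_id": 1, "email": None}])
print(handle_failures(failures))  # prints "quarantine"
```

The point is the shape, not the implementation: each step's output is checked before the next step consumes it, which is what an explicit workflow section in the skill would make visible.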
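Under the fourth suggestion's reading, the full class collapses to its method signatures plus the validation loop. A hedged sketch follows; the `DataQualityPipeline` name comes from the review, but its methods are inferred for illustration, not copied from the skill.

```python
class DataQualityPipeline:
    """Skeleton of the pipeline class the review describes (inferred shape)."""

    def __init__(self, checks):
        # checks: list of callables mapping rows -> list of error strings.
        self.checks = checks

    def run(self, rows):
        """The key validation loop: run every check, collect all errors."""
        errors = []
        for check in self.checks:
            errors.extend(check(rows))
        return {"passed": not errors, "errors": errors}

    def report(self, result):
        status = "PASSED" if result["passed"] else "FAILED"
        return f"{status}: {len(result['errors'])} error(s)"

no_null_ids = lambda rows: [
    f"row {i}: null id" for i, r in enumerate(rows) if r.get("id") is None
]
pipeline = DataQualityPipeline([no_null_ids])
result = pipeline.run([{"id": 1}, {"id": None}])
print(pipeline.report(result))  # prints "FAILED: 1 error(s)"
```

This is roughly the size the review argues the in-skill example should shrink to: the signatures and the loop carry the pattern, and Claude can regenerate the boilerplate around them.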

Conciseness: 1 / 3

The skill is extremely verbose at 400+ lines, with extensive boilerplate code that Claude could generate on its own. The data quality dimensions table, testing pyramid ASCII art, and lengthy explanations of concepts like primary keys and foreign keys are unnecessary. Many patterns are overly detailed with full class implementations rather than concise, differentiating guidance.

Actionability: 3 / 3

The content provides fully executable code examples across Great Expectations, dbt tests, custom SQL tests, data contracts, and a complete quality pipeline class. Code is copy-paste ready with concrete configurations, YAML schemas, and Python implementations.

Workflow Clarity: 2 / 3

While individual patterns are clear, there's no explicit end-to-end workflow showing how to sequence these tools together. The automated quality pipeline (Pattern 6) includes a validation-and-report flow with failure handling, but the overall skill lacks explicit validation checkpoints between steps (e.g., verify GE init before creating suites, verify suite before running checkpoint).

Progressive Disclosure: 1 / 3

The entire skill is a monolithic wall of content with no references to external files. The 6 detailed patterns, best practices, and resources are all inline. Content like the full pipeline class, data contract spec, and custom dbt tests should be split into separate reference files with clear navigation from the main skill.

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

skill_md_line_count: Warning

SKILL.md is long (591 lines); consider splitting into references/ and linking.

Total: 10 / 11

Passed

Repository: Dicklesworthstone/pi_agent_rust (reviewed)
