
data-quality-frameworks

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.

76 · 1.84x

Quality: 66% · Does it follow best practices?

Impact: 94% (1.84x) · Average score across 3 eval scenarios

Security by Snyk: Passed · No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/data-engineering/skills/data-quality-frameworks/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description that clearly identifies its niche in data quality validation with specific tooling references and an explicit 'Use when' clause. Its main weakness is that the capability actions are somewhat high-level ('implement', 'building') rather than listing granular concrete actions like creating expectation suites, configuring checkpoints, or writing custom validators.

Suggestions

Add more specific concrete actions such as 'create expectation suites, configure validation checkpoints, write dbt schema tests, define data contract schemas' to improve specificity.
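As an illustration of what "create expectation suites" means in practice, here is a minimal plain-Python sketch of the idea: a named bundle of column-level rules run against rows. This is not the Great Expectations API; every name below is invented for illustration.

```python
# Hedged sketch: an "expectation suite" as a named list of row-level
# validation rules. Not the Great Expectations API; names are invented.

def expect_not_null(column):
    return lambda row: row.get(column) is not None

def expect_between(column, lo, hi):
    return lambda row: row.get(column) is not None and lo <= row[column] <= hi

suite = {
    "orders_suite": [
        expect_not_null("order_id"),
        expect_between("amount", 0, 10_000),
    ]
}

def run_suite(rows, expectations):
    # Collect (row_index, rule_index) pairs for every failed expectation.
    failures = [(i, j) for i, row in enumerate(rows)
                for j, exp in enumerate(expectations) if not exp(row)]
    return {"success": not failures, "failures": failures}

rows = [{"order_id": 1, "amount": 50}, {"order_id": None, "amount": -5}]
result = run_suite(rows, suite["orders_suite"])
# second row fails both rules: null order_id and negative amount
```

A real suite in Great Expectations would also carry metadata and be run through a checkpoint, but the suite-of-rules shape is the core concept the suggestion asks the description to name.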

Dimension · Reasoning · Score

Specificity

Names the domain (data quality validation) and specific tools (Great Expectations, dbt tests, data contracts), but doesn't list multiple concrete actions beyond 'implement', 'building', and 'establishing'. It lacks granular actions like 'create expectation suites', 'configure checkpoints', or 'define schema contracts'.

2 / 3

Completeness

Clearly answers both 'what' (implement data quality validation with Great Expectations, dbt tests, and data contracts) and 'when' (Use when building data quality pipelines, implementing validation rules, or establishing data contracts) with an explicit 'Use when...' clause.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'data quality', 'Great Expectations', 'dbt tests', 'data contracts', 'validation rules', 'data quality pipelines'. These are terms a user working in this domain would naturally use.

3 / 3

Distinctiveness Conflict Risk

The combination of Great Expectations, dbt tests, and data contracts creates a very specific niche. This is unlikely to conflict with general data engineering or testing skills due to the explicit tool and pattern references.

3 / 3

Total: 11 / 12 · Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent, executable code examples across multiple data quality tools, but it is far too verbose and monolithic for a SKILL.md file. It reads more like a comprehensive tutorial or reference guide than a concise skill, with significant content that should be split into separate files and core concepts that Claude already understands. The lack of a clear end-to-end workflow connecting the patterns weakens its utility as operational guidance.

Suggestions

Reduce the SKILL.md to a concise overview with one quick-start example each for Great Expectations and dbt, then move the detailed patterns (Patterns 1-6) into separate referenced files like GE_PATTERNS.md, DBT_TESTS.md, and DATA_CONTRACTS.md.

Remove the 'Core Concepts' section entirely—Claude already knows data quality dimensions and doesn't need an ASCII testing pyramid.

Add a clear end-to-end workflow section showing the sequence: choose validation approach → implement tests → run validation → handle failures → iterate, with explicit checkpoints.

Trim the best practices to only non-obvious, actionable items rather than generic advice like 'don't ignore warnings.'
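The suggested end-to-end workflow (choose validation approach, implement tests, run validation, handle failures, iterate) could be sketched as a simple driver loop. Everything below is illustrative: the check and quarantine functions are hypothetical, not taken from the skill.

```python
# Hedged sketch of the suggested workflow: run checks, handle failures,
# and iterate up to a bound. All names are invented for illustration.

def run_validation(checks, rows):
    # Return the names of checks that fail on any row.
    return [name for name, check in checks.items()
            if not all(check(r) for r in rows)]

def quality_pipeline(rows, checks, max_iterations=3, on_failure=None):
    failed = []
    for attempt in range(max_iterations):
        failed = run_validation(checks, rows)
        if not failed:
            return {"passed": True, "attempts": attempt + 1}
        if on_failure:
            rows = on_failure(rows, failed)  # e.g. quarantine bad records
        else:
            break
    return {"passed": False, "failed_checks": failed}

checks = {"amount_non_negative": lambda r: r["amount"] >= 0}

def quarantine(rows, failed_checks):
    # Illustrative failure handler: drop the violating records.
    return [r for r in rows if r["amount"] >= 0]

rows = [{"amount": 10}, {"amount": -3}]
result = quality_pipeline(rows, checks, on_failure=quarantine)
```

The explicit checkpoint between steps (re-running validation after the failure handler) is exactly the sequencing the review says the skill's Pattern 6 leaves implicit.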

Dimension · Reasoning · Score

Conciseness

Extremely verbose at 400+ lines. The data quality dimensions table explains concepts Claude already knows, the testing pyramid ASCII art adds no value, and the massive code blocks are exhaustive rather than selective. The 'Core Concepts' section and best practices do/don't lists are padding that Claude doesn't need.

1 / 3

Actionability

The skill provides fully executable code examples across Great Expectations, dbt tests, custom SQL tests, data contracts, and a complete quality pipeline class. Code is copy-paste ready with concrete configurations, specific expectation types, and real YAML/SQL/Python patterns.

3 / 3

Workflow Clarity

The patterns are presented as independent blocks without a clear sequenced workflow connecting them. The automated quality pipeline (Pattern 6) includes validation and failure handling, but there's no overarching workflow showing when to use which pattern, and no explicit validation checkpoints between steps like 'set up GE → create suite → run checkpoint → handle failures → iterate.'

2 / 3

Progressive Disclosure

This is a monolithic wall of content with 6 extensive patterns all inline. The data contract YAML, full pipeline class, and multiple dbt test files should be split into separate reference files. External links at the bottom are generic documentation references, not structured navigation to companion skill files.

1 / 3

Total: 7 / 12 · Passed
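The data-contract pattern the review refers to boils down to a declared schema, agreed between producer and consumer, enforced at a pipeline boundary. A minimal plain-Python sketch, with invented field names (not the skill's actual YAML contract):

```python
# Hedged sketch of a data contract: a declared schema with type and
# nullability constraints, checked per record. Fields are invented.

contract = {
    "columns": {
        "user_id": {"type": int, "nullable": False},
        "email":   {"type": str, "nullable": True},
    }
}

def validate_record(record, contract):
    errors = []
    for col, spec in contract["columns"].items():
        value = record.get(col)
        if value is None:
            if not spec["nullable"]:
                errors.append(f"{col}: null not allowed")
        elif not isinstance(value, spec["type"]):
            errors.append(f"{col}: expected {spec['type'].__name__}")
    return errors

good = validate_record({"user_id": 7, "email": None}, contract)
bad = validate_record({"user_id": None, "email": 42}, contract)
# good -> no errors; bad -> null user_id and mistyped email
```

A production contract would typically live in a versioned YAML file and be enforced by a validation framework, but the check itself is this simple schema-plus-constraints comparison.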

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria · Description · Result

skill_md_line_count · SKILL.md is long (591 lines); consider splitting into references/ and linking · Warning

Total: 10 / 11 · Passed

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.