
data-quality-frameworks

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.
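Of the three pillars the description names, a data contract is the simplest to illustrate: a declared schema the producer promises and the consumer verifies. The sketch below is a hedged, plain-Python illustration of that idea (field names and rules are hypothetical, not taken from the skill's own examples):

```python
# Minimal data-contract sketch: the contract declares the fields a producer
# promises, and violations() verifies one record against it.
# All names here are illustrative, not from the skill itself.

CONTRACT = {
    "order_id": int,   # required integer
    "email": str,      # required string
    "amount": float,   # required float, must be non-negative
}

def violations(record: dict) -> list[str]:
    """Return a list of human-readable contract violations for one record."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(record[field]).__name__}")
    if isinstance(record.get("amount"), float) and record["amount"] < 0:
        problems.append("amount: must be non-negative")
    return problems

good = {"order_id": 1, "email": "a@example.com", "amount": 9.99}
bad = {"order_id": "1", "amount": -5.0}

print(violations(good))  # []
print(violations(bad))
```

Real data contracts would typically live in versioned YAML and be enforced at the pipeline boundary, but the shape is the same: declared expectations, checked per record, with failures reported rather than silently dropped.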

Install with Tessl CLI

npx tessl i github:Dicklesworthstone/pi_agent_rust --skill data-quality-frameworks


Does it follow best practices?

Validation for skill structure


Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description that clearly communicates its purpose and when to use it. The explicit 'Use when...' clause and specific tool mentions (Great Expectations, dbt tests) provide good trigger coverage. The main weakness is that the capabilities could be more specific about what concrete actions are performed beyond 'implement' and 'validation'.

Suggestions

Expand specificity by listing concrete actions like 'create expectation suites, configure checkpoints, define schema tests, generate data quality reports'
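The concrete actions suggested above (expectation suites, checkpoints, schema tests) can be sketched without the library itself. The snippet below mirrors the shape of a Great Expectations suite in plain Python, purely as illustration; the function names echo GE's expectation naming but do not call its API:

```python
# Sketch of an "expectation suite": each expectation is a named predicate
# over a column of row-dicts, and run_suite() reports pass/fail per check.
# This mirrors the shape of Great Expectations suites without depending on it.

def expect_not_null(column):
    return (f"expect_column_values_to_not_be_null({column})",
            lambda rows: all(r.get(column) is not None for r in rows))

def expect_between(column, lo, hi):
    return (f"expect_column_values_to_be_between({column}, {lo}, {hi})",
            lambda rows: all(lo <= r[column] <= hi for r in rows))

def run_suite(rows, suite):
    """Evaluate every expectation and return one result dict per expectation."""
    return [{"expectation": name, "success": check(rows)}
            for name, check in suite]

rows = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}]
suite = [expect_not_null("id"), expect_between("score", 0.0, 1.0)]

for result in run_suite(rows, suite):
    print(result)
```

In the real framework, a checkpoint bundles a suite with a data batch and an action list (store results, notify, fail the run); the suite/result split above is the part that generalizes.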

Specificity (2 / 3): Names the domain (data quality validation) and specific tools (Great Expectations, dbt tests, data contracts), but doesn't list multiple concrete actions beyond 'implement' and 'validation rules'; it lacks detail on what specific operations are performed.

Completeness (3 / 3): Clearly answers both what ('Implement data quality validation with Great Expectations, dbt tests, and data contracts') and when ('Use when building data quality pipelines, implementing validation rules, or establishing data contracts') with explicit trigger guidance.

Trigger Term Quality (3 / 3): Good coverage of natural terms users would say: 'data quality', 'validation', 'Great Expectations', 'dbt tests', 'data contracts', 'validation rules', 'data quality pipelines'; these are terms practitioners would naturally use.

Distinctiveness / Conflict Risk (3 / 3): Clear niche focused on data quality validation with specific tool mentions (Great Expectations, dbt tests) that distinguish it from general data processing or other validation skills. Unlikely to conflict with unrelated skills.

Total: 11 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides highly actionable, production-ready code examples for data quality frameworks with Great Expectations, dbt, and data contracts. However, it's overly verbose for a skill file - explaining basic concepts Claude knows and including extensive inline examples that would be better as linked references. The workflow for actually implementing these patterns end-to-end lacks explicit sequencing and validation checkpoints.

Suggestions

Remove the 'Core Concepts' section (data quality dimensions table and testing pyramid) - Claude already understands these concepts

Split detailed patterns (3-6) into separate reference files like GREAT_EXPECTATIONS.md, DBT_TESTS.md, DATA_CONTRACTS.md and link from a concise overview

Add an explicit workflow section showing the sequence: 1) Set up GE context, 2) Create suite, 3) Run validation, 4) Handle failures, 5) Integrate with CI/CD, with validation checkpoints at each step

Condense the Best Practices section into the main workflow rather than a separate do's/don'ts list
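The five-step sequence the suggestions call for can be sketched end to end. The following is a hedged plain-Python outline that runs without Great Expectations installed; `setup_context` stands in for GE's context setup, and every name here is illustrative:

```python
# Sketch of the suggested workflow: each step returns what the next needs,
# and a checkpoint after validation decides the CI/CD outcome.

def setup_context():
    # Step 1: stand-in for setting up a GE data context; here just shared config.
    return {"fail_fast": True}

def create_suite():
    # Step 2: one expectation: 'id' must be non-null in every row.
    return [("id_not_null",
             lambda rows: all(r.get("id") is not None for r in rows))]

def run_validation(rows, suite):
    # Step 3: evaluate every expectation against the batch.
    return {name: check(rows) for name, check in suite}

def handle_failures(results):
    # Step 4: collect all failures instead of raising on the first one.
    return [name for name, ok in results.items() if not ok]

def main(rows):
    context = setup_context()
    suite = create_suite()
    results = run_validation(rows, suite)
    failures = handle_failures(results)
    # Step 5: CI/CD integration: a non-zero return code fails the pipeline.
    return 1 if failures and context["fail_fast"] else 0

print(main([{"id": 1}, {"id": 2}]))  # 0: pipeline passes
print(main([{"id": None}]))          # 1: pipeline fails
```

The design point is that failure handling is a distinct step with its own policy (fail fast vs. warn and continue), rather than an exception raised mid-validation.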

Conciseness (2 / 3): The skill is comprehensive but verbose. The 'Core Concepts' section with the data quality dimensions table and testing pyramid explains concepts Claude likely knows. The extensive pattern examples are useful but could be more condensed.

Actionability (3 / 3): Excellent executable code throughout: complete Python scripts, YAML configurations, SQL tests, and CLI commands that are copy-paste ready. Each pattern includes working, runnable examples with proper imports and context.

Workflow Clarity (2 / 3): While individual patterns are clear, the overall workflow for implementing data quality is implicit. Missing explicit validation checkpoints and error recovery steps; Pattern 6 shows a pipeline but doesn't guide through the setup sequence or what to do when validations fail beyond raising an error.

Progressive Disclosure (2 / 3): Content is reasonably structured with patterns and sections, but it's a monolithic document with 400+ lines. The detailed patterns (especially Patterns 3-6) could be split into separate reference files, with SKILL.md providing just the quick start and linking to detailed guides.

Total: 9 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed


skill_md_line_count (Warning): SKILL.md is long (591 lines); consider splitting into references/ and linking.

Total: 10 / 11

Passed

