
tracing-downstream-lineage

Trace downstream data lineage and impact analysis. Use when the user asks what depends on this data, what breaks if something changes, downstream dependencies, or needs to assess change risk before modifying a table or DAG.


Quality: 73% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/tracing-downstream-lineage/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with a clear 'Use when...' clause containing multiple natural trigger phrases. Its main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., listing affected tables, generating dependency trees, identifying breaking pipelines). Overall it's a strong description that would perform well in skill selection.

Suggestions

Expand the capability description with more concrete actions, e.g., 'Trace downstream data lineage, identify affected tables and pipelines, and perform impact analysis' to improve specificity.

Dimension scores

Specificity: 2/3. The description names the domain (data lineage/impact analysis) and mentions some actions ('trace downstream data lineage', 'assess change risk'), but doesn't list multiple concrete specific actions like listing affected tables, generating dependency graphs, or identifying breaking queries.

Completeness: 3/3. Clearly answers both 'what' (trace downstream data lineage and impact analysis) and 'when' (explicit 'Use when...' clause with multiple trigger scenarios including dependency questions, change risk assessment, and DAG modifications).

Trigger Term Quality: 3/3. Includes strong natural trigger terms users would actually say: 'what depends on this data', 'what breaks if something changes', 'downstream dependencies', 'change risk', 'table', 'DAG'. These cover common phrasings well.

Distinctiveness / Conflict Risk: 3/3. The focus on downstream lineage, impact analysis, and DAG dependencies creates a clear niche that is unlikely to conflict with general data skills. The specific terms like 'downstream dependencies', 'change risk', and 'DAG' make it distinctly identifiable.

Total: 11 / 12 (Passed)

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid skill that provides a clear framework for downstream impact analysis with good structure and useful output templates. Its main weaknesses are some verbosity in categorization/explanation sections and a lack of fully executable, concrete commands for several discovery steps. Adding validation checkpoints and making the discovery steps more actionable would elevate it significantly.

Suggestions

Replace vague discovery steps like 'Look for BI tool connections' with concrete commands or queries (e.g., specific SQL to check table access logs or a grep pattern for BI-related references).

Add a validation checkpoint after Step 2 (e.g., 'Verify completeness: confirm no additional consumers exist by cross-checking query logs or access patterns') to create a feedback loop before proceeding to risk assessment.

Trim the criticality categorization in Step 3 — collapse the four-tier list into a compact table rather than listing examples for each level, since Claude can infer appropriate categorization from brief anchors.
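As a sketch of what "concrete commands" for the discovery steps could look like (the table name, directory layout, and file extensions below are illustrative assumptions, not taken from the skill itself), a first pass might grep the repository for references to the table across DAG code, dbt models, and BI tool configs:

```shell
# Hypothetical discovery step: find downstream references to a table.
# The table name and search paths are examples, not from the skill.
TABLE="analytics.orders"

grep -rn \
  --include='*.py' --include='*.sql' --include='*.yml' \
  "$TABLE" dags/ models/ dashboards/ 2>/dev/null || true
```

A similar pattern can double as the Step 2 checkpoint suggested above: rerun the search (or, where available, query the warehouse's access logs) and confirm the consumer list has not grown before proceeding to risk assessment.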

Dimension scores

Conciseness: 2/3. The skill is reasonably well-structured but includes some unnecessary verbosity, such as explaining what criticality levels mean (Claude knows what 'Critical' vs 'Low' means) and over-explaining obvious concepts like 'Dashboards often query tables directly.' The categorization section could be tightened significantly.

Actionability: 2/3. Provides a mix of concrete commands (af dags list, SQL queries) and vague guidance ('Look for BI tool connections', 'Check for common BI patterns'). The SQL example is useful but many steps are descriptive rather than executable. The output templates with tables and diagrams are helpful, but the discovery steps lack complete, copy-paste-ready commands.

Workflow Clarity: 2/3. The 5-step sequence is clearly laid out and logically ordered, which is good. However, there are no explicit validation checkpoints or feedback loops: no step to verify the dependency tree is complete before proceeding to risk assessment, and no guidance on what to do if a step fails or yields incomplete results. For a workflow involving potentially destructive changes, this is a gap.

Progressive Disclosure: 3/3. The skill is well-organized with clear sections, a logical flow from identification to output, and ends with well-signaled references to related skills. Content is appropriately scoped for a single SKILL.md file without being monolithic, and cross-references are one level deep.

Total: 9 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: astronomer/agents (Reviewed)

