```
tessl install github:dbt-labs/dbt-agent-skills --skill using-dbt-for-analytics-engineering
```

Use when doing any dbt work - building or modifying models, debugging errors, exploring unfamiliar data sources, writing tests, or evaluating the impact of changes. Use for analytics pipelines, data transformations, and data modeling.
- Review Score: 71%
- Validation Score: 11/16
- Implementation Score: 57%
- Activation Score: 82%
Core principle: Apply software engineering discipline (DRY, modularity, testing) to data transformation work through dbt's abstraction layer.
Do NOT use for: answering natural-language questions with data (use the answering-natural-language-questions-with-dbt skill).

This skill includes detailed reference guides for specific techniques. Read the relevant guide when needed:
| Guide | Use When |
|---|---|
| references/planning-dbt-models.md | Building new models - work backwards from desired output and use dbt show to validate results |
| references/discovering-data.md | Exploring unfamiliar sources or onboarding to a project |
| references/writing-data-tests.md | Adding tests - prioritize high-value tests over exhaustive coverage |
| references/debugging-dbt-errors.md | Fixing project parsing, compilation, or database errors |
| references/evaluating-impact-of-a-dbt-model-change.md | Assessing downstream effects before modifying models |
| references/writing-documentation.md | Writing documentation that goes beyond restating the column name |
| references/managing-packages.md | Installing and managing dbt packages |
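As a sketch of the package-management workflow the last guide covers (the pinned version is illustrative - check the dbt package hub for current releases):

```yaml
# packages.yml at the project root (version shown is an assumption)
packages:
  - package: dbt-labs/dbt_utils
    version: 1.3.0
```

After editing packages.yml, run `dbt deps` to install the declared packages.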
When users request new models: Always ask "why a new model vs extending existing?" before proceeding. Legitimate reasons exist (different grain, precalculation for performance), but users often request new models out of habit. Your job is to surface the tradeoff, not blindly comply.
Follow project conventions:

- Prefer {{ ref() }} and {{ source() }} over hardcoded table names.
- Read the model's schema file (a .yml or .yaml file in the models directory, normally colocated with the SQL file).
- Read the model's description to understand its purpose.
- Read column description fields to understand what each column represents.
- Check meta properties that document business logic or ownership.

When implementing a model, use dbt show regularly to validate results.
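A minimal sketch of a model that follows these conventions (the model, source, and column names are hypothetical):

```sql
-- models/staging/stg_orders.sql (hypothetical model)
with orders as (
    -- source() points at a declared raw table, so dbt tracks the dependency
    select * from {{ source('jaffle_shop', 'raw_orders') }}
),

customers as (
    -- ref() points at another model, never a hardcoded table name
    select * from {{ ref('stg_customers') }}
)

select
    orders.id as order_id,
    orders.customer_id,
    orders.ordered_at
from orders
inner join customers
    on orders.customer_id = customers.customer_id
```

A colocated stg_orders.yml would then carry the model description and per-column description fields described above.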
Work efficiently against the warehouse:

- Use --limit with dbt show, and insert limits early into CTEs when exploring data.
- Use deferral (--defer --state path/to/prod/artifacts) to reuse production objects.
- Use dbt clone to produce zero-copy clones.
- Use --select instead of running the entire project.

| Mistake | Why It's Wrong | Fix |
|---|---|---|
| One-shotting models | Data work requires validation; schemas are unknown | Follow references/planning-dbt-models.md, iterate with dbt show |
| Not working iteratively | Changing multiple models at once makes debugging hard | Run dbt build --select changed_model after modifying each model |
| Assuming schema knowledge | Column names, types, and values vary across warehouses | Follow references/discovering-data.md before writing SQL |
| Not reading existing model documentation | Column names don't reveal business meaning | Read YAML descriptions before modifying models |
| Creating unnecessary models | Warehouse compute has real costs | Extend existing models when possible |
| Hardcoding table names | Breaks dbt's dependency graph | Always use {{ ref() }} and {{ source() }} |
| Global config changes | Configuration cascades unexpectedly | Change surgically, match existing patterns |
| Running DDL directly | Bypasses dbt's abstraction and tracking | Use dbt commands exclusively |
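An iterative loop under these rules might look like the following (the model name and state path are hypothetical):

```
# Preview a handful of rows before materializing anything
dbt show --select stg_orders --limit 10

# Build only the changed model, deferring unbuilt parents to production objects
dbt build --select stg_orders --defer --state path/to/prod/artifacts
```

Repeating this per model keeps each change small and independently debuggable.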
| Excuse | Reality |
|---|---|
| "User explicitly asked for a new model" | Users request out of habit. Ask why before complying. |
| "I've done this pattern hundreds of times" | This project's schema may differ. Verify with dbt show. |
| "User is senior / knows what they're doing" | Seniority doesn't change compute costs. Surface tradeoffs. |
| "It's just a small change" | Small changes compound. Follow DRY principles. |
Never skip dbt show validation because "it's straightforward" - verify results against the actual warehouse.