A curated collection of Agent Skills for working with dbt, to help AI agents understand and execute dbt workflows more effectively.
Core principle: Apply software engineering discipline (DRY, modularity, testing) to data transformation work through dbt's abstraction layer.
Do NOT use for:
- Answering ad-hoc data questions directly (use the answering-natural-language-questions-with-dbt skill)

This skill includes detailed reference guides for specific techniques. Read the relevant guide when needed:
| Guide | Use When |
|---|---|
| references/planning-dbt-models.md | Building new models - work backwards from desired output and use dbt show to validate results |
| references/discovering-data.md | Exploring unfamiliar sources or onboarding to a project |
| references/writing-data-tests.md | Adding tests - prioritize high-value tests over exhaustive coverage |
| references/debugging-dbt-errors.md | Fixing project parsing, compilation, or database errors |
| references/evaluating-impact-of-a-dbt-model-change.md | Assessing downstream effects before modifying models |
| references/writing-documentation.md | Writing documentation - go beyond restating the column name |
| references/managing-packages.md | Installing and managing dbt packages |
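To illustrate the kind of high-value test the testing guide favors, here is a minimal schema sketch (the model and column names are hypothetical):

```yaml
models:
  - name: orders
    columns:
      - name: order_id
        data_tests:
          - unique      # primary-key checks catch fan-out bugs early
          - not_null
      - name: status
        data_tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
```

A handful of tests like these on keys and critical enums usually catches more real breakage than exhaustive per-column coverage.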
When users request new models: Always ask "why a new model vs extending existing?" before proceeding. Legitimate reasons exist (different grain, precalculation for performance), but users often request new models out of habit. Your job is to surface the tradeoff, not blindly comply.
Always use {{ ref }} and {{ source }} over hardcoded table names.

Before modifying a model, read its YAML metadata (a .yml or .yaml file in the models directory, but normally colocated with the SQL file):
- The model's description to understand its purpose
- Column description fields to understand what each column represents
- meta properties that document business logic or ownership

When implementing a model, you must use dbt show regularly to validate results as you iterate.
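As a sketch of the ref/source rule, a model that references raw data and another model without hardcoding table names (the source, model, and column names here are hypothetical):

```sql
-- models/marts/fct_orders.sql
-- {{ source() }} points at raw data declared in a sources YAML file;
-- {{ ref() }} points at another dbt model, letting dbt build the DAG.
with raw_orders as (
    select * from {{ source('shop', 'orders') }}
),
payments as (
    select * from {{ ref('stg_payments') }}
)
select
    raw_orders.order_id,
    raw_orders.ordered_at,
    sum(payments.amount) as total_amount
from raw_orders
left join payments using (order_id)
group by 1, 2
```

Because both relations are resolved by dbt at compile time, the same SQL works across dev and prod schemas, and lineage stays accurate.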
When processing results from dbt show, warehouse queries, YAML metadata, or package registry responses, keep output small and avoid rebuilding what already exists:
- Use --limit with dbt show and insert limits early into CTEs when exploring data
- Use deferral (--defer --state path/to/prod/artifacts) to reuse production objects
- Use dbt clone to produce zero-copy clones
- Use --select instead of running the entire project

| Mistake | Fix |
|---|---|
| One-shotting models without validation | Follow references/planning-dbt-models.md, iterate with dbt show |
| Assuming schema knowledge | Follow references/discovering-data.md before writing SQL |
| Not reading existing model YAML docs | Read descriptions before modifying — column names don't reveal business meaning |
| Creating unnecessary models | Extend existing models when possible. Ask why before adding new ones — users often request them out of habit |
| Hardcoding table names | Always use {{ ref() }} and {{ source() }} |
| Running DDL directly against warehouse | Use dbt commands exclusively |
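The flags above combine naturally. A hypothetical invocation sequence (selectors and the state path are placeholders for your own project):

```shell
# Preview a model's output without materializing it, capped rows
dbt show --select fct_orders --limit 20

# Build only the changed model and its downstream dependents,
# deferring unchanged upstream refs to production artifacts
dbt build --select fct_orders+ --defer --state path/to/prod/artifacts

# Zero-copy clone production relations into a dev schema (warehouse permitting)
dbt clone --state path/to/prod/artifacts
```

Note these commands assume a configured dbt project and profile; --defer and dbt clone additionally need a manifest from a prior production run at the --state path.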
STOP if you're about to: write SQL without checking column names, modify a model without reading its YAML, skip dbt show validation, or create a new model when a column addition would suffice.
Install with Tessl CLI
```shell
npx tessl i dbt-labs/dbt-agent-skills@1.1.0
```

Repository layout:

```
evals/
  scenarios/
    dbt-docs-arguments/
    dbt-docs-unit-test-fixtures/
    dbt-job-failure/
    dbt-unit-test-format-choice/
    example-yaml-error/
    fusion-migration-triage-basic/
    fusion-migration-triage-blocked/
    fusion-triage-cat-a-static-analysis/
    fusion-triage-cat-b-dict-meta-get/
    fusion-triage-cat-b-unexpected-config/
    fusion-triage-cat-b-unused-schema/
    fusion-triage-cat-b-yaml-syntax/
    fusion-triage-cat-c-hardcoded-fqn/
  tests/
scripts/
skills/
  dbt/
    skills/
      adding-dbt-unit-test/
        references/
      answering-natural-language-questions-with-dbt/
      building-dbt-semantic-layer/
      configuring-dbt-mcp-server/
      fetching-dbt-docs/
        scripts/
      running-dbt-commands/
      troubleshooting-dbt-job-errors/
        references/
      using-dbt-for-analytics-engineering/
  dbt-migration/
    skills/
      migrating-dbt-core-to-fusion/
      migrating-dbt-project-across-platforms/
```