Use when a dbt Cloud/platform job fails and you need to diagnose the root cause, especially when error messages are unclear or when intermittent failures occur. Do not use for local dbt development errors.
Install with Tessl CLI
npx tessl i github:dbt-labs/dbt-agent-skills --skill troubleshooting-dbt-job-errors
Quality
83%
Does it follow best practices?
Impact
Pending
No eval scenarios have been run
Discovery
89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with strong trigger terms and clear scope boundaries. The explicit 'Use when' and 'Do not use' clauses make it highly actionable for skill selection. The main weakness is the lack of specific diagnostic actions that would help users understand the full capability.
Suggestions
Add specific concrete actions like 'analyze run logs', 'check model dependencies', 'review environment variables', or 'inspect connection settings' to improve specificity.
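For instance, the frontmatter description could name those actions directly. A hypothetical revision is sketched below, assuming the common SKILL.md convention of `name` and `description` frontmatter keys; the added action phrases are illustrative, not taken from the skill itself:

```yaml
---
name: troubleshooting-dbt-job-errors
description: >
  Use when a dbt Cloud/platform job fails and you need to diagnose the
  root cause, especially when error messages are unclear or failures are
  intermittent. Analyzes run logs, checks model dependencies, reviews
  environment variables, and inspects connection settings. Do not use
  for local dbt development errors.
---
```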
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (dbt Cloud/platform job failures) and describes the general action (diagnose root cause), but lacks specific concrete actions like 'analyze logs', 'check dependencies', or 'review run history'. | 2 / 3 |
| Completeness | Clearly answers both what (diagnose the root cause of dbt Cloud job failures) and when (job fails, unclear error messages, intermittent failures). Also includes helpful exclusion criteria ('Do not use for local dbt development errors'). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'dbt Cloud', 'job fails', 'error messages', 'intermittent failures'. These are the terms users would naturally use when encountering this problem. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche targeting dbt Cloud/platform job failures, with clear exclusion of local development. The combination of 'dbt Cloud', 'job fails', and 'platform' creates distinct triggers unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured troubleshooting skill with excellent actionability and workflow clarity. It provides concrete commands, clear decision trees, and appropriate validation checkpoints. The main weakness is length: while comprehensive, some sections (such as the rationalizations table and the full investigation template) add verbosity that could be trimmed or externalized.
Suggestions
Consider moving the investigation document template to a separate file (e.g., INVESTIGATION_TEMPLATE.md) and referencing it, reducing the main skill's token footprint.
The 'Rationalizations That Mean STOP' table, while valuable, could be condensed to a shorter list or moved to a linked reference document.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but includes some verbose sections, such as the 'Rationalizations That Mean STOP' table and extensive markdown formatting. The mermaid diagram adds visual value but consumes tokens. Some explanations could be tightened. | 2 / 3 |
| Actionability | Provides fully executable commands and concrete examples throughout: specific MCP tool calls, dbt CLI commands, git commands, and even a complete unit test YAML example. The artifact URL template with placeholders is immediately usable. | 3 / 3 |
| Workflow Clarity | Clear 4-step workflow with explicit decision points and validation. The mermaid flowchart visualizes the process, and each step has clear branching logic (MCP available vs. not, error type classification, root cause found vs. not). Includes explicit 'do not proceed' guidance. | 3 / 3 |
| Progressive Disclosure | References external skills ('discovering-data', 'using-dbt-for-analytics-engineering') appropriately, but the document itself is quite long (~200 lines), with detailed content that could potentially be split. The Quick Reference table at the end is good, but some sections, like the investigation template, could be a separate file. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
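All three warnings point at the skill's frontmatter. A minimal sketch of a fix is shown below, assuming the validator expects a top-level `license` field and a `metadata.version` key; the license value and version number are placeholders, not values from the skill itself:

```yaml
---
name: troubleshooting-dbt-job-errors
description: Use when a dbt Cloud/platform job fails and you need to diagnose the root cause.
license: Apache-2.0        # placeholder; resolves the 'license' warning
metadata:
  version: 1.0.0           # resolves the 'metadata.version' warning
  # Any custom keys currently at the top level of the frontmatter can be
  # moved under 'metadata' to clear the 'frontmatter_unknown_keys' warning.
---
```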
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.