Diagnose and fix Databricks common errors and exceptions. Use when encountering Databricks errors, debugging failed jobs, or troubleshooting cluster and notebook issues. Trigger with phrases like "databricks error", "fix databricks", "databricks not working", "debug databricks", "spark error".
Quality: 77%
Does it follow best practices? Passed
Impact: — (no eval scenarios have been run)
No known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-common-errors/SKILL.md`

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms and clear 'what/when' guidance. Its main weakness is that the capability description is somewhat general: it would benefit from listing more specific concrete actions, such as parsing stack traces, resolving cluster configuration errors, or fixing notebook dependency issues. Overall, it would perform well in a multi-skill selection scenario.
Suggestions
Add more specific concrete actions to improve specificity, e.g., 'parse stack traces, resolve cluster configuration errors, fix notebook dependency conflicts, troubleshoot Spark OOM errors'.
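Applied to this skill, the suggestion might look like the following frontmatter sketch (illustrative wording only, not the skill's actual description):

```yaml
description: >
  Diagnose and fix common Databricks errors and exceptions: parse stack
  traces, resolve cluster configuration errors, fix notebook dependency
  conflicts, and troubleshoot Spark OOM errors. Use when encountering
  Databricks errors, debugging failed jobs, or troubleshooting cluster and
  notebook issues. Trigger with phrases like "databricks error",
  "fix databricks", "databricks not working", "debug databricks",
  "spark error".
```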
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (Databricks) and some actions ('diagnose and fix', 'debugging failed jobs', 'troubleshooting cluster and notebook issues'), but doesn't list specific concrete actions like parsing error logs, fixing configuration issues, or resolving dependency conflicts. | 2 / 3 |
| Completeness | Clearly answers both 'what' (diagnose and fix Databricks common errors and exceptions) and 'when' (encountering Databricks errors, debugging failed jobs, troubleshooting cluster and notebook issues) with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would actually say: 'databricks error', 'fix databricks', 'databricks not working', 'debug databricks', 'spark error'. These cover common variations of how users would phrase their requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Databricks is a specific platform, and the description clearly scopes to Databricks errors, cluster issues, and notebook problems. This is unlikely to conflict with general debugging or other platform-specific skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable troubleshooting reference with real executable code for each error pattern. Its main weaknesses are the lack of explicit verification steps after applying fixes (e.g., how to confirm a permission grant worked or schema evolution succeeded) and the monolithic structure that could benefit from splitting into separate files per error category. The content is mostly concise but has room for tightening.
Suggestions
Add explicit verification commands after each fix (e.g., after GRANT, run SHOW GRANTS to confirm; after schema evolution, read back the table schema) to close the feedback loop.
Consider splitting detailed error patterns into separate referenced files (e.g., delta-errors.md, cluster-errors.md) and keeping SKILL.md as a concise lookup table with links.
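The first suggestion amounts to closing a feedback loop: apply the fix, check that it is observable, and retry if not. A minimal Python sketch of that pattern (the function names are illustrative, not from the skill):

```python
import time


def apply_with_verification(apply_fix, verify, max_retries=3, delay=1.0):
    """Apply a fix, then confirm it took effect; retry until verified.

    apply_fix: callable that attempts the fix (e.g. runs a GRANT statement)
    verify:    callable returning True once the fix is observable
               (e.g. checks SHOW GRANTS output for the expected principal)
    """
    for attempt in range(1, max_retries + 1):
        apply_fix()
        if verify():
            return attempt  # resolution verified
        time.sleep(delay)
    raise RuntimeError(f"fix not verified after {max_retries} attempts")
```

In the Databricks case, `apply_fix` could wrap a `spark.sql("GRANT ...")` call and `verify` a parse of `SHOW GRANTS` output; the same shape works for schema evolution (apply the merge, then read back the table schema).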
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with real code examples and minimal fluff, but some sections include explanatory comments and context that Claude already knows (e.g., explaining what OOM means, what concurrent writes are). The summary table at the end partially duplicates the detailed sections above. Overall reasonably lean but could be tightened. | 2 / 3 |
| Actionability | Every error pattern includes fully executable code: real SDK calls, SQL statements, and CLI commands that are copy-paste ready. The merge_with_retry function, the cluster state handling, and the diagnostic jq commands are all concrete and immediately usable. | 3 / 3 |
| Workflow Clarity | Step 1 (identify error) and Step 2 (match and fix) provide a reasonable high-level workflow, but there are no explicit validation/verification checkpoints after applying fixes. The 'Output' section mentions 'Resolution verified' but never specifies how to verify. For operations like schema evolution or permission grants, a feedback loop (apply -> verify -> retry) would be important. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers per error type and a summary table, but it's a long monolithic file (~200+ lines of detailed content) that could benefit from splitting detailed error patterns into separate files. The references to the 'databricks-rate-limits' skill and 'databricks-debug-bundle' show some cross-referencing, but the bulk of content is inline rather than appropriately distributed. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
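Both warnings concern frontmatter hygiene: keep only spec-recognized keys and tool names at the top level, and move custom keys under `metadata`. A sketch of the shape (the `allowed-tools` values and the `pack` key here are hypothetical, not taken from the skill):

```yaml
---
name: databricks-common-errors
description: Diagnose and fix Databricks common errors and exceptions.
allowed-tools: Bash, Read        # use only tool names the validator recognizes
metadata:
  pack: saas-packs/databricks-pack   # hypothetical: formerly an unknown top-level key
---
```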