`tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill databricks-common-errors`

Diagnose and fix Databricks common errors and exceptions. Use when encountering Databricks errors, debugging failed jobs, or troubleshooting cluster and notebook issues. Trigger with phrases like "databricks error", "fix databricks", "databricks not working", "debug databricks", "spark error".
Validation: 81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation: 88%

This is a strong, actionable skill that efficiently covers common Databricks errors with executable solutions. The code examples are production-ready with proper error handling patterns. A minor weakness is the lack of explicit verification steps after applying fixes; users should confirm their error is actually resolved.
Suggestions

- Add verification commands after each solution (e.g., 'Verify fix: databricks clusters get --cluster-id abc123 | grep state' to confirm the cluster is running); a minimal sketch of this check appears after this list.
- Include a brief 'If this doesn't work' fallback for each error pattern to guide escalation.
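As an illustration of the first suggestion, here is a minimal sketch of a verification step, not part of the skill itself. It assumes the legacy Databricks CLI is installed and authenticated; the helper name `verify_cluster_running` is hypothetical, and `abc123` is the placeholder cluster ID from the suggestion above.

```python
import json
import subprocess

def verify_cluster_running(cluster_id: str) -> bool:
    """Run `databricks clusters get` and report whether the cluster is RUNNING."""
    result = subprocess.run(
        ["databricks", "clusters", "get", "--cluster-id", cluster_id],
        capture_output=True,
        text=True,
        check=True,  # raises CalledProcessError if the CLI call fails
    )
    state = json.loads(result.stdout).get("state")
    print(f"Cluster {cluster_id} state: {state}")
    return state == "RUNNING"

# Example: verify_cluster_running("abc123")
```

A shell one-liner (as in the suggestion) works just as well; the Python wrapper only matters if the skill wants a programmatic pass/fail signal.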
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, jumping directly into error patterns with executable solutions. No unnecessary explanations of what Databricks or Spark are; it assumes Claude's competence throughout. | 3 / 3 |
| Actionability | Every error includes fully executable Python code, SQL commands, or CLI commands that are copy-paste ready. Solutions include both quick fixes and more robust alternatives with retry logic. | 3 / 3 |
| Workflow Clarity | The initial 3-step workflow (Identify → Find → Apply) is clear but generic. Individual error solutions lack explicit validation checkpoints: for example, the DELTA_CONCURRENT_WRITE retry logic doesn't verify that the write succeeded after retries complete (see the sketch after this table). | 2 / 3 |
| Progressive Disclosure | Well organized with clear sections, appropriate cross-references to related skills (databricks-rate-limits, databricks-debug-bundle), and external resources. Content is appropriately scoped without unnecessary nesting. | 3 / 3 |
| Total | | 11 / 12 Passed |
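To make the Workflow Clarity gap concrete, the following is a minimal sketch (not the skill's actual code) of concurrent-write retry logic that adds a post-write verification checkpoint. The function name `append_with_retry`, the backoff values, and the row-count check are all assumptions; `spark` is obtained via `getOrCreate()`, which on a Databricks cluster returns the existing session.

```python
import time
from pyspark.sql import DataFrame, SparkSession

# On Databricks, getOrCreate() attaches to the cluster's existing SparkSession.
spark = SparkSession.builder.getOrCreate()

def append_with_retry(df: DataFrame, table_name: str, max_retries: int = 3) -> None:
    """Append df to a Delta table, retrying on concurrent-write conflicts,
    then verify that the rows actually landed."""
    expected_new_rows = df.count()
    rows_before = spark.table(table_name).count()

    for attempt in range(1, max_retries + 1):
        try:
            df.write.format("delta").mode("append").saveAsTable(table_name)
            break
        except Exception as exc:
            # Only retry on concurrent-write conflicts (e.g. ConcurrentAppendException);
            # re-raise anything else, or give up after the last attempt.
            is_conflict = "Concurrent" in type(exc).__name__ or "Concurrent" in str(exc)
            if not is_conflict or attempt == max_retries:
                raise
            time.sleep(2 ** attempt)  # exponential backoff before retrying

    # Verification checkpoint: confirm the write is visible before declaring success.
    # This is a coarse check; other concurrent writers may also be appending rows.
    rows_after = spark.table(table_name).count()
    if rows_after < rows_before + expected_new_rows:
        raise RuntimeError(f"Append to {table_name} could not be verified after retries")
```

A stricter variant could inspect DESCRIBE HISTORY on the table for the expected operation instead of comparing row counts.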
Activation: 90%

This is a well-structured skill description that excels at completeness and trigger term quality. It clearly defines when to use the skill and provides natural language triggers. The main weakness is that the capability description could be more specific about what types of errors and fixes it handles.
Suggestions

- Add specific concrete actions like 'resolve out-of-memory errors', 'fix cluster startup failures', 'debug notebook execution issues', 'troubleshoot Spark job failures' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks) and general actions ('diagnose and fix', 'debugging failed jobs', 'troubleshooting cluster and notebook issues'), but doesn't list specific concrete actions like 'resolve OOM errors', 'fix cluster startup failures', or 'debug notebook execution errors'. | 2 / 3 |
| Completeness | Clearly answers both what ('Diagnose and fix Databricks common errors and exceptions') and when ('Use when encountering Databricks errors, debugging failed jobs, or troubleshooting cluster and notebook issues') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'databricks error', 'fix databricks', 'databricks not working', 'debug databricks', 'spark error'. These are realistic phrases users would naturally use when seeking help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on Databricks troubleshooting with distinct triggers. The combination of 'Databricks' + error/debugging context makes it unlikely to conflict with general coding or other cloud platform skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Reviewed
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.