Collect Databricks debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for Databricks problems. Trigger with phrases like "databricks debug", "databricks support bundle", "collect databricks logs", "databricks diagnostic".
Score: 80

- Does it follow best practices? Passed (no known issues)
- Impact: Pending (no eval scenarios have been run)

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-debug-bundle/SKILL.md`

Quality (77%)
Discovery (89%)
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong completeness and distinctiveness. It clearly defines when to use the skill and provides explicit trigger phrases. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., what types of logs, configs, or artifacts are collected).
Suggestions
- Add more specific concrete actions to improve specificity, e.g., 'Collects cluster logs, driver logs, Spark UI data, notebook execution history, and configuration snapshots for Databricks debug evidence.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (Databricks debug evidence) and some actions (collect debug evidence, prepare support tickets, collect diagnostic information), but doesn't list multiple specific concrete actions like 'gather cluster logs, export notebook run history, capture Spark UI screenshots, collect driver logs'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (collect Databricks debug evidence for support tickets and troubleshooting) and 'when' (encountering persistent issues, preparing support tickets, collecting diagnostic information) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Includes explicit trigger phrases that users would naturally say: 'databricks debug', 'databricks support bundle', 'collect databricks logs', 'databricks diagnostic'. Also includes natural terms like 'support tickets', 'troubleshooting', and 'diagnostic information'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche targeting Databricks debug/diagnostic collection specifically. The trigger terms are highly distinctive ('databricks debug', 'databricks support bundle') and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation (64%)
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with fully executable code and clear output specifications for collecting Databricks debug information. Its main weaknesses are the lack of validation checkpoints between collection steps (e.g., verifying auth before proceeding, checking if cluster ID exists) and the monolithic presentation that could benefit from extracting the script into a bundle file. The content is moderately verbose given that the 7-step breakdown could be presented as a single script with brief inline comments.
Suggestions
- Add validation checkpoints: verify CLI authentication succeeds before collecting data, check that cluster_id/run_id exist before querying, and add a summary of which collection steps succeeded/failed at the end.
- Extract the full bash script into a bundle file (e.g., databricks-debug-bundle.sh) and keep SKILL.md as an overview with usage examples, output description, and safety notes.
- Consolidate the 7 fragmented script steps into a single cohesive script block to reduce repetitive headers and improve readability.
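The first suggestion can be sketched as a small preflight guard that the collection script would call before gathering any evidence. This is illustrative, not part of the reviewed skill: the `preflight` name is invented here, and it assumes the unified Databricks CLI's `current-user me` and `clusters get` subcommands as cheap read-only probes; substitute whatever auth and existence checks your CLI version supports.

```shell
# Sketch of the suggested validation checkpoints (hypothetical helper).
preflight() {
  cluster_id="${1:-}"

  # Checkpoint 1: fail fast if the CLI is not authenticated.
  if ! databricks current-user me >/dev/null 2>&1; then
    echo "FAIL: CLI not authenticated; run 'databricks auth login' first" >&2
    return 1
  fi

  # Checkpoint 2: confirm the cluster ID resolves before deeper collection.
  if [ -n "$cluster_id" ] && ! databricks clusters get "$cluster_id" >/dev/null 2>&1; then
    echo "FAIL: cluster '$cluster_id' not found or not visible" >&2
    return 2
  fi

  echo "preflight OK"
}
```

Each subsequent collection step could append its own pass/fail line to a summary file, which would also satisfy the end-of-run success/failure report the review asks for.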
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but could be tightened. The prerequisites section, some inline comments, and the 'Current State' dynamic checks add modest overhead. The step-by-step breakdown across 7 steps with repeated append patterns is somewhat verbose; this could be a single script with brief annotations rather than fragmented steps with explanatory headers. | 2 / 3 |
| Actionability | The skill provides fully executable bash scripts and Python code that are copy-paste ready. Commands are specific with real CLI flags, jq filters, and concrete examples of usage with different argument combinations. The output format is clearly specified. | 3 / 3 |
| Workflow Clarity | The steps are clearly sequenced and the overall flow is logical. However, there are no validation checkpoints between steps: no verification that the CLI is authenticated, that the cluster ID is valid, or that individual collection steps succeeded before proceeding. The error handling table at the end is about data safety, not workflow validation. For a script that performs multiple API calls and file operations, explicit validation/retry guidance is missing. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but it's quite long (~180 lines of script content) and monolithic. The entire script could be a referenced file with SKILL.md providing just the overview, usage examples, and output description. The 'Next Steps' reference to 'databricks-rate-limits' is good, but no bundle files exist to offload the detailed script content. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation (81%)
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure (9 / 11 checks passed)
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
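Both warnings are frontmatter-shape issues. A hedged sketch of the fix (the key names and tool names below are illustrative; the reviewed skill's actual frontmatter is not shown in this report): restrict `allowed-tools` to names the validator recognizes, and move unknown top-level keys under `metadata`.

```yaml
---
name: databricks-debug-bundle
description: Collect Databricks debug evidence for support tickets and troubleshooting.
allowed-tools: Bash, Read, Write   # keep to tool names the validator knows
metadata:                          # unknown top-level keys move here
  pack: saas-packs/databricks-pack # illustrative example key
---
```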