
# databricks-debug-bundle

Collect Databricks debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for Databricks problems. Trigger with phrases like "databricks debug", "databricks support bundle", "collect databricks logs", "databricks diagnostic".

Overall score: 80

- Quality: 77%. Does it follow best practices?
- Impact: Pending. No eval scenarios have been run.
- Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-debug-bundle/SKILL.md`

## Quality

### Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger term coverage and completeness, clearly specifying both what the skill does and when to use it. The main weakness is that the capability description could be more specific about the concrete actions performed (e.g., what types of logs, metrics, or artifacts are collected). The explicit trigger phrases section is a strong differentiator.

#### Suggestions

- Add more specific concrete actions to improve specificity, e.g., 'Collects cluster logs, driver logs, Spark event logs, notebook execution history, and environment configuration for Databricks debug evidence.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Databricks debug evidence) and some actions (collect debug evidence, prepare support tickets, collect diagnostic information), but doesn't list multiple specific concrete actions like 'gather cluster logs, export notebook run history, capture Spark UI metrics'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (collect Databricks debug evidence for support tickets and troubleshooting) and 'when' (encountering persistent issues, preparing support tickets, collecting diagnostic information) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Includes explicit natural trigger phrases: 'databricks debug', 'databricks support bundle', 'collect databricks logs', 'databricks diagnostic'. Also includes natural terms like 'support tickets', 'troubleshooting', and 'diagnostic information' that users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche targeting Databricks debug/diagnostic collection. The combination of 'Databricks' + 'debug'/'diagnostic'/'support bundle' creates a clear, distinct trigger space unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with concrete executable code for collecting Databricks debug information. Its main weaknesses are the lack of validation checkpoints between steps (e.g., checking if API calls succeed before proceeding) and the monolithic inline presentation of what is essentially a single bash script split across 7 steps. The content could be more concise by presenting the script as a single file reference with the SKILL.md focusing on usage and key decisions.

#### Suggestions

- Add explicit validation checkpoints: verify cluster_id exists before collecting cluster info, check API responses for errors before proceeding to the next step, and add a final bundle verification step.
- Consider consolidating the 7 script fragments into a single referenced script file, with the SKILL.md providing a concise overview, usage examples, and key decision points rather than the full implementation inline.
- Add error recovery guidance: what to do when auth fails, when the cluster is not accessible, or when DBFS logs are unavailable; currently, errors are silently captured into files without feedback loops.
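The checkpoint and error-recovery suggestions above can be sketched as a small step wrapper. This is a minimal sketch, assuming bash and a configured Databricks CLI; the `step` helper and the commented-out `databricks` invocations are illustrative, not taken from the skill itself.

```shell
#!/usr/bin/env bash
# Hypothetical validation checkpoint between collection steps.
set -euo pipefail

OUT_DIR="${OUT_DIR:-./debug-bundle}"
mkdir -p "$OUT_DIR"

# Run one collection step; stop with actionable feedback on failure
# instead of silently writing the error message into the bundle.
step() {
  local name="$1"; shift
  if ! "$@" > "$OUT_DIR/$name.json" 2> "$OUT_DIR/$name.err"; then
    echo "step '$name' failed; see $OUT_DIR/$name.err" >&2
    echo "hint: check auth and cluster access before retrying" >&2
    return 1
  fi
  echo "step '$name' ok"
}

# Hypothetical checkpoints, one per collection step, e.g.:
#   step cluster-info databricks clusters get --cluster-id "$CLUSTER_ID"
#   step cluster-events databricks clusters events --cluster-id "$CLUSTER_ID"
step smoke-test echo '{"ok": true}'   # runnable stand-in for the demo
```

Each step either produces a verified artifact or halts the run with a hint, which closes the "errors silently captured into files" gap the review describes.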

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is fairly long with some sections that could be tightened (e.g., the error handling table, resources section, and some inline comments are somewhat redundant). The 'Current State' dynamic checks and prerequisites are useful, but the overall length (~180 lines) could be reduced by combining steps into a single script rather than splitting across 7 steps. | 2 / 3 |
| Actionability | The skill provides fully executable bash scripts and Python code that are copy-paste ready. Commands are specific with real CLI flags, jq filters, and concrete examples of usage with different argument combinations. | 3 / 3 |
| Workflow Clarity | The 7 steps are clearly sequenced and the final packaging step includes redaction. However, there are no explicit validation checkpoints between steps: no verification that cluster_id exists, that API calls succeeded before proceeding, or feedback loops for error recovery. For a script that collects sensitive data and packages it, validation gaps are notable. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and sections, but it's essentially a monolithic document with all implementation details inline. The script steps could be presented as a single downloadable script with the SKILL.md providing an overview and usage guide, rather than embedding ~100 lines of bash across 7 steps. | 2 / 3 |
| Total | | 9 / 12 |

Passed
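The missing "final bundle verification step" the review asks for could look like the following. This is a sketch under assumptions: the Databricks personal-access-token pattern (`dapi` + 32 hex characters) and the file layout are illustrative, and `sed -i` is used in its GNU form.

```shell
#!/usr/bin/env bash
# Hypothetical final step: verify the bundle exists, redact obvious
# secrets, then package it for a support ticket.
set -euo pipefail

verify_and_pack() {
  local dir="$1"
  [ -d "$dir" ] || { echo "bundle dir missing: $dir" >&2; return 1; }
  # Redact anything shaped like a Databricks personal access token.
  find "$dir" -type f -exec sed -i 's/dapi[a-f0-9]\{32\}/REDACTED/g' {} +
  tar -czf "$dir.tar.gz" "$dir"
  echo "packed $dir.tar.gz"
}

# Runnable demonstration with a fake token:
mkdir -p demo-bundle
echo 'token=dapi0123456789abcdef0123456789abcdef' > demo-bundle/env.txt
verify_and_pack demo-bundle
```

Running redaction as a verification gate, rather than inline in step 7, means a failed or empty bundle is caught before anything is attached to a ticket.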

### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
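Both warnings point at the SKILL.md frontmatter. A minimal compliant header might look like the following; the exact key set and the `Bash` tool name are assumptions based on the warning text, not taken from the spec or the skill itself:

```yaml
---
name: databricks-debug-bundle
description: Collect Databricks debug evidence for support tickets and troubleshooting.
# Keep only spec-defined keys; use canonical tool names in allowed-tools
# and move any custom keys under metadata, as the warning suggests.
allowed-tools:
  - Bash
metadata:
  pack: databricks-pack
---
```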

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
