Execute Databricks production deployment checklist and rollback procedures. Use when deploying Databricks jobs to production, preparing for launch, or implementing go-live procedures. Trigger with phrases like "databricks production", "deploy databricks", "databricks go-live", "databricks launch checklist".
Overall score: 83%

Impact: Pending (no eval scenarios have been run)
Quality (does it follow best practices?): Passed, no known issues
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with excellent completeness and trigger term coverage. Its main weakness is that the 'what' portion could be more specific about the concrete actions involved in the checklist and rollback procedures. The explicit trigger phrases and clear 'Use when' clause make it highly functional for skill selection.
Suggestions

- Expand the capability description with more specific actions, e.g., 'validate job configurations, verify cluster policies, check permissions, execute rollback scripts', to improve specificity.
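As a sketch of this suggestion, a more action-specific description might read as follows. The skill name, file path, and exact wording below are illustrative, not the skill's actual frontmatter:

```shell
# Sketch only: write an illustrative SKILL.md with a more action-specific
# description. The name and wording are hypothetical, not the real skill's.
mkdir -p /tmp/skill-sketch
cat > /tmp/skill-sketch/SKILL.md <<'EOF'
---
name: databricks-production-deployment
description: >
  Validate job configurations, verify cluster policies and permissions,
  run a post-deploy verification job, and execute rollback scripts. Use
  when deploying Databricks jobs to production, preparing for launch, or
  implementing go-live procedures. Trigger with phrases like "databricks
  production", "deploy databricks", "databricks go-live".
---
EOF
```

The same trigger phrases are preserved; only the "what" clause gains concrete verbs.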
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (Databricks production deployment) and mentions two actions (deployment checklist and rollback procedures), but doesn't list concrete actions like 'validate job configurations, verify cluster policies, run integration tests, execute rollback scripts'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (execute deployment checklist and rollback procedures) and 'when' (deploying to production, preparing for launch, go-live procedures) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'databricks production', 'deploy databricks', 'databricks go-live', 'databricks launch checklist'. These are phrases users would naturally say when needing this skill, with good variation coverage. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with the Databricks-specific niche combined with production deployment focus. The trigger terms are specific enough to avoid conflicts with generic deployment skills or other Databricks skills focused on development/testing. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable production deployment checklist with excellent workflow clarity and concrete, executable examples at every step. Its main weakness is length — the comprehensive inline examples (full YAML config, Python monitoring code, bash rollback script, SQL dashboard) make it token-heavy and could benefit from being split into referenced bundle files. The error handling table and rollback procedure are particularly well-done with clear escalation and verification steps.
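The rollback procedure praised here (pause → cancel → redeploy → restore → verify) can be sketched as a dry-run shell function. The helper names and the 'prod' / 'verification_job' bundle identifiers are assumptions, not the skill's actual rollback.sh:

```shell
# Dry-run sketch of the rollback loop. Helper names are hypothetical;
# only the 'databricks bundle ...' lines mirror real CLI subcommands.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"   # default to dry-run so the sketch is safe to run

run() { if [ "$DRY_RUN" = "1" ]; then echo "would: $*"; else "$@"; fi; }

rollback() {
  run pause-job-schedules                                    # stop new runs starting
  run cancel-inflight-runs                                   # stop in-progress runs
  run databricks bundle deploy --target prod                 # redeploy last good bundle
  run restore-migrated-state                                 # undo any state migrations
  run databricks bundle run verification_job --target prod   # verify health
}

rollback
```

Keeping the dry-run default means the script can be rehearsed safely before a real incident.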
Suggestions

- Extract the full YAML job configuration, rollback script, and monitoring Python code into separate bundle files (e.g., resources/prod_etl.yml, scripts/rollback.sh, scripts/health_check.py) and reference them from SKILL.md to improve progressive disclosure and reduce token footprint.
- Trim the YAML example to show only essential/non-obvious fields with a comment like '# See resources/prod_etl.yml for full config' to improve conciseness.
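Acting on the first suggestion amounts to a small restructuring, sketched here with the paths named above. Stub contents only; the real files would hold the YAML, bash, and Python extracted from SKILL.md:

```shell
# Sketch: create the suggested bundle layout with placeholder files.
# Paths come from the suggestion above; real contents would be moved
# out of SKILL.md rather than left empty.
base=/tmp/skill-layout
mkdir -p "$base/resources" "$base/scripts"
: > "$base/resources/prod_etl.yml"     # full YAML job config moves here
: > "$base/scripts/rollback.sh"        # rollback bash script moves here
: > "$base/scripts/health_check.py"    # monitoring Python code moves here
find "$base" -type f
```

SKILL.md would then reference each path instead of inlining the content.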
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly efficient and avoids explaining basic concepts Claude would know, but it's quite long (~200+ lines) with some sections that could be tightened. The full YAML job configuration example is extensive and could be trimmed to essential fields with a note about optional ones. The post-deploy monitoring Python function includes some boilerplate that adds length without proportional value. | 2 / 3 |
| Actionability | Excellent actionability throughout: every step includes executable bash commands, complete YAML configurations, working Python code, and SQL queries. The rollback script is a complete, copy-paste-ready bash script with proper error handling (set -euo pipefail). The deployment commands include verification steps with concrete jq parsing. | 3 / 3 |
| Workflow Clarity | The 7-step workflow is clearly sequenced from pre-deployment security through rollback. Validation checkpoints are explicit: bundle validate before deploy, verification run after deploy, health check post-deploy. The rollback procedure includes a clear feedback loop (pause → cancel → redeploy → restore → verify). The error handling table provides clear escalation paths with severity levels and actions. | 3 / 3 |
| Progressive Disclosure | The skill references external resources (databricks-observability, databricks-upgrade-migration, external docs) but the body itself is quite long and monolithic. The full YAML job config, Python health check, rollback script, and SQL dashboard query could be split into referenced files. Without bundle files to offload content, the single file carries a lot of inline detail. | 2 / 3 |
| Total | | 10 / 12 Passed |
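The checkpoint sequence credited above (validate before deploy, verification run after) can be listed as a sketch. The 'prod' target and 'verification_job' resource key are assumptions about the bundle, and the commands are printed here rather than executed:

```shell
# Sketch: the deploy checkpoints in order, printed rather than executed.
# Target and job key are assumptions; adjust to the bundle's real names.
steps=(
  "databricks bundle validate --target prod"              # checkpoint: config is valid
  "databricks bundle deploy --target prod"                # deploy the bundle
  "databricks bundle run verification_job --target prod"  # checkpoint: smoke run
)
for s in "${steps[@]}"; do
  echo "checkpoint: $s"
done
```

Running validate as a hard gate before deploy is what makes the later rollback path a last resort rather than a routine step.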
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
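Both warnings can be checked mechanically before publishing. The sketch below lints frontmatter keys against an allowed list; that list is an assumption for illustration, not the spec's actual key set:

```shell
# Sketch: flag frontmatter keys outside an assumed allowed set.
# The 'allowed' list here is a guess for illustration, not the official spec.
check_frontmatter() {
  local file="$1"
  local allowed="name description license allowed-tools metadata"
  # Print keys found between the opening and closing '---' fences,
  # then report any key not in the allowed list.
  awk '/^---$/{n++; next} n==1 && /^[A-Za-z-]+:/{sub(/:.*/, ""); print}' "$file" |
  while read -r key; do
    case " $allowed " in
      *" $key "*) ;;                               # known key
      *) echo "unknown frontmatter key: $key" ;;
    esac
  done
}
```

Run against SKILL.md, this would surface the same unknown-key warning the validator reports, before the skill is submitted.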