Configure Databricks CI/CD integration with GitHub Actions and Asset Bundles. Use when setting up automated testing, configuring CI pipelines, or integrating Databricks deployments into your build process. Trigger with phrases like "databricks CI", "databricks GitHub Actions", "databricks automated tests", "CI databricks", "databricks pipeline".
Overall score: 85 (83%)

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Advisory: Suggest reviewing before use
Quality
Discovery
89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description: it clearly identifies its niche (Databricks CI/CD with GitHub Actions), provides explicit trigger guidance, and lists natural user phrases. The main weakness is specificity: the description covers the domain well but could enumerate more granular concrete actions. Note: the description uses the second-person 'your' in 'your build process', which should be converted to third person.
Suggestions
Add more specific concrete actions such as 'create GitHub Actions workflow files, define Asset Bundle configurations, set up deployment targets, configure test stages' to improve specificity.
Replace second person 'your build process' with third person phrasing like 'the build process' to maintain consistent voice.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks CI/CD with GitHub Actions and Asset Bundles) and some actions (configure integration, setting up automated testing, configuring CI pipelines, integrating deployments), but doesn't list multiple concrete granular actions like 'create workflow files, define bundle configurations, set up test stages'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (configure Databricks CI/CD integration with GitHub Actions and Asset Bundles) and 'when' (setting up automated testing, configuring CI pipelines, integrating deployments) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger phrases like 'databricks CI', 'databricks GitHub Actions', 'databricks automated tests', 'CI databricks', 'databricks pipeline' — these are terms users would naturally say. Good coverage of common variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche combining Databricks + CI/CD + GitHub Actions + Asset Bundles. The trigger terms are highly specific to this domain and unlikely to conflict with generic CI/CD or generic Databricks skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable skill with clear multi-step workflows, explicit validation checkpoints, and production-ready code examples. Its main weakness is length — the repeated environment variable blocks and some sections (OIDC, branch-based targets) could be extracted to keep the primary skill more concise. Overall it provides excellent guidance for setting up Databricks CI/CD with GitHub Actions.
Suggestions
Extract the repeated DATABRICKS_HOST/CLIENT_ID/CLIENT_SECRET env blocks into a reusable pattern or note (e.g., 'All steps use the same three env vars from secrets') to reduce duplication.
Move the OIDC authentication and branch-based development sections to a supplementary file (e.g., ADVANCED-CI.md) and link from the main skill to improve progressive disclosure.
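The env-block consolidation suggested above can be sketched as a single job-level `env` declaration, which GitHub Actions propagates to every step in the job. This is a hypothetical sketch using the secret names the review mentions, not the skill's actual workflow file:

```yaml
# Hypothetical sketch: declare the shared env once at the job level
# instead of repeating it verbatim in every step.
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    env:  # inherited by every step in this job
      DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
      DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
      DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET }}
    steps:
      - uses: actions/checkout@v4
      - run: databricks bundle validate -t staging
      - run: databricks bundle deploy -t staging
```

Workflow-level `env` works the same way when several jobs share the credentials.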
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with concrete YAML and Python examples, but includes some unnecessary elements like the overview paragraph restating what the title already conveys, and the error handling table covers some obvious issues (e.g., 'PySpark import error' → 'Add to pip install step'). The env blocks are repeated verbatim across multiple steps, adding bulk. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready GitHub Actions YAML workflows, complete pytest fixtures with PySpark, specific CLI commands, and concrete databricks.yml configuration snippets. Every step includes real commands and real code. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced: validate/test on PR → deploy staging with integration tests → deploy production on merge with approval gates. Validation checkpoints are explicit (bundle validate before deploy, integration tests before merge, smoke tests post-deploy), and concurrency control prevents parallel deployment conflicts. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and a resources section linking to external docs, but the skill itself is quite long (~180 lines of substantive content) with inline YAML blocks that could be referenced as separate files. The OIDC section and branch-based development targets could be split into supplementary files to keep the main skill leaner. | 2 / 3 |
| Total | | 10 / 12 Passed |
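The sequencing praised in the Workflow Clarity row (validate/test on PR, staging deploy with integration tests, production deploy behind an approval gate) can be outlined as a single workflow. Job names and the `production` environment are assumptions for illustration, not the skill's actual configuration:

```yaml
# Hypothetical outline of the PR → staging → production flow.
on:
  pull_request:          # validate + unit tests on every PR
  push:
    branches: [main]     # deploy on merge

jobs:
  validate:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: databricks bundle validate -t staging
      - run: pytest tests/unit

  deploy-staging:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    concurrency: staging-deploy   # blocks parallel deployments
    steps:
      - uses: actions/checkout@v4
      - run: databricks bundle deploy -t staging
      - run: pytest tests/integration

  deploy-production:
    if: github.event_name == 'push'
    needs: deploy-staging
    environment: production       # manual approval gate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: databricks bundle deploy -t prod
```

The `environment: production` line is what enforces the approval gate, assuming a protected environment of that name is configured in the repository settings.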
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
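Both warnings concern frontmatter hygiene: keep `allowed-tools` to standard tool names and relocate unrecognized keys. A cleaned-up frontmatter might look like the following sketch (all key values here are hypothetical; the skill's actual frontmatter is not reproduced in this review):

```yaml
# Hypothetical sketch: standard tool names only, extras moved under metadata.
---
name: databricks-ci
description: Configure Databricks CI/CD integration with GitHub Actions and Asset Bundles...
allowed-tools: Bash, Read, Write   # standard names, no unusual tools
metadata:
  maintainer: platform-team        # formerly an unknown top-level key
---
```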