Build apps on Databricks Apps platform. Use when asked to create dashboards, data apps, analytics tools, or visualizations. Invoke BEFORE starting implementation.
Overall score: 76%
Does it follow best practices?
Impact: Pending (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./examples/saas-tracker/template/.agents/skills/databricks-apps/SKILL.md`
Quality
Discovery: 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is structurally sound, with a clear 'Use when' clause and timing guidance that earn strong completeness marks. However, it could be more specific about the concrete actions performed and could include more of the trigger terms users would naturally say. Broad terms like 'dashboards' and 'visualizations' also create some conflict risk with other potential skills.
Suggestions
- Add more specific concrete actions, such as 'deploy Streamlit/Gradio/Dash apps', 'configure app endpoints', or 'connect apps to Databricks SQL warehouses', to improve specificity.
- Include additional trigger terms users might naturally use, such as 'Streamlit on Databricks', 'deploy data app', 'Databricks app framework', or the specific framework names supported by the platform.
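As an illustrative sketch of how these suggestions might be folded into the skill's frontmatter (the field names follow the common SKILL.md frontmatter convention, and the wording below is a hypothetical revision, not the skill's actual description):

```yaml
# Hypothetical revised frontmatter, incorporating the concrete actions
# and extra trigger terms suggested above.
name: databricks-apps
description: >
  Build and deploy apps (Streamlit, Gradio, Dash) on the Databricks Apps
  platform. Use when asked to create dashboards, data apps, analytics
  tools, or visualizations, to deploy a data app on Databricks, or to
  connect an app to Databricks SQL warehouses. Invoke BEFORE starting
  implementation.
```

The revision keeps the original 'Use when' clause and timing directive while naming the supported frameworks and a concrete data-connection action.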
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks Apps platform) and some actions (create dashboards, data apps, analytics tools, visualizations), but doesn't list specific concrete actions like 'deploy Streamlit apps', 'configure app permissions', or 'connect to Unity Catalog tables'. The actions listed are more like categories than concrete operations. | 2 / 3 |
| Completeness | Clearly answers both 'what' (build apps on Databricks Apps platform) and 'when' (when asked to create dashboards, data apps, analytics tools, or visualizations) with an explicit 'Use when' clause. Also includes a timing directive ('Invoke BEFORE starting implementation'), which adds useful procedural guidance. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'dashboards', 'data apps', 'analytics tools', 'visualizations', and 'Databricks Apps'. However, it misses common variations users might say, such as 'Streamlit', 'Gradio', 'Dash', 'deploy app on Databricks', 'Databricks app framework', or 'interactive reports'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The Databricks Apps platform reference provides some distinctiveness, but terms like 'dashboards', 'visualizations', and 'analytics tools' are quite broad and could easily overlap with general dashboard-building skills, Plotly/Matplotlib visualization skills, or other BI platform skills. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Implementation: 85%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that provides clear, actionable guidance for building Databricks Apps. Its strongest aspects are the phase-based reference table for progressive disclosure, the concrete CLI commands with correct flag syntax, and the explicit workflow ordering with validation checkpoints. Minor verbosity in the scaffolding section and some repetitive warnings slightly reduce token efficiency, but the content is overall lean and purposeful.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient and avoids explaining basic concepts Claude knows, but some sections are slightly verbose: the manifest/scaffolding section repeats concepts, and the 'When to Use What' section could be tighter. The repeated warnings (DO NOT guess, ALWAYS, etc.) add some bulk but are arguably justified for error prevention. | 2 / 3 |
| Actionability | Provides concrete, executable CLI commands for every step (manifest, init, validate, typegen, docs), specific flag syntax with examples, clear anti-patterns with wrong/right comparisons, and precise naming constraints. The scaffolding workflow is fully copy-paste ready with real flag names and values. | 3 / 3 |
| Workflow Clarity | The development workflow is clearly sequenced with numbered steps, explicit ordering constraints ('DO NOT write UI code before running typegen'), validation checkpoints (validate before deploying, update smoke tests before validation), and distinct paths for analytics vs Lakebase apps. The scaffolding workflow has a clear two-phase sequence (manifest first, then init) with validation. | 3 / 3 |
| Progressive Disclosure | Excellent structure with a phase-based reference table at the top pointing to specific guide files, inline links to detailed references throughout, and clear separation between overview content in SKILL.md and detailed content in referenced files. References are one level deep and clearly signaled. The `npx docs` command provides an additional discovery mechanism. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
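As a sketch of how the `frontmatter_unknown_keys` warning is typically resolved, following the check's own advice to move unrecognized keys under metadata (the `owner` key below is a hypothetical example, not a key actually present in this skill):

```yaml
# Before: an unrecognized top-level key triggers the warning.
#   name: databricks-apps
#   owner: data-platform-team   # unknown key at the top level
#
# After: custom keys are nested under metadata instead.
name: databricks-apps
metadata:
  owner: data-platform-team
```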