Tessl

databricks-app-python

Builds Python-based Databricks applications using Dash, Streamlit, Gradio, Flask, FastAPI, or Reflex. Handles OAuth authorization (app and user auth), app resources, SQL warehouse and Lakebase connectivity, model serving integration, foundation model APIs, LLM integration, and deployment. Use when building Python web apps, dashboards, ML demos, or REST APIs for Databricks, or when the user mentions Streamlit, Dash, Gradio, Flask, FastAPI, Reflex, or Databricks app.

Overall score: 89

Quality: 86% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope, lists concrete capabilities, and provides explicit trigger guidance. It names six specific frameworks, multiple Databricks-specific features, and includes a well-crafted 'Use when...' clause with natural trigger terms. The description is comprehensive yet concise, making it easy for Claude to distinguish this skill from general Python or web development skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions and technologies: building apps with named frameworks (Dash, Streamlit, Gradio, Flask, FastAPI, Reflex), OAuth authorization, SQL warehouse/Lakebase connectivity, model serving integration, foundation model APIs, LLM integration, and deployment. | 3 / 3 |
| Completeness | Clearly answers both 'what' (builds Python-based Databricks applications with specific frameworks, handles OAuth, connectivity, deployment) and 'when' (explicit 'Use when...' clause listing trigger scenarios like building web apps, dashboards, ML demos, REST APIs, or mentioning specific frameworks). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: specific framework names (Streamlit, Dash, Gradio, Flask, FastAPI, Reflex), 'Databricks app', 'Python web apps', 'dashboards', 'ML demos', 'REST APIs'. These are all terms users would naturally use when requesting this kind of work. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: the combination of the Databricks platform + specific Python web frameworks + Databricks-specific features (Lakebase, SQL warehouse, model serving) makes this unlikely to conflict with generic Python or generic web app skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill that excels at progressive disclosure and actionability, serving as an effective hub document that routes to detailed guides while providing enough concrete code and configuration to get started. The main weaknesses are some content redundancy between sections (Quick Reference vs Platform Constraints) and a workflow that lacks explicit validation checkpoints between steps. The inline code examples (backend toggle, SQL connection, Pydantic models) are useful but could arguably live in the framework-specific guides to keep the overview leaner.
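The backend toggle pattern the review refers to can be sketched roughly as follows. This is a minimal stdlib-only sketch, not the skill's actual code: the `DATABRICKS_APP_PORT` environment variable and the function name are assumptions used for illustration.

```python
import os


def resolve_backend() -> str:
    """Pick a data backend based on the runtime environment.

    Hypothetical toggle: assumes the deployed Databricks Apps runtime
    exposes an environment variable (DATABRICKS_APP_PORT here) that is
    absent during local development. Both the variable and the returned
    backend names are illustrative, not the skill's actual contract.
    """
    if os.getenv("DATABRICKS_APP_PORT"):
        return "sql_warehouse"  # deployed: talk to a Databricks SQL warehouse
    return "sqlite"  # local development: fall back to a local database
```

Run locally (with the variable unset) this selects the fallback backend; inside a deployed app it would route to the warehouse path.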

Suggestions

- Add explicit validation/verification steps to the workflow (e.g., 'Test auth locally with `databricks auth token`' or 'Verify app starts locally before deploying')
- Consolidate the Quick Reference and Platform Constraints tables to eliminate redundancy (runtime and pre-installed frameworks appear in both)

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good use of tables and structured references, but includes some redundancy (e.g., platform constraints repeated in both the Quick Reference and Platform Constraints sections, framework info duplicated between the selection table and the detailed guides description). The Pydantic models example and backend toggle pattern add bulk that may not be essential for the overview file. | 2 / 3 |
| Actionability | Provides fully executable code examples for SQL warehouse connections, backend toggle patterns, and Pydantic models. The framework selection table includes exact app.yaml commands. The checklist, common issues table, and concrete code snippets make this highly actionable and copy-paste ready. | 3 / 3 |
| Workflow Clarity | The workflow section provides a clear decision tree for routing to the right guide, and the checklist is helpful. However, there are no explicit validation checkpoints or feedback loops in the main workflow — no 'verify auth works before proceeding' or 'test locally before deploying' steps. For a multi-step process involving deployment and resource configuration, this is a gap. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a concise overview that clearly signals six detailed guide files, each with descriptive summaries and keywords. References are one level deep, well organized by topic, and include both internal guides and external documentation links. The navigation between sections is intuitive. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
