
databricks-app-python

Builds Python-based Databricks applications using Dash, Streamlit, Gradio, Flask, FastAPI, or Reflex. Handles OAuth authorization (app and user auth), app resources, SQL warehouse and Lakebase connectivity, model serving integration, foundation model APIs, LLM integration, and deployment. Use when building Python web apps, dashboards, ML demos, or REST APIs for Databricks, or when the user mentions Streamlit, Dash, Gradio, Flask, FastAPI, Reflex, or Databricks app.
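The SQL warehouse connectivity the description mentions can be sketched roughly as follows. This is a minimal illustration assuming the databricks-sql-connector package and the usual Databricks connection environment variables; it is not code taken from the skill itself.

```python
import os

def query_warehouse(statement: str):
    """Run a statement against a Databricks SQL warehouse and return all rows.

    Sketch only: assumes databricks-sql-connector is installed and that
    DATABRICKS_SERVER_HOSTNAME / DATABRICKS_HTTP_PATH / DATABRICKS_TOKEN
    are set (names follow common connector usage, not the skill's docs).
    """
    # Deferred import so the module loads even without the connector installed.
    from databricks import sql

    with sql.connect(
        server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cursor:
            cursor.execute(statement)
            return cursor.fetchall()
```

In a deployed app, the token would typically come from the app's OAuth flow rather than a static env var.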


Quality

86%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Advisory

Suggest reviewing before use


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope, lists concrete capabilities, and provides explicit trigger guidance. It names specific frameworks and Databricks-specific features, making it highly distinguishable from generic web development or data engineering skills. The 'Use when...' clause covers both task-based triggers and keyword-based triggers effectively.

Dimension scores

Specificity (3 / 3): Lists multiple specific, concrete actions and technologies: building apps with named frameworks (Dash, Streamlit, Gradio, Flask, FastAPI, Reflex), OAuth authorization, SQL warehouse/Lakebase connectivity, model serving integration, foundation model APIs, LLM integration, and deployment.

Completeness (3 / 3): Clearly answers both 'what' (builds Python-based Databricks applications with specific frameworks, handles OAuth, connectivity, deployment) and 'when' (explicit 'Use when...' clause covering building web apps, dashboards, ML demos, REST APIs, or mentioning specific frameworks/Databricks app).

Trigger Term Quality (3 / 3): Excellent coverage of natural terms users would say: names all six frameworks explicitly, mentions 'Python web apps', 'dashboards', 'ML demos', 'REST APIs', 'Databricks app', and specific technical terms like 'OAuth', 'SQL warehouse', 'Lakebase', 'model serving'. These are terms users would naturally use when requesting this kind of work.

Distinctiveness / Conflict Risk (3 / 3): Highly distinctive with a clear niche: the combination of the Databricks platform, specific Python web frameworks, and Databricks-specific features (Lakebase, SQL warehouse, app resources) makes this unlikely to conflict with generic Python or generic web app skills.

Total: 12 / 12 (Passed)

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill that serves effectively as a hub document, routing users to detailed sub-guides with clear signaling and keywords. The actionability is strong with executable code examples and concrete configuration details. The main weaknesses are some content redundancy between sections and the lack of explicit validation/verification steps in the deployment workflow.

Suggestions

- Add explicit validation checkpoints to the workflow, e.g., 'Test locally with USE_MOCK_BACKEND=true before deploying' and 'After deploy, verify with: databricks apps logs <name>'.

- Consolidate the Quick Reference and Platform Constraints tables to eliminate duplication of runtime, compute, and pre-installed framework information.
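The post-deploy check in the first suggestion could be wrapped in a small helper like this. Note that the `databricks apps logs` subcommand is quoted from the suggestion above rather than verified against the CLI reference, so treat the exact subcommand as an assumption.

```python
import subprocess

def fetch_app_logs(app_name: str) -> str:
    """Return recent logs for a deployed Databricks app.

    Assumption: the `databricks apps logs` subcommand exists as suggested
    above; confirm it against your installed CLI version before relying on it.
    """
    result = subprocess.run(
        ["databricks", "apps", "logs", app_name],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Calling this right after `databricks apps deploy` gives the workflow the explicit verification step the review asks for.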

Dimension scores

Conciseness (2 / 3): The skill is mostly efficient, with good use of tables and structured references, but includes some redundancy (e.g., platform constraints repeated in both the Quick Reference and Platform Constraints sections, and framework info duplicated between the selection table and the detailed guides). The Pydantic models example and backend toggle pattern add bulk that may not be essential for the overview file.

Actionability (3 / 3): Provides fully executable code examples for SQL warehouse connections, backend toggle patterns, and Pydantic models. The framework selection table includes exact app.yaml commands. The checklist, common issues table, and concrete file structure are all immediately actionable.

Workflow Clarity (2 / 3): The workflow section provides a clear decision tree for routing to the right guide, and the checklist is helpful. However, the main workflow has no explicit validation checkpoints or feedback loops: no 'verify deployment succeeded' step and no 'test locally before deploying' guidance. For a skill involving deployment (a potentially destructive, complex operation), this gap caps the score at 2.

Progressive Disclosure (3 / 3): Excellent progressive disclosure, with a concise overview that clearly signals six detailed sub-guides, each with descriptive summaries and keywords. References are one level deep, well organized, and include both internal guides and external documentation. Navigation is intuitive, with the decision-tree workflow pointing to specific files.

Total: 10 / 12 (Passed)
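The 'backend toggle pattern' credited in the Actionability row can be sketched as an environment-variable switch between a mock and a real backend. `USE_MOCK_BACKEND`, `MockBackend`, and `get_backend` are illustrative names here, not the skill's actual API.

```python
import os

class MockBackend:
    """Returns canned rows so the UI can be exercised without a warehouse."""
    def run(self, statement: str):
        return [("mock_row", 1)]

class WarehouseBackend:
    """Placeholder for a real SQL warehouse client."""
    def run(self, statement: str):
        raise NotImplementedError("wire up a real warehouse connection here")

def get_backend():
    # Toggle via env var so local testing never touches a live warehouse.
    if os.getenv("USE_MOCK_BACKEND", "false").lower() == "true":
        return MockBackend()
    return WarehouseBackend()
```

Running locally with USE_MOCK_BACKEND=true, then leaving it unset for deployment, also satisfies the 'test locally before deploying' checkpoint suggested above.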

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
