
databricks-lakebase-autoscale

Patterns and best practices for Lakebase Autoscaling (next-gen managed PostgreSQL). Use when creating or managing Lakebase Autoscaling projects, configuring autoscaling compute or scale-to-zero, working with database branching for dev/test workflows, implementing reverse ETL via synced tables, or connecting applications to Lakebase with OAuth credentials.


Quality

86%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the product domain (Lakebase Autoscaling / managed PostgreSQL), lists specific capabilities, and provides explicit trigger guidance via a well-structured 'Use when...' clause. The description is concise yet comprehensive, covering multiple distinct use cases with natural trigger terms that would help Claude accurately select this skill.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: creating/managing projects, configuring autoscaling compute, scale-to-zero, database branching for dev/test, reverse ETL via synced tables, and connecting with OAuth credentials. | 3 / 3 |
| Completeness | Clearly answers both 'what' (patterns and best practices for Lakebase Autoscaling managed PostgreSQL) and 'when' with an explicit 'Use when...' clause listing five distinct trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'Lakebase', 'Autoscaling', 'PostgreSQL', 'scale-to-zero', 'database branching', 'reverse ETL', 'synced tables', 'OAuth credentials', 'dev/test workflows'. Good coverage of domain-specific terms a user working with this product would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with product-specific terminology ('Lakebase Autoscaling', 'synced tables', 'scale-to-zero') that clearly carves out a niche unlikely to conflict with generic database or PostgreSQL skills. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with excellent progressive disclosure and concrete, executable code examples covering all major Lakebase Autoscaling operations. Its main weaknesses are moderate verbosity (some redundant sections and explanations) and a lack of explicit multi-step workflow sequencing with validation checkpoints. The troubleshooting table and comparison table add significant practical value.

Suggestions

Add an explicit end-to-end workflow section (e.g., 'First Project Setup') with numbered steps, .wait() validation checkpoints, and error recovery guidance for the create→configure→connect flow.

Remove or consolidate the 'When to Use' and 'Overview' sections which largely duplicate the opening paragraph and description; fold unique details into the feature table.
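The suggested create → wait → generate-credential → connect sequence could be sketched roughly as below. Note the hedging: the databricks-sdk class and method names shown in the comments (`WorkspaceClient`, `w.database.create_database_instance`, `generate_database_credential`, and the `read_write_dns` attribute) are assumptions and should be checked against the SDK reference before use; only the DSN helper is concrete, and the default database name `databricks_postgres` is likewise an assumption.

```python
def build_lakebase_dsn(host: str, user: str,
                       dbname: str = "databricks_postgres") -> str:
    """Build a libpq-style DSN for a Lakebase endpoint.

    Lakebase endpoints require TLS, so sslmode=require is always set.
    The default database name here is an assumption; substitute your own.
    """
    return f"host={host} dbname={dbname} user={user} sslmode=require"


# The SDK portion of the flow, kept as comments because the exact
# databricks-sdk signatures are assumptions, not verified here:
#
#   from databricks.sdk import WorkspaceClient
#   w = WorkspaceClient()
#   instance = w.database.create_database_instance(...)   # step 1: create project
#   # step 2: validation checkpoint — block until the instance is ready
#   # (e.g. a .wait() on the returned operation, if the SDK exposes one)
#   cred = w.database.generate_database_credential(...)   # step 3: short-lived OAuth token
#
#   import psycopg
#   conn = psycopg.connect(                               # step 4: connect
#       build_lakebase_dsn(instance.read_write_dns, "me@example.com"),
#       password=cred.token)                              # token used as password
```

Separating the pure DSN construction from the SDK calls keeps the retry/recovery logic testable even when no workspace is available.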

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good use of tables and code examples, but includes some unnecessary sections like 'When to Use' (which restates the description), the Overview paragraph that repeats the intro, and explanatory text Claude already knows. The 'Notes' section partially duplicates information from tables above. | 2 / 3 |
| Actionability | Excellent actionability with fully executable Python code examples for all major operations (create project, generate credentials, connect, create branch, resize compute), complete CLI commands, and specific SDK class/method names. Code is copy-paste ready with real import paths and parameter names. | 3 / 3 |
| Workflow Clarity | Individual operations are clear, but there's no explicit end-to-end workflow with validation checkpoints. The Quick Start shows project creation but doesn't sequence through the full setup (create project → wait → get endpoint → generate credential → connect). The '.wait()' pattern is mentioned in Notes but not consistently shown as a validation step in workflows. Missing feedback loops for error recovery in multi-step operations. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a clear overview in SKILL.md and well-signaled one-level-deep references to detailed files (projects.md, branches.md, computes.md, connection-patterns.md, reverse-etl.md). The main file provides enough to get started while pointing to specifics. Related skills are also clearly linked. | 3 / 3 |
| **Total** | | **10 / 12** |

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
