databricks-lakebase

Manage Lakebase Postgres Autoscaling projects, branches, and endpoints via Databricks CLI. Use when asked to create, configure, or manage Lakebase Postgres databases, projects, branches, computes, or endpoints.

Lakebase Postgres Autoscaling

FIRST: Use the parent databricks-core skill for CLI basics, authentication, and profile selection.

Lakebase is Databricks' serverless Postgres-compatible database (similar to Neon). It provides fully managed OLTP storage with autoscaling, branching, and scale-to-zero.

Manage Lakebase Postgres projects, branches, endpoints, and databases via databricks postgres CLI commands.

Resource Hierarchy

Project (top-level container)
  └── Branch (isolated database environment, copy-on-write)
        ├── Endpoint (read-write or read-only)
        ├── Database (standard Postgres DB)
        └── Role (Postgres role)
  • Project: Top-level container. Creating one auto-provisions a production branch and a primary read-write endpoint.
  • Branch: Isolated database environment sharing storage with parent (copy-on-write). States: READY, ARCHIVED.
  • Endpoint (called Compute in the Lakebase UI): Compute resource powering a branch. Types: ENDPOINT_TYPE_READ_WRITE, ENDPOINT_TYPE_READ_ONLY (read replica).
  • Database: Standard Postgres database within a branch. Default: databricks_postgres.
  • Role: Postgres role within a branch. Discover role-management commands and spec fields via databricks postgres create-role -h.

Resource Name Formats

| Resource | Format |
| --- | --- |
| Project | projects/{project_id} |
| Branch | projects/{project_id}/branches/{branch_id} |
| Endpoint | projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| Database | projects/{project_id}/branches/{branch_id}/databases/{database_id} |

All IDs: 1-63 characters, start with lowercase letter, lowercase letters/numbers/hyphens only (RFC 1123).
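The ID rules above can be checked locally before calling the CLI. This is a simplified sketch: it encodes the stated rules (1-63 chars, lowercase-letter start, lowercase letters/numbers/hyphens), but does not reject a trailing hyphen, which strict RFC 1123 labels disallow.

```shell
# Simplified local check of the ID rules above. The sample ID is a placeholder.
id="my-project-01"
if printf '%s' "$id" | grep -Eq '^[a-z][a-z0-9-]{0,62}$'; then
  echo "valid"     # printed for my-project-01
else
  echo "invalid"   # e.g. for IDs starting with a digit or containing uppercase
fi
```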

CLI Discovery — ALWAYS Do This First

Note: "Lakebase" is the product name; the CLI command group is postgres. All commands use databricks postgres ....

Do NOT guess command syntax. Discover available commands and their usage dynamically:

# List all postgres subcommands
databricks postgres -h

# Get detailed usage for any subcommand (flags, args, JSON fields)
databricks postgres <subcommand> -h

Run databricks postgres -h before constructing any command. Run databricks postgres <subcommand> -h to discover exact flags, positional arguments, and JSON spec fields for that subcommand.

Create a Project

Do NOT list projects before creating.

databricks postgres create-project <PROJECT_ID> \
  --json '{"spec": {"display_name": "<DISPLAY_NAME>"}}' \
  --profile <PROFILE>
  • Auto-creates: production branch + primary read-write endpoint (1 CU min/max, scale-to-zero)
  • Long-running operation; the CLI waits for completion by default. Use --no-wait to return immediately.
  • Run databricks postgres create-project -h for all available spec fields (e.g. pg_version).

After creation, verify the auto-provisioned resources:

databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>
databricks postgres list-endpoints projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>

Autoscaling

Endpoints use compute units (CU) for autoscaling. Configure min/max CU via create-endpoint or update-endpoint. Run databricks postgres create-endpoint -h to see all spec fields.

Scale-to-zero is enabled by default. When idle, compute scales down to zero; it resumes in seconds on next connection.
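As a sketch, min/max CU might be set at endpoint creation like this. The spec field names below are assumptions (they follow Neon-style naming), so confirm the exact fields with databricks postgres create-endpoint -h before running:

```shell
# ASSUMPTION: the autoscaling field names are guesses — verify with:
#   databricks postgres create-endpoint -h
databricks postgres create-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID> <ENDPOINT_ID> \
  --json '{
    "spec": {
      "type": "ENDPOINT_TYPE_READ_WRITE",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 4
    }
  }' --profile <PROFILE>
```

The same spec fields should apply to update-endpoint when resizing an existing endpoint.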

Branches

Branches are copy-on-write snapshots of an existing branch. Use them for experimentation: testing schema migrations, trying queries, or previewing data changes -- without affecting production.

databricks postgres create-branch projects/<PROJECT_ID> <BRANCH_ID> \
  --json '{
    "spec": {
      "source_branch": "projects/<PROJECT_ID>/branches/<SOURCE_BRANCH_ID>",
      "no_expiry": true
    }
  }' --profile <PROFILE>

Branches require an expiration policy: use "no_expiry": true for permanent branches.

When done experimenting, delete the branch. Protected branches must be unprotected first -- use update-branch to set spec.is_protected to false, then delete:

# Step 1 — unprotect
databricks postgres update-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
  --json '{"spec": {"is_protected": false}}' --profile <PROFILE>

# Step 2 — delete (run -h to confirm positional arg format for your CLI version)
databricks postgres delete-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
  --profile <PROFILE>

Never delete the production branch — it is the authoritative branch auto-provisioned at project creation.

What's Next

Build a Databricks App

After creating a Lakebase project, scaffold a Databricks App connected to it.

Step 1 — Discover branch name (use .name from a READY branch):

databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>

Step 2 — Discover database name (use .name from the desired database; <BRANCH_ID> is the branch ID, not the full resource name):

databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>

Step 3 — Scaffold the app with the lakebase feature:

databricks apps init --name <APP_NAME> \
  --features lakebase \
  --set "lakebase.postgres.branch=<BRANCH_NAME>" \
  --set "lakebase.postgres.database=<DATABASE_NAME>" \
  --run none --profile <PROFILE>

Where <BRANCH_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>) and <DATABASE_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>/databases/<DB_ID>).
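The full resource names can be composed from the IDs discovered in steps 1-2; a minimal sketch with placeholder IDs:

```shell
# Placeholder IDs — substitute the values returned by list-branches / list-databases.
PROJECT_ID="my-project"
BRANCH_ID="production"
DB_ID="databricks_postgres"

BRANCH_NAME="projects/${PROJECT_ID}/branches/${BRANCH_ID}"
DATABASE_NAME="${BRANCH_NAME}/databases/${DB_ID}"

echo "$BRANCH_NAME"     # projects/my-project/branches/production
echo "$DATABASE_NAME"   # projects/my-project/branches/production/databases/databricks_postgres
```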

For the full app development workflow, use the databricks-apps skill.

Schema Permissions for Deployed Apps

When a Lakebase database is used by a deployed Databricks App, the app's Service Principal has CAN_CONNECT_AND_CREATE permission, which means it can create new objects but cannot access any existing schemas or tables (including public). The SP must create the schema itself to become its owner.

ALWAYS deploy the app before running it locally. This is the #1 source of Lakebase permission errors.

When deployed, the app's Service Principal runs the schema initialization SQL (e.g. CREATE SCHEMA IF NOT EXISTS app_data), creating the schema and tables — and becoming their owner. Only the owner (or a superuser) can access those objects.

If you run locally first, your personal credentials create the schema and become the owner. The deployed Service Principal then cannot access it — even though it has CAN_CONNECT_AND_CREATE — because it didn't create it and cannot access existing schemas.

Correct workflow:

  1. Deploy first: databricks apps deploy <APP_NAME> --profile <PROFILE> — verify with databricks apps get <APP_NAME> --profile <PROFILE> that the app is deployed before proceeding
  2. Grant local access (if needed): if you're not the project creator, assign databricks_superuser to your identity via the Lakebase UI. Project creators already have sufficient access.
  3. Develop locally: your credentials get DML access (SELECT/INSERT/UPDATE/DELETE) to SP-owned schemas

Note: Project creators already have access to SP-owned schemas. Other team members need databricks_superuser (grants full DML but not DDL). If you need to alter the schema during local development, redeploy the app to apply DDL changes.

If you already ran locally first and hit permission denied after deploying: the schema is owned by your personal credentials, not the SP. ⚠️ Do NOT drop the schema without asking the user first — dropping it (DROP SCHEMA <name> CASCADE) deletes all data in that schema. Ask the user how they'd like to proceed:

  • Option A (destructive): Drop the schema and redeploy so the SP recreates it. Only safe if the schema has no valuable data.
  • Option B (manual): The user can reassign ownership or manually grant the SP access, preserving existing data.
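Before choosing an option, it can help to confirm who actually owns the schema. A sketch using standard Postgres catalogs (the connection string is a placeholder, and app_data is an example schema name):

```shell
# Placeholder connection string; 'app_data' is an example schema name.
# pg_namespace.nspowner + pg_get_userbyid() resolve the owning role.
psql "$CONN_STRING" -c \
  "SELECT nspname, pg_get_userbyid(nspowner) AS owner
     FROM pg_namespace
    WHERE nspname = 'app_data';"
```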

Other Workflows

Connect a Postgres client

Get the connection string from the endpoint, then connect with psql, DBeaver, or any standard Postgres client.

databricks postgres get-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID>/endpoints/<ENDPOINT_ID> --profile <PROFILE>
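A hedged sketch of wiring that output into psql. The JSON field path for the host and the user are assumptions — inspect the real get-endpoint output for your CLI version before relying on this:

```shell
# ASSUMPTION: '.host' is a guess at the field path — check the actual JSON first.
HOST=$(databricks postgres get-endpoint \
  projects/<PROJECT_ID>/branches/<BRANCH_ID>/endpoints/<ENDPOINT_ID> \
  --profile <PROFILE> --output json | jq -r '.host')

psql "host=$HOST dbname=databricks_postgres user=<YOUR_USER> sslmode=require"
```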

Manage roles and permissions

Create Postgres roles and grant access to databases or schemas.

databricks postgres create-role -h   # discover role spec fields

Add a read-only endpoint

Create a read replica for analytics or reporting workloads to avoid contention on the primary read-write endpoint.

databricks postgres create-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID> <ENDPOINT_ID> \
  --json '{"spec": {"type": "ENDPOINT_TYPE_READ_ONLY"}}' --profile <PROFILE>

Troubleshooting

| Error | Solution |
| --- | --- |
| cannot configure default credentials | Use the --profile flag or authenticate first |
| PERMISSION_DENIED | Check workspace permissions |
| permission denied for schema <name> | Schema owned by another role. Deploy the app first so the SP creates and owns the schema. See Schema Permissions for Deployed Apps above |
| Protected branch cannot be deleted | Run update-branch to set spec.is_protected to false first |
| Long-running operation timeout | Use --no-wait and poll with get-operation |
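For long-running operation timeouts, a sketch of the --no-wait + poll pattern. The operation name format is an assumption — the create command's --no-wait output should include it, and databricks postgres get-operation -h confirms the expected argument:

```shell
# Start the operation without blocking; note the operation name in the output.
databricks postgres create-project <PROJECT_ID> \
  --json '{"spec": {"display_name": "<DISPLAY_NAME>"}}' \
  --no-wait --profile <PROFILE>

# Poll until done. <OPERATION_NAME> is a placeholder — copy it from the
# --no-wait output; run get-operation -h to confirm the argument format.
databricks postgres get-operation <OPERATION_NAME> --profile <PROFILE>
```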
Repository: databricks/devhub
