Manage Lakebase Postgres Autoscaling projects, branches, and endpoints via Databricks CLI. Use when asked to create, configure, or manage Lakebase Postgres databases, projects, branches, computes, or endpoints.
FIRST: Use the parent databricks-core skill for CLI basics, authentication, and profile selection.
Lakebase is Databricks' serverless Postgres-compatible database (similar to Neon). It provides fully managed OLTP storage with autoscaling, branching, and scale-to-zero.
Manage Lakebase Postgres projects, branches, endpoints, and databases via databricks postgres CLI commands.
```
Project (top-level container)
└── Branch (isolated database environment, copy-on-write)
    ├── Endpoint (read-write or read-only)
    ├── Database (standard Postgres DB)
    └── Role (Postgres role)
```

Every project is auto-provisioned with a production branch and a primary read-write endpoint. Branch states: READY, ARCHIVED. Endpoint types: ENDPOINT_TYPE_READ_WRITE, ENDPOINT_TYPE_READ_ONLY (read replica). The default database is databricks_postgres. For role spec fields, run databricks postgres create-role -h.

Resource name formats:

| Resource | Format |
|---|---|
| Project | projects/{project_id} |
| Branch | projects/{project_id}/branches/{branch_id} |
| Endpoint | projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| Database | projects/{project_id}/branches/{branch_id}/databases/{database_id} |
All IDs: 1-63 characters, start with lowercase letter, lowercase letters/numbers/hyphens only (RFC 1123).
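A quick local sanity check that mirrors these rules (the sample ID is a placeholder):

```
# Validate a candidate ID: 1-63 chars, starts with a lowercase letter,
# lowercase letters/digits/hyphens only, per the rules above.
echo "my-project-01" | grep -Eq '^[a-z][a-z0-9-]{0,62}$' && echo valid || echo invalid
```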
Note: "Lakebase" is the product name; the CLI command group is
postgres. All commands usedatabricks postgres ....
Do NOT guess command syntax. Discover available commands and their usage dynamically:
```
# List all postgres subcommands
databricks postgres -h

# Get detailed usage for any subcommand (flags, args, JSON fields)
databricks postgres <subcommand> -h
```

Run databricks postgres -h before constructing any command, and databricks postgres <subcommand> -h to discover the exact flags, positional arguments, and JSON spec fields for that subcommand.
Do NOT list existing projects before creating one; create it directly:
```
databricks postgres create-project <PROJECT_ID> \
  --json '{"spec": {"display_name": "<DISPLAY_NAME>"}}' \
  --profile <PROFILE>
```

This auto-provisions a production branch and a primary read-write endpoint (1 CU min/max, scale-to-zero). Add --no-wait to return immediately. Run databricks postgres create-project -h for all available spec fields (e.g. pg_version).
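For example, pinning a Postgres version at creation might look like this; the pg_version field name comes from create-project -h as noted above, but its exact type and accepted values should be confirmed there first:

```
# Illustrative only: confirm pg_version's type and accepted values with
# `databricks postgres create-project -h` before using.
databricks postgres create-project <PROJECT_ID> \
  --json '{"spec": {"display_name": "<DISPLAY_NAME>", "pg_version": 16}}' \
  --profile <PROFILE>
```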
After creation, verify the auto-provisioned resources:

```
databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>
databricks postgres list-endpoints projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
```

Endpoints use compute units (CU) for autoscaling. Configure min/max CU via create-endpoint or update-endpoint. Run databricks postgres create-endpoint -h to see all spec fields.
Scale-to-zero is enabled by default. When idle, compute scales down to zero; it resumes in seconds on next connection.
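A sketch of adjusting the CU range on an existing endpoint; the spec field names shown are assumptions, so verify them with databricks postgres update-endpoint -h first:

```
# Hypothetical field names (autoscaling_limit_min_cu / _max_cu); confirm the
# actual spec fields with `databricks postgres update-endpoint -h`.
databricks postgres update-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID>/endpoints/<ENDPOINT_ID> \
  --json '{"spec": {"autoscaling_limit_min_cu": 1, "autoscaling_limit_max_cu": 4}}' \
  --profile <PROFILE>
```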
Branches are copy-on-write snapshots of an existing branch. Use them for experimentation: testing schema migrations, trying queries, or previewing data changes, all without affecting production.
```
databricks postgres create-branch projects/<PROJECT_ID> <BRANCH_ID> \
  --json '{
    "spec": {
      "source_branch": "projects/<PROJECT_ID>/branches/<SOURCE_BRANCH_ID>",
      "no_expiry": true
    }
  }' --profile <PROFILE>
```

Branches require an expiration policy: use "no_expiry": true for permanent branches.
When done experimenting, delete the branch. Protected branches must be unprotected first: use update-branch to set spec.is_protected to false, then delete:
```
# Step 1 — unprotect
databricks postgres update-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
  --json '{"spec": {"is_protected": false}}' --profile <PROFILE>

# Step 2 — delete (run -h to confirm positional arg format for your CLI version)
databricks postgres delete-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
  --profile <PROFILE>
```

Never delete the production branch — it is the authoritative branch auto-provisioned at project creation.
After creating a Lakebase project, scaffold a Databricks App connected to it.
Step 1 — Discover branch name (use .name from a READY branch):
```
databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>
```
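If the CLI emits JSON, a filter along these lines can pull the name; the .branches[], .name, and .status field paths are assumptions, so inspect the actual output first:

```
# Sketch: extract the full resource name of a READY branch. The JSON shape
# (.branches[].name / .branches[].status) is assumed; verify against your
# CLI version's actual output.
databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE> \
  | jq -r '.branches[] | select(.status == "READY") | .name'
```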
Step 2 — Discover database name (use .name from the desired database; <BRANCH_ID> is the branch ID, not the full resource name):

```
databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
```

Step 3 — Scaffold the app with the lakebase feature:
```
databricks apps init --name <APP_NAME> \
  --features lakebase \
  --set "lakebase.postgres.branch=<BRANCH_NAME>" \
  --set "lakebase.postgres.database=<DATABASE_NAME>" \
  --run none --profile <PROFILE>
```

Where <BRANCH_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>) and <DATABASE_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>/databases/<DB_ID>).
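A concrete wiring sketch with placeholder names (my-proj, my-app), showing that the database resource name nests under the branch resource name:

```
# Both values are FULL resource names, discovered in Steps 1-2.
BRANCH_NAME="projects/my-proj/branches/production"
DATABASE_NAME="${BRANCH_NAME}/databases/databricks_postgres"
databricks apps init --name my-app --features lakebase \
  --set "lakebase.postgres.branch=${BRANCH_NAME}" \
  --set "lakebase.postgres.database=${DATABASE_NAME}" \
  --run none --profile <PROFILE>
```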
For the full app development workflow, use the databricks-apps skill.
When a Lakebase database is used by a deployed Databricks App, the app's Service Principal has CAN_CONNECT_AND_CREATE permission, which means it can create new objects but cannot access any existing schemas or tables (including public). The SP must create the schema itself to become its owner.
ALWAYS deploy the app before running it locally. This is the #1 source of Lakebase permission errors.
When deployed, the app's Service Principal runs the schema initialization SQL (e.g. CREATE SCHEMA IF NOT EXISTS app_data), creating the schema and tables — and becoming their owner. Only the owner (or a superuser) can access those objects.
If you run locally first, your personal credentials create the schema and become the owner. The deployed Service Principal then cannot access it — even though it has CAN_CONNECT_AND_CREATE — because it didn't create it and cannot access existing schemas.
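For reference, a minimal sketch of the kind of idempotent initialization SQL the deployed SP might run on startup. The app_data schema name comes from the example above; the table and the DATABASE_URL variable are illustrative placeholders:

```
# Run as the app's Service Principal so the SP becomes the schema owner.
# DATABASE_URL, app_data, and the items table are placeholders.
psql "$DATABASE_URL" <<'SQL'
CREATE SCHEMA IF NOT EXISTS app_data;
CREATE TABLE IF NOT EXISTS app_data.items (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name TEXT NOT NULL
);
SQL
```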
Correct workflow:

1. Deploy first: databricks apps deploy <APP_NAME> --profile <PROFILE>. Verify with databricks apps get <APP_NAME> --profile <PROFILE> that the app is deployed before proceeding.
2. For local development afterward, grant databricks_superuser to your identity via the Lakebase UI. Project creators already have sufficient access.

Note: Project creators already have access to SP-owned schemas. Other team members need databricks_superuser (grants full DML but not DDL). If you need to alter the schema during local development, redeploy the app to apply DDL changes.
If you already ran locally first and hit permission denied after deploying: the schema is owned by your personal credentials, not the SP. ⚠️ Do NOT drop the schema without asking the user first; dropping it (DROP SCHEMA <name> CASCADE) deletes all data in that schema. Ask the user how they'd like to proceed.
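One non-destructive option to offer, sketched under the assumption that the SP's Postgres role name is known and that your role has privileges to reassign ownership:

```
# Hypothetical remediation: hand the schema to the app's Service Principal
# role instead of dropping it. DATABASE_URL, app_data, and <APP_SP_ROLE>
# are placeholders.
psql "$DATABASE_URL" -c 'ALTER SCHEMA app_data OWNER TO "<APP_SP_ROLE>";'
```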
Connect a Postgres client: get the connection string from the endpoint, then connect with psql, DBeaver, or any standard Postgres client.
```
databricks postgres get-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID>/endpoints/<ENDPOINT_ID> --profile <PROFILE>
```
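Assuming the endpoint response exposes a host (the exact output fields vary by CLI version, so inspect the response), a connection might look like:

```
# <ENDPOINT_HOST> and <USER> come from the get-endpoint output and your
# identity; the field names are assumptions. sslmode=require is typical
# for managed Postgres.
psql "host=<ENDPOINT_HOST> dbname=databricks_postgres user=<USER> sslmode=require"
```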
Manage roles and permissions: create Postgres roles and grant access to databases or schemas.

```
databricks postgres create-role -h  # discover role spec fields
```

Add a read-only endpoint: create a read replica for analytics or reporting workloads to avoid contention on the primary read-write endpoint.
```
databricks postgres create-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID> <ENDPOINT_ID> \
  --json '{"spec": {"type": "ENDPOINT_TYPE_READ_ONLY"}}' --profile <PROFILE>
```

| Error | Solution |
|---|---|
| cannot configure default credentials | Use the --profile flag or authenticate first |
| PERMISSION_DENIED | Check workspace permissions |
| permission denied for schema <name> | Schema owned by another role. Deploy the app first so the SP creates and owns the schema. See Schema Permissions for Deployed Apps above |
| Protected branch cannot be deleted | update-branch to set spec.is_protected to false first |
| Long-running operation timeout | Use --no-wait and poll with get-operation |
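A sketch of that last pattern; the operation identifier returned by --no-wait and the argument format for get-operation are assumptions, so confirm both with -h:

```
# Start a long-running create without blocking, then poll the operation.
# <OPERATION_NAME> format is assumed; confirm with
# `databricks postgres get-operation -h`.
databricks postgres create-project <PROJECT_ID> \
  --json '{"spec": {"display_name": "<DISPLAY_NAME>"}}' \
  --no-wait --profile <PROFILE>
databricks postgres get-operation <OPERATION_NAME> --profile <PROFILE>
```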