Build and deploy Streamlit apps natively in Snowflake. Covers snowflake.yml scaffolding, Snowpark sessions, multi-page structure, Marketplace publishing as Native Apps, and caller's rights connections (v1.53.0+). Use when building data apps on Snowflake, deploying SiS, fixing package channel errors, authentication issues, cache key bugs, or path resolution errors.
Install with Tessl CLI:

```bash
npx tessl i github:jezweb/claude-skills --skill streamlit-snowflake91
```
Build and deploy Streamlit apps natively within Snowflake, including Marketplace publishing as Native Apps.
Copy the templates to your project:

```bash
# Create project directory
mkdir my-streamlit-app && cd my-streamlit-app

# Copy templates (Claude will provide these)
```

Update placeholders in `snowflake.yml`:

```yaml
definition_version: 2
entities:
  my_app:
    type: streamlit
    identifier: my_streamlit_app    # ← Your app name
    stage: my_app_stage             # ← Your stage name
    query_warehouse: my_warehouse   # ← Your warehouse
    main_file: streamlit_app.py
    pages_dir: pages/
    artifacts:
      - common/
      - environment.yml
```

```bash
# Deploy to Snowflake
snow streamlit deploy --replace

# Open in browser
snow streamlit deploy --replace --open
```

Use when:

Don't use when:
Snowflake offers two runtime options for Streamlit apps:
- **Warehouse runtime:** `environment.yml` with the Snowflake Anaconda Channel
- **Container runtime:** `requirements.txt` or `pyproject.toml` with PyPI packages

Container Runtime configuration:

```sql
CREATE STREAMLIT my_app
  FROM '@my_stage/app_folder'
  MAIN_FILE = 'streamlit_app.py'
  RUNTIME_NAME = 'SYSTEM$ST_CONTAINER_RUNTIME_PY3_11'
  COMPUTE_POOL = my_compute_pool
  QUERY_WAREHOUSE = my_warehouse;
```

Key difference: the Container Runtime allows external PyPI packages; it is not limited to the Snowflake Anaconda Channel.
See: Runtime Environments
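Under the Container Runtime, dependencies come from PyPI instead of the Anaconda channel. A minimal `requirements.txt` sketch (the package list and pin are illustrative, not prescribed by the docs):

```text
streamlit==1.49.0
pandas
plotly
```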
Streamlit apps support two privilege models: owner's rights (the default) and caller's rights (v1.53.0+).
Security implications:
Use caller's rights when:
See Caller's Rights Connection pattern below.
```
my-streamlit-app/
├── snowflake.yml        # Project definition (required)
├── environment.yml      # Package dependencies (required)
├── streamlit_app.py     # Main entry point
├── pages/               # Multi-page apps
│   └── data_explorer.py
├── common/              # Shared utilities
│   └── utils.py
└── .gitignore
```

```python
import streamlit as st

# Get Snowpark session (native SiS connection)
conn = st.connection("snowflake")
session = conn.session()

# Query data
df = session.sql("SELECT * FROM my_table LIMIT 100").to_pandas()
st.dataframe(df)
```

Execute queries with the viewer's privileges instead of the owner's privileges:
```python
import streamlit as st

# Use caller's rights for data isolation
conn = st.connection("snowflake", type="callers_rights")

# Each viewer sees only data they have permission to access
df = conn.query("SELECT * FROM sensitive_customer_data")
st.dataframe(df)
```

Security comparison:

| Connection Type | Privilege Model | Use Case |
| --- | --- | --- |
| `type="snowflake"` (default) | Owner's rights | Internal tools, trusted users |
| `type="callers_rights"` (v1.53.0+) | Caller's rights | Public apps, data isolation |
Source: Streamlit v1.53.0 Release
```python
@st.cache_data(ttl=600)  # Cache for 10 minutes
def load_data(query: str):
    conn = st.connection("snowflake")
    return conn.session().sql(query).to_pandas()

# Use cached function
df = load_data("SELECT * FROM large_table")
```

Warning: In Streamlit v1.22.0-1.53.0, the `params` argument is not included in the cache key. Use `ttl=0` to disable caching when using parametrized queries, or upgrade to 1.54.0+ when available (Issue #13644).
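Another workaround is to route query parameters through the cached function's own arguments, so they become part of the cache key. The shape of the fix, sketched with the standard library's `functools.lru_cache` standing in for `@st.cache_data` (the argument-hashing behavior being illustrated is the same; the query and names are hypothetical):

```python
from functools import lru_cache

# Stand-in for @st.cache_data: function arguments are hashed into the
# cache key, so different parameter values get different cache entries.
@lru_cache(maxsize=None)
def load_orders(customer_id: int) -> str:
    # In the real app this would run a parametrized Snowpark query, e.g.
    #   conn.session().sql(
    #       "SELECT * FROM orders WHERE customer_id = ?", params=[customer_id]
    #   ).to_pandas()
    return f"orders for customer {customer_id}"

print(load_orders(1))  # computed, then cached under key (1,)
print(load_orders(1))  # cache hit: same entry
print(load_orders(2))  # different argument, different cache entry
```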
When using Snowpark DataFrames with charts or tables, select only required columns to avoid fetching unnecessary data:
```python
# ❌ Fetches all 50 columns even though the chart only needs 2
df = session.table("wide_table")  # 50 columns
st.line_chart(df, x="date", y="value")

# ✅ Fetch only the needed columns for better performance
df = session.table("wide_table").select("date", "value")
st.line_chart(df, x="date", y="value")  # 5-10x faster for wide tables
```

Why it matters: `st.dataframe()` and chart components call `df.to_pandas()`, which evaluates ALL columns even if the visualization only needs a few. Pre-selecting columns reduces data transfer and improves performance (Issue #11701).
`environment.yml` (required format):

```yaml
name: sf_env
channels:
  - snowflake  # REQUIRED - only supported channel
dependencies:
  - streamlit=1.35.0  # Explicit version (default is old 1.22.0)
  - pandas
  - plotly
  - altair=4.0  # Version 4.0 supported in SiS
  - snowflake-snowpark-python
```

This skill prevents 14 documented errors:
| Error | Cause | Prevention |
| --- | --- | --- |
| `PackageNotFoundError` | Using conda-forge or external channel | Use `channels: - snowflake` (or Container Runtime for PyPI) |
| Missing Streamlit features | Default version 1.22.0 | Explicitly set `streamlit=1.35.0` (or use Container Runtime for 1.49+) |
| `ROOT_LOCATION` deprecated | Old CLI syntax | Use Snowflake CLI 3.14.0+ with `FROM source_location` |
| Auth failures (2026+) | Password-only authentication | Use key-pair or OAuth (see references/authentication.md) |
| File upload fails | File >200MB | Keep uploads under the 200MB limit |
| DataFrame display fails | Data >32MB | Paginate or limit data before display |
| `page_title` not supported | SiS limitation | Don't use `page_title`, `page_icon`, or `menu_items` in `st.set_page_config()` |
| Custom component error | SiS limitation | Only components without external service calls work |
| `_snowflake` module not found | Container Runtime migration | Use `from snowflake.snowpark.context import get_active_session` instead of `from _snowflake import get_active_session` (Migration Guide) |
| Cached query returns wrong data with different params | `params` not in cache key (v1.22.0-1.53.0) | Use `ttl=0` to disable caching for parametrized queries, or upgrade to 1.54.0+ when available (Issue #13644) |
| Invalid `connection_name 'default'` with kwargs only | Missing secrets.toml or connections.toml | Create a minimal `.streamlit/secrets.toml` with a `[connections.snowflake]` section (Issue #9016) |
| Native App upgrades unexpectedly | Implicit default Streamlit version (BCR-1857) | Explicitly set `streamlit=1.35.0` in environment.yml to prevent automatic version changes |
| File paths fail in Container Runtime subdirectories | Some commands use entrypoint-relative paths | Use pathlib to resolve absolute paths: `Path(__file__).parent / "assets/logo.png"` (Runtime Docs) |
| Slow performance with wide Snowpark DataFrames | `st.dataframe()` fetches all columns even if unused | Pre-select only needed columns: `df.select("col1", "col2")` before passing to Streamlit (Issue #11701) |
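The path-resolution fix above can be sketched as a small helper; the page and asset names here are hypothetical:

```python
from pathlib import Path

def asset_path(entry_file: str, relative: str) -> Path:
    """Return an absolute path to a bundled asset, anchored at the source
    file that references it (pass __file__ in real pages). This avoids
    entrypoint-relative resolution in the Container Runtime."""
    return Path(entry_file).resolve().parent / relative

# In pages/data_explorer.py you would call:
#   logo = asset_path(__file__, "assets/logo.png")
logo = asset_path("pages/data_explorer.py", "assets/logo.png")
print(logo.is_absolute())  # True
```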
```bash
# Deploy and replace existing
snow streamlit deploy --replace

# Deploy and open in browser
snow streamlit deploy --replace --open

# Deploy specific entity (if multiple in snowflake.yml)
snow streamlit deploy my_app --replace
```

See references/ci-cd.md for a GitHub Actions workflow template.
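A minimal GitHub Actions sketch of the deploy step (the secret names, env variables, and CLI install method are assumptions; prefer the maintained template in references/ci-cd.md):

```yaml
name: deploy-streamlit
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snowflake CLI
        run: pip install snowflake-cli
      - name: Deploy app
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          SNOWFLAKE_PRIVATE_KEY_RAW: ${{ secrets.SNOWFLAKE_PRIVATE_KEY_RAW }}
        run: snow streamlit deploy --replace
```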
To publish your Streamlit app to Snowflake Marketplace, start from the `templates-native-app/` templates. See `templates-native-app/README.md` for the complete workflow.
```
my-native-app/
├── manifest.yml         # Native App manifest
├── setup.sql            # Installation script
├── streamlit/
│   ├── environment.yml
│   ├── streamlit_app.py
│   └── pages/
└── README.md
```

Only packages from the Snowflake Anaconda Channel are available:
```sql
-- Query available packages
SELECT * FROM information_schema.packages
WHERE language = 'python'
ORDER BY package_name;

-- Search for specific package
SELECT * FROM information_schema.packages
WHERE language = 'python'
AND package_name ILIKE '%plotly%';
```

Common available packages:
Not available:
See: Snowpark Python Packages Explorer
SiS limitations:

- 32MB data display limit (`st.dataframe`)
- 200MB file upload limit (`st.file_uploader`)
- `.so` files: native compiled libraries unsupported
- `st.set_page_config`: `page_title`, `page_icon`, `menu_items` not supported
- `st.bokeh_chart`: not supported
- `eval()` blocked: CSP prevents unsafe JavaScript execution
- `st.cache_data` and `st.cache_resource` don't persist across users

Password-only authentication is being deprecated:
| Milestone | Date | Requirement |
|---|---|---|
| Milestone 1 | Sept 2025 - Jan 2026 | MFA required for Snowsight users |
| Milestone 2 | May - July 2026 | All new users must use MFA |
| Milestone 3 | Aug - Oct 2026 | All users must use MFA or key-pair/OAuth |
Recommended authentication methods:
See references/authentication.md for implementation patterns.
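A sketch of a key-pair connection config for local development (the account, user, and key path are placeholders; `SNOWFLAKE_JWT` is the connector's authenticator value for key-pair auth):

```toml
[connections.snowflake]
account = "my_account"
user = "my_user"
authenticator = "SNOWFLAKE_JWT"
private_key_file = "/path/to/rsa_key.p8"
```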