CLI for interacting with LangGraph API
Automated Docker and Docker Compose integration for LangGraph applications. Provides capability detection, file generation, and orchestration support for development and production deployments.
Detect available Docker and Docker Compose features on the system.

def check_capabilities(runner) -> DockerCapabilities

Purpose: Detect Docker and Docker Compose capabilities and versions

Parameters:
runner: Execution runner context for subprocess commands

Returns: DockerCapabilities named tuple with detected features

class DockerCapabilities(NamedTuple):
    docker: bool
    compose: bool
    buildx: bool
    version: str

DockerCapabilities fields:
docker (bool): Docker engine is available
compose (bool): Docker Compose is available
buildx (bool): Docker Buildx is available for multi-platform builds
version (str): Detected Docker version

Usage Examples:
from langgraph_cli.docker import check_capabilities
from langgraph_cli.exec import Runner

with Runner() as runner:
    caps = check_capabilities(runner)
    if caps.docker:
        print(f"Docker {caps.version} available")
    if caps.compose:
        print("Docker Compose available")
    if caps.buildx:
        print("Multi-platform builds supported")

Generate Docker Compose YAML files from configuration and runtime parameters.
def compose(
    capabilities: DockerCapabilities,
    image: str,
    config: Config,
    port: int = DEFAULT_PORT,
    docker_compose: Optional[pathlib.Path] = None,
    watch: bool = False,
    debugger_port: Optional[int] = None,
    debugger_base_url: Optional[str] = None,
    postgres_uri: Optional[str] = None
) -> str

Purpose: Generate complete Docker Compose YAML for LangGraph deployment

Parameters:
capabilities (DockerCapabilities): Docker system capabilities
image (str): Docker image name to deploy
config (Config): LangGraph configuration object
port (int): Host port to expose (default: 8123)
docker_compose (Optional[pathlib.Path]): Additional compose file to merge
watch (bool): Enable file watching for development
debugger_port (Optional[int]): Port for debugger UI
debugger_base_url (Optional[str]): Base URL for debugger API access
postgres_uri (Optional[str]): Custom PostgreSQL connection string

Returns: Complete Docker Compose YAML as string

Usage Examples:
from langgraph_cli.docker import compose, check_capabilities
from langgraph_cli.config import validate_config_file
from langgraph_cli.exec import Runner
import pathlib

# Load configuration and check capabilities
config = validate_config_file(pathlib.Path("langgraph.json"))
with Runner() as runner:
    capabilities = check_capabilities(runner)

# Generate compose file for production
compose_yaml = compose(
    capabilities=capabilities,
    image="my-app:latest",
    config=config,
    port=8080
)

# Generate compose file for development with debugging
dev_compose_yaml = compose(
    capabilities=capabilities,
    image="my-app:dev",
    config=config,
    port=2024,
    watch=True,
    debugger_port=8081,
    debugger_base_url="http://localhost:2024"
)

# Generate with additional services
full_compose_yaml = compose(
    capabilities=capabilities,
    image="my-app:latest",
    config=config,
    docker_compose=pathlib.Path("docker-compose.services.yml"),
    postgres_uri="postgresql://user:pass@postgres:5432/db"
)

Convert Python data structures to properly formatted YAML.
def dict_to_yaml(d: dict, *, indent: int = 0) -> str

Purpose: Convert dictionary to YAML format with proper indentation

Parameters:
d (dict): Dictionary to convert to YAML
indent (int): Base indentation level (default: 0)

Returns: YAML string representation

Usage Examples:
from langgraph_cli.docker import dict_to_yaml

# Convert configuration to YAML
config_dict = {
    "services": {
        "app": {
            "image": "my-app:latest",
            "ports": ["8080:8080"],
            "environment": {
                "LOG_LEVEL": "INFO"
            }
        }
    }
}
yaml_output = dict_to_yaml(config_dict)
print(yaml_output)
# Output:
# services:
#   app:
#     image: my-app:latest
#     ports:
#     - "8080:8080"
#     environment:
#       LOG_LEVEL: INFO

The CLI generates different Docker Compose configurations based on use case:
Basic production deployment:

version: '3.8'
services:
  langgraph-api:
    image: my-app:latest
    ports:
      - "8123:8000"
    environment:
      - PORT=8000
      - HOST=0.0.0.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Development with watch mode and debugger:

version: '3.8'
services:
  langgraph-api:
    image: my-app:dev
    ports:
      - "2024:8000"
      - "8081:8001"  # debugger port
    environment:
      - PORT=8000
      - HOST=0.0.0.0
      - DEBUGGER_PORT=8001
    volumes:
      - ./src:/app/src:ro  # watch mode
    develop:
      watch:
        - action: rebuild
          path: ./src

Deployment with bundled PostgreSQL:

version: '3.8'
services:
  langgraph-api:
    image: my-app:latest
    ports:
      - "8123:8000"
    environment:
      - PORT=8000
      - HOST=0.0.0.0
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/langgraph
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:15
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=langgraph
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  postgres_data:

Docker integration supports multi-platform builds using Docker Buildx:
The system automatically detects Buildx support and enables multi-platform builds when available:
# Capability detection includes Buildx support
capabilities = check_capabilities(runner)
if capabilities.buildx:
    # Multi-platform builds available
    platforms = ["linux/amd64", "linux/arm64"]

Generated Docker commands support platform specification:
# Single platform
docker build -t my-app:latest .

# Multi-platform with Buildx
docker buildx build --platform linux/amd64,linux/arm64 -t my-app:latest .

The CLI follows this workflow for Docker file generation: validate the langgraph.json configuration, detect Docker capabilities on the host, generate the Compose YAML via compose(), and hand the resulting file to Docker Compose for orchestration.
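As a minimal sketch of the tail end of that workflow — writing the generated YAML to disk and assembling the Docker Compose invocation — the snippet below uses a hard-coded YAML string in place of a real compose() result, and the output file name is an illustrative assumption, not the CLI's actual path:

```python
import pathlib
import tempfile

# Stand-in for the string that compose() would return (illustrative content).
compose_yaml = "services:\n  langgraph-api:\n    image: my-app:latest\n"

# Write the generated file and assemble the launch command.
out_dir = pathlib.Path(tempfile.mkdtemp())
compose_file = out_dir / "docker-compose.generated.yml"
compose_file.write_text(compose_yaml)

cmd = ["docker", "compose", "-f", str(compose_file), "up"]
print(" ".join(cmd))
```

Keeping the generated file on disk (rather than piping it) makes the deployment easy to inspect and re-run by hand.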
Docker containers are configured with consistent environment variable handling: runtime settings such as PORT, HOST, and DATABASE_URL are injected through the generated Compose file rather than baked into the image.
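The environment lists shown in the Compose examples above could be assembled with a small helper along these lines; build_environment is a hypothetical illustration, not part of langgraph_cli:

```python
from typing import Optional

# Hypothetical helper showing how environment entries might be assembled
# for the generated service; not the CLI's actual implementation.
def build_environment(port: int, postgres_uri: Optional[str] = None) -> list:
    env = [f"PORT={port}", "HOST=0.0.0.0"]
    if postgres_uri:
        # A custom PostgreSQL connection string surfaces as DATABASE_URL.
        env.append(f"DATABASE_URL={postgres_uri}")
    return env

print(build_environment(8000, "postgresql://user:pass@postgres:5432/db"))
```

Centralizing the list this way keeps the production, development, and PostgreSQL variants consistent with each other.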
Generated Docker configurations include health checks and monitoring:
# API health check
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 60s

# PostgreSQL health check
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
  interval: 5s
  timeout: 5s
  retries: 5

Production configurations include resource constraints:
deploy:
  resources:
    limits:
      cpus: '2.0'
      memory: 2G
    reservations:
      cpus: '0.5'
      memory: 512M

Docker integration provides detailed error handling: missing Docker or Docker Compose installations are detected up front, so failures surface before any build or deploy step runs.
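As a sketch of that kind of pre-flight validation, the snippet below mirrors the documented DockerCapabilities tuple and reports missing tooling; the validate_capabilities helper and its messages are illustrative, not the CLI's actual code or wording:

```python
from typing import List, NamedTuple

# Minimal stand-in mirroring the documented DockerCapabilities tuple.
class DockerCapabilities(NamedTuple):
    docker: bool
    compose: bool
    buildx: bool
    version: str

# Illustrative pre-flight check: collect every problem instead of
# failing on the first one, so the user sees all missing tooling at once.
def validate_capabilities(caps: DockerCapabilities) -> List[str]:
    errors = []
    if not caps.docker:
        errors.append("Docker is not installed or the daemon is not running")
    if not caps.compose:
        errors.append("Docker Compose is required but was not found")
    return errors

caps = DockerCapabilities(docker=True, compose=False, buildx=False, version="24.0")
for msg in validate_capabilities(caps):
    print(f"Error: {msg}")
```

Buildx is deliberately absent from the checks: it gates optional multi-platform builds rather than being a hard requirement.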
Install with Tessl CLI
npx tessl i tessl/pypi-langgraph-cli