
agent-openai-memory

Add memory capabilities to your agent. Use when: (1) User asks about 'memory', 'state', 'remember', 'conversation history', (2) Want to persist conversations or user preferences, (3) Adding checkpointing or long-term storage.


Stateful Memory with OpenAI Agents SDK Sessions

This template uses OpenAI Agents SDK Sessions with AsyncDatabricksSession to persist conversation history to a Databricks Lakebase instance.

How Sessions Work

Sessions automatically manage conversation history for multi-turn interactions:

  1. Before each run: The session retrieves prior conversation history and prepends it to input
  2. During the run: New items (user messages, responses, tool calls) are generated
  3. After each run: All new items are automatically stored in the session

This eliminates the need to manually manage conversation state between runs.
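The lifecycle above can be sketched with a minimal in-memory stand-in (illustrative only; the real `AsyncDatabricksSession` persists items to Lakebase, and `Runner.run` performs this bookkeeping for you when given a `session`):

```python
# Minimal in-memory sketch of the three-step Session lifecycle.
# Illustrative only -- not the AsyncDatabricksSession implementation.
class InMemorySession:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self._items: list[dict] = []

    def get_items(self) -> list[dict]:
        # Step 1: prior history, prepended to the input before each run
        return list(self._items)

    def add_items(self, items: list[dict]) -> None:
        # Step 3: new items stored after each run
        self._items.extend(items)


def run_turn(session: InMemorySession, user_message: dict, responder) -> dict:
    history = session.get_items() + [user_message]  # step 1: retrieve + prepend
    response = responder(history)                   # step 2: model generates items
    session.add_items([user_message, response])     # step 3: store new items
    return response
```

Because the session, not the caller, owns the history, each turn only needs the new user message plus the same `session_id`.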

Key Concepts

| Concept | Description |
|---|---|
| `Session` | Stores conversation history for a specific `session_id` |
| `session_id` | Unique identifier linking requests to the same conversation |
| `AsyncDatabricksSession` | Session implementation backed by Databricks Lakebase |
| `LAKEBASE_INSTANCE_NAME` | Environment variable specifying the Lakebase instance |

How This Template Uses Sessions

Session Creation (agent_server/agent.py)

from databricks_openai.agents import AsyncDatabricksSession

session = AsyncDatabricksSession(
    session_id=get_session_id(request),
    instance_name=LAKEBASE_INSTANCE_NAME,
)

result = await Runner.run(agent, messages, session=session)

Session ID Extraction (agent_server/agent.py)

The session_id is extracted from custom_inputs or auto-generated:

def get_session_id(request: ResponsesAgentRequest) -> str:
    if hasattr(request, "custom_inputs") and request.custom_inputs:
        if "session_id" in request.custom_inputs:
            return request.custom_inputs["session_id"]
    return str(uuid7())

Lakebase Instance Resolution (agent_server/utils.py)

The LAKEBASE_INSTANCE_NAME env var can be either an instance name or a hostname. The resolve_lakebase_instance_name() function handles both cases:

_LAKEBASE_INSTANCE_NAME_RAW = os.environ.get("LAKEBASE_INSTANCE_NAME")
LAKEBASE_INSTANCE_NAME = resolve_lakebase_instance_name(_LAKEBASE_INSTANCE_NAME_RAW)

Prerequisites

  1. Dependency: databricks-openai[memory] must be in pyproject.toml (already included)

  2. Lakebase instance: You need a Databricks Lakebase instance. See the lakebase-setup skill for creating and configuring one.

  3. Environment variable: Set LAKEBASE_INSTANCE_NAME in your .env file:

    LAKEBASE_INSTANCE_NAME=<your-lakebase-instance-name>

Configuration Files

databricks.yml (Lakebase Resource)

Add the Lakebase database resource to your app:

resources:
  apps:
    agent_openai_advanced:
      name: "your-app-name"
      source_code_path: ./

      resources:
        # ... other resources (experiment, etc.) ...

        # Lakebase instance for session storage
        - name: 'database'
          database:
            instance_name: '<your-lakebase-instance-name>'
            database_name: 'databricks_postgres'
            permission: 'CAN_CONNECT_AND_CREATE'

databricks.yml config block (Environment Variables)

The LAKEBASE_INSTANCE_NAME env var is resolved from the database resource at deploy time. Add the following to your app's config.env block in databricks.yml:

config:
  env:
    - name: LAKEBASE_INSTANCE_NAME
      value_from: "database"

.env (Local Development)

LAKEBASE_INSTANCE_NAME=<your-lakebase-instance-name>

Testing Sessions

Test Multi-Turn Conversation Locally

# Start the server
uv run start-app

# First message - starts a new session
curl -X POST http://localhost:8000/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": [{"role": "user", "content": "Hello, I live in SF!"}]}'

# Note the session_id from custom_outputs in the response

# Second message - continues the same session
curl -X POST http://localhost:8000/invocations \
  -H "Content-Type: application/json" \
  -d '{
      "input": [{"role": "user", "content": "What city did I say I live in?"}],
      "custom_inputs": {"session_id": "<session_id from previous response>"}
  }'

Test Streaming

curl -X POST http://localhost:8000/invocations \
  -H "Content-Type: application/json" \
  -d '{
      "input": [{"role": "user", "content": "Hello!"}],
      "stream": true
  }'

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| "LAKEBASE_INSTANCE_NAME environment variable is required" | Missing env var | Set `LAKEBASE_INSTANCE_NAME` in `.env` |
| SSL connection closed unexpectedly | Network/instance issue | Verify the Lakebase instance is running: `databricks lakebase instances get <name>` |
| Agent doesn't remember previous messages | Different `session_id` | Pass the same `session_id` via `custom_inputs` across requests |
| "Unable to resolve hostname" | Hostname doesn't match any instance | Verify the hostname or use the instance name directly |
| Permission denied | Missing Lakebase access | Add the database resource to `databricks.yml` with `CAN_CONNECT_AND_CREATE` |

Next Steps

  • Configure Lakebase: see lakebase-setup skill
  • Test locally: see run-locally skill
  • Deploy: see deploy skill
Repository: databricks/app-templates