
LanceDB brings versioned context memory to multimodal AI agents
28 Jan 2026 · 6 minute read

As AI agents take on longer-running tasks and ingest more than just text, teams face a real problem: context no longer fits neatly inside a single prompt or chat history. Information accumulates across steps, experiments, and retries, and needs to be retained if agents are going to be reliable and debuggable over time.
For teams building or extending coding agents, this shows up in how agent state is handled across runs. Agents need to retain intermediate decisions, share context with other agents or collaborators, and reconstruct what happened during earlier steps.
That challenge helps explain the release of Lance Context, a new open source library from the team behind LanceDB, a database project built around the Apache Arrow–based Lance data format. Lance Context is designed to manage the lifecycle of what it calls multimodal agentic context: the accumulated inputs and outputs an agent touches as it works.
Treating agent context as a dataset
Rather than thinking about context as something assembled on the fly for each model call, Lance Context treats it as a versioned dataset. Text, images, PDFs, embeddings, and structured data can live together inside a single “context stream,” backed by the Lance format.
Each update to that stream creates a new immutable version, making it possible to rewind to earlier states or branch off alternative execution paths. That design is aimed at agent systems that explore multiple approaches to a task, or that need to reconstruct exactly what an agent had access to at a given point in time.
For developers building custom agent stacks, this makes it easier to keep track of what an agent has seen and done across runs. An agent can persist its accumulated context across sessions, branch experiments without duplicating state, or share a common contextual record with other agents or teammates, all without relying on ad hoc logs or stitched-together storage systems.
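The article does not show Lance Context's own API, but the versioning behaviour it describes is already exposed by the Lance format's Python bindings (pylance), which back the library. The sketch below is illustrative only: the dataset path, column names, and schema are assumptions, not Lance Context's actual interface.

```python
# A minimal sketch of versioned agent context using the Lance format's
# Python bindings (pip install pylance). Paths and column names are
# illustrative; Lance Context's own API may differ.
import lance
import pyarrow as pa

# Write an initial "context stream": one row per item the agent ingests or produces.
step1 = pa.table({"step": [1], "kind": ["text"], "content": ["user asked for a refactor"]})
lance.write_dataset(step1, "agent_context.lance")

# Each append creates a new immutable version of the dataset.
step2 = pa.table({"step": [2], "kind": ["text"], "content": ["agent proposed plan A"]})
lance.write_dataset(step2, "agent_context.lance", mode="append")

ds = lance.dataset("agent_context.lance")
print(ds.versions())  # list of versions with timestamps

# Rewind: open the dataset as it existed at an earlier version to
# reconstruct exactly what the agent had access to at that point.
earlier = lance.dataset("agent_context.lance", version=1)
print(earlier.to_table())
```

Every append or update yields a version that can be checked out later, which is the property Lance Context builds on for rewinding and branching agent state.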
According to the LanceDB team, this approach is already being used inside Uber for production agent systems, where agents consume a mix of documents, visual inputs, and tabular data. The emphasis is on persistence and replay: keeping a durable record of agent state that can be reused over time, rather than relying on ephemeral chat logs.
Lance Context itself is open source, meaning the code is publicly available and can be inspected, modified, and self-hosted. It is positioned as a low-level building block rather than a full agent framework, leaving decisions about prompt construction and task execution to other tools.
Measuring memory’s impact
Storing richer context is only part of the story. A recurring question for teams adopting agent systems is whether retaining more information actually improves outcomes, or simply increases complexity.
From Tessl’s perspective, this question becomes unavoidable as agent context moves from transient to persistent. Once agents carry memory across runs, branches, and collaborators, teams need ways to understand whether that additional context is actually helping.
This is the problem evals are designed to address. While richer memory makes it possible for agents to operate over longer horizons, it also introduces new uncertainty around quality, regressions, and unintended behavior. Persisting context alone does not answer whether an agent is producing better code, fewer errors, or more consistent results over time.
For developers building and extending agent systems, this raises an important question: how do you tell whether additional memory is improving outcomes, or simply making behavior harder to reason about?
Seen through that lens, Lance Context and Tessl operate at different layers. Lance Context provides a way to durably store and version the full body of context an agent interacts with. Tessl focuses on assessing whether that context is being used effectively, through evals that track quality and change over time.
Context as infrastructure
These efforts point toward a broader shift in how agent systems are built. As agents move beyond short, single-turn interactions, context assumes the role of shared infrastructure, with its own lifecycle and operational concerns.
Lance Context formalizes that memory layer by treating agent context as data that can be stored, branched, and replayed. For AI-native developers, its relevance lies in enabling agent systems that can be extended, debugged, and evolved over time, particularly as agents take on longer-running and more complex roles.
Tessl, meanwhile, addresses the next question: how teams determine whether all that remembered context is actually helping agents do better work. As agent-based development becomes more common, memory and measurement are likely to become standard parts of the stack — even if they are handled by different tools.