Coding agents are powered by LLMs, which have been trained on a vast corpus of coding practices and software libraries. Still, working with software libraries through an LLM presents a distinct set of challenges. We’ll dig into each of these issues and show how you can use the Tessl Registry to keep your agents' knowledge up to date and give them the best possible context.
LLM and Library Challenges
1. Knowledge cutoff dates
Model training happens at a certain moment in time. When your favorite library releases a new feature after that moment, the model doesn’t know about it, which can lead to your coding agent hallucinating features when you ask for them. Even though new models arrive frequently, their training data snapshot may still lack up-to-date knowledge.
To illustrate, here are the knowledge cutoff dates of recent Claude models. There is often a 3-4 month gap between a model’s cutoff and its release:
- 2024.04 - Claude 3.5 Sonnet
- 2024.07 - Claude 3.5 Haiku
- 2024.11 - Claude 3.7 Sonnet
- 2025.03 - Claude 4 Opus
- 2025.07 - Claude 4.5 Sonnet
Source: LLM Knowledge cutoff dates project
2. Multi-version APIs confusion
For the more popular and longer-lived libraries, multiple major versions often exist. Because LLMs have been trained on all of these versions at once, they can mix the syntax of different versions, and the model has no way to know which version you mean to use. Find out how we test this.
3. Private libraries should stay… private
Not every library is public: you might be using internal libraries built by other teams, or proprietary logic. In theory you could train or fine-tune your own model, but that is a tedious and expensive task. You also might not want to share your competitive code with the bigger model providers out of precaution.
Common solutions and their pitfalls
Convert code repositories to big prompts
Much like people copy and paste code, early on developers tried automatically generating one big prompt from their codebase. As the codebase grows more complex, this becomes challenging, as the result might not fit the LLM’s full context window (maximum prompt size).
The single-file approach also makes the coding agent spend more time finding the right pieces of documentation, as it has to process the file in smaller chunks. It also assumes the repo contains both the documentation and the code.
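To make the overflow concrete, here is a rough sketch using the common heuristic of ~4 characters per token; `repo_prompt_tokens` is a hypothetical helper, not part of any tool mentioned here:

```python
from pathlib import Path


def estimate_tokens(text: str) -> int:
    # rough heuristic: ~4 characters per token for English text and code
    return len(text) // 4


def repo_prompt_tokens(root: str, exts=(".py", ".md", ".ts")) -> int:
    """Estimate the token count of a 'whole repo as one prompt' dump."""
    return sum(
        estimate_tokens(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in exts
    )


# By this estimate, a 2 MB codebase is already ~500k tokens,
# larger than most models' context windows.
```

The point is not the exact numbers but the trend: the prompt grows linearly with the codebase, while the context window stays fixed.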
Ad hoc web searches for up-to-date information
If your agent supports searching the web for extra information, you can have it crawl the internet. Often, though, this means the agent has to wade through webpages not optimized for agents: they contain navigation headers, require JavaScript clicks, or even send the agent chasing advertisements.
The web-search approach is also not optimized for agents: how does the agent know when to stop? Does it really need to search for the same information over and over again? Is it even reading about the version you need?
Agent-optimized documentation as extra context
Some websites have started to offer an agent-optimized version of their content (llms.txt (https://llmstxt.org/)), but it’s still not common. To avoid endless crawling, some sites try to keep track of documentation for popular libraries, but that is hard to maintain across all libraries. These intermediate versions act as locally cached, optimized copies, saving the time needed to find the relevant documentation.
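For illustration, an llms.txt file is just a small markdown index served at the site root. The shape below follows the llmstxt.org proposal; the project name and links are made up:

```markdown
# ExampleLib

> ExampleLib is a small HTTP client. This file points agents
> at the docs that matter, in a form they can fetch directly.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first request
- [API reference](https://example.com/docs/api.md): full client API

## Optional

- [Changelog](https://example.com/docs/changelog.md): version history
```

The H1 names the project, the blockquote gives a one-paragraph summary, and the H2 sections list markdown links an agent can fetch without fighting navigation or JavaScript.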
The AI-native way - the Tessl Registry
Our approach combines the best from the above approaches and goes as follows:
- we turn a code repository into a set of markdown files (specs) optimized for consumption by coding agents
- with a set of evals, which are tests in AI-engineering speak, we check whether these specifications cover the whole code repository
- to check the results, we challenge our agent to solve some coding challenges
- if needed, we improve the specs or give them a final stamp
- then we bundle all that pre-processed knowledge into a package (we call that a Tile)
- to make it available for your agents, we publish the tile to our Tessl registry
- our Tessl CLI checks your locally installed packages to figure out which libraries and versions are in use
- we download the relevant tiles locally as part of your repository directory
- finally, we help your agent find the relevant downloaded documentation
Our whole documentation generation pipeline solves the common shortcomings:
- context window overload: we make sure each file’s content fits the context window and split related parts intelligently
- wait time: information is available instantly, with no more long agent spinning cycles
- native support: no need for scripts to check different versions, as the tooling understands your library package managers
- your agent in control: by installing this information locally, we let your agent decide (using MCP) which context it needs for the job at hand
We’ve already generated 10k tiles from popular OSS libraries in our registry, ready for you to use. If a library isn’t in there, you can ask us to generate that documentation too!
Oh, and your private repositories? They can stay private: you can publish them to a private workspace for your team to collaborate on! Contact us if you want to try our feature that turns your code into reusable agent documentation. All your teams can then reuse that knowledge in their agents, which is also ideal for internal microservice-oriented architectures.
Also for package maintainers
The challenge is not only for people using libraries but also for those providing them: you want to help users of your library adopt new features even if the LLMs don’t know about them yet. To ease any major version migration, you can publish your own documentation as tiles and have people install them as extra knowledge for their coding agents. You can automate the process while staying in control of review and of packaging the different versions.
Become one of our first official publishers and unleash the full power of agent-optimized documentation for your libraries.

