From the AI Native Dev: From building Copilot as GitHub CTO to building a code foundation model at Poolside: Jason Warner explores the AI code generation journey and where it leads.
Introduction
In this episode of AI Native Dev, Guy Podjarny sits down with Jason Warner, a seasoned expert in software development and AI, to discuss his journey from being the CTO of GitHub to founding Poolside, a company focused on creating state-of-the-art language models for software development. With an extensive background in the tech industry, Jason has significantly contributed to various high-impact projects. At GitHub, he led the development of transformative tools like GitHub Actions and GitHub Copilot. His career also includes key roles at Heroku, enhancing the developer experience, and Canonical, where he worked on cloud and infrastructure software. Additionally, Jason serves as an advisor at Redpoint Ventures, providing strategic guidance to emerging startups. In this conversation, he shares insights into the evolution of code generation, the impact of AI on software development, and how Poolside aims to redefine the developer experience.
Jason Warner's Journey to Poolside
Jason Warner's career trajectory is nothing short of impressive. "Probably the biggest relevant claim to fame over here is he was GitHub's CTO at the time that he built up Copilot and Actions and a bunch of other fun things over there," says Guy Podjarny, highlighting Jason's significant contributions at GitHub. Before his tenure at GitHub, Jason served as the GM at Heroku, where he was instrumental in building end-to-end developer workflows, and earlier worked on cloud and infrastructure software at Canonical. After a stint at Redpoint Ventures, Jason founded Poolside, a company focused on creating state-of-the-art language models for software development. "And now we're building Poolside, which is a frontier AI company, so think OpenAI, Anthropic, etc., focused on software as the domain and the discipline," Jason explains.
Foundations of Code Generation with LLMs
Diving into the foundations of code generation with Large Language Models (LLMs), Jason credits OpenAI for their pioneering work. "We owe OpenAI a debt of gratitude. From a commercialization, from a corporation perspective, they went and they did a lot of hard work for a bunch of years, and when it wasn't really in vogue to do it, and they proved out a bunch of things," he says. Jason explains that the scaling laws—adding more data and compute—hold true, enabling models to solve more complex problems. "Effectively what we've seen is that as these models get larger, we throw more data at it and more compute at it, what they're able to do is effectively solve more complex problems," he adds. These models, as Jason notes, are good at understanding enough to answer questions based on their internal representations.
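The scaling behaviour Jason describes is usually summarised by empirical scaling laws. As a rough illustration only (not a description of Poolside's or OpenAI's actual training recipe), the sketch below plugs numbers into a Chinchilla-style fit, where predicted loss falls as parameter count and training tokens grow; the constants are the published Chinchilla estimates and are indicative rather than exact.

```python
# Illustrative only: Chinchilla-style scaling law, L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the published Chinchilla fits; real frontier-model numbers will differ.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def estimated_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for a model with n_params parameters
    trained on n_tokens tokens, under the fitted power law."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Growing both model size and data keeps pushing the loss down, which is the
# "more data, more compute, more complex problems" effect Jason describes.
for n, d in [(7e9, 1.4e12), (70e9, 1.4e12), (70e9, 5.6e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {estimated_loss(n, d):.3f}")
```

The point is the shape rather than the exact numbers: each step up in model size and data buys a further drop in loss, which shows up in practice as the ability to handle more complex problems.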
Code-Specific Models vs. General Purpose Models
Jason draws a clear line between code-specific models like Poolside's and general-purpose models like OpenAI's GPT. "There's some history here, which is, in the early days, you ended up with code-specific models. And actually there's a whole host of people who are still building code-specific models," he says. General-purpose models include code in their training data, and "a side effect of that is that it's going to be able to answer code questions as well," Jason explains. Poolside, by contrast, focuses exclusively on software development. He argues that this focused approach yields better results in the domain, because the models are tuned to understand and generate code more accurately.
The Evolution of Code Generation Models
Discussing the current state of the art in code generation, Jason shares some impressive examples: "Somebody who doesn't know anything about software, but they want to solve a problem, and they're able to go from a nothing to a something. That is just a monumentally impressive thing," he notes. He also points to the complexity of tools like ffmpeg, where producing a correct command is a significant achievement in itself. "Anytime I ask any model about ffmpeg and it actually gives me a usable, real answer that works at the command line, I'm always impressed, because ffmpeg is an effing nightmare," Jason says. He emphasizes the interplay of retrieval and reasoning: while a lot of what these models do is retrieval, there are instances where they show true problem-solving capability.
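To make the ffmpeg example concrete, here is the kind of invocation a model has to get exactly right. The flags are standard, known-good ffmpeg options for an H.264/AAC transcode; the file names are placeholders, ffmpeg is assumed to be on the PATH, and the command is wrapped in Python only to keep the examples in one language.

```python
import subprocess

# The sort of request Jason describes: "convert this screen recording to a
# web-friendly MP4". The flags below are standard ffmpeg options; getting
# details like -crf and -preset right is where plausible-looking answers slip.
cmd = [
    "ffmpeg",
    "-i", "input.mov",      # source file (placeholder name)
    "-c:v", "libx264",      # H.264 video codec
    "-crf", "23",           # quality target (lower = better quality, bigger file)
    "-preset", "medium",    # encode-speed vs. compression trade-off
    "-c:a", "aac",          # AAC audio codec
    "output.mp4",
]
subprocess.run(cmd, check=True)
```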
Challenges and Limitations of Current LLMs
Identifying the boundaries of current LLMs in code generation, Jason points out that while they are great as code assistants, they are not yet autonomous junior developers. "The developer is still front and center, hands on keyboard, asking questions or getting some feedback, whether it be code assistance in terms of ghost writing and code completion or asking in a side-by-side chat kind of world, but it's assisting the developer to a degree," he explains. One of the main challenges is the reliability of these models over extended interactions. "The answers actually will decrease over time because they get the broader and broader context; there's too much stuff being pulled in," Jason observes. He also notes the need for better context and customer-specific data to improve model performance.
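The episode does not spell out a fix, but the failure mode Jason describes, where answer quality degrades as more and more context is pulled in, is commonly mitigated by budgeting what goes into the context window. A minimal sketch, assuming a crude whitespace word count rather than a real tokenizer:

```python
def trim_history(messages: list[str], budget_tokens: int = 4000) -> list[str]:
    """Keep only the most recent messages that fit within a rough token budget.

    Illustrative only: real assistants use an actual tokenizer and smarter
    selection (summaries, retrieval), but the underlying idea is the same:
    unbounded accumulation of context eventually hurts answer quality.
    """
    kept: list[str] = []
    used = 0
    for message in reversed(messages):      # walk newest-first
        cost = len(message.split())         # crude proxy for token count
        if used + cost > budget_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))             # restore chronological order

# Usage: only the tail of a long conversation is sent to the model.
history = [f"message {i}: " + "word " * 50 for i in range(500)]
print(len(trim_history(history)))  # far fewer than 500 messages survive
```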
Customer Code and Context
Training models on customer-specific code and context is crucial for enhancing their relevance and accuracy. "The way that I refer to this is basically that you're going to want to add the customer's context (confusing term here, but I always say customer's context) to the system, because you want the customer's information to be available to the models," Jason explains. This approach helps the model understand and adapt to the specific coding standards and practices within an organization. Code also raises the stakes for precision. "A misplaced comma in code is incredibly bad; it changes the work/not-work functionality. A misplaced comma in language can be bad, but for the most part, humans who read these things can overlook that. The compiler can't overlook that," he elaborates.
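How Poolside actually makes a customer's context available is not detailed in the conversation. A common pattern, sketched below with a deliberately naive word-overlap score standing in for a real embedding model, is to retrieve the most relevant in-house files and prepend them to the prompt so the model answers against the organization's own code and conventions rather than only its training data. The repository contents and helper names here are hypothetical.

```python
def score(query: str, document: str) -> float:
    """Naive relevance score: fraction of query words present in the document.
    Stands in for a real embedding/similarity model purely for illustration."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words) / max(len(query_words), 1)

def build_prompt(question: str, repo_files: dict[str, str], top_k: int = 3) -> str:
    """Prepend the customer's most relevant files to the question so the model
    sees the organization's own code and conventions."""
    ranked = sorted(repo_files.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    context = "\n\n".join(f"# {path}\n{body}" for path, body in ranked[:top_k])
    return f"Relevant company code:\n{context}\n\nQuestion: {question}"

# Hypothetical usage with an in-memory "repository".
repo = {
    "billing/invoice.py": "def create_invoice(customer_id, line_items): ...",
    "auth/session.py": "def create_session(user, ttl_seconds=3600): ...",
}
print(build_prompt("How do we create an invoice for a customer?", repo))
```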
The Future of AI in Software Development
Looking into the future, Jason speculates that true human-level reasoning and planning will emerge in the domain of software first. "One of the core beliefs that we have at Poolside is we actually believe true human level reasoning and planning will emerge in the domain of software first, because of some of the luxuries that we have in software," he says. The ability to pre-verify code before it runs in production could be a game-changer. "Could you imagine a world where you're interacting with a system, you're asking it to do something, and it could pre-verify that the generation or the code that you've done, one, adheres to all your SDLC, but also two, is runnable, executable, and is running for you on a system," Jason envisions. He believes this will lead to a more advanced and efficient software development process.
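The pre-verification loop Jason imagines can be sketched as a gate that refuses to surface generated code unless it at least parses and passes the project's tests. This is a toy illustration, with pytest standing in for "adheres to all your SDLC", not a description of Poolside's product:

```python
import pathlib
import subprocess
import tempfile

def pre_verify(candidate_source: str, test_source: str) -> bool:
    """Gate generated code: accept it only if it parses and its tests pass.

    Toy illustration of the 'pre-verify before it reaches production' idea;
    a real system would use a proper sandbox and the full set of SDLC checks
    (linting, policy, CI) rather than a temp directory.
    """
    # 1. Static gate: reject anything that does not even parse.
    try:
        compile(candidate_source, "<generated>", "exec")
    except SyntaxError:
        return False

    # 2. Dynamic gate: drop code and tests into a scratch dir and run pytest.
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        (root / "candidate.py").write_text(candidate_source)
        (root / "test_candidate.py").write_text(test_source)
        result = subprocess.run(["pytest", "-q", str(root)], capture_output=True)
        return result.returncode == 0

# Hypothetical usage: only verified generations are surfaced to the developer.
generated = "def add(a, b):\n    return a + b\n"
tests = "from candidate import add\n\ndef test_add():\n    assert add(2, 2) == 4\n"
print(pre_verify(generated, tests))  # True, assuming pytest is installed
```

The structure, generate, verify, then surface, is the point; the verification steps themselves would be whatever checks an organization's SDLC already mandates.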
The Role of Developers in an AI-Native World
As AI becomes more integrated into software development, the role of developers is likely to evolve. Jason foresees a shift towards more abstract and high-level tasks. "I've always said there's two types of developers in the world. There's the developer that wants to build an application and everything else doesn't matter. And then there's people who love to chase the stack," he explains. In the future, developers may not interact with source code as much but will focus on higher-level planning and reasoning. "I don't think people are necessarily going to always have to interact with source code in the future," Jason says. However, he emphasizes the importance of understanding the fundamentals of software development, as these skills will remain crucial.
Summary
In this enlightening conversation, Jason Warner shares his insights into the evolution of code generation, the advantages of domain-specific models, and the future role of developers in an AI-native world. Key takeaways include:
- Jason's extensive experience in tech, from GitHub to Poolside.
- The fundamentals of LLMs in code generation and the importance of scaling.
- The differences between code-specific models and general-purpose models.
- The current state of the art in code generation and the balance between retrieval and reasoning.
- The challenges and limitations of current LLMs, including the need for context and customer-specific data.
- The importance of training models on customer-specific code to enhance accuracy.
- Speculations on the future of AI in software development, including the potential for autonomous developers.
- The evolving role of developers, focusing on higher-level tasks and understanding the fundamentals.
For more updates and advancements in AI-driven software development, follow Poolside on Twitter @poolsideai and visit their website at poolside.ai.
Guypo with his brand new swag!
On a personal note, I’m extremely excited for this new adventure. Founding Blaze (acquired by Akamai) was about making the web faster; founding Snyk was about proving security can be embedded into dev. Both are great missions, which I continue to be passionate about. For me, though, Tessl is an even bigger opportunity: offering a better way to create software, a path, made possible by AI, to producing software that is naturally more performant, more secure, and better in many other ways. SO MUCH opportunity awaits, and we have an incredible team on the case.
Almost the whole team for our first team photo!
Yaniv, telling us something amazing!
Recording the next podcast episode of The AI Native Dev!