Tessl Raises $125M to Build AI Native Development
Join us for an enlightening discussion on the future of software development as Tessl unveils its bold vision for AI Native Software Development. Special guest Ben Galbraith shares insights into the groundbreaking journey Tessl is embarking on, alongside hosts Simon Maple and Guy Podjarny.
Episode Description
In this special episode of AI Native Dev, brought to you by Tessl, hosts Simon Maple and Guy Podjarny are joined by Ben Galbraith, a new addition to the Tessl team from Google, to discuss Tessl’s groundbreaking vision for AI Native Software Development. With a significant funding milestone of $125 million, Tessl is poised to revolutionize how software is conceived and built. Throughout the episode, Guy and Ben delve into the challenges of current software development methodologies, the role of Large Language Models (LLMs), and Tessl's commitment to an open ecosystem that fosters innovation and collaboration. Listen in to learn about Tessl's platform development journey, insights from product development, and their exciting plans for community engagement leading up to a beta release in early 2025.
Chapters
1. [00:00:00] - Introduction to AI Native Dev and Tessl's Major Announcement
2. [00:01:00] - Guy Podjarny Introduces Tessl's $125M Funding Milestone
3. [00:02:00] - Guest Introductions: Ben Galbraith and His Role at Tessl
4. [00:04:00] - Understanding AI Native Software Development
5. [00:06:00] - Addressing Current Shortcomings in Software Development
6. [00:08:00] - The Role and Challenges of Large Language Models (LLMs)
7. [00:15:00] - Tessl's Platform Development Journey
8. [00:20:00] - Insights into Product Development and Community Engagement
9. [00:29:00] - Tessl's Strategic Funding and Future Outlook
10. [00:33:00] - Closing Thoughts and Call to Action
Full Script
**Guy Podjarny:** [00:00:00] We're building a platform that enables AI native software development. And so what we've been building so far is iterations on what that platform could be and can be tried. We think the right future for it is a pluggable system. It's a system that is open, that is extensible, that is able to allow builders and tool builders to, to build pieces of it, and then allow developers and developers teams to compose those into their own unique build workflows.
**Simon Maple:** You're listening to the AI Native Dev brought to you by Tessl Hello and welcome to the AI Native Dev. This is actually quite a special episode. And it's one in which we're [00:01:00] going to be making quite a significant Tessl announcement. Guy, why don't you kick us off with the announcement?
**Guy Podjarny:** Yeah, indeed. We got a few bucks around the company. So we are announcing a 125M dollars in funding.
This includes a 25M seed round that we actually did back in April. That was led by Boldstart and GV. And a 100M round that we've just concluded that was led by Index with participation from Accel as well as Boldstart and GV again. So it's a big news for us. It fuels us.
We'll talk about kind of motivations and such more. But it sets us up to build towards that big vision that we have. So it's an exciting day.
**Simon Maple:** There we go. And we were so excited. We had to get that announcement out in the first 30 seconds. Now let's take a step back and as I mentioned, this is a special podcast.
I'm going to be the host for this podcast and Guy, unusually you're not going to be the host for this podcast. You're going to be one of the two people that I'll be interviewing. Let's do some intros.
**Guy Podjarny:** God help us all.
**Simon Maple:** My name is Simon Maple. I'm the. One of the hosts, usually, of the AI Native Dev Podcast. I work for yourself Tessl and [00:02:00] I run DevRel at Tessl.
Guypo, why don't you give us a brief introduction for those who don't know you?
**Guy Podjarny:** I am the founder and CEO of Tessl. Before that, I founded Snyk, which I built into a multi billion dollar company with an amazing group of people around me in the developer security space and I am the co host here of yourself, Simon on it and have a pretty kind of rich background before that building developer tools in the DevOps space.
**Simon Maple:** Wonderful. And we also have Ben Galbraith. Ben.
**Ben Galbraith:** I'm delighted to be here. I joined Tessl. I guess it's been about two months to lead product and delighted to be here as part of joining Tessl. I've moved to London from the Bay Area, California. Huge move for us. I've got a bunch of kids that have come with me and my wife as well.
And prior to joining Tessl, I was a Google for a number of years with product and design leadership roles on Firebase and Chrome and Identity. And before that, where we met was when I was at Walmart Labs running product design and the front engineering for Walmart's globally commerce stuff. And before that did a bunch of stuff, Mozilla and Ajaxian talking about [00:03:00] the dynamic web back in the day and a bunch of other roles and really delighted to be here on the Tessl team.
**Guy Podjarny:** Yeah. And it's fun. And Ben and I have been talking about doing something together for I guess pretty much since those kind of Walmart labs.
**Ben Galbraith:** 15 years, yeah.
**Guy Podjarny:** It's really fun to get the opportunity. The only part was to, have him move his family over here to London.
**Simon Maple:** Yeah, it was a small thing.
**Guy Podjarny:** To do over here, but it's a big adventure, yeah, we're super excited.
**Simon Maple:** Wonderful. In the celebratory mood, why don't we? Bubbly. Pass across some bubbles and so that people on the podcast as well as on the video can join us. Let's cheers in front of the microphone. So cheers, 125M to Tessl.
Cheers to the AI Native Development movement. Absolutely.
So AI, yes, AI Native Development. We know before we talk about the announcement had the money, the green in too much depth. Let's talk a little bit about the space. Now we know because this is the AI Native Dev Podcast, we know the space that Tessl is in, which is AI Native Development.
Let's first of all. And I know we talked about this Guy [00:04:00] in the first episode, but let's go a little bit deeper now and just talk about, see how that's changed. First of all, let's start with the pain points. So when we think about AI Native Development as potentially a solution or a new paradigm in which we see development going forward, what are the shortcomings of how software is developed today.
**Guy Podjarny:** So our premise is that today software development is code centric and the code couples within it both what the system does and how it does it And it's written into the very same lines, right? I guess a developer or an LLM you're reading the code and you're trying to parse out from the implementation both what the application does and how it does it.
Maybe you're helped by some comments. And this is just the reality of software development. The code becomes the source of truth and over time it gets more and more hairy to maybe parse it out. It gets bigger, the logic becomes more refined and optimized and edge case ads. And if the comments get out of whack because there's no immediate value to maintaining them or the documentation.
And so the code continues [00:05:00] to grow coupling this what and how, and they become very hard to remove and separate. And that really creates a fragility challenge in the software that gets built up every time you modify the how, which is necessary every time you maintain. The software , it lives in a dynamic environment.
And so there's a dependency. The change is open source, a component or framework on operating system or a service that you use. And so all of these things change. You have to change the system. Of course, you evolve the system. You add a feature. It might break another feature, and every time you do that, it's very hard to know that you haven't modified what the application does.
Once again, like if you're amazing at it, then for a while you would build tests, and even the tests, they combine both testing what the system does and how it does it. And so it really grows over time, and it's just been our reality. We try to tackle it with breaking the system out into smaller components and things like that, but fundamentally it's the core challenge that exists in software development, and we think there's an [00:06:00] opportunity which we can talk about to separate those with the world of AI for a variety of reasons, and we think that would help make software easier to create, but also easier to maintain over time and make it less fragile and easier to improve and optimize.
**Ben Galbraith:** I really like the articulation of the sort of what and how, and it feels as an industry, we've been trying to do this for a while. I think about, I don't know how many people looked at what Charles Simonyi was doing with this thing called intentional software 20 years ago, but he also had this observation that you've got a description of the system and documentation and comments, and then you've got code, which is a parallel rearticulation, and he tried to make the comments be the canon of the behavior and to drive the system. And I just don't think they had the tools that were necessary. And that's what's so exciting about the world we live in now with LLMs having this ability to fill in the gaps. And I think that's tantalizing. We've all had this experience with things like Claude and ChatGPT.
It's seen what happens when you have a brief articulation. And then you see the LLM fill in all these gaps, but [00:07:00] that sort of leaves a lot of questions about, okay, so I've generated a lot of stuff, but how do I maintain it? And doing that in a dialogue with sort of a chat agent is a way to do it, but I think we've all found that it's not a very satisfying way to do it.
And so one of the ways I think about what we're doing at Tessel is how can we take what the LLMs have proven to be really good at and structure code in a way where you can have an ongoing maintenance relationship. And then one of the other big challenges is how do you validate in a meaningful way that the LLM got it right? Because randomness is baked into the system. Like all of these models have this notion of temperature, which is explicitly about don't do what you think you should do. Do something random. It's so creative. That's right. And so you really have to have some sort of a validation story coupled with this gap filling ability that LLMs bring to the table to bring spec driven development to life.
**Simon Maple:** And that's interesting. It's essentially the LLMs, the AI that's unlocked this. Obviously there are a ton of tools around as well today. A few. A few, one every day. What is it about AI assistants that doesn't plug this hole, doesn't [00:08:00] satisfy this shortcoming in where we're talking about development today?
**Ben Galbraith:** Yeah, I think there's at least two ways to think about it. One is the power we're seeing for these tools to augment the code that you write today. And that's a really powerful use case because it's really clear that these models are fantastic at generating snippets of code. And you have this opportunity to roll the snippet multiple times too.
And that's a really natural expression of what these things are good at. But if you take a step back and you ask yourself, how do you create an entire system in this way? It's really a different challenge. And so that's one thing that I think that separates what we're doing from the sea of tools that are out there.
And occasionally I see a few companies that are tackling this challenge, but by and large, it still seems to be a pretty wide open space to figure out how you can take these LLMs and create a way of programming. that capitalizes on what they're good at and recognizes what they're not good at and create truly scalable systems on top of that.
And I'm preaching to the choir here, but it's such a different way of thinking about software that it motivates a whole new [00:09:00] category. And that's what we're calling at Tessl, AI Native Software Development. To me, that's what that term is really trying to express. It's that it's a new way of developing how with cloud.
When you really think about cloud and what cloud makes possible, you don't want to take a VM and shove it into into sort of some cloud hosts. You want to really think differently about how you design a system to take advantage of cloud. And that's really the type of journey that I think we're going to have to be on to really create entire software systems in this way.
That's how I think about it.
**Guy Podjarny:** And I think that's a really good description around almost like systemizing how do you capture your intent, which oftentimes users, they express in words, the LLM fills in the gaps and it's useful but how do you then continue from there?
What remains to me is indeed the second aspect, which is eventually today, a lot of these tools are different UX for generating code. They go through a more or less elaborate process of creating something. And then eventually they produce code and the code is the long lived asset. The code is the thing that lives from version to version.
Actually most of these applications don't really have a life that is a, [00:10:00] that is longer term, right? So they build those out and and maybe you export that and you continue to work with it in language. But really all it does is it. It modifies the code. So very powerful new user experience for the code, but the code is the asset that continues.
So it continues to conflate this how and the what. And the intent when we talk about spec centric is that spec will capture the intent. What is it that the user has wanted, including verification that we have a lot to do on defining them to say what do you want? And how do I verify that what was produced satisfies those needs?
And this is the thing, the artifact that we continue nurturing over time, we continue building out over time. And once you have this great kind of niche, like central entity that says what you want, and how to verify it's correct. Now you can really tap into that creativity of the LLM, And you can imagine multiple stages of non deterministic, creative, LLM powered or other optimizations and improvements that adapt that [00:11:00] implementation, that improve that optimization.
So I think that's really exciting. And that is that spec centricity. As long as you don't create the new asset, as long as all you do is you generate the code. You always have to decipher back what was the sort of the implementation intent and what was the intent in it. I think I want to highlight also one more thing.
It's pretty obvious that the one unlock here is that machines can write code. Fine, LLMs can implement this and we're capturing it. But also, we're taking for granted, maybe a little bit, is this comment you made quickly, which is filling the gaps. Today, you can provide the machine, the system, an incomplete spec, and it would fill in the gaps.
If you can say, create a tic tac toe game, And not say what are the rules of tic tac toe, because the LLM knows.
**Ben Galbraith:** Yeah.
**Guy Podjarny:** That's amazing. And it makes specs viable. If you compare them to the formal specs of old that are just so painful to use. That's right. It makes them viable, but it also represents [00:12:00] trouble, right?
It also represents a problem. It's like, how do I know which decisions it made? Which is another reason for needing the new development paradigm, because we have to figure out how do you Interact with the LLM's decisions. How do you figure out what it did? How do you define degrees of freedom between the spec and the implementation?
And all of those, they require a craft. They require some information which is what we're trying to build.
**Ben Galbraith:** It's super exciting because it feels like the industry's wanted to go here for so long. I think about test driven development and I feel like test driven development wanted to motivate this style of development.
But when you, the human, are writing the tests and the implementation, it's just too much labor. It's too much toil. And there are a few shops that have managed to make it happen, but most people just don't get there. And now we have another opportunity to think in this way. I don't think specs will feel like a list of tests, in TDD, but I think the craft will feel similar to people who develop software that way.
And I should also say, I started as an engineer, but for most of the past, I'd say, gosh, 12, 13 years, I've been more on the product side than I've [00:13:00] been on the engineering side. And to me, that's what makes spec driven development really resonate because I'm a technical person, but I don't really care about what the current sort of fashion is for Swift or for TypeScript, and I don't want to learn new best practices, but I am very comfortable thinking very technically about what a system should do.
And if there's some new hot way to enumerate a data structure, if there's some new pattern for doing asynchronous callbacks, I don't really want to learn that. But I am very happy to spec out in collaboration with the LLM behavior That's a little bit higher level than that and to get a lot of leverage from that And we'll figure out the right level for where the specs should live and that's something that I think I'm really excited about with our community approach to engage the community and figuring that out together. We have concrete ideas with our current implementation, but we'll see if they're at the right level of abstraction but I'm really excited about this general level of altitude of thinking technically about what the system should do without having to get into the actual weeds of writing the implementation of everything myself and [00:14:00] being productive in that way.
**Simon Maple:** Yeah, and that's great that you're just starting to lean into a little bit of talk about the implementation of how we're, not the implementation of the code, but implementation of a platform of Tessl. And, every single time that we talk about AI Native Development publicly, people go this sounds really interesting.
It sounds very much like a future that's very possible. But what does Tessl do? And I think, you know what? We've got a bit of money now. I think it's time to get a little bit more concrete. So let's talk a little bit about, first of all, the seed. So the 125M. has been raised so far. That's in two buckets.
That's a 25M seed that was led by Boldstart and GV and then a 100M for a Series A. So let's first talk about the seed. When did that happen, Guypo?
**Guy Podjarny:** It happened back in April, in practice, over March with the two investors building it out.
It takes a moment to get all the paperwork in place and that helped us get off the ground. It helped get sort of two great partners for the journey and Ed Sim at Boldstart and Tom Hulme at GV. I am happy to be in a position in which I have more freedom to operate. And part of it is getting [00:15:00] amazing people to to join the journey and pursue a big mission on it.
And it helped on the fundraising side. It was easier than before. But I think it's very important when you build a business to be accountable, to need to explain to people, I'm a bit of a control freak. I don't like giving up control. But I do think accountability is healthy, needing to explain your actions, both to the people around you and to investors.
So that was good. And that was a, an opening shot to say, hey, we're building something real here. It's not some sort of whim that Guy is talking about over here, but a true mission. And that really is demonstrated by getting a great group of people around it. And getting a decent round of funding from top tier investors.
**Ben Galbraith:** If I can add a different perspective too, just as someone who talked to Guy a lot during this period of time and have come in relatively late in the journey so far. I'm just really excited about what Guy was able to do with the fundraising because the initial round demonstrated that serious people had confidence in the vision that Guy had, but when Guy started to talk through the potential of doing a larger round, it was really a powerful [00:16:00] validation of the concept and vision that Guy had incepted with the team.
And not to repeat your words, and also the team that we've been able to recruit here at Tessl, we're now about 20 people, it really gave us what we needed to be able to go the distance, because this is a bold new vision, it's going to be a long journey to figure this out, and we're helping to incept a whole new category.
And it was really important, I felt, just again, as a newbie outsider to see that we had the capital to be able to sustain this journey and that we weren't going to be under pressure to try to figure this out. And are we going to have to make compromises and go in weird paths?
We have enough capital to really see this through, which is just a really exciting place to be. And that's my perspective.
**Simon Maple:** And Guy, this is interesting, actually, because whenever like with startups, timing matters, right? So Guy, obviously with, 125 million now, that puts a lot of cards in your hand.
What does that mean from a timing aspect?
**Guy Podjarny:** Yeah, so I think there are a bunch of reasons to indeed add [00:17:00] around. Ben just mentioned a few, which is to give us stability to go the distance and give everybody involved, ourselves, to build the right thing, the community to say, hey, we can invest in this.
Some of these things will materialize very quickly. Some will take a while. Our customers or would be customers to know that this is solid foundation that they can build upon. And so all of those were important to go further. They didn't have to happen right now. We haven't quite consumed much of the 25M that were originally invested.
But I think the opportunity came to just get a really great investor in Index and Carlos, and then have Excel participate again, fundamentally, I guess having been in this world for a while now, both as the founder of Snyk and building that out, but also with the many angel investments that I've made, getting the right investors on board.
It's not a trajectory changing path for the company. But if you get the partners, the right partners to be with you for the journey, them as individuals can be very valuable. They offer a different lens on the world because they look [00:18:00] at the macro. They start by looking at the whole market and the whole path.
You're building the company and then you have people in the team that look at the very bits and bytes and the low levels and that completeness of perspective is very important. Carlos was the CTO at GoCardless before this in general, I've known him for over a decade, I think now and and so having that, that right lens is very valuable.
It's also useful, and I see this now in Snyk, to have people that have been along for the journey because they build with you the appreciation of what's important to maintain. And because we're, we're planning here on a long lived company right on a company that would eventually go public and would be sustainable company in the industry It's important for those significant shareholders and original investors to have that conviction around where it's going and not just be excited by the traction or the path we're early in it and I think this investment on the investor side, of course, I'm putting words in their mouth, but it really comes in because they are excited by the team.
They're excited by the vision [00:19:00] and they're excited to build it together. They themselves believe in the thesis and the change that will happen and they want to be part of it. So it's great.
**Ben Galbraith:** I just want to clarify one thing. Sometimes when you have these large raises with an AI company, it's clearly destined to fuel the development of something like a new foundational model or some other complex deep model development.
And I wanted to clarify, that's not our vision. We think we can get where we need to go on top of the existing foundational models and other models that are emerging from the ecosystem. So don't think about this raise as being motivated by a race to burn in order to out innovate some of the other foundational model vendors.
This is really about having the resources to build the team that we need to create what we're trying to create.
**Simon Maple:** With 25M, you were able to afford myself
**Guy Podjarny:** and I needed another 100M to hire Ben.
**Simon Maple:** With the change you were able to hire another 19.
**Ben Galbraith:** I've got a lot of kids that take a lot of money to move the circus, my friend.
**Simon Maple:** Now publicly, obviously Tessl's done a fair amount already. This podcast we've launched a virtual conference, which is happening on the [00:20:00] 21st of November, all around AI Native Development and some of the tools and practical advice about how organizations can get started using AI powered tooling today.
We've launched a community last week which is the AI Native Dev community. Go onto Discord. You can talk with like-minded people and talk a little bit about how we are getting on with Tessl as well. But it's a vendor neutral community there. When we think about what's actually happening under the covers though, the things that we haven't announced, is there anything more we can talk about the platform?
What we are actually building? How far we've got today.
**Guy Podjarny:** Look at Tessl we are building a platform that enables AI native software development. And so what we've been building so far is iterations on what that platform could be and can be tried. We've been working with sort of friendly users to give us feedback to see which parts are terrible and which parts are are compelling. Our own team is using it. And, we do these hackathons of trying to create tiles, which are our unit of software and seeing what we like about it and what we don't. And we've [00:21:00] been iterating, we're on a second major generation of the platform, even though it hasn't seen the light of day, we are iterating on it substantially.
And we're building it out towards and early 2025 beta to start putting people into it. There's an interesting mix here in which on one hand, we firmly believe in shipping it and in being out there and in allowing users to tell your product is terrible and hopefully say why.
So you can fix it. You can improve it. And since we're trying to build something big, we have no illusion of getting it right on the first take. At the same time we want to be respectful of people's time. We don't want to, just have everybody face problems that we can already identify and address, we want to be able to demonstrate enough of this sort of enticing future of AI Native Development and give users an opportunity to engage with them.
And so we're working between those. It's important for me to just further emphasize is that there's a risky trend that happens with every new technology trend, but specifically in the world of AI, to favor walled gardens, to favor small, closed ecosystems that are [00:22:00] very opinionated, it's easier to, to grok, to absorb a new technology, definitely something as disruptive as AI.
Within that more confined context and a lot of the companies in the space right now are naturally leaning towards that. Hey, I will do the end to end thing. And we actually discussed this in the podcast several times. And Matt from Netlify just mentioned this, but the context of the web and we talk about, is it one product that understands the code base and therefore offers you everything around it?
Or is it composable? We believe in an open ecosystem. Then we think eventually. tech moves, it shifts new ideas, new technologies come along, different innovators build different pieces of it. And so what we're building, while we aim to build a sustainable end to end flow, that doesn't require something else, you'd be able to use it and produce powerful things on it, on its own, we think the right future for it is a pluggable system.
It's a system that is open, that is extensible, that is able to allow tool builders to, to build pieces [00:23:00] of it and then allow developers and developer teams to compose those into their own unique build workflows. And so we think a lot about all these moving parts and try to bring them in to this initially more contained community that we'll start at the early next year.
That again will call us on all the sort of silliness that we might have in the platform and and will allow us to improve the product and we will grow that group as we improve the platform.
**Ben Galbraith:** Yeah, from a product perspective, this is one of the more interesting decision points is at what point do we bring the system out to engage the community because we have these expansive visions across so many dimensions about what we want to do.
Like we have ambitions for what spectrum and development. means. And we have milestones along the way that sort of take us there. And there's a temptation to release it at the first milestone. But do we wait? Do we refine it? Do we prove it out? Or do we just let the community give us feedback?
And we think about that with our generator. We think about that with the way that we generate documentation with the system. We think about that in terms of like the types of systems [00:24:00] that we facilitate you building. There's decomposition is a big part of the system, this notion of if you have a really complex system that you want to build, how do we help you factor that into simpler parts that can be effective?
And how much do we want to invest in the decomposition engine and what's anyway, just this balance of figuring out the right movement to surface it for the community is a challenge and it's a fun one.
**Simon Maple:** And over the last, few months, obviously with various product advancements from Tessl engineers, are there anything you want to call out as specifical learnings in terms of, obviously, we've been sharing with certain friendlies who are giving us feedback on the product.
Are there certain things that, our listeners can take away, really interesting takeaways.
**Ben Galbraith:** One thing that's just lately on my mind from the latest testing session and feedback session is the relationship of the information generated because there's this sort of asymmetry from the input you have in the system. I want a system that does X. For example, I want a system. That's a silly example is I want a utility that helps me convert between color spaces and you have something that's really simple and then you put that in the system and it can generate a ton of [00:25:00] information from that. We have the ability in the system to do like a big one shot generation, if we really wanted to, that sort of decomposes that into different pieces and creates specs for the different pieces and a lot of code.
But it can be overwhelming for people to process if you go from like a prompt, this just drop of like dozens of sort of pieces and we're really thinking through what is the right creation journey to go? And do we really want to bring you in an attended fashion along multiple steps in that journey?
And I think we're finding that for some users, the answer is yes, they very much want to go along the way they want an opportunity to look at decisions that the system has made the sort of gap filling we talked about earlier. And understand it along the way and maybe influence it. Whereas other people just feel like, give me your best shot, and I'll take a look at it, and I'll decide if I like what you generated.
That's one of the design challenges that we're puzzling through at the moment.
**Guy Podjarny:** It's also interesting that eventually, we think all of those workflows are legitimate. There isn't one way you develop software. Sometimes you build something basic, and then you decorate, and you expand it.
Sometimes you design something quite elaborate at the beginning, and you build it. [00:26:00] Neither of those is correct or incorrect. They're just You know, whatever fits that person, the current setting. And eventually software development should have flexible workflows. You should be able to choose those.
But at the moment, we're trying to shrink it something that is shippable, that people can start. And related to that is this debate of customizability. We want to demonstrate AI Native Development and one path of temptation is we're going to pick a workflow that is, the workflow of generating software that is AI native, that is spec centric, but that accommodates only one option for those things versus a second path that is actually like injecting in a bunch of these places extensibility and tinkering and modification, which we want to enable because we think that is the right path and also would be relevant to our users, but there's a certain amount of the more options you give, the less streamlined the experience. So I think all of that is really interesting to me.
Probably the biggest learnings are in the world of what the LLMs do and do not do well today. And this [00:27:00] challenge of capturing generation in a delivery plan, right? And how do you bring it forward? How do you advance it? I think it's interesting. The LLMs are very good at certain generations and then not so much at others.
One thing that I was dismissing at the beginning is how much they're willing to cheat. If you give them as a part of that spec, you give them the tests. Initially the team was quite concerned about them cheating and writing if statements. Oh, you told me that the test is if I get three I return seven So it'll just write code that says if I give you three I return seven.
It's just like nah That's not like the LLMs are not designed to cheat. They're designed to give you a correct answer and then they would also do it lo and behold they cheat Quite well, there's all these like human traits of how the LLMs behave and it's interesting because sometimes the way to address those are human emphases.
Either you like emphasize in the prompt to say, do not do that, or you supervise you introduce a critic that says , did they cheat? Did it cheat? It's the same as pretend code, right? You tell it generate a QR code [00:28:00] generator. It's here's the placeholder for the QR code generators.
No. I told you to generate this, not to and so I think dealing with these eccentricities of the LLM is to me the the primary insight for us, like in terms of sharing. I think there's just a lot. And some of those I think are probably learned by every AI dev tool out there.
I think we try to focus more on what is the software development. process here and build these things because we think some of these problems we will solve and some of them we will give the sort of the workflow the system and we actually want to tap into the kind of creativity and brilliance of others to say here's how I solve it or even this is how I solve it for my specific scenario
**Simon Maple:** Let's jump back to the 100M series A, Guy?
What do you think about these AI startups and companies that are raising stupid amounts of money and on ridiculous valuations? What would you say?
**Guy Podjarny:** We could have raised at a higher valuation, but we chose to be robust [00:29:00] here and choose a modest valuation. I'm just, Joking.
We're rated at a pretty good number. I think there's a whole bunch of reasons why this happens, and I think some are better than others. I think there are definitely companies that are just blinded by the sort of the sheer numbers and investors. Investors have very complicated incentives.
You can go from the most cynical route, which is some investors are playing the fees game, like investors get paid both on carry of return or a portion of the returns that they bring their investors and on fees. And so if you're like most cynical, you can talk about them just deploying fast amounts of money, they're still going to make a lot of money.
You can talk about just like learnings, they have to be in the game, they have to try them out. And even if there's a few, failed investments they have to do it, but also there's just firm belief that some of them, you don't know which ones, but some of them will like, look at OpenAI, become incredible businesses.
OpenAI's valuation is eye popping, but their business is pretty ridiculous and how quickly it grew to like billions in revenue. And so [00:30:00] I think there's definitely an element of true belief. I do think that within these fundraisers, there's, you can loosely split them into these two camps that Ben mentioned.
One is big raises that are needed because you need big money to buy GPUs and train your system on it. And those are harder for me to grasp. I think again, people have all sorts of thoughts about them, but they very industrial to me in nature, you have to depreciate to these models very quickly.
So you're building a factory to train the model, then you're burning it down. There's power that gets lost.
**Ben Galbraith:** I have to say as someone who's new to the UK, it reminds me of F1 in a sense, right? Like before the budget cap, right? Like you can feel the team on a shoe string budget. But you're probably going to lose to Ferrari, Mercedes, and maybe McLaren.
If you don't build those out. They're really well funded, because I think of the big vendors in the space, the big players that we all know, OpenAI, Google, Anthropic, it's hard to imagine that any startup, even if you have a fantastic race, can go the distance.
**Guy Podjarny:** If you're playing this game. You have to have that money.
So the startups make sense to me and the [00:31:00] investors. It's just a question. . And then you have the other set, which I believe we are part of which is the reason they're raising the large rounds is because they're taking on a big mission and we are seeking to go the distance and do something big.
Yeah. Those are just opportunities. There's sort of two reasons not to raise money. One is optionality. Basically every time you raise, you're doubling down on the company because you've just raised the valuation for any potential exit or acquisition. And I think we're quite committed, I can definitely say for myself, and I think the team, everybody came in with that conviction, which is we're trying to build something big, and so for us it was less of a factor.
And two is, if you spend the money ahead of actually being ready to spend the money, so you use it unwisely. And I think over here we're relying on our discipline with the experience in the team and myself to be smart about how we how we spend the money and to be ready to hit the gas when we are ready, but not to create fake product market fit sensation by spending money.
And that's why you have to sell that Ferrari.
**Simon Maple:** Is that why this is that why this is just apple juice and sparkling water? Exactly. [00:32:00] So let's talk about the future then. Let's finish off with that. Can I push you on a month, day, and time for the product release?
**Ben Galbraith:** January 15th at 4 p. m. We've pre decided, no we haven't decided on the launch date. We think about it more in terms of making sure that the product is ready for this community that we want to give it to so that it really doesn't waste their time.
We're innovating on what it means to do AI Native Development, and we want to find the right starting point where you see the vision we have and you can realize aspects of that vision in the system and we have meaningful extension points where people can put their labor into it and move things forward.
What we're not trying to do is have a Tessl proprietary platform and invite volunteers to enrich Tessl. We're really trying to create a community that is expansively defining this category together and we're providing some tools, but we want people to really feel like the skills that you're creating by using this tool are general skills that will apply to this new category that we're helping to create with AI Native Development.
So [00:33:00] we are planning to have something ready in early 2025. That's the target. We're not more specific than that, but we really do want to bring it out to develop it in the open with the community and not be in a lab doing an R and D project for the next two years.
Yeah. Yeah.
**Simon Maple:** So something definitely within the first 12 months I thought he said December 15th, no? Did
**Ben Galbraith:** you hear December? This is our journal road back conversation.
**Guy Podjarny:** We talk about, I think we have all have strong conviction to say, to ship it to bring it out there.
And indeed it's balanced with that sort of desire for breadth good opportunity to maybe call out for the listeners is we have now opened our waitlist. And so you can now go to Tessl.io and sign up for the waitlist. You can tell us a little bit more about yourself. That would help us pull you in sooner because we want a variety of users within our early community.
And we're very keen to open that up. And so nobody's more impatient than us in bringing it out there. So I'd encourage the listeners here to sign up and we will pull you in as soon as we can and in the meantime, we'll keep you [00:34:00] posted about development and would love for you to engage in our community and the upcoming conference.
**Simon Maple:** Yeah, absolutely. The waitlist is great for the product. And then of course, if you're just dying to carry on talking about this, of course, we have the podcast. So feel free to subscribe to that. We obviously have the Discord community, which you can connect to from our website AI Native DevCon, which is happening in a couple of days, actually.
Let's talk more on the community. Ben, Guy, thank you very much for joining. Thank you, Simon.
Congratulations to all of us, and everyone at Tessl for this amazing raising moment.
**Guy Podjarny:** Indeed. And I just thought it was a good time to say thank you to everybody that has engaged so far and to all of you who will engage in the future.
And there's a big, exciting journey ahead of us. I know we say this all the time and maybe it sounds cliche, but we really think that this is a group effort and this is a community effort. And really excited to bring a platform to you and then excited to see what you built on top of it. Thanks.
Amazing. Cheers
**Simon Maple:** Thanks. And tune in to the next episode when Guypo will be back as the host next week Yeah, indeed like it or not[00:35:00]
Thanks for tuning in join us next time on the AI Native Dev brought to you by Tessl.
Podcast theme music by [Transistor.fm](https://transistor.fm/?via=music). Learn how to start a [podcast](https://transistor.fm/how-to-start-a-podcast/?via=music) here.