Live Roundup: AI Myth busting in the real world and more with Matt Biilmann, Ben Galbraith, Patrick Debois and Simon Last

In this Thanksgiving special, join Simon Maple and Guy Podjarny as they delve into the heart of AI's impact on the tech community, exploring gratitude, innovation, and the evolving landscape of software development with industry leaders.

Episode Description

This month's episode of the AI Native Dev podcast, hosted by Simon Maple and Guy Podjarny, is a Thanksgiving special that brings together some of the brightest minds in AI and software development. Featuring Mathias Biilmann, CEO of Netlify; DevOps pioneer Patrick Debois; Notion's Simon Last; and Eric from StackBlitz, the episode explores the dynamic world of AI development. Discussions range from the impact of AI on the open web, innovative AI tools, and the evolving role of AI engineers. Tune in for a deep dive into the future of AI and its transformative power in software development.

Chapters

1. [00:00:00] Introduction and Thanksgiving Greetings
2. [00:01:00] Thankfulness and Community Spirit
3. [00:03:00] AI Native DevCon Highlights
4. [00:06:00] The Open Web and AI with Mathias Biilmann
5. [00:09:00] Innovative AI Tools with Patrick Debois
6. [00:14:00] AI as a User with Eric from StackBlitz
7. [00:18:00] Fine-Tuning LLMs: Insights from Notion's Simon Last
8. [00:23:00] Context Windows and Attention
9. [00:28:00] The Role of AI Engineers
10. [00:32:00] Summary and Closing Remarks

Full Script

**Simon Maple:** [00:00:00] You're listening to the AI Native Dev brought to you by Tessl

Hello everyone and a big welcome again to another live monthly roundup. My name is Simon Maple, joining me today.

**Guy Podjarny:** Guy Podjarny. Yeah, doing another live episode. We haven't learned our lesson yet. Why do we do it to ourselves, eh?

**Simon Maple:** It's November 28th, which this year is Thanksgiving. First of all, a massive happy Thanksgiving to the folks in the US and those celebrating.

**Guy Podjarny:** Yeah, this somewhat weird holiday, but happy Thanksgiving folks. Yeah.

**Simon Maple:** I suppose we should start off with maybe what we are thankful.

**Guy Podjarny:** Yeah, that's a good idea. I don't know, do you wanna kick off? What do you, what are you thankful for?

**Simon Maple:** So actually do you know what, we'll talk a little bit about AI Native Devcon later.

One of the things [00:01:00] I'm always in awe of is the technical community in which we work in. And one of the things I love is when ideas come around the community really comes together and thinks together as one brain and provides different angles and different discussions.

And with the new community that we'll talk a little bit about later, it's been amazing to see so many people join that and contribute into that. So community, technical community and I guess community in general, but right now technical community, I'm super thankful for that.

**Guy Podjarny:** Yeah, no, that's amazing. And that fully relates to that element. I guess on my end, I guess beyond 125 million in funding of the thank you. Thank you. Yeah. Dear investors. Beyond that. Probably like what really jumps to mind is a bit more serious on it, which is the ceasefire that we have in Israel between Israel and Hezbollah, hopefully would stop a whole bunch of suffering that has been happening for a while, in the midst of a otherwise, a pretty a dreary situation over there.

It's nice to [00:02:00] have a moment of something that feels a step in the right direction on it. So quite thankful for that. But yeah, I think generally thankful for the Tessl journey. So so much fun to be building and have amazing people even including present company in it. I was gesturing at you on it like clearly wasn't thinking fast enough now, but I really am thankful for the amazing people that joined the journey that believe in the journey and just I kind of building together on it.

It's it's amazing and it's a humbling and it's fun in the day to day.

**Simon Maple:** Amazing.

**Guy Podjarny:** Cool.

**Simon Maple:** Thank you. Thank you for that. So this month on the podcast, we've been busy again. We've had Mathias Biilmann who's been talking about one of the big questions which we'll dive into a little bit is, does AI threaten the open web?

And some really interesting discussions around that challenges and opportunities that AI can provide us with the open web. Thank you. From Matt Biilmann, Netlify's CEO, of course and co founder. Patrick Debois had a second session, but this time not talking about DevOps, rather talking about it from a development angle, so talking about how he's been playing with various coding tools but in a slightly non orthodox [00:03:00] way.

So rather than just using them as they're supposed to be used. Seeing how we can take them to the next level. So that was pretty cool.

**Guy Podjarny:** Always unorthodox thinking from Patrick.

**Simon Maple:** Yeah. You can always expect Patrick to think that one level different, which is amazing. It opens up so many opportunities.

And then we of course had a session in the studio around the corner, which was entirely about an amazing funding round that we had. And how that enables us to build an AI native developer platform. And then you spoke with Simon Last, which has been confusing for the last two weeks when we've been talking about so many, like on the same email thread, like

**Guy Podjarny:** Simon, are you replying to that email?

Oh, hold on. Simon. Maple is not on that thread. It's only Simon Last

**Simon Maple:** I immediately thought though, Oh, this is something I've dropped and so often, but every now and then it is nice.

**Guy Podjarny:** An amazing guest, though. It's so much sort of real world insight around building Notion AI, really appreciated him taking the time and sharing some of those learnings.

**Simon Maple:** Yeah. Let's talk about some news. Before we jump into a deeper dive into those sessions the first was one that you added, which was David Singleton's new AI agent operating system slash dev slash agent. [00:04:00] Sure. Yeah.

**Guy Podjarny:** So it's been recently announced on it. I've heard it through the VC grapevine before which is David Singleton and a bunch of other sort of amazing people, many of whom are original sort of Stripe leaders on it have announced the new company.

I think they call it slash dev slash agent, which is an allusion to slash dev slash payment, which I think was Stripe's original name. So it's a nice homage to that. The website is the S D S A. Like everything, it's like a little bit nebulous in a precisely what it is that they're doing. I don't know, like I'm familiar with at least one more company that is saying something like this, or that is talking about a future and reimagining UIs and the likes.

I think it's very promising because of the amazing team that is built around it. And. what in another conversation that I've had in a group conversation with David is on talks about how it's a combination of a new UI paradigm, a system level services to orchestrate across the agents.

It's the right developer SDK and tool chain, and it's a two sided marketplace around how do the people, create agents and consume them. So it sounds promising and interesting. I think [00:05:00] what's interesting to me from a news perspective, it's not, there's probably 1000 other AI companies that, founded and launched during November as well is the caliber of the team that is quite impressive. And I guess I'm on the lookout for people that try to imagine something that is really further out. And so they say agent app creation, agent UI, like the way humans interaction interact them. What are the privacy models for that?

All of those are just like substantially different ways to address it. And then they start by, it seems like they are anchored in the future. So I found that interesting and I look forward to tracking what it is that they build and how they advanced. And a lot of that team has also built a lot in Android.

And so they've had an opportunity to rethink operating systems for a while. Yeah.

**Simon Maple:** Interesting. Interesting. Second set of news that's here is the Gemini new Google Gemini XP model.

**Guy Podjarny:** Yep. Yep. The new Gemini there. I think so it's interesting. It's a new model from Gemini. I'll admit that, with all the round funding and all that, I haven't had a chance to play with it personally as much.

But it made headlines. Mostly because in a bunch of the benchmarks, it won first place, made the top mark and oftentimes by a [00:06:00] decent margin and has done so even compared to o1.

**Simon Maple:** Yeah.

**Guy Podjarny:** And it's interesting if you're not familiar with the o1, o1 is very much the reasoning model and it's really capable, but it's also slow and it's quite expensive and it thinks through things in some sort of mysterious behind the scenes way. It feels interesting, agentic model and Gemini comes along and in what feels like a much more typical LLM fashion, if that can be said for something though.

Legacy traditional fashion it just gives answers. And those answers actually compare to that reasoning model. So I think it's interesting to think about that distinction of reasoning versus just training and having the neural networks. And I read some place, someone talking about intuition and it's almost like reasoning versus intuition is when you do have the answer immediately, but you don't necessarily know it's your subconscious kind of in action trying to figure out the the jumps in your neural networks, maybe that gets you to the result.

And so someone, it wasn't a term that I've heard the Gemini team use. But [00:07:00] someone talked about reasoning versus intuition, which I thought was like an interesting maybe analogy, maybe a little bit unmorphosizing. I always come back word thinking about a lens as a, you took a risk.

You took a risk. It did pan out. Didn't pan out. Yeah. , I don't know. You have a history with mispronunciations Simon.

**Simon Maple:** I've been known to mispronunciate my words quite a lot. Yeah.

**Guy Podjarny:** It's interesting to see Gemini both, the battle continues. You see really at the end of the day, mostly Google, Anthropic and OpenAI compete for the top spots with Llama always staying close to the top, but mostly I feel getting results that are similar to the reasoning model without a reasoning delay is something that might shake things up.

**Simon Maple:** Yeah. Interesting. Tessl also had a couple of not necessarily announcements, but we've been busy in the news as well.

Obviously the funding announcement, which we've already had a session on full podcast on it, but we won't go too deep into here, but just to just the highlight,

**Guy Podjarny:** we got some money. We're being longterm.

**Simon Maple:** There we go. There we go. And of course with that, we have the shiny new Tessl website. So feel free to have a look at that.

**Guy Podjarny:** [00:08:00] Shoutout to Rachel on the team and with a bunch of support have built this great new website in a relatively short timeline.

Yeah, absolutely. Yeah. Last week as well 21st of November, we had AI Native Dev Con, which the amazing Sammy Hepburn helped us with and again, another short timeline, but. an AI native developer conference. I think this is the first AI native dedicated developer conference. So it's great to have that. And it was over a thousand developers coming together, talking and discussing and listening to some amazing speakers. There's this one guy called Guypo who kicked it off. I don't know if you know him, but talking about I'm sure people are sick of listening to it. I know. I have sad news for you. It might happen a couple more times.

**Simon Maple:** If you haven't listened to that, I very much recommend it. I think it's just the way you showed that progression, and we'll talk about it a little bit later when we think about tools, but the progression or the journey in terms of the challenges that we have with traditional software development and how AI gives us that opportunity to do things differently and to actually help or rather change the way we build software.

So [00:09:00] definitely have a look at that because I think it gives a different, that higher level. change of the way of working based on the new norm that we have now with AI being able to do an amount of stuff under the covers. So I think that's, that was very interesting.

**Guy Podjarny:** It was an interesting I find oftentimes when you have to give these talks, you have to really distill a bunch of the messages into something that is more manageable that people can actually understand and find interesting.

And so I think AI native development is elaborate and you've probably heard me describe it in various ways already. But it really is about how do we narrow it down and how do we compare it to the history of software development simplifying those terms. So it was a lot of work to put together a keynote, and I hope folks have enjoyed it or found it interesting.

But it is also like just an important step in the journey of just forcing it to be, hey, keep it simple. What are the problems with code centric development today? How does it map to some problems that were solved in the past? And where is it going from here?

**Simon Maple:** So I always think doing that just generally now stepping away from the topic, but generally when we have to almost [00:10:00] present something or write something down about our way of thinking, it really does allow us to step back and think about things in a clearer way to be able to more concisely write something down or present something.

So it's a great thing to do generally.

**Guy Podjarny:** Distil your learnings, really force it. And I think sometimes people refer to it as dumbing it down, which I really I don't like that terminology because it's not dumbing it down. It's distilling it.

It's saying what is the core of it? And then if you distill it and you build these core principles, you can think bigger because now you have these kind of good foundations. They're well defined. You don't have to unravel them every time and you can build on top of it. So

**Simon Maple:** there's so many things in like generally that I'm doing, I'm coding or something.

And I'm like, yeah, I know how to do this. And then I realized I have to present on it. And it's oh, actually there's a ton of things I haven't actually looked at. I haven't thought through.

Yeah. Just let's go into the deep dive.

**Guy Podjarny:** Yeah, maybe actually before we dive into the topics of it , so I really maybe it's like over time you get a little bit comfortable with the podcast on it yeah, and this sort of new muscle and these new conversations and It's always fun to talk to smart people, but it's nice to also loosen up a little bit and talk about things [00:11:00] that are not there.

What was your sort of favorite piece that was not, maybe as much the substance of the episodes on it in this month?

**Simon Maple:** So one thing that I really enjoyed there was one thing in Patrick's session that I really enjoyed, which was, he was talking about how you can effectively use AI tooling, AI assisted tooling through different UI effectively.

So he was looking at cursor and he'd done a number of things, including gestures and including voice commands and things like that to put a greater level of input without having to type at a keyboard. And the reason he went to gestures over voice was because as he was trying to code, using voice commands as well as typing at the keyboard.

His wife comes in and starts talking to him. And it's when real life really kicks in and you think, oh, yeah, that's why we use keyboards. Cause it's, there's no outer interference with keyboards. And that was the reason why his wife came in and started talking to him and started messing up his coding and interfering with what was actually being coded.

So he then goes straight to that, right? And he gestures to be able to put one finger up or two fingers up as he's [00:12:00] typing something to be able to do it in a particular mode. So that was a nice kind of like life realization moment of okay this extra input is challenging.

**Guy Podjarny:** That's hilarious.

We actually try at home sometimes at dinner to do a ChatGPT voice mode. And we have a question, something's hey, instead of asking Alexa, which we have in the kitchen, let's open up ChatGPT voice mode and tried and, we're asking it something that we've just been debating.

And so everybody has an opinion on it and it totally cannot handle the multiple speakers at the same time. And so yeah, there's definitely some evolution, a little bit of having these things that have voice recognition and know who identify when is it that they're getting instructions and where is it casual conversation.

**Simon Maple:** I thought you were going to say Alexa and ChatGPT were starting to have a dialogue. I haven't tried that yet, yeah. How about yours? What was yours then?

**Guy Podjarny:** I probably most enjoyed the music analogy that I've had with Matt Biilmann. He was a music a journalist, I think, before joining in. So I started, it's just like a, just a fun sort of start to it.

It's hey, tell us a little bit about what that means. And we got into talking about music and AI. And it ended up being this theme throughout the episode of comparing [00:13:00] software development and AI to music and AI, which I really liked because I always think about software development as a creative role, as a, you're creating, it's somewhere between the engineering, we call it software engineering, but I really think of it a lot as creative and as something where you have to, you take a blank slate, you take something that doesn't exist and you just modify it per your imagination with these virtual tools that we have. So I really liked that. So I liked, I was almost like pleased it wasn't intentional. It was pleased with how the analogies worked because oftentimes I think of them as well as something that is inspiring one another. I think about software creation over time and how is it affected by how music has been created or art or others and vice versa. So I found that really fun. I also enjoyed the height difference in the episode of you and Patrick. Just pointing out, it made it engaging.

**Simon Maple:** There we go. Then it's then it achieved something. That's amazing. Yeah. Yeah. Awesome. So yeah, what did we start off with then? One thing that actually came that I've been thinking about recently was with all the, particularly around the conference, actually, there are a number of today's tools that people will give a [00:14:00] practical advice over Lisa Ray's, Patrick was talking about a lot of them.

We had sessions from both of you. We had, Vassell speaking as well. There are a number of different tools that are doing some amazing stuff. And I hear too much in the community these days or in the news, oh this thing's going to take completely take over cursor. This cursor is going to win or Bolt's going to win.

It's the new thing and stuff like that. And one of the things that I actually love about the community, getting back to what I'm thankful for. One of the things I love about the community is the diversity in that. And everyone is in a different place and everyone prefers something that fits well for them.

So I think when you look at all the tools that are out there, I don't like the talk of the is Java dead, those types of that type of talk. I don't like and when we think about the journey of AI native. where we are today in the more AI assisted land. It's interesting for me to think, okay where are we today?

And what's going to actually get the majority of that usage today versus in one year, two years, for example, thinking about, are people going to lean into that AI native more and [00:15:00] so forth. And it makes me think that actually. We have spec driven, we have code driven, we have prompt driven coding, and actually all of these are going to be used.

There is no black and white kind of one or the other. All of these are going to be used, and it's the case of people being on that journey, and everyone's going to be on that journey in different places. I love to think this as, similar to DevOps really and say in the case of people are gonna be in different stages and all of these tools could actually coexist and the community can be in different tools at the same time and each one there is no right to everyone will have tool that's right for them.

Yeah, what do you think in terms of the journey? With these tools that are taking very different approaches. Do you think there's a place for all of them to exist in? Or yeah,

**Guy Podjarny:** I very much hope I really, I love the diversity as well. And I like the composability as well. Many of these are things that you can pull together. And I think that's software development today. And so some of it is about fit for purpose. It's quite likely that a different stack will be needed to build a pacemaker than the latest and hottest mobile game. It [00:16:00] makes sense that there might be different approaches but they would probably overlap.

There will be pieces that you want to use in both. And you want something that is composable to say I need this piece from here. I need this level of adaptability versus predictability. I need this level of sort of whatever speed versus cost. I'm a little bit more cutting edge and I want to try something that's brand new versus not.

Maybe it's clients and hardware requirements, like all these different things alongside just preferences and ages and you build an application with a stack that comes out today and maybe in three years time or in the case of JavaScript frameworks, five minutes time there might be something entirely new that is out there.

And you've already built your application on the previous stack, so you need bigger gaps to be able to look to it. And so I think that's a part of the beauty. It's messy but it's a part of the beauty. I think when we navigate the unknown and this touches a bit on some of the stuff that I've been discussing with Matt Biilmann is it is really the kind of the dissonance or the difference between the closed ecosystem and open ecosystem. If you don't mind me, peeling a little bit into that, there's, you think today about the world and you think there's you [00:17:00] can look at the web versus mobile and the web is this. open ecosystem and it's a mess. It's it's an entire mess. You have all these JavaScript frameworks. You have all these languages. You have really a wide variety. Technology never dies. You need to support all these ancient for a good while. Like, how long did it take us to get rid of IE6?

And but it's also the source of its creativity. You tried it. It brought to life SaaS and what games and it doesn't have a moderator. There's not nothing there so says you're not allowed to build an application in this fashion or create that user experience. And at the end of the day, different worlds, different users wants different things from it and different innovators and builders can own their own niches, both in terms of what is being built.

What is the output of the user? What is the process of creation? And that's amazing, right? That's beautiful. If you think about mobile, we have these much more closed, controlled, opinionated environments. We really mostly have iPhone and Android. These are two ecosystems. They're substantial.

They're both very powerful. You can build in them. Android a bit more open than iPhone, but not quite as open. And they act as [00:18:00] gatekeepers. If you want to know about what would AI be like in mobile phones, it depends on what iPhone and Android decide, right? If you want a different type of sensor and interaction, the barrier of entry is so high, these new devices that try to come to market on it generally don't really have much of a shot, again, maybe at best in Android because there's a variety of Android based devices and I think software development today is more like the web. The web is partly part of the same thing, and I find that beautiful. I find that messy. And when AI comes along, it makes it even messier. And it allows us to experiment and try more things. But those things are, it's harder indeed. You might have fashions and hey, this is like the hardest thing ever because there's more free competition, free room for creation and multiple winners can come out.

Yeah. And if you contrast that to closed systems and maybe it's mobile. Maybe it's some sort of more closed development environments that exist today. I think the concern is that with AI, the other way to embrace AIs to actually embrace it into places in which they're more [00:19:00] closed ecosystems.

They're more opinionated. They now understand everything. There's like a lot of power to be had in understanding your code base, understanding all the business needs and information around it, controlling all the sort of the pipeline and being opinionated about how software is going to built and and deployed and operated and sold.

And so within those environments, it's easier to absorb the chaos that is AI. It's easier to think, okay, like it's chaotic and how it creates, but it creates things into a more confined space. And I see the appeal and I think we should enjoy the appeal of having AI boost these kind of more opinionated systems.

But what I worry is that almost if they get too strong, if we lean too much into it, we'll find ourselves with two or three platforms that have almost like the sort of superpower of their ability to provide that breadth of capability and software development will be dependent on whatever it is, like innovation and software development, more development tools, changing it will depend on really how they moderate it and I contrast that to the web in [00:20:00] which It's more about composable pieces that you pull together and it's a bit more chaotic and it's a bit more effort because extensibility and composability come at the expense of simplicity. And so it's a bit more chaotic, but it allows more players to play and kind of form more pictures that work.

**Simon Maple:** One of the questions that kind of came out of the map in a session then was, is the open web threatened by AI?

**Guy Podjarny:** Yeah, I think that's the sort of a version of all of this is and Matt was making good points to say he's concerned about that specifically for the web.

Yeah, no, I think it's hard to detach that from Matt's specific kind of a commercial reality. Netlify is very much a composable web type platform and it allows for connecting all sorts of things. And if you contrast it to maybe something like a Vercel or I don't know, maybe like a Shopify then those environments are somewhat competitive.

They're not entirely these things composed, right? They can collaborate, but oftentimes those are more closed environments. They allow. easier ways to [00:21:00] create a subset of the web's applications. And so he was concerned that just as they get almost so good at it, that we lean into these benevolent dictatorships, that allow us to tap into it, but we lose the ability to create the different components.

So I don't know it's like a slightly nebulous concern right now. And I oftentimes feel I don't know. Am I just a doomsayer? By saying this and I don't know if there's a very immediate concrete things. I think when I think about what can Tessl do to help in this context, which is the area where maybe there's some kind of a control that we can apply is, I think there's an advantage to having intermediate representations of the decision.

So the more The way you interact with the AI is, hey, this brilliant alien mind, can I tell you something? And then you will just get stuff done. And as long as the result is correct, I. I have no need to engage with your interim decisions. Then the more dependent you are on it, right? Like at this point you flip the light switch and a light turns on and you really have no [00:22:00] idea what happened in the middle.

The more we require or invest in the, from a community perspective and having explainability and having interim artifacts that are standardized that people can work from and explore then the more you're less locked in the more collaborative. the ecosystem can be because people can pick things off from different places.

They can optimize an interim artifact. So I don't know, there's a chance of that sort of old school software development thinking on it. And I guess I'm thinking out loud about it and expressing a concern. I don't know that this is a kind of a doomed path that we're certainly on.

**Simon Maple:** Yeah. Yeah. Very interesting, that opinion. Let's talk a little bit about, in fact, so this came up in the session with Simon and. Matt which is talking about AI as a user. So now, rather than using AI as a tool, AI actually almost effectively consuming various parts of the web or other parts of applications and so forth.

It was also mentioned the bolt.new s ession with Eric as well from [00:23:00] StackBlitz AI as a user. And that's an interesting concept and one that I don't think too many people are catering for.

**Guy Podjarny:** Yeah. Yeah. And I think probably this might be one of the top insights for me from this month, because I think a month ago, I don't think I would have used that term.

And now I think about it actually all this time, maybe there's a little bit of once Matt opened my eyes to that notion in the prep call that we had even ahead of the podcast, then I now see it everywhere. And it's interesting suddenly you think, okay, hold on.

When you even go to your code assistant, you hit tab and it introduced them open source library for you. How did it choose which one to use, right? Or in Bolt. new, it creates an application. I'm sure the GitHub Spark and all the others do the same. And it chose to deploy it. How did it choose where to deploy it?

How did it know what is possible and what is not possible. And so all of those are actually versions of AI as a user. There is an AI training model platform system somewhere that as a user has chosen to take this application and deploy it in say, Netlify and continue [00:24:00] on from there.

There's an open source library that was mentioned in some places and documented in some fashion and the code generator chose to use it over choosing something else. So if you're the provider of these things, how do you encourage that? Do you want to encourage that? Typically the answer is yes.

Netlify did a really interesting thing. Which is they have a feature that was unrelated to AI at all, which is you can deploy something to a website without authentication and then claim that website after you deployed as an ease of use, like friction reducing element. And that is actually one of the things that made AI deployers defaults to deploying it on Netlify because they don't need a user. They can just do that and then give their users and if the user liked it and the human can claim the site. So I thought that was really interesting. And I think something that we will need to deal with more and more.

**Simon Maple:** Yeah, and it brings me back if it was to think a little bit back to the Armon session as well, where he talked a little bit about how I can fill in various parts of a request that you have. What should it fill in? [00:25:00] When should it make decisions? And when should it require the user to actually put that input?

Because actually, it's meaningful to the user to actually make a requirement there or suggest something and it's actually a little bit similar to that, but rather than thinking about the how it's creating a file or an artifact, it's rather doing it one step higher. So maybe how I should deploy this or what decisions I should make about which vendors to use to deploy this, how I make a decision as to what library I want to choose.

Maybe I care about it. Maybe I don't care about it, but it's all about that user then providing what they need to do and then allowing the AI to fill in the gaps. It's a very similar kind of style problem, but one level higher,

**Guy Podjarny:** yeah, I agree. And I think there's almost like two versions of it as well, like in most kind of, aggregations of content.

One is AI uses the same signals that humans do. And so the question is really what's in its training data. Is your documentation indexed by the LLM so it would know about it? Is your project, is the proof of data of usage or usefulness there? Does the [00:26:00] LLM have what it needs to have to be able to successfully use your platform?

And what happens when you have new versions and all of that. So it's interesting. Those are similar to what you would want for human developers. Like with Google, you just want it indexed. And the other question is, are people going to start gaming it? If you're a new, whatever, like deployment platform and you want these apps to deploy with you, is there a way for you to provide information and slightly over, like with SEO, over rotate them?

To using them, would they create, content farms, would the LLMs needs to get smarter, to avoid reducing that all the way to maybe even thinking about intentional. Is there a nonhuman specification, a structure, like a different way to inform LLMs about, Hey, if we're going to use.

I was going to stick to Netlify over here, right? If you're going to use Netlify, here's a manual. So I think it's interesting. It even came up in the conversation with Notion when we talk about how do you aggregate data? They aggregate data from different places, not just for code, right?

Aggregate data from different places. They choose some authority. What [00:27:00] signals would over time content include in it to think about AI as a consumer? But it's also very applicable in AI dev.

**Simon Maple:** Yeah, I really like that session actually with Simon. This is the Notion CEO and co founder. Yeah.

Super smart guy. His jumper. It didn't look itchy, though. I must say, for those who haven't seen the video. I think

**Guy Podjarny:** He's a very technical guy. And he I think he is much more about the substance. He's not a marketing personality.

**Simon Maple:** See, that's what I see. I look at that and I think, oh, that looks, it looks comfy, but itchy.

Yeah. Now here, there are a few really insightful things actually from that session. So let's talk about fine tuning, because this is something that people have invested heavily in, sometimes built their companies around. And it sounds like potentially from what Simon was saying, maybe over invested in because Simon was questioning the point of fine tuning an LLM based on the fact that actually give it another six months or whatever, and a model's gonna come out that will actually debunk your fine tuning on a previous version or actually go [00:28:00] beyond what it can do today anyway.

**Guy Podjarny:** Yeah.

**Simon Maple:** Tell us a little bit about fine tuning first. What's the difference between fine tuning?

Yeah, and

**Guy Podjarny:** I think that it's a slightly, I guess a ill defined term, or maybe it is defined well in terms of the science of it, but people use it for different purposes. So first of all, like within systems, there's a lot of like within models, there's a lot of conversations about pre training and post training.

Pre training is just gathering the data, just, there's lots to it, right? But a lot of the data that got gathered in and how do you understand, convert it to all these billion parameters and assemble attention to it, et cetera. Inference is when the actual model runs to execute it. Post training of the models before is really trying to make sense of that knowledge and convert it into behavior of how you want it to behave. Chat GPT required post training and over time reinforcement learning to be able to learn which answers are correct or not. So a lot of those things happen within the models and post training is a domain in which generally the perception is there's going to be more and more opportunity in it.

In fact, many people feel that pre training at the moment is actually a little bit less differentiated. [00:29:00] Slightly the extra promise over there is not that grand. Maybe just like the scale of data and the prestige, the opportunities to innovate and to really do something substantial is in post training.

Then reasoning is like a whole new field. So this is the set up I think within the that domain, I don't think Simon was really challenging whether in models post training will be a challenge within that some of these platforms, OpenAI and others, they allow you to fine tune, which is like a version on top of that, that eventually also changes the weights in some fashion, but you provide oftentimes what it translates to is you give it a bunch of Correct and incorrect answers.

Hey, here's a, an example of a good case. Here's an example of a bad answer and you can give it a large volume of that. And when you do that generally the computation, right? Like the inference, when you then run a question that can make an LLM call, then you would sometimes take longer and you will usually pay more because you've asked the platform to do more for you.

And I think what he's been describing is that in concept, it's very [00:30:00] promising. So if you manage to curate a bunch of good examples or bad examples of it to train the system, can you use that and then have the system magically produce the right answer? Calibrate the neurons so that it can produce the right answer.

And he was pointing out that it's just one is it doesn't work as well, but also it's really hard to work with it to debug. Like you came along, you gave it a bunch of information and it's now producing an answer. It produced the incorrect answer. What do you do now? How do you handle that?

Or, you add in another example to your test case and suddenly the answers are different. How do you do now? And and so it's just impractical to use it. He hasn't found it as. valuable. And he went as far as saying that he finds fine tuning to almost be a a negative indicator when he talks to startups.

If they say, hey, the reason we're going to succeed is because we're going to fine tune. It implies some cluelessness I don't know that I have a very firm opinion on whether fine tuning is or isn't successful. I feel [00:31:00] like I'm deferring here to real world experience from him.

I totally relates to the difficulty to debug and maybe I should contrast the alternative to that is if you have a bunch of those good examples, you can put a few of them in a few short prompt or into the system prompt and things like that. And you can use others in an agentic process that looks at the result and says is this correct?

Is this not? Can I change it? And if you do those, if you contrast the debugging capability, there is much more that you can do here. You've identified a case, that here you've identified this new problem case, like you just have more control. You're relying less on magic, not that dissimilar to the composability comments that I meant before.

Yeah. So it's interesting. What was beautiful about the insights from Simon was that they're based in real world reality versus the promise, the sort of the marketing and he's massively excited by AI. So don't take any of this as, he's skeptical about it.

Yeah. It's quite the opposite. Yeah. But it's just pointing out what is real and what is not. And yeah, he was saying, if you don't find tune, if you use the models as they are. It's easier for you to bounce around between models as they hold.

**Simon Maple:** Yeah. And of course, [00:32:00] Notion AI is probably one of the best examples out there about the power of depth of AI.

Yeah. In a production style. Yeah. Application.

**Guy Podjarny:** There's a backstory of it as well that that he didn't share on the podcast, which is a Notion has made an attempt to really deeply fine tune a model. I think it was based on Anthropic, I'm not sure. And really invested a lot of time in it and really also didn't see sufficient results. And so they ended up using the core models and the core models as I understand it, this is a little bit of hearsay. Take it with a grain of salt, but over time the models just at the same time and that they were putting in that effort to find you and then you want to come along and it was just like 80 percent of that has already been achieved.

And so I think part of it is also just based on that.

**Simon Maple:** Yeah. And what's the opportunity cost that you could have had all of this development team doing? Exactly. Yeah. That would continue.

**Guy Podjarny:** And if you contrast that to like a critic agent model in which, okay, maybe what happens now is you put that critic, it says did you give the wrong answer?

And maybe now you see that the wrong answer is not given. It gets the right answers. So you have visibility to that over time. Maybe you even remove that critic because you don't need that [00:33:00] anymore. Yeah. So it's interesting.

**Simon Maple:** Couple of other very interesting things that Simon mentioned or myth busted about big context windows.

Number one. And we actually had a really good ,Guy Eisenkot had a really interesting session about context and ordering another one to go and really good talk. Yeah. But yeah, talking about attention or limited attention, causing problems with that. And I know that was one that caught your eyes.

**Guy Podjarny:** Yeah. And that was I think it's a little bit straightforward. I just mostly loved his phrasing. So we talked about his system prompts and asked him like how big is your system prompt? It's pretty big. It's getting pretty big. It's big enough that it doubts the value of adding another line.

Like they came across another problem. Can they now introduce that into the system prompt? It might not notice it. And I think we're just in the first year plus of the conversation, all the conversation was around context window. And his point, which I agree with, is we're past that, now the context windows are pretty big.

But what that hides is the fact that attention is still limited. And so you can tell it a million things, but it can't [00:34:00] pay attention to all the million. And so you think you informed the LLM about whatever, some new instruction, or, new edge case of if this, then that, or behave like this.

And and in practice, attention, is focusing, simply put, focusing, within everything you told it about what matters when a prompt comes in is actually a lot more limited. So that's the bottleneck today. So I would say that is very aligned with both what we've been experiencing, but also with what I've been hearing from others and the best practice at the moment seems to be.

Keep the instructions at the top, keep them contained and past a certain magical point, it really becomes just data. So things that are explicitly looked up and so you can provide like limited text and then, whatever big files and big like volumes of data, maybe like code bases and things like that.

And it can handle those reasonably well, but the instruction, the attention to instructions, it's much more limited at the top and you need to be careful in managing them. And be based on evaluations, which we know are finicky to figure out [00:35:00] whether a new instruction did or didn't make an impact versus faith in the attention.

**Simon Maple:** That's really interesting actually because what makes me think of a blog that is actually going to be going out hopefully next week may see one of our community engineers has been investigating and writing a little bit about how constraints in a prompt, depending on where you position them in a prompt can have an impact.

And it's amazing. I think I still think this, prompt engineering and things like that, and the amount of context to how that's delivered prompts and context are two of the easiest things, the cheapest things that we can actually change. But it's still such a art that people aren't as familiar with.

Yeah, I think there's a ton of tips like this and we should be, writing more about this in the community about how everyone can level up with just a few simple tricks or a few little things that people should be doing. And Macy's going to be talking a little bit about how you move that constraint to the top.

It's actually going to respect that far more than if you add it to the end because the engines already done a lot of work before looking at that constraint and adding that constraint into

**Guy Podjarny:** a ton of tips like that.

Yeah, very good. And as Caleb Sima actually in an [00:36:00] earlier episode said, the data plane and the control plane here are mixed together.

And so if this was an ordered list and structured format feel very natural. like hey, earlier instructions get more weight loose like the data and the instructions are all mixed together then its so easy to mix them and put an instruction and send them some instructions and some data

and that's just not the right way to address an LLM and the lines are blurry and they also change from model to model and from model version to model version on it.

So part of what makes building these applications challenging,

**Simon Maple:** . And the third one before you almost wrap up there, the third one. His insight was a deep ML background could actually be a negative trait for an AI engineer. 1st question guy. What's an AI engineer?

**Guy Podjarny:** First question I think maybe it's like my paraphrasing a bit , I'm not sure if Simon said the AI engineer title. This was very much in context on it.

And [00:37:00] again, it loved the spicy thought when I asked him? is, hey, as you hiring people into your sort of specialized team that is building AI products, which was what I called an AI engineer, people didn't know how to build on top of the LLMs which we talked, by the way, a lot about how that is complicated.

So if you hire into that team, how important do you think machine learning background is? And he said like most of the team doesn't really have an ML background. Some people have and that the problem is that LLMs are actually reasonably different to traditional ML. And so if people come with enough ML background, pre LLM ML background, which is, LLMs are like a couple of years for practically everybody in the world, maybe there's a few that have another year opportunity if they were inside, then sometimes you actually have dispositions that you need to unlearn. You have biases and methods and sometimes you might be slow because you're not used to having this monstrous capability within your easy access right at your fingertips.

And as a result of that, it feels like it actually [00:38:00] holds them back. And eventually he described the best people he has as people that are just very good at iterating, very good at moving fast. Which I found once again to be a very DevOps y type principle, reminded me of what Armon from HashiCorp has said in the previous episode around how the conclusion we got to there, which is if you're actually the best at DevOps, at continuous deployment, at instrumentations and all that, Then you're actually able to best use that information to train your system and evolve it.

So it does feel like this ability to be agile, to be dynamic is very aligned to how you build with LLMs to be really purposeful about what it is that you want to build, but to be ready and willing and able to roll with the punches and to adjust when things don't work. And do those because the products are so unpredictable.

Yeah. Yeah. Which is almost counter to how you would run research.

**Simon Maple:** Yeah. Which is interesting. Almost having that, like relying on that mechanical sympathy enough of what's under the covers and how things should work. Yes. That'll get you so far, but it's that level of iteration on top of that

**Guy Podjarny:** Research is [00:39:00] research as a field as a whole, and of course, I'm generalizing here, is about slightly like more thorough thesis and some sort of a hypothesis. And then how are you going to go about doing it and trying to be thoroughly assessing, where does it go?

And some fields, of course, in LLM field world very much require that to be able to evolve the models. But when you're building on top of it. It's an interesting thought that I think has good points, which is you actually have to be careful to say, are you too fond of or like too used to these kind of ML methodologies that are more methodical that actually will get in the way of your agility and maybe acceptance of some of the new power, if you will, of the LLM.

We're sometimes needs slightly less scientific methods to make the most of. So I thought it was really interesting at this point. We touched a few topics here, but you really should listen to the episode and hear it straight from the source on it. It has slightly like a lot more to say and like different ways to phrase it, which are far better than my butchering them here.

**Simon Maple:** All of those episodes on [00:40:00] apple podcast or on spotify, et cetera. Feel free to have a listen subscribe. On wherever you are, and you'll be first to hear about all our new episodes going forward as well. That pretty much wraps up this live episode Guypo. Yeah,

**Guy Podjarny:** I think, thanks for tuning in.

And I will maybe make one more mention, which is we have a lot actually all of the talks from AI Native DevCon on our YouTube. That's right.

**Simon Maple:** Yeah. Everything's on the Tesla YouTube. Yeah, go to YouTube, do a search for Tessl and you'll see all the talks within their own playlist.

You can have a look through there as well as all of our podcast episodes as well. So there's two different playlists that you can choose from.

**Guy Podjarny:** So check them out. A lot of information produced by. Smart people. And then also us yeah. Not all of them are winners. Some of them have to have to be better than others.

I wanted to do something to help the others shine, but you can catch all those talks on the YouTube channel and a lot of great tools and really smart perspectives on where the future of AI dev and AI native dev is going. So check them out.

**Simon Maple:** So happy Thanksgiving again to those in the U. S.

**Guy Podjarny:** Happy Thanksgiving.

**Simon Maple:** And thanks all [00:41:00] for tuning in and we'll see you on the next episode.

**Guy Podjarny:** Indeed. See you then. Bye.

**Simon Maple:** Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.

Podcast theme music by Transistor.fm. Learn how to start a podcast here.