The Evolution of v0 and Vercel's AI SDK, with Malte Ubl, Vercel CTO

In this episode, Malte Ubl, CTO of Vercel, shares groundbreaking insights into AI's role in revolutionizing web development. Discover how Vercel's innovative tools are shaping the future of AI applications and empowering developers worldwide.

Episode Description

Join us for an engaging conversation with Malte Ubl, CTO of Vercel, as he delves into the transformative impact of AI on web development. Hosted by Dion Almaer and Simon Maple from Tessl, this episode of the AI Native Dev Podcast explores how AI is reshaping the landscape for developers. Malte shares his journey from Google to Vercel and discusses the evolution of Vercel's AI SDK and v0 tool, along with the role of eval driven development in accelerating AI application refinement. Gain insights into the commoditization of AI models, the strategic use of proprietary data, and how full-stack development is being redefined by AI advancements.

Chapters

  1. [00:00:00] Introduction and Malte Ubl's Journey
  2. [00:01:00] The AI Revolution at Vercel
  3. [00:03:00] Unpacking Vercel's AI SDK
  4. [00:05:00] Generative UI and Integrating AI with Developer Tools
  5. [00:07:00] The Evolution of v0 and Real-Time Previews
  6. [00:09:00] The Role of Eval Driven Development
  7. [00:11:00] Future of Full-Stack Development with AI
  8. [00:13:00] AI's Impact on the Open Web
  9. [00:15:00] Malte's Vision for AI in Development
  10. [00:17:00] Conclusion and Future Outlook

The AI Revolution at Vercel

Malte Ubl's journey with AI began during his tenure at Google, where he was exposed to the company's AI-first approach. Joining Vercel in early 2022, he was positioned at the forefront of AI development. As AI models became more commoditized, Vercel recognized the potential disruption and sought to position itself as a leader in AI applications. Malte described how Vercel's transition to a net CPU-based function execution infrastructure provided a competitive edge when AI API calls surged in demand.

This infrastructure adaptation was not just about handling increased demand but also about optimizing performance and cost efficiency. By leveraging net CPU-based execution, Vercel could offer faster response times and reduce the operational costs associated with AI processing. This strategic move positioned them ahead of competitors who were slower to adapt to the burgeoning AI demands. Malte's experience at Google had primed him to recognize the signs of AI's imminent rise, allowing Vercel to capitalize on this technological wave effectively.

Vercel's AI Developer Tools

Vercel's AI SDK emerged as a pivotal tool, enabling developers to seamlessly switch between AI models like OpenAI and Anthropic. Malte described it as "really just a little bit like JDBC in the nineties," an abstraction layer that simplifies model switching with minimal disruption. The SDK also handles code streaming and model harmonization, allowing developers to focus on building innovative applications without getting bogged down by API complexities.

The AI SDK's design philosophy revolves around simplicity and flexibility. By abstracting the complexities of interacting with various AI models, it empowers developers to integrate AI capabilities into their applications with ease. This abstraction layer is crucial for maintaining a consistent development experience, even as underlying AI technologies continue to evolve rapidly. Developers can thus experiment with different AI models, optimizing their applications based on performance and cost without the overhead of significant codebase changes.
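To make the abstraction-layer idea concrete, here is a minimal TypeScript sketch of the pattern, not the actual AI SDK API: the interface and provider names are invented for illustration, and the "providers" simply echo their input instead of making real API calls.

```typescript
// Hypothetical sketch of a provider-agnostic model interface.
// A real SDK adapter would call the provider's HTTP API and stream
// tokens; these stubs just echo so the example is self-contained.

interface ChatModel {
  name: string;
  generate(prompt: string): Promise<string>;
}

const openAiModel: ChatModel = {
  name: "openai:gpt-4o",
  generate: async (prompt) => `[openai] ${prompt}`,
};

const anthropicModel: ChatModel = {
  name: "anthropic:claude-3-5-sonnet",
  generate: async (prompt) => `[anthropic] ${prompt}`,
};

// Application code depends only on the interface, so swapping providers
// is a one-argument change: the "JDBC in the nineties" idea.
async function answer(model: ChatModel, question: string): Promise<string> {
  return model.generate(question);
}
```

Switching from `answer(openAiModel, q)` to `answer(anthropicModel, q)` leaves the rest of the application untouched, which is the disruption-free model switching described above.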

Generative UI and the AI SDK

One of the standout features of Vercel's AI SDK is its capability to connect developer UI with AI models. Malte emphasized the importance of not just exchanging text with AI but enabling interactive components like React to communicate with LLMs. This approach paves the way for more dynamic and responsive applications, bridging the gap between user interactions and AI-driven responses.

This innovation in UI design represents a significant step forward in making AI more accessible and functional within web applications. By allowing components to interact with AI models in real-time, developers can create experiences that are not only more engaging but also more intuitive for users. This interaction model supports a wide range of applications, from intelligent chatbots to complex data visualization tools, enabling developers to push the boundaries of what's possible with AI-enhanced interfaces.
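One way to picture the component-to-LLM bridge is a small marshaling function that turns UI events into plain text the model can read. This is an illustrative sketch under assumed names (`UiEvent`, `describeUiEvent`), not AI SDK code; a real app would append the resulting sentence to the model's message history.

```typescript
// Hypothetical UI events that interactive components might emit.
type UiEvent =
  | { kind: "seat-selected"; seat: string }
  | { kind: "flight-chosen"; flightNumber: string };

// Translate a UI interaction into text for the LLM, since the model
// cannot see what the user clicked on.
function describeUiEvent(event: UiEvent): string {
  switch (event.kind) {
    case "seat-selected":
      return `The user selected seat ${event.seat}.`;
    case "flight-chosen":
      return `The user chose flight ${event.flightNumber}.`;
  }
}
```

In the seat-map example from the episode, picking a seat would emit something like `{ kind: "seat-selected", seat: "17C" }`, and the generated sentence is what keeps the LLM's view of the conversation in sync with the UI.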

The Evolution of v0

Vercel's v0 tool, initially conceived as a chat-based interface for generating web pages, went through several iterations to improve the user experience. Malte recounted the journey from a simple chatbot to a visually focused tool, and ultimately back to a chat interface. The latest release allows developers to build full applications, offering real-time previews and seamless integration with Vercel environments.

The evolution of v0 highlights the iterative nature of software development, especially in the fast-paced AI domain. Each iteration brought new insights into user needs and technological capabilities, refining the tool into a robust platform for web development. The ability to provide real-time previews is particularly beneficial, as it allows developers to see the immediate impact of their changes, reducing the feedback loop and accelerating the development process. This feature set positions v0 as an indispensable tool for developers aiming to leverage AI in their projects.

The Role of Eval Driven Development

As developers navigate the AI landscape, eval driven development becomes crucial. Drawing parallels to Google's search result evaluations, Malte highlighted how LLMs can automate grading processes, akin to unit tests. This approach accelerates development cycles, providing valuable feedback for refining AI applications.

Eval driven development represents a paradigm shift in how developers approach testing and validation in AI applications. By leveraging AI to evaluate AI, developers can achieve a higher degree of automation and accuracy in their testing processes. This method not only speeds up development but also enhances the reliability of AI applications by ensuring that they meet predefined benchmarks before deployment. As AI applications become more complex, eval driven development will likely become a standard practice, offering a scalable solution to testing challenges.
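The grading loop can be sketched as a tiny eval harness. This is a hedged illustration, not the API of any real eval product: the grader below is a simple string check standing in for what would, in practice, be a second LLM call that scores the first model's answer.

```typescript
interface EvalCase {
  prompt: string;
  expected: string; // reference answer the grader compares against
}

// A grader returns a score in [0, 1]. In a real system this would
// itself be an LLM judging the answer; here it is a stub.
type Grader = (expected: string, actual: string) => number;

const containsGrader: Grader = (expected, actual) =>
  actual.toLowerCase().includes(expected.toLowerCase()) ? 1 : 0;

// Run every case through the model under test and average the scores:
// effectively a slow, probabilistic unit-test suite.
async function runEvals(
  model: (prompt: string) => Promise<string>,
  cases: EvalCase[],
  grade: Grader,
): Promise<number> {
  let total = 0;
  for (const c of cases) {
    total += grade(c.expected, await model(c.prompt));
  }
  return total / cases.length;
}
```

Tracking this average score across prompt or model changes gives the feedback loop described above, with the caveat that, unlike unit tests, both the model and a real LLM grader are probabilistic.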

The Future of Full-Stack Development

Malte envisions a future where the concept of a full-stack developer becomes more tangible, aided by AI's ability to enhance skills across domains. Tools like v0 empower developers to venture into unfamiliar territories, enabling them to create sophisticated applications with ease. Malte noted, "This is super powerful for everyone to go into domains that they don't currently feel comfortable with."

This vision of full-stack development, augmented by AI, suggests a democratization of skills, where developers can cross traditional boundaries with the assistance of intelligent tools. AI's ability to provide context-aware guidance and automate repetitive tasks means that developers can spend more time on creative and strategic aspects of their work. This shift not only enhances productivity but also broadens the scope of what individual developers can achieve, fostering innovation and diversity in the types of applications being developed.

AI's Impact on the Open Web

Reflecting on AI's influence on the web, Malte observed how the startup boom in San Francisco underscored the web's agility. The rapid iteration and deployment capabilities of web platforms provide a compelling advantage over native apps, reinforcing the web's role as a primary medium for AI-driven innovation.

The web's inherent flexibility and accessibility make it an ideal platform for experimenting with new AI technologies. Unlike native applications, which often require lengthy development cycles and platform-specific considerations, web applications can be updated and iterated upon quickly. This agility is crucial in the fast-moving AI landscape, where staying ahead of the curve often means being able to pivot and adapt to new developments rapidly. As AI continues to evolve, the web is poised to remain a key player in delivering AI applications to a global audience.

Full Script

**Malte Ubl:** [00:00:00] Yeah. So I think one thing that's definitely going to happen is that this notion of full stack developer is going to become more true, because even if we don't look into the future, right now the models are all commoditized; Google, Anthropic, they all have essentially equally good models. So they have that backdrop of commoditized models.

They're going to get cheaper to invoke and in order to make products good on them, they actually need access to more data. And my impression is that people underestimate how much proprietary data they have access to.

**Simon Maple:** You're listening to the AI Native Dev, brought to you by Tessl.

Hi there and welcome to another episode of the AI Native Developer. On today's episode we're actually going to be showing you [00:01:00] an episode that was from the AI Native DevCon. And this is actually one of the keynotes from Malte Ubl, who is the CTO of Vercel and Malte is going to be talking with Dion Almaer, who is the field CTO of Tessl.

And when this session came on at the conference, I saw a ping in one of our chats from a guy I massively respect, who said: this guy speaking, Malte, is perhaps the single best web engineer on the planet. And when someone says that, I listen. In this episode, Malte will be talking about Vercel's place in the disruptive world of LLMs, and he talks about how, when we play with ChatGPT, it's great at generating text, but it's not necessarily good at creating web pages.

So one of Vercel's challenges was really about creating that user experience to build web pages. And that's a hard part, not just the piece that allows the LLM to actually create websites that make sense.

But what does that UI look like [00:02:00] for a developer trying to actually build that? Is it chat? Is it something different? And Malte will talk about the AI SDK which Vercel created, and there are a couple of things that are mentioned. One is the discussion about creating that abstraction layer for, effectively, the APIs of existing models.

If you're using OpenAI or if you're using Anthropic, and you want to switch between the two, you want to do that with the least disruption to your code and your interactions as possible. So there's a number of things which it does, including code streaming, things that people tend not to want to write by themselves.

And that allows for that model switching to be, as I mentioned, far less painful. One of the examples that was given is it's like JDBC in the nineties. And the second thing which Malte mentions as part of this AI SDK is how you can connect a developer UI with the LLM in the background, and this was really interesting, whereby we don't just want to pass text to the AI, to the LLM, [00:03:00] and get text back.

We might want to interact in different ways. For example, if I ask a question, I might want a React component back as an answer. Or, for example, if I interact with a React component on a web page, I want that interaction to be represented and sent to the LLM, depending on what I do. This is a really great session with Malte and I hope you enjoy it.

So over to Dion and Malte.

**Dion Almaer:** Thanks so much for taking the time to chat with us a little bit about AI development. I know that Vercel has gone through and has built some really cool AI developer tools and you've gone through some interesting changes there. So excited to learn a little bit about that with you.

So to start with, if we could go back in time a little bit before we go to where we are now and into the future, I was curious about when the birth of this form of the AI revolution kind of kicked in for you, like when did it first hit you and were you skeptical at first? And was there an aha moment where you're like, Oh, maybe there's actually something to this LLM thing.

**Malte Ubl:** So I joined Vercel, [00:04:00] it's now almost been three years; it was early 2022. And so it hadn't happened yet, but I also came from Google, where I think Sundar declared the company AI-first maybe 2018. And I had spent a lot of time with this stuff. There were some internal things going around where people got really confused about.

You know, how good these AIs were getting, and so I got primed on it. But then I think I went through the same transformation as everyone else, where you try ChatGPT and you realize: holy crap, this is it. This is going to happen.

**Dion Almaer:** And then when you're at Vercel and then that's all kicking in, I assume that the kind of, the company gathered together a little bit to see what does this mean to us? What was that like? What came out of that?

**Malte Ubl:** We've been going through it in many dimensions. On the first thing we really got lucky; actually, that was my first project getting to market. It had been underway, but we changed our function execution infrastructure to be built on a net CPU basis.

And that was done for all kinds of reasons. But when one-minute AI API calls came out, it was an [00:05:00] incredible competitive advantage to have this product in our hands, which was just really good at this. And that's where we just got lucky, because it was literally done the moment people suddenly wanted to do this. But then, obviously, we had to think about: there's a disruption going on.

I think we are in the broader developer experience space, which, fair to say, is the early adopter for more advanced AI transformation. And so we absolutely were thinking about, what's our place in this world? And I'm so happy with our answer, because it feels super native to what we do.

We're a framework company. We make Next.js, we support Svelte and many other frameworks on the JavaScript side. And so it was obvious that we should make a framework for building AI applications. So that's the AI SDK, which has been incredibly successful in the TypeScript community for building AI applications.

And our angle there was to say, okay, there is some stuff out there from other companies, but it feels really frameworky and advanced, and nobody knows what applications people want to build. So you can't build a framework yet; it's too early. So with the AI SDK, it's low-level utilities that you're going to need, but it [00:06:00] doesn't prescribe anything.

So that was our angle number one. And the angle number two was v0, which is our kind of tool for getting people started on the front end development and getting projects kicked off and having an expert system for the frameworks that we support. And so again, like that's a relatively obvious thing that I think makes sense for us to do.

And thankfully it's very successful.

**Dion Almaer:** Got it. Awesome. Yeah. Can you explain a little bit more what v0 actually is and how it really came about and grew and got good and

**Malte Ubl:** it actually has gone through a bunch of iterations. But the initial one was, I think, like everyone who was thinking about the space:

You had ChatGPT. ChatGPT was an LLM that's good at text, and HTML is just text. So everyone was like, obviously you need these guys to go and make webpages. But the thing was, if you tried a year and a half ago, it would not do a good job. It would make something crappy that would look ugly and didn't work.

And that was like one of the craziest insights. So we had a few folks in San Francisco, we're a pretty remote company, [00:07:00] but everyone was in the same room. We were hacking a little bit, and someone literally raised their arm and was like, hey, I got it to work. It's good. I have the AI, it makes webpages, and it looks amazing.

So that was a good moment. But then I realized that apparently that was the easy part. Like getting a user experience together that would actually make sense to developers took us like actually quite a while, quite a bit of iteration. Cause we started with just hacking a chat bot to be in this form.

And then we actually came out shipping something that looked quite different; it was very visually focused. But I will also admit that a year later, roughly, we moved back to more of a chat interface. So we're still figuring out what the right user experience is. But that was our journey. Just literally yesterday, we shipped a pretty major release that changes v0 to be able to build full applications, with real-time preview while you iterate on it in the chat interface, and then the ability to deploy to production, as well as access to your Vercel environment variables so it can just query your database.

[00:08:00] All these things, like it's pretty, pretty exciting stuff.

**Dion Almaer:** That's awesome. That initial kind of "it's actually working" moment, was that based on new models, new prompting? What enabled that initial "I think this is actually maybe going to work"?

**Malte Ubl:** So I think the insight that we had, when we discovered, literally, because it was in the AI, right?

The insight was, and it's such a crazy coincidence that it worked, that AIs are good at Tailwind, which is this new CSS framework, a way of writing CSS where every element has classes associated and you just write it out. It's basically inline CSS, in a slightly more sustainable way than inline CSS.

So this was at a time where OpenAI was leading the pack, but they had a model, for the longest time, whose precise cutoff date, you probably remember, was December 2021 or something like that. And so the coincidence is that Tailwind at that moment exists and it's popular, but it's not like today. Today it's super popular, but back then it's already popular. So there's enough training data [00:09:00] in the model that it actually is good at this. And the insight is that the models of that time, at least, are really bad at saying, okay, I'm going to write some CSS.

And then I'm going to write the right HTML, or the other way around; it wouldn't work. But it could do it really well in one go, because it's all now one bundle. And so that was the step function: while the split generation was bad, the all-in-one generation inside of HTML was really good.

And so that was the unlock that went from this is crap to, Oh no, this is actually going to be a thing.

**Dion Almaer:** That's so interesting, because I've often wondered about what the sweet spot of data is for some of these things: if you have something that's really popular, but it's popular with the masses so much that the average of it is kind of meh; versus something that's not quite as popular, but the code that has been written with it has been written by the early adopters, maybe more expert coders, so maybe it can be really good; versus something brand new, where there's just nothing in the training data.

Are we going to have kingmakers [00:10:00] happening? I wonder how it's going to evolve from here.

**Malte Ubl:** There is a possible future where there will be less innovation, because the AI just has picked its thing, and it's not going to learn anything new if people stop writing legit code by themselves. That's a thing that could actually happen.

**Dion Almaer:** You spoke about the UX choices. I remember when I first used v0 from a prompt, I just got multiple versions to start from instead of just having the one. And I thought that was interesting from a standpoint of both, LLMs are really good at giving you multiple options and there's variants baked in and everything else.

And also the PTSD that I have, that maybe you have a little bit of going from the 10 blue links of Google, where it can hide so many sins, where if a really good result is the second one, that's a great result for the human. Whereas if you have a Google assistant or Alexa, where it like comes back with its one result and it's not perfect, you lose trust.

Was that something that was also thought of early on? And what are the UX choices that you've [00:11:00] made that work well within this probabilistic world that we have with LLMs?

**Malte Ubl:** Yeah, I feel like lots of this actually relates to the quality of the LLMs, how good people are at prompting, and the introduction of multimodal models.

They all changed something about this. So when v0 came out, multimodal models weren't really a thing. And so you couldn't upload a design. You couldn't say, hey, make me this, but get the vibe of this, which is a very good prompting technique for v0 these days, but that wasn't around.

And people would just say, I want a dashboard for X. And if you do that, you want to add a little bit of creativity, but you don't know what they want. So then it really helps to make multiple versions and hope that one of them is going to be good.

**Dion Almaer:** Yeah. The notion of like elaboration, I think is fascinating.

And even subtle things. I was surprised, in a tool that I built before, where I assumed that developers had been using ChatGPT-type things and would start just chatting away. And a lot of them would just do new chat [00:12:00] sessions, zero-shot from scratch. And so adding a question to the bottom of every chat, asking for more, taught them how to do it and allowed them to elaborate.

And so it feels like a fun stage again, where we're relearning or learning new things to be able to make this stuff actually work.

**Malte Ubl:** Yeah, I think everything is moving so fast, in ways that really change what you can and need to build on the product side, because there's a lot of competition and people will immediately eclipse you if you don't actually pick up the latest tech.

**Dion Almaer:** Yeah, that's awesome. Okay, so back to the AI SDK, can you go a little bit deeper on what it is? And I know it also had a recent, I think, 4.0 release, as you're constantly shipping as per usual.

**Malte Ubl:** Yeah, the AI SDK, I would say, does two things at a high level. One is really just an abstraction layer for the various model APIs out there, with a focus on getting the streaming code, which nobody wants to write themselves, to be just done.

And we've been iterating, and it has lots of [00:13:00] JSON-forcing related features, lots of tool-calling related features, and lots of harmonization between models. Because, for example, one model would maybe ship native support for something and another model doesn't have it. But if you use the AI SDK, you can still switch between them, because the API is exposed through the SDK.

It's the same for both, even if the underlying implementation is quite different. So that's the one thing, which is basically just utilities for connecting to LLMs and abstracting that just a little bit, really just a little bit, like JDBC in the nineties for databases. It's not an ORM, right, at this layer.

It's really just saying there's a way to talk to the database; it doesn't matter whether you talk to Anthropic or OpenAI, both are exactly the same. And so that's that thing. The other side that we are heavily investing in is this notion of connecting UI and AI, and, maybe sometimes confusingly, we call it generative UI. And maybe if you think about what v0 does, which is very specifically generative UI, that's not what we're talking about here. The example app, for [00:14:00] example, that we use is one where, let's say, you ask the airline's AI that you want to change your seat, and then it responds with a seat map, and then you pick a seat, and then the AI knows what you just did.

And so this whole cycle of having UI state, having AI state, having LLMs that don't really know what this UI is: that marshaling between those things is something that we invest in. It's the second big layer in the AI SDK.

**Dion Almaer:** Got it. Makes sense. And so does that tie into the tool use and function calling side of things to be able to integrate these things together.

**Malte Ubl:** Yeah, exactly. So there are several ways of doing it, but one way is that if you use React, where components are just functions, you can literally return React components from tool calls. The simplest example is you ask the AI, what's the weather?

And so the AI will invoke the get weather function, but that returns a React component to show you the sunshine or clouds or whatever. That's one way to [00:15:00] do it. And then for more complex cases, once you make a change, you write a little function that says, okay, if the user selected seat 17C, that literally translates into "the user selected seat 17C."

So the LLM knows what's going on because obviously it doesn't really understand what you clicked on the UI.

**Dion Almaer:** That makes sense. I'm thinking about the JDBC metaphor, or comparison, that you were talking about earlier. And that makes a ton of sense now with respect to: if you swap out the JDBC driver, you're still using SQL.

So you can't just point it at any old database and have it work. And the same thing: if you swap out the model between an OpenAI model and an Anthropic one, that doesn't necessarily mean you're going to get the same capabilities, or you might just get different results. How do you see people playing around with different models with the SDK?

**Malte Ubl:** Yeah. So I think our learnings are actually somewhat surprising to me, in that the JDBC analogy is perfect, because it's really [00:16:00] like that. The SQL is almost the same, right? It's not the same, but you can get very far, right, if you don't go into the darkest corners of the SQL language. What we found is you can switch the models.

That doesn't necessarily need to be true, right? I'm talking about real applications where people have spent months fine-tuning the prompt to do something specific, to handle all kinds of edge cases. And our experience is that, basically, I'm not saying it's trivial to change the model, but it is easier than at least I would have expected. You can talk about how to ideally do it.

Ideally, you have evals that tell you how well you do, and then you iterate on it. But basically even if you don't, if you're not so sophisticated, we just found that, while it seems daunting, apparently they all converged enough that it seems to be an option.

**Dion Almaer:** That's really interesting.

Cause yeah, if you spend so much time prompt engineering the hell out of something, you feel like you've got something that's working really [00:17:00] well. You feel like, oh man, if I switch from OpenAI over to Anthropic, all of that work could be wasted. But if you're saying that's not what you're seeing in practice, that's actually awesome, because it means if I'm using Sonnet 3.5 and I prefer the vibes in general, I can make that switch earlier and maybe not feel much of the pain. It also obviously ties into the eval side that you're talking about, where, you know, we learned or relearned the power of building automated tests in the pre-LLM world. And I know you've spoken and posted about the LLM-world version of this, which is eval driven development.

Can you talk about what that looks like and how to actually build in feedback loops and like into building real AI applications?

**Malte Ubl:** The interesting background is, because I used to work on search, we would have evals, right? That's literally how Google works, and they would be human-based. And one of the interesting takes is if you look at Google's OpEx.[00:18:00]

People obviously know about the data centers, although that's mostly CapEx, right? You have to pay for the power, basically. But, at least when I was there, an incredibly large amount of money was being spent on humans who would look at search results and rate them, by the hundreds and thousands of queries.

So it was straight side by side: rate which one's better, maybe add why, right? And then do it over and over again. That method more or less translates right to LLMs. I think the big difference is that now you can use LLMs to do the rating, and they are really good at it, because everyone knows this situation where the LLM does something, and then you ask it about it, and it goes, oh my God.

Yeah, now that you mention it, right? It's just possible, if you know the answer, to get an LLM to grade how well the other LLM did, right? And so you can automate this. It's basically like unit tests, but obviously, because LLMs are involved, it's slow.

So you do have to take a bit [00:19:00] of a step back. I feel that always happens when something truly new is going on. I don't want to talk too much about Google, but for the longest time, there was just no automated testing for any of the Android apps that the company was making. And these are billion-user products.

And so we're in the same stage, where that's just forgotten about. But no, actually, I think we're doing better, right? Everyone understands that evals are needed, but they are more difficult to implement than unit tests for Android apps. It's definitely a challenge. It's not fast.

We use Braintrust. We're a happy customer. But also, similar to the AI SDK not being ultra-opinionated, Braintrust is not ultra-opinionated either. It's not a fully fleshed-out SaaS product where they put you through the onboarding wizard and then that's it, right?

That's not yet how it works. You do have to like dig in there and then figure things out.

**Dion Almaer:** Okay. It makes sense. So if developers are thinking about what life is going to be like as an AI native dev, as we like to talk about it, we often talk about the notion, in general in the industry, of being a T-shaped developer, right?

Like, where are you [00:20:00] broad and where can you go deep? And I think I saw you mentioning a little bit about how you've used LLMs: you have a lot of deep expertise in areas, but it's giving you confidence even in areas that you haven't gone as deep into, and rounding yourself out as a developer.

What do you think life's going to be like in the future as a developer, as all of these tools and everything else gets better for us?

**Malte Ubl:** Yeah. So I think one thing that's definitely going to happen is that this notion of full stack developer is going to become more true. Because, you know, it's very similar to the T-shaped notion, right?

Because you're probably deep on something, and then the AI can help you go and figure things out in various different areas. I think v0 is actually a really good example there, literally for me, because honestly, I don't have the skill myself to take a Figma file and make a webpage. I don't know how to do that. I can tune CSS, but I cannot go from scratch. This is not a skill I have, and I've done this for 30 years. But v0 can, and then it gives me the thing, and it might not be perfect, [00:21:00] but I do have enough skill to tune it.

So I think this is just super powerful for everyone, to go into domains that they don't currently feel comfortable with. And I think that's almost unequivocally a good thing.

**Dion Almaer:** Yeah. It's been shocking sometimes to hear some developers say, "this is just like a junior developer." Okay, let's say it is only a junior developer, but it's a junior developer on everything, and it's not actually that junior.

**Malte Ubl:** Yeah, and it has perfect recall. One thing that isn't a thing today, but that I can see, is incident analysis being a perfect use case, where you answer the question: okay, I understand what's broken; why could it be broken? Almost certainly some problem like this has happened in the past, and humans aren't as good at pattern matching as this thing, which absolutely has amazing recall.

**Dion Almaer:** Yeah. Another thing that sometimes comes up is a fear developers may have around, how are we going to compete with the FAANG companies? People are spending billions of [00:22:00] dollars on foundation models; they have all of this AI expertise. Am I just going to be able to build some thin wrapper app?

And that's kind of it. What are your thoughts on how AI is going to help the little guy, so to speak?

**Malte Ubl:** I'm actually super bullish, and my bull case has two angles. Even without looking into the future, right now the models are all commoditized: Google, Anthropic, OpenAI, and Facebook all have essentially equally good models. So they're going to race each other to the bottom, making the product as cheap as possible, and so forth.

So developers have that backdrop of commoditized models that are going to get cheaper to invoke. And in order to build good products on top of them, they actually need access to more data, and my impression is that people underestimate how much proprietary data they have access to. One example I've been giving: it can be a very boring-sounding business. Say you make forklifts.

No, actually a better example: HVAC [00:23:00] systems, because that's a bit more mainstream; everyone needs one. And it has documentation, which right now you only give to the certified installers, because it's pretty difficult to consume. You don't even have to innovate on UX: you could build an AI chatbot that gives ELI5 answers based on the same documentation that you've already written and never published.

And so ChatGPT has not consumed it, and you can build a relatively differentiated product. I'm not saying you're going to compete with OpenAI with this, right? But you can actually add value on top, something a generic chatbot could never achieve without going into a very broad platform strategy, which I'm not saying they're not trying, but they certainly haven't been very successful on that angle.
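The documentation-chatbot idea boils down to retrieval over unpublished docs plus a grounded prompt. Below is a hedged TypeScript sketch with all names and data hypothetical; it uses naive keyword overlap to rank chunks, where a real system would use embeddings and then hand the prompt to a model (for example via the AI SDK).

```typescript
// Ground a chatbot in proprietary documentation: rank doc chunks by
// keyword overlap with the question, then build a prompt from the
// best matches so the model answers only from your docs.

type DocChunk = { id: string; text: string };

function tokenize(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

// Rank chunks by how many question words they share; keep the top K.
function retrieve(question: string, docs: DocChunk[], topK = 2): DocChunk[] {
  const q = tokenize(question);
  return docs
    .map((d) => {
      let overlap = 0;
      for (const t of tokenize(d.text)) if (q.has(t)) overlap++;
      return { d, overlap };
    })
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, topK)
    .filter((x) => x.overlap > 0)
    .map((x) => x.d);
}

// Stuff the retrieved chunks into a prompt for the model.
function buildPrompt(question: string, docs: DocChunk[]): string {
  const context = retrieve(question, docs)
    .map((d) => `[${d.id}] ${d.text}`)
    .join("\n");
  return `Answer simply, using only this documentation:\n${context}\n\nQ: ${question}`;
}

// Hypothetical unpublished manual chunks.
const manual: DocChunk[] = [
  { id: "hvac-1", text: "Reset the HVAC unit by holding the power button for ten seconds." },
  { id: "hvac-2", text: "Error code E4 means the condensate drain is blocked." },
  { id: "fork-1", text: "Forklift tilt cylinders require hydraulic fluid checks monthly." },
];

console.log(buildPrompt("What does error code E4 mean?", manual));
```

Because the context comes from documentation a public crawler has never seen, even this trivial pipeline produces answers a generic chatbot cannot, which is exactly the differentiation Malte describes.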

**Dion Almaer:** Yeah, that makes sense. Okay, I know we're coming up on time, but I have to ask, since we've both spent so much of our lives on the open web: how do you think AI is going to help or harm the web in general, and web developers specifically?

**Malte Ubl:** So one thing that I thought was very interesting is that when [00:24:00] the AI startup boom happened in San Francisco, was it a year and a half ago? Since then, there was such a rush, such excitement, such an attitude of getting things out there, that no one had time to build native apps.

This entire notion that you would make three versions of a thing you don't know is good yet, that you're going to throw away in two weeks because a new model is coming out? No one was even considering that. And I think now it's starting: people say, okay, I have my successful product, it has a consumer angle, let's go make a native app. That's coming. But it was such a powerful demonstration of how good the web is as a medium to iterate and get things out there that I think it really changed a lot of people's minds about what the real priorities are. I'm not an anti-native guy, but it's certainly the third thing I'm doing, after product-market fit and after things have settled down a little bit.

You could talk about many other things, but I think that's a good signal that there's just so much value in having an ephemeral, zero-friction platform. It allows everyone to get in front of [00:25:00] everyone else.

**Dion Almaer:** Awesome. Malte, thank you so much for taking the time to chat today.

I could talk to you for hours about this stuff. Really excited about what you're doing at Vercel and all of the tools that you're building to help us do all of these great things. And yeah, look forward to meeting up soon.

**Malte Ubl:** Awesome. Yeah, it was great to see you again and thanks for having me. This was super fun.

Thank you.

**Dion Almaer:** Sounds good. Cheers.

**Simon Maple:** Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.

Podcast theme music by Transistor.fm.