Rethinking Software Development: James Ward on AI's Role in Software Testing and Coding

In this episode, Simon Maple chats with James Ward, a developer advocate at AWS, about the transformative potential of AI in software development and testing. Discover how AI tools are changing the way developers work and what the future holds for coding and testing.

Episode Description

In this enlightening episode, Simon Maple is joined by James Ward, a developer advocate at AWS, to explore the evolving role of AI in the realm of software testing and development. James shares his journey and experiences with AWS, delves into a provocative tweet about AI's potential to write code, and discusses the current capabilities and limitations of AI tools like Amazon Q Developer. They also speculate on the future of development roles, the importance of comprehensive specifications and tests, and the symbiotic relationship between AI and developers. Whether you're a seasoned developer or just curious about the future of AI in coding, this episode offers valuable insights and thought-provoking discussions.

Chapters

  1. [00:00:17] Introduction - Simon Maple introduces James Ward and sets the stage for the discussion on AI in testing.
  2. [00:00:40] James Ward's Journey - James shares his background and experiences with AWS and his role as a developer advocate.
  3. [00:01:44] The Provocative Tweet - Discussion about James's tweet that sparked a lively debate on AI's potential to write code.
  4. [00:03:12] AI Tools in Development - Exploring current AI tools like Amazon Q Developer and their capabilities in assisting developers.
  5. [00:09:03] The Role of Specifications and Tests - Importance of comprehensive specifications and tests in an AI-driven future.
  6. [00:17:56] AI's Challenges and Limitations - Discussion on the current limitations of AI tools and the need for human oversight.
  7. [00:29:11] Co-Evolution of AI and Developers - Speculations on how developer roles might evolve with the advancement of AI.
  8. [00:36:16] Future of Development Roles - How AI can make software development more accessible and the importance of maintaining creative and problem-solving aspects.
  9. [00:37:05] Conclusion - Wrap-up and a teaser for the hands-on demo of Amazon Q Developer in the next episode.

Full Script

[00:00:17] Simon Maple: On today's episode, we are going to be talking about AI's role in testing, and joining me today is James Ward, developer advocate at AWS and previously at Google. And, James, I just realized I'm actually wearing my Java Champion shirt. James is also a Java Champion and a Testcontainers Champion, and you're currently a developer advocate at AWS. Tell us a little bit about that, James.

[00:00:40] James Ward: Yeah. Thanks, Simon, for having me. This is super fun. Yes, I'm pretty new into AWS. I've been using it for a very long time, because I used to have servers in a data center and would have to go in, in the middle of the night, and work on those servers. And as soon as AWS came out, I was like, oh, thank God.

[00:00:59] James Ward: So I started using AWS a long time ago, but yeah, I'm pretty new inside AWS itself and still learning a lot of things there. I'm a developer advocate, working on, I don't know, how do you help enterprise developers be more productive using AWS? And obviously, with my Java and Scala and Kotlin background, I still get to use those languages.

[00:01:20] James Ward: And, yeah, so I'm back to the cloud after a couple of years as a product manager on the Kotlin language at Google. So yeah, exciting.

[00:01:29] Simon Maple: Yeah. Welcome to the podcast, James. We've known each other for a fair amount of time from the Java space, so it's a pleasure to do this episode with you. Now, one of the things that caught my eye on, should we call it X? No, let's not, let's, on

[00:01:44] James Ward: I still call it Twitter.

[00:01:45] Simon Maple: Yeah, let's call it Twitter for

[00:01:47] James Ward: Just denial, denial of the inevitable future.

[00:01:51] Simon Maple: So one of the things that caught my eye on Twitter was a tweet of yours, and I'm going to read this tweet out: "If the AI can write the code iteratively until it passes my tests, why do I need the actual code? If it has bugs, write better tests. Future compilers will just have all this integrated. Humans will have to get good at writing the tests which convey the business and operating requirements." Now, James, I am well aware that Twitter is one of these places that has a lot of people trolling, a lot of people who have many different opinions.

[00:02:29] Simon Maple: Were you having fun, or serious, on all of this? Some of it gazing into the future, some of it poking fun at AI folks. Where do you sit on this?

[00:02:39] James Ward: Twitter's fun because you can have these conversations with people you don't know, and who knows where they're going to lead, and all the different perspectives that you see along the way. So yeah, it's fun to be able to throw out a crazy idea and then get some interesting responses.

[00:02:55] James Ward: And that's what happened with that tweet. I don't know, I've been using these AI tools more and trying to see how they fit into the work that I do and the code that I write. And I think we don't really know how these tools are going to be used in five, ten years.

[00:03:12] James Ward: And there's probably lots of different ways that they'll be used. But one of the things that I thought about is: the AIs are getting pretty good at writing code for us. If the code that it can write is just the right code, why do I even need to see the code? Why is that even part of my code base?

[00:03:30] James Ward: And so after tweeting that, this idea popped into my head. I'm like, I think I've heard about something like this before. And, if you'll remember, there were 4GL languages, and, I don't know, maybe low code is like the 4GL movement.

[00:03:51] James Ward: And so I was like, somebody has to have come up with what 5GL is. And so of course I went to Wikipedia, and sure enough: fifth-generation language. And of course it was basically describing what my tweet was about: the AI is going to write the code, and you're going to write your tests. Or, in kind of more futuristic languages, you don't necessarily write tests, you write proofs. So yeah, I was like, oh, people have obviously already thought about this. My idea is not novel, as always.

[00:04:25] Simon Maple: One interesting thing, of course, is the reaction that it gets. And I think it's interesting because we're in that funny state with AI, right? Whereby a lot of people are using very assistive tools today. So it's very interactive, and it tends to be very good at doing small jobs, and people are getting used to that and bringing AI into existing workflows. In fact, a great example of that is, I was chatting with you earlier about AWS CodeWhisperer, which has now been rebranded as Amazon Q Developer, and that does a lot of these kinds of things, right? When we think about writing code ourselves, it's there step by step,

[00:05:03] Simon Maple: line by line, helping us as an assistant to get to the next step: here's the line of code I think you wanted to write, or here's the test case I think you want to create for this. The tweet, however, takes that to its extreme. And there's a lot of people who are still maybe a little bit unsure of the present reality. And I think you always find people who want to truly believe in it and see the future of where this is going to lead us, and others that are very skeptical about whether this is the true path.

[00:05:34] James Ward: And for good reason. We've been through so many hype cycles, and I have a funny story related to this. A long time ago, I was at the Denver Java User Group, and the presenter didn't show up, and so the organizer asked, hey, does anybody have anything they can present on?

[00:05:52] James Ward: And this grey-bearded man who I didn't know gets up and he's like, I've got something that I can talk about. And he's like, I'm going to tell you about something that's going to change the world. And we were like, oh, interesting. He's like, XML. None of us had even heard of XML at the time.

[00:06:11] James Ward: And he's like, XML is going to completely change everything. And then he went on with his crystal-ball gazing about all the ways he thought XML was going to change the world. And XML certainly did change the world in many ways. But looking back at that, the ways that it changed the world were very different

[00:06:30] James Ward: from what we thought at the time. And then we went through all the SOAP stuff, all the WS-* stuff. I don't know if you remember all that, but there were all these attempts at figuring out how this actually gets integrated into

[00:06:45] James Ward: daily work and getting our jobs done faster and more efficiently. And certainly XML has changed many things, and now we see it in JSON and other formats, and protobufs and all sorts of stuff. So there was something very significant in the shift XML represented, but no one really knew what that would look like 10, 20 years down the road.

[00:07:08] James Ward: And so I think we're at the same point with AI. We see AI being used in all sorts of places. We see the people that are resistant to the change. And I think it's just that we don't know what this is going to look like in 10 years.

[00:07:22] James Ward: And there's people that are like, oh, developers are going to be no longer needed, and there's people who are on the other side of the spectrum. So yeah, it's that wild time where there's this huge hype cycle, lots of change, lots of crystal-ball gazing. And going back to the XML thing, no one actually knew what the future would look like,

[00:07:43] James Ward: in part because it's iterative, right? We try things, we make some changes based on those things, and then we try new things. You just can't imagine how all the iterations are going to play out. So yeah, I think that's exactly where we're at with AI.

[00:07:57] Simon Maple: Yeah, absolutely. So what I'd love to do is actually go through some of the replies. We've grouped some of the replies to this tweet and we'd love to go through

[00:08:04] James Ward: Before we do that, do we need to expand on this idea a little bit?

[00:08:08] Simon Maple: Yeah, actually, let's talk a little bit about it, I think. In fact, rather than me describing your tweet, why don't you take us through it?

[00:08:17] James Ward: Yeah. So I think what some developers are doing today is they start writing some code, and then maybe they're using a code assistant to fill in the boilerplate, the boring bits. And that's great. It's nice that a lot of times I can spend less time thinking about the code that I need to write, because the AI has been able to come up with something just based on what I started typing.

[00:08:43] James Ward: And I think that's a good step, but I'm wondering: why does that code actually need to exist in a human-readable form if the AI can actually get it right? If all that I need is some way to express what I'm trying to get to, then why does the code need to be readable?

[00:09:03] James Ward: Maybe it does for debugging, maybe it does for some other reasons. But if we could get to a point with the AIs where the specification of the code, what it needs to do, has been sufficiently defined, then why does there need to actually be human-readable code?

[00:09:23] James Ward: And I think there's a few challenging aspects to that. I mentioned both of them in the tweet, but I'll expand on them. One is that this requires our specifications to be sufficient to know that the functionality is what I would need. And there's all sorts of complexities to doing that, because a lot of times the code that we're working on has some intangible aspects that need to be covered, especially on the operational side.

[00:09:54] James Ward: So on the operational side, you may know that this particular function is going to be called a million times in a hundred milliseconds or whatever; you may have some ideas around the performance characteristics that you need this function to have. And so you, as a developer, generally would optimize that particular function for those operational production requirements.

[00:10:16] James Ward: But why aren't those production requirements specified in our tests? I think generally our testing frameworks don't get into the operational requirements of a system, the security requirements of a system. And part of what I'm saying would need to happen is we would need to be able to specify not just the functional requirements of something, but also the production characteristics that we need to get to. A test would have to say: I expect this thing to return in 0.5 milliseconds or something like that, and I expect it to use this much memory. We generally handle those requirements on the development side, not on the testing side, but we would need to be able to move those requirements into our specification, and not just have them be something we've iterated toward, the production requirements that we thought were the right ones.
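
To make that concrete, here is a minimal sketch of what encoding an operational requirement in a test might look like, using JUnit 5's built-in timeout assertion. The PriceLookup class, its stub implementation, and the 5 ms budget are all hypothetical, purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import org.junit.jupiter.api.Test;

// Hypothetical class under test; a stand-in so the sketch runs.
class PriceLookup {
    static int priceInCents(String sku) {
        return 499;
    }
}

class PriceLookupTest {

    // Functional requirement: the lookup returns the right value.
    @Test
    void returnsThePriceForAKnownSku() {
        assertEquals(499, PriceLookup.priceInCents("sku-123"));
    }

    // Operational requirement, expressed in the same suite:
    // the lookup must complete within its 5 ms latency budget.
    @Test
    void staysWithinItsLatencyBudget() {
        assertTimeout(Duration.ofMillis(5),
                () -> PriceLookup.priceInCents("sku-123"));
    }
}
```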

[00:11:10] Simon Maple: Yeah, and we covered a lot there, because the specification becomes quite broad then, right? If you look at the specification, you mentioned things like the behavior of the app, which I guess can be defined in a couple of ways: one through the tests, the assertions that the behavior needs to meet; the other could just be the behavior you want to see through a functional description. So, I need this calculator to add these two numbers together, and then a test or an assertion that follows to say, if I provide two and two, I need it to give me the answer four, and so forth. Then there's the non-functional part of that as well, like you mentioned: the security, the performance, maybe architectural-style requirements you might have. It's quite a big document, right?

[00:11:51] James Ward: Yeah. Yeah. And the way that I look at it is, I love type systems. And so you, as the developer, probably should be specifying the actual types of the thing. So you're going to write your class definitions out, and then you're going to write out the interface that is going to be used.

[00:12:11] James Ward: And then what I'm proposing is that once you've written your classes, you've written your interface, and you've written your tests, the actual implementation is what could be filled in by the AI. Because I still do want to have my types. I do want to have the interface, the function definitions, to interact with those objects.

[00:12:33] James Ward: And then I want to have the tests that validate the functionality and the production aspects of it.
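
As a rough sketch of the workflow James describes, using the calculator example Simon raised a moment ago: the developer writes the types, the interface, and the tests, and the implementation body is the part an AI would generate and regenerate until the suite passes. All names here are hypothetical, and JUnit 5 is assumed:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Written by the developer: the types and the interface.
interface Calculator {
    int add(int a, int b);
}

// Written by the developer: the specification, expressed as tests.
class CalculatorTest {
    private final Calculator calculator = new AiGeneratedCalculator();

    @Test
    void addsTwoNumbers() {
        assertEquals(4, calculator.add(2, 2));
    }

    @Test
    void addingZeroIsIdentity() {
        assertEquals(7, calculator.add(7, 0));
    }
}

// Generated and regenerated by the AI until every test passes;
// in James's framing, a human never needs to read this body.
class AiGeneratedCalculator implements Calculator {
    public int add(int a, int b) {
        return a + b;
    }
}
```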

[00:12:39] Simon Maple: So you mentioned interfaces, classes, types, and objects. How deep into the implementation, or rather the ecosystem, are we assuming here? Probably something like a Java or Scala-like language, given some of the verbiage that we're using. Does that matter? Is it possible just to say, look, this is what I want, use the language you think is going to work best for this, rather than a language that I am most familiar with?

[00:13:05] Simon Maple: Because if I'm not going to read it as much, as other tools will, does it need to be as familiar to me?

[00:13:11] James Ward: Yeah, it's a good question. I do think that we have seen many different kinds of testing DSLs, so there's a huge variety of ways that you can describe your tests, and it's going to be interesting to see how that space evolves. There's Cucumber,

[00:13:28] James Ward: the Groovy one, anyways, people have used all sorts of different DSLs for describing their tests. I have some personal preferences in this direction: I've moved to the place where I actually like to write my tests as just regular code, no special DSLs, no assertion libraries that have all these fancy things on them.

[00:13:50] James Ward: I like to just do assert true. And that means the code in my tests actually is normal code; it's not some weird DSL. So maybe that's part of this: the way that I write my tests isn't some bespoke thing only for testing, it's just the normal code that I write elsewhere.
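
For instance, the contrast James is drawing might look something like this, with a fluent assertion DSL on one side and a plain boolean assertion on the other. AssertJ is shown only as a representative example of the DSL style:

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AssertionStyleTest {

    // DSL style: a bespoke assertion vocabulary to learn.
    @Test
    void withAFluentAssertionLibrary() {
        assertThat(2 + 2).isEqualTo(4);
    }

    // The plain style James prefers: the test is just ordinary code
    // with a bare boolean assertion.
    @Test
    void withAPlainAssertion() {
        assertTrue(2 + 2 == 4);
    }
}
```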

[00:14:12] James Ward: So I don't know if that would be a requirement for this AI thing actually working. And maybe we see new testing specification languages evolve as part of this; I think the 5GL description actually says that new languages will evolve in this space that are more targeted at this kind of view.

[00:14:33] James Ward: So yeah, it'll be interesting to see how that happens. I think you could theoretically do this in any language, but there are a few things that I like to have in this space that I think are useful. One is a good type system, so that you can specify your types, because I think that is part of your specification. And then, when somebody becomes a consumer of your API, having the types and the functions defined in a very specific way is helpful. So, I don't know

[00:15:09] Simon Maple: Using that

[00:15:10] James Ward: how this would work in dynamic languages.

[00:15:12] Simon Maple: Yeah, absolutely. One of the themes of answers, and people did reply in droves to your tweet, was that tests aren't good enough. One person said they only capture a fraction of what code is capable of, and some things are very hard to test. Another person said, no, someone still has to maintain the thing and work out, if it's behaving in an unexpected way, why that is the case. Another person said, not sure that tests will be enough: you want, as you mentioned just now, to be able to specify the types, constraints, and things like that as well. So are people conflating a little too much of the today versus the vision here, James, or is there a point here?

[00:15:53] James Ward: Yeah, I think it's some good feedback, and generally our tests today are insufficient to be able to do this. But my perspective is that maybe our tests should be sufficient to cover these sorts of things. Maybe our tests today are not enough. For me, the usefulness of tests is that they make me, as a developer, more productive in the moment that I'm writing the code, and then also in the future when things are changing, to verify that I don't have regressions and that sort of thing.

[00:16:30] James Ward: If you look at mature projects, what they generally do is, if somebody finds a bug, you go and write the test first to validate that the bug does exist and that you can reproduce it. And then you come up with the fix and verify that your tests pass.

[00:16:52] James Ward: That to me is just a good way to work, because then you verify, one, that you've actually fixed the bug, and two, that you're not going to break that functionality in the future, because then your CI is going to break. So yeah, for me, a big part of it is that our tests should be sufficient to handle all this.
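
In miniature, the bug-fix workflow James outlines could look like this: the regression test is written first, fails against the buggy code, and then guards the fix in CI forever after. The Discount class and the bug are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountTest {

    // Step 1: reproduce the reported bug as a failing test.
    // Hypothetical bug report: a 100% discount produced a negative price.
    @Test
    void fullDiscountBringsThePriceToZero() {
        assertEquals(0, Discount.apply(1000, 100));
    }
}

// Step 2: fix the implementation until the test passes; the test
// then stays in the suite to prevent the bug from regressing.
class Discount {
    static int apply(int priceInCents, int percentOff) {
        return priceInCents - (priceInCents * percentOff / 100);
    }
}
```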

[00:17:09] Simon Maple: And one of the things that you mentioned at the very top was that maybe we're not in the right place today with the quantity or quality of tests that we have, if we look at how AI could help us there today. This is a wonderful, very interesting vision that one day we could quite easily see ourselves getting to, but in the interim there are still tools, Amazon Q Developer is a great example, that can help us actually build the test suite that we should have today. We'll actually do a nice little demo and talk through a little of that in the episode straight after this one, which will be published on the same day. But yeah, talk us through what the workflow should be for developers using AI in the testing space in an ideal world today.

[00:17:56] James Ward: Yeah, I think some people are using AIs to take some code and then say, write the test for me. And maybe if you have existing functionality, that's a viable way to go: you haven't written a test yet, but you've written the functionality, and I think the AIs are getting pretty good at coming up with the tests for your existing functionality.

[00:18:23] James Ward: I'm not a strict TDD person, but I do tend to write tests first, and I guess there's different levels of it. I write my types, I write my function signatures, and then, if the logic that I need to build has some level of complexity, I do actually start by writing the test, because I don't really understand what I need to build and the functionality that needs to be validated until I've described it in a test.

[00:18:53] James Ward: So I'm not totally strict about it, and sometimes there's a going back and forth: okay, I'm going to write some functionality, I'm going to modify my test. It's a little iterative for me in different ways, depending on the functionality that I'm writing. And sometimes I don't write tests at all, because the logic is either really hard to test, or a test doesn't add a whole lot of value.

[00:19:14] James Ward: And yeah, I think there's this whole spectrum of what, where, when, and how you write your tests. But in most of the ways I write code, I have not done the thing where I highlight some existing functionality and say, now write the test. But certainly there are people who work differently, and that approach could have value.

[00:19:33] James Ward: So I think there's that use case, where people want to use it to help them write their tests. The way that I've been using the AI assistant stuff is when I'm diving into areas that I don't understand. I've been working with new languages, Rust, for example, or just domains that I'm not as familiar with.

[00:20:00] James Ward: The typical way that I would work is I would go to Google and ask a question, and then I would open like 300 tabs with all the Stack Overflow answers, all the blogs that I can find, all the Reddit pages, whatever. And then I'd have to read through all those things and start to get an understanding; I'm manually assembling this knowledge structure in my head.

[00:20:23] James Ward: And now what I generally do, instead of that process, is just go into, in my case, Q Developer, and start asking questions. And I think the benefit of that is that it's already done the 300-tab thing and assembled its knowledge, and then I can start to tailor the response and ask follow-up questions.

[00:20:44] James Ward: And so for me, that's been one of the primary use cases: just asking questions. And I know people use ChatGPT, and I think Copilot, or you can use Claude from Anthropic. There's lots of different tools that have these knowledge bases, essentially, that you can go start asking questions of.

[00:21:01] James Ward: So I think that's the first way that I've gained a lot of value from the AIs. I'll pause there and see if you have anything before

[00:21:09] Simon Maple: Yeah, no, and I hear that a lot, actually. In fact, I saw a really interesting video the other day where someone, I think they were using Claude or something like that, and it was actually around a podcast guest. What they said they did was upload entire books, as well as previous episodes or videos that they'd done, and push scripts into there, and it would create large amounts of flow and questions and things like that. And that's what AI is amazing at, right? It's about: make me much, much more effective by learning all this information in a second, or using all this context in a second, and then give me that summary of what is useful and what is most important for me to know now.

[00:21:46] Simon Maple: And that efficiency boost is simply quite incredible.

[00:21:49] James Ward: Yeah. And a lot of times it's pretty accurate on those kinds of answers. But there are times that you get the hallucinations, and I think one of the challenging things is how to identify when something feels off. For people that are very experienced, they get a spidey sense, a tingly sense: oh, that doesn't feel quite right.

[00:22:12] James Ward: That doesn't align with how I understand the world. And so I think that's still a gap: people that aren't experienced enough to get that spidey sense can go down some pretty serious rabbit holes, and then find out way too late that, oh, I was completely misled in this direction.

[00:22:35] James Ward: And so I don't know, maybe the models just keep getting better and eventually there are fewer hallucinations. Or is there a way to get the community to help check those answers, to say, oh, this doesn't feel right, and run it through a human that has a brain and experience and can help identify the things that may be wrong?

[00:22:55] James Ward: And so yeah, I think that's still an open problem.

[00:22:58] Simon Maple: I was talking to someone just this week, actually, on the podcast, who talked a little bit about the confidence level. Because LLMs tend not to want to say no, they want to provide an answer some way, even if an answer doesn't exist, and that's very often where hallucinations can arise. Sometimes it's about knowing when to say, look, I'm not sure about this, or, I have a confidence issue with the answer I'm giving you, so just be aware that this is an area that needs to be checked as well. Because I think a lot of the time, exactly like you say, there are some people who will see that this is off, and some people who will happily accept it as the truth, because they're just not as experienced in that space.

[00:23:33] Simon Maple: So almost having the LLM come back to us and say, look, I'm confident in this, or I'm not confident in this,

[00:23:38] Simon Maple: and you need some additional information or additional input, or maybe human input there, to go get someone to talk to.

[00:23:43] James Ward: Exactly. And I've seen some people customize their prompts, so when they start a new prompt session, they'll say something like: if you don't know, say you don't know. I don't know if that works universally across all of the LLMs and models, but I have seen it used on some models, and then the LLM, if it doesn't have high confidence, will actually say, I don't know.
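
For illustration, a session preamble along these lines might look like the following. The wording is hypothetical, and its effectiveness varies by model:

```
You are helping me debug code. If you are not confident in an answer,
reply "I don't know" rather than guessing, and tell me what extra
context would help you answer.
```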

[00:24:07] James Ward: And then I think one of the things that's currently missing from this experience, maybe it exists, but I haven't seen it, is an LLM asking follow-up questions. As a developer, there's a lot of different spaces, and how many times do we say, it depends?

[00:24:26] James Ward: What I haven't seen the LLMs do is say: it depends, tell me more about this. And that's where I think there's definitely some room for improvement. Instead of just giving an answer, have the LLM actually start to understand the context and the related things better before it starts spitting out an answer.

[00:24:45] James Ward: More like we would do as humans: oh, tell me more about that, I want to understand that piece more before I tell you what the answer is.

[00:24:54] Simon Maple: Yeah. Yeah. It requires quite a different dynamic in terms of the chat UI versus a single prompt. But yeah, a hundred percent, I really agree with that. Another response category was: AI plainly isn't ready to do this kind of thing. And I know it's accelerating at a ridiculous pace now, and I absolutely agree it's not ready today. But yeah, they went on to say, have you used Copilot or ChatGPT or Claude or whatever, they still get a lot wrong.

[00:25:19] James Ward: So here's something that is beginning to happen, which I think is really fascinating in this space. This first came up when I was in a working group with a bunch of companies that were looking at how to convert Java code to Kotlin code. And one of the ideas in that space is, oh, maybe the LLMs can do it for us.

[00:25:40] James Ward: But the challenge is that, as you're saying, sometimes the LLMs are going to get it wrong. So what can we do about that? There have been some experiments in that space where you have a process around the LLM that takes some Java code, converts it to Kotlin code, and then runs your compiler, runs your build.

[00:26:00] James Ward: And if that newly generated code creates a compile error, then you feed that compile error back into the LLM and say, I got this error, and you tell it to try again or modify the code to fix the error. And then you can iterate through this loop. Maybe you could even change models or change model parameters or something like that, to iterate to something that actually compiles.

[00:26:25] James Ward: And then maybe your next step is to add your actual tests for that thing into the loop as well. And so then you start to create this feedback cycle with the LLM, to iterate to something that actually works. All good in theory. I tried this on a Rust project, so I was working in Rust; I'm not a Rust expert.

[00:26:46] James Ward: And I was getting a compiler warning on this piece of Rust code. So I fed that into an LLM and said, hey, fix this code for me. And it gave me back some code, and I was like, sweet, great. So I put that code in, and then, instead of just a warning,

[00:27:04] James Ward: I now had a compile error; the code it had given me produced a compile error. So then I gave the compile error to the LLM and said, fix this compile error for me. And it gave me back the code with the warning. So then I was just in this recursive loop; it couldn't resolve, through iteration, to something that actually worked.
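
The loop James describes a few turns earlier might look roughly like this as code. It is a sketch under stated assumptions: the LlmClient type and its suggestFix method are hypothetical stand-ins for whatever model API you use, and the project is assumed to be buildable with a Gradle wrapper command:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CompileFixLoop {

    // Hypothetical wrapper around a model API; not a real library.
    interface LlmClient {
        String suggestFix(String sourceCode, String compilerOutput);
    }

    static void iterateUntilItCompiles(LlmClient llm, Path sourceFile, int maxAttempts)
            throws IOException, InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            // Run the build (assumed here to be a Gradle project).
            Process build = new ProcessBuilder("./gradlew", "compileJava")
                    .redirectErrorStream(true)
                    .start();
            String output = new String(build.getInputStream().readAllBytes());
            if (build.waitFor() == 0) {
                return; // The code compiles; tests could become the next gate.
            }
            // Feed the compile errors back to the model and apply its suggestion.
            String source = Files.readString(sourceFile);
            Files.writeString(sourceFile, llm.suggestFix(source, output));
        }
        throw new IllegalStateException(
                "No compiling version found; hand off to a human or another model.");
    }
}
```

As James notes, the natural extension is to add the test suite as a second gate after compilation, and, as the next exchange suggests, to swap in a different model when the loop stops making progress.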

[00:27:24] Simon Maple: But was that just a context problem, where it didn't have enough context from the previous iterations to learn: okay, I can't do that, because I tried that and this is what happened, maybe I need to think outside the box? Not hallucination necessarily, but thinking outside the box to try and work out a different solution.

[00:27:41] James Ward: That's a good point. Yeah. The fuzziness of these things, and potentially throwing other models at it as well: okay, this model maybe isn't going to be able to iterate to a solution, but let me try a different model. So I think you're right, this iterative feedback loop with multiple

[00:27:57] James Ward: LLMs could potentially get us much closer to a working solution. But none of this infrastructure exists yet. No one that I know of has written tools that can actually integrate this iterative feedback loop into a normal development process. So yeah, I think we're still at the beginning of this journey for how we can use these tools to iterate

[00:28:22] James Ward: more effectively to working solutions.

[00:28:25] Simon Maple: Yeah. The next category, and I think there were a couple like this, was: eventually, yes, that will be the case, but in the meantime we still need a human in the loop. I think there are two pieces around this. One is where the technology is, and we just talked about AI not being ready yet. But what about us as humans not being ready? Technology can change overnight, bang, it's quick. Humans, however, take time to adjust their workflows and to do development in a different way. Are we going to be the slowest part of this puzzle, in terms of changing the way we develop, getting used to it, and getting comfortable with trusting what's coming out of it?

[00:29:11] James Ward: Yeah. I think we're in a moment of co-evolution with these tools. The humans are changing, the AIs are changing, and we're co-evolving. And that's back to the beginning, where we talked about not really knowing where this all goes. Both sides are iteratively moving forward, but they're also entangled, moving forward

[00:29:35] James Ward: together, in a way. And yeah, I think we as humans are going to have to change how we think about things, how we approach problems, how we use the tools. But obviously the tools are also changing. For now, one of the most important things when using these tools is what one of my co-workers says: you still have to use your brain.

[00:29:57] James Ward: The tools are assisting you, they're helping you, but you still need to use your brain and be able to look at some code and say, that code is not what I need, whether it's code or whatever else the LLM is coming up with for you. Writing documentation, like Javadocs: that's an incredibly useful thing the LLMs can do for us now.

[00:30:20] James Ward: But you still need to read your Javadoc and ask, is this actually correct? So yeah, I think it's this pairing between the tools and our brains, based on all the experience that we have and the business problem that we're trying to understand and solve.

[00:30:34] Simon Maple: Yeah. And that actually came up a couple of times as well, from the point of view of abstraction: this is just another abstraction over where we are now. This person didn't imagine a big difference between how we work now and later: ask a computer to do something, run it, verify it, and repeat, iterating through that. Which, to some extent, is true, but it obviously changes what we are physically doing at a machine. So let's talk a little bit about what you consider the dev role to be in this kind of future space. You were previously saying it's the human that still needs that brain, whether it's to design or to create the app in the way they want, to achieve something. Is the developer just one level of abstraction higher, focusing on the app at that slightly higher level, ignoring the lines of code and how things are done, and focusing more on the what?

[00:31:28] James Ward: Yeah. And that was an interesting comment. Somebody was like, so essentially the future is that developers are business analysts, the ones that can take the business requirements and translate them into a specification. Is that what you're proposing? And I don't know if that's what the future looks like.

[00:31:47] James Ward: I don't know if I would be super stoked on that, because I like solving problems and getting into code, figuring out the right ways to model things and test things. I like the mechanics of a lot of this stuff; I don't get super excited about understanding users and their needs.

[00:32:06] Simon Maple: So let's talk about that, because I think that's a really important piece of where development goes. Developers aren't developers just because, hey, let's go into this space, there's a ton of money in it. They do it because they love problem solving and they enjoy coding. So in this new world, where's the fun? What's the thing that is going to give a developer that amazing moment when they solve something? Is that still going to exist?

[00:32:31] James Ward: I sure hope so. I would be bummed out if the AI took the fun parts out of the job. Hopefully, where this goes is that the AI takes the not-fun parts and does those for me, and then I get to focus on the fun parts. But I don't know what that actually looks like. Would I enjoy my job if all I was doing was writing a test and the types and the interface? Would that be fun for me?

[00:32:57] James Ward: I don't know if it would. And yeah,

[00:32:59] Simon Maple: There's that creative piece of seeing an application being built up by writing the spec versus actually writing the code, because you are still creating. But you could argue as well, exactly like you said, that you can get the AI to do the things that you either don't like doing, or that take time and aren't really part of the creation. Everything from tests to documentation, the kinds of things that keep everything up to standard, which, when we look at open source spaces and other areas, your mileage varies across many different libraries and packages, right?

[00:33:29] Simon Maple: There's that aspect as well. Going further into the future, some people say, you know what, this is great, but you didn't go far enough. There's something beyond what you're saying, which is that developers, or what we will label as a developer in the future, are just going to write test specs rather than implement, and people will use AI for writing tests as well. So instead of treating AI as a solution just to build code, you'd treat it as a companion that says: I'm going to write some tests, I'm going to write some code, it does it all for me, and my hands are clean. It broadens the number of people that can be that developer as well, yeah? Where does it stop?

[00:34:10] James Ward: Yeah, I think we've seen so many, I don't know, attempts, some of them successful, around low code or no code. And it's a wide space. There are, I'm sure, going to be places that need developers who can write code and debug code, get into the gory details of things, and figure out the hard problems.

[00:34:33] James Ward: And then, just like we have with low code and no code, there's going to be this widening of who can actually be a developer, who can create things. An example that my co-author and podcast co-host, Bruce Eckel, likes to talk about is that we've got a bookstore in the little town we live in here in Crested Butte, Colorado, and the software that exists to manage a bookstore is terrible. Because who is motivated to build a team of 20 developers to build good bookstore software, right?

[00:35:04] James Ward: It just doesn't make sense for that space to be solved, because it's too expensive to get to a good solution, and there's not enough benefit for people to actually invest in that. And so we have a problem: there are many domains in our world that could be better solved through software.

[00:35:25] James Ward: And yet there's not enough financial motivation; people have to spend many years learning how to build systems to be able to solve those problems. If we can significantly widen the net of who can actually solve these problems, that is amazing, and it could get our friend Arvin, who runs a bookstore, the good bookstore software that he currently doesn't have, because there just aren't the economic incentives for it to exist.

[00:35:56] James Ward: And so I think, great, let's widen the spectrum of who can solve many of the problems around the world with software. I think that's a great thing. But I don't foresee a future where people that don't know how to code but know how to use AI tools are the only ones that exist.

[00:36:16] James Ward: I think it'll just continue to be a spectrum. We solve many different problems in the space of software, and it's a very big tent, with room for a lot of people and a lot of different tools and a lot of different skill sets and experiences.

[00:36:33] James Ward: So that's my crystal-ball take on it: I don't think developers are going away and being replaced by AIs anytime soon, but certainly there will be developers that use these tools to, hopefully, be more productive and focus on the fun parts.

[00:36:47] Simon Maple: Sounds great. Sounds great. James, this has been wonderful. And next up, actually, we're going to get a little bit more hands-on: we're going to screen share and actually see some of Q Developer in action, and talk a little bit about how AI tools can help us write better tests, and more tests, as we develop code.

[00:37:05] Simon Maple: James, for the time being, thank you so much. It was a really great chat.

[00:37:08] James Ward: That was a fun chat. Thank you.

Podcast theme music by Transistor.fm.