AI-Powered Documentation Experience with Amara Graham: Kapa below the surface
Join us as Amara Graham, Head of Developer Experience at Camunda, shares her insights into AI-driven documentation with Kapa. Discover how this innovative AI agent is transforming user interaction and support systems.
Episode Description
In this episode of the AI Native Dev podcast, host Simon Maple welcomes back Amara Graham from Camunda to delve into the transformative potential of AI tools in developer documentation. As the Head of Developer Experience, Amara brings a wealth of knowledge in AI integration and user engagement strategies. The discussion centers around Kapa, an AI agent that enhances user interaction with Camunda's documentation by answering questions and referencing extensive resources. Amara offers a live demonstration of Kapa, highlighting its capabilities, integration with existing workflows, and impact on reducing the support burden on human teams. Listeners will gain insights into the strategic implementation of AI tools and the evolving landscape of developer experience.
Chapters
- [00:00:00] Introduction to Amara Graham and Kapa
- [00:01:00] Overview of Kapa's Features and Evaluation
- [00:02:00] Understanding Kapa's Dashboard and Metrics
- [00:03:00] User Engagement and Feedback Mechanisms
- [00:05:00] Addressing Uncertain Responses
- [00:08:00] Data Sources and Ingestion Processes
- [00:10:00] Integration with Workflow and Tools
- [00:12:00] Future of AI Tools in Documentation
- [00:17:00] Best Practices for Implementing AI Tools
- [00:18:00] Conclusion and Key Takeaways
Overview of Kapa
Amara introduces Kapa as the AI tool chosen by Camunda to enhance user interaction with their documentation. She remarks, "Kapa was the tool that we chose... It's the one that when we were doing our evaluation made the most sense for us." Kapa plays a crucial role in the product lifecycle by facilitating access to information and improving user engagement with the documentation. Amara stresses that while Kapa is not necessarily the best for every organization, it fits well within Camunda's operations, aligning with their goals of streamlining user support and leveraging AI to reduce the burden on human support teams.
Handling Uncertainty in Responses
Amara elaborates on how Kapa deals with uncertain responses, explaining, "When Kapa says I'm not sure how to answer this question, on the back end it logs a little flag." The team has set a threshold for taking action at 10% uncertainty, although "usually we're hovering somewhere around the three, four, five percent range." Monitoring these metrics ensures that the documentation remains high-quality and responsive to user queries. When uncertainties arise, Amara's team evaluates whether updates to the AI's training data or the documentation are necessary.
Process for Addressing Uncertain Responses
Handling uncertain responses involves a systematic approach where the team first identifies patterns or commonalities in uncertain queries. This helps in understanding if the uncertainty stems from gaps in documentation, unclear phrasing in user queries, or areas where the AI's training data might be lacking. Once these areas are identified, the next steps include updating documentation, refining AI training data, and in some cases, engaging with product teams to address any underlying issues. This proactive approach ensures continuous improvement and adaptation of the AI tool to better serve user needs.
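To make the workflow concrete, here is a minimal sketch of that threshold-driven triage, assuming conversation records with an uncertainty flag. The 10% action threshold and the typical 3-5% range come straight from the episode; the record shape, the classify() heuristic, and the cause categories are illustrative assumptions, not Kapa's actual API.

```python
from collections import Counter

# Thresholds from the episode: act above 10%; typical weeks sit around 3-5%.
ACTION_THRESHOLD = 0.10

def classify(conversation: dict) -> str:
    """Placeholder heuristic; in practice this triage is a human judgment call."""
    question = conversation.get("question", "").lower()
    if "roadmap" in question or "upcoming" in question:
        return "future feature (expected gap, e.g. pre-release questions)"
    if len(question.split()) < 3:
        return "keyword-style query (possibly unclear phrasing)"
    return "potential documentation gap"

def triage(conversations: list[dict]) -> None:
    """Deep-dive into uncertain responses only once the rate crosses the threshold."""
    total = len(conversations)
    uncertain = [c for c in conversations if c.get("uncertain")]
    rate = len(uncertain) / total if total else 0.0
    print(f"Uncertainty rate: {rate:.1%} ({len(uncertain)}/{total})")

    if rate < ACTION_THRESHOLD:
        return  # within the normal range; check back next month

    # Group uncertain questions by suspected cause so the team can decide
    # between fixing docs, refining the AI's training data, or escalating.
    for cause, count in Counter(map(classify, uncertain)).most_common():
        print(f"{cause}: {count}")
```

The point of the sketch is only the gating logic: below the threshold, individual uncertain answers may still get a quick look, but the systematic pattern analysis is reserved for periods that cross it.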
Data Sources and Ingestion
The discussion moves to how Kapa ingests data from various sources and the frequency of updates. Amara shares, "Some of the sources we have to go through and pull them manually... Some of it gets pulled and ingested automatically." Maintaining fresh and relevant content is crucial, especially around product releases. Regular data ingestion and updates ensure that Kapa provides accurate and timely information to users, reflecting the latest product capabilities and changes.
Importance of Data Freshness
Ensuring that Kapa's data sources are consistently updated is vital for maintaining the tool’s reliability and effectiveness. The process involves both automatic and manual data ingestion to capture the most current information available. This practice is particularly important during product releases, where new features and changes must be accurately reflected in the documentation. By keeping the data fresh, Kapa can provide users with the most relevant answers, thereby enhancing trust and usability.
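As a rough sketch of that freshness policy, the example below separates sources that refresh automatically from those someone has to pull by hand before a release. The split between automatic and manual ingestion is from the episode; the Source fields, staleness windows, and source names are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Source:
    name: str
    auto_refresh: bool      # pulled and ingested automatically
    last_ingested: datetime
    max_age: timedelta      # how stale this source may get before re-ingesting

def check_freshness(sources: list[Source], now: datetime) -> list[Source]:
    """Return automatic sources due for re-ingestion; flag manual ones for a human."""
    due = []
    for source in sources:
        if now - source.last_ingested > source.max_age:
            if source.auto_refresh:
                due.append(source)
            else:
                print(f"[manual] {source.name} is stale; pull it before the release")
    return due

# Hypothetical sources and windows, tightened ahead of a minor release.
sources = [
    Source("documentation", True, datetime(2024, 9, 1), timedelta(days=7)),
    Source("community forum", False, datetime(2024, 6, 1), timedelta(days=90)),
]
for source in check_freshness(sources, datetime.now()):
    print(f"[auto] re-ingesting {source.name}")  # a real setup would call ingest()
```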
Integration with Workflow and Tools
Amara discusses how Kapa integrates with existing tools and workflows, such as Slack. She notes, "We can do Slack integrations... But for us, our source of truth, if you will, is always going to be the documentation." Different teams utilize Kapa based on their specific needs, whether for community forums or internal documentation reviews. This flexibility allows teams to tailor Kapa's use to enhance their productivity and communication.
Tailoring Integration for Maximum Efficiency
Integration of AI tools like Kapa into existing workflows enhances organizational efficiency by allowing different teams to leverage the tool according to their specific needs. Whether through direct integration with platforms like Slack or by using Kapa as a standalone documentation resource, teams can achieve a seamless flow of information. This tailored integration supports varied workflows, ensuring that all team members can access the information they need in a manner that aligns with their existing processes, thereby optimizing productivity and collaboration.
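For teams wiring up such an integration themselves, the handler is usually thin: forward the question to a Q&A endpoint, then reply with the answer plus citation links so the documentation stays the source of truth. Kapa ships its own Slack integration, so the endpoint URL, request payload, and response fields below are hypothetical stand-ins for illustration, not its real API.

```python
import json
import urllib.request

ANSWER_URL = "https://example.com/api/answer"  # hypothetical Q&A endpoint

def answer_chat_question(question: str) -> str:
    """Fetch an answer and format a chat reply that cites the docs."""
    payload = json.dumps({"question": question}).encode("utf-8")
    request = urllib.request.Request(
        ANSWER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)  # assumed shape: {"answer": str, "sources": [str]}

    citations = "\n".join(f"- {url}" for url in body.get("sources", []))
    return f"{body['answer']}\n\nFrom the docs:\n{citations}"
```

Whatever the transport, the design choice Amara describes holds: the bot answers in place, but every answer points back to the documentation.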
The Future of AI Tools in Documentation
Amara explores the potential evolution of AI tools like Kapa, considering the possibility of shifting entirely to AI-based search tools. She observes, "Based on our current user behavior... I think I'll just leave it as is." However, she acknowledges that as user behavior evolves, particularly with the rise of AI pair programming tools, there may be shifts in how users interact with documentation systems. This forward-thinking approach ensures that Camunda remains adaptable to technological and behavioral trends.
Adapting to Future Trends in AI and Documentation
As AI technologies continue to evolve, organizations using tools like Kapa must stay vigilant and adaptable to changing user behaviors and technological advancements. This involves continuously observing user interactions and being open to integrating new functionalities that enhance user experience. Whether it’s embracing more advanced AI capabilities or refining existing ones, remaining agile ensures that the documentation systems remain relevant and effective in meeting user needs.
Best Practices for Implementing AI Tools
Amara shares best practices for selecting and implementing AI tools. She emphasizes, "Make sure it works for you, your team, your community." Tailoring solutions to specific user communities, setting thresholds for monitoring performance, and ensuring alignment with user expectations are vital. Amara advises against a set-it-and-forget-it mentality, underscoring the need for continuous monitoring and engagement with AI tools to maximize their effectiveness.
Key Considerations for Successful AI Tool Deployment
Successful deployment of AI tools involves a careful selection process that considers the unique needs of the organization and its users. Setting performance thresholds helps in assessing the tool's effectiveness, while ongoing monitoring ensures that it continues to meet evolving needs. Engaging with the user community for feedback and making iterative improvements based on this input is crucial for maintaining the tool’s relevance and effectiveness over time.
Full Script
Amara Graham: [00:00:00] And in some cases I can see that they're maybe reacting with a thumbs down because they didn't get the answer that they wanted. The answer was still accurate and correct, but they were like, Oh, I don't like the fact that you don't have this feature yet in the product. I'm like that's not a me problem.
That's a product management problem.
Simon Maple: You're listening to the AI Native Dev brought to you by Tessl.
On today's episode we have Amara Graham from Camunda joining us again. And this time around, we're going to go into a screen share covering a lot of the topics and things that we described in our last podcast. Amara is the Head of Developer Experience at Camunda.
Welcome back. How are you doing?
Amara Graham: I'm doing great. [00:01:00] Thank you.
Simon Maple: Awesome. Awesome. So we're going to see a little bit around Kapa, which is the AI agent that you are currently using to take questions in from your users and work out what the answer should be with citations back to the documentation that you have at Camunda.
We're going to go ahead and share your screen. There we go. And you can take us through a few things, including various conversations, some dashboards, some analytics as well. So why don't you take us through a brief overview of Kapa?
Amara Graham: Yeah, absolutely. So Kapa was the tool that we chose.
I'm not here to do a sales pitch. I can't tell you, is it the best? Is it the greatest? It's the one that we're using. It's the one that when we were doing our evaluation made the most sense for us. So this dashboard is just looking at the last month. I will say we are getting super close to our product release.
And when we see those minor releases, we tend to get a lot of traction, I guess I'll say. So we have a lot more people coming to our documentation. We potentially have a lot more people using the AI agent. So all this to say, I can't say this is the best month we've ever had, but it might be a little bit more turbulent, I guess I'll say, than other months.
So yeah, we're looking at the dashboard here and you can see all of the basic things that you're looking for: the total questions, uncertainty, which we talked about a bit. You're able to see a definition for what uncertainty is. So when Kapa says I'm not sure how to answer this question, on the back end it logs a little flag.
So for me or other folks on my team, we're able to come in and see, what does it mean when it's saying it's uncertain? Can we improve this answer, like you see it mentioning in this tooltip? All of those things kind of roll into the regular maintenance that we have of Kapa. With that too, we also have this concept of reactions.
You can probably do the math yourself here and see that [00:03:00] we don't get a lot of engagement as far as the reactions are concerned, which I always find very interesting because people are usually pretty receptive to giving feedback.
Simon Maple: I guess people are probably far more likely to downvote something than upvote something, because as soon as you get something that you want, you click on it, you go somewhere else. As soon as there's something that's annoying you, you're far more emotive, to actually go, no, I want to downvote this because it's annoyed me.
Amara Graham: When we launched this kind of as a pilot, even really just for internal users, I was expecting a lot more of this kind of engagement and we just didn't get it.
And I think you're onto something there where people are either getting what they need or they don't, and in some cases I can see that they're maybe reacting with a thumbs down because they didn't get the answer that they wanted. The answer was still accurate and correct, but they were like, oh, I don't like the fact that you don't have this feature yet in the product. I'm like, that's not a me problem. That's a product management problem, but we can funnel those to the right people to answer them. The unique users thing I don't really pay much attention to, because people have all sorts of concerns around, I don't want to be tracked.
We are still largely a German company, so there are things that we allow people to opt out of. So it's not something that I really monitor all that much. And then this is relatively new, in the sense that it was not there when we first launched with Kapa. But it's this idea of how are we improving support, or how are we potentially taking some of that burden away from our traditional support team, which is staffed by humans.
So I think this is a cool metric to look at every once in a while, but not something that I monitor.
Simon Maple: The uncertain metric, when the LLM comes back with, I can't find an answer to this, I can't find a page on this. Is that a metric that you track, as in trying to reduce that month on month?
The way to do that, is that through fixing other documentation, creating new documentation? Or is there potentially product messaging or something that needs to be fixed? What are the resolutions to that, I guess?
Amara Graham: That's a great question. I'll give you the standard dev rel answer.
It depends. Yeah. So actually, down here in this little chart we get a view of the response certainty, and internally what I've said is I am not going to take a deep dive into the certain or uncertain questions until we hit a certain threshold. That threshold for me today, right now, is if it's above 10%.
Okay, I will say that 7 percent here is pretty high. Our documentation, not to toot my own horn, is typically so good that it's getting closer to a 3 percent uncertain response week over week. So occasionally, again, we're heading towards our minor release that's in October.
So we do see a bit more traffic. We do see people start to ask questions about what features are coming. I don't expect [00:06:00] Kapa to answer those questions because the information is just not there, but I set a threshold for myself to say at 10 percent we'll take action. Usually we're hovering somewhere around the three, four, five percent range.
So I can't be too bothered about that. That said, I do dig into, I just have another tab open here, the uncertain tagged questions, and click into them and say, okay, can I get a just very basic triage of what's going on here? Was the question worded strangely? Is it something that I know is potentially coming in a future release?
Like I mentioned with this minor release that's coming up. And then we have these tools on the backend to be able to say, do I want to improve this answer directly here and then feed that back into Kapa's training data? Or do I need to say, this is actually something that needs to be fixed in the documentation.
Let's go ahead and do that with a pull request [00:07:00] or depending on the severity or what we need to change, do we need to launch like an official project? So there's several things that are going on behind the scenes with that. But again I'm coming in here monthly and taking a look at things just to make sure it's acting as I would expect it to but I do know there's some seasonality here where I say 10 percent is my threshold for action, but we're typically hovering around 5%. 7 seems a bit high, but knowing that we have this upcoming minor release, again, I'm not really moved to action but I'll still go through and have a look at some of those. conversations see what people are asking, where it's getting flagged as uncertain and then potentially take action on those one off situations.
Simon Maple: Interesting. I really love the fact that you can actually make a change to the training data. And presumably that's just updating some RAG somewhere that kind of allows it to have that additional data.
Amara Graham: Yeah. So I'll quickly show the sources here and how we have them configured.
Some of the sources we have to go through and pull manually. You can see the last ingested dates. Some of this data doesn't really change all that much. I can go through and manually ingest some data. Some of it gets pulled and ingested automatically. But yeah, there are a couple of different things that we can do if we see the answers aren't quite right, or we know that something was updated recently.
Where do we want to make that change? Is it a change to the ingested data, so we need to pull from source, or is it something that we need to change one off? We have the option to do either.
Simon Maple: How much change is there in terms of adding new sources? Is there much that gets done on a kind of like weekly basis or is it largely similar over time?
Amara Graham: Again, because we have this kind of like seasonality as we're working towards releases, we do monthly alpha releases and then two or three times a year we do minor releases. Those minor releases are quite large, typically. And so what will happen is as we get closer to that minor, [00:09:00] I will make sure that we have the freshest data possible.
And again, this is running in this refresh state depending on what source we're pulling from and what's getting ingested, but I always make sure that when we do a very significant docs release, we pull in the documentation so that it's the freshest possible. So maybe that's happening monthly.
Maybe that's happening a couple of times monthly, but we do have a cron job that basically runs and fetches this. And then I can go through and review: do I really need to add that? Are there significant changes there? I can take care of that.
Simon Maple: And I know there's an integrations piece there as well. How would you integrate Kapa into your workflow?
Things like Slack integrations and so on, or is it mostly through the UI?
Amara Graham: Yeah, so we can do Slack integrations. You'll see, maybe if you have really keen eyes, because I didn't quite point it out, that we have other teams or other areas that are using Kapa. So we have our Camunda Academy team; Discourse is our community forum.
So depending on what they are looking for from a tool like Kapa, they can do some amount of integration. For us in the documentation space, we maintain such a large and significant part of the source data that our source of truth, if you will, is always going to be the documentation. So if people have problems with the docs, they're either internally letting us know on the respective Slack channels, or, since our docs are open source and public,
people can create issues, create pull requests, and we're handling more of it there. This is really a check on the accuracy and completeness, very similar to if we were doing, I don't know, a traditional review. So you're going in and seeing docs for a new feature. Is it complete? Does it make sense?
Do we think our customers will be able to respond to it? In that sense, Kapa, or this Kapa ecosystem, is contributing to that and is, I don't know, a power user in that sense, because it's able to collect all of this feedback on all of these various topics, and then it's on us to say, do we need to take action and modify that?
Is it something that is just a product gap? Is that something that we would then address with a change to the docs, or not? Or is that something that we just ignore? So there's a little bit more gardening, I'll say, going on in the back end, where it's, do we need to take action on that now? Or, as things naturally roll out and the life cycle of the product continues, this will shake out and it's not something that we need to address immediately.
Simon Maple: And actually, one final question. You mentioned in the last podcast episode that you have search as well as the chat UI available, and people can use either. How do you see the numbers here, a question effectively being resolved through a chat UI versus a user potentially getting the results from a search, passing keywords into a usual search box? Do you see the success rate being higher for one versus the other?
Amara Graham: Yeah, so that's a very interesting question and I don't have a good view of it, because we have two ways to search.
One is through the Kapa AI tool itself, where Ask AI is going to be that back and forth dialogue. If you want, you can ask more of a human oriented question, or you can just do a regular search here. We see both of those things happening. And I think, like I was mentioning, we'll see people use the Ask AI part with keywords as well. So they're using it as a search box. We also have a search box that's just built in, that's powered by Algolia. Our docs framework runs on Docusaurus, and I think Algolia partnered very early on with Docusaurus to make it available for open source projects.
And there is a little bit of triage then that I'm doing, where I'm watching both spaces to see, do we have anything that's maybe trending in one place or the other? Are there any issues with our users getting the information that they need in a successful way or an efficient way, and trying to then understand what we need to do?
What actions do we need to take? Again, that comes down to, is our documentation formatted correctly? Is it in a way that machines and humans can parse? Some of that comes up in how people are searching, the terms that people are using. And then all of this gets baked into how we want to structure our docs pages.
So what you can't see is some of the metadata behind the scenes for these pages, which includes additional keywords so that we're flagging to search engines, to the AI agent, that this is something that's relevant. That gets us back into the traditional SEO side of things. Which again is why I say machine learning, deep learning, was the early segue into these AI agents and how people interact with them, and this concept of data ingestion.
All this has been around for quite a long time. And so we say for our docs, if we are following SEO best practices, we should be able to also do really well with these AI agents, getting them, you know, the right information structured in the right way so that they find it as well.
Simon Maple: Yeah. Putting your future hat on, do you see yourselves potentially looking at switching entirely over to just ask AI at some stage?
Amara Graham: Yeah, that's such a great question, because again, I come back to watching some of the user behavior in here, and I don't have to click into these for you to be able to see this little number sign. This is how many interactions, or back and forths, people are having. I'm sure Kapa has a better definition than that, but that's what I'm going to call it.
And you can see that many of these things are these one offs: what is DMN? They're just looking for a basic definition. Maybe they're going and viewing this link and they're quite happy with that and bouncing. So I look at this and say, based on our current user behavior, if the Ask AI agent is responding well to these one offs, either questions or things like just a statement or a keyword, then I think I'll just leave it as is.
But again, that comes back to, if you're evaluating any of these similar tools, it depends on your user base. If your user base is totally comfortable asking fully well thought out questions, and they're getting what they need, then I would say that search, keyword search I guess I should say, is not something that they're going to use.
That's fine. But for something like Kapa, it seems to do both pretty well. So if people want to just shout terms at it, it's going to respond. If you want to give it good questions, it's going to respond. And in that case, I'm like, cool. Business as usual.
Simon Maple: It just depends how chatty people want to be.
Amara Graham: Exactly. And I didn't pull up a good example before we started, but we have some people that get very chatty with it. And I'm just like, oh, okay, this is great. And maybe that also says something about how we're seeing the evolution of pair programming with some of these tools, and as people get more comfortable with copilot tools, things that they're able to interrogate as they write code or draft documentation, we might see behavioral changes.
But yeah, early on when we first implemented it, I was like, I'm glad this is a robot, because people just shout things at it. Give me this parameter thing. But instead of saying even that, they're just like, parameter name, keyword this. I'm like, oh, it has no feelings. It's probably good.
Simon Maple: Awesome.
Amara, I really appreciate you sharing this, and thank you for the under the covers view of what Kapa looks like from the dashboards and from the conversations point of view. And I think you mentioned at the end of the last episode that you very much recommend people try this out.
I think you're having good success with it within your developer experience.
Amara Graham: Yeah, absolutely. And as I say, it's not a sales pitch. I know there are a number of vendors out there. What I always tell people who are in the management or leadership position for choosing these tools is:
Make sure it works for you, your team, your community. So maybe you have a community that's a lot more receptive to uncertainty, and you want it to do some amount of hallucination because you want to get that as, I don't know, user feedback and validation. Go for it. For our users, though, I know I went very structured.
Lots of guidelines and guardrails, because I know the tolerance for hallucinations in our community is quite low. So again, it's just a matter of knowing your community, knowing your users. And then my biggest thing that I want to tell people is, don't just turn it on and ignore it. Make sure that you're monitoring it.
Set thresholds, like I mentioned, to say we need to take action or do a deep investigation at a certain uncertainty threshold. But it's a really great tool, and it's great for user validation. It's great for validating things like, are our docs working correctly? Because that's a hard question to answer.
And this is a great tool and a great way to have that answer.
Simon Maple: Thank you very much for not just one, but a couple of parts to this episode. We appreciate you joining us, and thank you for sharing all that very useful information.
Amara Graham: Yeah. Thank you so much.
Simon Maple: And thanks all for listening and hope you tune into the next session.
Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.
Podcast theme music by Transistor.fm.