Published Feb 28, 2025

From Noise to Knowledge using AI + Search featuring David Tippett

David Tippett and Kyle Forster discuss AI-assisted search and its impact on development.


AI Assisted Search: What are the benefits and how do we implement for maximum value?

In this episode, we’re diving into the transformative power of AI-assisted search and its impact on developer workflows. Developers are constantly looking for ways to streamline their processes and reduce repetitive tasks, and AI is now at the forefront of making this a reality. With AI-driven search, developers can access critical automation insights faster and more accurately than ever before.

Join David Tippett, Search Engineer at GitHub, and Kyle Forster, Founder of RunWhen, as they discuss how AI-powered search is reshaping DevOps workflows. Through a deep dive into real-world use cases, they’ll show how AI helps developers cut through the noise, surface valuable data, and optimize operational efficiency. Get ready for an insightful exploration of the future of AI-driven automation and how it’s changing the game for developers. View full transcript below.

Transcript:

Kyle: Everybody, thank you so much for coming. We have a fantastic guest on with us this evening. David, I want to sing a whole bunch of your praises, but maybe first let's do introductions. As everybody knows, I'm Kyle Forster, founder of RunWhen. Dave, why don't you introduce yourself quickly, and then we can get into it. This is one that I've been really excited about, really, really excited about.

David: Likewise. Likewise. It's always fun talking with you. Yeah. Hi, I'm David, search engineer at GitHub currently. I have a very short history in search engineering and a very long history in data pipeline engineering, et cetera. But super happy to be here and talk about search because it's so important.

Kyle: Yeah. Obviously this is a personal interest and passion of mine. I wrote two different versions of our own internal search engine that sits underneath our Assistant algorithm. I'm looking forward to this, frankly, because I have a whole bunch of questions I wanted to ask about the right way to do search. But I did want to open with something you said right before we turned on the cameras. I'm just going to read off my notes: 'I see all kinds of people who think that AI is the answer to their search problems, rather than fixing search as the answer to search problems.' That resonated so deeply with me. Just as an opener, I was hoping you would unpack that for everybody who's listening in.

David: Yeah, I think there's this inherent belief that, 'oh, we get AI search, we plug it in, and it becomes instantly better.' But one of the problems you run into when you do this is: okay, I've still got muddy search, and then I add AI on top of it, and now it's muddy AI search. That's where you start to see all of these weird things we saw when people started using LLMs with RAG. RAG, for those who don't know, is retrieval augmented generation, where you search some results ahead of time and feed them to an LLM in an attempt to make it better. But a lot of times, if your search wasn't good to start with and you feed those results into an LLM, it's just going to confuse it, and it's going to make your results seem weirder or give very strange responses. I saw one company that rolled this out with automatic responses to emails, and they were shocked when their AI responder Rickrolled someone: it sent a link to the Rick Astley video as the 'here's how to resolve this' answer. So this is where I'm saying it's really important to treat your search problems as search problems. It's not just something any software engineer can step into. It's an area with a lot of specialization, a lot of tooling, and you almost need a data-science or ML type person, maybe not necessarily ML, but data-science for sure, to get in there and start evaluating how to make search better first.
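A minimal sketch of the RAG flow David describes, in Python. Everything here is illustrative: the tiny corpus, the deliberately naive keyword-overlap retrieval, and the model name are assumptions for the example, not anyone's production system. The point is that whatever the retrieval step returns, good or muddy, is exactly the context the LLM has to work with.

```python
# Minimal RAG sketch (illustrative only): retrieve documents with a naive
# keyword-overlap "search", then hand them to an LLM as context. If retrieval
# returns poor matches, the generation step inherits the problem.
from openai import OpenAI  # assumes the official openai Python client

DOCS = [
    "To resolve 502 errors, check the upstream service health endpoint.",
    "Rotate API keys from the settings page under Security.",
    "Never Gonna Give You Up: a collection of music video links.",
]

def naive_search(query: str, k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by raw word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def rag_answer(query: str) -> str:
    context = "\n".join(naive_search(query))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(rag_answer("How do I fix a 502 error?"))
```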

Kyle: I don't want to steer the conversation too much towards the current day job. I'm really curious because I feel like for us, I've faced this problem a lot. When we say search, our search engine doesn't return web pages. Our search engine returns automated tasks that somebody should run in their environment to fix something. And the result is very different. When you think about code search in your day job, I guess a lot of people immediately think like, well, web search and code search are kind of the same. I'm just guessing that they're very, very, very different.

David: Yeah, actually, this is a really interesting place, because GitHub has taken a hard branch here, and we actually have two internal search systems. This is all public information, so I'm happy to share it. The first system is code search, and it is specifically tailored and tuned to just returning code results, because in text search, traditionally, you break words apart into tokens, and each token gets put into a dictionary, basically, that links back to all the documents that reference that word. When GitHub was going through this transition and trying to make code search work on a text search engine like Elasticsearch, they were just like, this is not working. And that's where our internal code search engine came from. So they are very different. And even further, there are a lot of use cases for search that people wouldn't necessarily think of. For example, you were just talking about finding and returning scripts that people should run. Search can also be used as a recommender system: hey, you previously looked up this, so I'm going to recommend that you also look at this. Or even further, there are some really cool things you can do, like deduplication: hey, you are trying to create this issue, but an issue that looks very similar already exists. Do you still want to create this issue, or do you want to go contribute to the other one? Those are examples of places where you could use search that aren't traditionally what you think of.
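A toy version of the inverted index David is describing, to show why word-oriented tokenization that works fine for prose serves code poorly. The two documents are invented for illustration.

```python
# Toy inverted index: each token maps back to the documents that contain it.
# Word tokenization works for prose, but for code it throws away exactly the
# punctuation (dots, parentheses, operators) a developer may be searching for.
import re
from collections import defaultdict

docs = {
    1: "search engines break text into tokens",
    2: "def tokenize(text): return re.findall(r'\\w+', text.lower())",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in re.findall(r"\w+", text.lower()):
        index[token].add(doc_id)

print(index["tokens"])    # {1}
print(index["tokenize"])  # {2}, but a search for the exact snippet ".lower())"
                          # has nothing left to match once punctuation is stripped
```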

Kyle: I've always viewed search and recommendation engines as, under the covers, basically mathematically the same thing. I think a lot of people think of them as wildly different. You're the expert. What do you think?

David: I think they're different in the sense that a lot of the methodologies you might use with them are different. Take text search, for example. This is a really interesting one. When a lot of people type in a search bar and results come back, what they think has happened in the back end, and what may be happening, depending on which search you're using, is: it looked for the words in my query, it found them in matching documents, and it returned those documents. That's a baseline search experience, and that's what most people have. The next tier of search experience is building a tailored search. What I mean by that is: all right, you're looking for an issue. You probably aren't looking for an issue in just any repo; you might be looking for an issue in repos you've contributed to. You might be looking for a recent issue. You're probably looking for an issue that a lot of other people have encountered, so maybe an issue with a very high page view count. This is the difference from the baseline experience we described earlier, when I said you should just improve your search. These are the things I'm talking about: search is not just this flat plane of 'I look up text, I find text.' It's all these other facets. Hey, a lot of people keep coming back to this issue, so we're going to boost that one. Or hey, this issue is six years old, so it's going to get decreased in the results, because chances are you're not looking for an issue from six years ago. Those are the types of things you might look at. And recommender systems can be viewed as different, but they're starting to become much more the same. Take vector search. Vector search is one of the key areas where recommender systems are used: okay, you have tags or attributes, this person has a salary in approximately this range, has interests that match this, so I'm going to recommend this item because I think it fits. That's traditionally been done with vectors and computed with a bunch of math. And that's what we're starting to see in traditional search as well: people using vectors to represent text and then asking, hey, are these documents semantically similar? Do they seem like they have similar content and composition?
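One concrete way the boosting and decaying David mentions often looks in practice is a function_score query in an Elasticsearch or OpenSearch style engine. This is an illustrative sketch, not GitHub's actual query; the field names ("page_views", "created_at") and tuning values are invented.

```python
# Illustrative function_score query: match the text, boost issues a lot of
# people keep coming back to, and decay very old issues toward the bottom.
query = {
    "query": {
        "function_score": {
            "query": {"match": {"title": "crash on startup"}},
            "functions": [
                # boost by popularity, with log1p so huge view counts don't dominate
                {"field_value_factor": {"field": "page_views", "modifier": "log1p"}},
                # down-rank with age; a six-year-old issue ends up scoring near zero
                {"gauss": {"created_at": {"origin": "now", "scale": "180d", "decay": 0.5}}},
            ],
            "score_mode": "multiply",
            "boost_mode": "multiply",
        }
    }
}
# You would pass this dict to the client's search call, e.g.
# client.search(index="issues", body=query) with the official Python client.
```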

Kyle: You know, internally we used to talk a lot about Google Maps search, because it was just so intuitive. If you're searching for restaurants and you're zoomed pretty far in, it means one thing. But if you search for restaurants at a country-level view, you probably don't want all the restaurants in that country. Go figure. It's very dependent on context in addition to the query.

David: Yeah, exactly. And I think this is actually where it's become really hard as a search engineer, because, I mean, you called out the big G word, Google. They have done such a good job of narrowing down exactly what people are looking for that it almost spoils your search experience everywhere else. You go to some little mom-and-pop site, you search, and you're like, why can't I just find the tinsel-shaped cookie cutter I'm looking for? And it's because search is serious engineering. It takes a lot of thought and a lot of tooling to get it right.

Kyle: I'm curious what you think. In my view, watching Glean just beat Wiz's record as the fastest company to a hundred million dollars in ARR, I mean, an amazing, amazing thing. I feel like, partly because of RAG, we're seeing this renaissance in enterprise search.

David: Yeah, I think.

Kyle: What do you think?

David: I mean, it's exciting on one hand, because the focus has turned to search, which a lot of times gets put on the back burner. It's treated as just a feature by so many companies, and I'll say even GitHub included. This is the one thing I like to talk about a lot, because it helps my job: search is everywhere in GitHub. It is not just the search bar. You click in the search bar and search some things up, and that's the very obvious use case, but it's starting to show up in other places that are really interesting. Our issues view right now, for example: when you go to github.com/org/repo/issues, that view is actually powered by search. Instead of just a MySQL view, we've said, hey, look, people come here often enough looking for tags, looking for words in issues, looking for these text terms, that it's now backed by search instead of a MySQL database. So with regards to RAG and how it's made search popular again, I think it's really great, because I do think search is really important and I do hope more people pay attention. But the one thing that's very problematic is when people think, 'oh, RAG is a solution to my LLM accuracy problems' or 'RAG is a solution to my search problems,' when really, you put bad in, you're going to get bad out, you know?

Kyle: Yeah. Well, my team and I prototyped a small RAG pipeline, both for our own use and as a special project for one of our customers, and it was pretty stunning how quickly we could make a really bad RAG.

Kyle: Yeah. It was a weekend, and it sucked. After a week, it got a little bit better, but not that much.

David: Yeah. Well, I think that comes down to tooling again. For those in the search space, we have this idea of relevance, and the way we basically measure relevance is: we have some query, we return some results, and then we have real people come and say, 'hey, this result was a really good match for this query' or 'it was a poor match.' Based off of those judgments, the results produce a number, and that number is a calculation of how good or bad the search was, based on how high up the best result was. So this relevancy tooling: you start with a handful of queries and a handful of results that have been judged, and then you move on to potentially hundreds of other queries. Once you have that in place, it becomes really quick to say, 'okay, I made a change, I tested it. Did the overall results get better or did they get worse?' It's just not something most teams know about, because it's a very search-specific set of tooling.
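A minimal sketch of that evaluation loop in Python. The judged queries, document ids, and the choice of mean reciprocal rank as the single number are all illustrative; real relevance suites typically use graded judgments and metrics like NDCG, but the before-and-after comparison works the same way.

```python
# Roll human judgments up into one number you can compare across changes.
judgments = {
    "pods crashlooping": "doc-42",   # query -> the result judged best by a human
    "rotate api keys": "doc-7",
}

def mean_reciprocal_rank(search_fn) -> float:
    """search_fn(query) must return a ranked list of doc ids."""
    total = 0.0
    for query, best_doc in judgments.items():
        results = search_fn(query)
        rank = results.index(best_doc) + 1 if best_doc in results else None
        total += 1.0 / rank if rank else 0.0
    return total / len(judgments)

# mrr_before = mean_reciprocal_rank(old_search)
# mrr_after  = mean_reciprocal_rank(new_search)   # did the change help or hurt?
```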

Kyle: I'm curious about relevancy, because we've just started experimenting a little bit with that. To augment the test cases we built by hand, you know, 'given this query, here are the top five results' that we curated manually, we're trying to use LLMs to expand that from a couple hundred cases out to a couple thousand.

David: Yeah.

Kyle: For what it's worth, and this is very recent on our end, we saw a gigantic change by using DeepSeek, a reasoning model, even though o3 probably would have done it, or o1. A reasoning model versus a traditional one seemed to match our intuition for relevance ranking...

David: Hmm.

Kyle: insanely well, to just basically help us produce unit tests. We're just using it to produce lots of test cases. That's kind of our universal use for it.

David: Yeah, no, I think there aren't many companies doing tests like that. I would say you guys are the outliers if you're testing relevancy. Getting down to building a really robust metric around that is very helpful, because, like you said, making sure a result is in the list is kind of the first tier of relevancy. Then you want to see how high it is in the list as well, which is really important. But as far as using LLMs, I'm seeing a lot more companies nowadays embracing LLMs for creating what we call 'judgments' in the industry, which is: given this query and these results, which is the best? And there's some really interesting research happening right now around creating profiles for the judges: say you are a product manager, and these are the attributes of this person, now rank these. That adds another layer of depth, where you can get past 'this is the query, these are the results' and start fine-tuning for who your expected users are. Because it's challenging. It's very challenging to create these data sets for testing, and that's where LLMs have come in and lowered the bar so much. It's just so much easier to create these data sets now.
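One way this could look, purely as a sketch: prompt an LLM to grade results for a query from the point of view of a persona. The prompt wording, model name, and persona are assumptions for illustration, not anything GitHub or RunWhen has described shipping.

```python
# Illustrative LLM-as-judge sketch for generating relevance judgments.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(query: str, results: list[str], persona: str) -> str:
    prompt = (
        f"You are {persona}.\n"
        f"Query: {query}\n"
        + "\n".join(f"[{i}] {r}" for i, r in enumerate(results))
        + "\nReturn JSON with a 0-3 relevance grade for each result "
          "and the index of the best one."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(judge(
    "pods stuck in CrashLoopBackOff",
    ["Restart the deployment", "Check container logs", "Upgrade billing plan"],
    "a product manager triaging production incidents",
))
```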

Kyle: Yeah, I don't want to go too deep into our own personal experience, but I would say, because we wrote the V1, V2, and V3 of our search engines pre-LLMs, the difference in our ability to generate test data once LLMs arrived was just gigantic.

David: It's huge because it's time consuming and it, you know, it can also be hard to get buy-in from all the different people that you're like, Hey, can you help me build this test data? It's like, nobody wants to spend their time doing that, you know?

Kyle: Yeah, I would say that was kind of the biggest bottleneck we had in many cases. And as an organization, as we were going through this, back two years ago, give or take, when we were doing search V1, 2, and 3, we were having all of these debates very early on: should we start from a keyword, BM25 basis, or should we start much more from a semantic search, embeddings basis? I feel like that was a bit of an esoteric debate two and a half years ago. Now, with extraordinarily high-quality embeddings one simple LLM call away, it feels like embeddings have become more of the lingua franca of engineering in our industry, and you can say 'embeddings' without a lot of people going, wait, you want to represent text as a which, as a what?

David: Yeah.

Kyle: I'm curious, how would you revisit that debate now? And is that something that you end up seeing in your day-to-day conversation?

David: All the time. I mean, we're evaluating vector search for GitHub, of course, and we re-evaluate it pretty frequently. I think there are two things that make embeddings really challenging to adopt. Number one, it's expensive, much more expensive than traditional text search. Traditional text search can work very well when it's written to disk; it can be retrieved efficiently. Vector search, more often than not, needs to be all loaded in memory to get good results. That's okay, given its memory footprint is smaller than the original data, but it's still expensive. The other part, and this is the hardest part to get around, is that your user base has to be in tune with the fact that the way they should search is changing. For example, on GitHub, I would say our average query is probably anywhere between two and six tokens, which is roughly two to six words. That doesn't work super well with a lot of semantic search models; people aren't asking questions and expecting responses in that way. So I think there's a divergence emerging where, for things like RAG, it makes a ton of sense, because that is how people are interacting, but it still has its shortcomings. And maybe we'll see some of that change with, what is it called, sparse embeddings. Sparse embeddings, I think, are going to help a lot there. But when it comes to precision of search results, especially if you're using super precise, domain-specific terms, that's where BM25 shines. You almost just can't beat it.
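To make the short-query point concrete, here's a tiny sketch using the rank_bm25 Python package (an assumed choice of tooling; any BM25 implementation behaves similarly). With a two-token, domain-specific query, the exact rare terms dominate the score.

```python
# pip install rank-bm25
from rank_bm25 import BM25Okapi

corpus = [
    "CrashLoopBackOff in the payments deployment after the v2.3 rollout",
    "How to configure ingress TLS certificates",
    "General discussion about roadmap priorities",
]
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

query = "crashloopbackoff payments".split()   # a typical two-to-six token query
print(bm25.get_scores(query))                 # the first doc wins by a wide margin
```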

Kyle: Yeah. I just saw a question pop up from Abdel. I've put it up on the screen because it feels very relevant to what we're talking about at the moment. I had a question of my own, but I want to let you respond to this one first, and then I have a burning question for you right after.

David: Yeah, yeah, absolutely. So the question, as I read it: "Would it be safe to assume that the embedding model is more responsible for relevance than the vector DB itself?" I'm going to throw two things out there. The question is really asking: is it more important to have good embeddings, or do you also need a good vector DB? Does the quality of the vector DB affect relevance? I'd say there are two parts. At the core, the embedding is going to be what drives the most relevancy, with the caveat that you can host that embedding in a vector search engine in a way that it is never able to be retrieved well. That's because there are a couple of different ways to represent those embeddings. Most people are familiar with HNSW, hierarchical navigable small world graphs, which has become the default for vector search. But there are other ways. There's IVF, which is, oh gosh, I don't actually remember off the top of my head what that stands for, but it's a more bucketed way to do vector search. And the way you host those embeddings can have a huge effect on relevancy. With IVF you could see, I think it was something like a 5x decrease in relevancy if your bucketing is wrong, which is huge, right? So you'd have to know whether your vector search engine supports IVF or not. You'd also have to consider other things. I mentioned different attributes of search, so let's say, for example, page views. Maybe I want to boost documents that have really high page view counts. In order to combine my boost for high page view counts with the vector score (vector search will retrieve a score, and then I want to boost or decrease that score), your search engine also has to be able to accommodate that. So I would say the embedding space is where I would spend the most work to get good relevance, but you also need to check the features of your search engine to make sure it can support your embeddings. Even embedding lengths: I think Elasticsearch is now up to around four thousand dimensions it can hold, versus something like FAISS, which can hold, I think, fifteen to twenty thousand. So there's a lot to consider when it comes to the search engine itself.
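An illustrative FAISS sketch of the two index families David mentions. The data is synthetic and the parameters are made up; the point is that the same embeddings retrieved through an IVF index that probes too few buckets can miss neighbors an HNSW (or flat) index finds, which is the kind of relevance drop he's describing.

```python
import faiss
import numpy as np

d, n = 128, 10_000
xb = np.random.rand(n, d).astype("float32")   # synthetic document embeddings
xq = np.random.rand(5, d).astype("float32")   # synthetic query embeddings

# HNSW: graph-based, no training step required
hnsw = faiss.IndexHNSWFlat(d, 32)
hnsw.add(xb)

# IVF: bucketed; recall depends heavily on nlist (buckets) and nprobe (buckets searched)
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, 256)
ivf.train(xb)
ivf.add(xb)
ivf.nprobe = 1          # probe too few buckets and nearest neighbors get missed

print(hnsw.search(xq, 5)[1])   # neighbor ids from HNSW
print(ivf.search(xq, 5)[1])    # often different (worse) with nprobe this low
```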

Kyle: I mean, it's fascinating. I just want to loop back for a second, because I'm still chewing on the point you brought up: when your typical search query is two to six tokens, keyword search is fundamentally set up to succeed. I'm assuming implicit in that is that two to six fairly long-tail tokens is kind of the norm.

David: Yeah.

Kyle: Whereas when your search is a full English-language question, phrased as a complete sentence, that's probably mostly very common tokens, so it's semantic. Semantic versus keyword becomes a totally different search technology depending on the query, not on the target.

David: Yeah, actually, this is one of the interesting things the OpenSearch project did a fair bit of research on when this was first being investigated: this idea of which approach is "better", across all the sample data sets out there. Almost always, they found that some combination of the two produced the highest quality results. For your super long-tail queries, yeah, semantic search is going to be your bread and butter. But for short queries, there's much more value in using traditional BM25 with lots of boosts and tuning, et cetera. Combining the two, you can almost always end up in a middle ground where you say, all right, our results are really good and we're covered on both ends.

Kyle: That's interesting.

David: Yeah. And actually, that's a whole branch on its own: scoring for BM25 documents and vector search documents lives on two completely different planes. One is, say, zero to one, and the other is an unbounded range of numbers. So it gets challenging: how do you combine these documents well? How do you build a baseline? For some people, it's actually using machine learning models to rescore after you've done the retrieval piece. But then it gets challenging again, because how do you paginate that? Okay, I have five pages of results. How do I combine these results stably across five pages?
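Reciprocal rank fusion is one common way to sidestep the incompatible score scales David describes, since it only looks at ranks, not scores. This is a generic sketch, not a claim about how GitHub combines its result lists, and it doesn't by itself solve the stable pagination problem he mentions.

```python
# Reciprocal rank fusion: merge ranked lists without reconciling their scores.
from collections import defaultdict

def rrf(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc ids; k dampens the advantage of rank 1."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["issue-12", "issue-7", "issue-3"]
vector_hits = ["issue-7", "issue-99", "issue-12"]
print(rrf([bm25_hits, vector_hits]))   # issue-7 and issue-12 rise to the top
```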

Kyle: I'm glad the pagination problem isn't just us.

David: No, it's not just you. Stable pagination at GitHub scale has become such an incredible challenge, because I have people who genuinely have use cases to go past five hundred pages in their issues. And I'm like, why do you have five hundred pages of issues? Can we work this down? They're like, no. And I'm like, great, so I have to support that. Now I'm going to go work on it.

Kyle: Fascinating. Along these lines, because I've seen that too, it feels like there's a fair amount of literature saying that, in general, combining BM25, or call it general keyword search, with semantic search leads to better results. Do you think that's as true for code and other structured data as it is for English-language, natural-language search targets?

David: I don't know that I can really speak to that. I have not personally tested it with code search. And I think this is the other challenge, actually, something I'm still kind of simmering on with regards to the GitHub search page: in my best-case world, I want to turn the GitHub search page into a single pane of results. You make a search, we search in code, we search in issues. Right now it's kind of a tabbed search, and I just don't feel that's the best. I guess you would kind of need hybrid search, because, and I actually haven't read the paper on code search, I don't touch that within GitHub, but I believe it's represented as vectors. If that's the case, then yeah, you would kind of need hybrid search to figure out, hey, are you searching for an issue or are you searching for code? Does that answer your question, or did I miss the point?

Kyle: No, no, no. I mean, to your point, it turns out that across GitHub, and certainly across ours, multiple different search engines for multiple different uses actually makes a ton of sense, rather than one search infrastructure for everything. We have this kind of crazy case, which for us turns out to be the very common case at the very core of the IP of our product: somebody describes a problem, and our search engine, our Engineering Assistants, are expected to respond with all of these different automation scripts that are a solution to that problem, but there's no lexical overlap between the description of the problem and the title or the documentation of any of the scripts used to solve it.

David: Ah, I see. I see.

Kyle: So an example, using the Kubernetes Online Boutique terminology: the query is 'the cart service is down,' and the right script is 'Check if the online-boutique namespace pods are healthy.' There's not only no lexical overlap, but the conceptual overlap between the cart service and the online-boutique namespace is a knowledge graph question. So we actually do a hybrid of knowledge graph traversal, which turns out for us to be the high-order bit, along with BM25 and a vector search.

David: I was going to say, this feels very much like, oh, there's a word for this... not 'asynchronous,' but there's a way you can represent embeddings where you say, hey, this is my input embedding and this is my output embedding. That allows you to traverse both and say, even though I know you're talking about the cart service, I recognize that it's a component of... yeah, like you said, it's combining that knowledge graph with the text search portion. Because you're going to have to turn that text query into a knowledge graph query to say, all right, where does this belong in the grand scheme of things? And then you can do, what is that called, a radius search, where you search larger and larger radiuses to see if you can find some solution. But no, that's a really interesting problem. Yeah, that's interesting.

Kyle: I'm inspired by, well, I never worked close enough to it to know, so I'm just guessing what the database underneath Google Maps looks like, right? Where a vector representing a restaurant encodes not only the title of the restaurant and any web pages you know about it, but also the location, the cuisine, the fact that it's in the restaurant category, as well as latitude and longitude, so that one search can kill a lot of birds with one stone.

David: Yeah, exactly. Actually, it's interesting, though, because you mentioned vectors there, but at least within OpenSearch, when you do geography-based search, it's more often than not just a vector of location that narrows things down, and then BM25 takes over. The BM25 indexes are built into segments, and those segments are what get searched. So basically you say, hey, these documents match these coordinates within a radius; it grabs all the segments that roughly match that, and then it does a BM25 search across those. I don't know how Google Maps is searched. But yeah, it is very interesting trying to build these coordinate constraints in alongside traditional BM25 search.
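For reference, the radius-then-rank pattern David describes typically looks like a geo_distance filter combined with a text match in the OpenSearch/Elasticsearch query DSL. The field names and coordinates here are invented for illustration.

```python
# Illustrative geo-filtered text query: the filter restricts candidates to a
# radius, and BM25 ranks the text matches inside it.
geo_query = {
    "query": {
        "bool": {
            "must": {"match": {"name": "ramen restaurant"}},        # BM25-ranked text match
            "filter": {
                "geo_distance": {                                    # only docs inside 2 km
                    "distance": "2km",
                    "location": {"lat": 40.7128, "lon": -74.0060},
                }
            },
        }
    }
}
```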

Kyle: For us, it turned out to be critically important because most, not all of the time, but most of the time, the high-order bit when somebody gives us a query of like the cart service is down is trying to figure out where in the infrastructure the cart service actually lives. What are all of the things that we can do for that general area of the infrastructure? Make sure we're not accidentally troubleshooting prod when the question was about dev. And then kind of re-rank within that space to say, all right, now within that space, all the other things matter, but it doesn't matter if we're troubleshooting the pods that are down at some point.

David: And that's also where that LLM understanding is really important, because you can generate a list of things that need to be checked. You know, all right, where's our namespace? Are we talking about dev or prod? Then go search and find things that kind of match this within our knowledge graph. That's where I like the agent model a lot, although it can be potentially very expensive.

Kyle: I want to say, Abdel actually had one more question that I wanted to flash up on the screen, and then I know that we're over time and should wrap up, but let's try to squeeze in this one more question.

David: Absolutely, yeah. So: "How do we approach selecting the embedding model for our use case, namely observability and SIEM use cases that aren't benchmarked for the famous models out there?" This is actually really tricky, and I've got a very hot take on this, which is that I don't think most people should be embedding their observability data. And I say most people because so many of the models out there that are trained and readily available are semantic models; they're meant to understand sentences. You'd really want one trained specifically on observability or security use cases, and unfortunately, not a lot of people are open-sourcing those right now, because it's their company's internal product. Their product is producing those embeddings and doing that finding for you. Which does raise the question of why we have so many semantic models out there. But honestly, I don't know that I have a good answer for you, other than also asking yourself: what do you anticipate you'll get out of this? Do you believe you're going to get better results by using embeddings, and why is that? How do you plan on using them? That's where I start with most people, because for most observability use cases, embeddings are just very heavy and very expensive.

Kyle: Yeah. I can answer for us on the RunWhen side too, since I couldn't tell whether the question was also aimed at us. On the embedding model, I touched on it a little bit: we use a really simple model. We actually just use DistilBERT for the text part. We do a lot of query processing, and then we use something literally as simple as DistilBERT. We got essentially the same results from DistilBERT and OpenAI, because the text actually only represents a small part of the overall vector. All of the words in a query represent a small part of the vector; where that query sits in the infrastructure represents a much larger part, and who the user is and their history is a material part as well. I think it's the point you made earlier about trying to get all of these other hints about what the user's intent is. We spend a lot of time finding whatever signal we can about user intent and baking it into this really, really big vector, and what the user actually typed is only some of that intent.
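A rough sketch of that composite-vector idea: a small DistilBERT text embedding concatenated with other intent signals. The feature dimensions and the way the extra signals are produced are invented for illustration; this is not RunWhen's actual pipeline.

```python
# Text embedding (DistilBERT, mean pooling) concatenated with non-text signals.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

def embed_text(text: str) -> np.ndarray:
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state    # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()      # mean-pooled sentence vector

def query_vector(text: str, infra_scope: np.ndarray, user_history: np.ndarray) -> np.ndarray:
    # The typed text is only one slice of the final vector; location in the
    # infrastructure and user history carry much of the intent signal.
    return np.concatenate([embed_text(text), infra_scope, user_history])

vec = query_vector("the cart service is down", np.zeros(32), np.zeros(16))
print(vec.shape)   # (816,) = 768 text dims + 32 infra dims + 16 history dims
```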

David: Yeah. And that's what Google is famous for, is that their just knowledge and understanding of user intent is, I think, so much more vast than people give them credit for. And that's why they're able to say, 'hey, David's a software engineer. When he looks up Ruby, he's probably not looking for gemstones.'

Kyle: Yep. A funny story on personalized search and my old roommate, but we'll talk about that some other time. David, this was so wonderful to have you on. I love these kinds of conversations. I mean, this is just like the highlight of the week type of thing. Before we totally close out, though, there's one question that I always like to ask people who come on. What career advice would you give to your former self?

David: It's probably two things. One: ask ten times more questions than you think you ought to. So much of my career was spent trying to figure everything out on my own, expecting that I should already know these things, when the truth is nobody just knows it. Everybody has to learn it somehow, and asking questions can be one of the quickest ways to get there. And the second part is: just ship it. There have been so many times that I've puzzled over the right way to do things, the most secure architecture, the most robust design, et cetera. More often than not, you just need to ship it in order to start getting that feedback and start evaluating: did I do it right? Was there something I could have done better? There were a handful of cases where I just shipped it and ended up rewriting the service three times. But by the third time I'd written it, it was ten times better than if I had spent all that time trying to do it right from the start. I just never would have gotten it right.

Kyle: Sage advice, man.

David: That's it. That's what I'm here for.

Kyle: Thank you so much for coming. This was a real treat, a real highlight, I think, for everybody that tunes into these things.

David: I'm glad to be here, yeah. Thanks.

Kyle: For everybody else, we'll be on again soon. I believe we have one more coming up before KubeCon London, and a fantastic lineup coming up afterwards. We'll see you very shortly. Thanks all for tuning in.

David: Thanks.

RunWhen is an AI-powered search & workflow generation platform that surfaces context-aware automation across your stack to reduce alert fatigue and provide support for 24/7 self-service. Import thousands of AI-native tasks to create real-time workflows and automate repetitive operational tasks in minutes. Book a demo here.

🔗 Helpful Links:

Discover RunWhen: https://www.runwhen.com/

Follow us on LinkedIn for live events and updates: /runwhen

More from our guest, David Tippett (@TippyBits): https://tippybits.com/
