Published Jan 17, 2025

The Difference Between Coding and Engineering featuring Ray Myers

Kyle Forster, CEO, and Ray Myers, Craft vs. Cruft host, dive deep into the nuanced world of software development, exploring the key differences between coding and engineering.


Coding vs. Engineering in the Age of AI Assistants

The tech landscape is rapidly evolving, and with the rise of AI coding assistants, the distinction between coding and engineering has never been more important—or more blurred. In a recent conversation, Kyle Forster, founder of RunWhen, sat down with Ray Myers, a seasoned technology leader and the newly appointed Chief Architect at All Hands AI, to unpack this critical topic.

Ray brings a wealth of experience to the discussion, blending deep technical expertise with a keen understanding of leadership and organizational dynamics. As a thoughtful skeptic of AI, he challenges conventional wisdom and raises critical questions about the long-term implications of AI tools on the craft of software development. For Ray, engineering is about more than writing code—it’s about architecture, problem-solving, and the systems thinking that underpin successful projects and teams.

In this blog post, we’re sharing the full transcript from their conversation, where Kyle and Ray explore how the role of engineers is changing in an AI-powered world, why understanding the distinction between coding and engineering is vital, and how teams can rise to meet the challenges of this new era. If you’re a DevOps engineer, SRE, or platform engineer—or anyone passionate about advancing the state of the art in tech—this conversation is for you.

Transcript:

Kyle: Fantastic. Ray, we're live.

Ray: Awesome.

Kyle: For everybody coming on, I'm Kyle Forster. I'm the founder of RunWhen, and I just want to say thanks a ton for tuning in to this episode. I'm so excited to have Ray with us. Ray has been an invaluable source of advice and coaching, and every once in a while just a shoulder to cry on, through the last couple of years of the RunWhen journey—through what has been, Ray, I think on your side, multiple different roles now in multiple different companies. Let me turn it over and let you introduce yourself, but let me just start out with a great big thank you. We all appreciate it.

Ray: Yeah, well, and thank you for having me. So I'm Ray Myers. I am a software engineering tech lead. I've been in the industry seventeen years, across a number of different kinds of organizations and industries. Just recently, I started as Chief Architect at All Hands AI. But we actually booked this before that, because that is such new information. Fortunately, they have allowed me to appear, but the opinions that we're going to talk about are ones I've developed as a software engineering tech lead and kind of AI-skeptical pundit. I don't know that they'll reflect exactly what you'd hear from my employer—I'm not officially their mouthpiece on this one. But I will say that there is a reason why they're the ones I went to work for.

Kyle: Yeah, I think that might be it—"there's a reason that I'm there, but this is not official." It's probably the cleanest, best intro to these types of things that I've heard so far.

Ray: Yeah, but fortunately, they do put their thoughts out there on their blog. So you can find out from the founders' mouths exactly where they see these things going as well.

Kyle: And on your side, with Craft vs. Cruft and everything else—I mean, you've been very, very active outside the auspices of the day job in this area for quite a while. I'd love to get into all that, but let's see if we have time, because there was one thing we were trading emails about where I just thought your answers were more thoughtful than anybody's I'd heard on this topic. And I was kind of hoping that you'd share a little bit with everybody, so it's not just a private conversation between us. I'm going to summarize the conversation as: coding versus engineering.

Ray: Yeah.

Kyle: It became the title for this podcast because I liked it so much. Tell me your thoughts.

Ray: So first off, there are people who want to argue about whether or not software development counts as engineering in the first place—whether software engineering is a legitimate engineering field. Hillel Wayne's probably got the best treatment I've seen on that, where he interviewed a bunch of people from traditional engineering fields. I will just say I come out at yes, but it's not really important to this distinction. We're just talking about whether software development, as it exists in a skilled professional context, is the same thing as coding. And my argument is that it contains coding, but it contains many other things. When we try to advance the state of the art, we must attempt to fully understand it. And this has, I think, been a really important thing to focus on with AI coding assistants, for instance.

Kyle: I mean, there were two pieces of academic work that you kind of sent me. And I was wondering if you could talk about them and summarize them a bit. I can put the links for anybody who's listening in. I can put the links in the chat after the show.

Ray: Yeah, those are both, I think, published in IEEE journals. One is from 2015: "I Know What You Did Last Summer: An Investigation of How Developers Spend Their Time." The other one—both great names—is from 2019: "Today Was a Good Day: The Daily Life of Software Developers." So this is a matter of consensus; we're just talking about the details at this point. You can even look at just the amount of time people spend within their IDE, and writing code is still a minority of that. And we spend a lot of time outside the IDE as well. So the simple matter of typing faster is not the same as more productivity. We have a lot of other activities, and sometimes we type things that turn out to be bad. It's kind of common sense, but we tend to glorify the one unique aspect of what we do.

Kyle: You know, that "Today Was a Good Day" paper in particular—when you sent that my way, I was so inspired by it that I actually reached out to André Meyer, the lead author.

Ray: Oh, great.

Kyle: Yeah. We've had two conversations so far, and we're talking about maybe doing some work together. The TL;DR: it was an automated analysis of the way five thousand engineers at Microsoft were spending their time. And to your point, the TL;DR was that on average, on a good coding day, about two and a half hours were spent in the IDE. And that's it. That's not a design day, that's not a testing day—that's a good coding day. And that to me was one of these "wow" moments. Intuitively I knew the number was low—my experience is that the number is low—but I didn't realize that on average the number was that low.

Ray: Yeah. So essentially, if we want to optimize the process, I think we need to understand what's in the process. We need to understand also what we're optimizing for. And so I had laid out, "okay, this is what I think software engineering is; this is roughly what I think coding is." And programming, as distinct from coding, is a distinction Leslie Lamport, a Turing Award winner, harps on in some interesting talks. But whether or not you draw the boundaries exactly where I'm drawing them when I use the terms—we can define them later if we want. The point is just to understand the sheer variety of the tasks that are involved, and the sheer variety of concerns we're trying to optimize for other than just speed of implementation.

Kyle: I mean, I really liked that "Today Was a Good Day" paper because they took time spent in meetings out of the hours. Everybody says, "oh, I'm in so many meetings, I don't have time to code"—none of that counted. I think they pointed to a couple of big things. One was just the cycle time: there's typing new code, and then there's iterating in the CLI—going back and forth between IDE and CLI. Did this work? Did this work? Do I build more? Did this work? The sheer amount of time spent on that iteration I thought was fascinating. And then there were a couple of references to broken environments, and to tests that used to work and no longer work for reasons outside of the developer's code, and the sheer amount of time that that represents. That caught my eye. But I'm curious: if it's a small amount of time that's actually going into typing new code itself, where is the rest of the time going?

Ray: Yeah. Part of it is in understanding what needs to be typed or erased. And that sounds like one thing—just understanding the situation—but it is actually a huge variety of things. We read code in order to do that. We talk to people in order to do that. We go to meetings. We learn stuff about our local context—or, you know, I spend a lot of time in self-study on industrial skills in general. So: understanding. I guess you could, if you really wanted to, say that you are always either editing the code or understanding what needs to be edited. But that massively oversimplifies your interaction with your environment.

Kyle: Yeah, yeah. Or, I think, some big chunk of it is trying to figure out what to do next. And I'm curious—obviously, between your prior role and your current role, you must be spending a lot of time thinking about this part. Where do you think AI tools today fit into the coding cycle, and into the engineering work outside the coding cycle? I'm going to ask you for a prediction here in a little bit, but I'm really curious where you think the tools sit today.

Ray: Yeah. So if we're going to say there's such a thing as coding and such a thing as programming and such a thing as software engineering, I would say that we have right now some pretty impressive coding assistants that don't have a lot of vision for how they affect your ability to program, or how they affect your ability to perform software engineering. And I think the value, then, is making them better—more seamlessly and productively integrated with the rest of that lifecycle, with all the copious other tools we have that make it tick. Does that begin to help you? That's where coding agents should go. But then also, since there are so many other kinds of tasks, perhaps other assistants for other things that aren't coding.

Kyle: Well, let's just zoom into that one for a sec.

Ray: Yeah. I think you work on something like that, don't you?

Kyle: Well, yeah—on the other tasks outside of coding, I'm happy to give you a RunWhen pitch any time, which is the job. But one thing that I really noticed: love it or hate it, in my role at RunWhen I'm still contributing code to our product. And frankly, in an era of AI coding assistants, it's a lot more fun, because I can get a lot done in an hour, whereas before I could get a little bit done in an hour. And most days I only have an hour. But maybe to your point on whether I'm getting coding versus programming right, the way you've laid it out: I had this amazing hour yesterday where I was using a coding assistant that helped with a particular API that I hadn't used before. I probably could have guessed, I probably could have looked up the documentation, but with a handful of prompts this thing had a skeleton of interactions to fetch the data structures that I wanted out of this API that I'd never used. I'm like, wow, that just saved me a lot of time—to your point, learning about how this particular API was structured, et cetera. It also made a bunch of assumptions around time-versus-space trade-offs in the way that it structured the data that came back—assumptions that I could easily see and didn't want. I wanted to make a different set of trade-offs, and so I was able to ask for them. And I think that "hey, I see the time-space trade-off in the current implementation and I would like to make a different trade-off"—correct me if I'm wrong—you would consider that to be a programming topic. Whereas "hey, can I just fish all of that out of this API?"—that's a coding topic.

Ray: Yeah, yeah. So you were reasoning about the behavior of the complete program and making decisions about whether it was desirable. And that might not have even been a correctness concern in your case—it was operational characteristics. That's a great example of where you had an assistant that helped you with the coding part—very blurry line here—but the programming part it didn't really help you with at all. It wasn't equipped to even say that it was making an assumption about how the data was going to be represented, an assumption that would impact that.
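(To make the trade-off Kyle describes concrete, here is a minimal sketch in Python of two ways to hold the same API results. All the names and the record shape are invented for illustration—this is not RunWhen's actual code.)

```python
from typing import Iterator

def fetch_records() -> Iterator[dict]:
    """Stand-in for paging through a remote API, yielding one record at a time."""
    for i in range(10_000):
        yield {"id": i, "name": f"item-{i}"}

# Option A: materialize an index up front -- O(n) memory, O(1) lookups.
# The kind of assumption a coding assistant might bake in without saying so.
index = {rec["id"]: rec for rec in fetch_records()}
print(index[42]["name"])

# Option B: stream and scan -- O(1) memory, O(n) per lookup.
# The "different trade-off" a programmer might prefer for rarely-queried data.
def find_record(record_id: int) -> dict | None:
    return next((rec for rec in fetch_records() if rec["id"] == record_id), None)

print(find_record(42))
```

Neither option is "right"; the point is that choosing between them is a programming decision about the whole program's behavior, not a coding decision about what to type.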

Kyle: I mean, just to continue on this particular story, in this particular case, the next step was that the data structure that I wanted needed to get cached in Redis. And the Redis instance in the test cluster was down. And so there went the rest of my hour, which was a bummer.

Ray: Right, right. That pesky context. And this is like—well, we try to deliver something that works in a demo, and someone's going to need to use it in this context where they've got to worry about whether the Redis cluster is up or not. Where did the time go? I mean, you've described an experience that many people have every day.
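(For readers following along: the guard Kyle's hour arguably needed might look roughly like this—a minimal sketch assuming the redis-py client, with hostnames and keys invented for illustration.)

```python
import json
import redis

r = redis.Redis(host="redis.test-cluster.local", port=6379, socket_connect_timeout=2)

def get_with_cache(key: str, compute) -> dict:
    """Cache a computed data structure in Redis, degrading gracefully if Redis is down."""
    try:
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
    except redis.exceptions.ConnectionError:
        pass  # Redis unreachable: fall through and compute uncached rather than fail
    value = compute()
    try:
        r.setex(key, 300, json.dumps(value))  # best-effort write, 5-minute TTL
    except redis.exceptions.ConnectionError:
        pass  # the broken test cluster is an engineering problem, not a coding one
    return value
```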

Kyle: So just to make sure I complete the story here. The coding assistant helping me so I didn't need to read the API documentation—that fits in your ontology as coding. The time-space decision—"I would like to revisit that particular decision and make a slightly different one"—that fits as programming, but not coding, with a very fuzzy line. And then, hey, it was all for naught because Redis was down anyway, and I had to figure out how to get that back up and running, which took most of my time—that would be software engineering that is neither programming nor coding. Does that sound right?

Ray: It could be. I think that's a reasonable enough place to draw the distinction. I'd also say it would become software engineering if you had to continue to maintain that program over time. You're not really even engaging in something we need software engineering for if you're still just writing the first version of it, which is all you're describing, right? I'm thinking of: what is that program doing in five years? Who is supporting it? When we talk about the long-term effects of really any practice change—which would include using LLMs to help us code—what is the downstream impact on sustainability, on maintainability, right? Which, at the scale of the program you're talking about, might not be a concern yet.

Kyle: That's very, very fair. So let's call that—as a model for this conversation—a maintaining phase. Maybe we include it in overall software engineering, but maintaining is not really programming, and it's not really coding. It's kind of another part of the lifecycle.

Ray: Yeah. While you're maintaining, you will code and program. But it tends to be a lot less balanced towards the in-editor typing time and more towards the wrestling-with-my-context time. And I tend to see maintenance as the principal challenge in the industry. It'd be an exaggeration to say writing new stuff is a completely solved problem, but relatively speaking—I mean, you have a startup because people have existing workloads in Kubernetes.

Kyle: Yes.

Ray: You know, you're writing new code for the sole reason that people have existing code.

Kyle: Very, very good point. Yes.

Ray: You know?

Kyle: Yeah. Well, maybe as a bridge, because I think the maintenance one is interesting: a lot of people who listen to this would define these phases by traditional job roles. Like, hey, you're a software developer, so your job is this phase. You're an ops person—SRE, DevOps, platform engineer—so your job is over on this other phase. Or your job is infrastructure, the horizontal underneath. But as we're all getting a lot more productive—or at least I'm getting a heck of a lot more productive with AI—we kind of span more roles here. The roles themselves are getting a little bit fuzzier. I'm curious: as you snapshot today, where do you think AI tools fit across those job functions? Who's the biggest beneficiary today? And as you think about the evolution over the next year and the year after, do you think that's going to change?

Ray: Yeah. So certainly, in 2025 you already have maybe most people having at least dabbled in it, and it'll continue to feel like that. But there's a difference between dabbling and really leveraged use of it, where you've had to figure out how to get the most out of this, as you've done. I've seen some of your notes on what your approach is when you use these tools to get the most value out of them, and it's quite detailed. You're doing a lot of thinking that's not written on the tin when you bought the thing. I think that is the state of things: to really get the advertised value out of it, you kind of have to invent your own workflow, and that's not really a product yet. So I think we will continue to have a lot of trailblazers, and we'll need to see what works over time—what works well on day two. Maybe this works well for Kyle, but can I get a team of Kyles doing that? What happens when that happens? These are things we need to shake out. Does that answer your question?

Kyle: I think so. I mean, at least as I look at our engineering team now—as everybody on our team knows, as far as I'm concerned every individual on the team has unlimited decision-making authority and unlimited budget to try every single AI coding tool that they want, period, full stop. Which leads to a lot of internal trial and error, and I think that's actually going really, really well. The tough part with AI coding tools has been, for us, very large code bases with large data structures that might be defined somewhere in a very different part of the code base than the consumer that's currently using them. It's not insurmountable, but it means that it takes a few extra iterations. The spot where AI coding tools absolutely fly, do an amazing job, is when we're building small tools: tools that we use to build or maintain, pre-sales tools, or extra integrations between our product and some other product—where it's a small, separable piece of standalone software. And for that, it's a pleasure, because the sheer number of them getting it right on the first or second try is just absolutely amazing. So I think that one's going particularly well. The product that we're working on specifically is much more like the "okay, Redis is down in the cluster" case, which lends itself more to the maintenance phase. And my personal prediction is: in the self-contained small-project space, they'll get better and better and better. And then in the very large, many-person-engineering-team software project space, tools like ours—and we're certainly not the only ones; there are a whole bunch of folks now chasing this—will go after the stuff that's actually not related to the source code but really, really impacts the engineering process. How can we use modern AI in a big way to benefit the people for whom that's a major concern, or the people whom it really impacts once a week, once every two weeks, once a month? But I think that'll be a slightly later phase. The very first phase is clearly up and running and has kind of crossed the chasm. This next phase is still in the process of crossing it.

Ray: Yeah. So you've identified a category of use case where: I just want some self-contained tool, and it's very easy for me to get an LLM—or an LLM-infused agent—to help me produce that, because I'm able to constrain the amount of outside context that's caught up in the work, right? And that, I think, is a really insightful trade-off. That kind of thinking will help people who are trying to get value out of this, because some people just have the intuition: no, I can't ask it about this, because it doesn't know how our stuff works; it's going to give me a bunch of wrong answers. But if I ask it to help me make this CLI tool that does this single thing—if I'm able to constrain the amount of space it has to wrangle—I'm more likely to get a good result. That is certainly the kind of thing that will be helpful to users. It's also key to why I'm making such a big deal about scaled software engineering being, to such a high degree, an unsolved problem for these things. Because that's what makes our work hard: there's all this context, all the time.

Kyle: I mean, this is a really, really esoteric question, but—having just done a whole bunch of work on named entity recognition, through a long winding journey on a pre-sales tool that's not worth getting into—we always talk about context: can we get enough context in, can we get enough context in? Do you think it's fair to define the amount of context as the number of named entities—call it the proper nouns—that somebody, or an LLM, needs to know? The big LLMs are getting to know any proper noun that's mentioned more than a handful of times on the web; they seem to pick those up in their training data. But there are an awful lot of proper nouns. The name of every single microservice that we run is a proper noun, and an LLM can't index that. Many of the verbs that we use around those nouns are industry jargon, but some of them are a little funky, and some sound like English but carry a pretty specific connotation for us. So do you think there's a world in which there's enough public context to matter? I'm personally a little bit pessimistic, because in the software engineering domain there are so many very, very important proper nouns that are so organizationally specific that we'll never see LLMs out of the box build amazing, massive-scale software without knowing all of them. But we can fake it a little around the edges, and they can build amazing massive-scale software with all the public proper nouns. I'm curious to get your take. That was kind of an esoteric point.

Ray: Yes. So it sounds esoteric, but actually this is central to getting these things to perform well in real contexts, I think, so I'm glad you brought it up. In 2025, 2026, with these kinds of assistants—really probably with any kind of AI assistant—people are going to be very fixated on: can we customize it for our company? Can we get this thing to give reasonable answers based on the decisions that we've made? And that's not going to be what was just on the internet as of the training cutoff. There are all sorts of choices that we want to be implicit. Things like fine-tuning, things like RAG, and other techniques are being introduced to try to make these things more context-sensitive. So that's part of it: there are a bunch of somewhat complicated ways that we've been able to make these more customizable. It's viable, but there's a limit. And I'm going to break with some people on this: these things are not people! It can have all the right data, and it still won't always reach the conclusion you would like it to reach, because it doesn't have personhood. These are not virtual employees. These are really cool tools.
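(A toy sketch of the context-sensitivity Ray mentions—retrieval-augmented prompting reduced to its simplest form. The glossary entries and prompt shape are invented for illustration; real RAG systems use embedding search rather than substring matching.)

```python
# Look up org-specific proper nouns mentioned in a question and prepend
# their definitions to the prompt, so the model isn't limited to what was
# on the public internet as of its training cutoff.

GLOSSARY = {
    "fetcher": "Our internal microservice that polls upstream APIs (not a generic fetcher).",
    "golden path": "Our blessed deployment pipeline, not the general industry phrase.",
}

def build_prompt(question: str) -> str:
    hits = [f"- {term}: {defn}" for term, defn in GLOSSARY.items()
            if term in question.lower()]
    context = ("Org-specific terms:\n" + "\n".join(hits) + "\n\n") if hits else ""
    return context + "Question: " + question

print(build_prompt("Why is the fetcher behind today?"))
```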

Kyle: Funny you mention it. We really see it, because we do have one component of our software that we call the Runner. And, you know, the Runner runs things, and the Runner is up sometimes, and the Runner's down sometimes. Maybe we have metrics about the Runner, maybe we don't; maybe the Runner is in an uncertain state. And it really confuses an LLM. You say, "hey, what state is the Runner in?" and it'll say "tired." And you're like, well, me too.

Ray: Yeah. And every time you see one of those—they'll get better, they'll continue to get better. But if you think that, given the same context, it will do the same thing a person would with that same context, you're barking up the wrong tree, in my opinion. It's like asking whether a relational database is as good as a human. It's neither better nor worse than a human—it's a relational database. We have trained these things to mimic us in certain ways, and that makes them very confusing. But we have to really try to see as clearly as we can what they're good at.

Kyle: Yeah. I came up with this hypothesis that the time it takes a human to ramp up on a new team is roughly the time it takes for them to learn all of the proper nouns that are very specific to that team. And it's kind of interesting to look at it that way. Software engineering has a lot of proper nouns for the industry. Then for any particular engineering team, if you've been working on the code base for a couple of years, there are a lot of proper nouns with an awful lot of nuance around them. And for larger engineering teams with many different functions, the DevOps team has their own set of proper nouns, the software developers have theirs, the QA team has theirs, the SREs have theirs. I'm at least a little optimistic that over the next couple of years every team will wind up with its own, initially very siloed, set of AI tools. But it won't be that long before we see really, really cool tools that are effectively LLM-assisted collaborations between different teams—tools that can start to help people, even without understanding all the proper nouns, at least draw linkages: hey, the particular code that you're working on is on a particular service, and we just found out from the infrastructure team that the cluster running that service is currently down. That's not your source code. Your source code is your source code, not the cluster; their cluster is their cluster. But recognize that you're both having some trouble. I know this is a little optimistic, but that's just me.
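(The linkage Kyle imagines could start as something as plain as a join over two teams' metadata. A minimal sketch; every name here is invented.)

```python
# Correlate a developer's service with the infrastructure team's cluster
# status -- two silos of "proper nouns" joined by one shared key.

SERVICE_TO_CLUSTER = {      # maintained by the dev/platform teams
    "checkout-api": "prod-east",
    "report-gen": "staging-2",
}

CLUSTER_STATUS = {          # maintained by the infrastructure team
    "prod-east": "healthy",
    "staging-2": "down",    # their cluster, but it breaks your tests
}

def why_is_my_service_broken(service: str) -> str:
    cluster = SERVICE_TO_CLUSTER.get(service)
    if cluster is None:
        return f"No cluster mapping for {service}."
    status = CLUSTER_STATUS.get(cluster, "unknown")
    if status != "healthy":
        return f"{service} runs on {cluster}, which is currently {status}."
    return f"{cluster} looks healthy; the problem may be in {service} itself."

print(why_is_my_service_broken("report-gen"))
```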

Ray: I'm optimistic that you can build that. We can create that; we can make that a reality. A lot of things that we want to achieve, we could have built years ago, and I think it will be a matter of whether those things are incentivized—whether we find a way for that to be what we decide to build. Will we build AI and other tooling that separates us, or will it bring us closer together? That is a decision people will have to make.

Kyle: Yeah. You said "a decision that people will have to make." At the same time, I just looked over and saw this fantastic comment: "When we work with search engines, humans often wanted to override the results to show certain things first. No general logic applied, more personal agenda." There are new personal decisions within the enterprise engineering team on this one, I think. I really like that comment for all its nuances.

Ray: Yeah. Well, you have a history working at a search company of some note, so you would probably have a lot of experience there.

Kyle: Ray, on a slightly different note. We've talked about the topic of coding versus programming versus software engineering—coding and programming under the larger umbrella, and the phases that happen afterwards, like maintenance. We've talked a little about the different tools that would fit under these different categories, and I love the way that you put it. And we've talked about some more esoteric academic topics that I think we both find particularly interesting. But on a slightly lighter note, I always like to ask this question because somebody I really respect, Jim, actually asked it of me. And so now—plagiarism is the best form of flattery. I butchered that one. If you were going to give some career advice to your younger self, what would it be?

Ray: One of them would be to try to be curious about things that you might not think to be curious about. An example: people like me would identify themselves as software menders, very concerned with the longevity of the product. We will always rant about tech debt, and we will get told it's not a priority, and we will feel sorry for ourselves. We will operate in areas of neglect and then be surprised when we end up neglected ourselves. If I could go back, I would say: when they tell you it's not a priority, ask them what is. Get a better picture of the incentives of the person who is telling you no. Because it often is the case that we're looking out for the health of the system, but ultimately what we're looking out for is tied to some business goal—we just have to figure out the chain of causation. So use your curiosity to create alignment, or find alignment, there. And the other one would be: don't be afraid to have a personal brand.

Kyle: I would love to actually hear more about that, given the personal brand that you have created in this ecosystem over a fairly short period of time. But why don't we save that one for our next conversation?

Ray: Great. Well, thanks a lot! Very much!

Kyle: Thank you so much for coming on. We really appreciate it. And you look happy—it's good to see you.

Ray: Yeah, you too. You can catch me on Craft vs. Cruft on YouTube and the Empathy in Tech podcast. Thanks.

Kyle: Thanks, Ray. For everybody listening: somebody from RunWhen will post the schedule for our next episode—I believe we're up again in about a week. We have a fantastic set of guest speakers scheduled through the rest of January, February, and March. We look forward to seeing you on the next one. Thanks.

RunWhen is an AI-powered search & workflow generation platform that surfaces context-aware automation across your stack to reduce alert fatigue and provide support for 24/7 self-service. Import thousands of AI-native tasks to create real-time workflows and automate repetitive operational tasks in minutes. Book a demo here.

🔗 Helpful Links:

Discover RunWhen: https://www.runwhen.com/

Follow us on LinkedIn for live events and updates: /runwhen

Learn more about Ray’s work: https://raymyers.org/

Follow Craft vs. Cruft on YouTube: /@craftvscruft8060

Empathy in Tech: https://www.empathyintech.com/
