Spending too much time
just keeping the lights on?
We offer libraries with thousands of automated keep-the-lights-on tasks you can use this afternoon. Our platform helps you build Engineering Assistants for your colleagues that figure out which tasks to run when they receive questions, alerts or tickets.
Thousands of Tasks In Minutes
Sync a Kubernetes and/or cloud account with libraries covering cloud infra, K8s apps, popular OSS and programming frameworks. Wrap your existing Bash, Python, SQL, REST, Ansible, etc. to build your private library.
More Than A Search Engine
Create Engineering Assistants for your colleagues, each with its own scope and credentials. They respond to alerts or questions by running tasks and summarizing next steps.
Executive Insights
Built-in reports give insights into operational readiness, automation coverage and prioritized opportunities for future automation.
In The Library
Non-prod maintenance without leaving VSCode
Popular tasks from our libraries let Engineering Assistants:
- Fetch logs, restart non-prod services, then open a ticket
- Troubleshoot Kubernetes deployments and dependencies and summarize the results
- Health check shared environments
- Look for stack traces in logs, and paste them into tickets with diagnostics developers need
- Find common configuration errors and build a PR with fixes
- Burst or right-size resources
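For a flavor of the "look for stack traces in logs" task above, here is a minimal sketch in Python. The helper names and ticket format are invented for illustration; they are not the library's actual API.

```python
def extract_stack_traces(log_lines):
    """Collect contiguous Python-style traceback blocks from raw log lines."""
    traces, current = [], []
    for line in log_lines:
        if line.startswith("Traceback") or (current and line.startswith(("  ", "\t"))):
            current.append(line)
        elif current:
            traces.append("\n".join(current))
            current = []
    if current:
        traces.append("\n".join(current))
    return traces

def ticket_body(service, traces, env):
    """Format the diagnostics a developer needs into a ticket body."""
    header = f"Service: {service}\nEnv: {env}\nStack traces found: {len(traces)}\n"
    return header + "\n---\n".join(traces)

logs = [
    "INFO starting worker",
    "Traceback (most recent call last):",
    '  File "app.py", line 12, in handle',
    "  ValueError: bad payload",
    "INFO worker restarted",
]
print(ticket_body("payments", extract_stack_traces(logs), "staging"))
```

A real task would pull the log lines from the cluster and push the body to your ticketing system; the parsing-and-formatting shape is the same.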
Production assistance with Engineering Assistants
Send production alerts to your read-only Engineering Assistants. It is like having a team of L0 on-call engineers working 24/7.
They find and run hundreds of diagnostic tests across all layers in your tech stack, producing in-depth reports along with summaries of issues they recommend for further attention. They also hang out in Slack, ready to run more tasks and add to the report on demand.
Thousands of keep-the-lights-on tasks automated by our experts
We add 20+ new automated tasks per month covering the infrastructure, services and tools that your team uses and maintains every day.
- Check logs for errors -> if errors then collect stack traces and env vars -> paste to ticket -> if deployment then do a rolling restart.
- If a CI/CD job has an error then find the tests it referenced -> find the deployments referenced by the test -> collect env vars, manifest and stack traces from the deployment -> if existing ticket then paste to ticket -> if no ticket then create a new ticket.
Untangling the test environment again:
- Collect logs -> grep for stack traces -> file ticket -> restart VM (AWS).
- Check for high error rate nginx paths -> find deployment -> check deployment resource health -> copy logs to a ticket -> restart deployment.
- Health check /login for http 200s -> check auth microservice logs for errors.
- Check postgres write-ahead log storage utilization -> add emergency capacity and escalate immediately.
Helping developers with repetitive troubleshooting:
- Check Kubernetes Error events for a Deployment -> check logs for application errors -> check CPU/mem/IO metrics -> check node for noisy neighbors -> paste all info into a ticket.
- Check Databricks for failing job references -> if a Databricks job failed then check node health under the Deployment -> if node health is OK then check the Databricks dependent deployment for Error events.
- If a developer says a service is down -> help the developer run a liveness probe check, collect recent logs, and find and notify the service owner.
- Collect env status and pod logs -> paste into a new ticket -> rolling restart the deployment.
Triage noisy alerts:
- Run test env pre-flight check -> check all Deployments are in ready state -> check the transaction table has at least 1 row.
- Check transaction queue is <100 items deep -> if not, collect env info and deployment logs and file a ticket.
- Collect StatefulSet manifest -> paste to ticket.
Manual health checks:
- Check certificate is valid -> if not, rotate certificate.
- Check Ingress for Warning events.
- Check Ingress log for error messages.
- Increase CPU/memory capacity for an Azure Web App.
- Check Ingress for paths with high rates of 500 errors.
- Read/write a test key to Redis.
- Read a test row from postgres -> restart the VM if the query returns no rows.
- Check Kafka client latency.
- Restart Kafka client to rebalance.
- Search logs and paste results to a ticket.
- Add env vars to a ServiceNow ticket.
- Confirm no root account logins in the last 30 days.
- Check volume utilization.
- Add emergency 10Gi storage capacity to a volume.
- Add emergency 500 millicores of CPU capacity.
- Compare deployment manifest to Vertical Pod Autoscaler CPU/mem recommendations -> if misaligned then prepare a manifest to align them in a PR -> email the service owner.
- Check manifest for readiness probe configurations -> if missing then notify the service owner -> if incorrect then prepare a PR with a fix and file a ticket.
- Check manifest for non-standard open ports -> if non-standard ports then check the exception list -> if not on the exception list then file a ServiceNow ticket.
- Check OAuth login latency -> if latency is slow then restart the VM -> email the service owner.
- Check queue is less than 60% of capacity -> if beyond basic capacity then check CPU/memory -> if CPU/memory is high then copy recent logs to a ticket and emergency restart the process.
- If a test body mentions a vault error then do a test read/write in the vault test path with pod credentials -> if the vault test read/write fails then try with default read-only credentials -> if that fails then notify the service owner.
- Check test DB is running, volume is not full, login string matches, no key tables are locked and the test user is entered in the user table -> if any fail, stop running tests and notify the test owner.
- Check CPU is not >80% for the last 5 minutes.
- Check memory is not >80% for the last 5 minutes.
- If resource utilization is over limits, open a PR for capacity increase.
- Check Azure metrics for http 500 rate overnight.
- Check logs for errors after deployment scale-up.
- Check for high CPU/mem after deployment scale-to-one.
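As a minimal sketch of how one of these conditional chains could be scripted ("Check transaction queue is <100 items deep -> if not, collect env info, deployment logs and file a ticket"): the queue_depth, collect_env, fetch_logs and file_ticket callables below are hypothetical stand-ins for real integrations, not RunWhen's actual task format.

```python
def check_queue(queue_depth, collect_env, fetch_logs, file_ticket, limit=100):
    """If the queue is backed up, gather context and file a ticket."""
    depth = queue_depth()
    if depth < limit:
        return f"queue ok ({depth} items)"
    ticket_id = file_ticket(
        title=f"Transaction queue backed up: {depth} items",
        body=f"env: {collect_env()}\nlogs:\n{fetch_logs()}",
    )
    return f"filed {ticket_id}"

# Stubbed integrations for illustration.
result = check_queue(
    queue_depth=lambda: 240,
    collect_env=lambda: {"region": "us-east-1"},
    fetch_logs=lambda: "WARN consumer lag rising",
    file_ticket=lambda title, body: "TICKET-123",
)
print(result)  # filed TICKET-123
```

Each chain above has this same shape: a cheap check first, and context-gathering plus a ticket or remediation only when the check fails.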
Did you say this afternoon?
Install the RunWhen Local agent in your cluster to scan Kubernetes, AWS, GCP and Azure accounts.
By default it will sync the RunWhen read-only libraries. You can add more public or private libraries over time.
A scan of a typical small/medium size cluster will import several thousand tasks in a few minutes.
A best-in-class engineering experience
More powerful than giving everyone dashboards. More secure than giving everyone credentials. More than just developer experience, create a great engineering experience across Dev, QA, DevOps, Platform, SRE, ...
Collaboration increases coverage
Our platform is designed for you to import AI-ready tasks from our community, but also for anyone across your teams to add their own. A CLI command? A SQL query? A REST call? A shell script? Engineering Assistants (with appropriate access) recommend them and use them in real time, extending their capabilities without ever changing configuration.
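As a sketch of what wrapping an existing shell one-liner as a callable task could look like: the make_task helper and its metadata are invented for illustration, not RunWhen's task format.

```python
import shlex
import subprocess

def make_task(name, command, description):
    """Wrap an existing shell command as a callable task with metadata."""
    def run():
        proc = subprocess.run(shlex.split(command), capture_output=True, text=True)
        return {"task": name, "exit_code": proc.returncode, "output": proc.stdout.strip()}
    run.description = description
    return run

demo = make_task(
    name="echo-demo",
    command="echo hello from a wrapped task",
    description="Demo: any one-liner becomes a task an assistant can run.",
)
print(demo())
```

With a uniform wrapper like this, an assistant only needs the task's name and description to decide when to run it; the command itself stays whatever your team already uses.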
Interactive demos in our sandbox
Want to try an Assistant in our sandbox? We have a Kubernetes cluster loaded with applications so you can see what they do.
Where to next?
The default Assistants that come out of the box are designed for Platform/SRE teams to give to developers for Kubernetes troubleshooting. However, it doesn't stop there...
A (paid) community?
Expert authors in our community receive royalties and bounties when RunWhen customers import troubleshooting steps they automated. The community's efforts span infrastructure, cloud services and platform components alongside popular OSS components, programming languages and frameworks.
Running a lean team means you need the best engineers you can find...
Do you really want them spending time on work that you can offload to AI? Some teams are using us to replace low-value, bloated outsourced operations teams with high-value, in-house experts. Others are building, augmenting or replacing their Internal Developer Portals with an AI-first strategy.
Ready to get started?
Our private beta is ready for you. Let’s take your team to the next level.