Lightrun this week launched a runtime debugging tool that uses generative artificial intelligence (AI) to identify the root cause of issues in runtime environments.
Now available in private beta, the Runtime Autonomous AI Debugger provides developers with the insights needed to address issues in their code, with little to no assistance required from an IT operations team.
Lightrun CEO Ilan Peleg says the Runtime Autonomous AI Debugger first captures IT operations data, observability signals and metrics using a software development kit (SDK) provided by Lightrun. A proprietary large language model (LLM) then uses that data to trace issues back to the specific lines of code responsible, which developers can fix in their integrated development environments (IDEs) within a few minutes using suggestions provided by the LLM.
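Peleg's description maps onto a familiar instrumentation pattern: an in-process SDK records what the code was actually doing at the moment something went wrong, and that structured record is what the model reasons over to pinpoint a line of code. The Python sketch below is a rough illustration of that general pattern only; it is not Lightrun's SDK or API, and the decorator, the signal schema and the report_to_debugger sink are all hypothetical names invented for the example.

```python
# Hypothetical sketch only -- not Lightrun's actual SDK. It illustrates the
# general pattern described above: an in-process agent captures runtime
# signals (exceptions, local variable state, timing) and packages them in a
# structured form an AI model could use to trace a failure to a line of code.
import functools
import time
import traceback


def capture_runtime_signals(func):
    """Wrap a function so failures are recorded as structured signals."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # Walk to the innermost traceback frame (where the error was
            # raised), then record the file, line number and local variables.
            tb = exc.__traceback__
            while tb.tb_next is not None:
                tb = tb.tb_next
            signal = {
                "function": func.__qualname__,
                "file": tb.tb_frame.f_code.co_filename,
                "line": tb.tb_lineno,
                "locals": {k: repr(v) for k, v in tb.tb_frame.f_locals.items()},
                "error": traceback.format_exception_only(type(exc), exc),
                "elapsed_ms": (time.perf_counter() - start) * 1000,
            }
            # Hypothetical sink; a real SDK would ship this to a backend.
            report_to_debugger(signal)
            raise
    return wrapper


def report_to_debugger(signal: dict) -> None:
    # Stand-in for the transport layer; print instead of sending anywhere.
    print("runtime signal:", signal)


@capture_runtime_signals
def apply_discount(price: float, percent: float) -> float:
    factor = 1 - percent / 100
    return price / factor  # bug: should multiply; divides by zero at 100%
```

Calling apply_discount(100, 100) here trips the divide-by-zero on the final line, and the emitted signal pins the failure to that exact file and line with the local variable values attached, which is the kind of context a model needs to suggest a one-line fix.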
The tool also makes it simpler to dynamically identify specific issues as additional code is iteratively added to the application environment, he added.
Rather than relying on observability platforms designed for IT operations teams, Lightrun is making a case for combining an SDK with an AI model to enable developers to observe code in a way that fits naturally within their existing workflows, said Peleg. That will prove crucial as other AI tools enable developers to exponentially increase the volume of code being created, he added.
Unfortunately, much of that code will still be flawed, because the tools used to generate it have typically been trained on code of varying quality pulled from across the web, noted Peleg.
It’s not clear to what degree that approach might reduce the need for other types of observability platforms, but at the very least developers will now be able to debug runtime environments without always waiting for guidance from an IT operations team, said Peleg. In effect, developers will be able to validate that their code works in the runtime environment where it is destined to be deployed, he noted.
In theory, that capability should also reduce the number of issues that DevOps engineers would need to directly address after an application has been deployed in a production environment.
Like it or not, the AI genie is out of the proverbial bottle as far as software engineering is concerned. The amount of code moving through DevOps pipelines will inevitably increase to the point where, without AI tools and platforms to help manage it, DevOps teams will be overwhelmed. Arguably, DevOps teams have a vested interest in making sure developers have the tools needed to debug that code long before it ever becomes part of a software build.
In the meantime, DevOps teams should start identifying bottlenecks in their existing engineering workflows that are likely to only become bigger as the volume of code moving through pipelines increases. The next challenge, of course, is determining to what degree the tools and platforms they rely on today to manage those pipelines might need to be upgraded to accommodate all the code being generated by developers in the age of AI.