Senser is extending the reach of its artificial intelligence for IT operations (AIOps) platform to include the ability to define and maintain service level agreements (SLAs) and service level objectives (SLOs).
SLOs are internal performance goals measured against telemetry data collected from service level indicators (SLIs), while an SLA is a formal commitment to maintain specific levels of service.
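To make the relationship concrete, here is a minimal sketch of how an SLI measurement feeds an SLO check and an SLA check. The metric, targets and numbers are illustrative assumptions, not figures from Senser:

```python
# Hypothetical sketch: an availability SLI (fraction of successful requests)
# checked against an SLO and a looser, contractual SLA. All values illustrative.

def availability_sli(successful: int, total: int) -> float:
    """SLI: the measured ratio of successful requests to total requests."""
    return successful / total if total else 1.0

SLO_TARGET = 0.999  # internal goal: 99.9% of requests succeed
SLA_TARGET = 0.995  # contractual commitment, typically looser than the SLO

sli = availability_sli(successful=998_740, total=1_000_000)
error_budget_used = (1 - sli) / (1 - SLO_TARGET)  # share of error budget consumed

print(f"SLI: {sli:.4%}")
print(f"SLO met: {sli >= SLO_TARGET}, SLA met: {sli >= SLA_TARGET}")
print(f"Error budget consumed: {error_budget_used:.0%}")
```

In this example the SLA is still met while the internal SLO is already breached, which is exactly the early-warning gap an SLO is meant to provide.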
Senser CEO Amir Krayden said the company's AIOps platform collects data from SLIs and then applies predictive AI models to enable IT teams to achieve SLOs and SLAs. The Senser AIOps platform leverages extended Berkeley Packet Filter (eBPF) and graph technology to gain visibility into the entire IT environment, rather than requiring IT teams to deploy agent software. Machine learning algorithms are then used to aggregate and analyze that data to define thresholds for predicting performance and to recommend benchmarks for tracking SLOs and SLAs.
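Senser's actual models are not public, but the general idea of deriving thresholds from telemetry rather than hard-coding them can be sketched simply. The samples and the three-sigma rule below are illustrative assumptions only:

```python
# Minimal sketch of data-driven thresholding: set an alert threshold from an
# observed telemetry baseline instead of a fixed value. Illustrative only;
# this is not Senser's method.
import statistics

def derive_threshold(latency_samples_ms: list[float], k: float = 3.0) -> float:
    """Place the threshold k standard deviations above the observed mean."""
    mean = statistics.fmean(latency_samples_ms)
    stdev = statistics.stdev(latency_samples_ms)
    return mean + k * stdev

baseline = [112.0, 118.5, 121.3, 109.8, 115.2, 119.9, 114.4, 117.6]
threshold_ms = derive_threshold(baseline)
print(f"Suggested latency threshold: {threshold_ms:.1f} ms")
```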
That approach provides a single source of truth for the actual level of service being delivered, based on a topology of the infrastructure, network, applications and application programming interfaces (APIs). That topology makes it possible to identify the root cause of issues and to assess the potential impact of an outage or degradation of performance.
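As a toy illustration of how a dependency topology supports root-cause localization, consider the sketch below. It uses the networkx library and hypothetical service names as assumptions; Senser builds its topology automatically via eBPF rather than by hand like this:

```python
# Toy sketch of topology-based root-cause localization. An edge A -> B means
# "A depends on B". Service names are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("checkout-api", "payments-svc"),
    ("checkout-api", "inventory-svc"),
    ("payments-svc", "postgres"),
    ("inventory-svc", "postgres"),
])

unhealthy = {"checkout-api", "payments-svc", "postgres"}

def root_causes(graph: nx.DiGraph, unhealthy: set[str]) -> set[str]:
    # A likely root cause is an unhealthy node whose failure is not explained
    # by any of its own unhealthy dependencies.
    return {n for n in unhealthy
            if not any(dep in unhealthy for dep in graph.successors(n))}

print(root_causes(g, unhealthy))  # {'postgres'}
```

Here the checkout and payments failures are explained by the shared database underneath them, so the topology points investigation at one node instead of three.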
IT teams have been attempting to achieve and maintain SLAs and SLOs for decades, but given all the dependencies that exist in a distributed computing environment, that goal is difficult to reach. Senser is making the case for applying AI, within the context of a platform that automates IT management, to define and maintain SLOs and SLAs consistently while reducing the cognitive load that would otherwise be required. Senser is also working toward adding generative AI capabilities that provide summaries explaining what IT events have occurred.
Collectively, the goal is to provide IT teams with a more efficient, holistic approach to monitoring and observability than legacy platforms are able to deliver, said Krayden.
At the core of that capability is eBPF, a technology that allows sandboxed programs to run safely inside the Linux kernel. That capability enables networking, storage and observability software to scale to much higher levels of throughput because data no longer has to be copied out to user space for processing. That's especially critical for any application that needs to dynamically process massive amounts of data in near-real-time.
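The article does not say what eBPF tooling Senser uses; as a generic illustration of kernel-space instrumentation, here is a canonical minimal example using the open source BCC toolkit:

```python
# Minimal eBPF example using the BCC toolkit (https://github.com/iovisor/bcc).
# The small C program below is verified and executed inside the kernel's eBPF
# sandbox; events are observed without a per-host user-space agent doing the
# instrumentation. Generic illustration only, not Senser's implementation.
from bcc import BPF

program = r"""
int trace_clone(void *ctx) {
    // Runs in kernel space each time the clone() syscall fires.
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="trace_clone")
b.trace_print()  # stream events to stdout (requires root privileges)
```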
As more organizations run the latest versions of Linux, hands-on experience with eBPF will continue to accumulate. IT teams may not need to concern themselves with what is occurring in the kernel of the operating system, but they do need to understand how eBPF ultimately reduces the total cost of running IT at scale.
Ultimately, the goal is to reduce the complexity that today makes highly distributed computing environments all but impossible for IT teams to manage manually, in an era when the pace at which applications are built and deployed only continues to accelerate.