Continuous integration and continuous delivery (CI/CD) have become a DevOps best practice, and many teams have learned that adding continuous testing (CT) creates a virtuous loop that helps ensure perpetual code quality and security. They’re not wrong. Testing continuously is good practice. But some incorrectly equate “continuous” with monitoring, believing they can see everything that could eventually affect a customer’s experience (CX) with their software application. That’s a fallacy. Many of the most glaring blind spots that wreak havoc on CX exist outside the environment that CI/CD/CT is able to “see.” CI/CD/CT, even when supported by application performance monitoring (APM), cannot monitor for or alert on blind spots across the internet stack. And there are many. Only internet performance monitoring (IPM) can take CI/CD/CT as input streams and deliver continuous resilience as a measurable outcome.
How CI/CD/CT Typically Works
As customer requirements evolve, software development teams write code that triggers a new software build. In the continuous model, each new build moves to a runtime environment for integration and quality assurance and, ultimately, is deployed to end users across public or hybrid clouds. This is an application-specific view of the world.
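To make that flow concrete, here is a minimal Python sketch of the gated loop described above. Every stage function is a hypothetical stub standing in for a real pipeline step; nothing here is a specific CI/CD product’s API.

```python
# A minimal sketch of the continuous loop: each commit is gated through
# build, integration/QA, continuous testing and deployment. Every stage
# function is a hypothetical stub, not any vendor's API.
from typing import Callable

def build(commit: str) -> bool:
    return True  # stand-in for compiling and packaging the new build

def integrate_and_qa(commit: str) -> bool:
    return True  # stand-in for integration and quality-assurance suites

def continuous_test(commit: str) -> bool:
    return True  # stand-in for regression and security test runs

def deploy(commit: str) -> bool:
    return True  # stand-in for rollout to a public or hybrid cloud

def run_pipeline(commit: str) -> bool:
    """Gate one commit through each stage; any failure stops the release."""
    stages: list[tuple[str, Callable[[str], bool]]] = [
        ("build", build),
        ("integration/QA", integrate_and_qa),
        ("continuous test", continuous_test),
        ("deploy", deploy),
    ]
    for name, stage in stages:
        if not stage(commit):
            print(f"{name} failed for {commit}; stopping before release")
            return False
    print(f"{commit} deployed to end users")
    return True

run_pipeline("commit-abc123")  # hypothetical commit identifier
```

The point of the gates is that a failing stage stops a release before it reaches users. That is the full scope, and also the limit, of what CI/CD/CT can see.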
Since quality assurance is part of the typical continuous process, monitoring is often assumed to come along as a benefit. That assumption likely persists because many teams now complement their CI/CD/CT processes with APM. But what APM achieves in this approach is testing, not monitoring. While testing provides a better picture of performance at the application level, it doesn’t provide full visibility into the blind spots that exist between the application layer and the end user.
APM Isn’t Enough
APM tools focus on code, including everything inside the application that degrades it, such as database wait times and inefficient queries. IPM focuses on the internet, which has become your network. IPM monitors everything across the internet that affects the experience of your customers, your workforce and your applications (APIs).
APM and IPM may sound similar, but in practice they are nothing alike. The two may share common tooling, such as synthetic agents, but they differ in what they monitor and how. In this age of highly distributed applications, that difference means everything.
APM can “simulate” what a user may see from one cloud location to another, which can confirm application performance. But this simulation isn’t true or comprehensive monitoring, because it is blind to the vast range of network-related problems that shape end-user experience. Customers and employees each have a unique internet performance fingerprint: they access applications from myriad devices across multiple locations and network infrastructures, not through the individual cloud locations that APM synthetics simulate.
IPM approaches synthetic testing more holistically, replicating multiple user journeys across the far more complex internet infrastructure to proactively flag performance issues before they reach end users. This true monitoring takes place 24/7, not as point-in-time tests.
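To illustrate what measuring a journey across the internet stack involves, here is a simplified Python probe that times each layer separately: DNS resolution, TCP connect, TLS handshake and time to first byte. The target host is a placeholder, and a real IPM platform runs far richer equivalents of this continuously from many global vantage points; this is a sketch of the idea, not a product implementation.

```python
# A simplified synthetic probe that times each layer of the internet stack
# separately, which is precisely where application-level simulations are
# blind. The target host is a placeholder.
import socket
import ssl
import time

def probe(host: str, port: int = 443) -> dict:
    timings = {}

    # DNS resolution
    start = time.perf_counter()
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    timings["dns_ms"] = (time.perf_counter() - start) * 1000

    # TCP connect
    start = time.perf_counter()
    sock = socket.create_connection((ip, port), timeout=10)
    timings["tcp_connect_ms"] = (time.perf_counter() - start) * 1000

    # TLS handshake
    start = time.perf_counter()
    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    timings["tls_handshake_ms"] = (time.perf_counter() - start) * 1000

    # Time to first byte of the HTTP response
    start = time.perf_counter()
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    tls.sendall(request.encode())
    tls.recv(1)
    timings["ttfb_ms"] = (time.perf_counter() - start) * 1000

    tls.close()
    return timings

print(probe("example.com"))  # placeholder endpoint
```

Because each layer is timed on its own, a slow DNS resolver or a congested transit path shows up as a distinct signal rather than being folded into a single application response time.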
Forensics vs. Monitoring
With APM and similar observability solutions that embrace real user monitoring (RUM), it’s important to focus on what monitoring really means. APM passively collects data that serves as a proxy for user experience, enabling postmortem assessment of causes and possible fixes, but only after the fact. Observability looks at logs, metrics and traces to assess whether users are having a good experience and, if they are not, why. Neither approach by itself will prevent a poor experience, and many fixes won’t land until a postmortem analysis feeds the next iteration. By then, the poor experience has already done its damage, and the costs are already sunk.
With thousands of potential blind spots across the globe that can slow or disrupt applications, from ISPs and wireless carriers to CDNs, it’s not hard to grasp why IPM matters and how it differs. The combination of CI/CD/CT with APM can proactively prevent the deployment of poor code that would hurt user experience. Only IPM, however, continuously replicates millions of experiences, which is even more valuable than passive monitoring, across a dynamic multi-node network where the unexpected is now expected.
Internet Resilience vs. Retrospection
Businesses today understand the importance of resilience: it’s a board-level concept. Any business that has lost market share and market cap during sustained downtime or a prolonged poor user experience knows what’s at stake: millions, if not billions, of dollars. IPM is squarely focused on “internet resilience,” whose pillars are availability, reachability, performance and reliability. For businesses reliant on the cloud and employers operating highly distributed workforces, retrospection alone is a risky proposition. Now that the internet is the new corporate network, internet resilience is the new organizational mantra.
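As a rough illustration of how those four pillars could be scored from a stream of synthetic probe results like the ones above, consider the sketch below. The field names, the 500 ms latency budget and the 99% per-window SLO are illustrative assumptions, not an industry-standard formula.

```python
# A rough sketch of scoring the four resilience pillars over a window of
# synthetic probe results. Field names, the latency budget and the SLO are
# assumptions made for illustration.

def resilience_pillars(results: list[dict],
                       budget_ms: float = 500.0,
                       window: int = 100,
                       slo: float = 0.99) -> dict:
    """Return each pillar as a fraction between 0.0 and 1.0."""
    total = len(results)
    reachable = [r for r in results if r["reachable"]]
    available = [r for r in reachable if r["http_status"] < 500]
    fast = [r for r in available if r["ttfb_ms"] <= budget_ms]

    # Reliability read as consistency over time: the share of fixed-size
    # windows in which availability stayed at or above the SLO.
    windows = [results[i:i + window] for i in range(0, total, window)]
    ok_windows = sum(
        1 for w in windows
        if sum(r["reachable"] and r["http_status"] < 500 for r in w) / len(w) >= slo
    )

    return {
        "availability": len(available) / total,  # answered without a server error
        "reachability": len(reachable) / total,  # reachable across the network path
        "performance": len(fast) / total,        # answered within the latency budget
        "reliability": ok_windows / len(windows),
    }
```

Scores like these turn resilience from a slogan into numbers a board can track quarter over quarter.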
The Takeaway
Whether your organization already embraces CI/CD/CT or is rethinking its approach to DevOps, this article should give you pause. Your job, perhaps as part of a larger team, is to catch performance issues and potential disruptions in your application before customers feel the impact. Without IPM, only part of that job is being done.
Why is this so important? Even conservative estimates put the cost of an outage at $6,700 per minute (Gartner); at that rate, a single hour of downtime costs more than $400,000. The “cost of slow,” where an application performs poorly and nobody notices, is likely far higher, because it happens all the time. The fallacy isn’t that there’s value in CI/CD/CT; there is. The fallacy is believing you control the entire CX of your internet-deployed products without monitoring the internet stack and ensuring internet resilience. Only by adding IPM can you have that assurance.