Late last month, APMdigest posted a three-part interview with Gartner research VP Will Cappelli to discuss his recently published report, Will IT Operations Analytics Platforms Replace APM Suites?
The title of the report raises the question: is Application Performance Management becoming obsolete? The answer is unclear at this point, but Cappelli does suggest that APM suites will evolve and offer more analytics-based data as these two aspects of application performance monitoring converge over the next five years.
Cappelli says:
“The end-user experience will continue to grow in importance…I’m beginning to get some calls from end-users looking at the CMDBs or asset management databases or trouble ticketing systems and asking if they can use analytics in these areas…That is a new development that takes us beyond performance and availability.”
According to Cappelli, there are three key reasons for this evolution:
1) Application Complexity and Interdependence
Cappelli points out that the way we monitor applications will change as they become increasingly complex and interdependent:
“Non-analytic APM tools will continue to generate more and more data, [and] in order to understand what is going on in the application, deeper and deeper analysis capabilities are required.”
APM vendors need to be very conscious of the law of unintended consequences when it comes to delivering and presenting APM data. For example, the INETCO Insight software collects data off the network and decodes every application and network message to extract data fields and enable complex transaction correlations. It would be very easy to overwhelm an operations user with this detail. Instead, we’ve worked very hard to structure how the data is presented, how much data is retained, and how visualization techniques can make it easy for users to quickly understand and act upon APM information.
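To make that concrete, here is a minimal sketch, not INETCO's actual implementation, of the kind of processing described above: decoded messages are correlated into transactions by a shared session ID, and only a small set of key fields is retained so an operations user is not flooded with per-message detail. The structures and field names are illustrative assumptions.

```python
# Illustrative sketch only: correlate decoded messages into per-session
# transaction summaries, keeping just enough detail to act on.
from dataclasses import dataclass, field

@dataclass
class Message:
    session_id: str      # hypothetical correlation key
    timestamp: float
    fields: dict         # decoded application/network fields

@dataclass
class TransactionSummary:
    session_id: str
    message_count: int = 0
    first_ts: float = float("inf")
    last_ts: float = 0.0
    key_fields: dict = field(default_factory=dict)  # only what is worth keeping

    @property
    def duration(self) -> float:
        return max(0.0, self.last_ts - self.first_ts)

def correlate(messages: list[Message]) -> dict[str, TransactionSummary]:
    """Group decoded messages into per-session transaction summaries."""
    summaries: dict[str, TransactionSummary] = {}
    for msg in messages:
        summary = summaries.setdefault(msg.session_id, TransactionSummary(msg.session_id))
        summary.message_count += 1
        summary.first_ts = min(summary.first_ts, msg.timestamp)
        summary.last_ts = max(summary.last_ts, msg.timestamp)
        # Retain only a small set of fields useful for diagnosis.
        for k in ("endpoint", "response_code"):
            if k in msg.fields:
                summary.key_fields[k] = msg.fields[k]
    return summaries
```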
2) Automation of Monitoring
The aforementioned complexity and interdependence of applications mean that it won't be humanly possible, let alone resource-effective, to oversee everything that's going on. Cappelli explains:
“Root cause analysis…now [is] not really done with analytics tools [but] by looking at the topology map. As the application topologies become a lot more complex, you are not going to be able to just look at a map on the screen and find the root cause. You are going to have to apply some sort of automated algorithm that will identify what could be the cause.”
Our answer to this problem is a unique data model we call the Unified Transaction Model. It is a set of automated algorithms that transforms low-level network and application information gleaned off the network into higher-level application and business transaction objects. Part of this transformation involves mapping the observed activity to a set of ideal models and highlighting where slowdowns or failures occurred. Key data and metadata are preserved at each layer (e.g. IP addresses, SQL statements, session IDs) to assist in root cause determination.
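The sketch below illustrates the general idea; it is not the actual Unified Transaction Model. Low-level hops are rolled up into a single transaction object, each hop is compared against hypothetical "ideal" timings, and layer-specific metadata such as IP addresses, SQL statements, and session IDs is preserved for root cause work.

```python
# Illustrative sketch only -- not the actual Unified Transaction Model.
from dataclasses import dataclass

@dataclass
class Hop:
    layer: str           # e.g. "network", "application", "database"
    duration_ms: float
    metadata: dict       # e.g. {"ip": ...}, {"sql": ...}, {"session_id": ...}

# Hypothetical expected per-layer timings (the "ideal model").
IDEAL_MS = {"network": 20.0, "application": 150.0, "database": 80.0}

def build_transaction(hops: list[Hop]) -> dict:
    """Summarize hops into one transaction object, flagging slow layers."""
    slow_layers = [
        h.layer for h in hops
        if h.duration_ms > IDEAL_MS.get(h.layer, float("inf"))
    ]
    return {
        "total_ms": sum(h.duration_ms for h in hops),
        "slow_layers": slow_layers,                       # where the slowdown occurred
        "metadata": {h.layer: h.metadata for h in hops},  # preserved for root cause
    }
```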
3) Proactive Strategies for Potential Problems
As IT organizations place increased emphasis on end-user experience, predictive analytics will become a key component of application performance monitoring. After all, end users do not care why an app is not working – they just get frustrated when it does not work. So it makes sense to catch potential glitches proactively, and that is something analytics is well suited to do.
Says Cappelli:
“Until now most of the burden of the application performance monitoring tools has been in the area of retroactive determination of the root cause of the problem. We are seeing, with the people we talk to, more and more focus on getting out ahead of problems before they occur. And in order to do that kind of predictive action, you need some kind of analytics tools.”
Many INETCO Insight customers use our flexible alerting engine to set multiple threshold triggers on things like response time or failure rate. For example, they’ll set a “warning” threshold at 4 seconds (which is a significant delay in performance, but still within their SLA), and a “critical” threshold at 6 seconds. INETCO Insight will automatically capture the transactions responsible for the “warning” event, allowing teams to analyze these and get ahead of problems before they occur.
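As a concrete illustration, here is a simplified sketch of that kind of multi-level threshold alerting. The 4-second and 6-second thresholds come from the example above; the code itself is illustrative Python, not INETCO Insight's actual alerting engine.

```python
# Simplified sketch of warning/critical threshold alerting with capture
# of the transactions that triggered each alert.
WARNING_S = 4.0   # significant delay, but still within SLA
CRITICAL_S = 6.0  # SLA breach

def classify(response_time_s: float) -> str:
    if response_time_s >= CRITICAL_S:
        return "critical"
    if response_time_s >= WARNING_S:
        return "warning"
    return "ok"

def check_transactions(transactions: list[dict]) -> dict:
    """Classify each transaction and capture the ones that triggered alerts."""
    captured = {"warning": [], "critical": []}
    for txn in transactions:
        level = classify(txn["response_time_s"])
        if level != "ok":
            captured[level].append(txn)  # retained for later root cause analysis
    return captured

# Example: a 4.8 s transaction is captured as a "warning" before the SLA is breached.
print(check_transactions([{"id": "t1", "response_time_s": 4.8}]))
```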
At INETCO, we have long predicted that analytics would become part of the APM equation. Our latest version of INETCO Insight comes with a new analytics service that monitors transaction usage, as well as performance characteristics from users, devices, and hosts, and then pushes that data via the user interface and an API. This means we now analyze the performance characteristics of every user, device, and component of a distributed application, and can provide this information to the higher-level IT operations analytics platforms Cappelli describes.
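For a rough sense of what that per-entity data can look like, here is a hypothetical sketch that aggregates response times by user, device, or host and emits a JSON summary a downstream analytics platform could consume. The entity types mirror the description above, but the data shape and function names are assumptions, not the actual INETCO Insight API.

```python
# Hypothetical sketch: aggregate per-entity performance data into a JSON
# payload a higher-level IT operations analytics platform could ingest.
import json
from collections import defaultdict
from statistics import mean

def summarize(observations: list[dict]) -> str:
    """Aggregate observations into per-user/device/host performance summaries."""
    by_entity = defaultdict(list)
    for obs in observations:
        # obs example: {"entity_type": "device", "entity_id": "atm-17", "response_time_s": 2.1}
        by_entity[(obs["entity_type"], obs["entity_id"])].append(obs["response_time_s"])
    summary = [
        {
            "entity_type": etype,
            "entity_id": eid,
            "transaction_count": len(times),
            "avg_response_time_s": round(mean(times), 3),
        }
        for (etype, eid), times in by_entity.items()
    ]
    return json.dumps(summary)
```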
Cappelli also states:
“I think you’ll see IBM, HP, BMC, CA, Compuware rolling out generic IT operations analytics platforms, if they are in multiple enterprise management areas. If they are an APM specialist, like a Compuware, their analytic platforms will tend to be more specific to APM.”
We see this combination – APM and IT operations analytics – as extremely powerful. We’ll be providing a detailed use case example of how you can use our “Object Analytics” capabilities in next Friday’s blog.