Apica Extends Scope and Reach of Platform for Managing Telemetry Data

Apica today updated its Ascent platform to add support for the synthetic data that artificial intelligence (AI) agents increasingly rely on to observe application environments.

Version 2.16 of the platform adds support for a set of real user monitoring (RUM) and service level objective (SLO) dashboards, an ability to correlate changes made to any given rule to the cost of processing telemetry data, and additional performance enhancements.

Andi Mann, chief product technology officer for Apica, said collectively these updates will make it more feasible for DevOps teams to feed telemetry data at scale into observability platforms in a way that enables them to better control costs.

As application environments have become more complex, the amount of telemetry data being generated has increased exponentially. Fortunately, it has become easier to collect that data using, for example, open source OpenTelemetry software. However, many DevOps teams are now struggling to manage massive amounts of telemetry data.

The Ascent platform from Apica is designed to provide a means to process, transform, enrich and govern telemetry data before it is stored in an observability platform. That approach provides DevOps teams with the control mechanism they need to ensure only the most relevant telemetry data is passed on to an observability platform, where costs are often determined by how much data is stored.
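Apica has not published the internals of that pipeline here, but the same filter-before-storage idea can be illustrated with the open source OpenTelemetry Collector, which the article mentions as a common collection tool. The sketch below is a generic, hypothetical Collector configuration (the endpoint is a placeholder), not Apica's implementation; it drops debug-level logs before they reach a paid backend:

```yaml
# Illustrative OpenTelemetry Collector pipeline (not Apica-specific).
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Filter processor: discard log records below INFO severity so that
  # low-value debug chatter never counts against storage-based pricing.
  filter/drop-debug:
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'

exporters:
  otlphttp:
    endpoint: https://observability.example.com  # hypothetical backend

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug]
      exporters: [otlphttp]
```

Tightening or loosening a rule like the filter condition above is exactly the kind of change whose cost impact Ascent 2.16 now correlates for its users.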

It’s not clear to what extent DevOps teams are now employing observability platforms to identify the root cause of an application issue, but a recent Futurum Group survey finds more than a third (36%) are planning to spend more than $1 million on observability in 2026, with 7% planning to spend in excess of $5 million.

Arguably, existing observability challenges will almost certainly be further exacerbated by the rise of AI coding tools. As more AI agents are used to generate code, the number of applications deployed in production environments will only continue to increase rapidly. In many cases, DevOps teams will not understand how these applications have actually been constructed unless they are able to analyze telemetry data.

On the plus side, most observability platforms are adding AI agents that should make it simpler to analyze massive amounts of telemetry data. The challenge is making sure the right telemetry data is discoverable at the right time without increasing costs to the point where observability becomes unsustainable.

Hopefully, there will come a day when incident management becomes much less stressful than it has historically been. More than a few DevOps teams have spent months looking for the root cause of an issue that, once discovered, was solved in a few minutes. Observability platforms should make it much simpler to discover and remediate those issues long before an application is ever deployed in a production environment. Of course, there will still likely be hundreds of legacy applications running in production environments that are not all equally well instrumented, but as AI advances continue to be made, the number of applications that fail to generate useful telemetry data is only going to continue to rapidly decline.
