

Grafana Labs today at its GrafanaCON 2026 conference revealed it has extended its artificial intelligence (AI) agent to its cloud-based observability platform, while also previewing a platform for observing AI applications and an open source framework for evaluating AI agents.
At the same time, the company announced it has developed a distribution of the OpenTelemetry tools for collecting telemetry data that, in addition to improving support for Kubernetes clusters, can be installed with a single command in Linux environments.
Additionally, the company unfurled Grafana 13, an update to its core visualization software that adds pre-built and suggested dashboards that dynamically adapt to different use cases, layout templates, guided onboarding to reduce setup time, improved programmability through a redesigned dashboard schema, an updated application programming interface (API) for managing dashboards at scale, and support for Git-based workflows, team folders, improved secrets management and dashboard restore tools.
Finally, Grafana Labs announced that its Loki log aggregation platform has been shifted to a backend based on Kafka messaging software to improve scalability. There is also now a redesigned Loki query engine and scheduler that provide access to a query planner that distributes work across partitions and executes queries in parallel.
Jen Villa, a senior product director for Grafana Labs, said that in addition to providing an AI agent, dubbed Grafana Assistant, the company has added a Model Context Protocol (MCP) server to provide access to external data sources. There is also a command-line interface (CLI) through which the AI agent can be integrated with AI coding tools.
The overall goal is to provide Grafana Assistant with the context it needs for multiple observability use cases, spanning everything from when code is initially developed to when applications are deployed in a production environment.
Mitch Ashley, vice president and practice lead for software lifecycle engineering at The Futurum Group, said providers of observability platforms are now competing to become the enforcement surface for AI agents in production. Observability platforms are now being evaluated on how they govern agent behavior across trust boundaries, he added. The autonomy organizations grant agents will eventually be bound by the evidence their observability stack can produce, noted Ashley.
It’s not clear at what rate organizations are increasing their investments in observability, but a recent Futurum Group survey finds more than a third (36%) of respondents work for organizations that plan to spend more than $1 million on observability in 2026, with 7% planning to spend in excess of $5 million.
Much of that spending is being driven by a need to move beyond traditional monitoring of pre-defined metrics to adopt observability platforms that make it easier to analyze logs, traces and metrics to discover the root cause of a specific application issue. As application environments continue to become more complex, it’s clear that existing monitoring tools are no longer enough. DevOps teams need to be able to determine the dependencies between components and services to improve overall reliability.
Regardless of approach, the volume of log data being generated is only going to continue to increase exponentially as it becomes easier to instrument application environments. In the meantime, DevOps teams would be well-advised to determine now not only how they are going to collect all that telemetry data, but also how they will store and analyze it.