Coralogix Taps Skyflow to Anonymize Log Data Using Tokens

Coralogix and Skyflow have partnered to protect sensitive log data that DevOps teams might inadvertently expose while investigating incidents or sharing data with downstream applications.

Skyflow has developed a platform that, rather than eliminating or masking sensitive data, replaces it with tokens, keeping that data searchable and auditable while preserving privacy.

Coralogix CEO Ariel Assaraf said eliminating or masking data strips away context, making it more difficult to query, correlate, and operationalize log data effectively. For example, identifiers may no longer match across events.

Skyflow, via its Runtime AI Data Control Platform, applies polymorphic encryption and tokenization to personally identifiable information (PII). That capability makes it possible to apply governance policies to sensitive customer data in a way that ensures the data remains anonymized.
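The core idea behind tokenization can be sketched in a few lines. The example below is a minimal, hypothetical illustration, not Skyflow's actual platform or API: it assumes a vault-side secret key and shows how a deterministic keyed hash lets the same PII value always map to the same token, so correlation and search across log events still work even though the raw value is never exposed.

```python
import hashlib
import hmac
import re

# Illustrative only: a vault-side secret held by the tokenization service,
# never by the log store or dashboards.
SECRET_KEY = b"vault-side-secret"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def tokenize(value: str) -> str:
    # Deterministic keyed hash: identical inputs yield identical tokens,
    # preserving joinability across events without revealing the raw value.
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"tok_{digest[:16]}"

def anonymize_log_line(line: str) -> str:
    # Replace every email address in the log line with its token.
    return EMAIL_RE.sub(lambda m: tokenize(m.group(0)), line)

print(anonymize_log_line("user=alice@example.com action=login status=ok"))
```

Because the mapping is deterministic, two log events referencing the same user still carry the same token, which is what keeps queries and correlations intact after anonymization.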

The overall goal is to ensure that DevOps teams do not inadvertently expose sensitive data in dashboards or other downstream applications, including large language models (LLMs) that might access raw log data, said Assaraf.

In the age of artificial intelligence (AI), telemetry data is increasingly being incorporated into business workflows, so being able to anonymize that data has become crucial, he added. However, DevOps teams can still apply policies to rehydrate data for approved workflows as needed, noted Assaraf.
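That rehydration step can be sketched as a policy check in front of a token vault. The example below is hypothetical; names such as `VAULT` and `ALLOWED_ROLES` are illustrative assumptions, not Skyflow's API. The point is that tokens can flow freely through dashboards and LLM prompts, while only approved workflows can exchange a token for the raw value.

```python
# Illustrative token vault and policy gate (not Skyflow's actual API).
VAULT: dict[str, str] = {}          # token -> original value (service-side only)
FORWARD: dict[str, str] = {}        # original value -> token
ALLOWED_ROLES = {"fraud-review"}    # roles permitted to rehydrate PII

def tokenize(value: str) -> str:
    # Issue a stable token per value so repeated events stay correlatable.
    if value not in FORWARD:
        token = f"tok_{len(VAULT):04d}"
        FORWARD[value] = token
        VAULT[token] = value
    return FORWARD[value]

def detokenize(token: str, role: str) -> str:
    # Policy check: only approved workflows may rehydrate raw data.
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not rehydrate PII")
    return VAULT[token]

token = tokenize("alice@example.com")
print(detokenize(token, "fraud-review"))   # approved workflow gets the raw value
# detokenize(token, "llm-summarizer") would raise PermissionError
```

In a real deployment the vault and the policy evaluation live in a separate governed service, so downstream consumers only ever see tokens unless a policy explicitly grants rehydration.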

Mitch Ashley, vice president and practice lead for software lifecycle engineering at Futurum Group, said the Coralogix-Skyflow integration reflects how regulated log data has become. As DevOps teams pipe telemetry into dashboards and LLMs, tokenization keeps correlation, search, and audit intact while isolating PII, he added.

Observability platforms that cannot govern PII across query, dashboard, and AI workflows will hit a ceiling with regulated enterprises, noted Ashley. Every downstream consumer, LLMs included, creates exposure that procurement will evaluate at the platform level, he added.

It’s unclear exactly how quickly observability spending is rising, but a recent Futurum Group survey finds more than a third (36%) of respondents work for organizations that plan to spend over $1 million on observability in 2026, with 7% planning to spend in excess of $5 million.

Much of that spending is being driven by a need to move beyond traditional monitoring of pre-defined metrics toward observability platforms that make it easier to analyze logs, traces, and metrics to discover the root cause of a specific application issue. As application environments continue to grow more complex, it’s clear that existing monitoring tools are no longer enough. DevOps teams need to be able to determine the dependencies that exist between components and services to improve overall reliability. Otherwise, it’s only a matter of time before one or more preventable issues adversely impacts application performance or, worse yet, availability itself.

Regardless of approach, the volume of log data being generated will only continue to increase exponentially as more application environments become instrumented. The challenge is that as more log data is generated, the chances that it contains some type of sensitive data that should not be exposed also increase.
