

Lineaje this week unfurled a platform that automatically discovers the artificial intelligence components of an application, defines security and governance policies, and then autonomously generates guardrails.
At the core of the Lineaje UnifAI platform is a set of artificial intelligence (AI) capabilities integrated with an orchestration framework for applying governance policies. That framework is embedded within a Model Context Protocol (MCP) server so it can integrate with AI coding tools.
The Discovery Agents that Lineaje developed continuously map an AI Bill of Materials (AIBOM) to identify every model, agent, MCP server, dependency, skill, and data connection. Additionally, AI agents will craft an AI Kill-Chain model to thwart known threats using the defenses a DevSecOps team has put in place.
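Lineaje has not published the UnifAI AIBOM schema, but conceptually an AIBOM is an inventory of every AI-related component in an application, queryable by type. The minimal Python sketch below illustrates the idea; all class names, fields, and component values are hypothetical, not the platform's actual format.

```python
from dataclasses import dataclass, field

# Hypothetical AIBOM structure for illustration only; Lineaje's real
# schema is not public, so these names and fields are invented.
@dataclass
class AIBOMComponent:
    name: str
    kind: str     # e.g. "model", "agent", "mcp-server", "dependency", "skill", "data-connection"
    version: str
    source: str   # where discovery found it (repo, registry, config file)

@dataclass
class AIBOM:
    application: str
    components: list[AIBOMComponent] = field(default_factory=list)

    def add(self, component: AIBOMComponent) -> None:
        self.components.append(component)

    def by_kind(self, kind: str) -> list[AIBOMComponent]:
        """Return every component of a given kind, e.g. all MCP servers."""
        return [c for c in self.components if c.kind == kind]

bom = AIBOM(application="support-chatbot")
bom.add(AIBOMComponent("gpt-4o", "model", "2024-08-06", "openai-api"))
bom.add(AIBOMComponent("ticket-lookup", "mcp-server", "1.2.0", "internal-registry"))
bom.add(AIBOMComponent("langchain", "dependency", "0.2.1", "pypi"))

print([c.name for c in bom.by_kind("mcp-server")])  # ['ticket-lookup']
```

The point of the continuous mapping Lineaje describes is that this inventory is regenerated as components change, rather than compiled once before release.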
Lineaje CEO Javed Hasan said that approach enables the platform to derive application behavior intent from the tools used to design and build an application, which it then uses to generate the appropriate governance policies. DevSecOps teams can also upload internal governance documents that are converted into enforceable policies. In addition, Lineaje’s AI Research Labs continually publishes new policies to address emerging threats in agentic AI environments.
Those updates will prove crucial because the tactics and techniques being created to exploit AI software continue to rapidly evolve, noted Hasan.
Armed with those insights, a DevSecOps team can rely on the UnifAI platform to, in real time, automatically map every system, connection, and behavioral pattern to assess risk, he added.
The platform also automatically recommends policies for data protection, identity and access management, compliance alignment, threat prevention, and vulnerability remediation, eliminating the need for teams to write policies from scratch. The overall goal is to identify and resolve risks before applications are deployed in a production environment, added Hasan.
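Recommending policies rather than having teams write them from scratch implies a policy-as-code model, where each policy is a machine-checkable rule evaluated against the discovered components. The sketch below shows that general pattern; the rule names, component fields, and policy logic are all hypothetical, since UnifAI's actual policy format is not public.

```python
from typing import Callable

# Hypothetical policy-as-code sketch; rule names and component fields
# are invented for illustration, not UnifAI's real policy format.
# A policy is a named predicate over a discovered component record.
Policy = tuple[str, Callable[[dict], bool]]

POLICIES: list[Policy] = [
    ("require-pinned-version", lambda c: c.get("version") not in (None, "latest")),
    ("no-unvetted-mcp-servers", lambda c: c["kind"] != "mcp-server" or c.get("vetted", False)),
    ("data-connections-encrypted", lambda c: c["kind"] != "data-connection" or c.get("encrypted", False)),
]

def evaluate(components: list[dict]) -> list[str]:
    """Return violations in the form 'component: policy', found pre-deployment."""
    return [
        f"{c['name']}: {name}"
        for c in components
        for name, check in POLICIES
        if not check(c)
    ]

components = [
    {"name": "gpt-4o", "kind": "model", "version": "2024-08-06"},
    {"name": "ticket-lookup", "kind": "mcp-server", "version": "1.2.0", "vetted": False},
    {"name": "crm-db", "kind": "data-connection", "version": "1.0", "encrypted": True},
]

print(evaluate(components))  # ['ticket-lookup: no-unvetted-mcp-servers']
```

Running checks like these in the build pipeline is what allows risks to be resolved before an application reaches production, as Hasan describes.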
Ultimately, each DevSecOps team will need to determine to what degree they want to rely on AI to generate and apply governance policies, but in time this process will become automated as confidence in AI increases, noted Hasan.
Mitch Ashley, vice president and practice lead for software lifecycle engineering at the Futurum Group, said the Lineaje UnifAI platform illustrates how AI governance is being operationalized. Automated discovery of AI components, policy generation, and guardrail enforcement can now be embedded directly in the development workflow. This positions governance as a continuous, lifecycle-spanning function rather than a pre-deployment checkpoint, he added.
Additionally, DevSecOps teams gain policy enforcement in a way that is tied to actual application behavior, derived from the tools and dependencies used to build it, noted Ashley. As agentic AI components multiply across the stack, teams that rely on manual policy authorship will face growing exposure; automated AIBOM discovery and kill-chain modeling become baseline requirements, not optional enhancements, said Ashley.
DevSecOps teams will need to determine to what degree they will re-engineer their workflows as more AI components are embedded within their applications. Many of those components, unfortunately, are susceptible to attacks such as malicious prompts that are relatively trivial to create. In the absence of any governance policies, the amount of havoc that might be wrought is greater than ever.
