

Secure Code Warrior (SCW) this week added an artificial intelligence (AI) agent that both identifies code generated by an AI coding tool and automatically applies the appropriate governance policies.
Company CEO Pieter Danhieux said the SCW Trust Agent makes it possible for DevSecOps teams to use AI to verify which AI models influenced specific commits, correlate that influence to vulnerability exposure, and take corrective action before insecure code is added to a production environment. DevSecOps teams can also use the AI agent to discover any Model Context Protocol (MCP) servers that might have been deployed without permission.
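The commit-level attribution described above could be implemented in many ways; one minimal, entirely hypothetical sketch (not SCW's actual mechanism or API) is to record the generating model as a commit-message trailer and fail any commit whose declared model is not on an approved list before it is promoted toward production:

```python
# Hypothetical sketch, not SCW's actual implementation: record AI-model
# provenance as a "AI-Model:" commit-message trailer, then allow or block
# the commit based on an approved-model list. Model names are placeholders.

APPROVED_MODELS = {"model-a", "model-b"}  # illustrative approved list

def parse_trailers(commit_message: str) -> dict:
    """Extract 'Key: value' trailer lines from a commit message."""
    trailers = {}
    for line in commit_message.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            trailers[key.strip()] = value.strip()
    return trailers

def check_commit(commit_message: str) -> str:
    """Return 'allow' if the declared model is approved, else 'block'."""
    model = parse_trailers(commit_message).get("AI-Model")
    if model is None:
        return "block"  # undeclared provenance fails closed
    return "allow" if model in APPROVED_MODELS else "block"

msg = """Fix null check in payment handler

AI-Model: model-a
AI-Assisted: true
"""
print(check_commit(msg))  # prints "allow" under these assumptions
```

A real system would attribute influence from IDE telemetry rather than trust a self-declared trailer, but the fail-closed check on undeclared provenance reflects the governance posture the article describes.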
Finally, SCW benchmark data can also be used to evaluate models and enforce approved AI usage policies based on measurable output, noted Danhieux. For example, a developer might adopt a cheaper AI model to reduce costs without realizing it generates more vulnerabilities than a different AI model would.

Armed with those insights, it also becomes possible to assess the AI expertise of individual developers in a way that surfaces issues requiring additional guidance and training, added Danhieux.
Each DevSecOps team will ultimately need to decide what AI models to block versus allow depending on the quality of the code generated and any compliance or security issues that may arise because of what country the AI model is hosted in, noted Danhieux.
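That block-versus-allow decision combines the two criteria Danhieux names: the quality of the code a model generates and where the model is hosted. A minimal sketch of such a policy check, with entirely illustrative thresholds, model names, and country codes, might look like this:

```python
# Hypothetical sketch of a team-defined allow/block policy that combines a
# model's measured vulnerability density with its hosting jurisdiction.
# All names, thresholds, and country codes here are illustrative assumptions.

from dataclasses import dataclass

BLOCKED_COUNTRIES = {"XX"}   # jurisdictions a team chooses to exclude
MAX_VULNS_PER_KLOC = 2.0     # tolerated vulnerabilities per 1,000 lines

@dataclass
class ModelProfile:
    name: str
    vulns_per_kloc: float    # benchmark-derived vulnerability density
    host_country: str

def decide(model: ModelProfile) -> str:
    if model.host_country in BLOCKED_COUNTRIES:
        return "block"       # compliance concern, regardless of quality
    if model.vulns_per_kloc > MAX_VULNS_PER_KLOC:
        return "block"       # quality concern
    return "allow"

print(decide(ModelProfile("model-a", 1.1, "US")))  # prints "allow"
print(decide(ModelProfile("model-b", 0.5, "XX")))  # prints "block"
```

Checking the compliance rule first mirrors the article's point that hosting location can disqualify a model even when its generated code is otherwise high quality.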
Mitch Ashley, vice president and practice lead for the Futurum Group, said SCW Trust Agent moves AI code governance upstream to the commit level, tying model influence to specific changes and correlating that attribution to vulnerability exposure. Model provenance becomes a first-class governance signal across the development lifecycle.
AI-centric development is changing many aspects of software development at a pace never seen before, he added. Code commits per hour can easily double or triple. To address this, scanning technologies must not only move upstream but also transition from scan-and-report tools to remediation agents at the point of origin in order to operate in a multi-agent development environment, said Ashley.
It’s not clear how much of the code finding its way into production environments today was generated by humans versus machines, but as AI coding tools continue to evolve, it is only a matter of time before most, if not all, of that code is machine-generated. The immediate challenge is providing more transparency into how code is generated so organizations can strike a better balance between quality and cost.
Of course, not every line of code necessarily needs to be written by the AI model that is the most costly to run. Conversely, there is always going to be code that needs to be of the highest quality, which means an AI model with the most advanced reasoning capabilities should be employed.
Regardless of approach, the days when organizations allowed developers to use any AI tool to increase productivity are coming to a close. Instead, organizations will soon exercise far more control over which specific AI tools and services application developers can access, in an era where exactly how any given line of code was created is about to matter a whole lot more.