

Checkmarx this week revamped its DevSecOps platform to include an orchestration framework for managing tasks assigned to artificial intelligence (AI) agents.
Additionally, the company has added two AI agents trained to triage vulnerabilities and remediate them using code they generate for review. At the same time, it has added the ability to discover AI software assets, including models, agents, datasets, prompts and AI bill of materials (AI-BOM) components, to make it simpler to consistently enforce policies.
Finally, Checkmarx has infused its static application security testing (SAST) and dynamic application security testing (DAST) tools with AI capabilities to discover more vulnerabilities faster.
In total, the Checkmarx One platform now provides access to five AI agents that automate tasks both as code is being created and as it moves through a DevSecOps workflow.
Jonathan Rende, chief product officer for Checkmarx, said the need for tools and platforms capable of identifying and remediating vulnerabilities created by AI coding tools has become a pressing issue. As the overall volume of code being generated continues to increase, so too does the number of vulnerabilities being generated, he added.
At the same time, malicious actors are now using AI to discover and exploit vulnerabilities faster than ever, he said. The only way to address that issue is to employ a platform that combines deterministic rules with probabilistic large language models (LLMs), enabling DevSecOps teams to resolve issues both as code is being written and as it is incorporated into the software supply chain, noted Rende.
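To make the hybrid idea concrete, here is a minimal, hypothetical sketch of what such a pipeline could look like: fixed rules deterministically flag well-known insecure patterns, and each finding could then be handed to an LLM to draft a remediation for human review (the LLM step is stubbed out below). The rule names, patterns and function names are illustrative assumptions, not Checkmarx's implementation.

```python
import re

# Two illustrative deterministic rules: a hardcoded credential and a
# string-formatted SQL query. Real scanners use far richer rule sets.
RULES = {
    "hardcoded-secret": re.compile(r"(?i)(password|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    "sql-string-concat": re.compile(r"execute\(\s*['\"].*%s.*['\"]\s*%"),
}

def deterministic_scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_id) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule_id))
    return findings

def propose_fix(line: str, rule_id: str) -> str:
    """Placeholder for the probabilistic step: in a real pipeline this
    would prompt an LLM to draft a patch for a developer to review."""
    return f"# TODO: remediate '{rule_id}' finding; LLM-drafted patch goes here"

sample = (
    'api_key = "sk-123456"\n'
    'cursor.execute("SELECT * FROM users WHERE id = %s" % uid)\n'
)
print(deterministic_scan(sample))
# → [(1, 'hardcoded-secret'), (2, 'sql-string-concat')]
```

The deterministic half is cheap, repeatable and auditable; the probabilistic half handles the long tail of fixes that rules cannot template, which is why the two are paired rather than used alone.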
Code reviews that once required hours to manually complete can now be conducted in minutes, said Rende. In effect, DevSecOps teams are now caught up in an AI arms race to fix vulnerabilities at machine speed, one that requires a unified approach to application security posture management (ASPM), he added.
Mitch Ashley, vice president and practice lead for software lifecycle engineering at the Futurum Group, said the orchestration framework for AI agents developed by Checkmarx formalizes the necessary governance layer for AI-generated code. Triage and remediation agents move security accountability directly into the development loop, which is essential as code volume explodes, he added.
The addition of AI-BOM discovery and policy enforcement at the asset layer also addresses the supply chain visibility gap by moving beyond just scanning code to governing the inputs and outputs of the AI development ecosystem itself, noted Ashley.
Ultimately, the combination of deterministic rules and LLMs for remediation is the only path to keep pace with machine-speed vulnerability discovery, said Ashley.
It’s not clear to what degree DevSecOps teams are re-engineering workflows in the age of AI coding, but in the absence of any change, it’s probable the overall state of application security is going to get worse. On the plus side, AI coding tools are generating better code, even to the point where SQL injection vulnerabilities might one day be eliminated altogether. However, there is still a significant gap between the number of vulnerabilities being created by these tools and the ability of application developers to discover and remediate them.
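For readers unfamiliar with the class of bug in question, the sketch below shows the classic SQL injection pattern alongside the parameterized alternative that code assistants are increasingly trained to emit by default. It uses Python's standard-library sqlite3 purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# Vulnerable: attacker-controlled input is spliced directly into the query
# string, so the injected OR clause matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a bound parameter is treated strictly as a literal value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)]  -- the injection returned rows it should not
print(safe)        # []            -- the literal string matches nothing
```

Because the fix is so mechanical, swapping string interpolation for bound parameters, it is exactly the kind of vulnerability both rule-based scanners and AI coding tools can reliably catch and correct.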
In fact, DevSecOps engineers, who have always been too few in number, are, in the short term at least, likely to be overwhelmed by the volume of flawed code that now more than ever needs to be reviewed and validated.
