

CloudBees has made generally available an add-on for continuous integration/continuous deployment (CI/CD) platforms that uses artificial intelligence (AI) to determine which tests should be run first based on the likelihood there will be a failure.
Shawn Ahmed, chief product officer at CloudBees, said CloudBees Smart Tests eliminates the need to run an entire test suite on every change. Instead, this extension to a CI/CD platform surfaces which specific tests are most likely to fail, allowing a DevOps team to run those first rather than waiting hours, or sometimes even days, for a full suite of tests to complete, noted Ahmed.
Additionally, DevOps teams can run those tests in parallel to further reduce the amount of time required to vet an application workload, which in turn reduces the overall amount of CI/CD processing overhead, added Ahmed.
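The time savings from parallelism can be sketched in a few lines. This is a hypothetical illustration, not CloudBees code: the test names and the `run_test` stand-in are invented, and each "test" just sleeps to simulate work.

```python
# Hypothetical sketch: once a prioritized subset of tests is selected,
# running them in parallel trims wall-clock time. Test names and the
# run_test stand-in are illustrative, not part of any real product.
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    time.sleep(0.1)  # stand-in for real test execution
    return name, "finished"

suspect_tests = ["test_checkout", "test_payment", "test_refund", "test_invoice"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_test, suspect_tests))
elapsed = time.perf_counter() - start

print(results)
print(f"wall time: {elapsed:.2f}s")  # roughly 0.1s instead of ~0.4s sequentially
```

Four tests of 0.1s each finish in roughly 0.1s of wall time instead of 0.4s, which is the same lever Smart Tests pulls at CI/CD scale.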
That capability is especially critical in the age of AI as the amount of code being generated continues to increase exponentially, he noted. In fact, because most of that code is generated by a machine, the only way to really understand how it was constructed is to apply machine learning (ML) algorithms at scale to test it, added Ahmed.
Built on the Launchable platform that CloudBees acquired in 2024, CloudBees Smart Tests uses ML algorithms that have been trained to identify patterns that predict which tests a given workload is most likely to fail. Compatible with multiple CI/CD platforms, that approach ultimately makes it possible to complete testing as much as 30 to 50 times faster, said Ahmed.
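The general idea behind risk-weighted test selection can be illustrated with a toy scoring function. To be clear, this is an assumption-laden sketch: the Smart Tests model is not public, and the features (historical failure rate, overlap between changed files and the files a test covers) and weights below are invented for illustration.

```python
# Toy sketch of predictive test selection. The data, features and weights
# are hypothetical; CloudBees Smart Tests' actual model is not public.
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    name: str
    runs: int                 # historical executions
    failures: int             # historical failures
    covered_files: set = field(default_factory=set)  # files this test exercises

def risk_score(test: TestRecord, changed_files: set) -> float:
    """Blend historical failure rate with overlap against the current change set."""
    failure_rate = test.failures / test.runs if test.runs else 0.5
    overlap = len(test.covered_files & changed_files) / (len(changed_files) or 1)
    return 0.6 * failure_rate + 0.4 * overlap  # weights chosen for illustration

def prioritize(tests, changed_files):
    """Return tests ordered most-likely-to-fail first."""
    return sorted(tests, key=lambda t: risk_score(t, changed_files), reverse=True)

history = [
    TestRecord("test_checkout", runs=200, failures=40, covered_files={"cart.py", "pay.py"}),
    TestRecord("test_login",    runs=200, failures=2,  covered_files={"auth.py"}),
    TestRecord("test_search",   runs=200, failures=10, covered_files={"search.py"}),
]

ordered = prioritize(history, changed_files={"pay.py"})
print([t.name for t in ordered])  # ['test_checkout', 'test_search', 'test_login']
```

A real system learns its features and weights from large volumes of historical CI data rather than hard-coding them, but the output is the same in shape: a ranking that lets the riskiest tests run first.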
Mitch Ashley, vice president and practice lead for software lifecycle engineering at the Futurum Group, said AI code generation is compressing commit-to-deployment timelines, and test execution is emerging as the bottleneck that determines whether that speed holds. CloudBees Smart Tests shifts test selection from sequential execution to risk-weighted intelligence, he added.
For teams absorbing higher AI-generated code volumes, running full test suites will compound delays, noted Ashley. Test selection is now a pipeline governance decision, and teams that lack ML-based prioritization will find CI/CD overhead growing in proportion to their AI adoption, said Ashley.
Testing, of course, is one of the things that DevOps teams most frequently cut back on whenever a deadline looms. As a result, the number of applications that have been deployed without running a complete battery of tests is much higher than most organizations care to admit. However, if it becomes simpler to run tests faster, the overall quality of the applications being deployed should improve. The challenge is that, in the short term, the volume of code being generated today is already overwhelming the existing workflows relied on to test software.
Each organization will need to determine to what degree to rework its DevOps workflows in the age of AI, but change at this point is inevitable. In the meantime, using AI to accelerate testing is low-hanging fruit that hopefully doesn't require as much reengineering to achieve. The challenge, of course, is summoning the will to apply ML to testing in the face of existing inertia.
