JFrog today announced it has integrated its DevSecOps platform with a managed machine learning operations (MLOps) platform from Qwak to advance collaboration between teams building and deploying multiple classes of software artifacts.
This alliance comes on the heels of a similar Amazon SageMaker alliance announced last month, which integrated the JFrog Software Supply Chain Platform with a managed service for building artificial intelligence (AI) models provided by Amazon Web Services (AWS).
Both MLOps platforms provide data science teams with a complete stack of curated tools for building AI models from the ground up, as opposed to customizing AI models that were previously built.
Gal Marder, executive vice president of strategy at JFrog, said the integration with the Qwak platform makes it possible to manage the software artifacts created by MLOps teams alongside the rest of the software artifacts a DevSecOps team is already managing. That approach also makes it possible to detect and block the use of malicious ML models, in addition to ensuring models comply with company policies and regulatory requirements, he noted.
As it becomes apparent that more AI models will be directly embedded within applications, the need to integrate DevOps workflows with the MLOps platforms used to create those models is becoming more pronounced, said Marder.
The JFrog Software Supply Chain Platform can provide both data scientists and developers with a single source of truth for securely managing software artifacts in a common repository, he added, fostering greater collaboration between teams that, in most cases, are still determining how best to work together.
The challenge is that data science teams typically train and deploy AI models every few months, while DevSecOps teams often update applications multiple times a month. As a result, data science and DevSecOps teams generally have distinct cultures, but in the long term, DevOps and MLOps workflows will eventually merge, said Marder.
Most organizations are still trying to determine how best to operationalize AI using their own data. In the near term, AI models are being invoked via application programming interfaces (APIs). However, it's only a matter of time before more AI models are embedded directly in applications to improve overall performance. The challenge is that AI models can't be updated the same way other software artifacts are patched, so managing versions of AI models will require a different set of controls as one model replaces another. JFrog, for example, has developed versioning capabilities that can be applied to AI models in the context of a DevOps workflow.
There is, of course, already no shortage of MLOps platforms, so DevOps teams should expect to see a wave of alliances being formed between the providers of these platforms. Less clear is to what degree those alliances might lead to mergers and acquisitions among the providers of those platforms.
One way or another, however, AI models are coming to DevSecOps workflows. The only issue to be resolved is how best to manage their deployment alongside all the other types of software artifacts already moving through existing DevOps pipelines.