ZEST Security has launched a platform that leverages generative artificial intelligence (AI) to remediate code created using infrastructure-as-code (IaC) tools.
Fresh off raising $5 million in funding, company CEO Snir Ben Shimol said the ZEST platform will correlate findings to pinpoint the root cause of a vulnerability and then generate both recommendations and code to enable DevOps teams to address the issue.
Today it takes, on average, 30 to 60 days to manually remediate, for example, a cloud misconfiguration. The ZEST platform will reduce that time to a matter of hours, said Ben Shimol.
It also helps ensure the same mistakes are not repeated by identifying errors before code is used to provision IT infrastructure, he added. That’s critical because 80% of resolved issues resurface as developers make the same mistakes multiple times, noted Ben Shimol.
More challenging still, the tactics and techniques employed by cybercriminals continuously evolve, he added. An instance of cloud infrastructure that seemed perfectly secure can suddenly be exposed by a zero-day vulnerability, said Ben Shimol.
At the core of the ZEST platform is a large language model (LLM) that has been customized to identify IaC issues and generate Terraform, Pulumi or AWS CloudFormation code.
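Much of the remediation such a platform generates comes down to tightening resource definitions that were written with insecure defaults. The snippet below is a minimal, hypothetical sketch, not actual ZEST output, expressed with Pulumi's Python SDK (one of the IaC formats named above): a storage bucket originally defined with a public ACL, followed by the kind of hardened definition a remediation suggestion might propose.

```python
"""Hypothetical sketch of an IaC misconfiguration and its remediation.

Illustrative only; resource names and settings are assumptions,
not output from the ZEST platform.
"""
import pulumi_aws as aws

# Misconfigured original: a bucket provisioned with a public-read ACL,
# one of the most common findings in IaC scans.
# logs_bucket = aws.s3.Bucket("app-logs", acl="public-read")

# Remediated version: the bucket is private, and public access is blocked
# at the bucket level so a later policy change cannot re-expose it.
logs_bucket = aws.s3.Bucket("app-logs", acl="private")

aws.s3.BucketPublicAccessBlock(
    "app-logs-block-public",
    bucket=logs_bucket.id,
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
)
```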
While cloud platforms are generally more secure than the average on-premises IT environment, the way they are provisioned by developers often leads to misconfigurations that are easily exploited by cybercriminals.
Unfortunately, the average developer lacks cybersecurity expertise, so the number of potential misconfigurations is extremely high. In fact, one of the reasons so many applications still reside in on-premises IT environments is that many IT and cybersecurity leaders are concerned that the processes used to build and deploy cloud applications, under the shared responsibility model that cloud service providers require, are deeply flawed.
It’s not clear to what degree cloud service providers are working to address those concerns by providing developers with access to more training, but in the meantime, LLMs that have been trained to generate specific types of code are emerging. The issue now is finding a way to systematically review all the cloud instances that have previously been provisioned programmatically using IaC tools.
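As a rough illustration of what such a systematic review could look like, the sketch below parses the JSON that `terraform show -json` emits for a plan and flags any S3 buckets defined with public ACLs before the plan is applied. It is an assumed approach for illustration, not a description of how the ZEST platform works, and it only inspects root-module resources.

```python
"""Hypothetical sketch: flag public S3 bucket ACLs in a Terraform plan.

Assumes the JSON layout produced by `terraform show -json tfplan > plan.json`;
an illustrative review step, not part of the ZEST platform.
"""
import json
import sys

PUBLIC_ACLS = {"public-read", "public-read-write"}


def find_public_buckets(plan_path: str) -> list[str]:
    """Return a list of bucket addresses in the plan that use a public ACL."""
    with open(plan_path) as f:
        plan = json.load(f)

    findings = []
    # Root-module resources only; child modules are skipped in this sketch.
    resources = plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
    for resource in resources:
        if resource.get("type") != "aws_s3_bucket":
            continue
        acl = (resource.get("values") or {}).get("acl")
        if acl in PUBLIC_ACLS:
            findings.append(f"{resource['address']}: acl={acl}")
    return findings


if __name__ == "__main__":
    issues = find_public_buckets(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    for issue in issues:
        print("public bucket:", issue)
    # Non-zero exit lets a CI pipeline block the apply step on findings.
    sys.exit(1 if issues else 0)
```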
Regardless, the overall quality of the code used to provision cloud environments should steadily improve thanks to the rise of AI. That’s critical at a time when the number of applications being deployed continues to steadily increase. Less clear is to what degree those AI platforms will be funded by the cybersecurity teams that are held accountable for application security versus the DevOps teams that provision cloud infrastructure.
One way or another, however, the cloud security issue will inevitably be forced as forthcoming regulations require organizations to lock down their software supply chains, many of which today are either partially or wholly dependent on cloud infrastructure.
Ultimately, better cloud security benefits everyone concerned. The challenge is that when cloud security is everyone’s responsibility, there is a tendency for no one to be held specifically accountable for ensuring it is achieved and maintained.