Legit Security today announced it has expanded the scope of its application security posture management (ASPM) platform to make use of artificial intelligence (AI) to more accurately discover the secrets in applications that cybercriminals can actually exploit.
Secrets scanners typically generate false positives at rates high enough that developers often start to ignore them. As a result, it’s not uncommon for a secret to remain exposed in an application even though an alert flagging it was generated.
Legit Security CTO Liav Caspi said the company is now applying a large language model (LLM) specifically trained for source code, along with prompt engineering techniques and heuristics, to reduce the level of alert noise that developers would otherwise see when scanning for secrets. The overall goal is to correctly identify access keys, passwords, application programming interface (API) keys and personally identifiable information (PII) that should not be exposed once an application is deployed in a production environment, he noted.
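Legit has not published the details of its model or prompts, but the general pattern of pairing cheap heuristics with an LLM judgment can be sketched. The Python example below is purely illustrative: the `llm_complete` callable, the entropy threshold and the placeholder patterns are assumptions for the sake of the example, not the company's implementation.

```python
import math
import re

# Hypothetical sketch: combine simple heuristics with an LLM judgment to
# decide whether a scanner finding is a real, exploitable secret or noise.
# `llm_complete` stands in for whatever model endpoint is actually used.

PLACEHOLDER_HINTS = re.compile(r"(example|sample|dummy|test|xxx|changeme)", re.I)

def shannon_entropy(value: str) -> float:
    """Rough randomness score; real keys tend to score higher than plain words."""
    if not value:
        return 0.0
    freq = {c: value.count(c) / len(value) for c in set(value)}
    return -sum(p * math.log2(p) for p in freq.values())

def triage_finding(candidate: str, snippet: str, path: str, llm_complete) -> bool:
    """Return True if the finding looks like a live secret worth alerting on."""
    # Heuristic pre-filters: obvious placeholders and low-entropy strings are
    # discarded before the slower, costlier LLM is consulted.
    if PLACEHOLDER_HINTS.search(candidate) or shannon_entropy(candidate) < 3.0:
        return False

    prompt = (
        "You review source code for leaked credentials.\n"
        f"File: {path}\n"
        f"Surrounding code:\n{snippet}\n"
        f"Candidate string: {candidate}\n"
        "Answer YES if this is a real credential an attacker could use, "
        "NO if it is test data, a placeholder or documentation."
    )
    return llm_complete(prompt).strip().upper().startswith("YES")
```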
That capability can be applied to detect secrets across all development assets, including code repositories, source code management (SCM) tools, build tools and logs, artifacts, and private and public documentation. This approach also uncovers toxic combinations of secrets, or ones that lie buried within assets such as source code history or modified Confluence pages that developers often forget exist, noted Caspi.
Legit Security also enables DevSecOps teams to apply preventive guardrails on developer endpoints using the Legit command line interface (CLI) to stop secrets from being exposed before code is pushed. In addition, automated workflows can reach developers as part of a pull request or create Slack messages or Jira tickets to streamline remediation. DevSecOps teams can also continuously triage results to create a baseline that includes exceptions.
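The Legit CLI itself is proprietary, so the following is only a generic illustration of the guardrail idea it describes: a git pre-commit hook that refuses a commit when the staged diff appears to add a credential. The handful of token patterns shown are assumptions chosen for the example, not a complete rule set.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit guardrail (not the Legit CLI): block a commit
if the staged diff appears to introduce a credential."""

import re
import subprocess
import sys

# A few well-known token shapes; a production guardrail would use a much
# broader, regularly updated rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                              # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def staged_additions() -> list[str]:
    """Return only the lines being added in this commit."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "-U0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]

def main() -> int:
    hits = [line for line in staged_additions()
            if any(p.search(line) for p in SECRET_PATTERNS)]
    if hits:
        print("Commit blocked: possible secrets in staged changes:", file=sys.stderr)
        for line in hits:
            print(f"  {line.strip()}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as `.git/hooks/pre-commit` and marked executable, a hook like this catches the most obvious leaks on the developer endpoint, while the server-side platform handles the broader asset coverage described above.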
As part of a larger effort to secure software supply chains, Legit Security has developed a software-as-a-service (SaaS) platform designed to apply security policies across an entire software development life cycle. The company is now extending the capabilities of that platform using AI technologies to detect secrets in both source code and all the components that make up a development pipeline as they are added.
In the wake of an executive order requiring federal agencies to better secure software supply chains, there is now more focus than ever on application security. Unfortunately, many developers are a little too cavalier when it comes to managing secrets during the application development process. They often leave secrets in plain text to facilitate workflows, but more often than many care to admit, those secrets are not removed before an application is deployed. Knowing this, cybercriminals have learned to scan applications for those exposed secrets. Once they obtain those credentials, they appear to be just another legitimate user of an application, and their presence might not be discovered until malware is activated.
It may be a while before the cultural and technical issues that conspire to make the software supply chain insecure are fully addressed. In fact, the problem may get worse before it gets better, simply because the pace at which distributed applications are being built and deployed only continues to accelerate. At the same time, the cost of making a secrets mistake is only going to increase as regulations become more stringent. It’s now only a matter of time before stiff fines intended to make an example of one or more unfortunate organizations are levied.