IBM has made generally available IBM Concert, a framework that leverages generative artificial intelligence (AI) and knowledge graphs to surface dependencies in real time. By making it simpler to identify the root cause of issues, the goal is to enable DevOps teams to more proactively ensure the availability of services.
Based on the IBM watsonx platform, IBM Concert aggregates data from the tools and platforms that span the IT environment into a software-as-a-service (SaaS) offering that can be hosted on cloud platforms from IBM or Amazon Web Services (AWS), or deployed in an on-premises IT environment.
Vikram Murali, vice president for application modernization and IT automation for IBM Automation, said a lightweight approach to collecting data enables IBM Concert to provide a 360-degree view of the topology of increasingly complex IT environments. IBM built Concert to manage its own internal application environment before making the SaaS platform available to customers, he noted.
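IBM hasn't published Concert's internal schema, but the knowledge-graph idea itself is straightforward: model services and infrastructure as nodes, dependencies as directed edges, and walk the graph from failing services toward candidate root causes. Below is a minimal, purely illustrative sketch in Python using networkx; the node names, health signals and graph shape are assumptions for the example, not Concert's actual data model.

```python
# Illustrative sketch of dependency-graph root-cause analysis.
# Node names and health states are invented for the example and do
# not reflect IBM Concert's actual data model.
import networkx as nx

# Directed edge A -> B means "A depends on B".
deps = nx.DiGraph()
deps.add_edges_from([
    ("checkout-ui", "orders-api"),
    ("orders-api", "payments-api"),
    ("orders-api", "postgres"),
    ("payments-api", "postgres"),
])

# Health signals as observed by monitoring (assumed input).
unhealthy = {"checkout-ui", "orders-api", "payments-api", "postgres"}

def root_causes(graph: nx.DiGraph, failing: set[str]) -> set[str]:
    """A failing node is a candidate root cause if nothing it
    depends on is also failing."""
    return {
        node for node in failing
        if not any(dep in failing for dep in graph.successors(node))
    }

print(root_causes(deps, unhealthy))  # {'postgres'}
```

The point of the traversal is that when an entire chain of services degrades at once, the graph collapses the alert storm down to the one node with no failing dependencies of its own.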
Initially, IBM Concert is optimized for use cases such as application risk management and application compliance management, which help DevSecOps teams identify and prioritize critical vulnerabilities. In the months ahead, IBM plans to expand the reach of IBM Concert to address, for example, cost management.
IBM has been making a case for an AI portfolio based on the Granite family of large language models (LLMs) it has been developing for several years. IBM Granite code models range from 3B to 34B parameters and come in base and instruction-following variants. Testing by IBM on benchmarks including HumanEvalPack, HumanEvalPlus and the reasoning benchmark GSM8K showed Granite code models perform well on code synthesis, fixing, explanation, editing and translation across most major programming languages, including Python, JavaScript, Java, Go, C++ and Rust.
The 20-billion-parameter Granite base code model was used to train IBM watsonx Code Assistant (WCA) and also underpins watsonx Code Assistant for Z, which is tuned to help modernize COBOL applications, as well as a capability for generating SQL code via a natural language interface.
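The Granite code models are openly available on Hugging Face, so one quick way to try an instruction-following variant is to prompt it through the transformers library. A minimal sketch follows; the checkpoint ID is an assumption (check the ibm-granite organization on Hugging Face for current names and sizes), and the prompt is just an example.

```python
# Minimal sketch: prompting a Granite code instruct model via transformers.
# The checkpoint ID below is an assumption; pick a variant that fits
# the available hardware from the ibm-granite org on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-8b-code-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "Write a Python function that parses ISO 8601 dates."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```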
It’s not clear that any one LLM exceeds the capabilities of all others, but it is apparent that some LLMs are better suited to specific use cases than others. In general, the more parameters an LLM has, the more expensive it is to run in terms of the IT infrastructure required. In time, most enterprise IT organizations will find themselves invoking a mix of large, medium and small language models optimized to automate tasks across a wide range of domains.
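What that mix looks like in practice is still taking shape, but one common pattern is a router that sends routine prompts to a small, cheap model and escalates harder ones to a larger one. A hypothetical sketch, in which the model names, cost figures and complexity heuristic are all invented for illustration:

```python
# Hypothetical sketch of routing requests across model size tiers.
# Model names, costs and the complexity heuristic are illustrative only.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    max_complexity: int        # highest task complexity this tier handles
    cost_per_1k_tokens: float

TIERS = [
    Model("small-3b",   max_complexity=3,  cost_per_1k_tokens=0.0002),
    Model("medium-20b", max_complexity=7,  cost_per_1k_tokens=0.002),
    Model("large-70b",  max_complexity=10, cost_per_1k_tokens=0.01),
]

def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a real classifier: longer,
    multi-step prompts score higher."""
    score = min(10, len(prompt) // 200 + prompt.count("\n") // 5)
    return max(1, score)

def route(prompt: str) -> Model:
    """Pick the cheapest tier whose ceiling covers the task."""
    complexity = estimate_complexity(prompt)
    for tier in TIERS:
        if complexity <= tier.max_complexity:
            return tier
    return TIERS[-1]

print(route("Summarize this commit message.").name)  # small-3b
```

The economics are the design driver here: a prompt that a 3B-parameter model can handle costs a fraction of what the same prompt would cost against a 70B-parameter model, so the router only pays for scale when the task demands it.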
The pace at which applications will be built and deployed in the next several years threatens to overwhelm DevOps teams. Developers are making greater use of AI tools to write code faster than ever, and DevOps teams will need platforms infused with AI capabilities to keep pace. The challenge is the gap between when DevOps teams gain access to those tools and the increased volume of code already flowing through the pipelines they use to manage code bases.
One way or another, machines will be playing a larger role not only in writing code but also in managing it.