NVIDIA Cloud Native Technologies
About NVIDIA Cloud Native Technologies
NVIDIA cloud-native technologies enable developers to build and run GPU-accelerated containers with Docker, Podman, and Kubernetes.
Use the NVIDIA GPU Operator to automate the management of all NVIDIA software components needed to provision GPUs in Kubernetes, such as the drivers, the NVIDIA Container Toolkit, and the Kubernetes device plugin.
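The Operator is typically installed from NVIDIA's Helm repository. A minimal sketch, assuming Helm is installed and your kubeconfig points at the cluster; the `gpu-operator` namespace name is a common convention, not a requirement:

```shell
# Add NVIDIA's Helm repository and refresh the local index
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace and wait for rollout
helm install --wait gpu-operator nvidia/gpu-operator \
    --namespace gpu-operator --create-namespace
```

These commands require a running Kubernetes cluster; the Operator's default values can be overridden with `--set` flags to match the cluster's driver and runtime configuration.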
Use NVIDIA GPUs with Red Hat OpenShift, Red Hat's security-focused, enterprise-grade Kubernetes platform.
Use the NVIDIA NIM Operator to operate and manage the lifecycle of the software components and services for running LLM, embedding, and other NIM microservices and models in Kubernetes.
Use the NVIDIA Network Operator to provision and manage NVIDIA networking resources in a Kubernetes cluster. The Operator installs host networking software to provide high-speed network connectivity.
Partners document how to use the NVIDIA GPU Operator with their Kubernetes platforms.
End-user support is provided by the partner and not NVIDIA.
Use Multi-Instance GPU (MIG) to partition a GPU into isolated instances for workloads that do not fully saturate the GPU's compute capacity.
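With MIG enabled, a partition is requested like any other extended resource. A sketch of a pod requesting a single MIG slice; the `nvidia.com/mig-1g.5gb` resource name assumes the cluster's MIG strategy exposes per-profile resources, and the image tag is illustrative (any CUDA base image works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-example
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi", "-L"]   # list the GPU instances visible to the container
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1    # one 1g.5gb MIG partition
```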
Gather GPU telemetry, such as utilization and memory usage, for use with monitoring solutions such as Prometheus.
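For illustration, NVIDIA's DCGM Exporter publishes GPU metrics under the `DCGM_FI_` prefix. A PromQL query for average GPU utilization per node might look like the following; the metric and label names assume a default DCGM Exporter configuration:

```
# Average GPU utilization per node over the last five minutes
avg by (Hostname) (avg_over_time(DCGM_FI_DEV_GPU_UTIL[5m]))
```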
The NVIDIA device plugin for Kubernetes provides the following features:
- Exposes the number of GPUs on each node of your cluster.
- Keeps track of the health of your GPUs.
- Runs GPU-enabled containers in your Kubernetes cluster.
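Once the device plugin advertises the `nvidia.com/gpu` resource on a node, a pod requests GPUs through standard resource limits. A minimal sketch; the image tag is an assumption, any CUDA base image works:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]   # verify the GPU is visible inside the container
    resources:
      limits:
        nvidia.com/gpu: 1     # request one whole GPU
```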
NVIDIA GPU Feature Discovery for Kubernetes automatically generates labels for the set of GPUs on a node.
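For example, GPU Feature Discovery applies node labels like the following; the label keys are real GFD labels, while the values shown here are illustrative:

```
nvidia.com/gpu.count=2
nvidia.com/gpu.product=A100-SXM4-40GB
nvidia.com/gpu.memory=40960
nvidia.com/cuda.driver.major=535
```

Workloads can then target specific hardware with a `nodeSelector` or node affinity on these labels.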
Use the NVIDIA Container Toolkit to build and run GPU-accelerated containers with its container runtime library and utilities.
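With the toolkit installed and Docker configured to use its runtime, GPU access is requested with the `--gpus` flag. A sketch, assuming a GPU host; the image tag is illustrative:

```shell
# Run nvidia-smi inside a container with access to all GPUs on the host
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```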
NVIDIA GPUs bring accelerated computing and artificial intelligence to the edge.
Use NVIDIA GPUs with Google Anthos in hybrid and multi-cloud environments.
NVIDIA Cloud Native Stack is a collection of software to run cloud-native workloads on NVIDIA GPUs. The GitHub repository provides installation guides to get you started.
Use a service mesh for service-to-service communication in a microservices architecture.