The langchain-nvidia-ai-endpoints package contains LangChain integrations for chat models and embeddings powered by NVIDIA AI Foundation Models and hosted on the NVIDIA API Catalog.
NVIDIA AI Foundation Models are community- and NVIDIA-built models that are optimized to deliver the best performance on NVIDIA-accelerated infrastructure. You can use the API to query live endpoints that are available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment, or you can download models from the NVIDIA API Catalog with NVIDIA NIM, which is included with the NVIDIA AI Enterprise license. The ability to run models on-premises gives your enterprise ownership of your customizations and full control of your IP and AI application.
NIM microservices are packaged as container images on a per model/model family basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIM microservices are containers that provide interactive APIs for running inference on an AI Model.
Use this documentation to learn how to install the langchain-nvidia-ai-endpoints package and use it for common tasks with text-generation and embedding models.
Install the package
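The package is published on PyPI and can be installed with pip. For example:

```bash
pip install -U langchain-nvidia-ai-endpoints
```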
Access the NVIDIA API Catalog
To get access to the NVIDIA API Catalog, do the following:
- Create a free account on the NVIDIA API Catalog and log in.
- Click your profile icon, and then click API Keys. The API Keys page appears.
- Click Generate API Key. The Generate API Key window appears.
- Click Generate Key. You should see API Key Granted, and your key appears.
- Copy and save the key as NVIDIA_API_KEY.
- To verify your key, use the following code.
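A minimal sketch of the verification step, assuming the key is exported as NVIDIA_API_KEY; the model name below is an example hosted model, and any chat model from the catalog works:

```python
import getpass
import os

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Prompt for the key if it is not already set in the environment.
# API Catalog keys start with "nvapi-".
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")

# A quick round trip to a hosted endpoint confirms the key is valid.
llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")
print(llm.invoke("Hello!").content)
```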
Work with the API Catalog
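As a sketch of working with the hosted endpoints, the following example queries a chat model and an embedding model; the model names are examples, and ChatNVIDIA.get_available_models() lists what is currently offered:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings

# Chat completion against a hosted endpoint, streaming the response.
llm = ChatNVIDIA(model="meta/llama-3.1-70b-instruct", temperature=0.2)
for chunk in llm.stream("Write a limerick about GPU computing."):
    print(chunk.content, end="")

# Embeddings for retrieval workflows, using an example retriever-embedding model.
embedder = NVIDIAEmbeddings(model="nvidia/nv-embedqa-e5-v5")
print(len(embedder.embed_query("What is NVIDIA NIM?")))
```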
Self-host with NVIDIA NIM Microservices
When you are ready to deploy your AI application, you can self-host models with NVIDIA NIM. For more information, refer to NVIDIA NIM Microservices. The following code connects to locally hosted NIM microservices.
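A sketch of those connections, assuming NIM containers are already running locally; the base_url values and ports are examples and should match your deployment:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank

# Connect to a chat NIM microservice running at localhost:8000.
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")

# Connect to an embedding NIM microservice running at localhost:8080.
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")

# Connect to a reranking NIM microservice running at localhost:2016.
ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
```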
Related topics
- langchain-nvidia-ai-endpoints package README
- Overview of NVIDIA NIM for Large Language Models (LLMs)
- Overview of NeMo Retriever Embedding NIM
- Overview of NeMo Retriever Reranking NIM
- ChatNVIDIA Model
- NVIDIAEmbeddings Model for RAG Workflows