# Embeddings models in Langflow
Embeddings models convert text into numerical vectors. These embeddings capture the semantic meaning of the input text, allowing LLMs to reason about context.
Refer to your specific component's documentation for more information on parameters.
In this example of a document ingestion pipeline, the OpenAI embeddings model is connected to a vector database. The component converts the text chunks into vectors and stores them in the vector database. The vectorized data can be used to inform AI workloads like chatbots, similarity searches, and agents.
This embeddings component uses an OpenAI API key for authentication. Refer to your specific embeddings component's documentation for more information on authentication.
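The ingestion flow described above can be sketched in a few lines. This is an illustration only: the stub embedding function and the plain dict stand in for a real embeddings component and a real vector database, and none of the names below are Langflow APIs.

```python
# Sketch of a document ingestion pipeline: embed each text chunk,
# then store the resulting vector keyed by the chunk.
import hashlib


def embed_text(chunk: str) -> list[float]:
    """Stub embedder: derives a small deterministic vector from the text.

    A real component would call a model such as OpenAI's
    text-embedding-3-small instead.
    """
    digest = hashlib.sha256(chunk.encode("utf-8")).digest()
    # Map the first 8 bytes onto floats in [0, 1) to mimic a vector.
    return [b / 256 for b in digest[:8]]


def ingest(chunks: list[str]) -> dict[str, list[float]]:
    """Embed each chunk and store it in an in-memory 'vector store'."""
    vector_store: dict[str, list[float]] = {}
    for chunk in chunks:
        vector_store[chunk] = embed_text(chunk)
    return vector_store


store = ingest(["first chunk", "second chunk"])
```

A downstream workload (a chatbot, similarity search, or agent) would then query the store by embedding the query text the same way and comparing vectors.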

## AI/ML API

This component generates embeddings using the AI/ML API.

**Inputs**

| Name | Type | Description |
|---|---|---|
| model_name | String | The name of the AI/ML embedding model to use. |
| aiml_api_key | SecretString | The API key for authenticating with the AI/ML service. |

**Outputs**

| Name | Type | Description |
|---|---|---|
| embeddings | Embeddings | An instance of `AIMLEmbeddingsImpl` for generating embeddings. |
## Amazon Bedrock

This component loads embedding models from Amazon Bedrock.

**Inputs**

| Name | Type | Description |
|---|---|---|
| credentials_profile_name | String | The name of the AWS credentials profile in `~/.aws/credentials` or `~/.aws/config` that has access keys or role information. |
| model_id | String | The ID of the model to call, such as `amazon.titan-embed-text-v1`. This is equivalent to the `modelId` property in the `list-foundation-models` API. |
| endpoint_url | String | A URL to set a specific service endpoint other than the default AWS endpoint. |
| region_name | String | The AWS region to use, such as `us-west-2`. Falls back to the `AWS_DEFAULT_REGION` environment variable or the region specified in `~/.aws/config` if not provided. |

**Outputs**

| Name | Type | Description |
|---|---|---|
| embeddings | Embeddings | An instance for generating embeddings using Amazon Bedrock. |
## Astra DB vectorize

Connect this component to the Embeddings port of the Astra DB vector store component to generate embeddings.

This component requires that your Astra DB database has a collection that uses a vectorize embedding provider integration. For more information and instructions, see Embedding Generation.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| provider | Embedding Provider | The embedding provider to use. |
| model_name | Model Name | The embedding model to use. |
| authentication | Authentication | The name of the API key in Astra that stores your vectorize embedding provider credentials. Not required if using an Astra-hosted embedding provider. |
| provider_api_key | Provider API Key | As an alternative to `authentication`, directly provide your embedding provider credentials. |
| model_parameters | Model Parameters | Additional model parameters. |

**Outputs**

| Name | Type | Description |
|---|---|---|
| embeddings | Embeddings | An instance for generating embeddings using Astra vectorize. |
## Azure OpenAI Embeddings

This component generates embeddings using Azure OpenAI models.

**Inputs**

| Name | Type | Description |
|---|---|---|
| Model | String | The name of the model to use. Default: `text-embedding-3-small`. |
| Azure Endpoint | String | Your Azure endpoint, including the resource, such as `https://example-resource.azure.openai.com/`. |
| Deployment Name | String | The name of the deployment. |
| API Version | String | The API version to use. Options include various dated versions. |
| API Key | String | The API key to access the Azure OpenAI service. |

**Outputs**

| Name | Type | Description |
|---|---|---|
| embeddings | Embeddings | An instance for generating embeddings using Azure OpenAI. |
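The endpoint, deployment, and API version parameters above map onto the Azure OpenAI REST path for embeddings. The sketch below only assembles the request URL and headers, it sends nothing; the resource, deployment, and version values are placeholders to substitute with your own.

```python
# Sketch of how an Azure OpenAI embeddings request is addressed.
# All three values below are placeholders, not working credentials.
azure_endpoint = "https://example-resource.azure.openai.com"
deployment_name = "my-embedding-deployment"  # hypothetical deployment name
api_version = "2024-02-01"  # choose a version your resource supports

url = (
    f"{azure_endpoint}/openai/deployments/{deployment_name}"
    f"/embeddings?api-version={api_version}"
)
headers = {
    "api-key": "<your-api-key>",  # the API Key parameter above
    "Content-Type": "application/json",
}
```

Note that Azure routes requests by deployment name rather than by model name, which is why Deployment Name is a separate parameter from Model.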
## Cloudflare Workers AI Embeddings

This component generates embeddings using Cloudflare Workers AI models.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| account_id | Cloudflare account ID | Your Cloudflare account ID. |
| api_token | Cloudflare API token | A Cloudflare API token. |
| model_name | Model Name | The model to use. See Cloudflare's list of supported models. |
| strip_new_lines | Strip New Lines | Whether to strip new lines from the input text. |
| batch_size | Batch Size | The number of texts to embed in each batch. |
| api_base_url | Cloudflare API base URL | The base URL for the Cloudflare API. |
| headers | Headers | Additional request headers. |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| embeddings | Embeddings | An instance for generating embeddings using Cloudflare Workers AI. |
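The `strip_new_lines` and `batch_size` parameters describe common client-side preprocessing before texts are sent to the API. A rough sketch of what that preprocessing typically looks like (the function name is illustrative, not part of the component):

```python
def preprocess(texts: list[str], strip_new_lines: bool = True,
               batch_size: int = 50) -> list[list[str]]:
    """Optionally strip newlines, then split texts into request batches."""
    if strip_new_lines:
        texts = [t.replace("\n", " ") for t in texts]
    # Slice the list into chunks of at most batch_size items each.
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]


batches = preprocess(["line one\nline two", "short"] * 3, batch_size=2)
```

Each inner list would become one embedding request, which keeps individual requests within the provider's payload limits.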
## Cohere

This component loads embedding models from Cohere.

**Inputs**

| Name | Type | Description |
|---|---|---|
| cohere_api_key | String | The API key required to authenticate with the Cohere service. |
| model | String | The language model used for embedding text documents and performing queries. Default: `embed-english-v2.0`. |
| truncate | Boolean | Whether to truncate the input text to fit within the model's constraints. Default: `False`. |

**Outputs**

| Name | Type | Description |
|---|---|---|
| embeddings | Embeddings | An instance for generating embeddings using Cohere. |
## Embedding Similarity

This component computes selected forms of similarity between two embedding vectors.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| embedding_vectors | Embedding Vectors | A list containing exactly two data objects with embedding vectors to compare. |
| similarity_metric | Similarity Metric | The similarity metric to use. Options: "Cosine Similarity", "Euclidean Distance", "Manhattan Distance". |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| similarity_data | Similarity Data | A data object containing the computed similarity score and additional information. |
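The three metric options can be expressed with a few lines of standard-library math. This sketch mirrors the computations the metrics name; note that Euclidean and Manhattan are distances, so smaller values mean more similar vectors, while a cosine similarity closer to 1 means more similar.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of a and b divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


def euclidean_distance(a: list[float], b: list[float]) -> float:
    """Straight-line distance between the two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def manhattan_distance(a: list[float], b: list[float]) -> float:
    """Sum of absolute per-dimension differences."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

For example, identical directions give a cosine similarity of 1, and orthogonal vectors give 0.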
## Google Generative AI Embeddings

This component connects to Google's generative AI embeddings service using the `GoogleGenerativeAIEmbeddings` class from the `langchain-google-genai` package.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| api_key | API Key | The secret API key for accessing Google's generative AI service. Required. |
| model_name | Model Name | The name of the embedding model to use. Default: `models/text-embedding-004`. |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| embeddings | Embeddings | The built `GoogleGenerativeAIEmbeddings` object. |
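As a rough illustration of the call underneath the `langchain-google-genai` wrapper, Google's Generative Language API takes an `embedContent` request shaped like the payload below. The payload is only assembled here, never sent, and the endpoint shape should be verified against Google's current documentation before use.

```python
import json

model_name = "models/text-embedding-004"
# Endpoint shape for the Generative Language API (verify against Google's
# docs; in a real request the API key is supplied as a query parameter or
# header, which is omitted here).
url = f"https://generativelanguage.googleapis.com/v1beta/{model_name}:embedContent"
payload = json.dumps({
    "model": model_name,
    "content": {"parts": [{"text": "Hello, world"}]},
})
```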
## Hugging Face Embeddings

This component loads embedding models from Hugging Face.

Use this component to generate embeddings with locally downloaded Hugging Face models. Ensure you have sufficient computational resources to run them.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| Cache Folder | Cache Folder | The folder path in which to cache Hugging Face models. |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process. |
| Model Kwargs | Model Arguments | Additional arguments for the model. |
| Model Name | Model Name | The name of the Hugging Face model to use. |
| Multi Process | Multi-Process | Whether to use multiple processes. |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| embeddings | Embeddings | The generated embeddings. |
## Hugging Face Inference API Embeddings

This component generates embeddings using Hugging Face Inference API models. Use it to create embeddings with Hugging Face's hosted models.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| API Key | API Key | The API key for accessing the Hugging Face Inference API. |
| API URL | API URL | The URL of the Hugging Face Inference API. |
| Model Name | Model Name | The name of the model to use for embeddings. |
| Cache Folder | Cache Folder | The folder path in which to cache Hugging Face models. |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process. |
| Model Kwargs | Model Arguments | Additional arguments for the model. |
| Multi Process | Multi-Process | Whether to use multiple processes. |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| embeddings | Embeddings | The generated embeddings. |
## LM Studio Embeddings

This component generates embeddings using LM Studio models.

**Inputs**

| Name | Display Name | Info |
|---|---|---|
| model | Model | The LM Studio model to use for generating embeddings. |
| base_url | LM Studio Base URL | The base URL for the LM Studio API. |
| api_key | LM Studio API Key | The API key for authenticating with LM Studio. |
| temperature | Model Temperature | The temperature setting for the model. |

**Outputs**

| Name | Display Name | Info |
|---|---|---|
| embeddings | Embeddings | The generated embeddings. |
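LM Studio serves an OpenAI-compatible API (by default at `http://localhost:1234/v1`), so an embeddings call is a plain JSON POST. This sketch only assembles the request; the model name is a placeholder for whatever model you have loaded in LM Studio.

```python
import json

base_url = "http://localhost:1234/v1"  # LM Studio's default server address
url = f"{base_url}/embeddings"
payload = json.dumps({
    # Placeholder: use the identifier of a model loaded in LM Studio.
    "model": "nomic-embed-text-v1.5",
    "input": ["text to embed"],
})
headers = {
    # LM Studio typically does not enforce a key; any value works by default.
    "Authorization": "Bearer lm-studio",
    "Content-Type": "application/json",
}
```

Because the API is OpenAI-compatible, the `base_url` parameter is usually the only change needed compared to calling OpenAI directly.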
## MistralAI

This component generates embeddings using MistralAI models.