
Embedding Models

Embedding models convert text into numerical vectors. These vectors can be used for tasks such as similarity search, clustering, and classification.
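For example, similarity search ranks stored vectors by how close they are to a query vector. A minimal, library-free sketch (the toy vectors below stand in for real embedding model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in practice these come from an embedding model.
docs = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.2, 0.1],
    "stocks": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query vector (most similar first).
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
```

Here the query vector is closest to "cats" and "dogs" and far from "stocks", so those rank first.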

AI/ML

This component generates embeddings using the AI/ML API.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| model_name | String | The name of the AI/ML embedding model to use |
| aiml_api_key | SecretString | API key for authenticating with the AI/ML service |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance of AIMLEmbeddingsImpl for generating embeddings |

Amazon Bedrock Embeddings

This component loads embedding models from Amazon Bedrock.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| credentials_profile_name | String | Name of the AWS credentials profile in ~/.aws/credentials or ~/.aws/config, which has access keys or role information |
| model_id | String | ID of the model to call, e.g., amazon.titan-embed-text-v1. This is equivalent to the modelId property in the list-foundation-models API |
| endpoint_url | String | URL to set a specific service endpoint other than the default AWS endpoint |
| region_name | String | AWS region to use, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using Amazon Bedrock |

Astra DB vectorize

Connect this component to the Embeddings port of the Astra DB vector store component to generate embeddings.

This component requires that your Astra DB database has a collection that uses a vectorize embedding provider integration. For more information and instructions, see Embedding Generation.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| provider | Embedding Provider | The embedding provider to use |
| model_name | Model Name | The embedding model to use |
| authentication | Authentication | The name of the API key in Astra that stores your vectorize embedding provider credentials. (Not required if using an Astra-hosted embedding provider.) |
| provider_api_key | Provider API Key | As an alternative to authentication, directly provide your embedding provider credentials. |
| model_parameters | Model Parameters | Additional model parameters |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using Astra vectorize |

Azure OpenAI Embeddings

This component generates embeddings using Azure OpenAI models.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| Model | String | Name of the model to use (default: text-embedding-3-small) |
| Azure Endpoint | String | Your Azure endpoint, including the resource, e.g. https://example-resource.azure.openai.com/ |
| Deployment Name | String | The name of the deployment |
| API Version | String | The API version to use; Azure OpenAI API versions are date-based, e.g. 2023-05-15 |
| API Key | String | The API key to access the Azure OpenAI service |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using Azure OpenAI |

Cohere Embeddings

This component loads embedding models from Cohere.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| cohere_api_key | String | API key required to authenticate with the Cohere service |
| model | String | Language model used for embedding text documents and performing queries (default: embed-english-v2.0) |
| truncate | Boolean | Whether to truncate the input text to fit within the model's constraints (default: False) |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using Cohere |

Embedding similarity

This component computes selected forms of similarity between two embedding vectors.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| embedding_vectors | Embedding Vectors | A list containing exactly two data objects with embedding vectors to compare. |
| similarity_metric | Similarity Metric | Select the similarity metric to use. Options: "Cosine Similarity", "Euclidean Distance", "Manhattan Distance". |

Outputs

| Name | Display Name | Info |
| --- | --- | --- |
| similarity_data | Similarity Data | Data object containing the computed similarity score and additional information. |
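The three metrics differ in interpretation: cosine similarity is higher for more similar vectors, while the two distances are lower. A plain-Python sketch of each:

```python
import math

def cosine_similarity(a, b):
    # Higher means more similar (1.0 for vectors pointing the same direction).
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def euclidean_distance(a, b):
    # Straight-line distance; lower means more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan_distance(a, b):
    # Sum of per-dimension differences; lower means more similar.
    return sum(abs(x - y) for x, y in zip(a, b))

v1, v2 = [1.0, 2.0, 3.0], [1.0, 2.0, 5.0]
scores = {
    "Cosine Similarity": cosine_similarity(v1, v2),
    "Euclidean Distance": euclidean_distance(v1, v2),
    "Manhattan Distance": manhattan_distance(v1, v2),
}
```

For v1 and v2 above, both distances are 2.0 (they differ only in the last dimension), while the cosine similarity is close to 1 because the vectors point in similar directions.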

Google generative AI embeddings

This component connects to Google's generative AI embedding service using the GoogleGenerativeAIEmbeddings class from the langchain-google-genai package.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| api_key | API Key | Secret API key for accessing Google's generative AI service (required) |
| model_name | Model Name | Name of the embedding model to use (default: "models/text-embedding-004") |

Outputs

| Name | Display Name | Info |
| --- | --- | --- |
| embeddings | Embeddings | Built GoogleGenerativeAIEmbeddings object |

Hugging Face Embeddings

Note: This component is deprecated as of Langflow version 1.0.18. Use the Hugging Face Embeddings Inference API component instead.

This component loads embedding models from Hugging Face.

Use this component to generate embeddings using locally downloaded Hugging Face models. Ensure you have sufficient computational resources to run the models.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| Cache Folder | Cache Folder | Folder path to cache Hugging Face models |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process |
| Model Kwargs | Model Arguments | Additional arguments for the model |
| Model Name | Model Name | Name of the Hugging Face model to use |
| Multi Process | Multi-Process | Whether to use multiple processes |

Hugging Face Embeddings Inference API

This component generates embeddings using Hugging Face Inference API models.

Use this component to create embeddings with Hugging Face's hosted models. Ensure you have a valid Hugging Face API key.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| API Key | API Key | API key for accessing the Hugging Face Inference API |
| API URL | API URL | URL of the Hugging Face Inference API |
| Model Name | Model Name | Name of the model to use for embeddings |
| Cache Folder | Cache Folder | Folder path to cache Hugging Face models |
| Encode Kwargs | Encoding Arguments | Additional arguments for the encoding process |
| Model Kwargs | Model Arguments | Additional arguments for the model |
| Multi Process | Multi-Process | Whether to use multiple processes |

MistralAI

This component generates embeddings using MistralAI models.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| model | String | The MistralAI model to use (default: "mistral-embed") |
| mistral_api_key | SecretString | API key for authenticating with MistralAI |
| max_concurrent_requests | Integer | Maximum number of concurrent API requests (default: 64) |
| max_retries | Integer | Maximum number of retry attempts for failed requests (default: 5) |
| timeout | Integer | Request timeout in seconds (default: 120) |
| endpoint | String | Custom API endpoint URL (default: "https://api.mistral.ai/v1/") |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | MistralAIEmbeddings instance for generating embeddings |

NVIDIA

This component generates embeddings using NVIDIA models.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| model | String | The NVIDIA model to use for embeddings (e.g., nvidia/nv-embed-v1) |
| base_url | String | Base URL for the NVIDIA API (default: https://integrate.api.nvidia.com/v1) |
| nvidia_api_key | SecretString | API key for authenticating with NVIDIA's service |
| temperature | Float | Model temperature for embedding generation (default: 0.1) |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | NVIDIAEmbeddings instance for generating embeddings |

Ollama Embeddings

This component generates embeddings using Ollama models.

Parameters

Inputs

| Name | Type | Description |
| --- | --- | --- |
| Ollama Model | String | Name of the Ollama model to use (default: llama2) |
| Ollama Base URL | String | Base URL of the Ollama API (default: http://localhost:11434) |
| Model Temperature | Float | Temperature parameter for the model; adjusts the randomness in the generated embeddings |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using Ollama |
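Under the hood, the component talks to a locally running Ollama server. As a rough stdlib-only sketch of the same call over Ollama's HTTP API (assuming a server is reachable at the default base URL; the /api/embeddings endpoint accepts model and prompt fields and returns an embedding array):

```python
import json
import urllib.request

def build_embedding_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/embeddings endpoint."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def ollama_embed(prompt: str, model: str = "llama2",
                 base_url: str = "http://localhost:11434") -> list[float]:
    """POST a prompt to a local Ollama server and return the embedding vector."""
    req = urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=build_embedding_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

The network call itself is only illustrative here; without a running Ollama server, only the request payload can be exercised.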

OpenAI Embeddings

This component loads embedding models from OpenAI.

Parameters

Inputs

NameTypeDescription
OpenAI API KeyStringThe API key to use for accessing the OpenAI API
Default HeadersDictDefault headers for the HTTP requests
Default QueryNestedDictDefault query parameters for the HTTP requests
Allowed SpecialListSpecial tokens allowed for processing (default: [])
Disallowed SpecialListSpecial tokens disallowed for processing (default: ["all"])
Chunk SizeIntegerChunk size for processing (default: 1000)
ClientAnyHTTP client for making requests
DeploymentStringDeployment name for the model (default: text-embedding-3-small)
Embedding Context LengthIntegerLength of embedding context (default: 8191)
Max RetriesIntegerMaximum number of retries for failed requests (default: 6)
ModelStringName of the model to use (default: text-embedding-3-small)
Model KwargsNestedDictAdditional keyword arguments for the model
OpenAI API BaseStringBase URL of the OpenAI API
OpenAI API TypeStringType of the OpenAI API
OpenAI API VersionStringVersion of the OpenAI API
OpenAI OrganizationStringOrganization associated with the API key
OpenAI ProxyStringProxy server for the requests
Request TimeoutFloatTimeout for the HTTP requests
Show Progress BarBooleanWhether to show a progress bar for processing (default: False)
Skip EmptyBooleanWhether to skip empty inputs (default: False)
TikToken EnableBooleanWhether to enable TikToken (default: True)
TikToken Model NameStringName of the TikToken model

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using OpenAI |
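Of the inputs above, Chunk Size controls batching: texts are sent to the API in groups of at most chunk_size per request. The batching logic itself is simple; a sketch with a stand-in embedder (embed_batch is a hypothetical placeholder, not the OpenAI client API):

```python
from typing import Callable

def embed_in_chunks(
    texts: list[str],
    embed_batch: Callable[[list[str]], list[list[float]]],
    chunk_size: int = 1000,
) -> list[list[float]]:
    """Embed texts in batches of at most chunk_size, preserving input order."""
    vectors: list[list[float]] = []
    for start in range(0, len(texts), chunk_size):
        vectors.extend(embed_batch(texts[start:start + chunk_size]))
    return vectors

# Stand-in embedder for illustration: each text maps to a 1-dimensional vector.
fake_embed = lambda batch: [[float(len(t))] for t in batch]
result = embed_in_chunks(["a", "bb", "ccc"], fake_embed, chunk_size=2)
```

With chunk_size=2, the three texts are embedded in two batches while the output order matches the input order.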

Text embedder

This component generates embeddings for a given message using a specified embedding model.

Parameters

Inputs

| Name | Display Name | Info |
| --- | --- | --- |
| embedding_model | Embedding Model | The embedding model to use for generating embeddings. |
| message | Message | The message for which to generate embeddings. |

Outputs

| Name | Display Name | Info |
| --- | --- | --- |
| embeddings | Embedding Data | Data object containing the original text and its embedding vector. |
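Conceptually, the output pairs the input text with its vector. A sketch with a toy model (the field names here are illustrative, not the component's exact schema):

```python
def embed_message(text: str, embedding_model) -> dict:
    """Pair a message's text with its embedding vector (illustrative schema)."""
    return {"text": text, "embedding": embedding_model(text)}

# Toy stand-in for a real embedding model.
toy_model = lambda t: [float(ord(c)) for c in t[:3]]
record = embed_message("hello", toy_model)
```

Keeping the source text alongside the vector is what lets downstream components display or re-rank the original message after a similarity lookup.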

VertexAI Embeddings

This component wraps the Google Vertex AI Embeddings API.

Parameters

Inputs

NameTypeDescription
credentialsCredentialsThe default custom credentials to use
locationStringThe default location to use when making API calls (default: us-central1)
max_output_tokensIntegerToken limit determines the maximum amount of text output from one prompt (default: 128)
model_nameStringThe name of the Vertex AI large language model (default: text-bison)
projectStringThe default GCP project to use when making Vertex API calls
request_parallelismIntegerThe amount of parallelism allowed for requests issued to VertexAI models (default: 5)
temperatureFloatTunes the degree of randomness in text generations. Should be a non-negative value (default: 0)
top_kIntegerHow the model selects tokens for output, the next token is selected from the top k tokens (default: 40)
top_pFloatTokens are selected from the most probable to least until the sum of their probabilities exceeds the top p value (default: 0.95)
tuned_model_nameStringThe name of a tuned model. If provided, model_name is ignored
verboseBooleanThis parameter controls the level of detail in the output. When set to True, it prints internal states of the chain to help debug (default: False)

Outputs

| Name | Type | Description |
| --- | --- | --- |
| embeddings | Embeddings | An instance for generating embeddings using VertexAI |
