-
Notifications
You must be signed in to change notification settings - Fork 1.8k
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
fix (ai): Accept name
on CoreMessage, pass to providers
#2199
base: main
Are you sure you want to change the base?
Conversation
Adds an optional `name` field to the relevant message types in the core package, provider SDK, and the OpenAI provider. A review of the current providers found no others that accept a `name` field, so no other providers are modified. OpenAI's API is leading the way here, whether for multi-user or multi-agent interactions, LLM APIs are likely to accept named participants beyond 'user', 'assistant', etc. As described in vercel#2198, the `name` field may be used with OpenAI's API to provide additional context for system and assistant messages, e.g.: for guardrails and reducing the risks of prompt injection. Fixes vercel#2198
@lgrammel is it possible to get this reviewed? |
Right now, this would be a very OpenAI specific feature. No other provider has implemented message names after they have been around almost a year and I have not heard any requests for this feature (outside of this PR). I want to wait and see if other providers adopt it first or if there is more demand, to prevent adding idiosyncratic functionality to the AI SDK API. |
@lgrammel totally understand. Would you accept a PR with any of these three styles to allow a vendor-specific option without an API contract that it will always be supported? TypeScript definitions// CSS Prefix Style
type CoreMessageA = {
[prefixedExtension: `-${string}-${string}`]: any;
}
// Nested vendor, option style:
type CoreMessageB = {
extensions: {
[vendor: string]: {
[option: string]: any;
}
}
}
// Flat, optional keys (no vendor/provider prefix):
type CoreMessageC = {
extensions: {
[option: string]: any;
}
} // CSS Prefix Style
const foo: CoreMessageA = {
"-openai-name": "copilot"
}
// Nested vendor, option style:
const bar: CoreMessageB = {
extensions: {
openai: {
name: "copilot"
}
}
};
// Flat, optional keys (no vendor/provider prefix):
const baz: CoreMessageC = {
extensions: {
name: "copilot"
}
}; |
@AaronFriel something like this is needed for sure. type safety is an issue. in the past i used generics on the models / providers for this, but it makes the system very complex and brittle. i'm leaning towards option c even tho it compromises type safety. |
fyi i'm planning to add this support to the language model v2 spec |
we now support provider specific extensions on messages that could be used for names: #2697 |
@lgrammel I rebased the PR, and found a bug, curious how you want to proceed: This works as expected: const result = await convertToLanguageModelPrompt({
prompt: {
type: 'messages',
prompt: undefined,
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'hello, world!',
experimental_providerMetadata: {
[providerName]: {
name: 'foo',
[key]: value,
},
},
},
],
},
],
},
modelSupportsImageUrls: undefined,
}); However, this does not, but it does type check: const result = await convertToLanguageModelPrompt({
prompt: {
type: 'messages',
prompt: undefined,
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'hello, world!',
},
],
experimental_providerMetadata: {
[providerName]: {
name: 'foo',
[key]: value,
},
},
},
],
},
modelSupportsImageUrls: undefined,
}); In the second example, the provider metadata does not flow through and the result. Instead, this is the passing assertion: expect(result).toEqual([
{
role: 'user',
content: [
{
type: 'text',
text: 'hello, world!',
},
],
// No provider metadata!
},
] satisfies LanguageModelV1Prompt); It looks like the issue is that ai/packages/ai/core/prompt/message.ts Lines 48 to 58 in 15791b0
The ai/packages/ai/core/prompt/message.ts Line 77 in 15791b0
Which can then declare its own experimental provider metadata: ai/packages/ai/core/prompt/content-part.ts Lines 11 to 25 in 15791b0
Should:
|
Maybe I'm missing something, but for me it seems to work as expected: https://github.com/vercel/ai/pull/2776/files The provider metadata on the message is passed through on the message (you can ignore the content part metadata for the name feature, it's meant for extensions that work on the content part level) |
Adds an optional
name
field to the relevant message types in the core package, provider SDK, and the OpenAI provider. A review of the current providers found no others that accept aname
field, so no other providers are modified.OpenAI's API is leading the way here, whether for multi-user or multi-agent interactions, LLM APIs are likely to accept named participants beyond 'user', 'assistant', etc.
As described in #2198, the
name
field may be used with OpenAI's API to provide additional context for system and assistant messages, e.g.: for guardrails and reducing the risks of prompt injection.Fixes #2198