[Question]: multiple worrying inconsistencies with openai apis and docs #17066
Comments
Hey @xucian! I'm here to assist you with any bugs, questions, or contribution inquiries. Let's work together to resolve your issue. The issues you're encountering seem to stem from a few misunderstandings and potential gaps in the LlamaIndex implementation regarding OpenAI's structured outputs and the use of the
Given these points, it might be beneficial to review the LlamaIndex code handling these parameters and consider contributing to the documentation or codebase if you identify specific areas for improvement.
Looking into how to best clean this up, totally get the confusion. FYI you would have gotten results you wanted with Beyond that, tldr
Question Validation
Question
Basically, I was just confused about `strict=True` not being included with structured outputs with OpenAI, but I've found many other inconsistencies while writing this.
I see structured function calling (docs) needs `strict: True`.
Is this a bug, or am I missing something?
I also think there should be an error for structured outputs if the Pydantic models don't have the setting that's required for `"additionalProperties": false` to be added to the JSON schema, which is required to enable structured outputs.
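For context, OpenAI's strict structured outputs require every object node in the schema to carry `"additionalProperties": false` and list all of its properties as required. A minimal stdlib sketch of a hypothetical helper (`make_strict` is my name, not a LlamaIndex or OpenAI API) that patches an arbitrary schema accordingly:

```python
# Hypothetical helper, not LlamaIndex code: recursively close off a JSON
# schema so it satisfies OpenAI's strict structured-output requirements.

def make_strict(schema: dict) -> dict:
    """Return a copy of `schema` where every object node sets
    additionalProperties: False and marks all properties required."""
    patched = dict(schema)
    if patched.get("type") == "object":
        props = patched.get("properties", {})
        patched["additionalProperties"] = False
        patched["required"] = list(props)
        patched["properties"] = {k: make_strict(v) for k, v in props.items()}
    elif patched.get("type") == "array" and "items" in patched:
        patched["items"] = make_strict(patched["items"])
    return patched

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
}
strict_schema = make_strict(schema)
```

An error (or a patch step like this) at schema-build time would surface the problem much earlier than a rejected API call.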
I just recently discovered that I might not need function calling at all and `response_format` is enough (the frequency of function-calling mentions in the docs is pretty annoying imo, as most people are looking for a basic structured response).
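To illustrate the point: with the raw Chat Completions API, a plain structured response only needs `response_format`, no tools at all. A sketch of the request body (dict only, no network call; the model name and schema are illustrative assumptions):

```python
import json

# Sketch of a Chat Completions request body using OpenAI's native
# structured outputs (`response_format`), with no function calling.
# Model name and schema contents are illustrative assumptions.
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Extract the user info."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "user_info",
            "strict": True,  # the flag this issue is about
            "schema": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
                "additionalProperties": False,
            },
        },
    },
}
body = json.dumps(payload)  # what should actually go over the wire
```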
While writing this I discovered OpenAI's `strict` param. I set it to true, and when inspecting the call stack, `strict: True` is still not added. I imagine it was supposed to enter this function, but my debugger doesn't hit it. This is how I'm constructing the LLM:
Just while writing this, I've seen:
but `OPENAI` here is not marked as deprecated. And again while writing this, LlamaIndex's docs appear to contradict OpenAI's docs and actually suggest always using function calling where available.
At this point my conclusion is that LlamaIndex needs a docs update and some new issues created here on GitHub.
It actually took me around 1-2h of research, so it's not like I didn't do anything, but I don't have the necessary energy left to create some PRs (I'm also after a 4h-sleep night and pretty much regretting having the pretension to expect LlamaIndex would let me move and ship fast).
In other words, the (apparently arbitrary) preference for function calling by default made me waste 2 days of an already extremely tight schedule during my launch week.
Moving on, I then tried `pydantic_program_mode=PydanticProgramMode.LLM`, hoping this one would take care of adding the `response_format` and `strict: True` to the OpenAI API call. And guess what: it doesn't happen.
Next, I searched the repo for OpenAI's `response_format` param, and... no references whatsoever. I don't even know what to say; something just doesn't add up. So how come I have to write double or triple the amount of code to generate a structured output compared to a native API call through OpenAI's SDK? This is how I have to call the thing (that now works):