
Conversation

@pchalasani
Contributor

No description provided.

pchalasani and others added 3 commits August 18, 2025 08:11
- Add use_responses_api flag to OpenAIGPTConfig for opt-in routing
- Update LanguageModel.create factory to route based on flag
- Update ChatAgent isinstance checks to support both implementations
- Add supports_strict_tools and supports_json_schema properties
- Enable OpenAIResponses to accept OpenAIGPTConfig
- Add comprehensive routing tests
- Update documentation with routing examples

The Responses API can now be used as a drop-in replacement by setting
use_responses_api=True in OpenAIGPTConfig. This provides seamless
migration while maintaining full backward compatibility.
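A minimal usage sketch of the opt-in flag (import paths assumed from Langroid's module layout; the model name is illustrative):

    from langroid.language_models.base import LanguageModel
    from langroid.language_models.openai_gpt import OpenAIGPTConfig

    # Opt in to the Responses API on the existing config class;
    # all other fields keep their defaults.
    config = OpenAIGPTConfig(
        chat_model="gpt-4o",     # illustrative model name
        use_responses_api=True,  # new flag: route to OpenAIResponses
    )

    # The factory inspects the flag and returns the Responses-backed
    # implementation instead of the default OpenAIGPT.
    llm = LanguageModel.create(config)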
…ing and documentation (#931)

* fix: Correct OpenAI Responses API format for images, tools, and function calls

- Fix image format: use input_image type with image_url field (was: image/image)
- Fix tool result format: use function_call_output type with call_id (was: tool_result/tool_call_id)
- Fix function call parsing: look for function_call type in output (was: tool_call)
- Fix function call structure: use call_id, name, arguments directly (not nested under function)
- Add unit tests for tool result format conversion
- Update vision test to verify correct image format
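For reference, the corrected shapes look like this (a sketch of the Responses API items the bullets above describe; the values are illustrative):

    # Image input: type "input_image" with an "image_url" field.
    image_message = {
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/cat.png"},
        ],
    }

    # Tool result: type "function_call_output" keyed by "call_id".
    tool_result = {
        "type": "function_call_output",
        "call_id": "call_abc123",
        "output": '{"temperature": "72F"}',
    }

    # Function call as parsed from the response output: call_id, name,
    # and arguments sit directly on the item, not nested under "function".
    function_call = {
        "type": "function_call",
        "call_id": "call_abc123",
        "name": "get_weather",
        "arguments": '{"city": "San Francisco"}',
    }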

* fix: Address Codex review comments for OpenAI Responses API

P1 - Fix assistant message context:
- Include assistant messages in conversation to preserve context
- Serialize assistant tool calls as function_call items
- Properly structure multi-turn conversations with alternating roles

P2 - Fix strict tool schema encoding:
- Move strict flag from top level into function payload
- Remove None strict fields from tool specs

Also:
- Update request format to use properly structured input array
- Add unit tests for conversation context preservation
- Add unit tests for strict flag placement
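A sketch of the strict-flag placement the P2 fix describes, assuming a Chat-Completions-style nested tool spec (Langroid's OpenAIToolSpec layout may differ in detail):

    def to_tool_dict(name: str, parameters: dict, strict: bool | None = None) -> dict:
        # strict lives inside the function payload, not at the top level,
        # and is omitted entirely (not sent as null) when unset.
        fn = {"name": name, "parameters": parameters}
        if strict is not None:
            fn["strict"] = strict
        return {"type": "function", "function": fn}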

* fix: Parse tool-call arguments and handle streaming events correctly

- Parse function call arguments from JSON string to dict for LLMFunctionCall
- Serialize dict arguments to JSON string when sending to API
- Fix streaming event handlers: use response.output_item.added,
  response.function_call_arguments.delta, and
  response.function_call_arguments.done instead of response.tool_call.delta
- Update tests to expect dict arguments in responses
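A sketch of the corrected streaming dispatch (the event type names are the ones listed above; the accumulator logic is illustrative):

    import json

    def collect_function_calls(stream):
        # Accumulate tool-call arguments keyed by output index.
        partial = {}
        calls = []
        for event in stream:
            if event.type == "response.output_item.added":
                if event.item.type == "function_call":
                    partial[event.output_index] = {
                        "call_id": event.item.call_id,
                        "name": event.item.name,
                    }
            elif event.type == "response.function_call_arguments.delta":
                pass  # incremental JSON fragments could be buffered here
            elif event.type == "response.function_call_arguments.done":
                call = partial.pop(event.output_index)
                # Arguments arrive as a JSON string; parse to a dict
                # before constructing an LLMFunctionCall.
                call["arguments"] = json.loads(event.arguments)
                calls.append(call)
        return calls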

* chore: trigger CI

---------

Co-authored-by: Claude <[email protected]>
@pchalasani pchalasani marked this pull request as draft December 28, 2025 13:39
@pchalasani pchalasani marked this pull request as ready for review December 28, 2025 13:39

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +266 to +270
max_tokens: int = 200,
tools: Optional[List[OpenAIToolSpec]] = None,
tool_choice: ToolChoiceTypes | Dict[str, str | Dict[str, str]] = "auto",
functions: Optional[List[LLMFunctionSpec]] = None,
function_call: str | Dict[str, str] = "auto",


P2: Handle functions/function_call in Responses backend

The OpenAIResponses.chat method accepts functions/function_call but never forwards them to either the Responses request or the Chat Completions fallback. That means any caller using the legacy function-calling path (e.g., ChatAgent with use_tools_api=False) will silently lose tool/function calls when use_responses_api=True, breaking tool execution even though the API surface claims to support it. Consider mapping functions to tools for Responses or passing them through in the fallback.
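One way to close the gap, sketched under the assumption that LLMFunctionSpec carries name/description/parameters fields mirroring the OpenAI function spec:

    def functions_to_tools(functions):
        # Map legacy function specs onto Responses-style (flat) tool
        # specs so the legacy function-calling path still yields calls.
        return [
            {
                "type": "function",
                "name": fn.name,
                "description": fn.description,
                "parameters": fn.parameters,
            }
            for fn in functions or []
        ]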


Comment on lines +528 to +532
"model": self.config.chat_model,
"messages": chat_messages,
"max_tokens": max_output_tokens,
"temperature": self.config.temperature,
"stream": self.config.stream,


P2: Include usage in streaming fallback requests

The Chat Completions fallback builds streaming requests without stream_options={'include_usage': True}. OpenAI only emits chunk.usage in streaming mode when that option is set, so _stream_chat_completions will report zero prompt/completion tokens and cost whenever the Responses API falls back to streaming chat completions. This breaks usage/cost tracking for streaming fallback scenarios.
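The fix is a small addition to the fallback request builder (stream_options is a standard Chat Completions parameter; the dict mirrors the snippet quoted above):

    def build_fallback_request(config, chat_messages, max_output_tokens):
        req = {
            "model": config.chat_model,
            "messages": chat_messages,
            "max_tokens": max_output_tokens,
            "temperature": config.temperature,
            "stream": config.stream,
        }
        if config.stream:
            # OpenAI only emits chunk.usage in streaming mode when this
            # option is set; without it, token and cost tracking see zeros.
            req["stream_options"] = {"include_usage": True}
        return req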


@pchalasani pchalasani marked this pull request as draft December 28, 2025 19:31
@pchalasani pchalasani marked this pull request as ready for review December 28, 2025 19:31
@pchalasani pchalasani closed this Dec 28, 2025
@pchalasani pchalasani reopened this Dec 28, 2025
@pchalasani pchalasani marked this pull request as draft December 28, 2025 19:36
@pchalasani pchalasani marked this pull request as ready for review December 28, 2025 19:36
@pchalasani pchalasani marked this pull request as draft December 28, 2025 19:42
@pchalasani pchalasani marked this pull request as ready for review December 28, 2025 19:42

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +554 to +555
# Non-streaming Chat Completions with retry
result = _api_call_with_retry(**cc_req, use_chat_completions=True)


P1: Remove internal flag before Chat Completions fallback

When the Responses API is unavailable and stream=False, the fallback calls _api_call_with_retry(**cc_req, use_chat_completions=True), and that extra flag is forwarded straight into client.chat.completions.create(...). use_chat_completions is not a valid Chat Completions parameter, so the fallback path will error out instead of returning a completion whenever it’s used. Consider stripping the flag before calling the SDK or handling the routing outside the kwargs.
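One of the suggested fixes, sketched: make the flag a named parameter so it is consumed before the kwargs reach the SDK (the OpenAI client and retry wrapping are elided here):

    def _api_call_with_retry(use_chat_completions: bool = False, **kwargs):
        # The named parameter absorbs the internal routing flag, so
        # client.chat.completions.create(...) never receives it.
        if use_chat_completions:
            return client.chat.completions.create(**kwargs)
        return client.responses.create(**kwargs)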


Comment on lines +340 to +345
# Create a client (reusing cached client if configured)
client = get_openai_client(
    api_key=self.config.api_key,
    base_url=self.config.api_base,
    default_headers=headers,
)


P2: Honor OpenAIGPTConfig client/timeout settings

The Responses backend always calls get_openai_client with only api_key, base_url, and default_headers, ignoring inherited config fields like timeout, organization, http_client_factory/http_client_config, and use_cached_client. This means users who configure shorter timeouts, custom proxies/SSL settings, or org IDs will silently have those settings ignored when switching to OpenAIResponses, leading to hangs or auth/proxy issues that don’t reproduce with OpenAIGPT.
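A sketch of threading those settings through; timeout, organization, and default_headers are real OpenAI-client constructor parameters, but whether get_openai_client should grow these parameters or the client should be built directly is a design choice for the PR, and the organization field on the config is assumed:

    from openai import OpenAI

    client = OpenAI(
        api_key=config.api_key,
        base_url=config.api_base,
        default_headers=headers,
        timeout=config.timeout,            # honor the configured timeout
        organization=config.organization,  # assumed config field
    )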


@github-actions

Test Results

2 tests in 1 suite (1 file): 1 ✅ passed, 0 💤 skipped, 1 ❌ failed, in 27s ⏱️

For more details on these failures, see this check.

Results for commit e571e22.

