
Conversation

@ftnext (Collaborator) commented Jan 1, 2026

Close #850

Adds a test showing that OpenAI-compatible endpoints are already supported.


Copilot AI left a comment


Pull request overview

This PR demonstrates that the speech_recognition library already supports OpenAI-compatible API endpoints through the OPENAI_BASE_URL environment variable, addressing issue #850. The implementation leverages the OpenAI Python client's native support for custom base URLs.

  • Adds a test case demonstrating OpenAI-compatible endpoint support via environment variables
  • Introduces pytest-httpserver as a new dev dependency for HTTP mocking
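
For illustration, a minimal usage sketch of what this enables, assuming a self-hosted OpenAI-compatible server (the URL, API key, and file name below are placeholders, not part of this PR's changes):

import os

import speech_recognition as sr

# Placeholder values: many self-hosted servers (e.g. vLLM) accept any API key.
os.environ["OPENAI_API_KEY"] = "EMPTY"
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"

r = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:
    audio = r.record(source)

# recognize_openai picks up OPENAI_BASE_URL through the OpenAI Python client.
print(r.recognize_openai(audio))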

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

  • tests/recognizers/whisper_api/test_openai_compatible.py — new test file that validates OpenAI-compatible API endpoints work by setting the OPENAI_BASE_URL environment variable
  • setup.cfg — adds pytest-httpserver to dev dependencies for the new test's HTTP mocking


actual = sut.recognize_openai(audio_data)

assert actual == "Transcription by OpenAI compatible API"
audio_data.get_wav_data.assert_called_once_with()

Copilot AI Jan 1, 2026


The assertion style assert_called_once_with() is inconsistent with the existing OpenAI test in test_openai.py line 35, which uses assert_called_once() (without the _with suffix when no arguments are being checked). For consistency, this should use assert_called_once() since you're verifying it was called exactly once, regardless of arguments.

Suggested change
audio_data.get_wav_data.assert_called_once_with()
audio_data.get_wav_data.assert_called_once()
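
For reference, a small illustration (not from the PR) of how the two assertions differ on a MagicMock:

from unittest.mock import MagicMock

m = MagicMock()
m(b"audio_data")

m.assert_called_once()           # passes: checks only that exactly one call was made
# m.assert_called_once_with()    # would fail here: also requires that call to have had no arguments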

pytest-randomly
respx
numpy
pytest-httpserver

Copilot AI Jan 1, 2026


Adding pytest-httpserver as a dev dependency creates duplication with respx, which is already used for mocking HTTP requests in the existing OpenAI tests. This increases the dependency footprint unnecessarily. Consider removing this dependency and using respx instead in the test implementation to maintain consistency.

Suggested change
pytest-httpserver

Comment on lines +6 to +23
def test_transcribe_with_openai_compatible_api(httpserver, monkeypatch):
# https://github.com/Uberi/speech_recognition/issues/850
httpserver.expect_request(
"/v1/audio/transcriptions",
method="POST",
).respond_with_json({"text": "Transcription by OpenAI compatible API"})

monkeypatch.setenv("OPENAI_API_KEY", "EMPTY")
monkeypatch.setenv("OPENAI_BASE_URL", httpserver.url_for("/v1"))

audio_data = MagicMock(spec=AudioData)
audio_data.get_wav_data.return_value = b"audio_data"

sut = Recognizer()
actual = sut.recognize_openai(audio_data)

assert actual == "Transcription by OpenAI compatible API"
audio_data.get_wav_data.assert_called_once_with()

Copilot AI Jan 1, 2026


This test uses pytest-httpserver for mocking HTTP requests, while the existing OpenAI tests in test_openai.py use respx for the same purpose. This creates an inconsistent testing approach and adds an unnecessary additional dependency. Consider using respx instead to align with the existing test patterns and maintain consistency across the test suite.

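A hedged sketch (not part of the PR) of how the same test might look with respx, mirroring the existing test_openai.py approach; the mocked base URL is an assumption that matches the environment variable set in the test:

from unittest.mock import MagicMock

import httpx
import respx

from speech_recognition import AudioData, Recognizer


@respx.mock
def test_transcribe_with_openai_compatible_api(monkeypatch):
    # https://github.com/Uberi/speech_recognition/issues/850
    monkeypatch.setenv("OPENAI_API_KEY", "EMPTY")
    monkeypatch.setenv("OPENAI_BASE_URL", "http://localhost:8000/v1")

    # Intercept the HTTP request the OpenAI client sends to the compatible endpoint.
    respx.post("http://localhost:8000/v1/audio/transcriptions").mock(
        return_value=httpx.Response(
            200, json={"text": "Transcription by OpenAI compatible API"}
        )
    )

    audio_data = MagicMock(spec=AudioData)
    audio_data.get_wav_data.return_value = b"audio_data"

    actual = Recognizer().recognize_openai(audio_data)

    assert actual == "Transcription by OpenAI compatible API"
    audio_data.get_wav_data.assert_called_once()
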
@ftnext ftnext changed the title from "Support OpenAI spec endpoint" to "docs: OpenAI-compatible self-hosted endpoints are already supported" on Jan 2, 2026
@ftnext ftnext merged commit bfb826d into Uberi:master Jan 2, 2026
9 checks passed
@ftnext ftnext deleted the support-openai-spec-endpoint branch January 2, 2026 01:34


Development

Successfully merging this pull request may close these issues:

  • Allow vllm hosted whisper models to be used with recognize_openai (#850)
