AI tracking does not work properly in Python's asynchronous generator scenarios. #3823

@uraurora

Description

Environment

SaaS (https://sentry.io/)

Steps to Reproduce

  1. An upstream HTTP service returns a streaming response (SSE); my local endpoint mainly relays that data and reports its token consumption.
  2. The local service is built with Python and FastAPI, and uses a Python async generator to yield each event.
  3. Inside the async generator I create a span with `with sentry_sdk.start_span(op="ai.chat_completions.create.xxx", name="xxx") as span`, and I decorate the function with the ai_track decorator. I'm not sure whether the op value is set correctly. A minimal sketch of this setup follows the list.
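
Below is a minimal, runnable sketch of the setup described above, assuming sentry-sdk 2.19.0 and FastAPI. The upstream SSE call is stubbed out; the route, function names, and token count are hypothetical placeholders, while the op and name values are copied verbatim from the report.

```python
import sentry_sdk
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from sentry_sdk.ai.monitoring import ai_track

sentry_sdk.init(traces_sample_rate=1.0)  # DSN read from the SENTRY_DSN env var

app = FastAPI()


async def upstream_sse():
    # Hypothetical stand-in for the upstream HTTP SSE stream.
    for chunk in ("hello", "world"):
        yield chunk


@ai_track("chat completion relay")  # pipeline name is a placeholder
async def relay_events():
    # Span created inside the async generator, as described in step 3.
    with sentry_sdk.start_span(
        op="ai.chat_completions.create.xxx", name="xxx"
    ) as span:
        async for event in upstream_sse():
            yield f"data: {event}\n\n"
        # Report token consumption once the stream is exhausted.
        span.set_data("ai.total_tokens.used", 42)  # placeholder count


@app.get("/chat")
async def chat():
    # The generator body only runs while the response is being streamed,
    # i.e. after this endpoint handler has already returned.
    return StreamingResponse(relay_events(), media_type="text/event-stream")
```

Note that because relay_events is an async generator function, its body (and the span inside it) executes lazily while the response streams, which may be relevant to why nothing shows up.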

Expected Result

I expect LLM Monitoring to work for the streaming endpoint as well, but it seems that only the non-streaming API is tracked, as sketched below.
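
For contrast, a non-streaming handler along the following lines is the case that reportedly does appear in LLM Monitoring. call_llm is a hypothetical stand-in for the upstream request:

```python
import sentry_sdk
from sentry_sdk.ai.monitoring import ai_track


async def call_llm() -> str:
    # Hypothetical stand-in for a non-streaming upstream request.
    return "full response"


@ai_track("chat completion (non-streaming)")  # placeholder name
async def chat_once() -> str:
    # A plain coroutine: the span opens and closes within a single call,
    # unlike the async generator in the streaming variant.
    with sentry_sdk.start_span(
        op="ai.chat_completions.create.xxx", name="xxx"
    ) as span:
        result = await call_llm()
        span.set_data("ai.total_tokens.used", 42)  # placeholder count
        return result
```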

Actual Result

The streaming API does not show anything in LLM Monitoring. I'm not sure whether there's an issue with my configuration or whether this style of invocation is not currently supported.

Product Area

Insights

Link

https://moflow.sentry.io/insights/ai/llm-monitoring/?project=4508239351447552&statsPeriod=24h

DSN

No response

Version

2.19.0
