Feature Description
The token counter currently does its own calculation for prompt and completion tokens.
However, this prevents us from capturing the reasoning tokens that OpenAI reports for o1 models.
Please provide a way to directly read the token counts returned as part of the OpenAI response.
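A minimal sketch of what this could look like, assuming the Chat Completions response shape documented by OpenAI: a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`, plus a `completion_tokens_details.reasoning_tokens` breakdown for o1 models. The payload and the `extract_token_counts` helper below are illustrative, not part of any existing library:

```python
# Illustrative sample payload mimicking the `usage` block of a Chat
# Completions response for an o1 model (not a real API response).
sample_response = {
    "usage": {
        "prompt_tokens": 15,
        "completion_tokens": 1100,
        "total_tokens": 1115,
        "completion_tokens_details": {"reasoning_tokens": 1024},
    }
}

def extract_token_counts(response: dict) -> dict:
    """Read token counts straight from the API response instead of
    recomputing them with a local tokenizer."""
    usage = response.get("usage", {})
    details = usage.get("completion_tokens_details") or {}
    return {
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "reasoning_tokens": details.get("reasoning_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }

print(extract_token_counts(sample_response))
```

Reading the counts from the response keeps the numbers authoritative (they are what OpenAI bills against) and avoids drift between a local tokenizer and the server-side count.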
Reason
The reason is simple: fine-grained cost tracking. Reasoning tokens are billed as completion tokens, so without them the cost estimate for o1 models is incomplete.
Value of Feature
I don't want to consume the OpenAI APIs directly just to get accurate usage numbers, so having the library expose them would be very valuable for me.