Supporting AI tools #3007
Replies: 7 comments 14 replies
-
After talking with some folks internally and in the community (going by downloads, stars, etc.), I believe the most useful platforms would be as follows, grouped by product/service and then by popularity within each group. ✅ means we already have a PR in the works for them.
LLM Providers
Frameworks
Vector DBs
-
Interesting use cases: the following I am regurgitating from @czyber 😆 for full transparency 🚀
-
Hugging Face does provide some support in that you can specify an OTLP endpoint (Text Generation Inference: Distributed Tracing), which appears to be what OpenLLMetry takes advantage of here.
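For anyone wanting to try this out, here is a minimal sketch of what that wiring looks like, assuming a local OpenTelemetry collector; the model ID and collector address are placeholders, and the `--otlp-endpoint` flag is the one documented for the TGI launcher:

```shell
# Point the Text Generation Inference launcher at an OTLP collector.
# Assumes a collector is already listening on the default OTLP gRPC port (4317);
# the model ID below is just an example.
text-generation-launcher \
  --model-id mistralai/Mistral-7B-Instruct-v0.2 \
  --otlp-endpoint http://localhost:4317
```

TGI then emits its request/inference spans to that endpoint, so any OTLP-compatible backend can pick them up.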
-
Just a quick update:
-
@antonpirker the documentation isn't clear: if I have the Python openai library installed, will monitoring automatically be set up, or do I need to enable it explicitly?
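For what it's worth, my understanding (worth confirming against the current docs) is that the integration is enabled automatically once the `openai` package is installed, but you still need tracing and PII settings in your `sentry_sdk.init` for token counts and prompts to show up. A minimal sketch, with a placeholder DSN:

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder DSN
    traces_sample_rate=1.0,    # tracing must be enabled for LLM spans to be recorded
    send_default_pii=True,     # required for prompt/response text to appear in spans
    # Listing the integration explicitly is optional; it is also auto-enabled
    # when the `openai` package is installed.
    integrations=[OpenAIIntegration()],
)
```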
-
Hi @antonpirker, we are typically using https://github.com/griptape-ai/griptape as our AI framework. Are there plans to support an integration there? The current project I'm working on uses OpenAI models (for now) via Griptape, which in turn uses the OpenAI SDK, so I had thought it might just work. However, it doesn't seem to report token usage, it shows unrealistic 0.02 ms durations, and consequently no cost estimates. I figure it's perhaps an incompatibility with how Griptape wraps the SDKs. I did think about having both support OpenTelemetry, but figured I'd check here first as I wasn't immediately sure how that would work and was time-bound.
-
Hi @antonpirker. I am not sure if this thread is still being followed, but here is some feedback from my side. Sorry for the long post.

We are using OpenAI with Langchain. After following the documentation I found it a bit difficult to get started, so I created a PR to update the documentation: getsentry/sentry-docs#13905

Two use cases aren't easily covered by the current functionality:

Used tokens by tag
See the number of used tokens by custom tag or user ID. Currently we can only see the total used tokens by project and environment. There is no way to drill down further in the AI LLM Monitoring tab in Sentry. (screenshot) It would be nice to have filters similar to the Traces tab. (screenshot)

Quick overview of multiple input and output prompts
Currently, input and output prompts are hard to find from the AI pipeline tab. Multiple clicks and identifying the correct span in the traces view to find the prompt is cumbersome. That may be enough for finding a bug, but it is insufficient for frequent use, especially if the prompt is long. The UI is not made to display long prompts, and if the prompt is too long it is also cut off (maybe this could be configurable?). (screenshot)

When developing an AI pipeline, I'm mainly interested in the data input/output and the prompt input/output. Of course the requirements depend on each developer, and maybe Sentry is not the right tool for these use cases.
-
Hey everybody!
Not long ago we added an OpenAI integration to the Sentry SDK. Right now we have PRs open for two more AI-related integrations: Anthropic and Langchain.
If you work in the AI sector or are an AI enthusiast, I would like to ask you a couple of questions:
Let's get a discussion going, so together we can build kick ass AI support into Sentry!