OpenLIT simplifies your AI development workflow, especially for Generative AI and LLMs. It streamlines essential tasks such as experimenting with LLMs, organizing and versioning prompts, and securely handling API keys. With just one line of code, you can enable OpenTelemetry-native observability, with full-stack monitoring that covers LLMs, vector databases, and GPUs. This lets developers build AI features and applications with confidence, transitioning smoothly from testing to production.
This project follows the OpenTelemetry community's Semantic Conventions and is consistently updated to align with the latest standards in observability.
- 📈 Analytics Dashboard: Monitor your AI application's health and performance with detailed dashboards that track metrics, costs, and user interactions, providing a clear view of overall efficiency.
- 🔌 OpenTelemetry-native Observability SDKs: Vendor-neutral SDKs to send traces and metrics to your existing observability tools.
- 💲 Cost Tracking for Custom and Fine-Tuned Models: Tailor cost estimations for specific models using custom pricing files for precise budgeting (see the sketch after this list).
- 🐛 Exceptions Monitoring Dashboard: Quickly spot and resolve issues by tracking common exceptions and errors with a dedicated monitoring dashboard.
- 💭 Prompt Management: Manage and version prompts using Prompt Hub for consistent and easy access across applications.
- 🔑 API Keys and Secrets Management: Securely handle your API keys and secrets centrally, avoiding insecure practices.
- 🎮 Experiment with Different LLMs: Use OpenGround to explore, test, and compare various LLMs side by side.
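As an illustration of the cost-tracking feature, a custom pricing file can be supplied when the SDK is initialized. This is a minimal sketch, not the definitive format: the `pricing_json` argument and the schema of the pricing file below are assumptions modeled on OpenLIT's default pricing file, so verify both against the project documentation.

```python
# A minimal sketch of cost tracking for a custom or fine-tuned model.
# ASSUMPTION: the `pricing_json` parameter and the pricing schema below are
# modeled on OpenLIT's default pricing file; check the docs for the exact form.
import json

import openlit

# Hypothetical per-1K-token prices for a fine-tuned chat model.
pricing = {
    "chat": {
        "my-finetuned-model": {"promptPrice": 0.0005, "completionPrice": 0.0015}
    }
}
with open("pricing.json", "w") as f:
    json.dump(pricing, f)

# Point the SDK at the custom pricing file (a local path or a URL).
openlit.init(pricing_json="pricing.json")
```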
```mermaid
flowchart TB;
  subgraph " "
    direction LR;
    subgraph " "
      direction LR;
      OpenLIT_SDK[OpenLIT SDK] -->|Sends Traces & Metrics| OTC[OpenTelemetry Collector];
      OTC -->|Stores Data| ClickHouseDB[ClickHouse];
    end
    subgraph " "
      direction RL;
      OpenLIT_UI[OpenLIT] -->|Pulls Data| ClickHouseDB;
    end
  end
```
Git Clone OpenLIT Repository

Open your command line or terminal and run:

```bash
git clone git@github.com:openlit/openlit.git
```
Self-host using Docker

Deploy and run OpenLIT with the following command:

```bash
docker compose up -d
```
For instructions on installing in Kubernetes using Helm, refer to the Kubernetes Helm installation guide.
Open your command line or terminal and run:

```bash
pip install openlit
```
For instructions on using the TypeScript SDK, visit the TypeScript SDK Installation guide.
Integrate OpenLIT into your AI applications by adding the following lines to your code:

```python
import openlit

openlit.init()
```
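Once `openlit.init()` has run, calls made through supported LLM libraries are traced automatically. A minimal sketch, assuming the `openai` Python package is installed and `OPENAI_API_KEY` is set in the environment (the model name is purely illustrative):

```python
import openlit
from openai import OpenAI

# One line enables OpenTelemetry-native traces and metrics.
openlit.init()

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# No further changes needed: this request is captured by the instrumentation.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```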
Configure the telemetry data destination as follows:
| Purpose | Parameter / Environment Variable | For Sending to OpenLIT |
|---|---|---|
| Send data to an HTTP OTLP endpoint | `otlp_endpoint` or `OTEL_EXPORTER_OTLP_ENDPOINT` | `"http://127.0.0.1:4318"` |
| Authenticate telemetry backends | `otlp_headers` or `OTEL_EXPORTER_OTLP_HEADERS` | Not required by default |
💡 Info: If the `otlp_endpoint` or `OTEL_EXPORTER_OTLP_ENDPOINT` is not provided, the OpenLIT SDK will output traces directly to your console, which is recommended during the development phase.
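If your telemetry backend requires authentication, the same pattern applies to headers. A minimal sketch; the header name and token below are placeholders, and whether `otlp_headers` accepts the comma-separated `key=value` string shown here (mirroring `OTEL_EXPORTER_OTLP_HEADERS`) or a dict may depend on the SDK version:

```python
import openlit

# Equivalent environment-variable form:
#   export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-token>"
# (the OTLP spec uses comma-separated key=value pairs).
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    otlp_headers="Authorization=Bearer <your-token>",  # placeholder credentials
)
```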
Initialize using Function Arguments

Add the following two lines to your application code:

```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
)
```
Initialize using Environment Variables

Add the following two lines to your application code:

```python
import openlit

openlit.init()
```

Then configure your OTLP endpoint using an environment variable:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```
With observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to gain insights into your AI application's performance and behavior and to identify areas for improvement.
Just head over to OpenLIT at `127.0.0.1:3000` in your browser to start exploring. You can log in using the default credentials:

- Email: `user@openlit.io`
- Password: `openlituser`
We are dedicated to continuously improving OpenLIT. Here's a look at what's been accomplished and what's on the horizon:
| Feature | Status |
|---|---|
| OpenTelemetry-native Observability SDK for Tracing and Metrics | ✅ Completed |
| OpenTelemetry-native GPU Monitoring | ✅ Completed |
| Exceptions and Error Monitoring | ✅ Completed |
| Prompt Hub for Managing and Versioning Prompts | ✅ Completed |
| OpenGround for Testing and Comparing LLMs | ✅ Completed |
| Vault for Central Management of LLM API Keys and Secrets | ✅ Completed |
| Cost Tracking for Custom Models | ✅ Completed |
| Real-Time Guardrails Implementation | ✅ Completed |
| Programmatic Evaluation for LLM Response | ✅ Completed |
| Auto-Evaluation Metrics Based on Usage | 🔜 Coming Soon |
| Human Feedback for LLM Events | 🔜 Coming Soon |
| Dataset Generation Based on LLM Events | 🔜 Coming Soon |
| Search over Traces | 🔜 Coming Soon |
Whether it's big or small, we love contributions 💚. Check out our Contribution guide to get started.
Unsure where to start? Here are a few ways to get involved:
- Join our Slack or Discord community to discuss ideas, share feedback, and connect with both our team and the wider OpenLIT community. Your input helps us grow and improve, and we're here to support you every step of the way.
Connect with OpenLIT community and maintainers for support, discussions, and updates:
- 🌟 If you like it, leave a star on our GitHub.
- 🌍 Join our Slack or Discord community for live interactions and questions.
- 🐞 Report bugs on our GitHub Issues to help us improve OpenLIT.
- 𝕏 Follow us on X for the latest updates and news.
OpenLIT is available under the Apache-2.0 license.