<![CDATA[Docs — observIQ]]><![CDATA[observIQ Documentation]]>https://observiq.comRSS for NodeFri, 22 Nov 2024 15:06:20 GMT<![CDATA[en]]><![CDATA[Frequently Asked Questions]]><![CDATA[If you have a question, it has likely been asked before. Provided below is a list of questions that have come up on several occasions. If you do not see your question on the list, please open a support request and we will be happy to help you. BindPlane OP Server - I have a setup question that isn't covered in the quickstart guide; how do I deploy with an advanced feature? - Check out our advanced setup page if you need additional installation options such as Kubernetes, TLS, or running behind a proxy. - What happens if my connection to BindPlane is interrupted / the BindPlane OP server goes down? Does data get lost? The BindPlane OP server is only used for configuration. The BindPlane Agent, which is an OTel Collector, sends telemetry directly to the destination platform. If the BindPlane server goes down, agent telemetry is not interrupted; you simply cannot push new configurations to the agents until the connection is reestablished. - How many agents can I run with one BindPlane OP server? Our free tier license supports up to 10 agents. We have customers in production using 20,000+ agents. If you are interested in running a larger number of agents, reach out to our sales team at [email protected]. - How do you price BindPlane OP? BindPlane OP has several editions and is priced based on the amount of data ingested by agents managed by BindPlane. For more information, please reach out to our sales team at [email protected]. - How do I upgrade BindPlane OP for Linux? Running the install command without the init flag at the end is enough to upgrade BindPlane. You can get the installer command from the download page. Run this script on your BindPlane server to upgrade BindPlane. - Can you deploy BindPlane OP on Kubernetes? You can install the BindPlane OP server and agents on Kubernetes. See Kubernetes Installation for more information. - What license type do I need? We offer a free tier license that supports up to 10 agents ingesting up to 10 GB/Day, a BindPlane for Google license that only supports Google Cloud as a destination, and our enterprise license with no limitations. More information about our license types can be found on our solutions page. Sources File Source - How do you reset the file collector to re-read files in a directory? Under the advanced configuration for the file source, set the file to be read from the beginning and uncheck Enable File Offset Storage. Destinations Google SecOps Windows Events routing to Google SecOps: - Google SecOps can only read 'RAW' telemetry. Please verify that, in the Google SecOps Destination settings, 'Send Single Field' is checked, with 'Field to Send' set to 'Body'. - The 'Log Type' in the Google SecOps Destination settings should be set to 'WINEVTLOG' to capture Windows Event Logs. - On the 'Windows Events' Source, under 'Advanced', please make sure 'Raw Logs' is selected. - To send events from a custom channel to Google SecOps, you can specify the channel name in the 'Windows Events' source under 'Advanced Settings'.
To find the log name of a custom channel, you can run the following command on the Windows Server: Get-WinEvent -ListLog - To capture DNS logs on Windows, you can add the 'DNS Server' channel under 'Advanced' in the Windows Events source. - To capture DHCP logs on Windows, you can use the CSV source and point it at your DHCP logs, which may be located at 'c:\windows\system32\dhcp\dhcpsrvlog.\.txt']]>https://observiq.com/docs/faqhttps://observiq.com/docs/faqThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Create a Support Bundle]]><![CDATA[To create a support bundle, download and run the script found here. It writes its output to the directory it's run from and collects the following information: 1. BindPlane logs 2. BindPlane configuration 3. With the agent flag, the agent configuration and agent information 4. System information It packages the information into a tar.gz file named bindplane_support_bundle_YYYYMMDD_HHMMSS.tar.gz, where YYYY is the year, MM the month, DD the day, HH the hour, MM the minute, and SS the second. It must be run with sudo or as root.]]>https://observiq.com/docs/support/create-a-support-bundlehttps://observiq.com/docs/support/create-a-support-bundleMon, 11 Dec 2023 16:10:29 GMT<![CDATA[Getting Support]]><![CDATA[Community-Based Support Do you have a question or feedback? Join the discussion in our BindPlane Slack channel here. Entitled Support Our support team is happy to handle technical issues related to BindPlane OP, our OpenTelemetry-based agent, and configuration questions. Direct support for configurations sent to any destination is part of Enterprise Edition entitlement. Opening a Ticket Opening a ticket starts with an email to [email protected]. From there, a ticket is opened and you will be emailed tracking information automatically. Google Destination Partnership Support Customers using BindPlane for Google are entitled to support. This support channel is provided through the Google Partnership. Opening a Ticket with Google There is an integrated support process in which customers initiate support requests with Google. These issues are then worked jointly with Google for observIQ product technical and configuration issues.
For more information on opening a ticket with Google Support: https://cloud.google.com/support/docs]]>https://observiq.com/docs/support/contact-ushttps://observiq.com/docs/support/contact-usThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Sources]]><![CDATA[Sources Available for Bindplane OP Source Metrics Logs Traces : : : : Active Directory Aerospike Apache Combined Apache Common Apache HTTP Apache Spark AWS Cloudwatch AWS S3 Rehydration Azure Blob Storage Azure Blob Rehydration Azure Event Hub BindPlane Agent BindPlane Gateway BindPlane OP Cassandra Cisco ASA Cisco Catalyst Cisco Meraki Cloudflare CockroachDB Common Event Format Couchbase CouchDB CSV Custom Elasticsearch F5 BIG-IP Filelog Fluent Forward Hadoop HAProxy HBase Host Metrics HTTP Check HTTP Log JBoss Journald JVM Kafka Cluster Kafka Node Kafka OTLP Kubernetes Cluster Events Kubernetes Cluster Metrics Kubernetes Container Logs Kubernetes Kubelet Metrics Kubernetes Prometheus Metrics Logstash macOS Microsoft 365 Microsoft IIS Microsoft SQL Server MongoDB MongoDB Atlas MySQL Nginx Okta OpenTelemetry Oracle Database PgBouncer PostgreSQL Prometheus RabbitMQ Redis SAP HANA SAP NetWeaver Solr Splunk (HEC) Splunk (TCP) SQL Query StatsD Syslog TCP Telemetry Generator Tomcat Ubiquiti UDP VMware ESXi VMware vCenter W3C WildFly Windows DHCP Windows Events ZooKeeper]]>https://observiq.com/docs/resources/sourceshttps://observiq.com/docs/resources/sourcesMon, 23 Sep 2024 14:36:33 GMT<![CDATA[Processors]]><![CDATA[What are Processors? Processors can be inserted into your telemetry pipeline to transform your data before it arrives at your destination. Adding attributes, filtering, and converting logs to metrics are all the types of transformations we can do using processors in BindPlane OP. Certain Processors are available only in BindPlane OP Enterprise Edition. Please contact [email protected] for more information. Processors Available for BindPlane OP Processor Metrics Logs Traces Google Edition : : : : : Add Fields Batch Coalesce Compute Metric Statistics Count Telemetry Custom Delete Empty Values Delete Fields Deduplicate Logs Extract Metric Filter by Condition Filter by Field Filter HTTP Status Filter Metric Name Filter by Regex Filter Severity Group by Attributes Google SecOps Standardization Log Sampling Lookup Fields Marshal Mask Sensitive Data Move Field Parse CSV Parse JSON Parse Key Value Parse Severity Parse Timestamp Parse with Regex Parse XML Rename Fields Rename Metric Resource Detection]]>https://observiq.com/docs/resources/processorshttps://observiq.com/docs/resources/processorsMon, 11 Nov 2024 20:37:21 GMT<![CDATA[Processor Impact on Agent Performance]]><![CDATA[Processors are a very powerful tool in BindPlane OP. They allow you to transform your data before it arrives at your destination. This can be useful for adding attributes, filtering, and converting logs to metrics. The question is, how do processors impact agent performance? We'll look at some common uses cases for processors and see how they can impact agent performance. Table of Contents - Benchmarking Setup - Control Runs - Logs Control - Metrics Control - Deduplicating Logs - Converting Logs to Metrics - Lookup Fields - Delete Fields - Logs - Metrics - Takeaways Benchmarking Setup For our benchmarking setup we have two agents running on a GCP n2-standard-2 Compute Engine instance, which has 2 vCPUs and 8GB of memory. One agent will be the test agent that we will be applying processors to. The other agent will be acting as our destination. 
It is configured with an OTLP source and a custom destination that logs the data it receives. For the log generation tests, static JSON logs were generated at a rate of 1000 logs per second, across 10 files. This roughly equates to 140MiB of logs per minute. The log source is the Filelog source with JSON parsing configured. For metric generation, the Host Metrics source was used with the default configuration. Control Runs A one-hour control run was performed for both the log and metric generation tests to establish a baseline for the agent's performance without any processors applied. Logs Control The control run for logs collected from 10 log files, with logs generated at 1000 logs per second to each file. This results in a throughput of 140MiB/m. The agent's average CPU usage was 27% and average memory usage was 2.8% (230MB). This is the topology view of the control configuration: These are the graphs showing CPU and memory usage over the hour-long run: Metrics Control The control run for metrics had a data throughput of 99KiB/m. The agent's average CPU usage was 0.2% and average memory usage was 2.1% (172MB). This is the topology view of the control configuration: These are the graphs showing CPU and memory usage over the hour-long run: Deduplicating Logs Deduplicating logs is a common data reduction use case. It can help reduce the number of redundant logs that are sent to a destination. For our tests, we used the Deduplicate Logs processor, configured to deduplicate logs while excluding the timestamp and log.file.name attributes. The benchmark was run for 30 minutes. The agent's average CPU usage was 16% and average memory usage was 2.8% (230MB). This is actually lower than the control run because the logs are being deduplicated, so the agent processes less data. This is the topology view of the benchmark configuration: These are the graphs showing CPU and memory usage over the 30-minute run: (The benchmark takes place in the section between the blue lines.) Benchmark Conclusion The above images show that applying a log deduplication processor does not negatively impact the agent's performance. It actually reduces CPU usage, since it reduces the amount of data the agent has to process. Converting Logs to Metrics Converting logs to metrics is a great way to reduce cost and log ingestion if you only need a metric derived from the logs. For this benchmark, the use case is counting the number of logs of each severity and discarding the logs themselves. We use the Count Telemetry processor to count the number of info, warning, and error logs, and the Filter by Field processor to filter out the logs and send only the metrics. Here is the order in which the processors are applied: The benchmark was run for 30 minutes. The agent's average CPU usage was 20% and average memory usage was 2.5% (205MB). Like the log deduplication benchmark, this is actually lower than the control run. This is the topology view of the benchmark configuration: (It shows 0B/m after the processors because the logs are all being filtered out.) These are the graphs showing CPU and memory usage over the 30-minute run: (The benchmark takes place in the section between the blue lines.) Benchmark Conclusion The above images show that even with four processors in use, the agent's performance is not negatively impacted. Both CPU and memory usage are lower than in the control run.
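To make the intent of this log-to-metric pipeline concrete, here is a short, purely illustrative Python sketch of the same idea: count the incoming logs per severity and drop the log records themselves, so only a small metric payload continues downstream. The function and field names here are hypothetical, and this is not how the BindPlane Agent's Count Telemetry or Filter by Field processors are actually implemented.

```python
# Illustrative sketch only: shows the net effect of the pipeline described
# above (count logs per severity, then drop the logs), not the agent's
# actual implementation of the Count Telemetry / Filter by Field processors.
from collections import Counter
from typing import Iterable, List, Mapping, Tuple

def count_and_drop(logs: Iterable[Mapping]) -> Tuple[Counter, List[Mapping]]:
    """Return a per-severity count and an empty log batch."""
    counts = Counter(log.get("severity", "unknown") for log in logs)
    return counts, []  # only the counts continue downstream; logs are dropped

batch = [
    {"severity": "info", "body": "user login"},
    {"severity": "error", "body": "db timeout"},
    {"severity": "info", "body": "user logout"},
]
metrics, remaining = count_and_drop(batch)
print(metrics)    # Counter({'info': 2, 'error': 1})
print(remaining)  # [] -- the log records themselves are discarded
```

The large drop in throughput after the processors (0B/m of logs) is exactly why this configuration uses less CPU than the control run.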
Lookup Fields The Lookup Fields processor allows adding additional fields to telemetry based on existing fields. This can be useful for adding additional context before sending to a destination. In this benchmark, we add an additional field to the logs based on the env field in the body. The benchmark was run for 30 minutes. The agent's average CPU usage was 25% and average memory usage was 2.5% (205MB). This is fairly close to the control run. This is the topology view of the benchmark configuration: These are the graphs showing CPU and memory usage over the 30-minute run: (The benchmark takes place in the section between the blue lines.) Benchmark Conclusion The results are fairly close to the control run. We don't see a performance improvement like in the previous runs, since we are adding a small amount of metadata to the logs rather than reducing overall throughput. Delete Fields The Delete Fields processor allows removing fields from telemetry. This can be useful for removing extraneous or sensitive data from telemetry before sending it to a destination. This benchmark was run against both logs and metrics, deleting fields from each. Logs The benchmark was run for 30 minutes. The agent's average CPU usage was 24% and average memory usage was 2.4% (196MB). This is fairly close to the control run. This is the topology view of the benchmark configuration: These are the graphs showing CPU and memory usage over the 30-minute run: (The benchmark takes place in the section between the blue lines.) Benchmark Conclusion The results are fairly close to the control run, using slightly less CPU and memory because the logs are slightly smaller after the fields are removed. Metrics The benchmark was run for 30 minutes. The agent's average CPU usage was 0.2% and average memory usage was 2.1% (172MB). This is identical to the metrics control run. This is the topology view of the benchmark configuration: These are the graphs showing CPU and memory usage over the 30-minute run: (The benchmark takes place in the section between the blue lines.) Benchmark Conclusion The results were identical to the metrics control run. Since the volume of metrics and the rate at which they are processed are low, the impact of the processor is negligible. Takeaways The benchmarks show that using processors does not negatively impact the agent's performance. In fact, in some cases processors reduce the amount of data the agent has to process and lower CPU usage. Data reduction can significantly reduce the resource usage of an agent. Ordering processors so data reduction happens earlier in the pipeline can help reduce the agent's overall resource usage. The impact of a processor depends on the volume of data being processed. The log benchmarks showed a greater impact from processors than the metric benchmarks.
This is due to the volume of logs being processed being much higher than the volume of metrics.]]>https://observiq.com/docs/resources/processor-impact-agent-performancehttps://observiq.com/docs/resources/processor-impact-agent-performanceWed, 30 Oct 2024 16:06:44 GMT<![CDATA[Destinations]]><![CDATA[Destinations Available for Bindplane OP Destination Logs Metrics Traces Persistent Queuing Proxy : : : : : : Amazon Managed Prometheus AWS S3 Azure Blob Storage Azure Monitor BindPlane Gateway Coralogix ClickHouse Custom Datadog Dev Null Dynatrace Dynatrace (deprecated) Elasticsearch (Legacy) Elasticsearch (OTLP) Google Cloud Google Cloud Managed Service for Prometheus Google SecOps (Chronicle) Google SecOps (Chronicle) Forwarder Grafana Cloud Honeycomb.io Honeycomb Refinery InfluxDB Jaeger Kafka Logz.io Loki New Relic OpenTelemetry Observe Prometheus Prometheus Remote Write QRadar Snowflake Splunk HEC Splunk Observability Cloud Sumo Logic Victoria Metrics Zipkin 1 The Logz.io destination supports persistent queuing for both traces and logs. For metrics, WAL may be configured to increase data resiliency in case of agent restarts.]]>https://observiq.com/docs/resources/destinationshttps://observiq.com/docs/resources/destinationsWed, 16 Oct 2024 19:04:42 GMT<![CDATA[ZooKeeper]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname of the ZooKeeper system. port int 2181 Port of the ZooKeeper system.]]>https://observiq.com/docs/resources/sources/zookeeperhttps://observiq.com/docs/resources/sources/zookeeperThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Windows Events]]><![CDATA[Platform Metrics Logs Traces Windows Prerequisites for Remote Configuration Supported Versions: - Windows Vista or later Minimum Setup Requirements: - User Permissions: - The user must be a member of the Event Log Readers group. - The user must have DCOM and WMI permissions for remote access. - Firewall Configuration: - Ensure the firewall rules allow the necessary ports: TCP 135, 445, and dynamic RPC ports (49152-65535). - Windows Firewall Exception: - Enable the "Remote Event Log Management" exception on the remote machine. Configuration Table Windows Event Log Receiver Parameter Type Default Description system_event_input bool true Enable the System event channel. app_event_input bool true Enable the Application event channel. security_event_input bool true Enable the Security event channel. suppress_rendering_info bool false When this is enabled, the source will not attempt to resolve rendering info. This can improve performance but comes at a cost of losing some details in the event log. custom_channels strings Custom channels to read events from. Remote Configuration Options Parameter Type Default Description remote.server string The server to connect to for remote event logs. remote.username string The username to authenticate with the server. remote.password string The password to authenticate with the server. 
remote.domain string The domain of the server (optional).]]>https://observiq.com/docs/resources/sources/windows-eventshttps://observiq.com/docs/resources/sources/windows-eventsThu, 31 Oct 2024 18:44:23 GMT<![CDATA[Windows DHCP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Windows Configuration Table Parameter Type Default Description : : : : file_path strings "C:/Windows/System32/dhcp/DhcpSrvLog-\.log" File or directory paths to tail for logs. start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/windows-dhcphttps://observiq.com/docs/resources/sources/windows-dhcpThu, 09 Nov 2023 09:26:01 GMT<![CDATA[WildFly]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows Configuration Table Parameter Type Default Description : : : : standalone_file_path strings /opt/wildfly/standalone/log/server.log File paths to tail for standalone server logs. enable_domain_server bool true Enable to read domain server logs. domain_server_path strings '/opt/wildfly/domain/servers/\/log/server.log' File paths to tail for domain server logs. enable_domain_controller bool true Enable to read domain controller logs. domain_controller_path strings '/opt/wildfly/domain/log/\.log' File paths to tail for domain controller logs. start_at enum end Start reading the file from the 'beginning' or 'end'. timezone timezone "UTC" The timezone to use when parsing timestamps.]]>https://observiq.com/docs/resources/sources/wildflyhttps://observiq.com/docs/resources/sources/wildflyThu, 09 Nov 2023 09:26:01 GMT<![CDATA[W3C]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings File or directory paths to tail for logs. exclude_file_log_path strings File or directory paths to exclude. delimiter enum tab Delimiter character used between the fields of the W3C log line. Valid values include: tab, space. encoding enum utf-8 The encoding of the file being read. Valid values include: nop, utf-8, utf-16le, utf-16be, ascii, big5. header_delimiter enum default Delimiter character used between fields in the W3C Field header. The value of the "Delimiter" parameter is used by default. Valid values include: tab, space, default. include_file_name_attribute bool true Whether to add the file name as the attribute "log.file.name". include_file_path_attribute bool false Whether to add the file path as the attribute "log.file.path". include_file_name_resolved_attribute bool false Whether to add the file name after symlinks resolution as the attribute "log.file.name_resolved". include_file_path_resolved_attribute bool false Whether to add the file path after symlinks resolution as the attribute "log.file.path_resolved".]]>https://observiq.com/docs/resources/sources/w3chttps://observiq.com/docs/resources/sources/w3cThu, 09 Nov 2023 09:26:01 GMT<![CDATA[VMware vCenter]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This receiver has been built to support ESXi and vCenter versions: - 7.5 - 7.0 - 6.7 A Read Only user assigned to a vSphere with permissions to the vCenter server, cluster and all subsequent resources being monitored must be specified in order for the receiver to retrieve information about them. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. 
endpoint\ string Endpoint to the vCenter Server or ESXi host that has the sdk path enabled. Required. The expected format is :// \\n \\ni.e: https://vcsa.hostname.localnet username\ string Username used to authenticate. password\ string Password used to authenticate. tls string Not Required. Will use defaults for configtls.TLSClientSetting. By default insecure settings are rejected and certificate verification is on. collection_interval int 60 Sets how often (seconds) to scrape for metrics. \_required field_]]>https://observiq.com/docs/resources/sources/vmware-vcenterhttps://observiq.com/docs/resources/sources/vmware-vcenterThu, 11 Apr 2024 15:26:58 GMT<![CDATA[VMware ESXi]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port\ int 5140 The port to bind to and receive syslog. Collector must be running as root (Linux) or Administrator (windows) when binding to a port below 1024. listen_ip string "0.0.0.0" The IP address to bind to and receive syslog. enable_tls bool false Whether or not to use TLS. cert_file string Path to the x509 PEM certificate. key_file string Path to the x509 PEM private key. \_required field_]]>https://observiq.com/docs/resources/sources/vmware-esxihttps://observiq.com/docs/resources/sources/vmware-esxiThu, 09 Nov 2023 09:26:01 GMT<![CDATA[UDP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port\ int Port to listen on. listen_ip string "0.0.0.0" IP Address to listen on. log_type string udp Arbitrary for attribute 'log_type'. Useful for filtering between many udp sources. parse_format enum none Method to use when parsing. Valid values are none, json, and regex. When regex is selected, 'Regex Pattern' must be set. regex_pattern string The regex pattern used when parsing log entries. multiline_line_start_pattern string Regex pattern that matches the beginning of a log entry, for handling multiline logs. multiline_line_end_pattern string Regex pattern that matches the end of a log entry, useful for terminating parsing of multiline logs. parse_timestamp bool false Whether to parse the timestamp from the log entry. timestamp_field string timestamp The field containing the timestamp in the log entry. parse_timestamp_format enum ISO8601 The format of the timestamp in the log entry. Choose a common format, or specify a custom format. Options include "ISO8601", "RFC3339", "Epoch", and "Manual". epoch_timestamp_format enum s The layout of the epoch-based timestamp. Required when parse_timestamp_format is set to "Epoch".. Options include "s", "ms", "us", "ns", "s.ms", "s.us", "s.ns". manual_timestamp_format string '%Y-%m-%dT%H:%M:%S.%f%z' The strptime layout of the timestamp. Used when parse_timestamp_format is set to "Manual". timezone timezone UTC The timezone to use if the Timestamp Format doesn't include a timezone. Otherwise, the timezone in the Timestamp Format will be respected. NOTE: This is also required to parse timezone abbreviations, due to their ambiguity. parse_severity bool false Whether to parse severity from the log entry. severity_field string severity The field containing the severity in the log entry. parse_to string body The field that the log will be parsed to. Some exporters handle logs favorably when parsed to attributes over body and vice versa. async_readers int 1 Determines how many workers read from UDP port and push to buffer. 
Generally this value should remain at 1 unless otherwise directed by observIQ support. async_processors int 3 Determines how many workers read from buffer (pushed by readers) and process logs before sending downstream. Increasing this value can be useful when the agent is under significant load. max_queue_length int 100 Determines size of buffer being used by async reader workers. When buffer reaches max number, reader workers will block until buffer has room. Increasing this value can be useful if you anticipate short durations of increased log volume. Generally, you should increase async_processors before increasing this value. \_required field_]]>https://observiq.com/docs/resources/sources/udphttps://observiq.com/docs/resources/sources/udpWed, 23 Oct 2024 13:40:24 GMT<![CDATA[Ubiquiti]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port int 5140 A UDP port which the agent will listen for syslog messages. listen_ip string "0.0.0.0" An IP address for the agent to bind. Typically 0.0.0.0 for most configurations. timezone timezone "UTC" The timezone to use when parsing timestamps. \_required field_]]>https://observiq.com/docs/resources/sources/ubiquitihttps://observiq.com/docs/resources/sources/ubiquitiThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Tomcat]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports Apache Tomcat versions 9.0.x and 10.x. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname to scrape for JMX metrics. port int 9012 Port to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar.]]>https://observiq.com/docs/resources/sources/tomcathttps://observiq.com/docs/resources/sources/tomcatThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Telemetry Generator]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Description The Telemetry Generator is a source that generates random telemetry data for testing purposes. This source is useful for testing the load and pipeline configurations. Minimum Agent Versions - Introduced: v1.46.0 - Updated to include Host Metrics & Windows Events: v1.47.0 Supported Pipelines - Logs - Metrics - Traces Configuration for all generators Field Default Required Description Payloads per second 1 true The number of payloads this receiver will generate per second. Logs Generator Configuration Field Description Resource Attributes A map of resource attributes to be included in the generated telemetry. Values can be any. Attributes A map of attributes to be included in the generated telemetry. Values can be any. body The body of the log severity The severity of the log message OTLP Replay Generator The OTLP Replay Generator replays JSON-formatted telemetry. It adjusts the timestamps of the telemetry relative the current time, with the most recent record moved to the current time, and the previous records the same relative duration in the past. The text in the OTLP JSON box should be valid JSON-formatted OTLP, such as the JSON created by plog.JSONMarshaler,ptrace.JSONMarshaler, or pmetric.JSONMarshaler. Field Description Type The type of telemetry to replay: logs, metrics, or traces. 
OTLP JSON A string of JSON encoded OTLP telemetry Host Metrics Generator The host metrics generator creates synthetic host metrics, from a list of pre-defined metrics. The metrics resource attributes can be set in the Resource Attributes section of the configuration. Windows Events Generator The Windows Events Generator replays a sample of recorded Windows Event Log data. It has no additional configuration.]]>https://observiq.com/docs/resources/sources/telemetry-generatorhttps://observiq.com/docs/resources/sources/telemetry-generatorTue, 19 Mar 2024 01:26:01 GMT<![CDATA[TCP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port\ int Port to listen on. listen_ip string "0.0.0.0" IP Address to listen on. log_type string tcp Arbitrary for attribute 'log_type'. Useful for filtering between many tcp sources. encoding enum utf-8 The encoding of the data being read. See the list of supported encodings. parse_format enum none Method to use when parsing. Valid values are none, json, and regex. When regex is selected, 'Regex Pattern' must be set. regex_pattern string The regex pattern used when parsing log entries. multiline_line_start_pattern string Regex pattern that matches the beginning of a log entry, for handling multiline logs. multiline_line_end_pattern string Regex pattern that matches the end of a log entry, useful for terminating parsing of multiline logs. parse_timestamp bool false Whether to parse the timestamp from the log entry. timestamp_field string timestamp The field containing the timestamp in the log entry. parse_timestamp_format enum ISO8601 The format of the timestamp in the log entry. Choose a common format, or specify a custom format. Options include "ISO8601", "RFC3339", "Epoch", and "Manual". epoch_timestamp_format enum s The layout of the epoch-based timestamp. Required when parse_timestamp_format is set to "Epoch".. Options include "s", "ms", "us", "ns", "s.ms", "s.us", "s.ns". manual_timestamp_format string '%Y-%m-%dT%H:%M:%S.%f%z' The strptime layout of the timestamp. Used when parse_timestamp_format is set to "Manual". timezone timezone UTC The timezone to use if the Timestamp Format doesn't include a timezone. Otherwise, the timezone in the Timestamp Format will be respected. NOTE: This is also required to parse timezone abbreviations, due to their ambiguity. parse_severity bool false Whether to parse severity from the log entry. severity_field string severity The field containing the severity in the log entry. parse_to string body The field that the log will be parsed to. Some exporters handle logs favorably when parsed to attributes over body and vice versa. enable_tls bool false Whether or not to use TLS. tls_certificate_path string Path to the TLS cert to use for TLS-required connections. tls_private_key_path string Path to the TLS key to use for TLS-required connections. tls_min_version enum "1.2" The minimum TLS version to support. 1.0 and 1.1 should not be considered secure. Valid values include: 1.3, 1.2, 1.1, 1.0. max_log_size string "1Mib" The maximum size of a log entry to read. A log entry will be truncated if it is larger than max_log_size. Protects against reading large amounts of data into memory. 
\_required field_]]>https://observiq.com/docs/resources/sources/tcphttps://observiq.com/docs/resources/sources/tcpTue, 15 Oct 2024 13:59:37 GMT<![CDATA[Syslog]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : protocol\ enum "rfc3164" The RFC protocol to use when parsing incoming syslog. Valid values are rfc3164 or rfc5424. connection_type enum udp The transport protocol to use. Valid values are udp or tcp. data_flow enum high Enable high flow or reduced low flow. listen_port\ int 5140 The port to bind to and receive syslog. Collector must be running as root (Linux) or Administrator (windows) when binding to a port below 1024. listen_ip\ string "0.0.0.0" The IP address to bind to and receive syslog. timezone enum UTC RFC3164 only. The timezone to use when parsing timestamps. enable_octet_counting bool false Whether or not to parse using a trailer character. This is a special character that will be the termination character for syslog messages. This is only applicable to tcp and rfc5424 configurations. non_transparent_framing_trailer enum LF Whether or not to enable octet counting on syslog framing. This framing allows for the transmission of all characters inside a syslog message. This is only applicable to tcp and rfc5424 configurations. enable_mutual_tls bool false Whether or not to use mutual TLS. cert_file string Path to the TLS cert to use for TLS-required connections. key_file string Path to the TLS key to use for TLS-required connections. ca_file string When set, enforces mutual TLS authentication and verifies client certificates. tls_min_version enum "1.2" The minimum TLS version to support. 1.0 and 1.1 should not be considered secure. max_log_size string "1Mib" When using tcp, the maximum size of a log entry to read. A log entry will be truncated if it is larger than max_log_size. Protects against reading large amounts of data into memory. async_readers int 1 When using udp, determines how many workers read from UDP port and push to buffer. Generally this value should remain at 1 unless otherwise directed by observIQ support. async_processors int 3 When using udp, determines how many workers read from buffer (pushed by readers) and process logs before sending downstream. Increasing this value can be useful when the agent is under significant load. max_queue_length int 100 When using udp, determines size of buffer being used by async reader workers. When buffer reaches max number, reader workers will block until buffer has room. Increasing this value can be useful if you anticipate short durations of increased log volume. Generally, you should increase async_processors before increasing this value. \_required field_]]>https://observiq.com/docs/resources/sources/sysloghttps://observiq.com/docs/resources/sources/syslogWed, 23 Oct 2024 13:40:24 GMT<![CDATA[StatsD]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_ip string "0.0.0.0" IP Address to listen on. listen_port int 8125 Port to listen on and receive metrics from statsd clients. aggregation_interval int 60 The aggregation time in seconds that the receiver aggregates the metrics. enable_metric_type bool false Enable the statsd receiver to be able to emit the metric type as a label. 
is_monotonic_counter bool false Set all counter-type metrics received by the statsd receiver as monotonic.]]>https://observiq.com/docs/resources/sources/statsdhttps://observiq.com/docs/resources/sources/statsdThu, 09 Nov 2023 09:26:01 GMT<![CDATA[SQL Query]]><![CDATA[Description Write an SQL query to execute on a compatible database server and generate logs from the result. Supported Platforms Bindplane Agent: v1.40.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Field Description : : Driver Which database driver should be used. Typically indicates which kind of database is being queried. Options include "postgres", "mysql", "snowflake", "sqlserver", "sap-hana", and "oracle". Database Connection Options A driver-specific string specifying how to connect to the database. Usually contains information like host, port, authorization credentials, TLS configuration, and other connection options. Query The SQL query to run. The results of the query are used to generate the telemetry specified below. Log Body Column Defines the name of the column whose value will become the body for the generated log. Tracking Column Used for parameterized queries. Defines the name of the column to retrieve for the parameter value on subsequent query runs. See this OTel Documentation for more information. Tracking Start Value Used for parameterized queries. Defines the initial value of the tracking column to compare against on subsequent query runs. See this OTel Documentation for more information. Collection Interval How frequently to execute queries to retrieve log data. Default is '10s'. Enable Tracking Storage If using tracking values, enable this to persist those values when the collector is restarted. Directory will be "$OIQ_OTEL_COLLECTOR_HOME/storage". See this OTel Documentation for more information. Enable Query Logging Whether or not the collector should log the SQL query with associated parameters when the query is run. Example Configuration In this example, we are connecting to a postgres database using a postgres driver-specific connection string. We are using a simple query that retrieves rows of logs from a table. We are tracking the id column to avoid creating duplicate logs. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/sql-queryhttps://observiq.com/docs/resources/sources/sql-queryWed, 19 Jun 2024 15:05:05 GMT<![CDATA[Splunk (TCP)]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Kubernetes Gateway Configuration Table Parameter Type Default Description : : : : listen_ip string "0.0.0.0" IP Address to listen on. listen_port\ int Port to listen on. log_type string splunk_tcp Arbitrary for attribute 'log_type'. Useful for filtering between many log sources. parse_format enum none Method to use when parsing. Valid values are none, json, and regex. When regex is selected, 'Regex Pattern' must be set. regex_pattern string The regex pattern used when parsing log entries. multiline_line_start_pattern string Regex pattern that matches the beginning of a log entry, for handling multiline logs. multiline_line_end_pattern string Regex pattern that matches the end of a log entry, useful for terminating parsing of multiline logs. parse_timestamp bool false Whether to parse the timestamp from the log entry. timestamp_field string timestamp The field containing the timestamp in the log entry. parse_timestamp_format enum ISO8601 The format of the timestamp in the log entry.
Choose a common format, or specify a custom format. Options include "ISO8601", "RFC3339", "Epoch", and "Manual". epoch_timestamp_format enum s The layout of the epoch-based timestamp. Required when parse_timestamp_format is set to "Epoch".. Options include "s", "ms", "us", "ns", "s.ms", "s.us", "s.ns". manual_timestamp_format string '%Y-%m-%dT%H:%M:%S.%f%z' The strptime layout of the timestamp. Used when parse_timestamp_format is set to "Manual". timezone timezone UTC The timezone to use if the Timestamp Format doesn't include a timezone. Otherwise, the timezone in the Timestamp Format will be respected. NOTE: This is also required to parse timezone abbreviations, due to their ambiguity. parse_severity bool false Whether to parse severity from the log entry. severity_field string severity The field containing the severity in the log entry. parse_to string body The field that the log will be parsed to. Some exporters handle logs favorably when parsed to attributes over body and vice versa. enable_tls bool false Whether or not to use TLS. tls_certificate_path string Path to the TLS cert to use for TLS-required connections. tls_private_key_path string Path to the TLS key to use for TLS-required connections. tls_min_version enum "1.2" The minimum TLS version to support. 1.0 and 1.1 should not be considered secure. Valid values include: 1.3, 1.2, 1.1, 1.0. \_required field_ Kubernetes The Splunk TCP source type supports Kubernetes Gateway agents. Splunk forwarders can send logs to the agents using the clusterIP services. Prerequisites - BindPlane OP v1.46.0 or newer Configuration Add the Splunk TCP source to your Gateway agent configuration. Set "Listen Address" to 0.0.0.0 and Listen Port to 9997. The Splunk forwarders should be configured to forward telemetry to bindplane-gateway-agent.bindplane-agent.svc.cluster.local on port 9997. If the Splunk forwarders live outside of the cluster, you must make the bindplane-gateway-agent service in the bindplane-agent namespace available using TCP ingress or by defining your own service that can receive traffic from outside of the cluster. See the Kubernetes service documentation for more information. Below is an example Splunk forwarder outputs configuration.]]>https://observiq.com/docs/resources/sources/splunk-tcphttps://observiq.com/docs/resources/sources/splunk-tcpWed, 14 Feb 2024 12:40:11 GMT<![CDATA[Splunk (HEC)]]><![CDATA[Description The Splunk HTTP Event Collector source can be used to receive events (logs) from applications that emit events in the Splunk HEC format. Events are converted to OTLP format and can be sent to any destination. The HEC source can be combined with the Splunk HEC Destination. This allows BindPlane's agent to sit in the middle of a Splunk pipeline, giving you the ability to leverage BindPlane's processing capabilities. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Kubernetes Gateway Configuration Table Parameter Type Default Description : : : : listen_port int 8888 Port to listen on. listen_ip string "0.0.0.0" IP Address to listen on. access_token_passthrough string false Whether to preserve incoming access token (Splunk header value) as "com.splunk.hec.access_token" metric resource label. enable_tls bool false Whether or not to use TLS. tls_certificate_path string Path to the TLS cert to use for TLS-required connections. tls_private_key_path string Path to the TLS key to use for TLS-required connections. 
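Once a Splunk HEC source is configured, you can verify it is receiving data by posting a test event to it. Below is a minimal sketch using only the Python standard library; it assumes the source is listening on localhost at the default port 8888 shown above, that the standard HEC event path /services/collector/event is accepted, and that "my-token" is a placeholder token (only meaningful if Access Token Passthrough is enabled).

```python
# Minimal sketch: send one test event to a Splunk HEC source.
# Assumptions: source listening on localhost:8888, standard HEC event path,
# and a placeholder token "my-token" (adjust all three for your environment).
import json
import urllib.request

event = {"event": "hello from a test client", "sourcetype": "manual_test"}
req = urllib.request.Request(
    "http://localhost:8888/services/collector/event",
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Preserved as a resource attribute when Access Token Passthrough is enabled.
        "Authorization": "Splunk my-token",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

If the request succeeds, the event should appear as a log on the source, as described in the Example Configuration section below.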
Example Configuration The HEC source type has two required parameters: - Listen IP Address - Listening Port It is recommended to enable the Access Token Passthrough option if you wish to preserve the Splunk access token header as a resource attribute com.splunk_hec.access_token. Once configured, incoming events will be displayed as logs like this: Kubernetes The Splunk HEC source type supports Kubernetes Gateway agents. Splunk HEC forwarders can send logs to the agents using the clusterIP services. Prerequisites - BindPlane OP v1.49.0 or newer Configuration Add the Splunk HEC source to your Gateway agent configuration. Set "Listen Address" to 0.0.0.0 and Listen Port to 8088. The Splunk forwarders should be configured to forward telemetry to bindplane-gateway-agent.bindplane-agent.svc.cluster.local on port 8088. If the Splunk forwarders live outside of the cluster, you must make the bindplane-gateway-agent service in the bindplane-agent namespace available using TCP ingress or by defining your own service that can receive traffic from outside of the cluster. See the Kubernetes service documentation for more information.]]>https://observiq.com/docs/resources/sources/splunk-hec-sourcehttps://observiq.com/docs/resources/sources/splunk-hec-sourceThu, 23 May 2024 14:03:50 GMT<![CDATA[Solr]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname to scrape for JMX metrics. port int 9012 Port to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar.]]>https://observiq.com/docs/resources/sources/solrhttps://observiq.com/docs/resources/sources/solrThu, 11 Apr 2024 15:26:58 GMT<![CDATA[SAP NetWeaver]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites Supports SAP Netweaver 7.10+. SAP Netweaver requires a valid OS user and password via HTTP basic auth and unix write permission. The receiver must run on the host to execute OS executables in order to collect certificate, rfc, and session metrics. Configuration Table Parameter Type Default Description : : : : hostname\ string localhost The hostname or IP address of the SAP Netweaver system. port int 50013 The TCP port of the SAP Netweaver system (for HTTP, use port 50013. for HTTPS use port 50014). collection_interval int 60 Sets how often (seconds) to scrape for metrics. username\ string The username to use when connecting to SAP Netweaver. password\ string The password to use when connecting to SAP Netweaver. profile string The profile path in the form of /sapmnt/SID/profile/SID_INSTANCE_HOSTNAME to collect sapnetweaver.abap.rfc.count and sapnetweaver.abap.session.count metrics enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication, if mutual TLS is enabled. key_file string A TLS private key used for client authentication, if mutual TLS is enabled. 
\_required field_]]>https://observiq.com/docs/resources/sources/sap-netweaverhttps://observiq.com/docs/resources/sources/sap-netweaverWed, 10 Apr 2024 20:00:01 GMT<![CDATA[SAP HANA]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings "/usr/sap/_/HDB_/_/trace/_.trc" File paths to logs. timezone timezone "UTC" The hostname or IP address of the Elasticsearch API. start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/sap-hanahttps://observiq.com/docs/resources/sources/sap-hanaThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Redis]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports Redis version 6.2. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. file_path strings One-click installer: - \"/var/log/redis/redis_6379.log\" \\nUbuntu / Debian: - \"/var/log/redis/redis-server.log\" \\nsrc: - \"/var/log/redis_6379.log\" \\nCentOS / RHEL: - \"/var/log/redis/redis.log\" \\nSLES: - \"/var/log/redis/default.log\" Path to Redis log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. endpoint string "localhost:6379" The endpoint of the Redis server. transport enum tcp The transport protocol being used to connect to Redis. Valid values are tcp or unix. password string The password used to access the Redis instance; must match the password specified in the requirepass server configuration option. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication, if mutual TLS is enabled. key_file string A TLS private key used for client authentication, if mutual TLS is enabled.]]>https://observiq.com/docs/resources/sources/redishttps://observiq.com/docs/resources/sources/redisThu, 11 Apr 2024 15:26:58 GMT<![CDATA[RabbitMQ]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites Supports RabbitMQ versions 3.8 and 3.9. The RabbitMQ Management Plugin must be enabled by following the official instructions. Also, a user with at least monitoring level permissions must be used for monitoring. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. daemon_log_paths strings - "/var/log/rabbitmq/[email protected]" Path to Rabbitmq log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. username\ string Username used to authenticate. password\ string Password used to authenticate. endpoint string http://localhost:15672 The endpoint of the Rabbitmq server. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication, if mutual TLS is enabled. 
key_file string A TLS private key used for client authentication, if mutual TLS is enabled. \_required field_]]>https://observiq.com/docs/resources/sources/rabbitmqhttps://observiq.com/docs/resources/sources/rabbitmqThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Prometheus]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : job_name\ string The name of the scraper job. Will be set as service.name resource label. static_targets\ strings List of endpoints to scrape. collection_interval int 60 Sets how often (seconds) to scrape for metrics. metrics_path string "/metrics" HTTP Resource path on which to fetch metrics from targets. \_required field_]]>https://observiq.com/docs/resources/sources/prometheushttps://observiq.com/docs/resources/sources/prometheusThu, 09 Nov 2023 09:26:01 GMT<![CDATA[PostgreSQL]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports PostgreSQL versions 10.18 and higher. The monitoring user must be granted SELECT on pg_stat_database. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. postgresql_log_path strings For CentOS / RHEL: - \"/var/log/postgresql/postgresql_.log\" \\nFor SLES: - \"/var/lib/pgsql/data/log/postgresql_.log\" \\nFor Debian / Ubuntu: - \"/var/lib/pgsql/_/data/log/postgresql_.log\" Path to Postgres log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. username\ string Username used to authenticate. password\ string Password used to authenticate. endpoint string localhost:5432 The endpoint of the Postgres server. If transport is set to unix, the endpoint will internally be translated from host:port to /host.s.PGSQL.port. transport enum tcp The transport protocol used to connect to Postgres. Valid values are tcp, or unix. databases strings The list of databases for which the receiver will attempt to collect statistics. If an empty list is provided, the receiver will attempt to collect statistics for all databases. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS. enable_tlsinsecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication, if mutual TLS is enabled. key_file string A TLS private key used for client authentication, if mutual TLS is enabled. \_required field_]]>https://observiq.com/docs/resources/sources/postgresqlhttps://observiq.com/docs/resources/sources/postgresqlFri, 12 Apr 2024 14:10:48 GMT<![CDATA[PgBouncer]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings /var/log/pgbouncer/pgbouncer.log Path to log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/pgbouncerhttps://observiq.com/docs/resources/sources/pgbouncerThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Oracle Database]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Metrics Requirements To collect metrics from OracleDB, a user with SELECT access to the relevant views is required. 
To create a new user with those permissions, run the following SQL script as a user with sufficient permissions connected to the Oracle DB instance as SYSDBA or SYSOPER. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. audit_log_path strings "/u01/app/oracle/product/_/dbhome_1/admin/_/adump/\.aud" File paths to audit logs. enable_alert_log bool true alert_log_path strings "/u01/app/oracle/product/_/dbhome_1/diag/rdbms/_/\_/trace/alert\_\_.log" File paths to alert logs. enable_listener_log bool true listener_log_path strings "/u01/app/oracle/product/_/dbhome_1/diag/tnslsnr/_/listener/alert/log.xml" File paths to listener logs. start_at enum end Start reading the file from the 'beginning' or 'end'. host string localhost Host to scrape metrics from. port int 1521 Port of the host to scrape metrics from. username\ string Database user to run metric queries with. password string Password for user. sid string Site Identifier. One or both of sid or service_name must be specified. service_name string OracleDB Service Name. One or both of sid or service_name must be specified. wallet string OracleDB Wallet file location (must be URL encoded). collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_audit_log bool true Enable to collect audit logs. \_required field_]]>https://observiq.com/docs/resources/sources/oracle-databasehttps://observiq.com/docs/resources/sources/oracle-databaseThu, 11 Apr 2024 15:26:58 GMT<![CDATA[OpenTelemetry (OTLP)]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Kubernetes Node (DaemonSet) Kubernetes Gateway OpenShift 4 Node (DaemonSet) Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Choose Telemetry Type. listen_address string "0.0.0.0" The IP address to listen on. grpc_port int 4317 TCP port to receive OTLP telemetry using the gRPC protocol. The port used must not be the same as the HTTP port. Set to 0 to disable. http_port int 4318 TCP port to receive OTLP telemetry using the HTTP protocol. The port used must not be the same as the gRPC port. Set to 0 to disable. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the OTLP server's TLS certificate. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. Example Configuration The OTLP source type does not have any required fields. By default, the OTLP source will listen on ports 4317/gRPC and 4318/HTTP on all IP addresses without TLS. Kubernetes The OTLP source type supports Kubernetes, OpenShift Node (DaemonSet), and Gateway agents. Applications within the cluster can forward metrics, logs, and traces to the agents using the clusterIP services. Prerequisites - BindPlane OP v1.31.0 or newer Configuration The OTLP source type does not require additional configuration. It can be attached to any Kubernetes, OpenShift Node (DaemonSet), or Gateway configuration. The following endpoints can forward telemetry to the managed Node (DaemonSet) agents. 
Protocol Service Endpoint : : : gRPC clusterIP bindplane-node-agent.bindplane-agent.svc.cluster.local:4317 gRPC headless clusterIP bindplane-node-agent-headless.bindplane-agent.svc.cluster.local:4317 HTTP clusterIP http://bindplane-node-agent.bindplane-agent.svc.cluster.local:4318 The following endpoints can forward telemetry to the managed Gateway agents. Protocol Service Endpoint : : : gRPC clusterIP bindplane-gateway-agent.bindplane-agent.svc.cluster.local:4317 gRPC headless clusterIP bindplane-gateway-agent-headless.bindplane-agent.svc.cluster.local:4317 HTTP clusterIP http://bindplane-gateway-agent.bindplane-agent.svc.cluster.local:4318 It is a matter of preference if you should forward telemetry to the DaemonSet or Gateway agents. It is recommended to use the Gateway agent, if DaemonSet resource consumption is a concern, as the Gateway agent can scale independent of cluster size.]]>https://observiq.com/docs/resources/sources/opentelemetryhttps://observiq.com/docs/resources/sources/opentelemetryThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Okta]]><![CDATA[Supported Platforms Bindplane Agent: v1.59.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : okta_domain\ string The Okta domain to collect logs from (Excluding "https://").Find your Okta Domain api_token\ string An Okta API Token generated from the above Okta domain.How to Create an Okta API Token poll_interval string 1m The rate at which this receiver will poll Okta for logs. This value must be in the range [1 second - 24 hours] and must be a string readable by Golang's time.ParseDuration.Okta recommends between 60s - 300s. \_required field_ Depending on your Okta plan, setting the poll_interval below 10 seconds risks your API Token getting rate limited. You can increase the rate limit allocated to your API Token to minimize the chances of getting rate limited while using a short poll_interval. Okta - Set Token Rate Limits Example Configuration Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/oktahttps://observiq.com/docs/resources/sources/oktaWed, 28 Aug 2024 20:09:59 GMT<![CDATA[Nginx]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports nginx versions 1.18 and 1.20. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. data_flow enum high Enable high flow or reduced low flow. log_format enum default access_log_paths strings - "/var/log/nginx/access.log" Path to NGINX access log file(s). error_log_paths strings - "/var/log/nginx/error.log" Path to NGINX error log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. endpoint\ string "http\://localhost:80/status" The endpoint of the NGINX server. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. 
\_required field_]]>https://observiq.com/docs/resources/sources/nginxhttps://observiq.com/docs/resources/sources/nginxThu, 11 Apr 2024 15:26:58 GMT<![CDATA[MySQL]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports MySQL versions 5.7 and 8.0. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. enable_general_log bool false Enable to read and parse the general log file. general_log_paths strings - \"/var/log/mysql/general.log\" Path to the general log file(s). enable_slow_log bool true Enable to read and parse the slow query log. slow_query_log_paths strings - \"/var/log/mysql/slow.log\" Path to the slow query log file(s). enable_error_log bool true Enable to read and parse the error log. error_log_paths strings For CentOS / RHEL: - \"/var/log/mysqld.log\" \\nFor SLES: - \"/var/log/mysql/mysqld.log\" \\nFor Debian / Ubuntu: - \"/var/log/mysql/error.log\" Path to the error log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. username\ string Username used to authenticate. password\ string Password used to authenticate. endpoint string localhost:3306 The endpoint of the MySQL server. transport enum tcp The transport protocol used to connect to MySQL. database string The database name. If not specified, metrics will be collected for all databases. collection_interval int 60 Sets how often (seconds) to scrape for metrics. \_required field_]]>https://observiq.com/docs/resources/sources/mysqlhttps://observiq.com/docs/resources/sources/mysqlThu, 11 Apr 2024 15:26:58 GMT<![CDATA[MongoDB]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports MongoDB versions 2.6, 3.x, 4.x, and 5.0. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. log_paths strings - \"/var/log/mongodb/mongodb.log_\" \\n - \"/var/log/mongodb/mongod.log_\" Path to Mongodb log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. hosts strings "localhost:27017" List of host:port or unix domain socket endpoints. \\n \\n- For standalone MongoDB deployments this is the hostname and port of the mongod instance.\\n- For replica sets specify the hostnames and ports of the mongod instances that are in the replica set configuration. If the replica_set field is specified, nodes will be auto discovered.\\n- For a sharded MongoDB deployment, please specify a list of the mongos hosts. username string If authentication is required, specify a username with \"clusterMonitor\" permission. password string The password user's password. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. 
key_file string A TLS private key used for client authentication if mutual TLS is enabled.]]>https://observiq.com/docs/resources/sources/mongodbhttps://observiq.com/docs/resources/sources/mongodbThu, 11 Apr 2024 15:26:58 GMT<![CDATA[MongoDB Atlas]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. log_project_name\ string "" Project to collect logs for. collect_audit_logs bool false Enable to collect Audit Logs. It must be enabled on the project, and the API Key must have Organization Owner permissions. log_filter_mode\ enum All Mode of filtering clusters. Either collect from all clusters or specify an inclusive list or exclusive list. Valid values: All, Inclusive, Exclusive log_include_clusters strings Clusters in the project to collect logs from. Applicable if log_filter_mode is Inclusive log_exclude_clusters strings Clusters in the project to exclude from log collection. Applicable if log_filter_mode is Exclusive public_key\ string "" API Public Key with at least Organization Read Only permissions. private_key\ string "" API Private Key. collection_interval int 180 Sets how often (seconds) to scrape for granularity enum PT1M Duration interval between measurement data points. Read more here. Valid values: PT1M, PT5M, PT1H, P1D enable_alerts bool false Enable to collect alerts. alert_collection_mode\ enum poll Method of collecting alerts. In poll mode alerts are scraped from the API. In listen mode a server is set up to listen for incoming alerts. Valid values: poll, listen. alert_project_name\ string "" Project to collect alerts from. Applicable if alert_collection_mode is poll. alert_filter_mode\ enum All Mode of filtering clusters. Either collect from all clusters or specify an inclusive list or exclusive list. Applicable if alert_collection_mode is poll. Valid values: All, Inclusive, Exclusive. alert_include_clusters strings Clusters in the project to collect alerts from. Applicable if log_filter_mode is Inclusive and alert_collection_mode is poll. alert_exclude_clusters strings Clusters in the project to exclude from the alert collection. Applicable if log_filter_mode is Exclusive and alert_collection_mode is poll. page_size int 100 The number of alerts to collect per API request. Applicable if alert_collection_mode is poll. max_pages int 10 The limit of how many pages of alerts will be requested per project. Applicable if alert_collection_mode is poll. listen_secret\ string "" Secret key configured for push notifications. Applicable if alert_collection_mode is listen. listen_endpoint\ string "0.0.0.0:4396" Local "ip:port" to bind to, to listen for incoming webhooks. Applicable if alert_collection_mode is listen. enable_listen_tls bool false Enable TLS for alert webhook server. Applicable if alert_collection_mode is listen. listen_tls_key_file string "" Local path to the TLS key file. Applicable if enable_listen_tls is true and alert_collection_mode is listen. listen_tls_cert_file string "" Local path to the TLS cert file. Applicable if enable_listen_tls is true and alert_collection_mode is listen. 
\_required field_]]>https://observiq.com/docs/resources/sources/mongodb-atlashttps://observiq.com/docs/resources/sources/mongodb-atlasThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Microsoft SQL Server]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Windows Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. start_at enum end Start reading the file from the 'beginning' or 'end'. collection_interval int 60 Sets how often (seconds) to scrape for metrics.]]>https://observiq.com/docs/resources/sources/microsoft-sql-serverhttps://observiq.com/docs/resources/sources/microsoft-sql-serverThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Microsoft IIS]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Windows Prerequisites This source supports IIS versions 8.5 and 10.0. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. file_path strings ["C:/inetpub/logs/LogFiles/W3SVC_//_.log"] File or directory paths to tail for logs. exclude_file_log_path strings File or directory paths to exclude. timezone enum UTC RFC3164 only. The timezone to use when parsing timestamps. start_at enum end Start reading the file from the 'beginning' or 'end'. collection_interval int 60 Sets how often (seconds) to scrape for metrics. Metrics Metric Unit Description : : : iis.connection.active {connections} Number of active connections. iis.connection.anonymous {connections} Number of connections established anonymously. iis.connection.attempt.count {attempts} Total number of attempts to connect to the server. iis.network.blocked By Number of bytes blocked due to bandwidth throttling. iis.network.file.count {files} Number of transmitted files. iis.network.io By Total amount of bytes sent and received. iis.request.count {requests} Total number of requests of a given type. iis.request.queue.age.max ms Age of oldest request in the queue. iis.request.queue.count {requests} Current number of requests in the queue. iis.request.rejected {requests} Total number of requests rejected. iis.thread.active {threads} Current number of active threads. iis.uptime s The amount of time the server has been up.]]>https://observiq.com/docs/resources/sources/microsoft-iishttps://observiq.com/docs/resources/sources/microsoft-iisFri, 12 Apr 2024 14:10:48 GMT<![CDATA[Microsoft 365]]><![CDATA[Note: Instructions for configuring your Microsoft 365 tenant are included at the bottom of this document, but here are a few important notes to acknowledge. After configuring this source, both metrics and logs will take some time to become available. - Metrics are reported by the API every 24 hours, with each report containing the metrics that were created two days prior. This means metrics generated on June 25 will appear to be from June 27. The receiver scrapes every hour, but note that data points collected within the same 24-hour reporting interval will be duplicates. - Logs require an extra step the first time the tenant is configured for logs. Instructions are included at the bottom of this document. After this configuration step, it will take up to 60 minutes for logs to be enabled - do not run the receiver until this finishes. It may take up to 12 hours for logs to be made available by the API, after they have been enabled. After that point, logs will typically take 0-3 hours to be reported after being generated, but there is no guarantee given by the API. 
Finally, note that the client secret used to connect will expire (recommended 180 days) and needs to be re-generated, and your source will need to be reconfigured with the new client secret. The initial delay for logs will not be repeated upon reconfiguration. Supported Platforms Platform Metrics Logs Traces Linux Windows macOS Prerequisites - Created instance of Microsoft 365 with the following subscriptions: Microsoft 365 Business Basic, Microsoft 365 E5 Compliance, and Microsoft 365 E3 (Works with the respective "upgraded" versions as well.) - Access to an Admin account for the instance of 365 to be monitored. Metrics Metric Unit Description Attribute Values m365.onedrive.files.active.count {files} The number of active files across the OneDrive in the last seven days. m365.onedrive.files.count {files} The number of total files across the OneDrive for the last seven days. m365.onedrive.user_activity.count {users} The number of users who have interacted with a OneDrive file, by action in the last seven days. activity: view_edit, synced, internal_share, external_share m365.outlook.app.user.count {users} The number of unique users per app over the period of time in the organization Outlook in the last seven days. app: pop3, imap4, smtp, windows, mac, web, mobile, other_mobile m365.outlook.email_activity.count {emails} The number of email actions by members over the period of time in the organization Outlook. activity: read, sent, received m365.outlook.mailboxes.active.count {mailboxes} The number of mailboxes that have been active each day in the organization for the last seven days. m365.outlook.quota_status.count {mailboxes} The number of mailboxes in the various quota statuses over the period of time in the org in the last seven days. state: under_limit, warning, send_prohibited, send_receive_prohibited, indeterminate m365.outlook.storage.used By The amount of storage used in Outlook by the organization in the last seven days. m365.sharepoint.files.active.count {files} The number of active files across all sites in the last seven days. m365.sharepoint.files.count {files} The number of total files across all sites in the last seven days. m365.sharepoint.pages.unique.count {views} The number of unique views of pages across all sites in the last seven days. m365.sharepoint.pages.viewed.count {pages} The number of unique pages viewed across all sites in the last seven days. m365.sharepoint.site.storage.used By The amount of storage used by all sites across SharePoint in the last seven days. m365.sharepoint.sites.active.count {sites} The number of active sites across SharePoint in the last seven days. m365.teams.calls.count {calls} The number of MS Teams calls from users in the organization in the last seven days. m365.teams.device_usage.users {users} The number of unique users by device/platform that used Teams in the last seven days. device: Android, iOS, Mac, Windows, Chrome OS, Linux, Web m365.teams.meetings.count {meetings} The number of MS Teams meetings for users in the organization in the last seven days. m365.teams.messages.private.count {messages} The number of MS Teams private-messages sent by users in the organization in the last seven days. m365.teams.messages.team.count {messages} The number of MS Teams team-messages sent by users in the organization in the last seven days. Source Configuration Table Parameter Type Default Description telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. 
tenant_id\ string Identifies the instance of Microsoft 365 to be monitored. client_id\ string Identifier this receiver will use when monitoring. client_secret\ string The private key this receiver will use when monitoring must belong to the given Client ID. poll_interval duration 5m Sets how often (minutes) to collect logs. collection_interval duration 1h Sets how often (hours) to scrape for metrics. \ _required field_ Example Source Configuration Once running on an agent, metrics will look like this: And logs will look like this: Configuring Microsoft 365 The steps below outline how to configure Microsoft 365 to allow the receiver to collect metrics from the tenant to be monitored. 1. Login to Azure: Log in to Microsoft Azure under an Admin account for the instance of 365 to be monitored. 2. Register the receiver in Azure AD: Navigate to Azure Active Directory. Then go to "App Registrations" and select "New Registration". Give the app a descriptive name like "365 Receiver". For "Supported account types", select the Single Tenant option and leave the Redirect URL empty. 3. Add API Permissions: Select "View API Permissions" beneath the general application info and click "Add Permissions". The permissions needed for metrics and logs differ, so whichever monitoring is necessary, the respective permissions are outlined below. - Metrics: Select "Microsoft Graph", then "Application Permissions". Find the "Reports" tab and select "Reports.Read.All". Click "Add Permissions" at the bottom of the panel. - Logs: Select "Office 365 Management APIs", then "Application Permissions". Now select the "ActivityFeed.Read", "ActivityFeed.ReadDlp", and "ServiceHealth.Read" permissions. Click "Add Permissions" at the bottom of the panel. 4. Grant Admin Consent: Select the "Grant admin consent for {organization}" button and confirm the pop-up. This will allow the application to access the data returned by the Microsoft Graph and Office 365 Management APIs. 5. Generate Client Secret: Select the "Certificates & secrets" tab in the left panel. Under the "Client Secrets" tab, select "New Client Secret." Give it a meaningful description and select the recommended period of 180 days. Save the text in the "Value" column since this is the only time the value will be accessible. The receiver needs be reconfigured with a newly generated Client Secret once the initial one expires. 6. Save Client ID and Tenant ID values: You will also need the "client_id" value found on the information page for the application that was created. The value will be listed as "Application (client) id." You will also need the tenant value, which will be listed as "Directory (tenant) id." Save these values for later. The first time an instance of Microsoft 365 is set up for monitoring, an extra step for collecting logs is required. 1. Log into Microsoft Purview Compliance Portal with an admin account. 2. Navigate to "Solutions" then "Audit". 3. If auditing is not turned on for your organization, a banner is displayed prompting you to start recording user and admin activity. 4. Select "Start recording user and admin activity". 
It will take up to 60 minutes for the change to take effect, so until that point, do not run the receiver with logs turned on, or else it will fail.]]>https://observiq.com/docs/resources/sources/microsoft-365https://observiq.com/docs/resources/sources/microsoft-365Thu, 23 May 2024 14:03:50 GMT<![CDATA[macOS]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. enable_system_log bool true Enable to collect macOS system logs. system_log_path string "/var/log/system.log" The absolute path to the System log. enable_install_log bool true Enable to collect macOS install logs. install_log_path string "/var/log/install.log" The absolute path to the Install log. start_at enum end Start reading the file from the 'beginning' or 'end'. host_collection_interval int 60 Sets how often (seconds) to scrape for metrics.]]>https://observiq.com/docs/resources/sources/macoshttps://observiq.com/docs/resources/sources/macosThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Logstash]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Directions & Caveats For clear directions on use, and the caveats on how to configure Logstash, see: Using Logstash with BindPlane OP Configuration Table Parameter Type Default Description : : : : listen_port\ int 2255 Port to listen on. listen_ip string "0.0.0.0" IP Address to listen on. log_type string logstash Arbitrary for attribute 'log_type'. Useful for filtering between many logstash sources. parse_format enum none Method to use when parsing. Valid values are none, json, and regex. When regex is selected, 'Regex Pattern' must be set. regex_pattern string The regex pattern used when parsing log entries. multiline_line_start_pattern string Regex pattern that matches the beginning of a log entry, for handling multiline logs. multiline_line_end_pattern string Regex pattern that matches the end of a log entry, useful for terminating parsing of multiline logs. parse_timestamp bool false Whether to parse the timestamp from the log entry. timestamp_field string timestamp The field containing the timestamp in the log entry. parse_timestamp_format enum ISO8601 The format of the timestamp in the log entry. Choose a common format, or specify a custom format. Options include "ISO8601", "RFC3339", "Epoch", and "Manual". epoch_timestamp_format enum s The layout of the epoch-based timestamp. Required when parse_timestamp_format is set to "Epoch".. Options include "s", "ms", "us", "ns", "s.ms", "s.us", "s.ns". manual_timestamp_format string '%Y-%m-%dT%H:%M:%S.%f%z' The strptime layout of the timestamp. Used when parse_timestamp_format is set to "Manual". timezone timezone UTC The timezone to use if the Timestamp Format doesn't include a timezone. Otherwise, the timezone in the Timestamp Format will be respected. NOTE: This is also required to parse timezone abbreviations, due to their ambiguity. parse_severity bool false Whether to parse severity from the log entry. severity_field string severity The field containing the severity in the log entry. parse_to string body The field that the log will be parsed to. Some exporters handle logs favorably when parsed to attributes over body and vice versa. preserve_original bool false When this option is set to true, the original event will be preserved under the attributes enable_tls bool false Whether or not to use TLS. 
tls_certificate_path string Path to the TLS cert to use for TLS-required connections. tls_private_key_path string Path to the TLS key to use for TLS-required connections. tls_min_version enum "1.2" The minimum TLS version to support. 1.0 and 1.1 should not be considered secure. Valid values include: 1.3, 1.2, 1.1, 1.0. \_required field_]]>https://observiq.com/docs/resources/sources/logstashhttps://observiq.com/docs/resources/sources/logstashTue, 05 Dec 2023 18:59:32 GMT<![CDATA[Kubernetes Prometheus Node]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Kubernetes DaemonSet OpenShift 4 DaemonSet Configuration Table Field Description : : Cluster Name\ The cluster name that will be added as the k8s.cluster.name resource attribute. Relabel Configs Enable or disable relabel configurations. See Relabel Configs. Scrapers Enable or disable HTTP and HTTPS scrapers. Collection Interval Sets how often (seconds) to scrape for metrics. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate the exporters TLS certificate. See Transport Layer Security. TLS Client Certificate File A TLS certificate used for client authentication if mutual TLS is enabled. See Transport Layer Security. TLS Client Private Key File A TLS private key used for client authentication if mutual TLS is enabled. See Transport Layer Security. \_required field_ Relabel Configs Relabel configs are used to control how detected pods are scraped. There are four options. prometheus.io/scrape When enabled, only pods with the prometheus.io/scrape: "true" annotation will be considered for scraping. This option is enabled by default, to prevent the receiver from scraping all pods. prometheus.io/path The default HTTP path is /metrics. The path can be overridden by enabling this option and configuring the prometheus.io/path annotation. prometheus.io/scheme When this option is enabled, the HTTP scraper (configured with the "Scrapers" option) will only scrape pods that have the prometheus.io/scheme: "http" annotation set. Similarly, the HTTPS scraper will only scrape pods that have the prometheus.io/scheme: "https" annotation set. This option is recommended when using both HTTP and HTTPS scrapers. prometheus.io/job-name When this option is enabled, the service.name resource attribute will be set to the value of the pod annotation prometheus.io/job-name. This allows you to dynamically set service.name, which defaults to kubernetes-pod-http and kubernetes-pod-https, depending on which scraper is in use. Example Configuration When using Relabel configs, make sure to annotate your pods. Pod annotations are set at spec.template.metadata.annotations, not to be confused with metadata.annotations. Updating pod annotations will cause your pods to be re-deployed. Transport Layer Security When using TLS, if you need to configure a TLS certificate authority or a client key pair, update your BindPlane Agent YAML manifest to include a volumeMount that will mount your TLS files into the container. You can find documentation for mounting secrets into a container here. Example Configuration By default, the Prometheus source is configured to use the HTTP scraper and the prometheus.io/scrape relabel config is enabled. This means the receiver will only scrape pods that have prometheus.io/scrape: "true" set in their annotations. A cluster name is required, and will be set as k8s.cluster.name. 
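To illustrate the relabel configs described above, here is a minimal sketch of the pod annotations on a Deployment. The application name, image, metrics port, and path are hypothetical placeholders rather than required values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                         # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        prometheus.io/scrape: "true"        # required for scraping when the prometheus.io/scrape relabel config is enabled
        prometheus.io/path: "/metrics"      # optional override of the default metrics path
        prometheus.io/scheme: "http"        # match the scraper in use (http or https)
        prometheus.io/job-name: "example-app"   # optional; sets service.name when that relabel config is enabled
    spec:
      containers:
        - name: example-app
          image: example/app:latest         # hypothetical image
          ports:
            - containerPort: 9090           # hypothetical metrics port

The sketch only covers pod annotations; the required Cluster Name is set on the source itself.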
You can use a placeholder value if you intend to use Resource Detection or Add Fields processors. See Dynamic Cluster Name for more details. Once running on an agent, some notable resource attributes are: - k8s.cluster.name - k8s.node.name - k8s.container.name - k8s.pod.name - service.name: The Prometheus job name]]>https://observiq.com/docs/resources/sources/kubernetes-prometheus-nodehttps://observiq.com/docs/resources/sources/kubernetes-prometheus-nodeWed, 30 Oct 2024 16:06:44 GMT<![CDATA[Kubernetes Kubelet Metrics]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Kubernetes DaemonSet OpenShift 4 DaemonSet Configuration Table Parameter Type Default Description : : : : cluster_name\ string The cluster name that will be added as the k8s.cluster.name resource attribute. metric_groups enums all Metric groups to collect. Supported options include node, pod, container, and volume. collection_interval int 60s Sets how often (seconds) to scrape for metrics. \_required field_ Example Configuration The Kubernetes Kubelet metrics source has one required parameter: - Cluster Name: The name of the cluster, which will be inserted as the k8s.cluster.name resource attribute Once running on an agent, metrics will look like this:]]>https://observiq.com/docs/resources/sources/kubernetes-kubelet-metricshttps://observiq.com/docs/resources/sources/kubernetes-kubelet-metricsThu, 13 Jun 2024 12:58:52 GMT<![CDATA[Kubernetes Container Logs]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Kubernetes DaemonSet OpenShift 4 DaemonSet Configuration Table Field Default Description : : : Cluster Name\ The cluster name that will be added as the k8s.cluster.name resource attribute. Log Source File Where to read logs from. Generally, this is file. The file source supports Docker json-file and containerd/CRI-O log formats. Options include file and journald. File Path(s) /var/log/containers/\.log When log_source is file. File or directory paths to tail for logs. Defaults to all container logs. Journald Path\ When log_source is journald. The directory containing Journald's log files. Exclude File Path /var/log/containers/bindplane--agent- File or directory paths to exclude. Generally, the collector's own log should be excluded. Start At end Start reading the logs from the 'beginning' or 'end'. Recombine Logs Options for configuring multi-line logging. See Multi Line Logging. \_required field_ Example Configuration The Kubernetes Container logs source has one required parameter: - Cluster Name: The name of the cluster, which will be inserted as the k8s.cluster.name resource attribute Once running on an agent, logs will look like this: Multi Line Logging Multi-line logging is useful for re-assembling multi-line logs into a single log entry. Multi-line log re-assembly requires that all logs emitted by the application are consistent in structure. For example, the logs must start or end in a consistent way, in order to reliably match on the beginning or end of the log. If your application has inconsistent logging, multi-line log re-assembly will behave in irregular ways, such as combining two unique logs into one. Multi-line logging is supported by configuring a selector, selector match expression, and a recombine match expression. Field Description Selector The OTTL path to match on. Selector Match Expression A regular expression used to match the selector. Recombine Type Whether or not to recombine logs by matching the first or last line in the log.
Recombine With The delimiter used to recombine logs. Defaults to a single space or newline character. Recombine Match Expression The regular expression used to recombine the multi-line log. Example In this example, there are two Deployments. One is logging XML while the other is logging JSON. The XML logs are a combination of multi-line and single-line logs. Each log has a timestamp prefix indicating the start of the log. The JSON logs are a combination of multi-line and single-line logs. Each log has a trailing } without a comma, indicating the end of the log. Multi-line logging can be configured by matching on the First Entry of the XML logs and the Last Entry of the JSON logs. The k8s.pod.name resource attribute is used to select the XML and JSON pods. Once configured, logs will be re-assembled into a single line.]]>https://observiq.com/docs/resources/sources/kubernetes-container-logshttps://observiq.com/docs/resources/sources/kubernetes-container-logsWed, 17 Jul 2024 13:47:01 GMT<![CDATA[Kubernetes Cluster Metrics]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Kubernetes Deployment OpenShift 4 Deployment Configuration Table Parameter Type Default Description : : : : cluster_name\ string The cluster name that will be added as the k8s.cluster.name resource attribute. node_conditions_to_report enums all Enable or disable which node conditions should be included in metrics collection. allocatable_types_to_report enums all Allocatable resource types to report. collection_interval int 60s How often to collect metrics from the Kubernetes API. distribution enum kubernetes The Kubernetes distribution. Used to enable support for OpenShift quota metrics. \_required field_ Example Configuration The Kubernetes Cluster metrics source has one required parameter: - Cluster Name: The name of the cluster, which will be inserted as the k8s.cluster.name resource attribute - If running on OpenShift, you can select OpenShift as the Kubernetes Distribution in order to collect quota metrics. Once running on an agent, metrics will look like this:]]>https://observiq.com/docs/resources/sources/kubernetes-cluster-metricshttps://observiq.com/docs/resources/sources/kubernetes-cluster-metricsThu, 13 Jun 2024 12:58:52 GMT<![CDATA[Kubernetes Cluster Events]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Kubernetes Deployment OpenShift 4 Deployment Configuration Table Parameter Type Default Description : : : : cluster_name\ string The cluster name that will be added as the k8s.cluster.name resource attribute. namespaces strings Namespaces to collect events from. Defaults to all namespaces. \_required field_ Example Configuration The Kubernetes Events source has two options: - Cluster Name: The name of the cluster, which will be inserted as the k8s.cluster.name resource attribute - Namespaces: List of namespaces to collect events from. This is optional and defaults to all namespaces. Once running on an agent, events will look like this:]]>https://observiq.com/docs/resources/sources/kubernetes-cluster-eventshttps://observiq.com/docs/resources/sources/kubernetes-cluster-eventsThu, 13 Jun 2024 12:58:52 GMT<![CDATA[Kafka OTLP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table No encoding field for metric events is available because the only option, otlp_proto, is set by default. Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Choose Telemetry Type.
protocol_version enum "2.0.0" The Kafka protocol version to use when communicating with brokers. Valid values are: "2.2.1", "2.2.0", "2.0.0", or "1.0.0". brokers strings localhost:9092 List of brokers to connect and subscribe to for metrics, traces, and logs. group_id string otel-collector Consumer group to consume messages from. client_id string otel-collector The consumer client ID that the receiver will use. log_topic string otlp_logs The topic name for subscribing to log events. log_encoding enum otlp_proto The encoding of the log event pulled from the Kafka topic. otlp_proto, raw, text, or json metric_topic string otlp_metrics The topic name for subscribing to metric events. trace_topic string otlp_spans The topic name for subscribing to trace events. trace_encoding enum otlp_proto The encoding of the trace event pulled from the Kafka topic. otlp_proto, jaeger_proto, jaeger_json, zipkin_proto, zipkin_json, or zipkin_thrift enable_auth bool false auth_type enum basic basic, sasl, or kerberos basic_username string basic_password string sasl_username string sasl_password enum sasl_mechanism string SCRAM-SHA-256 SCRAM-SHA-256, SCRAM-SHA-512, or PLAIN kerberos_service_name string kerberos_realm string kerberos_config_file string /etc/krb5.conf kerberos_auth_type enum keytab keytab or basic kerberos_keytab_file string /etc/security/kafka.keytab kerberos_username string kerberos_password string]]>https://observiq.com/docs/resources/sources/kafka-otlphttps://observiq.com/docs/resources/sources/kafka-otlpThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Kafka Node]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. enable_server_log bool true server_log_path strings /home/kafka/kafka/logs/server.log File paths to tail for server logs. enable_controller_log bool true controller_log_path strings /home/kafka/kafka/logs/controller.log File paths to tail for controller logs. enable_state_change_log bool true state_change_log_path strings /home/kafka/kafka/logs/state-change.log File paths to tail for stage change logs. enable_log_cleaner_log bool true log_cleaner_log_path strings /home/kafka/kafka/logs/state-cleaner.log File paths to tail for log cleaner logs. start_at enum end Start reading the file from the 'beginning' or 'end'. address string localhost IP address or hostname to scrape for JMX metrics. port int 9999 Port to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar. collection_interval int 60 Sets how often (seconds) to scrape for metrics.]]>https://observiq.com/docs/resources/sources/kafka-nodehttps://observiq.com/docs/resources/sources/kafka-nodeThu, 11 Apr 2024 15:26:58 GMT<![CDATA[Kafka Cluster]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : cluster_name\ string Friendly name used for the resource kafka.cluster.name. protocol_version enum "2.0.0" The Kafka protocol version to use when communicating with brokers. Valid values are: "2.2.1", "2.2.0", "2.0.0", or "1.0.0". brokers strings localhost:9092 List of brokers to scrape for metrics. client_id string otel-metrics-receiver The consumer client ID that the receiver will use. collection_interval int 60 Sets how often (seconds) to scrape for metrics. 
enable_auth bool false auth_type enum basic basic, sasl, or kerberos basic_username string basic_password string sasl_username string sasl_password string sasl_mechanism enum SCRAM-SHA-256 SCRAM-SHA-256, SCRAM-SHA-512, or PLAIN kerberos_service_name string kerberos_realm string kerberos_config_file string /etc/krb5.conf kerberos_auth_type enum keytab keytab or basic kerberos_keytab_file string /etc/security/kafka.keytab kerberos_username string kerberos_password string \_required field_]]>https://observiq.com/docs/resources/sources/kafka-clusterhttps://observiq.com/docs/resources/sources/kafka-clusterThu, 09 Nov 2023 09:26:01 GMT<![CDATA[JVM]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports Java versions 11 and 16. Configuration Table Parameter Type Default Description : : : : collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname to scrape for JMX metrics. port int 9999 Port to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar.]]>https://observiq.com/docs/resources/sources/jvmhttps://observiq.com/docs/resources/sources/jvmThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Journald]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Configuration Table Parameter Type Default Description : : : : units strings Service Units to filter on. If not set, all units will be read. directory string The directory containing Journald's log files. If not set, /run/log/journal and /run/journal will be used. priority enum "info" Set log level priority. Valid values are: trace, info, warn, error, and fatal. start_at enum end Start reading the journal from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/journaldhttps://observiq.com/docs/resources/sources/journaldThu, 09 Nov 2023 09:26:01 GMT<![CDATA[JBoss]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings /usr/local/JBoss/EAP-_/_/log/server.log File paths to tail for logs. timezone timezone "UTC" The timezone to use when parsing timestamps. start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/jbosshttps://observiq.com/docs/resources/sources/jbossThu, 09 Nov 2023 09:26:01 GMT<![CDATA[HTTP Log]]><![CDATA[Prerequisites The log source should be able to send its logs to an endpoint, commonly called "LogPush". The log source should also be able to reach the agent over the network, so any firewall rules must be adjusted to allow TCP and HTTP traffic to flow to the configured IP address and port. Request Format The request body should be JSON for requests made to the HTTP log receiver. Supported Platforms Bindplane Agent: v1.39.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Field Description : : Listen Address Specifies what IP address the receiver should listen on for logs being sent as POST requests. HTTP Port Specifies what port the receiver should use for listening for logs. Path Specifies a path the receiver should be listening to for logs. Useful when the log source also sends other data to the endpoint, such as metrics. Enable TLS Option to configure the receiver's HTTP server to use TLS. Cert File Location Local path to the TLS cert file. Key File Location Local path to the TLS key file. 
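To illustrate the request format described above, here is a hypothetical JSON body such as a LogPush-style source might POST to the configured listen address, port, and path. The field names are illustrative only and are not a schema required by this source; match whatever your log source actually sends:

[
  {
    "timestamp": "2024-06-01T15:04:05Z",
    "severity": "info",
    "message": "example log line",
    "host": "web-01"
  }
]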
Example Configuration Basic Configuration For basic configuration, only the listen_address and http_port parameters are needed. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/httphttps://observiq.com/docs/resources/sources/httpThu, 23 May 2024 14:03:50 GMT<![CDATA[HTTP Check]]><![CDATA[Supported Platforms Bindplane Agent: v1.40.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Field Description : : Hostname Specifies the hostname or IP address of the endpoint you want to check. HTTP Port Specifies the port of the endpoint you want to check. Path Specifies a path on the URL to perform the check on. Method Option to configure the HTTP request method to use on the check. Headers Option to configure the HTTP request headers to be used on the check. Enable TLS Option to use TLS when connecting to the endpoint. Mutual TLS Option to enable TLS mutual authentication. Skip TLS Certificate Verification Option to skip TLS certificate verification. TLS Certificate Authority File Local path to the TLS certificate authority file. TLS Client Certificate File Local path to the TLS cert file. TLS Client Private Key File Local path to the TLS key file. Initial Delay Specifies how long the source should wait (seconds) before conducting the check. Collection Interval Specifies how often (seconds) to scrape for metrics. Example Configuration Basic Configuration For basic configuration, only the hostname and port parameters are needed. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/http-checkhttps://observiq.com/docs/resources/sources/http-checkThu, 23 May 2024 14:03:50 GMT<![CDATA[Host Metrics]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Field Description : : Collection Interval Sets how often (seconds) to scrape for metrics. Load Metrics Enable to collect load metrics. Compatible with all platforms. Filesystem Metrics Enable to collect filesystem metrics. Compatible with all platforms. Memory Metrics Enable to collect memory metrics. Compatible with all platforms. Network Metrics Enable to collect network metrics. Compatible with all platforms. Paging Metrics Enable to collect paging metrics. Compatible with all platforms. CPU Metrics Enable to collect CPU metrics. Compatible with Linux and Windows. Disk Metrics Enable to collect disk metrics. Compatible with Linux and Windows. Processes Metrics Enable to collect process count metrics. Compatible with Linux only. Process Metrics Enable to collect individual process metrics. Compatible with Linux and Windows. The collector must be running as root (Linux) and Administrator (Windows). Mute Process Name Errors Enable to prevent process name errors from being logged. Mute Process EXE Errors Enable to prevent process exe lookup errors from being logged. Mute Process IO Errors Enable to prevent process input/output errors from being logged. Mute Process Username Errors Enable to prevent process username lookup errors from being logged.
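For orientation, the options above map closely onto the scrapers of the OpenTelemetry hostmetrics receiver that the BindPlane Agent is built on. The following is a rough, illustrative sketch of equivalent receiver YAML with every scraper enabled, not the exact configuration BindPlane renders:

receivers:
  hostmetrics:
    collection_interval: 60s
    scrapers:
      load:            # Load Metrics
      filesystem:      # Filesystem Metrics
      memory:          # Memory Metrics
      network:         # Network Metrics
      paging:          # Paging Metrics
      cpu:             # CPU Metrics
      disk:            # Disk Metrics
      processes:       # Processes (process count) Metrics
      process:         # Per-process metrics; requires root (Linux) or Administrator (Windows)
        mute_process_name_error: false   # Mute Process Name Errors
        mute_process_exe_error: false    # Mute Process EXE Errors
        mute_process_io_error: false     # Mute Process IO Errors
        mute_process_user_error: false   # Mute Process Username Errors

If you need receiver options that are not exposed by this source, the Custom source described later in this section accepts receiver YAML like this directly.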
Metrics Metric Supported OS Description : : : cpu All except macOS CPU utilization metrics disk All except macOS Disk I/O metrics load All CPU load metrics filesystem All File System utilization metrics memory All Memory utilization metrics network All Network interface I/O metrics & TCP connection metrics paging All Paging/Swap space utilization and I/O metrics processes Linux Process count metrics process Linux & Windows Per process CPU, Memory, and Disk I/O metrics]]>https://observiq.com/docs/resources/sources/host-metricshttps://observiq.com/docs/resources/sources/host-metricsTue, 09 Jan 2024 15:41:26 GMT<![CDATA[HBase]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar. enable_master_jmx bool true Enable to scrape the master server's JMX port. master_jmx_port int 10101 Master server's JMX Port. enable_region_jmx bool true Enable to scrape the region server's JMX port. region_jmx_port int 10102 Region server's JMX Port. enable_master_log bool true Enable to read master logs. master_log_path strings "/usr/local/hbase/logs/hbase-master-\.log" File paths to tail for master logs. enable_region_log bool true Enable to read region server logs. region_log_path strings "/usr/local/hbase/logs/hbase-regionserver-\.log" File paths to tail for region server logs. enable_zookeeper_log bool false Enable to read zookeeper logs. zookeeper_log_path strings "/usr/local/hbase/logs/hbase-zookeeper-\.log" File paths to tail for zookeeper logs. start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/hbasehttps://observiq.com/docs/resources/sources/hbaseTue, 16 Apr 2024 18:12:20 GMT<![CDATA[HAProxy]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings /var/log/haproxy/haproxy.log Log File paths to tail for logs. start_at enum true Start reading logs from the 'beginning' or 'end'. timezone timezone "UTC" The timezone to use when parsing timestamps.]]>https://observiq.com/docs/resources/sources/haproxyhttps://observiq.com/docs/resources/sources/haproxyThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Hadoop]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics. address string localhost IP address or hostname to scrape for JMX metrics. port int 8004 Port to scrape for JMX metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar. enable_datanode_logs bool true Enable to collect datanode logs. datanode_log_path strings "/usr/local/hadoop/logs/hadoop-_-datanode-_.log" File paths to tail for datanode logs. enable_resourcemgr_logs bool true Enable to collect resource manager logs. resourcemgr_log_path strings "/usr/local/hadoop/logs/hadoop-_-resourcemgr-_.log" File paths to tail for resource manager logs. 
enable_namenode_logs bool true Enable to collect namenode logs. namenode_log_path strings "/usr/local/hadoop/logs/hadoop-_-namenode-_.log" File paths to tail for namenode logs. start_at enum end Start reading the file from the 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/hadoophttps://observiq.com/docs/resources/sources/hadoopTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Fluent Forward]]><![CDATA[Description ​The Fluent Forward source type can receive logs from Fluentd and Fluentbit agents or software capable of emitting logs using the Fluent Forward Protocol. ​ The Fluent Forward source type is useful for integrating Open Telemetry-based collectors into an existing environment where Fluentd or Fluentbit is the primary collection system for logs. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_address string 0.0.0.0 The IP address to listen on and receive logs from Fluent Forward capable agents. port int 24224 TCP port to listen on and receive logs from Fluent Forward capable agents. See the Open Telemetry Fluent Forward Receiver documentation for more information.]]>https://observiq.com/docs/resources/sources/fluentforwardhttps://observiq.com/docs/resources/sources/fluentforwardThu, 09 Nov 2023 09:26:01 GMT<![CDATA[File]]><![CDATA[This source offers a delete_after_read option that can be hazardous. When this option is combined with file globbing, it will delete every file that matches the globbing pattern. Use with caution and care. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path\ strings File or directory paths to tail for logs. exclude_file_path strings "" File or directory paths to exclude. log_type string "file" A friendly name that will be added to each log entry as an attribute. parse_format enum none Method to use when parsing. Valid values are none, json, and regex. When regex is selected, 'Regex Pattern' must be set. regex_pattern string The regex pattern that is used when parsing log entries. multiline_line_start_pattern string Regex pattern that matches the beginning of a log entry for handling multiline logs. multiline_line_end_pattern string Regex pattern that matches the end of a log entry, useful for terminating parsing of multiline logs. parse_timestamp bool false Whether to parse the timestamp from the log entry. timestamp_field string timestamp The field containing the timestamp in the log entry. parse_timestamp_format enum ISO8601 The format of the timestamp in the log entry. Choose a common format, or specify a custom format. Options include "ISO8601", "RFC3339", "Epoch", and "Manual". epoch_timestamp_format enum s The layout of the epoch-based timestamp. It's required when parse_timestamp_format is set to "Epoch". Options include "s", "ms", "us", "ns", "s.ms", "s.us", "s.ns". manual_timestamp_format string '%Y-%m-%dT%H:%M:%S.%f%z' The strptime layout of the timestamp. It's used when parse_timestamp_format is set to "Manual". timezone timezone UTC The timezone to use if the Timestamp Format doesn't include a timezone. Otherwise, the timezone in the Timestamp Format will be respected. NOTE: This is also required to parse timezone abbreviations due to their ambiguity. parse_severity bool false Whether to parse severity from the log entry. severity_field string severity The field containing the severity in the log entry. 
encoding enum utf-8 The encoding of the file being read. Valid values are nop, utf-8, utf-16le, utf-16be, ascii, and big5. include_file_name_attribute bool true Whether to add the file name as the attribute log.file.name. include_file_path_attribute bool false Whether to add the file path as the attribute log.file.path. include_file_name_resolved bool false Whether to add the file name after symlinks resolution as the attribute log.file.name_resolved. include_file_path_resolved bool false Whether to add the file path after symlinks resolution as the attribute log.file.path_resolved. delete_after_read bool false Whether to delete the file(s) after reading. Only valid in combination start_at: beginning. offset_storage_dir string $OIQ_OTEL_COLLECTOR_HOME/storage The directory where the offset storage file will be created. It is okay if multiple receivers use the same directory. By default, the observIQ Distro for OpenTelemetry Collector sets $OIQ_OTEL_COLLECTOR_HOME in its runtime. poll_interval int 200 The duration of time in milliseconds between filesystem polls. max_concurrent_files int 1024 The maximum number of log files from which logs will be read concurrently. If the number of files matched exceeds this number, then files will be processed in batches. parse_to string body The field that the log will be parsed to. Some exporters handle logs favorably when parsed to attributes over body and vice versa. start_at enum end Start reading the file from the 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/fileloghttps://observiq.com/docs/resources/sources/filelogTue, 03 Sep 2024 17:23:47 GMT<![CDATA[F5 BIG-IP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : address\ string The hostname or IP address of the Big-IP environment. port int 443 The TCP port of the Big-IP environment. username\ string Username used for authenticating with Big-IP. password\ string Password used for authenticating with Big-IP. collection_interval int 60 Sets how often (seconds) to scrape for metrics. strict_tls_verify bool false Enable to require TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. It's not required if the collector's operating system already trusts the certificate authority. mutual_tls bool false Enable to require TLS mutual authentication. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. \_required field_]]>https://observiq.com/docs/resources/sources/f5https://observiq.com/docs/resources/sources/f5Thu, 09 Nov 2023 09:26:01 GMT<![CDATA[Elasticsearch]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This receiver supports Elasticsearch versions 7.9+. If Elasticsearch security features are enabled, you must have either the monitor or manage cluster privilege. See the Elasticsearch docs for more information on authorization and Security privileges. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. hostname\ string "localhost" The hostname or IP address of the Elasticsearch API. port int 9200 The TCP port of the Elasticsearch API. username string Username used to authenticate. password string Password used to authenticate. 
collection_interval int 60 Sets how often (seconds) to scrape for metrics. nodes strings \_node Filters that define which nodes are scraped for node-level metrics. It should be set to '\_node' if the collector is installed on all nodes. '\_all' if a single collector is scraping the entire cluster. https://www.elastic.co/guide/en/elasticsearch/reference/7.9/cluster.htmlcluster-nodes. skip_cluster_metrics bool false Enable to disable the collection of cluster-level metrics. json_log_paths strings - \"/var/log/elasticsearch/__server.json\" \\n- \"/var/log/elasticsearch/__deprecation.json\" \\n- \"/var/log/elasticsearch/__index_search_slowlog.json\" \\n- \"/var/log/elasticsearch/__index_indexing_slowlog.json\" \\n- \"/var/log/elasticsearch/_audit.json\" File paths for the JSON formatted logs. gc_log_paths strings - \"/var/log/elasticsearch/gc.log\" File paths for the garbage collection logs. start_at enum end Start reading the file from the 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/elasticsearchhttps://observiq.com/docs/resources/sources/elasticsearchTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Custom]]><![CDATA[Description This Custom source can be used to directly configure an OpenTelemetry Receiver. The Custom source is useful for testing new receivers or for fulfilling a use case that is not supported by BindPlane natively. The Custom Source can only be used with components that are present in the BindPlane Agent. See the Included Components documentation for a list of supported components. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Kubernetes Node (DaemonSet) Kubernetes Deployment Kubernetes Gateway OpenShift 4 Node (DaemonSet) OpenShift 4 Deployment The supported platforms and types will be limited to those allowed by the specific receiver used in the configuration. Configuration Field Description : : Telemetry Types The kind of telemetry that will be gathered by the receiver. Can be any combination of metrics, logs, and traces. Configuration The YAML configuration for the receiver. Example Configuration SQL Server Receiver The SQL Server Receiver is already configurable via the Microsoft SQL Server source, but the custom source can be used to access configuration options that are not exposed in BindPlane. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/customhttps://observiq.com/docs/resources/sources/customThu, 23 May 2024 14:03:50 GMT<![CDATA[CSV]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : header\ string A comma-delimited list of keys assigned to each of the columns. file_path\ strings File or directory paths to tail for logs. exclude_file_path strings File or directory paths to exclude. log_type string csv A friendly name that will be added to each log entry as an attribute. start_at enum end Start reading the file from the 'beginning' or 'end'. encoding enum utf-8 The encoding of the file being read. Valid values include: nop, utf-8, utf-16le, utf-16be, ascii, big5.]]>https://observiq.com/docs/resources/sources/csvhttps://observiq.com/docs/resources/sources/csvThu, 09 Nov 2023 09:26:01 GMT<![CDATA[CouchDB]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. 
hostname\ string The hostname or IP address of the CouchDB system. port int 5984 The TCP port of the CouchDB system. username\ string The username to use when connecting to CouchDB. password\ string The password to use when connecting to CouchDB. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS when connecting to CouchDB. strict_tls_verify bool false Enable to require TLS certificate verification. ca_file string Certificate authority used to validate TLS certificates. It's not required if the collector's operating system already trusts the certificate authority. mutual_tls bool false Enable to require TLS mutual authentication. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. log_paths strings "/var/log/couchdb/couchdb.log" Path to CouchDB log file(s). start_at enum end Start reading the file from the 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/couchdbhttps://observiq.com/docs/resources/sources/couchdbTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Couchbase]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. hostname\ string "localhost" The hostname or IP address of the Couchbase API. port int 8091 The TCP port of the Couchbase API. username\ string Username used to authenticate. password\ string Password used to authenticate. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_error_log bool true Enable to read error logs. error_log_path strings "/opt/couchbase/var/lib/couchbase/logs/error.log" Log File paths to tail for error logs. enable_info_log bool false Enable to read info logs. info_log_path strings "/opt/couchbase/var/lib/couchbase/logs/info.log" Log File paths to tail for info logs. enable_debug_log bool false Enable to read debug logs. debug_log_path strings "/opt/couchbase/var/lib/couchbase/logs/debug.log" Log File paths to tail for debug logs. enable_access_log bool false Enable to read http access logs. http_access_log_path strings "/opt/couchbase/var/lib/couchbase/logs/http_access.log" Log File paths to tail for http access logs. enable_internal_access_log bool false Enable to read internal access logs. http_internal_access_log_path strings "/opt/couchbase/var/lib/couchbase/logs/http_access_internal.log" Log File paths to tail for internal access logs. enable_babysitter_log bool false Enable to read babysitter logs. babysitter_log_path strings "/opt/couchbase/var/lib/couchbase/logs/babysitter.log" Log File paths to tail for babysitter logs. enable_xdcr_log bool false Enable to read xdcr logs. xdcr_log_path strings "/opt/couchbase/var/lib/couchbase/logs/goxdcr.log" Log File paths to tail for xdcr logs. start_at enum end Start reading logs from 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/couchbasehttps://observiq.com/docs/resources/sources/couchbaseTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Common Event Format]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_log_path\ strings Specify a single path or multiple paths to read one or many files. 
You may also use a wildcard (*) to read multiple files within a directory. exclude_file_log_path strings "" Specify a single path or multiple paths to exclude one or many files from being read. You may also use a wildcard (*) to exclude multiple files from being read within a directory. log_type string "cef" Adds the specified 'Type' as a log record attribute to each log message. location timezone "UTC" The geographic location (timezone) to use when parsing logs that contain a timestamp. timezone timezone "UTC" The timezone to use when parsing timestamps. start_at enum end Start reading the file from the 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/common-event-formathttps://observiq.com/docs/resources/sources/common-event-formatThu, 09 Nov 2023 09:26:01 GMT<![CDATA[CockroachDB]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. hostname\ string "localhost" The hostname or IP address of the CockroachDB system. Required: true. port int 8080 The port to listen on for DB Console HTTP requests. username string "" The username to use when connecting to CockroachDB. password string "" The password to use when connecting to CockroachDB. TLS must be configured in the Advanced section if this field is set. Sensitive: true. enable_tls bool false Whether or not to use TLS. ca_file_path string "" File path for the CA certificate file for CockroachDB (only needed if you have a secure cluster). cert_file_path string "" A TLS certificate used for client authentication, if mutual TLS is enabled. key_file_path string "" A TLS private key used for client authentication, if mutual TLS is enabled. server_name string "" The name of the server. insecure_skip_verify bool false Disable validation of the server certificate. collection_interval int 60 How often (seconds) to scrape for metrics. enable_health_log\ bool true Enable to collect health logs. health_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-health.log"] The absolute path to the CockroachDB health logs. enable_dev_log\ bool true Enable to collect general developer logs. dev_log_path\ strings ["/var/log/cockroach-data/logs/cockroach.log"] The absolute path to the CockroachDB dev logs. enable_error_log\ bool true Enable to collect stderr logs. error_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-stderr.log"] The absolute path to the CockroachDB stderr logs. enable_sql_schema_log\ bool true Enable to collect SQL schema logs. sql_schema_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-sql-schema.log"] The absolute path to the CockroachDB SQL schema logs. enable_telemetry_log\ bool true Enable to collect telemetry logs. telemetry_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-telemetry.log"] The absolute path to the CockroachDB telemetry logs. enable_kv_distribution_log\ bool true Enable to collect KV distribution logs. kv_distribution_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-kv-distribution.log"] The absolute path to the CockroachDB KV distribution logs. enable_pebble_log\ bool true Enable to collect CockroachDB pebble logs. pebble_log_path\ strings ["/var/log/cockroach-data/logs/cockroach-pebble.log"] The absolute path to the CockroachDB pebble logs.
offset_storage_dir\ string "$OIQ_OTEL_COLLECTOR_HOME/storage" The directory the offset storage file will be created in. timezone\ timezone "UTC" The timezone to use when parsing timestamps. start_at\ enum end Start reading logs from 'beginning' or 'end'. parse_to\ enum body Parse structured log parts to either body or attributes. retain_raw_logs\ bool false Preserve the original log message in a raw_log key. \_required field_ Example Configuration This configuration shows the default values for the CockroachDB Source.]]>https://observiq.com/docs/resources/sources/cockroachdbhttps://observiq.com/docs/resources/sources/cockroachdbThu, 23 May 2024 14:03:50 GMT<![CDATA[Cloudflare]]><![CDATA[Prerequisites - Cloudflare Enterprise plan - Publicly signed CA certificate - Follow the OpenTelemetry receiver documentation getting started section for help configuring Cloudflare LogPush jobs. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_address string 0.0.0.0 The IP address to listen on. The collector must be available on the internet in order to receive logs from Cloudflare. https_port int 8443 TCP port to receive incoming uploads from the LogPush job(s). cert_file\ string A TLS certificate used to encrypt communications on the listening server. Cloudflare requires that this certificate be signed by a public certificate authority. key_file\ string A TLS private key used to encrypt communications on the listening server. Cloudflare requires TLS. secret string String used to validate that messages are coming from an expected source. attributes map Any Cloudflare LogPush field names provided as keys will be mapped to attributes using the map value as the attribute name. timestamp_field string EdgeStartTimestamp The name of the field that should be parsed to represent the Timestamp of the log record. \_required field_ External Documentation - Cloudflare LogPush documentation]]>https://observiq.com/docs/resources/sources/cloudflarehttps://observiq.com/docs/resources/sources/cloudflareThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Cisco Meraki]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port int 5140 A UDP port, which the agent will listen for syslog messages. listen_ip string "0.0.0.0" An IP address for the agent to bind. Typically 0.0.0.0 for most configurations.]]>https://observiq.com/docs/resources/sources/cisco-merakihttps://observiq.com/docs/resources/sources/cisco-merakiThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Cisco Catalyst]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port int 5140 A UDP port, which the agent will listen for syslog messages. listen_ip string "0.0.0.0" An IP address for the agent to bind. Typically 0.0.0.0 for most configurations. timezone timezone "UTC" The timezone to use when parsing timestamps.]]>https://observiq.com/docs/resources/sources/cisco-catalysthttps://observiq.com/docs/resources/sources/cisco-catalystThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Cisco ASA]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : listen_port int 5140 A TCP port, which the agent will listen for syslog messages. listen_ip string "0.0.0.0" An IP address for the agent to bind. 
Typically 0.0.0.0 for most configurations.]]>https://observiq.com/docs/resources/sources/cisco-asahttps://observiq.com/docs/resources/sources/cisco-asaThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Cassandra]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Prerequisites This source supports Apache Cassandra versions 3.11 and 4.0. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. address string localhost IP address or hostname to scrape for Cassandra metrics. port int 7199 Port to scrape for Cassandra metrics. jar_path string "/opt/opentelemetry-java-contrib-jmx-metrics.jar" Full path to the JMX metrics jar. collection_interval int 60 How often (seconds) to scrape for metrics. enable_system_logs bool true Enables collecting system logs. system_log_path strings ["/var/log/cassandra/system.log"] File paths to system logs. enable_debug_logs bool true Enables the collection of debug logs. debug_log_path strings ["/var/log/cassandra/debug.log"] File paths to debug logs. enable_gc_logs bool true Enables collection of garbage collector logs. gc_log_path strings ["/var/log/cassandra/gc.log"] File paths to garbage collection logs. timezone timezone UTC The timezone to use when parsing timestamps. start_at enum end Start reading logs from 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/cassandrahttps://observiq.com/docs/resources/sources/cassandraTue, 16 Apr 2024 18:12:20 GMT<![CDATA[BindPlane OP]]><![CDATA[Description Monitor the logs of a BindPlane OP server. Requires a BindPlane Agent be installed on the same system as the server. Supported Platforms Bindplane Agent: v1.40.0+ Platform Metrics Logs Traces : : : : Linux macOS Windows Configuration Field Description : : BindPlane OP Log Path The absolute path to the BindPlane log. Default is /var/log/bindplane/bindplane.log. Enable File Offset Storage When enabled, the current position into a file will be saved to disk, and reading will resume from where it left off after a collector restart. Offset Storage Directory The directory that the offset storage file will be created. Default is $OIQ_OTEL_COLLECTOR_HOME/storage. Start At Where in the log file to begin reading logs from. Can be beginning or end. Example Configuration Basic Configuration For basic configuration, the defaults are used. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/bindplane-ophttps://observiq.com/docs/resources/sources/bindplane-opThu, 23 May 2024 14:03:50 GMT<![CDATA[BindPlane Gateway]]><![CDATA[Description The BindPlane Gateway source is an OTLP source meant to be used for gateway nodes. When using this source in conjunction with a BindPlane Gateway destination from another configuration, telemetry traveling through this source will not be double counted in the Summary view. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Kubernetes Node (DaemonSet) Kubernetes Gateway OpenShift 4 Node (DaemonSet) Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Choose Telemetry Type. listen_address string "0.0.0.0" The IP address to listen on. grpc_port int 4317 TCP port to receive OTLP telemetry using the gRPC protocol. The port used must not be the same as the HTTP port. Set to 0 to disable. http_port int 4318 TCP port to receive OTLP telemetry using the HTTP protocol. 
The port used must not be the same as the gRPC port. Set to 0 to disable. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. Example Configuration The BindPlane Gateway source type does not have any required fields. By default, the BindPlane Gateway source will listen on ports 4317/gRPC and 4318/HTTP on all IP addresses without TLS. Kubernetes The BindPlane Gateway source type supports Kubernetes, OpenShift Node (DaemonSet), and Gateway agents. Applications within the cluster can forward metrics, logs, and traces to the agents using the clusterIP services. Prerequisites - BindPlane OP v1.52.0 or newer Configuration The BindPlane Gateway source type does not require additional configuration. It can be attached to any Kubernetes, OpenShift Node (DaemonSet), or Gateway configuration. The following endpoints can forward telemetry to the managed Node (DaemonSet) agents. Protocol Service Endpoint : : : gRPC clusterIP bindplane-node-agent.bindplane-agent.svc.cluster.local:4317 gRPC headless clusterIP bindplane-node-agent-headless.bindplane-agent.svc.cluster.local:4317 HTTP clusterIP http://bindplane-node-agent.bindplane-agent.svc.cluster.local:4318 The following endpoints can forward telemetry to the managed Gateway agents. Protocol Service Endpoint : : : gRPC clusterIP bindplane-gateway-agent.bindplane-agent.svc.cluster.local:4317 gRPC headless clusterIP bindplane-gateway-agent-headless.bindplane-agent.svc.cluster.local:4317 HTTP clusterIP http://bindplane-gateway-agent.bindplane-agent.svc.cluster.local:4318 It is a matter of preference if you should forward telemetry to the DaemonSet or Gateway agents. It is recommended to use the Gateway agent, if DaemonSet resource consumption is a concern, as the Gateway agent can scale independent of cluster size.]]>https://observiq.com/docs/resources/sources/bindplane-gatewayhttps://observiq.com/docs/resources/sources/bindplane-gatewayThu, 23 May 2024 14:03:50 GMT<![CDATA[BindPlane Agent]]><![CDATA[Description Monitor the logs and metrics of the BindPlane agent the config is applied to. Supported Platforms Bindplane Agent: v1.40.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Field Description : : Agent Log Path The absolute path to the bindplane-agent log. Default is $OIQ_OTEL_COLLECTOR_HOME/log/collector.log. Enable File Offset Storage When enabled, the current position into a file will be saved to disk, and reading will resume from where it left off after a collector restart. Offset Storage Directory The directory that the offset storage file will be created. Default is $OIQ_OTEL_COLLECTOR_HOME/storage. Start At Where in the log file to begin reading logs from. Can be beginning or end. Collection Interval Sets how often (seconds) to scrape for metrics. Example Configuration Basic Configuration For basic configuration, the defaults are used. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/bindplane-agenthttps://observiq.com/docs/resources/sources/bindplane-agentFri, 25 Oct 2024 18:37:14 GMT<![CDATA[Azure Event Hub]]><![CDATA[Prerequisites The source must have access to an Azure Event Hub with the necessary permissions, the minimum permission being Listen. 
You can configure resources' Diagnostic settings to send logs to the Azure Event Hub. Read more here: Azure Event Hub. Each event hub should only accept one telemetry type. Supported Platforms Bindplane Agent: v1.39.0+ Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Field Description : : Telemetry Type The type of telemetry to gather with this receiver. Connection A string describing the connection to an Azure event hub. Consumer Group The Consumer Group to read from. Defaults to $$Default. Dollar signs must be escaped with another dollar sign. Partition The partition to watch. If empty, it will watch all partitions. Offset The offset at which to start watching the event hub. If empty, starts with the latest offset. Log Format The log format to use when parsing logs from Event Hub. Must be one of azure or raw. Raw logs are byte encoded, see the "Raw Encoding" option. Raw Encoding The encoding used when decoding the raw logs into human readable text. Defaults to utf-8. The raw byte encoding can be preserved by selected byte. Example Configuration Basic Configuration For basic configuration, the connection parameter is required. Optionally, specify consumer group, partition, and offset. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/azure-event-hubhttps://observiq.com/docs/resources/sources/azure-event-hubTue, 05 Nov 2024 16:44:23 GMT<![CDATA[Azure Blob Storage]]><![CDATA[Prerequisites Setup an Event Hub to listen for Blob Create events. More information on how to set this up can be found in the Azure documentation here. Blob Format Stored logs and traces must be in OTLP JSON format in order to be correctly parsed by the receiver. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Traces"] Choose Telemetry Type. connection_string\ string "" The connection string for the Azure Storage account. Information can be found here. event_hub_endpoint\ string "" The Azure Event Hub endpoint triggering on the Blob Create events. Information can be found here. logs_container\ string "logs" Name of the Azure Storage container where logs are stored. traces_container\ string "traces" Name of the Azure Storage container where traces are stored \_required field_ Example Configuration Basic Configuration For basic configuration only, the connection_string for Azure Storage and the event_hub_endpoint need to be supplied. In the example below, we are using a fake connection_string and event_hub_endpoint in the form Azure expects. Web Interface Standalone Source]]>https://observiq.com/docs/resources/sources/azure-blob-storage-sourcehttps://observiq.com/docs/resources/sources/azure-blob-storage-sourceThu, 23 May 2024 14:03:50 GMT<![CDATA[Azure Blob Rehydration]]><![CDATA[Prerequisites Ensure you have access to an Azure Blob Storage account. Set up your Azure Blob Storage to store OTLP data in the required format for rehydration. More information on how to set this up can be found in the Azure documentation here. Blob Format Rehydrated data must be in OTLP format for correct processing. Ensure your data adheres to this format. Supported Platforms Platform Supported : : Linux Windows macOS Configuration Fields Field Description : : Connection String The connection string for the Azure Blob Storage account. Container Name of the Azure Storage container from which to rehydrate data. 
Poll Interval The interval for checking new blobs, e.g., '1m' for one minute. Starting Time UTC start time for rehydration. Ending Time UTC end time for rehydration. Delete on Read If true, blobs are deleted after rehydration. Enable Storage Enable to specify a storage extension for tracking rehydration progress. Storage Directory Directory for storing rehydration state, useful for maintaining state and resuming operations. (Only relevant if Enable Storage is true) Example Configuration Basic Configuration This configuration sets up Azure Blob Rehydration with necessary details such as connection string, container, and time range for rehydration. Adjust the Connection String, Container, Starting Time, and Ending Time to match your configuration.]]>https://observiq.com/docs/resources/sources/azure-blob-rehydrationhttps://observiq.com/docs/resources/sources/azure-blob-rehydrationThu, 23 May 2024 14:03:50 GMT<![CDATA[AWS S3 Rehydration]]><![CDATA[Object Format Rehydrated data must be in OTLP JSON format for correct processing. Ensure your data adheres to this format. Supported Platforms Platform Supported : : Linux Windows macOS Configuration Fields Field Description : : Region The AWS-recognized region string. Bucket Name of the S3 bucket to rehydrate telemetry from. Folder Prefix Root directory of the bucket to rehydrate telemetry from. Poll Interval The interval for checking new objects, e.g., '1m' for one minute. Starting Time UTC start time for rehydration in the format YYYY-MM-DDTHH:MM. Ending Time UTC end time for rehydration in the format YYYY-MM-DDTHH:MM. Delete on Read If true, objects are deleted after rehydration. Enable Storage Enable to specify a storage extension for tracking rehydration progress. Storage Directory Directory for storing rehydration state, useful for maintaining state and resuming operations. (Only relevant if Enable Storage is true) Example Configuration Basic Configuration This configuration sets up AWS S3 Rehydration with necessary details such as Region, Bucket, and time range for rehydration. Adjust the Region, Bucket, Starting Time, and Ending Time to match your configuration.]]>https://observiq.com/docs/resources/sources/aws-s3-rehydrationhttps://observiq.com/docs/resources/sources/aws-s3-rehydrationThu, 23 May 2024 14:03:50 GMT<![CDATA[AWS Cloudwatch]]><![CDATA[Prerequisites While installing the AWS CLI is not required to collect logs with the AWS Cloudwatch source type, it provides an easier means of authentication. The AWS Cloudwatch source requires some form of authentication: either profile credentials or environment variables that provide access keys for user accounts. Setting these credentials up manually can be tedious; the AWS CLI can generate the user's profile and credentials with the aws configure command. See the AWS CLI Getting Started Guide, which outlines a few prerequisites: 1. Create an IAM user account with the required permissions logs:GetLogEvents, logs:DescribeLogGroups, and logs:DescribeLogStreams. The user does not require console access. 2. Create an access key ID and secret access key. The AWS CLI getting started guide will instruct you to install it for your current user or for all users. The observIQ OTEL Collector runs as root by default, meaning the AWS CLI and credentials should be installed under the collector system's root account.
Credentials Credential and Config Files AWS authentication utilizes a user profile specified in the user's home directory at .aws/credentials. Each profile's credentials should include, at minimum, the profile name, access key, and secret access key. In addition to the credentials file, there is also a .aws/config file. This includes less sensitive configuration options such as the region and output format. A typical entry in the config file sets the region and output format for a profile. More information is available in the AWS documentation on configuration and credential files. Environment Variables Alternatively, AWS environment variables can be specified to override a credentials file. You can modify the collector's environment variables by configuring a Systemd override. Run sudo systemctl edit observiq-otel-collector and add your access key, secret key, and region. After making that change, reload Systemd and restart the collector service. Setup 1. Once logged in, select the Configs tab at the top of the BindPlane home page. 2. Select a pre-existing config or create a new one. 3. Add a new source and select AWS Cloudwatch. 4. After configuring credentials using the AWS CLI on the collector system, using the default values in the source form should enable the collector to collect logs from Cloudwatch. If credentials were configured using environment variables, you will need to leave the Profile field blank. 5. Click Save configuration. 6. Add the destination type of your choosing. 7. Apply the configuration to the desired agent. 8. Logs should now be collecting. Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : region\ enum us-east-1 The AWS recognized region string. profile\ string "default" The AWS profile used to authenticate; if none is specified, the default is chosen from the list of profiles. credential_type enum profile Determines whether to pull credentials from a credentials file or use environment variables for authentication. discovery_type enum AutoDiscover Configuration for Log Groups. By default, all Log Groups and Log Streams will be collected. limit int 50 Limits the number of discovered log groups. prefix string "" A prefix for log groups to limit the number of log groups discovered. names strings \[] A list of full log stream names to filter the discovered log groups to collect from. prefixes strings \[] A list of prefixes to filter the discovered log groups to collect from. named_groups awsCloudwatchNamedField \[] Configuration for explicitly named Log Groups; used with the Named discovery type. imds_endpoint string "" A custom URL to be used by the EC2 IMDS client to validate the session. poll_interval int 1 The duration to wait between requests (minutes). max_events_per_request int 50 The maximum number of events to process per request to Cloudwatch. \_required field_ Discovery Type Default Settings When starting with an AWS Cloudwatch source set to its default values, you should see log collection from all log groups with no filtering of log streams. The default polling interval for Cloudwatch is 1 minute, so there may be a delay before seeing any logs come through. AutoDiscover When using the AutoDiscover Discovery Type, there are some optional parameters that can be used to filter the logs collected. - limit: Limits the number of discovered log groups (default: 50).
- prefix: Prefix for log groups to limit the number of log groups discovered - prefix: /aws/eks/ - If omitted, all log groups up to the limit will be collected. - names: A list of full log stream names to filter the discovered log groups to collect from. - names: [kube-apiserver-ea9c831555adca1815ae04b87661klasdj] - prefixes: A list of log stream prefixes to filter the discovered log groups to collect from. - prefixes: [kube-api-controller] Named This Discovery Type filters logs by listing only the desired log groups to collect from and omitting any other log groups. When selecting this Discovery Type, at least one log group is required otherwise no logs would be collected. When listing log groups the ID field of each log group instance should match the full name of the group. Additionally, Named also provides prefixes and names parameters for each listed log group that filters out the listed log streams. These parameters should be listed underneath each log group's IDs as they are unique to each individual log group.]]>https://observiq.com/docs/resources/sources/aws-cloudwatchhttps://observiq.com/docs/resources/sources/aws-cloudwatchThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Apache Spark]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Field Description : : Telemetry Types The types of telemetry to gather with this receiver. Endpoint The endpoint of the Apache Spark REST API. Enable TLS Whether to use TLS when connecting to Apache Spark. Skip TLS Certificate Verification Enable to skip TLS certificate verification. This is relevant if TLS is enabled. TLS Certificate Authority File Certificate authority used to validate TLS certificates. This is relevant if TLS is enabled. Mutual TLS Client Certificate File A TLS certificate used for client authentication. This is relevant if TLS is enabled. TLS Client Private Key File A TLS private key used for client authentication. This is relevant if TLS is enabled. Collection Interval Sets how often (seconds) to scrape for metrics. Allowed Spark Application Names Filters that define which Spark applications are scraped for metrics. If undefined, all applications at the endpoint will be scraped. Spark Metrics A list of Cluster, Job, Executor, and Stage metrics to be included or excluded.]]>https://observiq.com/docs/resources/sources/apache-sparkhttps://observiq.com/docs/resources/sources/apache-sparkTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Apache HTTP]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. hostname\ string localhost The hostname or IP address of the Apache HTTP system. port int 3000 The TCP port of the Apache HTTP system. collection_interval int 60 Sets how often (seconds) to scrape for metrics. enable_tls bool false Whether or not to use TLS when connecting to the Apache HTTP server. strict_tls_verify bool false Enable to require TLS certificate verification. ca_file string Certificate authority used to validate TLS certificates. It's not required if the collector's operating system already trusts the certificate authority. mutual_tls bool false Enable to require TLS mutual authentication. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. 
start_at enum end Start reading logs from 'beginning' or 'end'. access_log_path strings ["/var/log/apache2/access.log"] Access Log File paths to tail for logs. error_log_path strings ["/var/log/apache2/error.log"] Error Log File paths to tail for logs. timezone timezone "UTC" The timezone to use when parsing timestamps. \_required field_]]>https://observiq.com/docs/resources/sources/apache-httphttps://observiq.com/docs/resources/sources/apache-httpTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Apache Common]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings ["/var/log/apache2/access.log"] Path to Apache common formatted log file. start_at enum end Start reading logs from 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/apache-commonhttps://observiq.com/docs/resources/sources/apache-commonThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Apache Combined]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : file_path strings ["/var/log/apache_combined.log"] Paths to Apache combined formatted log files. start_at enum end Start reading logs from 'beginning' or 'end'.]]>https://observiq.com/docs/resources/sources/apache-combinedhttps://observiq.com/docs/resources/sources/apache-combinedThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Aerospike]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. hostname\ string localhost The hostname or IP address of the Aerospike system. port int 3000 The TCP port of the Aerospike system. collection_interval int 60 Sets how often (seconds) to scrape for metrics. collect_cluster_metrics bool false Whether discovered peer nodes should be collected. aerospike_enterprise bool false Enable Aerospike enterprise authentication. username\ string The username to use when connecting to Aerospike. password\ string The password to use when connecting to Aerospike. start_at enum end Start reading Aerospike Journald logs from 'beginning' or 'end'. \_required field_]]>https://observiq.com/docs/resources/sources/aerospikehttps://observiq.com/docs/resources/sources/aerospikeTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Active Directory]]><![CDATA[Supported Platforms Platform Metrics Logs Traces : : : : Windows Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics"] Choose Telemetry Type. collection_interval int 60 Sets how often (seconds) to scrape for metrics.]]>https://observiq.com/docs/resources/sources/active-directoryhttps://observiq.com/docs/resources/sources/active-directoryTue, 16 Apr 2024 18:12:20 GMT<![CDATA[Resource Detection]]><![CDATA[Description The resource detection processor can be used to detect resource information from the host, in a format that conforms to the OpenTelemetry resource semantic conventions, and append or override the resource value in telemetry data with this information. Supported Types Metrics Logs Traces BindPlane Agent : : : : v1.40.0+ Configuration Table Field Description : : Detector Detector(s) to use for resource detection. Available detectors include aks, azure, consul, ec2, eks, gcp, k8snode, openshift, and system. Override Whether or not to overwrite existing resource attributes. 
Timeout Time duration after which a resource detector request will timeout. Hostname Source When the system detector is in use, specifies the source used to detect the system hostname. Options include os, dns, cname, and lookup. EC2 Tags Specifies a list of regex's to match to EC2 instance tag keys that will be added as resource attributes to processed data. K8s Node Environment Variable The K8s node environment variable that has the node name to retrieve metadata for. Defaults to "KUBE_NODE_NAME" because the BindPlane Agent has this automatically set. Consul Address The address of the Consul server. If unset, the environment variable "CONSUL_HTTP_ADDR" will be used if it exists. Consul Authentication The type of authentication to use with Consul. One of 'Token' or 'Token File' is required if Consul's ACL System is enabled. Options are "None", "Token", or "Token File". Consul Token Token is used to provide a per-request ACL token which overrides the agent's default token. If unset, the environment variable "CONSUL_HTTP_TOKEN" will be used if it exists. Consul Token File Token File is a file containing the current token to use for this client. If provided, it is read once at startup and never again. If unset, the environment variable "CONSUL_HTTP_TOKEN_FILE" will be used if it exists. Consul Datacenter Optional Consul Datacenter to use. If not provided, the default agent datacenter is used. Consul Namespace Optional namespace to attach to each Consul request. If unset, the environment variable "CONSUL_NAMESPACE" will be used if it exists. Consul Metadata Labels Allowlist of Consul Metadata keys to use as resource attributes. Multiple detectors may be selected on a single processor. However, if multiple processors use a common attribute name, the first detector will have precedent. For more info, see this OTel documentation. Example Configurations Google Compute Engine (GCE) In this example, the Resource Detection Processor is configured to use the GCP detector to detect GCE resource attributes. Amazon EC2 In this example, the Resource Detection Processor is configured to use the EC2 detector. Azure Compute Instance In this example, the Resource Detection Processor is configured to use the Azure detector to detect Azure Virtual Machine resource attributes. Kubernetes The Resource Detection Processor can detect Kubernetes resources on the following platforms: GKE, Amazon EKS, Azure AKS. Using the GCP detector, you can detect cloud-based Kubernetes resources.]]>https://observiq.com/docs/resources/processors/resource_detection_v2https://observiq.com/docs/resources/processors/resource_detection_v2Wed, 05 Jun 2024 17:59:27 GMT<![CDATA[Rename Metric]]><![CDATA[Description The Rename Metric processor can be used to rename metrics. Use The Rename Metric processor is utilized for renaming metrics. It supports renaming either the entire name of a single metric or the prefix of multiple metrics. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Operation The operation to perform when renaming a metric. Name: Rename an entire metric. Prefix: Rename the prefix of multiple metrics. Old Name The name of an incoming metric to rename. Required when Operation is set to Name. New Name The new name of an incoming metric. Required when Operation is set to Name. Old Prefix The prefix of incoming metrics to rename. Required when Operation is set to Prefix. New Prefix The new prefix of incoming metrics. Required when Operation is set to Prefix. 
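For a sense of what this looks like outside the BindPlane form, roughly the same single-metric rename can be expressed with the upstream OpenTelemetry metricstransform processor in a raw collector configuration. This is an equivalent sketch rather than the Rename Metric processor's own format, and it reuses the metric names from the example that follows:

```yaml
processors:
  metricstransform:
    transforms:
      # Operation "Name": rename one metric outright.
      - include: system.network.packets
        match_type: strict
        action: update
        new_name: network.traffic
```

A prefix rename (the Prefix operation) would require a regexp-based transform in raw configuration; in BindPlane the processor form handles it directly.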
Example Configuration(s) Renaming A Single Metric In this configuration, the system.network.packets metric is renamed to just network.traffic. Web Interface Renaming A Metric Prefix In this configuration, the system prefix for host metrics is replaced with macos. Web Interface]]>https://observiq.com/docs/resources/processors/rename-metrichttps://observiq.com/docs/resources/processors/rename-metricWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Rename Fields]]><![CDATA[Description The Rename Fields processor can be used to rename resource, attribute, and log body fields. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. Condition string true An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all telemetry. Field Type string attributes Determines whether the field is renamed in the body, attributes, or resource fields of the telemetry. Note: Body fields are applicable only for logs. Old Name string "" Specifies the existing field in the telemetry that the processor will rename. New Name string "" Indicates the new field name that will replace the old field name in the telemetry data. Example Configuration Renaming log body fields In this example, we rename the status field in a log body to status_code. Since this change only applies to logs, we have disabled metrics and traces for this processor. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/rename-fields-v2https://observiq.com/docs/resources/processors/rename-fields-v2Wed, 30 Oct 2024 16:06:44 GMT<![CDATA[Rename Field]]><![CDATA[Description The Rename Field processor can be used to rename resource, attribute, and log body fields. This processor has been deprecated and replaced with a new Rename Fields processor that supports additional functionality and improved layout. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. For more information about the new processor, see here. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Choose Telemetry Type. resource_keys map {} A map of resource keys to rename. The value represents the new name. attribute_keys map {} A map of attribute keys to rename. The value represents the new name. body_keys map {} A map of body keys to rename. The value represents the new name. Example Configuration Renaming log body fields In this example, we rename the status field in a log body to status_code. Since this change only applies to logs, we have disabled metrics and traces for this processor. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/rename-fieldhttps://observiq.com/docs/resources/processors/rename-fieldThu, 24 Oct 2024 19:42:30 GMT<![CDATA[Parse XML]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.46.0+ Description The Parse XML Processor is utilized to parse XML document strings from specified fields within log, metric, or trace data. It's particularly useful when your telemetry data contains a serialized XML document, and you need to convert them into a structured format for easier analysis and filtering. 
The processor supports specifying the source field and the target field for the parsed XML data, offering flexibility in handling diverse data structures. Use When dealing with telemetry data that includes an XML document embedded within logs, metrics, or traces, the Parse XML Processor becomes instrumental. For instance, logs from certain applications or systems might contain XML documents representing specific attributes or metadata. By utilizing the Parse XML Processor, these XML documents can be parsed and converted into structured data, enhancing readability and facilitating more complex queries and analyses. Multi-line XML It's common for XML to be formatted to span multiple lines. When reading XML logs from a file, make sure to configure the multiline section of the File source to properly read the whole XML document. The parsed XML is structured as follows: 1. All character data for an XML element is trimmed and placed in the content field. 2. The tag for an XML element is trimmed and placed in a tag field. 3. The attributes for an XML element is placed as a mapping of attribute name to attribute value in the attribute field. 4. Processing instructions, directives, and comments are ignored and not represented in the parsed XML. 5. All child XML elements are parsed as above, and placed in an array in a children field. As an example, see the following XML: This XML, when parsed, becomes: Configuration Field Description : : Telemetry Type The type of telemetry to apply the processor to. Condition The condition to apply the XML parsing. It supports OTTL expressions for logs, metrics, and traces. This field determines which telemetry data entries are processed based on their content and attributes. Source Field Type Determines the type of source field for logs, metrics, or traces. This can be Resource, Attribute, Body, or Custom for logs and Resource, Attribute, or Custom for metrics and traces. It defines where the processor should look to find the XML document to parse. Source Field Specifies the exact field where the XML document is located, based on the selected Source Field Type. For instance, if the Source Field Type is Attribute, this field should specify the particular attribute containing the XML document. Target Field Type Like the Source Field Type, this field determines the type of target field for logs, metrics, or traces where the parsed XML data will be stored. The options are similar, allowing users to store the parsed data as a resource, attribute, body, or in a custom field. Target Field Specifies the exact field where the parsed XML data will be stored, based on the selected Target Field Type. This allows users to organize and structure the parsed data in a manner that facilitates easy querying and analysis. Example Configurations Parse XML from Logs In this example, we have a basic log that details an action and the user that triggered the action, like an audit log. This log is in XML format, and we'd like to parse the content into a structured log. 
Here is a sample log record: To parse the body of the log record and store the result in the parsed_xml attribute, we can configure the Parse XML processor as follows: - Telemetry: Logs - Condition: true - Source Field Type: Body - Source Field: Left empty - Target Field Type: Attribute - Target Field: parsed_xml After parsing, the log record looks like this:]]>https://observiq.com/docs/resources/processors/parse-xmlhttps://observiq.com/docs/resources/processors/parse-xmlWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse with Regex]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.36.0+ Description The Parse with Regex Processor is designed to extract and transform telemetry data, including logs, metrics, and traces, using regular expressions (regex) with named capture groups. This enables users to define specific regex patterns with named capture groups to parse and reformat data from different source fields, enhancing data analysis and insights. Use This processor is invaluable when users need to extract and categorize specific elements from unstructured or semi-structured data. Users can employ regex patterns with named capture groups to classify extracted data, making it easily identifiable and accessible for further analysis, monitoring, or alerting. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition The condition, expressed in OTTL, that must be met for the processor to be applied. Allows users to apply specific criteria to select the data entries to be processed. Source Field Type Indicates the type of the source field where the regex will be applied. It can be Resource, Attribute, Body, or Custom for logs; Resource, Attribute, or Custom for metrics and traces. Source Field Specifies the exact field where the regex is applied, pertinent to the selected Source Field Type. Target Field Type Indicates the type of the target field where the parsed data will be stored. It can be Resource, Attribute, Body, or Custom for logs; Resource, Attribute, or Custom for metrics and traces. Regex Pattern The regex pattern with a named capture group used for parsing the data, essential for extracting or transforming specific data elements within the telemetry data. Example Configurations Extract Error Codes from Log Messages In this example, the Parse with Regex Processor is configured to extract error codes embedded within log messages. Given the unstructured nature of these messages, the use of a regex pattern with a named capture group is crucial for efficient extraction and categorization. Here is a sample log entry divided into body and attributes: Body: Attributes: The objective is to extract the error code "ER1023" and assign it to a new attribute for enhanced analysis. The configuration for the Parse with Regex Processor is as follows: - Condition: "body contains 'ErrorCode:'" - Source Field Type: Body - Source Field: message - Target Field Type: Attribute - Regex Pattern: "ErrorCode: (?P<errorCode>\w+)" With this setup, the named capture group "errorCode" is employed to categorize the extracted error code. The processed log entry would appear with an updated attributes section as follows: Attributes After Processing: Now, the error code is not only extracted but also categorized under the "errorCode" attribute, facilitating effortless filtering and analysis.
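Outside of BindPlane, a similar extraction can be sketched with the filelog receiver's regex_parser operator in a raw OpenTelemetry Collector configuration. The file path below is a hypothetical placeholder, and the snippet is illustrative rather than the Parse with Regex processor's own configuration:

```yaml
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log   # hypothetical application log path
    operators:
      # Extract the error code from the raw line into an attribute
      # using the same named capture group as the example above.
      - type: regex_parser
        regex: 'ErrorCode: (?P<errorCode>\w+)'
        parse_from: body
        parse_to: attributes
```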
This structured format allows for precise monitoring and troubleshooting, especially when dealing with specific error codes.]]>https://observiq.com/docs/resources/processors/parse-with-regexhttps://observiq.com/docs/resources/processors/parse-with-regexThu, 26 Sep 2024 02:03:45 GMT<![CDATA[Parse Timestamp]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.36.0+ Description The Parse Timestamp Processor is designed to extract and standardize timestamps from specified fields in log, metric, or trace data. It ensures uniformity of timestamp data across different sources and formats, facilitating seamless analysis and correlation of time-series data. Use In environments where telemetry data comes in various timestamp formats or from different fields, uniformity in timestamp data is crucial for accurate analysis and monitoring. The Parse Timestamp Processor addresses this by allowing users to specify the source field and format, enabling the extraction and standardization of timestamps across diverse data types and sources. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition A condition expressed in OTTL that must be true for the processor to be applied. It supports dynamic evaluation, allowing for conditional application of the processor based on the content or attributes of the telemetry data. Source Field Type Determines the type of source field to parse the timestamp from. It can be either Resource, Attribute, or Body for logs, and Resource or Attribute for metrics and traces. Source Field The specific field from which to parse the timestamp. Time Format The format of the timestamp in the source field. Supported formats include RFC3339, ISO8601, Epoch, and Manual, catering to a wide range of timestamp formats encountered in real-world data. Epoch Layout Applicable when the Time Format is set to Epoch. It determines the layout of the epoch timestamp, ensuring accurate parsing of timestamps represented as epoch time. Manual Layout Required when the Time Format is set to Manual. It defines the strptime layout for parsing timestamps, offering flexibility to handle custom timestamp formats beyond the standard RFC3339 and ISO8601 formats. Example Configurations Standardizing Log Timestamps In this example, we configure the Parse Timestamp Processor to extract and standardize timestamps embedded within log messages. The log entries contain timestamps in various formats, and the goal is to normalize them for consistent analysis. Sample log entry with a non-standard timestamp format: The configuration for the Parse Timestamp Processor is set as follows: - Condition: "attributes['timestamp'] != nil" - Source FieldType: Attribute - Source Field: timestamp - Time Format: Manual - Manual Layout: %d/%m/%Y %H:%M:%S As a result, the log entry is processed to extract and standardize the timestamp, transforming it into a consistent, machine-readable format for enhanced querying and analysis. Processed log entry: This setup ensures that all timestamps, regardless of their original format, are standardized to facilitate accurate and efficient data analysis.]]>https://observiq.com/docs/resources/processors/parse-timestamphttps://observiq.com/docs/resources/processors/parse-timestampWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse Severity]]><![CDATA[This processor has been deprecated and replaced with a new Parse Severity processor that supports additional functionality. 
While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. For more information about the new processor, see here. Metrics Logs Traces BindPlane Agent : : : : - - v1.36.0+ Description The Parse Severity Processor is designed to normalize severity fields in log data into user-defined values, enhancing the consistency and readability of log data. By allowing users to map existing severity values to standard levels, it aids in the uniform analysis and visualization of logs across varied sources. Use The processor is essential in environments where logs from different sources use varied severity naming conventions. By mapping these diverse severity indicators to standard values, it ensures that the severity data remains consistent, making it easier to filter, analyze, and generate insights from the log data. Configuration Field Description : : Condition An OTTL condition that must evaluate to true for the processor to be applied to the logs, allowing selective processing of entries. Log Body, Resource, or Attributes Specifies the location of the severity value in the log entry: in the log body, resource, or attributes. Severity Field The specific field that contains the severity value to be parsed and normalized. Severity Mappings A mapping that translates current severity values in the logs to standard values, ensuring consistency across different log sources. Example Configurations Normalize Severity Levels in Log Data In this example, the Parse Severity Processor is configured to normalize severity levels from the "level" field in the log body into user-defined standard levels. Here is a sample log entry: Body: The objective is to map the "err" severity level to a standard "error" level for consistency across all log entries. The configuration for the Parse Severity Processor is as follows: - Condition: "true" (applies to all logs) - Log Body, Resource, or Attributes: Body - Severity Field: level - Severity Mappings: With this setup, when the log entry is processed, the "severity" field is updated as follows: Log After Processing: The severity level "err" is now normalized to "error," allowing for a uniform representation of severity levels across all log entries. This normalization facilitates more straightforward log analysis, filtering, and alerting, especially when dealing with logs from multiple sources with different severity naming conventions.]]>https://observiq.com/docs/resources/processors/parse-severityhttps://observiq.com/docs/resources/processors/parse-severityWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse Severity]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : - - v1.36.0+ Description The Parse Severity Processor is designed to normalize severity fields in log data into user-defined values, enhancing the consistency and readability of log data. By allowing users to map existing severity values to standard levels, it aids in the uniform analysis and visualization of logs across varied sources. Use The processor is essential in environments where logs from different sources use varied severity naming conventions. By mapping these diverse severity indicators to standard values, it ensures that the severity data remains consistent, making it easier to filter, analyze, and generate insights from the log data. Configuration Field Description : : Condition An OTTL condition that must evaluate to true for the processor to be applied to the logs, allowing selective processing of entries. 
Match Specifies the location of the severity value in the log entry: body, resource, or attributes. Severity Field The specific field that contains the severity value to be parsed and normalized. Trace A list of values that 'TRACE' severity should be mapped to. Debug A list of values that 'DEBUG' severity should be mapped to. Info A list of values that 'INFO' severity should be mapped to. Warn A list of values that 'WARN' severity should be mapped to. Error A list of values that 'ERROR' severity should be mapped to. Fatal A list of values that 'FATAL' severity should be mapped to. Example Configurations Available Parsing Formats In addition to simple string matching, this processor supports some unique value mapping options. For example, HTTP status code ranges can easily be assigned using notation such as 2xx, seen below. Available HTTP status code ranges include 1xx, 2xx, 3xx, 4xx, and 5xx. Another unique value mapping is a range of numbers, such as 8-12. This will map any number in that range, such as 9, to the log level this range is assigned to. Normalize Severity Levels in Log Data In this example, the Parse Severity Processor is configured to normalize severity levels from the "level" field in the log body into user-defined standard levels. Here is a sample log entry: Body: The objective is to map the "err" severity level to a standard "error" level for consistency across all log entries. The configuration for the Parse Severity Processor is as follows: - Condition: "true" (applies to all logs) - Log Body, Resource, or Attributes: Body - Severity Field: level - Severity Mappings: With this setup, when the log entry is processed, the "severity" field is updated as follows: Log After Processing: The severity level "err" is now normalized to "error," allowing for a uniform representation of severity levels across all log entries. This normalization facilitates more straightforward log analysis, filtering, and alerting, especially when dealing with logs from multiple sources with different severity naming conventions.]]>https://observiq.com/docs/resources/processors/parse-severity-v2https://observiq.com/docs/resources/processors/parse-severity-v2Wed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse Key Value]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.45.0+ Description The Parse Key Value processor is utilized to parse key value pairs from specified fields within log, metric, or trace data. The processor supports specifying the source field and the target field for the parsed key value data, offering flexibility in handling diverse data structures. Use When dealing with telemetry data that includes key value pairs embedded within logs, metrics, or traces, the Parse Key Value Processor becomes instrumental. For instance, logs from certain applications or systems might contain key value pairs representing specific attributes or metadata. By utilizing the Parse Key Value Processor, these key value pairs can be parsed and converted into structured data, enhancing readability and facilitating more complex queries and analyses. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition The condition to apply the Key Value parsing. It supports OTTL expressions for logs, metrics, and traces. This field determines which telemetry data entries are processed based on their content and attributes. Source Field Type Determines the type of source field for logs, metrics, or traces. 
This can be Resource, Attribute, Body, or Custom for logs and Resource, Attribute, or Custom for metrics and traces. It defines where the processor should look to find the key value pairs to parse. Source Field Specifies the exact field where the key value pairs are located, based on the selected Source Field Type. For instance, if the Source Field Type is Attribute, this field should specify the particular attribute containing the key value pairs. Target Field Type Like the Source Field Type, this field determines the type of target field for logs, metrics, or traces where the parsed key value pairs will be stored. The options are similar, allowing users to store the parsed data as a resource, attribute, body, or in a custom field. Target Field Specifies the exact field where the parsed key value pairs data will be stored, based on the selected Target Field Type. This allows users to organize and structure the parsed data in a manner that facilitates easy querying and analysis. Delimiter Specifies the string that should be used to split a key value pair. The default is =. Pair Delimiter Specifies the string that should be used to separate multiple pairs from each other. The default is a single space(" "). - It is not supported to parse from the "Body" or "Attributes" field into the "Resource" field. Example Configurations Parse Key Value Pairs from Logs In this example, we are looking to parse key value pairs from a log's attribute field and store the parsed data in another attribute field. The logs contain key value pairs detailing additional information about log events, and we want to make this data more accessible. We want to parse the key value pairs from the eventDetails attribute and store them as structured data within the log entry. The configuration for the Parse Key Value Processor would be: - Condition: "attributes['eventDetails'] != nil" - Source Field Type: Attribute - Source Field: eventDetails - Target Field Type: Attribute - Target Field: parsedEventDetails - Delimiter: : - Pair Delimiter: ! The resulting log entry after processing would be: This structured format makes it easier to filter and analyze the log data based on the action and status fields.]]>https://observiq.com/docs/resources/processors/parse-key-valuehttps://observiq.com/docs/resources/processors/parse-key-valueWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse JSON]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.36.0+ Description The Parse JSON Processor is utilized to parse JSON strings from specified fields within log, metric, or trace data. It's particularly useful when your telemetry data contains serialized JSON strings, and you need to convert them into a structured format for easier analysis and filtering. The processor supports specifying the source field and the target field for the parsed JSON data, offering flexibility in handling diverse data structures. Use When dealing with telemetry data that includes JSON strings embedded within logs, metrics, or traces, the Parse JSON Processor becomes instrumental. For instance, logs from certain applications or systems might contain JSON strings representing specific attributes or metadata. By utilizing the Parse JSON Processor, these JSON strings can be parsed and converted into structured data, enhancing readability and facilitating more complex queries and analyses. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition The condition to apply the JSON parsing. 
It supports OTTL expressions for logs, metrics, and traces. This field determines which telemetry data entries are processed based on their content and attributes. Source Field Type Determines the type of source field for logs, metrics, or traces. This can be Resource, Attribute, Body, or Custom for logs and Resource, Attribute, or Custom for metrics and traces. It defines where the processor should look to find the JSON string to parse. Source Field Specifies the exact field where the JSON string is located, based on the selected Source Field Type. For instance, if the Source Field Type is Attribute, this field should specify the particular attribute containing the JSON string. Target Field Type Like the Source Field Type, this field determines the type of target field for logs, metrics, or traces where the parsed JSON data will be stored. The options are similar, allowing users to store the parsed data as a resource, attribute, body, or in a custom field. Target Field Specifies the exact field where the parsed JSON data will be stored, based on the selected Target Field Type. This allows users to organize and structure the parsed data in a manner that facilitates easy querying and analysis. Example Configurations Parse JSON from Logs In this example, we are looking to parse JSON strings from a log's attribute field and store the parsed data back into another attribute field. The logs contain JSON strings detailing additional information about log events, and we want to make this data more accessible. Here is a sample log entry: We want to parse the JSON string from the eventDetails attribute and store it as structured data within the log entry. The configuration for the Parse JSON Processor would be: - Condition: "attributes['eventDetails'] != nil" - Source Field Type: Attribute - Source Field: eventDetails - Target Field Type: Attribute - Target Field: parsedEventDetails The resulting log entry after processing would be: This structured format makes it easier to filter and analyze the log data based on the action and status fields.]]>https://observiq.com/docs/resources/processors/parse-jsonhttps://observiq.com/docs/resources/processors/parse-jsonWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Parse CSV]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.45.0+ Description The Parse CSV Processor is utilized to parse CSV strings from specified fields within log, metric, or trace data. It's particularly useful when your telemetry data contains serialized CSV strings, and you need to convert them into a structured format for easier analysis and filtering. The processor supports specifying the source field and the target field for the parsed CSV data, offering flexibility in handling diverse data structures. Use When dealing with telemetry data that includes CSV strings embedded within logs, metrics, or traces, the Parse CSV Processor becomes instrumental. For instance, logs from certain applications or systems might contain CSV strings representing specific attributes or metadata. By utilizing the Parse CSV Processor, these CSV strings can be parsed and converted into structured data, enhancing readability and facilitating more complex queries and analyses. Configuration Field Description : : Telemetry Type The type of telemetry to apply the processor to. Condition The condition to apply the CSV parsing. It supports OTTL expressions for logs, metrics, and traces. This field determines which telemetry data entries are processed based on their content and attributes. 
Source Field Type Determines the type of source field for logs, metrics, or traces. This can be Resource, Attribute, Body, or Custom for logs and Resource, Attribute, or Custom for metrics and traces. It defines where the processor should look to find the CSV string to parse. Source Field Specifies the exact field where the CSV string is located, based on the selected Source Field Type. For instance, if the Source Field Type is Attribute, this field should specify the particular attribute containing the CSV string. Target Field Type Like the Source Field Type, this field determines the type of target field for logs, metrics, or traces where the parsed CSV data will be stored. The options are similar, allowing users to store the parsed data as a resource, attribute, body, or in a custom field. Target Field Specifies the exact field where the parsed CSV data will be stored, based on the selected Target Field Type. This allows users to organize and structure the parsed data in a manner that facilitates easy querying and analysis. Header Field Type Like the Source Field Type, this field determines the type of header field for parsing the CSV line. The default option, Static String, allows you to specify the CSV headers as a fixed string. The other options are similar to Source Field, allowing users to select dynamic headers from a resource, attribute, body, or in a custom field. Headers Only relevant when Header Field Type is set to Static String. This is the static CSV header row to use when parsing. Header Field Specifies the exact field where the CSV header row is located. This header will be used to determine the fields to use when parsing the CSV string. Delimiter Specifies the delimiter to be used as the separator between fields. By default, &quot;,&quot; is used. Header Delimiter Specifies the delimiter to be used for the header row, if it differs from the delimiter used in the CSV row. If unspecified, Delimiter is used as the header delimiter. Mode Specifies the mode to use when parsing. Strict mode follows normal CSV parsing rules. Lazy Quotes allows bare quotes in the middle of an unquoted field. Ignore Quotes ignores all quoting rules for CSV, splitting purely based on the delimiter. Example Configurations Parse CSV from Logs In this example, we are looking to parse CSV strings from a log's body field and store the parsed data into the attributes field. The logs contain CSV strings detailing a web request, and we want to make this data more accessible. Here is a sample log entry: We want to parse the CSV string from the Body and store it as structured data within the log entry. The configuration for the Parse CSV Processor would be: - Condition: true - Source Field Type: Body - Source Field: Left empty - Target Field Type: Attribute - Target Field: Left empty - Header Field Type: Static String - Headers: ip,method,status - Delimiter: \t - Header Delimiter: , (the header row is comma-delimited, while the log line itself is tab-delimited) - Mode: Strict The resulting log entry after processing would be: This structured format makes it easier to filter and analyze the log data based on the ip, method, and status fields.]]>https://observiq.com/docs/resources/processors/parse-csvhttps://observiq.com/docs/resources/processors/parse-csvWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Move Field]]><![CDATA[Description The Move Field processor can be used to move a telemetry field. Use The Move Field processor is utilized for moving telemetry fields in metrics, logs, and traces based on specified conditions.
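As a rough illustration, moving the entire log body into an attribute (the "Moving An Entire Log Body" example described below) might be expressed as a standalone processor resource along these lines; the resource layout, the move_field type identifier, and the parameter keys are assumptions rather than a verbatim schema.

```yaml
# Sketch only: resource layout, type identifier, and parameter keys are assumed.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: move-body-example            # hypothetical resource name
spec:
  type: move_field                   # assumed type identifier for the Move Field processor
  parameters:
    - name: telemetry_types
      value: ["Logs"]
    - name: condition                # apply to every log record
      value: "true"
    - name: move_from                # assumed key for the "Move From" field
      value: body
    - name: move_to                  # assumed key for the "Move To" field
      value: 'attributes["body_nested"]'
```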
Supported Types Metrics Logs Traces : : : Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition A condition that determines when this processor is applied. Move From The telemetry field to move from. Move To The telemetry field to move to. Example Configuration(s) Moving An Entire Log Body In this configuration, the entire body field is moved to a field on attributes called body_nested. This method is useful for destinations that ignore or use the body field in an undesired manner. Web Interface Nesting A Field This configuration will nest an attributes field named time_local within another field named simply time. This is useful for simplifying or standardizing the data structure of incoming logs. In this example, note the use of bracket notation to create nested fields. Web Interface]]>https://observiq.com/docs/resources/processors/move-fieldhttps://observiq.com/docs/resources/processors/move-fieldWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Mask Sensitive Data]]><![CDATA[Description The Mask Sensitive Data processor can be used to detect and mask sensitive data. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types strings [Metrics, Logs, Traces] Which types of telemetry to apply masking rules to. default_rules enums See default rules Commonly used masking rules. custom_rules map See custom rules Create custom rules with the key being the rule name and the value being a regular expression to match against. exclude_resource_keys strings A list of resource keys to exclude from masking. exclude_attribute_keys strings A list of attribute keys to exclude from masking. exclude_body_keys strings A list of log body keys to exclude from masking. \_required field_ Default Rules Values - Credit Card: \b(?:(?:(?:\d{4}[- ]?){3}\d{4}|\d{15,16}))\b - Date of Birth: \b(0?[1-9]|1[0-2])\/(0?[1-9]|[12]\d|3[01])\/(?:\d{2})?\d{2}\b - Email: \b[a-zA-Z0-9._\/\+\-]+@[A-Za-z0-9.\-]+\.?[a-zA-Z]{0,6}\b - International Bank Account Number (IBAN): \b[A-Z]{2}\d{2}[A-Z\d]{1,30}\b - IPv4 Address: \b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b - IPv6 Address: \b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b - MAC Address: \b([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}\b - Phone Number: \b((\+\b)[1l][\-\. ])?\(?\b[\dOlZSB]{3,5}([\-\. ]|\) ?)[\dOlZSB]{3}[\-\. ][\dOlZSB]{4}\b - Social Security Number (SSN): \b\d{3}[- ]\d{2}[- ]\d{4}\b - US City, State: \b[A-Z][A-Za-z\s\.]+,\s{0,1}[A-Z]{2}\b - US Street Address: \b\d+\s[A-z]+\s[A-z]+(\s[A-z]+)?\s\d\b - US Zipcode: \b\d{5}(?:[-\s]\d{4})?\b - UUID/GUID: \b[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-4[a-fA-F0-9]{3}-[89aAbB][a-fA-F0-9]{3}-[a-fA-F0-9]{12}\b Example Configuration Basic Configuration Below is an example of a configuration using the defaults. Web Interface Standalone Processor Custom Rules Values Here you can add custom rules for masking. The Key is the name of the rule and the Value is the regular expression to match against. Example The default rule for Date of Birth masking would not match against a date that is separated by dashes, e.g. 01-01-1990, but we can include a stricter regular expression in the Custom Rules parameter. Here we created a rule called birth_date_dash with value \b(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])-(19|20)\d{2}\b. This will match against dates separated by dashes. Web Interface Standalone Processor Exclusions You can exclude fields from being masked based on their key by specifying excluded keys in the body, resources, or attributes, respectively.
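To make the custom-rule and exclusion options concrete, a standalone processor resource combining both might be sketched as follows; the resource layout and the mask_sensitive_data type identifier are assumptions, the parameter names follow the configuration table above, and the excluded key is purely hypothetical.

```yaml
# Sketch only: resource layout and type identifier are assumed;
# parameter names follow the configuration table above.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: mask-pii-example               # hypothetical resource name
spec:
  type: mask_sensitive_data            # assumed type identifier
  parameters:
    - name: telemetry_types
      value: ["Logs"]
    - name: default_rules
      value: ["Email", "Credit Card"]
    - name: custom_rules               # dash-separated dates of birth, e.g. 01-01-1990
      value:
        birth_date_dash: '\b(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])-(19|20)\d{2}\b'
    - name: exclude_body_keys          # hypothetical key that should never be masked
      value: ["trace_id"]
```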
Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/mask-sensitive-datahttps://observiq.com/docs/resources/processors/mask-sensitive-dataWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Marshal]]><![CDATA[Description The Marshal Processor moves fields onto the body and turns them into a JSON or Key Value string. To use this processor, the input body must not be a string; it must contain one or more fields. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : format enum None Which format to marshal to. Can be JSON, KV, or None. log_fields_to_include ottlFields [] Which log fields to include. log_fields_to_exclude ottlFields [] Which log fields to exclude. log_selection enum Include Whether to use include or exclude mode for log field selection. body_fields_to_include ottlFields [] Which body fields to include. body_fields_to_exclude ottlFields [] Which body fields to exclude. body_selection enum Exclude Whether to use include or exclude mode for body field selection. attribute_fields_to_include ottlFields [] Which attribute fields to include. attribute_fields_to_exclude ottlFields [] Which attribute fields to exclude. attribute_selection enum Exclude Whether to use include or exclude mode for attribute field selection. resource_fields_to_include ottlFields [] Which resource fields to include. resource_fields_to_exclude ottlFields [] Which resource fields to exclude. resource_selection enum Exclude Whether to use include or exclude mode for resource field selection. flatten bool false Whether to flatten fields after moving to body. log_field string bp.log The name of the body field to move log fields into. If empty, moves fields to top level. attribute_field string bp.attrs The name of the body field to move attribute fields into. If empty, moves fields to top level. resource_field string bp.res The name of the body field to move resource fields into. If empty, moves fields to top level. kv_delimiter string = The delimiter to use between key and value. kv_pair_delimiter string The delimiter to use between key value pairs. sort_by_keys bool false Ensure deterministic ordering of keys before marshaling. Basic Configuration Below is an example of a configuration using the defaults. It will select all body, attributes, and resource fields but will not flatten or marshal them. Web Interface Standalone Processor Key Value Example The configuration below will flatten and marshal the body into a string like this: name=test bp.log.severity_number=5 bp.attrs.baba=you bp.res.field1=val1 bp.res.field2=val2 In the advanced section, the KV delimiters can be customized and the bp.log, bp.attrs, and bp.res fields can be renamed or ignored, putting fields directly onto the body. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/marshalhttps://observiq.com/docs/resources/processors/marshalFri, 11 Oct 2024 15:08:18 GMT<![CDATA[Lookup Fields]]><![CDATA[Description The Lookup Fields processor can be used to add matching telemetry fields from a CSV file. Use The Lookup Fields processor is used to dynamically add fields based on an existing telemetry value. For each unit of telemetry processed, this processor will grab the value of a field and perform a lookup using a CSV file. If that value exists, the processor will add all other fields associated with that CSV row. The processor re-reads the CSV file every 60 seconds, allowing updates to be made without restarting the agent.
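As a loose illustration of how a lookup might be wired up, the sketch below keys off a hostname and enriches matching telemetry with region and environment columns; the resource layout, the lookup_fields type identifier, and the parameter keys are assumptions, the CSV rows are hypothetical, and it presumes the CSV contents can be supplied inline.

```yaml
# Sketch only: resource layout, type identifier, and parameter keys are assumed.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: lookup-host-metadata           # hypothetical resource name
spec:
  type: lookup_fields                  # assumed type identifier
  parameters:
    - name: csv                        # assumed key for the "CSV" field; inline contents presumed
      value: |
        host.name,region,env
        MacBook-Pro-4.local,us-east1,dev
        web-server-01,us-west1,prod
    - name: context                    # where the lookup key lives and where matches are added
      value: resource
    - name: field                      # the field whose value is looked up in the CSV
      value: host.name
```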
Supported Types Metrics Logs Traces BindPlane Agent : : : : v1.45.0+ Configuration Field Description : : CSV The CSV file used to perform a lookup operation. Context The context of the lookup operation. The source field must exist here in order to perform a lookup. If a lookup succeeds, all matching fields are also added to this location. Field The field to look up in the specified context. A lookup operation is performed if the name and value of this field match a header and value in the CSV file. Example Configuration(s) Adding Fields Based on Host In this configuration, the processor will perform a lookup on the host.name value of all incoming metrics. If this field exists on the resource of a metric and the specified CSV file contains a matching value with that header, all other fields in that CSV row will be added to the metric. For example, in this particular configuration, if a metric has a value of MacBook-Pro-4.local for host.name, the corresponding values for region and env will automatically be added to the same context. Web Interface Example CSV]]>https://observiq.com/docs/resources/processors/lookup-fieldshttps://observiq.com/docs/resources/processors/lookup-fieldsWed, 11 Sep 2024 17:46:43 GMT<![CDATA[Log Sampling]]><![CDATA[Description The Log Sampling processor can be used to filter out logs with a configured "drop ratio". Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : drop_ratio enum "0.50" The probability an entry is dropped (used for sampling). A value of 1.0 will drop 100% of matching entries, while a value of 0.0 will drop 0%. condition string true An [OTTL] expression used to match which log records to sample from. All paths in the [log context] are available to reference. All [converters] are available to use. [OTTL]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.109.0/pkg/ottl#readme [converters]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.109.0/pkg/ottl/ottlfuncs/README.md#converters [log context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.109.0/pkg/ottl/contexts/ottllog/README.md Use of the condition parameter requires Bindplane Agent v1.61.0 or above. Valid drop ratio values range from "0.0" (0%) to "1.00" (100%) in 5% increments. Note that the drop ratio value is a string. Example Configuration Filter out 75% of logs where Attribute "ID" == 1. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/log-samplinghttps://observiq.com/docs/resources/processors/log-samplingTue, 24 Sep 2024 09:50:47 GMT<![CDATA[Group by Attributes]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.0.0+ Description The Group by Attributes Processor groups telemetry with shared attribute values into the same resource. Use 1. "Promote" attributes so telemetry with those attributes gets grouped under a _resource_ of that value. 2. Compact telemetry data that shares a resource after batching. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Attribute Keys The attribute keys to group by and move to the resource. When no keys are specified, resources with the same attributes are compacted into one resource. Example Configurations 1. Promote an attribute An example of promoting the user attribute on logs: Before processor: After: 2. Compact telemetry with the same resource By default, the processor will compact telemetry that shares the same resource.
Using this processor after the batch processor can reduce the amount of data sent to some destinations.]]>https://observiq.com/docs/resources/processors/group-by-attributeshttps://observiq.com/docs/resources/processors/group-by-attributesThu, 13 Jun 2024 17:17:32 GMT<![CDATA[Google SecOps Standardization]]><![CDATA[This processor requires agent version 1.64.0 or newer to send fields to Google SecOps. In older agent versions, namespace and ingestion label fields will be added to telemetry but not parsed in Google SecOps. Description The Google SecOps Standardization processor can be used to add the log_type ingestion label, which specifies the appropriate SecOps Parser for your logs. Use The Google SecOps Standardization processor is to be used alongside the Google SecOps Exporter. This processor allows the user to configure the log type, namespace, and ingestion labels for logs sent to SecOps. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Log Type The type of log that will be sent. Namespace User-configured environment namespace to identify the data domain the logs originated from. Ingestion Labels Key-value pairs of labels to be applied to the logs when sent to Chronicle. Example Configuration Configure Google SecOps for Windows events This example configuration sets logType to "WINEVTLOG", namespace to "security", and ingestionLabels to a key-value pair: "environment" and "production". Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/google-secops-standardizationhttps://observiq.com/docs/resources/processors/google-secops-standardizationMon, 11 Nov 2024 19:01:29 GMT<![CDATA[Filter Severity]]><![CDATA[Description The Severity Filter processor can be used to filter out logs that do not meet a given severity threshold. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : severity enum TRACE Minimum severity to match. Log entries with lower severities will be filtered. condition string true An [OTTL] expression used to match which log records to filter. All paths in the [log context] are available to reference. All [converters] are available to use. [OTTL]: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.109.0/pkg/ottl#readme [converters]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.109.0/pkg/ottl/ottlfuncs/README.md#converters [log context]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.109.0/pkg/ottl/contexts/ottllog/README.md Valid severity levels: - TRACE - INFO - WARN - ERROR - FATAL Example Configuration Filter out INFO and TRACE logs where Attribute ID is less than 3. Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/filter-severityhttps://observiq.com/docs/resources/processors/filter-severityTue, 24 Sep 2024 18:24:18 GMT<![CDATA[Filter Metric Name]]><![CDATA[Metric Name Filter Processor The Metric Name Filter processor can be used to include or exclude metrics based on their name. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : action enum exclude Whether to exclude or include metrics that match. match_type enum strict Method for matching values. Strict matching requires that 'value' be an exact match. Regexp matching uses re2 to match a value. metric_names strings required One or more metric names to match on.
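For instance, dropping Kubernetes node metrics by regular expression could be sketched as a standalone resource like the one below (the pattern is written here with an explicit wildcard); the resource layout and the filter_metric_name type identifier are assumptions, while the parameter names come from the configuration table above.

```yaml
# Sketch only: resource layout and type identifier are assumed;
# parameter names follow the configuration table above.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: drop-node-metrics              # hypothetical resource name
spec:
  type: filter_metric_name             # assumed type identifier
  parameters:
    - name: action
      value: exclude
    - name: match_type
      value: regexp
    - name: metric_names
      value:
        - 'k8s.node.*'
```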
Example Configuration Web Interface Exclude Regexp Filter out (exclude) metrics that match the expression k8s.node.. Include Strict Include metrics that match, drop all other metrics.]]>https://observiq.com/docs/resources/processors/filter-metric-namehttps://observiq.com/docs/resources/processors/filter-metric-nameWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Filter HTTP Status]]><![CDATA[Description The HTTP Status processor can be used to filter out logs that contain a status code between a minimum and a maximum status code. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : minimum enum 100 Minimum Status to match. Log entries with lower status codes will be filtered. maximum enum 599 Maximum Status to match. Log entries with higher status codes will be filtered. Valid Minimum Status Codes: - 100 - 200 - 300 - 400 - 500 Valid Maximum Status Codes: - 199 - 299 - 399 - 499 - 599 Example Configuration Filter out all 1xx status codes and 2xx status codes. Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/filter-http-statushttps://observiq.com/docs/resources/processors/filter-http-statusWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Filter by Regex]]><![CDATA[Filter by Regex Processor The Filter by Regex processor can be used to include or exclude logs based on Regex matching body fields. This processor is exclusive to the Google license. Enterprise and Free users should use the Filter By Condition processor, which has more robust filtering. Metrics Logs Traces : : : Field Description : : Action The action to take when the filter condition is met. Include will retain matching logs. Exclude will remove matching logs. Regex The regular expression (Regex) that logs will be evaluated against. Match The type containing the field the Regex will be evaluated against. Options are "Body" and "Attributes". Field (Body) If Field Type is set to "body", this is the name of the body field Regex will be evaluated against. Leave empty to apply to the entire body. Field (Attributes) If Field Type is set to "attributes", this is the name of the attribute field Regex will be evaluated against. Example Configuration In this example, we exclude logs that have the body field "path" matching this Regex: .+(?:ql). Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/filter-by-regexhttps://observiq.com/docs/resources/processors/filter-by-regexWed, 17 Jul 2024 13:47:01 GMT<![CDATA[Filter by Field]]><![CDATA[Filter by Field Processor The Filter by Field processor can be used to include or exclude telemetry based on matched resource attributes, attributes, or log body fields. This processor has been deprecated and replaced with a new Filter by Condition processor. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. action enum exclude Whether to include (retain) or exclude (drop) matches. match_type enum strict Method for matching values. Strict matching requires that 'value' be an exact match. Regexp matching uses re2 to match a value. attributes map [] Attribute key value pairs to filter on. Telemetry is filtered if all attribute, resource, and body key pairs are matched. 
resources map [] Resource key value pairs to filter on. Telemetry is filtered if all attribute, resource, and body key pairs are matched. bodies map [] Log body key value pairs to filter on. Log records filtered if all attribute, resource, and body key pairs are matched. Example Configuration Excluding matching log records In this example, we exclude logs that have all of the following: - A host.name resource attribute that equals dev-server - An environment attribute that equals dev - A remote-ip log body field that equals 127.0.0.1 Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/filter-by-fieldhttps://observiq.com/docs/resources/processors/filter-by-fieldWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Filter by Condition]]><![CDATA[Filter by Condition Processor The Filter by Condition processor can be used to include or exclude telemetry based on a condition that is evaluated against the telemetry data. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Choose Telemetry Type The types of telemetry to filter. Action The action to take when the condition is met. Include will retain matching telemetry. Exclude will remove matching telemetry. Condition The condition to match against telemetry to include or exclude based on the action. Example Configuration Excluding matching log records In this example, we exclude logs that have all of the following: - A host.name resource attribute that equals dev-server - An environment attribute that equals dev - A remote-ip log body field that equals 127.0.0.1 Web Interface API Reference This processor can be defined as yaml and applied using the CLI or API. Type filter-by-condition Parameters Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The types of telemetry to filter. action enum exclude The action to take when the condition is met. Include will retain matching telemetry. Exclude will remove matching telemetry. condition condition {"ottl":""} The condition to match against telemetry to include or exclude based on the action. Standalone Processor]]>https://observiq.com/docs/resources/processors/filter-by-conditionhttps://observiq.com/docs/resources/processors/filter-by-conditionWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Extract Metric (Deprecated)]]><![CDATA[This processor has been deprecated and replaced with a new Extract Metric processor that supports additional functionality. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. For more information about the new processor, see here. Description The Extract Metric Processor can look at all logs matching a filter, extract a numerical value from a field, and then create a metric with that value. Both the name and units of the created metric can be configured. Additionally, fields from matching logs can be preserved as metric attributes. Supported Types Metrics Logs Traces : : : Supported Agent Versions v1.14.0+ Configuration Field Type Default Description match string true A boolean expression used to match which logs to count. By default, all logs are counted. path string A boolean expression used to specify the field to extract from a matching log. metric_name string log.count The name of the metric created. metric_units string {logs} The unit of the metric created. See Unified Code for Units of Measure for available units. attributes map {} The mapped attributes of the metric created. Each key is an attribute name. 
Each value is an expression that extracts data from the log. Expression Language In order to match or extract values from logs, the following keys are reserved and can be used to traverse the logs data model. Key Description body Used to access the body of the log. attributes Used to access the attributes of the log. resource Used to access the resource of the log. severity_enum Used to access the severity enum of the log. severity_number Used to access the severity number of the log. In order to access embedded values, use JSON dot notation. For example, body.example.field can be used to access a field two levels deep on the log body. However, if a key already possesses a literal dot, users will need to use bracket notation to access that field. For example, when the field service.name exists on the log's resource, users will need to use resource["service.name"] to access this value. For more information about syntax and available operators, see the Expression Language Definition. Example Configurations Default Configuration By default, all logs collected by the source will be counted, with the value used to create a new metric called log.count with the unit of {logs}. Break Down HTTP Request Durations by Status In this configuration, we want to parse our HTTP server logs to create metrics representing how long each request took, broken down by status code. Our logs are JSON with the following structure: The match expression will exclude all logs without a status code in its body: Our path expression will be the path to the duration field of the body, which we know is the request duration in milliseconds. We'll name this metric http.request.duration, then we'll use the status code for the status_code metric attribute on the created metric:]]>https://observiq.com/docs/resources/processors/extract-metrichttps://observiq.com/docs/resources/processors/extract-metricTue, 30 Jul 2024 17:59:27 GMT<![CDATA[Extract Metric]]><![CDATA[Description The Extract Metric Processor creates new metrics based on log telemetry. For logs matching a filter, the processor will extract a numerical value from a field and then create a metric with that value. The name, unit, and type of the created metric can be configured. Additionally, fields from matching logs can be preserved as metric attributes. Supported Types Metrics Logs Traces : : : Supported Agent Versions v1.14.0+ Configuration The configuration of an Extract Metric processor consists of a number of sub-metrics that are defined using fields described below. Field Description : : Metric Name The name of the metric that will be created. Match The log context the source field is located in. Options are Body, Attributes, and Resource. Metric Field The name of the source field containing a numeric value that will become the new metric value. Metric Type The type of metric that will be created. Options are gauge_double, gauge_int, counter_double, and counter_int. Metric Unit The unit of the created metric. Some default choices provided with the ability to create a custom unit. Attributes Existing attributes on the source log that should be carried over. Can also specify new metric attributes. Each value is an OTTL path expression that extracts data from the log. Example Configuration In this configuration we are creating a new latency metric based on a field in the log body. Here is the new latency metric created as a gauge_double with seconds set as the unit. 
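A configuration along these lines might be written as a standalone processor resource roughly as follows; the resource layout, the extract_metric type identifier, and the parameter keys are assumptions inferred from the fields above, and the body field names are hypothetical.

```yaml
# Sketch only: resource layout, type identifier, and parameter keys are assumed;
# the body field names are hypothetical.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: extract-latency-metric         # hypothetical resource name
spec:
  type: extract_metric                 # assumed type identifier
  parameters:
    - name: metric_name
      value: latency
    - name: match                      # log context holding the source field
      value: body
    - name: metric_field               # hypothetical body field holding the numeric value
      value: duration
    - name: metric_type
      value: gauge_double
    - name: metric_unit                # seconds
      value: s
    - name: attributes                 # carry a status code over as a metric attribute
      value:
        status_code: 'body["status_code"]'
```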
Now we can see the new metric in the snapshot view.]]>https://observiq.com/docs/resources/processors/extract-metric-v2https://observiq.com/docs/resources/processors/extract-metric-v2Tue, 30 Jul 2024 17:59:27 GMT<![CDATA[Delete Fields]]><![CDATA[Description The Delete Fields processor can be used to remove attributes, resource attributes, and log record body keys from telemetry in the pipeline. Deleting Metric attributes may be unsound. Be careful when deleting metric attributes. Deleting attributes on metrics may cause multiple data points to have the same set of attributes, causing a datapoint collision. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. condition string true An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all telemetry. body_keys string [] One or more body key names to remove from telemetry data. Note: Body fields are applicable only for logs. attributes string [] One or more attribute names to remove from telemetry data. resource_attributes string [] One or more resource attribute names to remove from telemetry data. Example Configuration This example configuration removes the "spid" body field, the "log.file.name" attribute, and the "host.id" resource attribute from any log record. Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/delete-fields-v2https://observiq.com/docs/resources/processors/delete-fields-v2Tue, 22 Oct 2024 20:11:50 GMT<![CDATA[Delete Empty Values]]><![CDATA[Description The Delete Empty Values processor can be used to delete null and other empty values from telemetry resource attributes, telemetry attributes, or a log record's body. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. deleted_values enums ["Null Values"] List of values types to delete. May include "Null Values", "Empty Lists", and "Empty Maps". Enabling "Null Values" will remove both empty (zero-length) strings and entirely empty values. Enabling "Empty Lists" will delete empty (no element) list values. Enabling "Empty Maps" will delete empty (no key) map values. exclude_resource_keys strings [] List of resource keys to exclude from deletion. exclude_attribute_keys strings [] List of attribute keys to exclude from deletion. exclude_body_keys string [] List of body keys to exclude from deletion. empty_string_values strings [] List of string values that are considered "empty". String fields will be deleted if they match any of the strings in this list. Example Configuration This example configuration removes empty values from NGINX logs, where "-" is used to denote an empty field. Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/delete-empty-valueshttps://observiq.com/docs/resources/processors/delete-empty-valuesFri, 14 Jun 2024 14:04:08 GMT<![CDATA[Delete Fields]]><![CDATA[Description The Delete Fields processor can be used to remove attributes, resource attributes, log record body keys from telemetry in the pipeline. 
This processor has been deprecated and replaced with a new Delete Fields processor that supports additional functionality and improved layout. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. For more information about the new processor, see here. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. log_condition string true An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all logs. log_resource_attributes strings [] One or more resource attribute names to remove from logs. log_attributes strings [] One or more attribute names to remove from logs. log_body_keys strings [] One or more body key names to remove from log records. datapoint_condition string true An OTTL condition that must evaluate to true to apply this processor to metrics. By default, the processor applies to all datapoints. metric_resource_attributes strings [] One or more resource attribute names to remove from metrics. metric_attributes strings [] One or more attribute names to remove from datapoints. enable_traces bool true If true, this processor will operate on traces. span_condition string true An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all spans. trace_resource_attributes strings [] One or more resource attribute names to remove from traces. trace_attributes strings [] One or more attribute names to remove from spans. Example Configuration This example configuration removes the "host.id" resource attribute, the "log.file.name" attribute, and the "spid" body field from any log record. Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/delete-attributehttps://observiq.com/docs/resources/processors/delete-attributeWed, 23 Oct 2024 17:46:02 GMT<![CDATA[Deduplicate Logs]]><![CDATA[Description The Deduplicate Logs processor can be used to deduplicate logs over a time range and emit a single log with the count of duplicate logs. Logs are considered duplicates if the following match: - Severity - Log Body - Resource Attributes - Log Attributes Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : interval\ int 10 The interval in seconds on which to aggregate logs. An aggregated log will be emitted after the interval passes. log_count_attribute\ string log_count The name of the count attribute of deduplicated logs that will be added to the emitted log. timezone\ string UTC The timezone of the first_observed_timestamp and last_observed_timestamp log attributes that are on the emitted log. exclude_fields strings A list of fields to exclude from duplicate matching. Fields can be excluded from the log body or attributes. These fields will not be present in the emitted log. More details can be found here. \_required field_ exclude_fields Parameter The exclude_fields parameter allows the user to remove fields from being considered when looking for duplicate logs. Fields can be excluded from either the body or attributes of a log. Though the entire body cannot be excluded. Nested fields can be specified by delimiting each part of the path with a .. If a field contains a . as part of its name it can be escaped by using \.. 
Below are a few examples and how to specify them: - Exclude a timestamp field from the body -> body.timestamp - Exclude a host.name field from the log attributes -> attributes.host\.name - Exclude a nested ip field inside a src attribute -> attributes.src.ip Example Configuration Basic Configuration Setting a custom log_count_attribute and timezone while deduplicating logs on a 60 second interval. Web Interface Standalone Processor Exclude Fields This example shows the addition of exclude_fields. More information on exclude_fields can be found here. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/deduplicate-logshttps://observiq.com/docs/resources/processors/deduplicate-logsWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Custom]]><![CDATA[Custom Processor The Custom processor can be used to inject a custom processor configuration into a pipeline. A list of supported processors can be found here. The Custom processor is useful for solving use cases not covered by BindPlane OP's other processor types. Supported Types Metrics Logs Traces : : : The custom processor type can support all telemetry types; however, it is up to the user to enable or disable the correct types based on the processor being used. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] The list of telemetry types the processor will act on. configuration yaml required Enter any supported Processor and the YAML will be inserted into the configuration. Example Configuration Inject the following resource processor configuration: Web Interface Standalone Processor Configuration with Embedded Processor]]>https://observiq.com/docs/resources/processors/customhttps://observiq.com/docs/resources/processors/customWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Count Telemetry]]><![CDATA[Metrics Logs Traces BindPlane Agent : : : : v1.27.0+ Description The Count Telemetry Processor can count the number of logs, metric data points, or trace spans matching some filter, and create a metric with that value. Both the name and units of the created metric can be configured. Additionally, fields from matching logs can be preserved as metric attributes. Use Count Telemetry is especially useful as a way to convert your logs to metrics, allowing you to drop logs you don't need while still capturing signal from them. A frequent use case is to count how many logs you're getting from your web server by HTTP status code. This lets you see if you're getting 500s, without paying to store logs for your 200s. See below for specific configuration examples. Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Match Expression OTTL expression to find matching logs. Uses the log context for logs, the datapoint context for metrics, and the span context for traces. Metric Name The name of the metric created. Metric Units The unit of the metric created. See Unified Code for Units of Measure for available units. Metric Attributes The mapped attributes of the metric created. Each key is an attribute name. Each value is an expression that extracts data from the log. Example Configurations Count all telemetry By default, enabling metrics, traces, or logs will count all of their respective telemetry types. Below is an example of what this looks like when we want to count all logs.
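In rough terms, such a count-all-logs setup could be expressed as a standalone processor resource like the sketch below; the resource layout, the count_telemetry type identifier, and the parameter keys are assumptions, and the metric name and unit shown are illustrative.

```yaml
# Sketch only: resource layout, type identifier, and parameter keys are assumed;
# the metric name and unit are illustrative.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: count-all-logs                 # hypothetical resource name
spec:
  type: count_telemetry                # assumed type identifier
  parameters:
    - name: telemetry_types
      value: ["Logs"]
    - name: match                      # OTTL expression; "true" matches every log record
      value: "true"
    - name: metric_name
      value: log.count
    - name: metric_units
      value: "{logs}"
```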
Count HTTP Requests by Status (logs) In this configuration, we want to parse our HTTP server logs to count how many requests were completed, broken down by status code. Our logs are JSON with the following structure: The match expression will exclude all logs without a status code in the body: We'll name this metric http.request.count, then we'll use the status code for the status_code metric attribute on the created metric:]]>https://observiq.com/docs/resources/processors/count-telemetryhttps://observiq.com/docs/resources/processors/count-telemetryThu, 13 Jun 2024 17:17:32 GMT<![CDATA[Compute Metric Statistics]]><![CDATA[Description The Compute Metric Statistics processor can be used to calculate statistics for metrics over fixed time intervals to reduce metric throughput. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : interval int 60 The interval in seconds on which to aggregate metrics. Calculated metrics will be emitted after the interval passes. include regex . A regex that must match against the metric's name in order to calculate statistics from it. The default value matches all metrics. stats []string ["min", "max", "avg"] A list of statistics to calculate on matched metrics. Valid values are: min, max, avg, first, and last. Example Configuration Calculate Average, Minimum, and Maximum Calculate the average, minimum, and maximum values of each incoming metric, and emit them with a .avg, .min, and .max suffix, respectively. Web Interface Standalone Processor Take the Most Recent Value Take the last value of a metric over a 60 second interval, and emit the metric with a suffix of .last. Web Interface Standalone Processor]]>https://observiq.com/docs/resources/processors/compute-metric-statshttps://observiq.com/docs/resources/processors/compute-metric-statsWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Coalesce]]><![CDATA[Description The Coalesce processor can be used to consolidate many field names into a single field name. Use The Coalesce processor is utilized for consolidating telemetry fields in metrics, logs, and traces based on specified conditions. While this is similar in concept to SQL Coalesce, it has some key differences, especially around the order of precedence. See the behavior section. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition A condition that determines when this processor is applied. Action The action to take (insert, update, upsert) when coalescing telemetry. Coalesce From The telemetry fields to coalesce from. Coalesce To The telemetry field to coalesce to. Behavior The behavior of this processor is dependent on the selected action. When insert is selected, the target field only gets coalesced to if it doesn't exist. Precedence: First item in list. When update is selected, the target field gets coalesced to only if it already exists. Precedence: Last item in list. When upsert is selected, the target field gets coalesced to regardless of whether it existed or not. Precedence: Last item in list. Example Configuration(s) Coalesce timestamp fields This configuration will coalesce the body fields ts, time, and timestamp to an attribute named time for later parsing. Upsert is used for the action. Web Interface Coalesce severity fields This configuration will coalesce the body fields sev, severity, and level to an attribute named severity for later parsing. Upsert is used for the action.
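A standalone resource for the severity example just described might be sketched as follows; the resource layout, the coalesce type identifier, and the parameter keys are assumptions based on the configuration fields above.

```yaml
# Sketch only: resource layout, type identifier, and parameter keys are assumed.
apiVersion: bindplane.observiq.com/v1
kind: Processor
metadata:
  name: coalesce-severity              # hypothetical resource name
spec:
  type: coalesce                       # assumed type identifier
  parameters:
    - name: telemetry_types
      value: ["Logs"]
    - name: condition
      value: "true"
    - name: action                     # upsert: write the target whether or not it already exists
      value: upsert
    - name: coalesce_from              # assumed key for "Coalesce From"
      value:
        - 'body["sev"]'
        - 'body["severity"]'
        - 'body["level"]'
    - name: coalesce_to                # assumed key for "Coalesce To"
      value: 'attributes["severity"]'
```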
Web Interface Coalesce datacenter fields This configuration will coalesce the body fields dc, datacenter, and location to an attribute named datacenter for later parsing. Upsert is used for the action. Web Interface]]>https://observiq.com/docs/resources/processors/coalescehttps://observiq.com/docs/resources/processors/coalesceMon, 23 Sep 2024 13:54:19 GMT<![CDATA[Batch]]><![CDATA[Description The batch processor accepts spans, metrics, or logs and places them into batches. Batching helps better compress the data and reduce the number of outgoing connections required to transmit the data. This processor supports both size and time based batching. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : send_batch_size int 8192 Number of spans, metric data points, or log records after which a batch will be sent regardless of the timeout. send_batch_max_size int 0 The upper limit of the batch size. 0 means no upper limit of the batch size. This property ensures that larger batches are split into smaller units. It must be greater than or equal to send batch size. timeout string 200ms Time duration after which a batch will be sent regardless of size. Example: 2s (two seconds). Example Configuration Batch telemetry with the following options: - Send batches of size 200 - Max batch size of 1000 - Build batches for up to two seconds Web Interface]]>https://observiq.com/docs/resources/processors/batchhttps://observiq.com/docs/resources/processors/batchWed, 05 Jun 2024 17:59:27 GMT<![CDATA[Add Fields]]><![CDATA[This processor has been deprecated and replaced with a new Add Fields processor that supports additional functionality and improved layout. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new processor. For more information about the new processor, see here. Description The Add Fields processor can be used to add attributes, resources, and log record body keys from telemetry in the pipeline. Use The Add Fields processor is utilized for enriching telemetry data by appending or modifying attributes, resources, and log record body keys in metrics, logs, and traces based on specified conditions. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Log Condition An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all logs. Datapoint Condition An OTTL condition that must evaluate to true to apply this processor to metrics. By default, the processor applies to all datapoints. Span Condition An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all spans. Resource Keys One or more resource attribute names to add to logs. Resource Action insert: Add resource if it does not exist. update: Update existing value. upsert: Insert or update. Attribute Keys One or more attribute names to add to logs. Attribute Action insert: Add attribute(s) if it does not exist. update: Update existing value. upsert: Insert or update. Body Keys One or more body key names to add to log records. Body Action insert: Add body key(s) if it does not exist. update: Update existing value. upsert: Insert or update. Example Configuration(s) Append Resource to Metrics for Categorization by Source In this configuration, additional resource data is appended to the metrics to categorize and identify them based on their source. 
This is particularly useful for differentiating and filtering metrics that are generated from various environments or locations. - environment: dev - location: us-east1-b Web Interface Add Custom Attributes to Logs for Enhanced Searchability Based on Error Status This configuration will add a custom resource (application_name) to logs that have a log level of 'ERROR'. This can help in quickly identifying and tracing critical issues within a specific part of the application. Our example log body: The log condition we use to determine what logs to append the resource to:]]>https://observiq.com/docs/resources/processors/add-fieldshttps://observiq.com/docs/resources/processors/add-fieldsMon, 07 Oct 2024 18:49:43 GMT<![CDATA[Add Fields]]><![CDATA[Description The Add Fields processor can be used to add attributes, resources, and log record body keys to telemetry in the pipeline. Use The Add Fields processor is utilized for enriching telemetry data by appending or modifying attributes, resources, and log record body keys in metrics, logs, and traces based on specified conditions. Supported Types Metrics Logs Traces : : : Configuration Field Description : : Telemetry Types The types of telemetry to apply the processor to. Condition An OTTL condition that must evaluate to true to apply this processor. By default, the processor applies to all telemetry. Field Type Indicates the context in which the processor should operate: Attributes, Body, or Resource. Note: Body fields are applicable only for logs. Action Insert: Add field if it does not exist. Update: Update existing value. Upsert: Insert or update. Key Key to add or modify on the telemetry. Value Value associated with the defined key to add or modify on the telemetry. Example Configuration Append Resource to Metrics for Categorization by Source In this configuration, additional resource data is appended to the metrics to categorize and identify them based on their source. This is particularly useful for differentiating and filtering metrics that are generated from various environments or locations. - environment: dev - location: us-east1-b Web Interface Add Custom Attributes to Logs for Enhanced Searchability Based on Error Status This configuration will add a custom resource (application_name) to logs that have a log level of 'ERROR'. This can help in quickly identifying and tracing critical issues within a specific part of the application. Example log body: The log condition we use to determine what logs to append the resource to:]]>https://observiq.com/docs/resources/processors/add-fields-v2https://observiq.com/docs/resources/processors/add-fields-v2Wed, 30 Oct 2024 17:03:48 GMT<![CDATA[Zipkin]]><![CDATA[Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : hostname\ string Hostname or IP address of the Zipkin server. port int 14250 Port (gRPC) of the Zipkin server. path\ string "/api/v2/spans" API path to send traces to. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate TLS certificates. It is required only if the underlying operating system does not trust Zipkin's certificate. mutual_tls bool false Whether or not to use mutual TLS authentication. cert_file string A TLS certificate used for client authentication. key_file string A TLS private key used for client authentication.
\_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/zipkinhttps://observiq.com/docs/resources/destinations/zipkinThu, 09 Nov 2023 09:26:01 GMT<![CDATA[Victoria Metrics]]><![CDATA[Description This Victoria Metrics destination configures an OTLP exporter to send metrics to Victoria Metrics using the OTLP protocol. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.57.0+ Configuration Table Field Description : : Hostname Hostname or IP address of the Victoria Metrics server. Port TCP port to which the exporter is going to send metrics. Additional Headers Add additional headers to be attached to each request. Enable TLS Whether or not to use TLS. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. Mutual TLS Whether or not to use mutual TLS authentication. Mutual TLS Client Certificate File A TLS certificate used for client authentication. Mutual TLS Client Private Key File A TLS private key used for client authentication. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration In this configuration, we specify the hostname of the Victoria Metrics server telemetry is going to be sent to. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/victoria-metricshttps://observiq.com/docs/resources/destinations/victoria-metricsTue, 20 Aug 2024 16:21:04 GMT<![CDATA[Sumo Logic]]><![CDATA[This destination has been deprecated and replaced with a new Sumo Logic destination. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new destination. For more information about the new destination, see this documentation. Description This Sumo Logic destination can be configured to send logs and metrics data to a Sumo Logic HTTP logs and metrics source. Prerequisites A pre-existing Sumo Logic HTTP logs and metrics source needs to be configured for the exporter to work. Read more. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.37.0+ Configuration Field Description : : Choose Telemetry Type Select which types of telemetry to export. Logs and metrics are supported. Endpoint Endpoint for the Sumo Logic HTTP logs and metrics source. See the Sumo Logic documentation for more information. Compression Compression algorithm to use when sending data to Sumo Logic. Max Request Body Size Maximum HTTP request body size in bytes (1048576 = 1MiB) before compression is applied. Metadata Attributes List of regex(s) for attributes that should be sent as metadata. Log Format Format to send logs in to Sumo Logic. Available formats are JSON and text. Metric Format Format to send metrics in to Sumo Logic. See the Sumo Logic documentation for more information. Graphite Template Available when Metric Format is set to graphite. Template to be used for metric names. Default is just the metric name which is templated as %{_metric_}. Source Category Template that overrides the source category label configured for the Sumo Logic HTTP logs and metrics source. Source Name Template that overrides the source name label configured for the Sumo Logic HTTP logs and metrics source. 
Source Host Template that overrides the source host label configured for the Sumo Logic HTTP logs and metrics source. Timeout Timeout limit for each attempt to send data to Sumo Logic in seconds. Maximum timeout limit is 55s. For more information on the templated fields (graphite, source category, source name, and source host), refer to the exporter documentation. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration The following example configuration will send logs and metrics. Default compression and max request body size will be used. Logs will be sent as JSON and metrics will be sent as Graphite and utilize the Graphite template to alter the metric names. The source host template will be used as well to override the one on the Sumo Logic HTTP logs and metrics source. Sending and persistent queues will be used as well as retry on failure. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/sumo-logichttps://observiq.com/docs/resources/destinations/sumo-logicThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Sumo Logic]]><![CDATA[Description This Sumo Logic destination can be configured to send logs and metrics data to a Sumo Logic HTTP logs and metrics source. Prerequisites A pre-existing Sumo Logic HTTP logs and metrics source needs to be configured for the exporter to work. Read more. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.52.0+ Configuration Field Description : : Choose Telemetry Type Select which types of telemetry to export. Logs and metrics are supported. Endpoint Endpoint for the Sumo Logic HTTP logs and metrics source. See the Sumo Logic documentation for more information. Compression Compression algorithm to use when sending data to Sumo Logic. Max Request Body Size Maximum HTTP request body size in bytes (1048576 = 1MiB) before compression is applied. Log Format Format to send logs in to Sumo Logic. Available formats are JSON and text. Metric Format Format to send metrics in to Sumo Logic. See the Sumo Logic documentation for more information. Available values are prometheus and OTLP. Timeout Timeout limit for each attempt to send data to Sumo Logic in seconds. Maximum timeout limit is 55s. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration The following example configuration will send logs and metrics. Default compression and max request body size will be used. Logs will be sent as JSON and metrics will be sent in the Prometheus format. Sending and persistent queues will be used as well as retry on failure. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/sumo-logic-v2https://observiq.com/docs/resources/destinations/sumo-logic-v2Fri, 14 Jun 2024 14:04:08 GMT<![CDATA[Splunk (HEC)]]><![CDATA[Prerequisites Splunk Authentication Token and network access to the Splunk indexer. Creating a Splunk Token Go to the Settings Menu > Tokens Example: Creating a Token within Splunk Network Requirements Network access to the Splunk indexer, TCP: 8088 is the default.
Supported Platforms Platform Logs Metrics Traces : : : : Linux Windows macOS Configuration Table Parameter Type Default Description : : : : token string Authentication token used when connecting to the HTTP Event Collector. index string Optional name of the Splunk index targeted. hostname string localhost Hostname or IP address of the HTTP Event Collector. port int 8088 TCP port to which the exporter is going to send data. path string /services/collector/event The HTTP API path to which the exporter is going to send data. max_request_size int 2097152 The maximum size (in bytes) of a request sent to the destination. A value of 0 will send unbounded requests. The maximum allowed value is 838860800 (800MB). max_event_size int 2097152 The maximum size (in bytes) of an individual event. Events larger than this will be dropped. The maximum allowed value is 838860800 (800MB). enable_compression bool true Compress telemetry data using gzip before sending. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority that is used to validate TLS certificates. Configuration Example: Splunk Destination configuration Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/splunk-hechttps://observiq.com/docs/resources/destinations/splunk-hecTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Snowflake]]><![CDATA[Description This destination can send logs, metrics, and traces to Snowflake, a cloud data warehouse service. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.45.0+ Prerequisites - Snowflake data warehouse - Snowflake user with appropriate privileges The following guide will detail how to get a user initialized in Snowflake that can be used with this destination. It is meant to get telemetry flowing with limited time spent configuring. If you'd like to take an alternative approach, check out the exporter documentation on this topic here. Snowflake has a variety of ways to connect to it, but these instructions will be tailored for "Classic Console" as all accounts have access to it. Before starting, log in to Classic Console using a user that has access to the ACCOUNTADMIN role or another role in you Snowflake account that has permission to grant privileges and create users. If the default role is not the required one, then you'll need to assume that role using this SQL command (replace the role as needed): These instructions will grant privileges to one of the default roles Snowflake is initialized with, SYSADMIN. If you want to grant privileges to a different role then just switch out SYSADMIN for your role in the SQL commands. 1. Grant Warehouse Usage First, we need to grant the USAGE privilege to the SYSADMIN role on the data warehouse telemetry data will be stored in. Run this SQL command next (replace TEST with your warehouse name): 2. Grant Create Database Privilege Next the SYSADMIN role needs to be granted the ability to create databases in the Snowflake account. Run the following SQL to do so: 3. Create New User For BindPlane Now a new user needs to be created that the BindPlane Agent can login as. The user should also have the default role assigned as SYSADMIN, although it isn't necessary. Note: If the default role is not assigned, then the exporter will need to be configured with the correct role to work. 
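Taken together, steps 1 through 3 above correspond to SQL along the following lines. This is only a sketch: it assumes the warehouse is named TEST, the role is SYSADMIN, and the new user is called BINDPLANE_USER; substitute your own names and credentials.

```sql
-- Assume a role that may grant privileges and create users.
USE ROLE ACCOUNTADMIN;

-- Step 1: allow SYSADMIN to use the warehouse that will store telemetry.
GRANT USAGE ON WAREHOUSE TEST TO ROLE SYSADMIN;

-- Step 2: allow SYSADMIN to create the telemetry database.
GRANT CREATE DATABASE ON ACCOUNT TO ROLE SYSADMIN;

-- Step 3: create the user the BindPlane Agent will log in as (placeholder values).
CREATE USER BINDPLANE_USER
  PASSWORD = 'REPLACE_ME'
  LOGIN_NAME = 'bindplane_user'
  DEFAULT_ROLE = SYSADMIN;
```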
Remember the login name and password you use and configure the destination with these values. Replace the user, password, and login name in the following SQL to match yours: 4. Grant Privilege to SYSADMIN Role Even though the default role was set as SYSADMIN we still need to grant the new account permission to it. This can be done using the next SQL command (replace user as needed): Now we have a Snowflake user with the correct permissions to be able to create a database, schemas, and tables and also use the configured warehouse to store telemetry data in. Configuration Field Description : : Choose Telemetry Type The kinds of telemetry to send to Snowflake. Account Identifier The account identifier for the Snowflake account that data should be sent to. Warehouse THe Snowflake warehouse that telemetry data should be stored in. Username Username the exporter will use to authenticate with Snowflake. Password Password the exporter will use to authenticate with Snowflake. Database The Snowflake database that telemetry schemas will be stored in. Will be created if it doesn't exist. Log Schema The schema that the log table will be stored in. Will be created if it doesn't exist. Log Table The table that logs will be stored in. Will be created if it doesn't exist. Metric Schema The schema that the metric tables will be stored in. Will be created if it doesn't exist. Metric Table The prefix used for metric tables. Tables are created if they don't exist. See this exporter documentation for more. Trace Schema The schema that the trace table will be stored in. Will be created if it doesn't exist. Trace Table The table that traces will be stored in. Will be created if it doesn't exist. Role The Snowflake role the exporter should use. Only required if the default role of the provided credentials does not have correct privileges. Parameters Additional optional parameters the exporter should use when connecting to Snowflake. This option is generally not required. See this Snowflake documentation for more. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Here we will configure this destination to send all telemetry types to a Snowflake account using the default values for database resources. We'll also configure the sending queue, persistent queue, and retry on failure. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/snowflakehttps://observiq.com/docs/resources/destinations/snowflakeTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Splunk Observability Cloud]]><![CDATA[Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ['Logs', 'Metrics', 'Traces'] The types of telemetry data to send to Splunk Observability Cloud. Valid values are: Logs, Metrics, and Traces. token\ string Token used to authenticate with the Splunk (SignalFx) metric, trace, and log APIs realm enum us0 The Splunk API realm (region) to use when sending metrics, traces, and logs. Valid values are: us0, us1, us2, eu0, or jp0. 
\_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/signalfxhttps://observiq.com/docs/resources/destinations/signalfxMon, 12 Feb 2024 18:49:58 GMT<![CDATA[Google SecOps (Chronicle)]]><![CDATA[For agent v1.63.0 or older, Fallback Log Type is required. Currently, v2 of the legacy ingestion API and the alpha version of the DataPlane API are supported. Supported Types Metrics Logs Traces : : : Prerequisites Before setting up the Google SecOps destination, ensure you have a Google Cloud account and access to the Google SecOps security analytics platform. More details on setting this up can be found in the Google Cloud documentation here. Configuration Fields Protocol - gRPC selects the legacy API, using the Malachite endpoints and gRPC for ingestion - https selects the DataPlane API, using the DataPlane endpoints and HTTP for ingestion Legacy Ingestion API (Malachite) Field Description : : Endpoint The endpoint for sending to Google SecOps. Authentication Method Method used for authenticating to Google Cloud: auto, json, file. Credentials JSON value from a Google Service Account credential file. Required if Authentication Method is set to 'json'. Credentials File Path to a Google Service Account credential file on the collector system. Required if Authentication Method is set to 'file'. Log Type Type of log to be sent to Google SecOps. The Supported Log Types can be seen here. Customer ID The customer ID used for sending logs. Field to Send If Send Single Field is selected, choose Body or Attributes as the source of the field to send. Body Field or Attribute Field If Send Single Field is selected, an OTTL-formatted field from either the Body or Attributes that contains the raw log data. DataPlane API (https) Field Description : : Region The Google SecOps region to send to. Ingestion will only succeed for regions your credentials are provisioned for. Authentication Method Method used for authenticating to Google Cloud: auto, json, file. Credentials JSON value from a Google Service Account credential file. Required if Authentication Method is set to 'json'. Credentials File Path to a Google Service Account credential file on the collector system. Required if Authentication Method is set to 'file'. Log Type Type of log to be sent to Google SecOps. The Supported Log Types can be seen here. Customer ID The customer ID used for sending logs. Project Name The project name used for sending logs. Found in the Google Cloud Platform section of the SecOps settings. Forwarder Name The Config ID of the forwarder used for sending logs. Found in the Forwarders section of the SecOps settings. Field to Send If Send Single Field is selected, choose Body or Attributes as the source of the field to send. Body Field or Attribute Field If Send Single Field is selected, an OTTL-formatted field from either the Body or Attributes that contains the raw log data. Sources Google SecOps expects to be sent raw unstructured logs. Therefore, when sending logs to SecOps, you should only use the following supported sources: - Windows Events (With Advanced -> Raw Logs enabled) - Microsoft SQL Server - Common Event Format - CSV - File - HTTP - TCP - UDP Log Type Handling / Google SecOps Parsing Google SecOps uses the log_type ingestion label to determine which SecOps Parser should be applied to logs.
In BindPlane you can set the log_type ingestion label in one of the following ways: 1. Automatic Mapping: BindPlane will automatically create the log_type ingestion label for sources that use one of the following log_types. In these cases, you don't need to take any action. attributes[log_type] chronicle_log_type (Ingestion Label) windows_event.security WINEVTLOG windows_event.application WINEVTLOG windows_event.system WINEVTLOG sql_server MICROSOFT_SQL 2. Set Google SecOps Log Type: You can use the Google SecOps Standardization Processor to specify the appropriate SecOps ingestion label (log_type). It's best practice to always explicitly set this when sending logs to Google SecOps. You can optionally specify a namespace to identify an appropriate data domain and add additional ingestion labels to configure custom metadata. Note: The log_type field will take precedence over any automatic mapping that may occur. 3. Fallback: The Google SecOps Destination has a Fallback Log Type field that you can set as a fallback option, in case you did not set chronicle_log_type or BindPlane couldn't automatically map the log_type for you. Credentials This exporter requires a Google Cloud service account with access to the Google SecOps API. The service account must have access to the endpoint specified in the config. For the legacy API (gRPC), besides the default endpoint (https://malachiteingestion-pa.googleapis.com), there are also regional endpoints that can be used here. When using the DataPlane API (https), the available regions can be found here. For additional information on accessing SecOps, see the Chronicle documentation and the DataPlane documentation. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/secopshttps://observiq.com/docs/resources/destinations/secopsMon, 11 Nov 2024 19:01:29 GMT<![CDATA[Google SecOps (Chronicle) Forwarder]]><![CDATA[Supported Types Metrics Logs Traces : : : Prerequisites Ensure you have a Google SecOps forwarder set up and running. More details on setting this up can be found in the Security Operations documentation here. Configuration Fields Field Description : : Export Type The method of export to use, either syslog or file. Raw Log Field The field name containing raw log data. Syslog Endpoint The SecOps forwarder endpoint for Syslog (if Syslog is chosen as the export type). Syslog Transport The transport protocol to use (e.g., TCP, UDP) for Syslog. Enable TLS Whether or not to use TLS for secure transmission (relevant for Syslog). Skip TLS Verification Option to skip TLS certificate verification (if TLS is enabled). TLS Certificate File Path to the x509 PEM certificate (if TLS is enabled). TLS Private Key File Path to the x509 PEM private key (if TLS is enabled). TLS CA File Path to the x509 PEM certificate authority file (if TLS is enabled). File Path The path to the file for storing logs (if File is chosen as the export type).
Example Configurations Syslog Configuration Standalone Destination for Syslog Configuration File Configuration Standalone Destination for File Configuration]]>https://observiq.com/docs/resources/destinations/secops-forwarderhttps://observiq.com/docs/resources/destinations/secops-forwarderTue, 04 Jun 2024 14:21:20 GMT<![CDATA[QRadar]]><![CDATA[Supported Types Metrics Logs Traces : : : Configuration Table Parameter Default : : : Field to Send Body Whether to send a Body or Attribute field to QRadar. Body Field When Field to Send is Body, this is the body field that will be sent. If empty, all Body fields are sent to QRadar. Attribute Field When Field to Send is Attribute, this is the attribute field that will be sent. If empty, all Attribute fields are sent to QRadar. QRadar Endpoint The QRadar endpoint to send logs to. Transport Protocol tcp the transport protocol to use. Must be one of tcp or udp. \_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/qradarhttps://observiq.com/docs/resources/destinations/qradarWed, 02 Oct 2024 10:42:40 GMT<![CDATA[Prometheus]]><![CDATA[Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : listen_port int 9000 The TCP port the Prometheus exporter should listen on to be scraped by a Prometheus server listen_address string "127.0.0.1" The IP address the Prometheus exporter should listen on to be scraped by a Prometheus server namespace string When set, exports metrics under the provided value]]>https://observiq.com/docs/resources/destinations/prometheushttps://observiq.com/docs/resources/destinations/prometheusMon, 15 Apr 2024 12:36:34 GMT<![CDATA[Prometheus Remote Write]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : hostname\ string The hostname or IP address for the remote write backend. port\ int 9009 The port remote write backend. path\ string The API Path of the remote write URL. Ex: api/v1/metrics. headers map Additional headers to attach to each HTTP Request. The following headers cannot be changed: Content-Encoding, Content-Type, X-Prometheus-Remote-Write-Version, and User-Agent. external_labels map Label names and values to be attached as metric attributes. namespace string "" Prefix to attach to each metric name. enable_resource_to_telemetry_conversion bool false When enabled, it will convert all resource attributes to metric attributes. enable_write_ahead_log bool false Whether or not to enable a Write Ahead Log for the exporter. wal_buffer_size int 300 Number of objects to store in Write Ahead Log before truncating. Applicable if enable_write_ahead_log is true. wal_truncate_frequency int 60 Sets how often, in seconds, the Write Ahead Log should be truncated. Applicable if enable_write_ahead_log is true. enable_tls bool false Whether or not to use TLS. strict_tls_verify bool false Strict TLS Certificate Verification. ca_file string Certificate authority used to validate TLS certificates. This is not required if the collector's operating system already trusts the certificate authority. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. 
\_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : In order to buffer metrics to disk, the Write Ahead Log (WAL) may be enabled.]]>https://observiq.com/docs/resources/destinations/prometheus-remote-writehttps://observiq.com/docs/resources/destinations/prometheus-remote-writeMon, 15 Apr 2024 12:36:34 GMT<![CDATA[OpenTelemetry (OTLP)]]><![CDATA[Description This OTLP destination configures an OTLP exporter to send metrics, logs, and traces to an endpoint. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.36.0+ Configuration Table Field Description : : Hostname Hostname or IP address where the exporter will send OTLP data. Port TCP port to which the exporter is going to send OTLP data. Protocol The OTLP protocol to use when sending OTLP telemetry. Can be gRPC or HTTP. Compression Compression algorithm to use when sending data to the OTLP server. Ensure that the server supports the compression algorithm selected. Kinds of compression depend on Protocol. Additional Headers Add additional headers to be attached to each request. Enable TLS Whether or not to use TLS. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. Server Name Override Optional virtual hostname. Indicates the name of the server requested by the client. This option is generally not required. Read more here. Mutual TLS Whether or not to use mutual TLS authentication. Mutual TLS Client Certificate File A TLS certificate used for client authentication. Mutual TLS Client Private Key File A TLS private key used for client authentication. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration In this configuration, we specify the hostname of the OTLP server telemetry is going to be sent to, as well as what protocol will be used. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/otlphttps://observiq.com/docs/resources/destinations/otlpFri, 14 Jun 2024 14:04:08 GMT<![CDATA[Observe]]><![CDATA[Description This Observe destination can be used to send metrics, logs, and traces to an Observe account. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.41.0+ Prerequisites By default, The Observe platform will not display metrics and logs. You can enable metrics and logs by selecting "Applications" and choosing "Manage" on the OpenTelemetry application. The manage page will let you toggle support for metrics and logs. Configuration Field Description : : Customer ID A 12-digit number that identifies your account and is displayed in the URL you use to log into Observe. You can learn more here. Token A token with write-access to a Datastream in your Observe account. You can learn more here. The Observe platform relies on the "Timestamp" field for indexing logs. You can use the Parse Timestamp processor to parse your application's timestamps if they are not already parsed by your configured sources. Most BindPlane sources will handle timestamps correctly without additional configuration. 
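As a rough illustration of how the two required values fit into a resource, a standalone Destination might look like the sketch below. The type and parameter names are assumptions rather than values taken from this page, and the token is a placeholder.

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Destination
metadata:
  name: observe-example
spec:
  type: observe          # assumed type name
  parameters:
    - name: customer_id  # assumed parameter name
      value: "165866210346"
    - name: token        # assumed parameter name; placeholder value
      value: "REPLACE_WITH_DATASTREAM_TOKEN"
```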
This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration This example configuration will forward metrics, traces, and logs to the account 165866210346 using the token we created in the Observe web interface. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/observeinchttps://observiq.com/docs/resources/destinations/observeincThu, 18 Jul 2024 19:53:03 GMT<![CDATA[New Relic]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : endpoint\ enum https://otlp.nr-data.net Endpoint where the exporter sends data to New Relic. Endpoints are region-specific, so use the one according to where your account is based. Valid values are https://otlp.nr-data.net, https://otlp.eu01.nr-data.net, or https://gov-otlp.nr-data.net. license_key\ string License key used for data ingest. \_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/new-relichttps://observiq.com/docs/resources/destinations/new-relicMon, 15 Apr 2024 12:36:34 GMT<![CDATA[Loki]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : endpoint string The target URL to send Loki log streams to (e.g. [http://loki:3100/loki/api/v1/push]()). to. headers map Additional headers to attach to each HTTP Request. configure_tls bool false Configure advanced TLS settings. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate TLS certificates. mutual_tls bool false Whether or not to use mutual TLS authentication. cert_file string A TLS certificate used for client authentication. key_file bool A TLS private key used for client authentication. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/lokihttps://observiq.com/docs/resources/destinations/lokiThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Logz.io]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types\ strings ['Logs', 'Metrics', 'Traces'] The kind of telemetry that will be sent to the Exporter. Can be any combination of Logs, Metrics, and Traces. logs_token\ string Your logz.io account token for your logs account metrics_token\ string Your logz.io account token for your metrics account enable_write_ahead_log bool false Enables write-ahead logging for exporting metrics. wal_storage_path string $OIQ_OTEL_COLLECTOR_HOME/storage/logzio_metrics_wal Path of the directory the WAL is stored in. Must be unique to this destination. wal_buffer_size int 300 Number of objects to store in Write Ahead Log before truncating. wal_truncate_frequency int 60 Sets how often, in seconds, the Write Ahead Log should be truncated. listener_url\ string https://listener.logz.io:8053 The URL of the Logz.io listener in your region tracing_token string Your logz.io account token for your tracing account region\ enum "us" Your logz.io account region code. 
Valid options are: us, eu, uk, nl, wa, ca, au timeout int 30 Time to wait per individual attempt to send data to a backend \_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : \ \ \ This destination only supports sending queues and persistent queues for traces and logs. To queue metrics to disk, enable the Write Ahead Log (WAL) for metrics. Example Configuration Basic Configuration Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/logziohttps://observiq.com/docs/resources/destinations/logzioTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Kafka]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Specifies which types of telemetry to export. protocol_version enum "2.0.0" The Kafka protocol version to use when communicating with brokers. Valid values are: "2.2.1", "2.2.0", "2.0.0", or "1.0.0". brokers strings localhost:9092 A list of the brokers to connect to when sending metrics, traces, and logs. timeout int 5 Timeout (seconds) for every attempt to send data to the backend. log_topic string otlp_logs The name of the topic to export logs to. log_encoding enum otlp_proto The encoding to use when publishing logs to Kafka. Options are otlp_proto, otlp_json, and raw. metric_topic string otlp_metrics The name of the topic to export metrics to. metric_encoding enum otlp_proto The encoding to use when publishing metrics to Kafka. Options are otlp_proto and otlp_json. trace_topic string otlp_spans The name of the topic to export traces to. trace_encoding enum otlp_proto The encoding to use when publishing traces to Kafka. Options are otlp_proto, otlp_json, jaeger_proto, jaeger_json, zipkin_proto, and zipkin_json. compression enum gzip The compression algorithm to use when publishing data to Kafka. Options are gzip, snappy, lz4, and none. enable_auth bool false auth_type enum basic basic, sasl, or kerberos basic_username string basic_password string sasl_username string sasl_password string sasl_mechanism string SCRAM-SHA-256 SCRAM-SHA-256, SCRAM-SHA-512, or PLAIN kerberos_service_name string kerberos_realm string kerberos_config_file string /etc/krb5.conf kerberos_auth_type enum keytab keytab or basic kerberos_keytab_file string /etc/security/kafka.keytab kerberos_username string kerberos_password string Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Basic Configuration Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/kafkahttps://observiq.com/docs/resources/destinations/kafkaTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Jaeger]]><![CDATA[Description This Jaeger destination configures an OTLP exporter to send traces to a Jaeger server for ingestion. It replaces the pre-existing Jaeger destination that utilized a Jaeger-specific exporter. Supported Types Metrics Logs Traces Bindplane Agent : : : : v1.36.0+ Configuration Field Description : : Hostname Hostname or IP address of the Jaeger server. Port Port of the Jaeger server, either gRPC or HTTP depending on Protocol, to send OTLP data to. Read more. Protocol The OTLP protocol to use when sending to the Jaeger server. Can be gRPC or HTTP. Compression Compression algorithm to use when sending data to the OTLP server.
Ensure that the server supports the compression algorithm selected. Kinds of compression depend on Protocol. Additional Headers Add additional headers to be attached to each request. Enable TLS Whether or not to use TLS. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. Mutual TLS Whether or not to use mutual TLS authentication. Mutual TLS Client Certificate File A TLS certificate used for client authentication. Mutual TLS Client Private Key File A TLS private key used for client authentication. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Basic Configuration For basic configuration, we specify the hostname of the Jaeger server traces are going to be sent to, as well as what protocol will be used (in this case gRPC). Web Interface Standalone Destination Advanced Configuration This configuration is similar to the basic configuration but also utilizes TLS, Sending Queue, Persistent Queue, and Retry on Failure. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/jaeger-otlphttps://observiq.com/docs/resources/destinations/jaeger-otlpTue, 04 Jun 2024 14:21:20 GMT<![CDATA[InfluxDB]]><![CDATA[Description The InfluxDB destination supports sending logs, metrics, and traces to an InfluxDB system. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Metrics"] Choose Telemetry Type. host string localhost The IP address or hostname of the InfluxDB server to send telemetry to port int 8086 The port that the InfluxDB server is listening on for telemetry data. api_path string /api/v2/write URL path to send telemetry to. org string Name of the InfluxDB organization that the target bucket belongs to. bucket string Name of the InfluxDB bucket to write telemetry to. token string The authentication token used to authenticate with InfluxDB, if configured. metrics_schema enum telegraf-prometheus-v1 The metrics schema to use when writing metrics to InfluxDB. span_dimensions strings ["service.name","span.name"] Span attributes to use as InfluxDB tags. log_dimensions strings ["service.name"] Log attributes to use as InfluxDB tags. headers map {} Additional headers to attach to each HTTP request. configure_tls bool false Configure advanced TLS settings. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the InfluxDB server's TLS certificate. compression enum gzip Compression algorithm to use when sending telemetry to InfluxDB. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Web Interface InfluxDB OSS running locally: InfluxDB Cloud:]]>https://observiq.com/docs/resources/destinations/influxdbhttps://observiq.com/docs/resources/destinations/influxdbTue, 11 Jun 2024 17:01:55 GMT<![CDATA[Honeycomb Refinery]]><![CDATA[Description Sends logs and traces to Honeycomb Refinery. Prerequisites This destination requires network access to a Honeycomb Refinery deployment. If refinery is configured to forward telemetry to Honeycomb.io, an API key is required. The API key should have "Send Events" and "Create Dataset" permissions. 
See the Honeycomb Refinery and Honeycomb quick start guide for more information. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.60.0+ Configuration Field Description : : Choose Telemetry Type The kind of telemetry that should be sent to Honeycomb. Hostname Hostname or IP address where the exporter will send OTLP data. Port TCP port to which the exporter is going to send OTLP data. API Key The API key to use for sending telemetry to Honeycomb. Make sure the key has the "Send Events" permission. If the provided dataset(s) do not exist, use the "Create Dataset" permission as well. See this Honeycomb documentation for more. Compression Compression algorithm to use when sending data to Honeycomb. Available options are none or gzip. Enable TLS Whether or not to use TLS. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. Server Name Override Optional virtual hostname. Indicates the name of the server requested by the client. This option is generally not required. Read more here. Mutual TLS Whether or not to use mutual TLS authentication. Mutual TLS Client Certificate File A TLS certificate used for client authentication. Mutual TLS Client Private Key File A TLS private key used for client authentication. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Honeycomb Datasets Your Refinery server can be configured to route telemetry to specific Honeycomb datasets, however, Refinery will respect the service.name resource field when determining which dataset the telemetry should belong to. You can route telemetry to datasets dynamically by using the Add Fields processor to set service.name resource field. Example Configuration Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/honeycomb_refineryhttps://observiq.com/docs/resources/destinations/honeycomb_refineryWed, 30 Oct 2024 16:06:44 GMT<![CDATA[Honeycomb.io]]><![CDATA[Description Sends logs, metrics, and traces to Honeycomb.io using the OTLP exporter. Prerequisites A Honeycomb.io account, team, environment, and API key will need to be created before being able to send telemetry with this destination. See this Honeycomb quick start guide for more information. With a Honeycomb environment and API key setup, you can configure this destination to send to Honeycomb. Note that the API key will need the "Send Events" permission. If configuring this destination with a dataset that does not exist, the API key will also need the "Create Dataset" permission to work properly. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.36.0+ Configuration Field Description : : Choose Telemetry Type The kind of telemetry that should be sent to Honeycomb. API Key The API key to use for sending telemetry to Honeycomb. Make sure the key has the "Send Events" permission. If the provided dataset(s) do not exist, use the "Create Dataset" permission as well. See this Honeycomb documentation for more. Metrics Dataset Honeycomb dataset that metrics will be sent to. Will be created if it does not exist. See this Honeycomb documentation for more. Logs Dataset Honeycomb dataset that logs will be sent to. Will be created if it does not exist. If not set, the "service.name" log resource attribute will be used. See this Honeycomb documentation for more. Protocol The OTLP protocol to use when sending to Honeycomb. Can be HTTP or gRPC. 
See this Honeycomb documentation for more. Compression Compression algorithm to use when sending data to Honeycomb. Available options are none or gzip. Additional Headers Add additional headers to be attached to requests. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Honeycomb Datasets The Honeycomb destination can be configured to route telemetry to specific Honeycomb datasets, however, Refinery will respect the service.name resource field when determining which dataset the telemetry should belong to. You can route telemetry to datasets dynamically by using the Add Fields processor to set service.name resource field. Example Configuration For this configuration we'll configure to send logs, metrics, and traces. We specify an API key, Metrics Dataset, and Logs Dataset. In the Advanced section we specify gRPC protocol and gzip compression. We'll also enable Retry on Failure, Sending Queue, and Persistent Queueing. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/honeycombhttps://observiq.com/docs/resources/destinations/honeycombWed, 30 Oct 2024 16:06:44 GMT<![CDATA[Grafana Cloud]]><![CDATA[Description This Grafana Cloud destination can be used to send metrics, logs, and traces to a Grafana Cloud instance. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.36.0 Prerequisites In order for the BindPlane Agent to send data to Grafana Cloud, it needs to be configured with a valid Access Policy token, OTLP endpoint, and Grafana Cloud instance ID. Access Policy Token An Access Policy needs to be created with the correct permissions, which can be done in the Cloud Portal. Navigate to "Access Policies" underneath the "Security" section and choose "Create Access Policy". Name the new policy something descriptive and choose the "Realm" that best fits your use case. Under "Scopes" select the "Write" permission for the kind(s) of telemetry being exported to Grafana Cloud. On the card of the newly created Access Policy, select "Add Token". Give this token a descriptive name and select the expiration date that best fits. Note that the BindPlane Agent will need to be reconfigured with a new Access Policy token when the current one expires. The token value that appears next is what is needed for configuration. For more information, see this Grafana Cloud documentation. OTLP Endpoint and Instance ID These two values can be found in the same place. On the home page of your instance's Cloud Portal there should be an "OpenTelemetry" card. Click the "Configure" button and on the next page you can find the OTLP Endpoint and Instance ID for your Grafana Cloud instance. Copy these values and use them for configuring this destination. For more information, see this Grafana Cloud documentation. Configuration Field Description : : Choose Telemetry Type Select which types of telemetry to export to Grafana Cloud. OTLP Endpoint The URL to send OTLP data to. Can be found in the Cloud Portal under the OpenTelemetry card. Grafana Cloud Instance ID The ID for your Grafana Cloud instance. Can be found in the Cloud Portal under the OpenTelemetry card. Cloud Access Policy Token A token created for an Access Policy in Grafana Cloud. The Access Policy needs write permission for the telemetry being sent. Compression Compression algorithm to use when sending data to Grafana Cloud. 
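Functionally, this destination amounts to an OTLP exporter that authenticates with HTTP basic auth, using the instance ID as the username and the Access Policy token as the password. The hand-written OpenTelemetry Collector sketch below illustrates that shape; it is not the exact configuration BindPlane renders, and the endpoint, instance ID, and token are placeholders.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

extensions:
  basicauth/grafana:
    client_auth:
      username: "123456"            # Grafana Cloud instance ID (placeholder)
      password: "glc_xxxxxxxxxxxx"  # Access Policy token (placeholder)

exporters:
  otlphttp/grafana:
    endpoint: "https://otlp-gateway-prod-us-central-0.grafana.net/otlp"  # placeholder endpoint
    auth:
      authenticator: basicauth/grafana

service:
  extensions: [basicauth/grafana]
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/grafana]
```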
Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration This example configuration is sending just metrics to Grafana Cloud. The endpoint, instance ID, and token are configured for our Grafana Cloud instance. Also compression is enabled with gzip and retry on failure, sending queue, and the persistent queue and enabled with default values. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/grafana-cloudhttps://observiq.com/docs/resources/destinations/grafana-cloudTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Google Cloud]]><![CDATA[Supported Types Logs Metrics Traces : : : Prerequisites Network Requirements The following Google Cloud APIs must be enabled - Cloud Monitoring - Cloud Logging - Cloud Trace The following network access is required between the BindPlane Agent and the following Google API endpoint URLs. Metrics: metrics. Logging Google Cloud Logging API v2: logging.googleapis.com:443 Metrics Google Cloud Monitoring API v3: monitoring.googleapis.com:443 Traces Google Cloud Trace API: cloudtrace.googleapis.com:443 Typically, when sent telemetry from a GCP hosted system, these URLs are part of default network access. Authentication The Google Cloud destination supports two forms of authentication. - Access Scopes - Access Scopes are used when the BindPlane Agent is running on a GCP-hosted VM. - Service Account - Services Account auth is the _only_ option for sending telemetry from non-GCP-hosted agents. Google Cloud Access Scopes When running within Google Cloud, you can configure your Compute Engine instances with the following access scopes. This will allow the Google Cloud destination to configure authentication automatically. - logging.write - monitoring.write - trace.append You can read more about access scopes here. Service Account If running outside of Google Cloud, or within Google Compute Engine without access scopes, you can create a service account for authentication. 1. Create a Google service account following this documentation. 2. Assign your service account the following roles 1. Logs Writer: roles/logging.logWriter 2. Monitoring Metric Writer: roles/monitoring.metricWriter 3. Cloud Trace Agent: roles/cloudtrace.agent 3. Create and download a Service Account Access Key following this documentation. The downloaded access key will be used when configuring the Google Cloud destination. Configuration Table Parameter Type Default Description : : : : project string The Google Cloud Project ID to send logs, metrics, and traces to. auth_type enum auto The method used for authenticating to Google Cloud. 'auto' will attempt to use the collector's environment, which is useful when running on Google Cloud or when you have set GOOGLE_APPLICATION_CREDENTIALS in the collector's environment. 'json' takes the JSON contents of a Google Service Account's credentials file. 'file' is the file path to a Google Service Account credential file. credentials string JSON value from a Google Service Account credential file. credentials_file string Path to a Google Service Account credential file on the collector system. The collector's runtime user must have permission to read this file. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configurations Automatic Configuration This example uses the auto Authentication Method. 
When running within Google Cloud with the correct access scopes, the destination will perform automatic authentication and send metrics, traces, and logs to the project bindplane-gcp. Service Account Credentials (JSON) BindPlane OP can embed credentials into the agent configuration when the authentication method json is selected. When using the json option, paste the service account JSON key into the text box. This method is convenient as it does not require copying the service account key on the agent system. Service Account Credentials (File) If you would prefer to copy the service account key to the agent system instead of having BindPlane handle it, you can select the file option. In this example, the service account access key file is located at /opt/observiq-otel-collector/service_account.json.]]>https://observiq.com/docs/resources/destinations/google-cloudhttps://observiq.com/docs/resources/destinations/google-cloudThu, 17 Oct 2024 17:56:27 GMT<![CDATA[Google Cloud Managed Service for Prometheus]]><![CDATA[Google Cloud Managed Service for Prometheus is Google Cloud's fully managed, multi-cloud, cross-project solution for Prometheus metrics. This destination is compatible with Prometheus metrics only. Supported Types Metrics Logs Traces : : : Limitations The Google Managed Prometheus destination is intended to be used with Prometheus metrics, which can be received with the following sources: - Prometheus - OTLP This limitation is due to Google requiring the following resource attributes: - service.instance.id - service.name Google will translate those attributes into instance and job respectively. When scraping a Prometheus exporter with the Prometheus source type, the required resource attributes are present automatically. Any resource attributes that do not map up with Google's Prometheus target resource type will be removed from the metric. This can make it difficult to guarantee uniqueness between data points that were emitted from different sources. See the following documentation for more information on how Google handles monitored resource types - Resource Attribute Handling. - Prometheus Target resource type Prerequisites Network Requirements The following Google Cloud APIs must be enabled - Cloud Monitoring The following network access is required between the BindPlane Agent and the following Google API endpoint URLs. Metrics: metrics. Metrics Google Cloud Monitoring API v3: monitoring.googleapis.com:443 Typically, when sent telemetry from a GCP hosted system, these URLs are part of default network access. Authentication The Google Managed Prometheus destination supports two forms of authentication. - Access Scopes - Service Account Google Cloud Access Scopes When running within Google Cloud, you can configure your Compute Engine instances with the following access scopes, allowing the Google Managed Prometheus destination to configure authentication automatically. - monitoring.write You can read more about access scopes here. Service Account If running outside of Google Cloud, or within Google Compute Engine without access scopes, you can create a service account for authentication. 1. Create a Google service account following this documentation. 2. Assign your service account the following roles 1. Monitoring Metric Writer 3. Create and download a Service Account Access Key following this documentation. The downloaded access key will be used when configuring the Google Cloud destination. 
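For reference, the service account steps above can be performed from the command line roughly as follows; the project ID, account name, and key file path are placeholders.

```sh
# Create the service account (placeholder project and name).
gcloud iam service-accounts create bindplane-metrics --project my-gcp-project

# Grant the Monitoring Metric Writer role.
gcloud projects add-iam-policy-binding my-gcp-project \
  --member "serviceAccount:bindplane-metrics@my-gcp-project.iam.gserviceaccount.com" \
  --role "roles/monitoring.metricWriter"

# Create and download an access key to use when configuring the destination.
gcloud iam service-accounts keys create service_account.json \
  --iam-account bindplane-metrics@my-gcp-project.iam.gserviceaccount.com
```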
Configuration Table Parameter Type Default Description : : : : project string The Google Cloud Project ID to send logs, metrics, and traces to. auth_type enum auto The method used for authenticating to Google Cloud. 'auto' will attempt to use the collector's environment, useful when running on Google Cloud or when you have set GOOGLE_APPLICATION_CREDENTIALS in the collector's environment. 'json' takes the JSON contents of a Google Service Account's credentials file. 'file' is the file path to a Google Service Account credential file. credentials string JSON value from a Google Service Account credential file. credentials_file string Path to a Google Service Account credential file on the collector system. The collector's runtime user must have permission to read this file. default_location enum us-central1 Google Managed Prometheus requires a "location" resource attribute. This parameter inserts the resource attribute if it does not already exist. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configurations Automatic Authentication This example uses the auto Authentication Method. When running within Google Cloud with the correct access scopes, the destination will perform automatic authentication and send metrics, traces, and logs to the project bindplane-gcp. Service Account Credentials (JSON) BindPlane OP can embed credentials into the agent configuration when the authentication method json is selected. When using the json option, paste the service account JSON key into the text box. This method is convenient as it does not require copying the service account key on the agent system. Service Account Credentials (File) If you would prefer to copy the service account key to the agent system instead of having BindPlane handle it, you can select the file option. In this example, the service account access key file is located at /opt/observiq-otel-collector/service_account.json. Usage Create a configuration with the Prometheus source and Google Managed Prometheus destination. As an example, you can target the agent's metrics port, 8888. Configure the destination to point to your Google Cloud project, and add your service account credentials. The finished configuration will have a Prometheus source, and Google Managed Prometheus destination. You can add processors to add, remove, and modify metrics. Next, add at least one agent and rollout the configuration. In Cloud Monitoring, search for "Prometheus Target" using the metrics explorer. Select a metric and note that the job, service_instance_idand service_name metric labels can be used to identify which agent the metric originates from.]]>https://observiq.com/docs/resources/destinations/google-cloud-managed-service-for-prometheushttps://observiq.com/docs/resources/destinations/google-cloud-managed-service-for-prometheusThu, 13 Jun 2024 17:17:32 GMT<![CDATA[Elasticsearch (OTLP)]]><![CDATA[Description The Elasticsearch (OTLP) Destination configures an OTLP Exporter to send telemetry data (logs, metric, traces) to Elastic for ingestion. The OTLP gRPC Exporter is used for Self-Managed Elastic instances, and the OTLP/HTTP Exporter is used for Elastic Cloud instances. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.36.0+ Elasticsearch Exporter vs OTLP Exporter Per Elastic Documentation: "When using the OpenTelemetry Collector, you should always prefer sending data via the OTLP exporter to an Elastic APM Server. 
Other methods, like using the elasticsearch exporter to send data directly to Elasticsearch will send data to the Elastic Stack, but will bypass all of the validation and data processing that the APM Server performs. In addition, your data will not be viewable in the Kibana Observability apps if you use the elasticsearch exporter." How to Find Your APM Server URL and Secret Token Elastic Cloud - Navigate to your Elastic deployment. - Navigate to Management > Fleet > Agent Policies (Search for _agent policies_). - Select the Agent Policy you wish to configure your agent for. If none exist, one must be created. - Under the integrations tab, there should be a row titled Elastic APM. On the far right of this row is a menu of actions. Select the action Edit Integration. - Your Server URL is listed under General > Server Configuration > URL. - Your Secret Token is listed under Agent Authorization > Secret token and can be configured if desired. Self-Managed For Kubernetes hosted Elastic, reference the Elastic docs: Connect to the APM Server Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector Logs, Metrics, Traces Specifies which types of telemetry to export to Elasticsearch. deployment_type enum Elastic Cloud The deployment model of your elastic instance. Either Elastic Cloud or Self-Managed. Used to determine whether the http or gRPC protocol will be used, respectively. server_url string The URL of your Elastic APM Server. Telemetry will be sent to server_url/v1/logs, server_url/v1/metrics, server_url/v1/traces respectively. Only relevant for Elastic Cloud instances. hostname string The hostname or IP address of your Elastic APM Server. Only relevant for Self-Managed Elastic instances. grpc_port int 8200 TCP port to which the exporter is going to send OTLP data. Only relevant for Self-Managed Elastic instances. secret_token string The Secret Token for agents to authenticate with your Elastic APM Server. enable_tls bool true Enable advanced TLS settings. Only relevant for Self-Managed Elastic. Elastic Cloud instances always use TLS with TLS Verification enabled. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. tls_server_name_override string Optional virtual hostname. Indicates the name of the server requested by the client. This option is generally not required. mutual_tls bool false Whether or not to use mutual TLS authentication. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. compression enum gzip Compression algorithm to use when sending data to the OTLP server. Must be one of none, gzip, and zlib. headers map {} Additional headers to attach to each request. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Web Interface]]>https://observiq.com/docs/resources/destinations/elasticsearch_otlphttps://observiq.com/docs/resources/destinations/elasticsearch_otlpWed, 16 Oct 2024 19:25:54 GMT<![CDATA[Elasticsearch (Legacy)]]><![CDATA[Description The Elasticsearch (Legacy) Destination configures the Elasticsearch exporter to send telemetry data (logs, metric, traces) to Elastic for ingestion. 
If your Elastic deployment has the APM Server Integration, using the Elasticsearch (OTLP) Destination is recommended as described below. Supported Types Metrics Logs Traces : : : Elasticsearch Exporter vs OTLP Exporter Per Elastic Documentation: "When using the OpenTelemetry Collector, you should always prefer sending data via the OTLP exporter to an Elastic APM Server. Other methods, like using the elasticsearch exporter to send data directly to Elasticsearch will send data to the Elastic Stack, but will bypass all of the validation and data processing that the APM Server performs. In addition, your data will not be viewable in the Kibana Observability apps if you use the elasticsearch exporter." Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector Logs, Traces Specifies which types of telemetry to export to Elasticsearch. enable_elastic_cloud bool false Whether or not to enable support for Elastic Cloud. endpoints strings List of Elasticsearch URLs. e.g https://elastic.corp.net:9200. cloudid string The ID of the Elastic Cloud Cluster to publish events to. The cloudid can be used instead of endpoints. logs_index string logs-generic-default The index or datastream name to publish logs to. traces_index string traces-generic-default The index or datastream name to publish traces to. pipeline string Optional Ingest Node pipeline ID used for processing documents published by the exporter. enable_auth bool false Whether or not to enable authentication. auth_type enum basic Authentication Type to use. Options include "basic" and "apikey". user string Username used for HTTP Basic Authentication. password string Password used for HTTP Basic Authentication. api_key string Authorization API Key. configure_tls bool false Configure advanced TLS settings. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file string Certificate authority used to validate the database server's TLS certificate. mutual_tls bool false Whether or not to use mutual TLS authentication. cert_file string A TLS certificate used for client authentication if mutual TLS is enabled. key_file string A TLS private key used for client authentication if mutual TLS is enabled. retry_on_failure_enabled bool true Attempt to resend telemetry data that has failed to be transmitted to the destination. num_workers int 0 The number of workers publishing bulk requests concurrently. If 0, it defaults to the number of CPU cores. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : \ \ This destination only partially supports Retry on Failure; See the above configuration table for specific configuration for this destination type. Example Configuration Web Interface]]>https://observiq.com/docs/resources/destinations/elasticsearchhttps://observiq.com/docs/resources/destinations/elasticsearchWed, 16 Oct 2024 19:25:54 GMT<![CDATA[Dynatrace]]><![CDATA[This destination has been deprecated and replaced with a new Dynatrace destination. While it will continue to function, it will no longer receive any enhancements and you should migrate to the new destination. For more information about the new destination, see this documentation. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : metric_ingest_endpoint string "" Dynatrace Metrics Ingest v2 endpoint. This is required if OneAgent is not running on the agent host. 
More information on the endpoint and structure can be found here. api_token string API Token that is restricted to Ingest metrics scope. Required if metric_ingest_endpoint is specified. More information here. prefix string Metric Prefix that will be prepended to each metric name in prefix.name format. enable_tls bool false Whether or not to use TLS. insecure_skip_verify bool false Enable to skip TLS certificate verification. ca_file bool Certificate authority used to validate the database server's TLS certificate. cert_file bool A TLS certificate used for client authentication if mutual TLS is enabled. key_file bool A TLS private key used for client authentication if mutual TLS is enabled. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/dynatracehttps://observiq.com/docs/resources/destinations/dynatraceThu, 08 Feb 2024 21:27:35 GMT<![CDATA[Dynatrace]]><![CDATA[Description This Dynatrace destination configures an OTLP exporter to send telemetry data (logs, metrics, traces) to a Dynatrace server for ingestion. It supports sending data to both SaaS and ActiveGate deployment types of Dynatrace. Supported Types Metrics Logs Traces Bindplane Agent : : : v1.36.0+ Note for Metrics: Currently monotonic cumulative sums are not supported Configuration Field Description : : Choose Telemetry Type Select which types of telemetry to export (Logs, Metrics, Traces). Deployment Type Select the Dynatrace deployment type (SaaS or ActiveGate). ActiveGate Hostname or IP The hostname or IP address of your ActiveGate (required if Deployment Type is ActiveGate). Port The port to connect to. Default is 9999 for ActiveGate (required if Deployment Type is ActiveGate). Environment ID The Environment ID for your Dynatrace instance. Dynatrace API Token The API token used to authenticate with the Dynatrace API. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. For more information on specific parameters, see Dynatrace documentation. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration For a basic configuration, specify the telemetry types to export, deployment type, environment ID, and API Token. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/dynatrace-otlphttps://observiq.com/docs/resources/destinations/dynatrace-otlpTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Dev Null]]><![CDATA[Description Serves as a placeholder in a pipeline without configuring a destination. Useful for testing extensions or collector pipeline throughput without including a destination. Supported Types Metrics Logs Traces : : : Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector ["Logs", "Metrics", "Traces"] Choose Telemetry Type.]]>https://observiq.com/docs/resources/destinations/dev-nullhttps://observiq.com/docs/resources/destinations/dev-nullTue, 16 Apr 2024 14:04:22 GMT<![CDATA[Datadog]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Table Parameter Type Default Description : : : : site string US1 The Datadog site to send telemetry to. api_key string The API Key that is used for authentication. 
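As an illustration of how the two parameters above fit into a standalone Destination resource, a sketch might look like the following; the type name is an assumption and the API key is a placeholder.

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Destination
metadata:
  name: datadog-example
spec:
  type: datadog  # assumed type name
  parameters:
    - name: site
      value: US1
    - name: api_key
      value: "REPLACE_WITH_DATADOG_API_KEY"  # placeholder
```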
Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/datadoghttps://observiq.com/docs/resources/destinations/datadogMon, 15 Apr 2024 12:36:34 GMT<![CDATA[Custom]]><![CDATA[Description This Custom destination can be used to directly configure an OpenTelemetry Exporter. The Custom destination is useful for testing new Exporters or for fulfilling a niche use case that is not supported by BindPlane natively. The Custom Destination can only be used with components that are present in the BindPlane Agent. See the Included Components documentation for a list of supported components. Supported Types Logs Metrics Traces Bindplane Agent : : : : v1.30.0+ Configuration Field Description : : Choose Telemetry Type The kind of telemetry that will be sent to the Exporter. Can be any combination of logs, metrics, and traces. Configuration The YAML configuration for the Exporter. Example Configuration Logging Exporter The Logging Exporter is useful for debugging a pipeline, allowing the user to see real-time telemetry when viewing the agent's log file. It can be configured using the Custom Destination. Web Interface Standalone Destination AWS Kinesis Exporter At the time of this writing, BindPlane does not support the Kinesis Exporter natively. However, the bindplane-agent does support the Kinesis Exporter. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/customhttps://observiq.com/docs/resources/destinations/customTue, 04 Jun 2024 14:21:20 GMT<![CDATA[CrowdStrike Falcon LogScale]]><![CDATA[Description This destination configures an exporter to send logs to CrowdStrike Falcon LogScale. Supported Types Logs Metrics Traces BindPlane Agent : : : : v1.57.2+ Configuration Table Field Description : : Hostname Hostname of the CrowdStrike Falcon LogScale server. Port TCP port to which the exporter is going to send logs. Ingest Token The token which provides authentication to ingest logs. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Creating an Ingest Token An ingest token can be configured by following the CrowdStrike documentation. At the time of writing, navigate to 'Repositories and Views' and go to the repository you would like to set as the receiving repository. Then, go to the settings of that repository and navigate down to ingest tokens.]]>https://observiq.com/docs/resources/destinations/crowdstrike-falcon-logscalehttps://observiq.com/docs/resources/destinations/crowdstrike-falcon-logscaleMon, 09 Sep 2024 13:52:56 GMT<![CDATA[Coralogix]]><![CDATA[Supported Types Logs Metrics Traces : : : Configuration Parameter Type Default Description : : : : private_key\ string "" API Private Key. More information on finding your key can be found here. application_name\ string "" OTel objects that are sent to Coralogix are tagged with this Application Name. Find more on application names here. region\ string EUROPE1 Region of your Coralogix account associated with the provided private_key. See the reference table to see telemetry ingress endpoints related to each region. enable_metrics bool true enable_logs bool true enable_traces bool true subsystem_name string "" OTel objects that are sent to Coralogix are tagged with this Subsystem Name. Find more on subsystem names here.
timeout int 5 Seconds to wait per individual attempt to send data to a backend resource_attributes bool false application_name_attributes strings [] Ordered list of resource attributes that are used for Coralogix AppName. subsystem_name_attributes strings [] Ordered list of resource attributes that are used for Coralogix SubSystem. \_required field_ Coralogix Region Ingress Endpoints Region Traces Endpoint Metrics Endpoint Logs Endpoint : : : : USA1 otel-traces.coralogix.us:443 otel-metrics.coralogix.us:443 otel-logs.coralogix.us:443 APAC1 otel-traces.app.coralogix.in:443 otel-metrics.coralogix.in:443 otel-logs.coralogix.in:443 APAC2 otel-traces.coralogixsg.com:443 otel-metrics.coralogixsg.com:443 otel-logs.coralogixsg.com:443 EUROPE1 otel-traces.coralogix.com:443 otel-metrics.coralogix.com:443 otel-logs.coralogix.com:443 EUROPE2 otel-traces.eu2.coralogix.com:443 otel-metrics.eu2.coralogix.com:443 otel-logs.eu2.coralogix.com:443 Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : :]]>https://observiq.com/docs/resources/destinations/coralogixhttps://observiq.com/docs/resources/destinations/coralogixMon, 15 Apr 2024 12:36:34 GMT<![CDATA[ClickHouse]]><![CDATA[Description This ClickHouse destination can be used to send metrics, logs, and traces to a ClickHouse server. Supported Types Metrics Logs Traces Bindplane Agent : : : : v1.41.0+ Configuration Field Description : : Telemetry Types The kind of telemetry that will be sent to the ClickHouse server. Can be any combination of logs, metrics, and traces. Protocol The kind of protocol to be used when sending to the ClickHouse server. Can be TCP, HTTP, or HTTPS. See this ClickHouse documentation for more information. Endpoint The endpoint to use to send telemetry data to ClickHouse. Supports multiple endpoints. See this ClickHouse documentation for more information. Username Username to use to authenticate with the ClickHouse server. See this ClickHouse documentation for more information. Password Password to use to authenticate with the ClickHouse server. See this ClickHouse documentation for more information. Database Name of the database to use when interacting with the ClickHouse server. Logs Table Name Name of the table inside Database in ClickHouse to store log data in. Creates the table if it does not already exist. Metrics Table Name Name of the table inside Database in ClickHouse to store metric data in. Creates the table if it does not already exist. Traces Table Name Name of the table inside Database in ClickHouse to store traces data in. Creates the table if it does not already exist. TTL The data time-to-live, for example '30m'. 0 means no TTL. Make sure the telemetry sent has a timestamp field. Timeout Timeout for each attempt to send data to ClickHouse. Connection Parameters Additional connection parameters with map format. Used as query parameters in the URL. See this ClickHouse documentation for more information. Metrics stored in ClickHouse are grouped by their type (sum, gauge, etc.) and stored in tables specific to that type by appending _type to the table name. For example, using the example configuration below, gauge metrics would be stored in bpop.bp_metrics_gauge. To read more, see this documentation. TLS can be configured using Connection Parameters. To do so, add an entry called secure with the value true. An example can be seen below. The agent will fail to start if it cannot reach the ClickHouse server on start up. 
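Picking up the TLS note above: the secure connection parameter ends up as a query parameter on the endpoint URL. As a sketch in raw OpenTelemetry ClickHouse exporter terms (the BindPlane-rendered configuration and your endpoint, database, and table names will differ), it looks roughly like this:

```yaml
exporters:
  clickhouse:
    # Connection Parameters are appended to the endpoint as query parameters;
    # secure=true enables TLS for the TCP protocol. Host and port are illustrative.
    endpoint: tcp://clickhouse.example.net:9440?secure=true
    database: bpop
    logs_table_name: bp_logs
    metrics_table_name: bp_metrics
    traces_table_name: bp_traces
    username: bindplane
    password: ${env:CLICKHOUSE_PASSWORD}
```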
If the ClickHouse server becomes unreachable while the agent is already running, it will continue running while reporting an error. The agent can also fail to start if configured to use TTL, but the telemetry being sent lacks a Timestamp field. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Here we set a single endpoint that uses the TCP protocol. We are sending metrics, logs, and traces, so we have them selected and table names for them. We also utilize authentication and are setting compression and TLS via the connection parameters. Finally, we have the sending queue, persistent queue, and retry on failure enabled as well. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/clickhousehttps://observiq.com/docs/resources/destinations/clickhouseMon, 16 Sep 2024 16:32:55 GMT<![CDATA[BindPlane Gateway]]><![CDATA[Description The BindPlane Gateway destination is an OTLP destination meant to be used for nodes sending to a gateway. When using this destination in conjunction with a BindPlane Gateway source from another configuration, telemetry traveling through this destination will not be double counted in the Summary view. Supported Types Metrics Logs Traces BindPlane Agent : : : : v1.52.0+ Configuration Table Field Description : : Hostname Hostname or IP address where the exporter will send OTLP data. Port TCP port to which the exporter is going to send OTLP data. Protocol The OTLP protocol to use when sending OTLP telemetry. Can be gRPC or HTTP. Compression Compression algorithm to use when sending data to the OTLP server. Ensure that the server supports the compression algorithm selected. Kinds of compression depend on Protocol. Additional Headers Add additional headers to be attached to each request. Enable TLS Whether or not to use TLS. Skip TLS Certificate Verification Enable to skip TLS certificate verification. TLS Certificate Authority File Certificate authority used to validate TLS certificates. Server Name Override Optional virtual hostname. Indicates the name of the server requested by the client. This option is generally not required. Read more here. Mutual TLS Whether or not to use mutual TLS authentication. Mutual TLS Client Certificate File A TLS certificate used for client authentication. Mutual TLS Client Private Key File A TLS private key used for client authentication. Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration In this configuration, we specify the hostname of the BindPlane Gateway server telemetry is going to be sent to, as well as what protocol will be used. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/bindplane-gatewayhttps://observiq.com/docs/resources/destinations/bindplane-gatewayTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Azure Monitor]]><![CDATA[Description This Azure Monitor destination configures an exporter to send telemetry data (logs, metrics, traces) to Azure Monitor for ingestion. Supported Types Metrics Logs Traces Bindplane Agent : : : v1.36.0+ Configuration Field Description : : Choose Telemetry Type The telemetry type to apply this processor to (Logs, Metrics, Traces). Connection String The connection string to authenticate with Azure Monitor. Span Events Enabled Whether to send span events as part of telemetry data (relevant for Traces). 
Max Batch Size The maximum number of telemetry items sent in a batch. Max Batch Interval The maximum interval in seconds before a batch is sent. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration For a basic configuration, specify the telemetry types, connection string, and optionally, the span events setting for traces. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/azure-monitorhttps://observiq.com/docs/resources/destinations/azure-monitorTue, 04 Jun 2024 14:21:20 GMT<![CDATA[Azure Blob Storage]]><![CDATA[Supported Types Metrics Logs Traces : : : The Azure Blob Storage destination saves telemetry as OTLP JSON files in Azure Blob Storage. Configuration Parameter Type Default Description : : : : telemetry_types\ telemetrySelector Logs, Metrics, Traces Specifies which types of telemetry to export. connection_string\ string "" The connection string for the Azure Storage account. More information can be found here. container\ string "" Name of the Azure Storage container to export telemetry into. prefix string "" The root directory of the blob path to export telemetry into. blob_prefix string "" Prefix for the name of the exported telemetry files. partition\ enum minute The granularity of the timestamps in the blob path, either "minute" or "hour". compression enum gzip The compression algorithm to use when exporting telemetry, either "none" or "gzip". \_required field_ Supported Retry and Queuing Settings This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Basic Configuration For basic configuration, we specify the connection_string as well as a container, prefix, and blob_prefix. By default, we are still using a partition of minute and gzip for the compression value. This will create a blob path that looks like the following inside the otel container: Web Interface Standalone Destination Specify Partition and Compression Configuration This configuration is the same as the basic configuration but will specify a partition of hour and compression set to none. This will create a blob path that looks like the following inside the otel container: Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/azure-blob-storage-destinationhttps://observiq.com/docs/resources/destinations/azure-blob-storage-destinationTue, 04 Jun 2024 14:21:20 GMT<![CDATA[AWS S3]]><![CDATA[Supported Types Metrics Logs Traces : : : The AWS S3 destination saves telemetry into timestamped JSON files in an S3 bucket. Configuration Parameter Type Default Description : : : : telemetry_types\ telemetrySelector ["Logs", "Metrics", "Traces"] Specifies which types of telemetry to export. region\ enum us-east-1 AWS region of the bucket to export telemetry to. bucket\ string "" Name of the S3 Bucket to export telemetry into. prefix string "" The root directory of the bucket to export telemetry into. file_prefix string "" Prefix for the name of exported telemetry files. granularity\ enum minute The granularity of the timestamps in the S3 key, either "minute" or "hour". \_required field_ Credentials With AWS S3, users are required to provide some form of authentication. There are two ways to configure this: enter profile credentials manually or with the AWS CLI, or set environment variables that specify access keys for the user account.
The AWS CLI Getting Started guide will instruct you to install the CLI for either your current user or for all users. The observIQ OTel Collector runs as root by default, meaning the AWS CLI and credentials should be installed under the collector system's root account. CLI The easiest way to configure this is using the aws CLI program provided by AWS. The AWS CLI Getting Started guide describes how to install the CLI and configure it with credentials. The AWS S3 destination uploads telemetry to the specified bucket, so the credentials configured should be associated with an account that has s3:PutObject permissions for that bucket. Environment Variables Alternatively, AWS environment variables can be specified to override a credentials file. You can modify the collector's environment variables by configuring a systemd override. Run sudo systemctl edit observiq-otel-collector and add your access key, secret key, and region: After making that change, reload Systemd and restart the collector service.]]>https://observiq.com/docs/resources/destinations/aws-s3https://observiq.com/docs/resources/destinations/aws-s3Fri, 12 Apr 2024 10:33:42 GMT<![CDATA[Amazon Managed Prometheus]]><![CDATA[Description This Amazon Managed Prometheus destination can be used to send metrics to an Amazon Managed Prometheus (AMP) workspace in AWS. Supported Types Metrics Logs Traces BindPlane Agent : : : : v1.41.0+ Prerequisites Getting the BindPlane Agent authenticated and authorized with AWS requires completing the following steps. Before starting, make sure you are logged in to AWS Console and have permission to create users, create roles, and generate access tokens. 1. Create an AWS User the exporter can use. To start, head to the Identity and Access Management (IAM) service and under Access management navigate to Users. There should be an option to Create user on this page. This is the user the exporter and BindPlane Agent will be authenticated as, so name it appropriately. This new user will not need access to the AWS Management Console, so leave this option unselected. For permissions, you don't need to specify any at this point since the user will be assuming a role with the required permissions. Once the user is created, navigate to its summary page and copy the ARN (we will need it in the next step). 2. Create an AWS Role the exporter can assume. Now we need to create an AWS Role the exporter will assume. This is what will allow the exporter/BindPlane Agent to send data to the AMP workspace. To do this, head to IAM and under Access management navigate to Roles and select Create role. Under the first step, Select trusted entity, we are defining which AWS resource will be allowed to use this role, which we want to be the user we created in the first step. Select Custom trust policy. We want to edit just the Principal JSON field to contain the ARN copied in the first step. It should look similar to this: The next step in creating the role is Add permissions. Search for "Prometheus" in the search bar and a number of permissions should show up. The exporter only needs AmazonPrometheusRemoteWriteAccess, so select that. In the next area, give the role an appropriate name and description and finish with Create role. As an alternative to the AWS pre-defined AmazonPrometheusRemoteWriteAccess permission policy, you may also create your own permission policy. To read more, see this AWS documentation. If taking this approach, be sure the exporter will still have write access to the desired AMP Workspace or the BindPlane Agent will fail.
3. Give BindPlane Agent access keys. Now that the BindPlane Agent will have access to the AMP workspace, it needs access keys so it can access AWS as the user that was created for it in step 1. Head back to IAM and Users and select the user created in step 1. Now select the Security credentials tab and find the Access keys section. For the first step of creating a key, select Other and continue to the next step. Provide an appropriate description and select Create access key. Make note of or download the Access key and Secret access key values as AWS won't show these again. Now that the access keys have been created, we need to create environment variables for them on the same machine as the BindPlane Agent. The environment variables to add are AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which correspond to the values retrieved earlier in this step. You can modify the collector's environment variables by configuring a systemd override. Run sudo systemctl edit observiq-otel-collector and add the keys: When finished, reload Systemd. Now the BindPlane Agent will be able to authenticate with AWS as the user that was created back in step 1. For more information on specifying the AWS credentials for the BindPlane Agent, please see this AWS documentation. See this AWS documentation for alternative ways of using AWS environment variables for your specific environment. 4. Retrieve Amazon Managed Prometheus workspace endpoint. The last prerequisite step to configuring the Amazon Managed Prometheus destination is retrieving the remote write endpoint. If you haven't already created an Amazon Managed Prometheus workspace, search for "Amazon Prometheus" in the AWS console search bar and initialize one. With a workspace created, navigate to All workspaces inside of the Amazon Prometheus AWS service. Select the workspace you wish to send metrics to. On the summary page, copy the value for Endpoint - remote write URL. This is the endpoint you'll use when configuring the destination. Configuration Field Description : : Endpoint The remote write endpoint to send metrics to the Amazon Managed Prometheus workspace. See this AMP documentation for more. Region The region the Amazon Managed Prometheus workspace is located in. One place it can be found is in the given endpoint. See this AMP documentation for more. AWS Role The Amazon Resource Name (ARN) for the AWS role the exporter will assume. The role should have permission to write to Amazon Managed Prometheus. See this AMP documentation for more. STS Region The region to use for assuming the AWS Role. Useful for cross-region authentication (i.e. if the agent is located in a different region than the AMP workspace). See this AMP documentation for more. Session Name Optional name to give the session when the exporter assumes the AWS Role. Useful to differentiate sessions when multiple principals could assume the given role. See this AWS Documentation for more. Compression Compression algorithm to use when sending data to Amazon Managed Prometheus. Namespace Prefix to attach to each metric name. See this Prometheus documentation for more. Max Batch Size Maximum size, in bytes, for a batch of metrics to be sent to the AMP endpoint. If a batch is larger than this limit, it will be broken up into multiple batches. Convert Attributes Whether or not to convert all resource attributes to metric attributes. Unit & Type Suffixes Whether or not to attach the metric unit and metric type to the end of the metric name.
Created Metric Whether or not a "_created" metric is exported for Summary, Histogram, and Monotonic Sum metric points when "StartTimeUnixNano" is set. HTTP Headers Additional headers to attach to each HTTP Request. The following headers cannot be changed: Content-Encoding, Content-Type, X-Prometheus-Remote-Write-Version, and User-Agent. External Labels Label names and values to be attached as metric attributes. See this Prometheus documentation for more. Enable TLS Whether or not to use TLS. Strict TLS Verify Whether or not to use Strict TLS Certificate Verification. Certificate Authority File Certificate authority used to validate TLS certificates. Not required if the collector's operating system already trusts the certificate authority. Enable Mutual TLS Whether or not to use Mutual TLS. Client Certificate File A TLS certificate used for client authentication, if Mutual TLS is enabled. Client Private Key File A TLS private key used for client authentication, if Mutual TLS is enabled. Enable Remote Queue Whether or not to enable a remote write queue. This helps balance outgoing results. Remote Write Queue Size The number of metrics that can be queued. Remote Write Queue Consumers The minimum number of workers to use to fan out the outgoing requests. This destination supports the following retry and queuing settings: Sending Queue Persistent Queue Retry on Failure : : : Example Configuration Here we configure the destination to send to AMP by providing the endpoint, region, and ARN. Some advanced configuration options we make use of include compression, namespace, max batch size, convert attributes, unit and type suffixes, created metric, and external labels by defining a label called "bp_agent" with a value of "agent1". We also enable TLS, Strict TLS, and Mutual TLS and provide a certificate authority file, client certificate file, and client key file. We also have a remote write queue and Retry on Failure enabled with their respective default configurations. Web Interface Standalone Destination]]>https://observiq.com/docs/resources/destinations/aws-managed-prometheushttps://observiq.com/docs/resources/destinations/aws-managed-prometheusTue, 04 Jun 2024 14:21:20 GMT<![CDATA[No Registered Transform Agents]]><![CDATA[Troubleshooting the "No Registered Transform Agents" Error in BindPlane OP. Issue Overview The BindPlane Transform Agent is required for Live Preview. When the Transform Agent is misbehaving, or there is a configuration issue, the user may be presented with the following error in the BindPlane UI or log. > failed to get transform agent client: there are no registered transform agents Support This error generally indicates a misconfiguration issue; however, if Live Preview was working in the past and is not working now, it is recommended that you engage with Support. Linux On Linux, the Transform Agent operates as a subprocess. Make sure your configuration file at /etc/bindplane/config.yaml has the following section. Run the following command to check if the BindPlane service is running: If the Transform Agent sub-process is missing, check the logs at /var/log/bindplane/bindplane.log and with sudo journalctl -f --unit bindplane --lines 200. It is expected that the Transform Agent always be running without additional configuration, so it is recommended to engage observIQ Support. Docker On Docker, check to see if the Transform Agent container is running. If the Transform Agent is running, make sure BindPlane is configured with the following environment variables.
If the environment is configured correctly, view the recent logs from the BindPlane container. If the Transform Agent is running, and the environment is configured with the correct service name or hostname, it is recommended to engage observIQ Support. Kubernetes On Kubernetes, the Transform Agent is fully managed by the BindPlane OP Helm Chart. Make sure the Transform Agent containers are running. If the Transform Agent container is running, check to see if it has an endpoint in the bindplane-transform-agent clusterIP service. Notice that in the example, the Transform Agent pod has the IP address 10.244.0.3. Check for service endpoints for the pod's IP address. If the Transform Agent pod is running and contains a service endpoint, check the BindPlane server pod logs. Use the following commands as a reference. Select Deployment if operating BindPlane in high availability. If the Transform Agent is running, and the Transform Agent clusterIP service has a valid endpoint, it is recommended to engage observIQ Support.]]>https://observiq.com/docs/kb/transform-agent/no-registered-transform-agentshttps://observiq.com/docs/kb/transform-agent/no-registered-transform-agentsFri, 09 Aug 2024 20:20:04 GMT<![CDATA[Using Splunk UF with BindPlane OP]]><![CDATA[BindPlane OP and the BindPlane Agent can be used to collect data from your Splunk Universal Forwarders. This allows you to start taking advantage of BindPlane OP without the need to re-instrument your collectors at the edge. Step 1: Update your outputs.conf on your Universal Forwarders By default, the Splunk Universal Forwarder (UF) sends data over TCP in Splunk's proprietary Splunk to Splunk (S2S) protocol. In order to allow the BindPlane Agent to receive data from the UF, it will need to be sent in a raw format instead. This is accomplished by creating a Splunk output configuration stanza that disables the S2S protocol by setting the parameter sendCookedData to false. Below is a sample outputs.conf file, after you've made the required changes. Step 2: Deploy a BindPlane Agent as a Gateway This is the agent you'll be routing data through and is what will be managed by BindPlane OP. In a production environment, this is likely to be a fleet of agents behind a load balancer. See our Collector Sizing and Scaling docs for more details on determining your collector architecture. Step 3: Build the Configuration 1. Create a new configuration 2. Add the TCP Source and configure it to receive from your Universal Forwarders (as shown below) 3. Add the Splunk destination and configure it to point to your Splunk Enterprise or Splunk Observability Cloud environment. Step 4: Transform the Data Once you've verified data is flowing through the BindPlane Agent to Splunk without issue, you can now start re-routing data to different destinations and inserting processors into your pipeline to reduce the amount of data you're sending.]]>https://observiq.com/docs/how-to-guides/using-splunk-uf-with-bindplane-ophttps://observiq.com/docs/how-to-guides/using-splunk-uf-with-bindplane-opFri, 26 Apr 2024 22:44:21 GMT<![CDATA[Using Splunk OTEL Collector with BindPlane OP]]><![CDATA[BindPlane OP and the BindPlane Agent can be used to collect data from your Splunk OTel Collectors. This allows you to start taking advantage of BindPlane OP without the need to re-instrument your collectors at the edge. Step 1: Deploy a BindPlane Agent as a Gateway This is the agent you will be routing data through and is what will be managed by BindPlane OP.
In a production environment, this is likely to be a fleet of agents behind a load balancer. See our Collector Sizing and Scaling docs for more details on determining your collector architecture. Step 2: Build the Configuration 1. Create a new configuration 2. Add the OTLP Source. 3. Add a destination of your choice and configure it. Step 3: Configure your Splunk OTel Collectors to forward to BindPlane Agent Modify your Splunk OTel Collector configuration to use an otlp exporter. The exporter has many configuration options; see the readme for details. Below is a minimalist configuration example. Replace bindplane-gateway with the hostname or IP address of your BindPlane agent. Update your pipelines to include the new exporter. This example assumes you have a traces, metrics, and logs pipeline. Your configuration may differ.]]>https://observiq.com/docs/how-to-guides/using-splunk-otel-collector-with-bindplanehttps://observiq.com/docs/how-to-guides/using-splunk-otel-collector-with-bindplaneTue, 22 Oct 2024 14:42:06 GMT<![CDATA[Using Logstash with BindPlane OP]]><![CDATA[BindPlane OP and the BindPlane Agent can be used to collect data from your Logstash agents. This allows you to start taking advantage of BindPlane OP without the need to re-instrument your collectors at the edge. Step 1: Update your output stanza of the logstash conf.d files on your Logstash agents Caveats BindPlane expects the output from Logstash to be in JSON format. It depends on the codec => json_lines configuration, as shown in the examples below, to work as expected. Example output stanza This output stanza sends to a BindPlane agent installed on a host with the IP 10.10.1.5, and configured to listen on port 2255 (the default). Below are a pair of sample logstash conf.d files. After adding these, or modifying the output stanza of existing ones, restart the logstash service. Config for collecting from /var/log files using the logstash file plugin Config for collecting from a JSON-formatted log file using the logstash beats plugin Step 2: Deploy a BindPlane Agent as a Gateway This is the agent you will be routing data through and is what will be managed by BindPlane OP. In a production environment, this is likely to be a fleet of agents behind a load balancer. See our Collector Sizing and Scaling docs for more details on determining your collector architecture. Step 3: Build the Configuration 1. Create a new configuration 2. Add the Logstash Source and configure it to receive from your Logstash agents (as shown below) 3. Add a destination of your choice and configure it. Step 4: Transform the Data Once you have verified data is flowing through the BindPlane Agent to your destination without issue, you can now start re-routing data to different destinations and inserting processors into your pipeline to reduce the amount of data you are sending.]]>https://observiq.com/docs/how-to-guides/using-logstash-with-bindplane-ophttps://observiq.com/docs/how-to-guides/using-logstash-with-bindplane-opMon, 10 Jun 2024 15:38:35 GMT<![CDATA[Routing Telemetry]]><![CDATA[Routing Telemetry to a specific destination There are many ways to include or exclude logs sent to a particular destination. Two ways we will be walking through are: 1. Excluding logs based on a shared attribute, so they will not be sent to an individual destination. 2. Only sending logs that meet a criterion, in this example an added attribute. For this exercise we will start with excluding the logs. Excluding Logs We will start by using the Filter by Field Processor.
First we will select the 'Destination' Processor on the right side. Now we will identify a shared attribute across all of the logs we would like to exclude. We can do that by expanding entries in the telemetry example in the left-hand column. Now we can add the Filter by Field Processor. We will want to make sure we change the match type to regex if we will be using a fuzzy search. This enables you to use regex to grab something specific. If you select 'strict', it will need to be verbatim. In this example I will be excluding every log with the attribute key of 'log_type' and a value of 'bindplane' based on the values pulled from the sample on the left column. After saving, we will only need to roll out the change to make it take effect. Now we can verify our change worked. As you can see in this example, entries with a 'log_type' of 'bindplane' are being filtered out of the right-hand side. Including only tagged Logs The other way that logs can be sent to a single destination is by manually tagging the log file, then only moving the tagged logs to a single destination. First we will add an incoming processor to a log that we wish to tag. Now we will select the Add Fields Processor. Here we will be using an 'upsert' attribute action, and for this example a field of 'source' and a value of 'syslog'. We can save that processor now, and move on to the Destination Processor on the right-hand side, next to the Destination you would like to send these logs to. For this we will be using the Filter by Field Processor. We will want to configure it for this example by specifying the Action of 'include' and a Match Type of 'strict'. Below that in the Attribute Fields section, we will specify the field as 'source' and the value as 'syslog'. After you save and roll out the configuration to the agents, you can verify it is working by going back into the destination processor, which will show what is being sent to the destination in the far right column.]]>https://observiq.com/docs/how-to-guides/routing-telemetryhttps://observiq.com/docs/how-to-guides/routing-telemetryWed, 30 Oct 2024 16:06:44 GMT<![CDATA[Reduce Log Volume with the Severity Filter]]><![CDATA[Reducing the volume of logs you're sending to your destinations is a great way to increase the signal in your analysis tools and mitigate the costs associated with long-term log storage. This tutorial will show you how to use the Severity Filter processor in BindPlane OP to filter out logs in your pipeline. Below, I have a simple pipeline configured that's sending Postgres logs to both a Google and Splunk destination. Postgres is being used as an example; log filtering will work with any log source. To start, navigate to any agent and use Snapshots to inspect the log stream and determine what to filter. 1. Navigate to the bottom of the configuration page and click one of the agents. 2. On the agent details page, click the button called "View Recent Telemetry" at the top right of the page. When I do that, I see a lot of info and debug logs that I'd like to continue sending to Google, but I don't need in Splunk. Let's filter them out. 1. Navigate back to your configuration. 2. Click the processor node just before the Splunk destination. 3. Choose the Severity Filter processor and set the minimum severity to Warn (a sketch of the equivalent raw OpenTelemetry filter is shown after these steps). 4. You'll see a 1 appear on the processor node, indicating that the processor was successfully added. 5. After the agent receives the new configuration, you'll see the throughput measurements update to reflect the reduction of data going to Splunk.
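Under the hood, a minimum-severity rule like this is the kind of thing the OpenTelemetry filter processor expresses with an OTTL condition. The snippet below is a sketch of that idea, not necessarily the exact configuration BindPlane generates for the Severity Filter processor:

```yaml
processors:
  filter/severity:
    logs:
      log_record:
        # Drop any log record below WARN, matching the "minimum severity: Warn" setting.
        - 'severity_number < SEVERITY_NUMBER_WARN'
```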
For even more control, you can use the Filter Log Record Attribute processor to filter your logs based on other attributes. Use Snapshots to inspect the log and determine what you'd like to filter.]]>https://observiq.com/docs/how-to-guides/reduce-log-volume-with-the-severity-filterhttps://observiq.com/docs/how-to-guides/reduce-log-volume-with-the-severity-filterThu, 13 Jun 2024 12:58:52 GMT<![CDATA[Multi-Node Architecture on Google Cloud]]><![CDATA[Google Cloud can be used to host a scalable BindPlane OP architecture by leveraging multiple BindPlane OP instances in combination with Compute Engine, Cloud Load Balancer, and Pub/Sub. Prerequisites The following requirements must be met: - You must have access to a Google Cloud Project - You must have a BindPlane OP Enterprise or Google license - You must be comfortable working with the following Google services - Compute Engine - Cloud Load Balancer - Pub/Sub Architecture See the High Availability documentation for details on the architecture that is used in this guide. Deployment Firewall Create a firewall rule that will allow connections to BindPlane on TCP/3001. - Name: bindplane - Target Tags: bindplane - Source Filters: - IP ranges: 0.0.0.0/0 - Protocols and Ports: TCP/3001 Allowing access from all IP ranges will allow anyone on the internet access to BindPlane OP. This firewall rule should be restricted to allow access only from networks you trust. Compute Engine In this guide, we will create three compute instances, bindplane-0, bindplane-1, and bindplane-prometheus. See the prerequisites for information on individually sizing your instances. We expect this deployment to handle 200 agents, so we will select the n2-standard-2 instance type, which has the exact core count required, and more than enough memory. We will use the same instance settings for Prometheus. - 2 cores - 8 GB memory - 60 GB persistent SSD For the BindPlane instances, use the following additional configuration. - Static public IP addresses - Scopes - Set Cloud Platform to "enabled" - Set pub/sub to "enabled" - Network Tags: bindplane Prometheus Prometheus is used as a shared storage backend for BindPlane OP's agent throughput measurements. Connect to the bindplane-prometheus instance and follow our Self-Managed Prometheus documentation. Cloud SQL PostgreSQL is used as a shared storage backend for BindPlane OP. Google has many options available for production use cases, such as replication and private VPC peering. Deploy In this guide, we will deploy a basic configuration with: - 4 cores - 16GB memory - 250GB SSD for storage - Authorized Networks (Under "connections") set to the public IP addresses of the previously deployed compute instances - If you would prefer to keep the CloudSQL instance off of the public internet, configure connectivity using VPC peering All other options are left unconfigured or set to their default values. Configure Once the Cloud SQL instance is deployed, we need to create a database and a database user. On the database's page, select "create database" and name it bindplane. On the user's page, add a new user named bindplane and use a secure password, or choose the "generate password" option. Note the password; it will be required when BindPlane OP is configured. Pub/Sub Google Pub/Sub is used by BindPlane OP to share information between instances. Create a new topic named bindplane. Uncheck the "add a default subscription" option. You can keep all other options set to their default value.
Cloud Load Balancer In order to distribute connections between multiple BindPlane OP instances, a TCP load balancer is required. This guide will use an internet-facing load balancer; however, an internal load balancer is also supported. Create a load balancer with the following options: - From the internet to my VMs - Single region only - Pass-through - Target Pool or Target Instance Backend Configuration Configure the Backend with the following options: - Name: bindplane - Region: The region used for your compute instances, pub/sub topic, and CloudSQL instance - Backends: "Select Existing Instances" - Select your BindPlane OP instances - Health check: Choose "Create new health check" - Name: bindplane - Protocol: http - Port: 3001 - Request Path: /health - Health criteria: Use default values Frontend Configuration Configure the Frontend with the following options: - New Frontend IP and Port: - Name: bindplane - Port: 3001 Review and Create Review the configuration and choose "Create". Once created, the load balancer will exist and it should be failing the health checks, because BindPlane OP is not installed and configured yet. Install BindPlane OP With Cloud SQL, Pub/Sub, and the load balancer configured, BindPlane OP can be installed on the previously deployed compute instances. Install Script Connect to both instances using SSH and issue the installation command: Initial Configuration Once the script finishes, run the init server command on one of the instances. You will copy the generated configuration file to the second instance after configuring the first. 1. License Key: Paste your license key. 2. Server Host: 0.0.0.0 to listen on all interfaces. 3. Server Port: 3001 4. Remote URL: The IP address of your load balancer. 1. Example: http://35.238.177.64:3001 5. Enable Multi Project: Yes 6. Auth Type: Single User 7. Storage Type: postgres 8. Host: Public IP address of the CloudSQL instance. 9. Port: 5432 10. Database Name: bindplane 11. SSL Mode: require 12. Maximum Number of Database Connections: 100 13. PostgreSQL Username: bindplane 14. PostgreSQL Password: The password you configured during the CloudSQL setup. 15. Event Bus Type: Google PubSub 16. PubSub Project ID: Your Google project ID 17. PubSub Credentials File: Leave this blank; authentication will be handled automatically. 18. PubSub Topic: bindplane 19. PubSub Subscription: Leave blank; subscriptions will be managed by each BindPlane instance. 20. Accept EULA: Choose yes if you agree. 21. Restart the server?: no You can select LDAP or Active Directory if you do not wish to use basic auth. This guide's scope will not cover external authentication. Copy the contents from the file /etc/bindplane/config.yaml to the same location on the second instance. This will ensure both instances have an identical configuration. Specifically, both instances require the same value for auth.sessionSecret. Configure Remote Prometheus BindPlane OP uses Prometheus to store agent throughput metrics. When operating with multiple nodes, a shared Prometheus instance is required. Stop BindPlane OP: Open the configuration file with your favorite editor. Make sure to use sudo or the root user as the configuration file is owned by the bindplane system user. Find the Prometheus section. It will look like this: Make two changes. 1. Add enableRemote: true 2. Update host: bindplane-prometheus The final configuration will look like this: These changes will instruct BindPlane to use a remote Prometheus instance.
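As a sketch of the final state described above, the prometheus section of /etc/bindplane/config.yaml ends up looking roughly like the following. Only enableRemote and host are the keys called out in this guide; any other keys already present in your file (such as the port shown here, which is an assumption) should keep their existing values:

```yaml
prometheus:
  # Instructs BindPlane to use a remote Prometheus instance
  enableRemote: true
  # Hostname of the shared Prometheus server deployed earlier in this guide
  host: bindplane-prometheus
  # Assumed value; keep whatever port your existing configuration already has
  port: "9090"
```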
Start BindPlane Restart all BindPlane instances in order to pick up the latest configuration. Once BindPlane starts, the Pub/Sub subscriptions are configured automatically: After a few moments, the load balancer health checks will begin to pass: Cloud SQL activity can be monitored by enabling Query Insights. Use BindPlane OP Connect to BindPlane OP Browse to http://:3001 and sign in to the BindPlane installation using the username and password you used during the configuration step. Install Agents On the agents page, choose "Install Agent" and inspect the installation command. The -e flag should be set to the load balancer address. If it is not, this indicates a misconfiguration in BindPlane's remoteURL configuration option in /etc/bindplane/config.yaml. To quickly test, deploy an agent to each of the BindPlane compute instances.]]>https://observiq.com/docs/how-to-guides/multi-node-architecture-on-google-cloudhttps://observiq.com/docs/how-to-guides/multi-node-architecture-on-google-cloudWed, 25 Sep 2024 15:39:46 GMT<![CDATA[Changing BindPlane OP Authentication Type]]><![CDATA[Using the CLI, you can migrate the authentication type that BindPlane OP uses between System, LDAP, and Active Directory. Migrating between Authentication Types in BindPlane OP Prerequisites The BindPlane OP free edition license does not include LDAP or Active Directory authentication types. More information on version comparisons can be found here, as well as how to request a trial license. Backup existing environment The first step in any major change to your BindPlane OP instance is to back up your current settings and environment. This is an important step for disaster recovery and should be performed before any major changes to your environment. The latest backup instructions can be found here. You will also want to back up your configuration file located at /etc/bindplane/config.yaml, which will contain the system authentication username and password should you want to switch back to system auth. Switching from System, Active Directory or LDAP Authentication Methods The following example will switch your BindPlane OP auth type between System, Active Directory, or LDAP authentication and migrate your existing settings over. For an overview of what the options are for LDAP/Active Directory and their descriptions, see the table provided in our documentation. Run the following command from the BindPlane OP server itself, usually through SSH. If your configuration file is not in a default location, change /etc/bindplane/config.yaml to the applicable path: If you are switching from System Authentication, this will ask you for the Active Directory or LDAP specific information that was outlined in the link above. If you run into any problems or questions with what it is asking for, simply abort the process and open a support ticket to ensure you have the right information. Once you have set up the new authentication scheme, the first user you log in as will be an Organizational Admin. All Projects will be owned by that user. If you are migrating BACK to System Authentication, for instance if you originally used system auth and migrated to Active Directory and now you are moving back to system auth again, be sure to enter your original username and password to retain the projects and org status.
If you backed up the original config from the Backup Existing Environment step you can simply get it from there.]]>https://observiq.com/docs/how-to-guides/migrate-auth-typeshttps://observiq.com/docs/how-to-guides/migrate-auth-typesWed, 17 Jul 2024 17:39:09 GMT<![CDATA[Kubernetes]]><![CDATA[Kubernetes Monitoring BindPlane OP supports monitoring Kubernetes clusters for Metrics, Traces, and Logs. See Kubernetes Monitoring for details. Dynamic Cluster Name Detection BindPlane OP supports detecting Kubernetes cluster names for the k8s.cluster.name resource attribute. See Kubernetes Cluster Name Detection for details.]]>https://observiq.com/docs/how-to-guides/kuberneteshttps://observiq.com/docs/how-to-guides/kubernetesThu, 11 Jan 2024 13:03:21 GMT<![CDATA[GitOps]]><![CDATA[BindPlane OP's API allows developers to manage configuration state with GitOps. This guide will showcase how to use Github actions for automating the deployment of resources to BindPlane OP. Prerequisites The following prerequisites must be satisfied before BindPlane can be automated. BindPlane Authentication You must have access to your BindPlane OP Server using an API key (when multi-project is enabled) or username and password (when single-project is enabled). BindPlane CLI You must have the ability to run CLI commands against your BindPlane OP server from your workstation. You can create a quick CLI profile with the following commands (where gitflow is the name of the profile): If you are using a username and password, replace the api-key flag with the username and password flags. See the CLI documentation for more information. Network Access BindPlane OP must be reachable by the CI/CD "runner". If your BindPlane OP server is hidden behind a corporate firewall, you can look into using Self-hosted Runners. Most installations of BindPlane will be listening on port 3001 unless configured otherwise or placed behind a load balancer. Repository You must have a new or existing repository that you can use while following this guide. This repository will be used to store BindPlane resource files as well as rendered OpenTelemetry "raw" configuration files. Github Actions Repository Secrets The following repository secrets must be defined in your repository. - BINDPLANE_REMOTE_URL: The remote URL of your BindPlane OP server, usually in the form of http://:3001. - BINDPLANE_API_KEY: API key if you do not want to use a username and password BindPlane supports username and password, however, an API key is required when multi-project is enabled. If BindPlane is not configured for multi-project, you must have the following secrets for your basic authentication username and password. - BINDPLANE_USERNAME - BINDPLANE_PASSWORD Export Existing Resources Existing resources should be exported to your repository before enabling the Github Action. This will output all existing destinations to destinations.yaml and configurations to configurations.yaml. You can move these files to any directory within your repository. Make a note of where they live, as their paths will be required when configuring the GitHub Action. Sensitive Values Destinations and configurations that have sensitive values (passwords, tokens, API keys) will not export the actual value. Instead, the value will be a placeholder "(sensitive)". BindPlane will never allow you to retrieve a sensitive value. BindPlane destination resource example: Raw OTEL configuration example: When the configuration is pushed to the agent, the correct value will be included in the configuration. 
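As a rough illustration of the placeholder behavior described above (the resource layout is a sketch, and the destination type and parameter names are assumptions), an exported destination carrying a sensitive token looks something like this:

```yaml
apiVersion: bindplane.observiq.com/v1
kind: Destination
metadata:
  name: my-splunk            # illustrative name
spec:
  type: splunk_hec           # assumed type identifier
  parameters:
    - name: hostname
      value: splunk.example.net
    - name: token
      value: "(sensitive)"   # exported placeholder; the real value is never written out
```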
Github Action Workflow Create a new workflow at .github/workflows/bindplane.yml. Open the observIQ/bindplane-op-action repository to view the full list of configurable parameters. If using username and password, replace the bindplane_api_key option with: Update target_branch: main to point to the branch you wish to use as your source of truth. When the action is running against this branch, it will apply resources to BindPlane. Make sure destination_path and configuration_path point to the relative path of the previously exported destination and configuration resource files. If you do not wish to write back the raw OpenTelemetry configuration files to the repo, set enable_otel_config_write_back to false. Commit and Test Commit the destination resource, configuration resource, and actions workflow YAML files to the repository. If committing to a branch other than the target_branch, make sure to open a pull request to merge these changes to the target branch. Once the target branch has the BindPlane resources and the actions workflow, you should see the action running under the repository's "Actions" tab. Because the existing resources are up to date with the repository's resources, the results will be uneventful. The actions will pass without taking action. If enable_otel_config_write_back is set to true, the action will have committed the raw OpenTelemetry configuration back to the repo. You can test changes by editing one of the resources in the repository (destinations.yaml or configurations.yaml). Save and commit the change to the target branch either directly or by using your pull request workflow. Once the change is merged into the target branch, the following will happen: 1. The action will apply the resources to BindPlane 2. All affected BindPlane configurations will have a pending rollout 3. The updated raw OpenTelemetry configurations will be committed back to the repo if enable_otel_config_write_back is true Configurations that have a pending rollout can be triggered by BindPlane Web Interface users. Automatic Rollout The action can be configured to trigger rollouts automatically after updating a configuration. When automatic rollouts are enabled, configuration changes made by the action will immediately apply to agents that are attached to that configuration. Set enable_auto_rollout to true. Updating Resources Resources can be updated using two methods. You can edit the resources (destination and configuration YAML files) files directly or you can edit resources in the BindPlane UI and export them using the CLI, similar to the original export covered at the beginning of this guide. Direct Edit Direct edits can be done by editing the configuration files that the action is pointing to. In this example, the otlp_grpc destination's grpc_port is updated from 4317 to 44317. You can use git diff to view your changes. With the changes in place, commit the change directly to the target branch (branch the action is deploying changes from) or go through your normal pull request and review workflow with your team. Once the change is merged or committed to the target branch, the action will deploy the change to BindPlane. If Auto Rollout is not enabled, you will need to log into BindPlane's web interface and trigger the rollout manually. UI Export As an alternative to editing the resource files directly, you can modify configurations in the web interface and re-export them to your repository. 1. Edit the configuration within the BindPlane web interface. 2. Do NOT roll the configuration out. 
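As a rough sketch of what that edit looks like in destinations.yaml (the surrounding resource structure is abbreviated and assumed; only the grpc_port parameter name comes from this example), the change is a one-line value update:

```yaml
# destinations.yaml (abridged)
kind: Destination
metadata:
  name: otlp_grpc
spec:
  parameters:
    - name: grpc_port
      value: 44317   # previously 4317
```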
With the changes in place, commit the change directly to the target branch (branch the action is deploying changes from) or go through your normal pull request and review workflow with your team. Once the change is merged or committed to the target branch, the action will deploy the change to BindPlane. If Auto Rollout is not enabled, you will need to log in to BindPlane's web interface and trigger the rollout manually. UI Export As an alternative to editing the resource files directly, you can modify configurations in the web interface and re-export them to your repository. 1. Edit the configuration within the BindPlane web interface. 2. Do NOT roll the configuration out. 3. Re-run the CLI export commands in the Export Existing Resources section. 4. Verify the changes with git diff. 5. Commit the changes to the target branch, or follow a pull request workflow to merge the changes to the target branch. Once the change is merged or committed to the target branch, the action will deploy the change to BindPlane. If Auto Rollout is not enabled, you will need to log in to BindPlane's web interface and trigger the rollout manually. Updating Sensitive Values At this time, sensitive values must be updated by following the UI Export workflow. It is important to avoid storing sensitive values in the Git repository. Role Based Access Control When using the action with Auto Rollout enabled, it is recommended to restrict your Web Interface user's access by using RBAC. You can allow users to make configuration changes and prevent them from rolling the configuration out by setting their permission level to user. See the Rollout Permissions guide for more information. The action's username or API key should be associated with an admin user to allow it to trigger rollouts.]]>https://observiq.com/docs/how-to-guides/gitopshttps://observiq.com/docs/how-to-guides/gitopsThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Connecting Other OpenTelemetry Collectors Using the OpAMP Extension]]><![CDATA[BindPlane works best with the BindPlane Agent, which is built using OpenTelemetry Collector Contrib. However, other OpenTelemetry-based collectors can be added to BindPlane using the OpenTelemetry OpAMP Extension. Prerequisites For an OpenTelemetry Collector to be added to BindPlane, you will need to do the following: - Include the OpenTelemetry OpAMP Extension in your collector build. Configuration To configure the OpAMP Extension to connect to BindPlane, you will need to add the following to your OpenTelemetry collector configuration: 1. The OpAMP spec requires that the instance_uid field be a valid ULID. This field is used to uniquely identify the collector instance. You can generate a ULID using a ULID generator. 2. The OpAMP endpoint is the WebSocket address of your BindPlane server. It should start with either ws:// for plain text or wss:// for TLS. You can find the endpoint in the BindPlane UI by navigating to the "Agents" page, clicking "Install Agent", choosing a Platform, and then looking at the generated command. The endpoint will appear after the -e flag. 3. The secret key is used to authenticate the collector with BindPlane. As with the endpoint, you can find the secret key in the BindPlane UI by navigating to the "Agents" page, clicking "Install Agent", choosing a Platform, and then looking at the generated command. The secret key will appear after the -s flag. 4. The labels field is used to add metadata to the collector. This field is optional, but it can be used to help identify the collector in the BindPlane UI. The labels should be a comma-separated list of key-value pairs, where the key and value are separated by an equals sign. For example, environment=production,region=us-west. 5. When using ws://, currently the tls insecure option is required. Other tls settings may be used when using wss://. Limitations The OpenTelemetry OpAMP Extension has the following limitations: - The OpAMP Extension does not accept remote configuration, so the Collector configuration cannot be modified using the BindPlane UI. The current Collector configuration will be visible on the Agent page as YAML. The "Choose Another Configuration" button will not appear.
- The View Recent Telemetry button will not appear because this feature is specific to the BindPlane Agent. - The Operating System and MAC Address fields will not be populated in the BindPlane UI. These fields are only populated by the BindPlane Agent.]]>https://observiq.com/docs/how-to-guides/connecting-other-collectors-using-opamp-extensionhttps://observiq.com/docs/how-to-guides/connecting-other-collectors-using-opamp-extensionWed, 20 Mar 2024 19:26:45 GMT<![CDATA[Azure LDAP]]><![CDATA[BindPlane OP's LDAP authentication support can be configured to work with Azure Entra ID. This guide will walk you through the process of configuring BindPlane OP to use Azure Entra's LDAP functionality as an authentication backend. Prerequisites You must have access to an existing Azure account, with permissions to manage users and Microsoft Entra Domain Services. You must create or have access to an existing Domain Service. You can follow this Microsoft tutorial. If running outside of Azure, you must enable "secure LDAP" and "Allow secure LDAP access over the internet". See the documentation for details. You must have DNS configured so the BindPlane server can resolve the Azure Entra Domain Services hostname. BindPlane Configuration BindPlane can be configured using the Initialization Command when operating BindPlane on a Linux server. If using Kubernetes, see the Kubernetes configuration section. Initialization Command On your BindPlane server, execute the init command. Follow the prompts until you reach the authentication questions. - Select "Active Directory" when prompted for an authentication method. - Provide your Directory Services IP address. - If BindPlane is operating outside of the Azure environment, provide the "Secure LDAP external IP addresses". - Provide the LDAP port. - 389 if operating within the Azure environment, without TLS. - 636 if operating with TLS, from within or outside of the Azure environment. If you want to use TLS, choose yes when prompted. TLS is required when operating outside of the Azure environment. It is recommended that you select "No" when prompted to skip TLS verification and provide a certificate authority in the next prompt. If using TLS, you must choose yes when prompted for mutual TLS, and provide a certificate and private key. The certificate authority and mutual TLS keypair files must be readable by the bindplane Linux user. When prompted to configure the "Base DN", provide your Domain Services base DN. For example, if your Domain Services name is bindplane-ldap.onmicrosoft.com, your Base DN will be dc=bindplane-ldap,dc=onmicrosoft,dc=com. The Base DN can be extended to include organizational units, using the following syntax:
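For example, an organizational unit can be prefixed to the Base DN; the placeholder below stands in for your own OU name:

```
ou=<organizational unit>,dc=bindplane-ldap,dc=onmicrosoft,dc=com
```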
High Availability If operating BindPlane in high availability, make sure the configuration changes to the auth section of the configuration file are copied to the other servers. Kubernetes The BindPlane Helm Chart v1.10.0 or newer supports Azure LDAP. See the Readme and the Initialization section on this page for details on each option. Before you begin, make sure a secret containing the TLS certificates exists in the namespace that BindPlane is deployed to. Update your values file with the following options. Make sure to update them to reflect your environment. Update your Helm deployment with the new options. Troubleshooting You can use the ldapsearch utility to interface with Azure LDAP. It is useful for validating your certificate, bind user, and base DN. Example usage: Make sure to update the LDAP connection string (-H), bind user (-D), bind password (-w), and base DN (-b). Resources - Azure Identity Documentation - LDAP authentication with Microsoft Entra ID - What is Microsoft Entra Domain Services? - Tutorial: Create and configure a Microsoft Entra Domain Services managed domain - Enable secure LDAP for Microsoft Entra Domain Services - LDAP Search Utility]]>https://observiq.com/docs/how-to-guides/azure-ldaphttps://observiq.com/docs/how-to-guides/azure-ldapFri, 20 Sep 2024 12:15:34 GMT<![CDATA[Postgres]]><![CDATA[Postgres Store BindPlane OP supports using Postgres for its primary data store. See Postgres Store for details. Postgres TLS BindPlane OP supports connecting to Postgres using TLS. See Postgres TLS for details.]]>https://observiq.com/docs/how-to-guides/postgres/postgreshttps://observiq.com/docs/how-to-guides/postgres/postgresTue, 17 Sep 2024 19:47:44 GMT<![CDATA[Postgres TLS]]><![CDATA[BindPlane OP supports TLS and mutual TLS when connecting to Postgres. Prerequisites This guide assumes you already have BindPlane OP and Postgres deployed and configured. Before following this guide, make sure you have performed the steps in the previous Postgres Store guide. Lastly, the guide assumes you have already configured Postgres to use TLS or mutual TLS. SSL Mode Before configuring TLS, familiarize yourself with the following Postgres SSL mode options. BindPlane supports four SSL mode options.
Mode | Description
disable | TLS is not used.
require | TLS is used, but the server certificate is not verified.
verify-ca | TLS is used and the server certificate is verified.
verify-full | Same as verify-ca, but mutual TLS is used and a client TLS key pair must be configured.
You can review the official descriptions here. Keep in mind that BindPlane supports a subset of the options found in the official Postgres documentation. Linux When operating BindPlane OP on Linux, you can enable TLS by editing the configuration file at /etc/bindplane/config.yaml. Find the store section and modify the store.postgres subsection. Modify store.postgres.sslmode to require or verify-ca. If using verify-ca, configure a certificate authority by setting store.postgres.sslRootCert to the path of a CA certificate file that can be used to verify the Postgres server's authenticity. The resulting configuration file should look similar to this: sslRootCert is not required when using verify-ca if the operating system's trust store includes your CA certificate. Mutual TLS can be configured by setting sslmode to verify-full and including the sslCert and sslKey options. When copying certificates to the BindPlane server, set the filesystem ownership and permissions. 
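A hedged sketch of that store section is shown below. The sslmode, sslRootCert, sslCert, and sslKey keys come from the steps above; the remaining connection keys are illustrative and should be checked against your existing configuration file.

```yaml
# Sketch of the store section in /etc/bindplane/config.yaml with TLS enabled.
# Connection fields (host, port, database, username, password) are illustrative.
store:
  type: postgres
  postgres:
    host: bindplane-postgres
    port: 5432
    database: bindplane
    username: bindplane
    password: <your password>
    sslmode: verify-full                     # require or verify-ca for server-only TLS
    sslRootCert: /etc/bindplane/tls/ca.crt
    sslCert: /etc/bindplane/tls/client.crt   # mutual TLS only
    sslKey: /etc/bindplane/tls/client.key    # mutual TLS only
```

For server-only TLS, omit sslCert and sslKey and set sslmode to require or verify-ca as described above.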
After you have re-configured BindPlane and deployed the TLS files, restart the service. Watch the BindPlane log file for issues. If the service appears stopped, and the log file is not useful, check the journal output of the service. If no errors are encountered, BindPlane is correctly configured to use TLS when connecting to Postgres. Kubernetes The BindPlane OP Helm Chart supports configuring BindPlane to use TLS by leveraging Kubernetes secrets. Assuming you have the following files: - ca.crt: The CA certificate - client.crt: The mutual TLS client certificate (optional) - client.key: The mutual TLS client private key (optional) Create a Kubernetes secret. Omit the client keypair if you do not intend to use mutual TLS. Update your values configuration to include the sslmode and sslsecret options. Use sslmode verify-ca and omit the client keypair if you are not using mutual TLS. Upgrade your Helm deployment to apply the changes. The BindPlane pods should restart without startup errors. If the new BindPlane pod(s) enter a crash loop, check their logs to investigate the error. If the pods come up successfully, TLS is configured and working.]]>https://observiq.com/docs/how-to-guides/postgres/postgres-tlshttps://observiq.com/docs/how-to-guides/postgres/postgres-tlsTue, 17 Sep 2024 19:47:44 GMT<![CDATA[Postgres Store]]><![CDATA[BindPlane OP stores organizations, accounts, agent metadata, configurations and more in Postgres when configured to use Postgres as the primary datastore. Using Postgres is a prerequisite for operating BindPlane in High Availability. This guide will cover the deployment of BindPlane OP and Postgres 16 on Linux (Debian 12) and Kubernetes. Prerequisites You must have a BindPlane license key before following this guide. If you do not have a license, you can request one on the Download page. If deploying BindPlane to Kubernetes, you must have Helm installed. Linux 1. Architecture This guide will reference two virtual machines, one for the BindPlane control-plane (bindplane) and one for the Postgres installation (bindplane-postgres). It is best practice to deploy Postgres to a dedicated machine, allowing multiple BindPlane instances to make use of it if you decide to use High Availability. The network in this example contains the required DNS entries to support reaching the machines by their short hostnames bindplane and bindplane-postgres. If you do not have DNS in your environment, use IP addresses instead of hostnames when configuring BindPlane to connect to Postgres. 2. Postgres Installation and Configuration Start by installing Postgres. This guide uses Debian 12, but you can use your preferred distribution; just know that the commands to install and manage Postgres may differ. Configure the Postgres apt repository. Install Postgres 16 from the Postgres repository. Enable and start the Postgres service. Configure Postgres to listen on all interfaces. Edit the Postgres configuration file and find listen_addresses. Uncomment listen_addresses and set the value to 0.0.0.0. It should look like this: If your system has iptables or firewalld enabled, make sure to allow port 5432/tcp. Next, we need to update the authentication configuration. Configure Postgres to allow remote connections. Find the lines that look like this: Update the configuration by replacing 127.0.0.1/32 and ::1/128. It should look like this: User setup and Database creation Connect to the Postgres installation by switching to the postgres user and running the psql client command. 
Execute the setup queries found in the User and Database section in the Postgres Going to Production documentation. Restart the service. With Postgres installed and configured, you can move on to installing and configuring BindPlane. 3. BindPlane Installation and Configuration Install BindPlane by following the instructions on the Download page. Once the package is installed, select y to initialize the configuration. 1. Input your license key 2. Server Host: 0.0.0.0 3. Server Port: 3001 4. Remote URL: http://bindplane:3001, the remote URL should match your hostname or IP address. 5. Authentication Method: Single User 6. Username: admin 7. Password: Your secure password 8. Storage Type: postgres 9. Postgres Host: bindplane-postgres, this value should match your Postgres server's hostname or IP address. 10. Postgres Port: 5432 11. Postgres Database Name: bindplane 12. Postgres SSL Mode: disable, see Postgres TLS for TLS configuration, as a follow-up to this guide. 13. Maximum Number of Database Connections: 100 14. Postgres Username: bindplane 15. Postgres Password: Your password 16. Event Bus Type: Local 17. Automatically restart: y Watch the BindPlane log file for any issues: BindPlane will log the following lines, which indicate Postgres is configured and working. If the Using postgres store log is not immediately followed by an error log, Postgres is configured correctly. 4. Verification Log into the BindPlane OP web interface at http://bindplane:3001. Replace bindplane with your hostname or IP address. If you can create a configuration successfully, Postgres is working as intended. Kubernetes 1. Architecture This guide will use minikube to deploy Postgres and BindPlane using high availability. In production, it is recommended to deploy Postgres to a virtual machine, a SaaS provider (CloudSQL, RDS, etc.), or to use a Postgres operator such as zalando/postgres-operator. Start by configuring minikube or your Kubernetes provider of choice. 2. Postgres Installation and Configuration The Postgres YAML manifest provided in this guide is not production-ready. It does not use secure authentication. It does not provide volume persistence, meaning data will be lost when the Postgres pod is updated or replaced. Begin by deploying the Postgres deployment to Kubernetes. You can inspect the YAML manifest here. If not using the provided Postgres example deployment, make sure to follow the User and Database section in the Postgres Going to Production documentation when provisioning your database host. Once the pod is deployed, the postgres namespace will look like this: The service postgres will route traffic to the pod postgres-0. Postgres is accessible using the username postgres and password password. 3. BindPlane Installation and Configuration Set up your Helm client to support deploying the BindPlane OP Helm Chart. Create the BindPlane license secret, where $BINDPLANE_LICENSE is your BindPlane license key. Create a Helm values.yaml file. This configuration will deploy BindPlane with two replicas, configured to connect to Postgres using the clusterIP service at postgres.postgres.svc.cluster.local. In this configuration, BindPlane is not exposed by ingress, but can be reached using port forwarding. Deploy BindPlane High Availability. Once the chart is deployed, the following pods will be present: - bindplane-ha - Web interface - API - Agent connections - bindplane-ha-jobs - Manages the database initialization and migrations - Periodic jobs, such as cleaning up disconnected Kubernetes agents. 
- bindplane-ha-nats - For supporting BindPlane High Availability Event Bus. - bindplane-ha-prometheus - Acts as the storage for agent throughput measurement data - Contains the required configuration for supporting BindPlane - bindplane-ha-transform-agent - For Live Preview processing 4. Verification Access BindPlane over port forwarding. Once the tunnel is running, you can reach BindPlane at http://localhost:3001. If you can successfully create a configuration, Postgres is configured and working correctly. Commonly Asked Questions Migration from the legacy Bolt Store If you are using the bolt store and would like to switch to Postgres, reference the following documentation: - Linux: Bolt Store to Postgres - Kubernetes: Postgres Migration Does BindPlane work with SaaS-hosted Postgres? Yes, BindPlane supports popular cloud providers such as Google Cloud CloudSQL, AWS RDS, and Azure Database. As long as the cloud provider is exposing a Postgres server, BindPlane can use it. BindPlane does not officially support Postgres-like systems, such as AlloyDB or CockroachDB. Does BindPlane support Transport Layer Security (TLS)? Yes, BindPlane supports TLS and mutual TLS when connecting to Postgres. After following this guide, reference the Postgres TLS guide.]]>https://observiq.com/docs/how-to-guides/postgres/postgres-storehttps://observiq.com/docs/how-to-guides/postgres/postgres-storeSun, 27 Oct 2024 17:07:00 GMT<![CDATA[Kubernetes Postgres Migration]]><![CDATA[As your BindPlane OP environment grows in size and importance, it can be desirable to migrate BindPlane's backend to Postgres. Using Postgres allows BindPlane to operate in High Availability. Prerequisites The following prerequisites must be met before performing the migration. CLI Your CLI version should match your control plane version. The migration requires the BindPlane CLI. Make sure you have a profile configured. Example profile configuration: Credentials The migration process will involve exporting and importing resources using the BindPlane CLI. The export process will not return sensitive data, such as credentials. You must have access to all credentials used by your resources, such as usernames and passwords, API keys, etc. Deprecated Resources When resources are deprecated by BindPlane, they are removed unless they are in use. When migrating from the bolt store to Postgres, any deprecated resources will need to be replaced. Generally, a resource is deprecated and replaced by a newer version if a breaking change is made. Using the BindPlane CLI and jq, check to see if any of your resource types are deprecated. Example output. The parse_severity processor is deprecated. The delete_empty_values processor is not deprecated. If there are deprecated resource types, it indicates that you have a configuration or resource library component that is deprecated. Remove the component from your configurations and delete it from the resource library, if it is present there. Secret Key Note the secret key. You can find your secret key with the bindplane get account command. Example: You will need your secret key during the migration process. Postgres Postgres should be pre-provisioned and include a database named bindplane or something similar. There should be a user named bindplane with full access to that database. The BindPlane Helm chart does not deploy or manage the Postgres instance. Max Connections Make a note of the maximum connections allowed to Postgres. When configuring BindPlane, you can set the maximum number of connections per BindPlane pod. 
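As a loose sketch only (the key path shown here is an assumption; confirm it against the BindPlane OP Helm chart README), the per-pod connection limit is typically expressed in the Helm values along these lines:

```yaml
# Hypothetical values.yaml excerpt; verify the exact key names against the chart.
backend:
  type: postgres
  postgres:
    host: postgres.example.internal
    port: 5432
    database: bindplane
    maxConnections: 25   # per-pod limit; replicas x maxConnections should stay below the Postgres max
```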
Prometheus During the migration, a new Prometheus instance will be deployed by the Helm chart. This means you will lose existing Prometheus data. Prometheus is used by BindPlane to track agent throughput and data reduction metrics. It is not considered critical data when performing the migration. If you manage your own Prometheus instance, outside of the Helm chart, you will not lose data. Migration Steps During the migration, users mustn't modify BindPlane resources. Doing so will cause the exported resources to become out of sync. 1. Export Resources Make sure there are no pending configuration changes. When exporting a configuration, the pending version will be exported if changes are present. Export resources to a directory called resources. Avoid modifying resources after exporting. 1.1 Sensitive Values Using a text editor, find all instances of (sensitive) in your resource files. Make a note of each resource that contains a sensitive value. When the resources are imported, you will need to update them with the correct value. 2. Deploy High Availability BindPlane Make sure the new BindPlane deployment uses the same image tag as your current deployment. 2.1 Helm Values Copy your Helm values file to a new file, values-ha.yaml or backup the existing values file. Make the following changes to the HA values file. 1. Add the replica top level key, and set it to 3 2. Set backend.type to postgres 3. Configure backend.postgres with the values that are correct for your Postgres instance. 4. Update the top-level resources section to reflect multiple pods. Follow the High Availability Sizing documentation for details. 5. If using the Helm-managed Prometheus, ensure its resources and capacity are increased if you expect a high agent count. 6. Enable the NATS Event Bus. 7. Add the top-level nats configuration. This configuration is not nested under eventbus. 8. Update config.server_url and config.remote_url to use a "staging" hostname. During the migration, BindPlane will need to be accessed over a temporary hostname to allow the existing deployment to continue functioning. 2.2 Helm Deployment Deploy the new BindPlane HA environment using Helm. Make sure to use a new name. Chart version 1.12.0 or newer is required for NATS event bus. Once deployed, the old and new HA deployments will be present side by side, if deployed to the same namespace. It is expected that some of the pods will have restarted during the initial deployment. 2.3 Ensure Access Make sure you can access the new environment using your existing solution (NGINX, Istio Gateway, etc). This may require you to create an additional ingress configuration. It is important to keep the current ingress solution routing to your old BindPlane deployment, to prevent downtime. Access to the new HA environment should be handled using a temporary solution, allowing you to import resources. 3. Configure Project 3.1 Initial Sign In Sign into BindPlane for the first time, and follow the onboarding prompts. 3.2 Configure CLI Create a new CLI profile for accessing the new environment. Ensure the CLI profile is working by issuing a get agent command. The command should not return any agents. If it does, you might be using the profile for the old BindPlane instance. 3.3 Update Project Secret Key Update the new project's secret key with the secret key used by the old BindPlane deployment. This will allow agents to connect to the new BindPlane instance and project. It will look similar to this. 
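A rough sketch of the exported account resource is shown below. The apiVersion and kind are assumptions; export the real resource with the BindPlane CLI rather than hand-writing it.

```yaml
# ha-account.yaml - illustrative sketch only; obtain the real resource from the CLI.
apiVersion: bindplane.observiq.com/v1
kind: Account
metadata:
  name: My Project
spec:
  secretKey: 01234567-89ab-cdef-0123-456789abcdef   # replace with the original environment's secret key
```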
Update spec.secretKey in ha-account.yaml to match the secret key from the original environment. Update the account using the apply command. Check if it worked: bindplane get account. The output should show the original secret key. You can also check the UI. 4 Import Resources 4.1 Import Import your resources using the apply command. 4.2 Update Sensitive Values Find all components that use sensitive values, and update them. The resources exported from the previous BindPlane instance will not contain sensitive values such as usernames and passwords or API keys. 4.3 Rollout Configurations Trigger a rollout for all configurations to ensure they will be pushed to agents during the cut-over to the HA instance. 5 Cut Over Access Now that the new BindPlane instance has had its secret key configured and resources imported, it is ready to accept connections from agents. 5.1 Helm Configuration Update config.server_url and config.remote_url to your primary hostname, the hostname used by the old BindPlane instance. Use Helm to deploy the change. 5.2 Update your ingress solution to point to the new BindPlane HA deployment. Note that agents will likely remain connected to the old BindPlane instance until one of two things happens: 1. The agent(s) are restarted 2. The old BindPlane instance is restarted Restart the old BindPlane StatefulSet pod to force agents to reconnect, allowing them to connect to the new HA system. 6 Verify Functionality 1. Ensure resource utilization is acceptable 2. Ensure agents are connecting 3. Ensure Recent Telemetry is working 4. Ensure Live Preview is working 5. Ensure configuration rollout can be triggered and finished 6.1 Abort If the new system is not working correctly or appears unstable, revert your ingress change to point to the old BindPlane instance. Restart all BindPlane HA pods to force agents to reconnect to the old BindPlane system. 7 Finish and Cleanup At this point, you have successfully migrated to BindPlane High Availability with NATS and Postgres. You can safely and optionally remove the old BindPlane instance using helm uninstall. The new BindPlane deployment does not rely on any resources from the old BindPlane instance.]]>https://observiq.com/docs/how-to-guides/kubernetes/postgres-migrationhttps://observiq.com/docs/how-to-guides/kubernetes/postgres-migrationWed, 17 Jul 2024 13:47:01 GMT<![CDATA[Kubernetes Monitoring]]><![CDATA[BindPlane OP supports managing Kubernetes agents, allowing you to streamline the observability of your cluster. Before following this guide, be sure to familiarize yourself with the Kubernetes Install, Upgrade, and Uninstall Agents documentation. Objective Monitoring a Kubernetes cluster involves collecting metrics and logs from the various components that make up the cluster. Metrics Kubelet The Kubelet API is hosted on each node within the cluster. It can be used to gather node, pod, container, and volume metrics. Each Kubelet's scope is limited to the node it is running on. The Kubelet API is useful for tracking pod and container performance metrics, such as CPU or memory utilization. API Server The Kubernetes API Server is hosted within the cluster as a Deployment. It can be used to gather higher-level cluster metrics, such as Deployment or Pod phase. Logs Container Logs Kubernetes container logs are written to the node's filesystem. Each Kubernetes node is responsible for hosting these logs. Generally, the logs are written to /var/log/pods and are symlinked in /var/log/containers. 
Each log file has the following format: The BindPlane agent will extract metadata from the log file name following OpenTelemetry's Semantic Conventions. Cluster Events The Kubernetes API server can be used to retrieve Kubernetes Events in the form of logs. Kubernetes Events are useful for observing issues such as pod crash loop events. Tracing Kubernetes does not emit traces; however, applications instrumented to emit OpenTelemetry traces are supported. See the OpenTelemetry section for details. OpenTelemetry If your applications are instrumented with OpenTelemetry, they can be configured to forward metrics, traces, and logs to the BindPlane agents. Implementation This guide will describe how to create three configurations: - Kubernetes Node - Kubernetes Cluster - Kubernetes Gateway The BindPlane Node and Cluster agents will forward their telemetry to the BindPlane Gateway agent(s) using a clusterIP service. Prerequisites You should have the following in place before moving forward with BindPlane Kubernetes Agent deployment. - Access to your Kubernetes cluster - Access to your BindPlane OP server If you do not have BindPlane OP installed, you can follow one of these two guides for deploying BindPlane OP to a Linux server or Kubernetes. - Install BindPlane OP Server - Install BindPlane OP Server on Kubernetes Create Configurations Before agents can be deployed to the cluster, configurations must be created. Node Configuration On the Configurations page, choose "Create Configuration". Create a Kubernetes Node configuration. The node configuration will be deployed as a DaemonSet. The DaemonSet will allow the collection of container logs and Kubelet metrics from each node. Choose next to view the list of available sources. Select the Container source and configure it with a cluster name. You can use a placeholder value if you intend to detect the cluster name using the resource detection processor. This processor can be configured during the gateway configuration setup. Select the Kubelet source and configure it with a cluster name. You can use a placeholder value if you intend to detect the cluster name using the resource detection processor. This processor can be configured during the gateway configuration setup. Optionally, select the OpenTelemetry source. The DaemonSet can receive metrics, logs, and traces from applications in your cluster. If you would prefer to have your Gateway Agent handle receiving OpenTelemetry, you can skip this step. At this point, you should have both Kubernetes sources and the OpenTelemetry (optional) source. Choose next to move to the destination configuration page. Search for "OpenTelemetry" and select the "OpenTelemetry (OTLP)" destination. Configure the hostname field with the following value: Leave all other options set to their default values. Once you have configured the destination, choose "Save". You will be presented with the new pipeline. Cluster Configuration On the Configurations page, choose "Create Configuration". Create a Kubernetes Cluster configuration. The cluster configuration will be deployed as a Deployment with a single pod. The Deployment will allow the collection of cluster metrics and events (logs) from the Kubernetes API server. Choose next to view the list of available sources. Select the Kubernetes Cluster source and configure it with a cluster name. You can use a placeholder value if you intend to detect the cluster name using the resource detection processor. This processor can be configured during the gateway configuration setup. 
Select the Kubernetes Events source and configure it with a cluster name. You can use placeholder value if you intend to detect the cluster name using the resource detection processor. This processor can be configured during the gateway configuration setup. At this point, you should have both Kubernetes sources. Choose "next" to move to the destination configuration page. Select the same destination that you created for the node configuration and choose "Save". Once the configuration is saved, you will be presented with the new pipeline. Gateway Configuration On the Configurations page, choose "Create Configuration". Create a Kubernetes Gateway configuration. The gateway configuration will be deployed as a StatefulSet. Deployment with HPA will be supported in the future, as an alternative to StatefulSet. Choose next to view the list of available sources. Select the OpenTelemetry (OTLP) source. The default values will match the values used by the previously created OpenTelemetry destination. This will allow the Gateway Agent to receive telemetry from the other agents. After saving the source, choose next to move to the destination configuration page. In this example, I am going to use the Google Cloud destination. Feel free to choose the destination that best fits your environment. If you do not have a destination at this time, you can use the custom destination and configure the logging exporter. This exporter will act as a "no-op", and allow you to test the configuration without shipping telemetry to a real destination. Example Google Cloud destination: If you would like to use the custom destination, enable all three telemetry options and include the following for the configuration block: Example Logging destination: Once you have configured the destination, choose "Save". You will be presented with the new pipeline. If you would like to detect the Kubernetes Cluster name, you can use the resource detection processor. Cluster name detection is available for Google GKE only. Support for Amazon EKS and Azure AKS is coming soon. Add a processor to the source side of the pipeline by clicking on the processor icon (It can be found between the source icon and the destination icon). Choose "Add Processor" and search for "Resource Detection". Choose "Done" and then "Save". Deploy Agents Once the configurations are created, you can move on to deploying agents. Retrieve YAML Manifests On the Agents page, select the "Install Agent" button. Choose the Kubernetes Node platform and the Kubernetes Node configuration you created earlier. Select "Next" and you will be presented with a yaml text box. Choose "Copy" and save the contents to a file named bindplane-node-agent.yaml Repeat these steps for the Cluster and Gateway agents. Save their yaml output to files named bindplane-cluster-agent.yaml and bindplane-gateway-agent.yaml. 
Kubectl Apply With all three manifests saved, you can apply them with a single command: The output will look like this: The following resources are created: - Namespace: bindplane-agent - RBAC - Service Account: bindplane-agent - Cluster Role: bindplane-agent - Cluster Role Binding: bindplane-agent - Node Agent - clusterIP service: bindplane-node-agent - clusterIP service (headless): bindplane-node-agent-headless - DaemonSet: bindplane-node-agent - Cluster Agent - Deployment: bindplane-cluster-agent - Gateway Agent - clusterIP service: bindplane-gateway-agent - clusterIP service (headless): bindplane-gateway-agent-headless - StatefulSet: bindplane-gateway-agent Once the agents are deployed, they will appear on the Agents page. Agents are named with the following convention: - Node agents take the name of the node they are running on - The Cluster agent takes the name of the underlying pod - Gateway agents take the name of the underlying pod Initial Configuration Rollout With the agents connected, you must perform the initial rollout of the configurations. Navigate to the Configurations page and select your Gateway configuration. Select the "Start Rollout" button. This will push the first version of the configuration to the agents. Navigate to the Node and Cluster configurations and trigger their initial rollout. Once the configurations are rolled out, give them ten minutes to start displaying throughput measurements. Click on an individual agent and select "Recent Telemetry" to view recent logs and metrics. If the agent does not have recent telemetry, try selecting a different one. If activity in the cluster is low, recent telemetry may not be available on every agent right away. Security Each agent manifest has a secret key that is used for authentication to BindPlane OP. If you intend to commit the manifest to git, you should first update the secret key environment variable to use a Kubernetes Secret. You can create a secret and reference it. Once the secret value is removed from the manifest, it can be safely committed to git. Troubleshooting Agents do not appear on the Agents page If the agent pods are running, but not appearing on the Agents page, make sure your BindPlane server's remote URL parameter is set correctly. If operating BindPlane on Linux, check the configuration at /etc/bindplane/config.yaml. If using Helm to operate BindPlane on Kubernetes, make sure the config.remote_url value is correct. The Helm chart...]]>https://observiq.com/docs/how-to-guides/kubernetes/kubernetes-monitoringhttps://observiq.com/docs/how-to-guides/kubernetes/kubernetes-monitoringTue, 11 Jun 2024 20:29:42 GMT<![CDATA[GKE Workload Identity]]><![CDATA[Google Cloud supports mapping Kubernetes service accounts to Google Cloud IAM service accounts using a feature called Workload Identity Federation. Objective BindPlane requires access to Google Pub/Sub when operating in High Availability using a multi-replica Deployment. BindPlane can authenticate to Pub/Sub using OAuth Scopes or with Workload Identity Federation. This guide will focus on how to configure workload identity. Prerequisites You must have access to a Google Kubernetes Engine cluster with workload identity enabled. GKE Autopilot has workload identity enabled by default. Configuration Review the Configure applications to use Workload Identity Federation for GKE instructions. 
If you deploy BindPlane to a cluster without the Pub/Sub OAuth scopes, you can expect to see the following error logs: This is because the Kubernetes service account has not been mapped to IAM. Kubernetes Service Account The BindPlane OP Helm Chart creates service accounts for you. The name of the service account is derived from the name of your Helm deployment. You can find your service account with kubectl -n get sa. All pods deployed by the Helm chart will use this service account. IAM Mapping Step 4 in Configure applications to use Workload Identity Federation for GKE instructs you to create an IAM policy binding that binds the Kubernetes service account to your project's IAM. Restart BindPlane If you previously deployed BindPlane, and the pods are crashing due to Pub/Sub permission errors, restart the pods by deleting them or using the kubectl rollout restart command. Once the new pods are started, they will not return Pub/Sub errors if the workload identity mapping was successful. Resources - BindPlane OP Helm Chart - Workload Identity Federation - Workload Identity How To]]>https://observiq.com/docs/how-to-guides/kubernetes/gke-workload-identityhttps://observiq.com/docs/how-to-guides/kubernetes/gke-workload-identityMon, 24 Jun 2024 14:32:52 GMT<![CDATA[Kubernetes Node Agent on GKE Auto Pilot]]><![CDATA[GKE Autopilot is not officially supported on the BindPlane Node Agent at this time. This is due to volume mount restrictions that are in place on auto-pilot clusters. The BindPlane Node Agent deployment manifest can be modified to deploy to auto-pilot clusters. Modifications Follow the Install Kubernetes Agents documentation. After downloading the YAML manifest, open it in your preferred text editor. Modify the volumes section at spec.template.spec.volumes and comment or remove the following volume definitions: - runlog - dockerlogs Modify the storage volume to use an emptyDir volume type. Modify the opentelemetry-collector container's volumeMounts at spec.template.spec.containers and comment or remove the following volume mount definitions: - runlog - dockerlogs Apply Once the modifications have been made, the YAML manifest can be applied to your clusters. Frequently Asked Questions Q: Will this support Docker-based clusters? A: No; however, GKE Auto Pilot clusters use the containerd runtime and do not require the docker volume mounts. Q: Is an emptyDir volume safe for configuration persistence? A: The hostPath volume is used to ensure the agent's configuration is persisted in the unlikely event that the BindPlane agent pod is updated and restarted during a BindPlane control plane outage. GKE Auto Pilot does not allow hostPath volumes; therefore, a temporary volume is used to store the configuration pushed by BindPlane to the agent.]]>https://observiq.com/docs/how-to-guides/kubernetes/gke-auto-pilothttps://observiq.com/docs/how-to-guides/kubernetes/gke-auto-pilotMon, 24 Jun 2024 14:32:16 GMT<![CDATA[Kubernetes Dynamic Cluster Name]]><![CDATA[OpenTelemetry publishes semantic conventions for common resources. The OpenTelemetry Kubernetes Semantic Conventions can be found here. BindPlane detects most of the resource attributes when using the Kubernetes sources. BindPlane does not detect the cluster name (k8s.cluster.name) because Kubernetes does not have the concept of a cluster name. It is crucial to set k8s.cluster.name to filter between multiple clusters in your environment. BindPlane has three methods for setting the k8s.cluster.name resource attribute. 
- Static configuration value - Environment variable - Resource Detection Static Cluster Name BindPlane's Kubernetes source types require the user to input a static cluster name. This is the easiest way to set the cluster name. See the source type documentation for details: - Cluster Events - Cluster Metrics - Container Logs - Kubelet Metrics Using a static cluster name will mean that you need one configuration per Kubernetes cluster. This solution is simple, but will not scale well in large environments. If you wish to use a single agent configuration for many clusters, see the Environment Variable and Resource Detection sections. Environment Variable BindPlane can use the Add Fields Processor to update the k8s.cluster.name resource attribute to the value of an environment variable. Use a placeholder cluster name in your configuration's source(s). Next, use the Add Fields Processor to upsert the resource attribute k8s.cluster.name with the value of the environment variable CLUSTER_NAME. Follow the agent installation workflow to retrieve the Kubernetes YAML manifest for the agent. Add an environment variable to the YAML manifest. The value should be the name of your cluster. If you have multiple clusters, make sure to copy the YAML manifest once per cluster. Deploy the agent configuration to your cluster. Once the agents are deployed, pick one of them and view its recent telemetry. You should see the cluster name defined in the agent's environment instead of the placeholder name. Be sure to configure the Add Fields processor on the configuration that matches the agent you have deployed. Resource Detection BindPlane can use the Resource Detection Processor to detect the cluster name by making an API request to your cloud provider. Currently, only Google Cloud's GKE is supported. Support for Amazon EKS and Azure AKS is coming soon. See the Resource Detection Kubernetes section for configuration details. Frequently Asked Questions Q: Can I set the cluster name using a Gateway agent? A: Yes. If you are forwarding telemetry from the Node and Cluster agents to a Gateway agent, you can configure the Add Fields or Resource Detection processor on the Gateway configuration.]]>https://observiq.com/docs/how-to-guides/kubernetes/dynamic-cluster-namehttps://observiq.com/docs/how-to-guides/kubernetes/dynamic-cluster-nameWed, 30 Oct 2024 16:06:44 GMT<![CDATA[BindPlane Agent]]><![CDATA[Agent Architecture The BindPlane Agent can be configured to support several architectures. See the Agent Architecture documentation for more information. Agent Sizing and Scaling The BindPlane Agent's resource consumption is directly tied to its throughput and use case. See the Agent Sizing and Scaling documentation for details on sizing and scaling the BindPlane Agent. Agent Resilience Many of the BindPlane Agent's components support delivery guarantees and load balancing. See the Agent Resilience documentation for details on retry, persistent queue, and load balancing.]]>https://observiq.com/docs/going-to-production/bindplane-agenthttps://observiq.com/docs/going-to-production/bindplane-agentTue, 09 Jan 2024 11:51:48 GMT<![CDATA[Monitor the Bindplane Infrastructure]]><![CDATA[Monitoring Bindplane OP and Bindplane Agents provides visibility into the health of your Observability Pipeline. We will walk through a few steps to easily set up sources that will forward the Bindplane OP server logs and the Bindplane Agent logs to the destination of your choice. Bindplane OP Monitoring 1. 
The first step is to deploy an agent on the Bindplane OP server itself. This deploys like any other agent; please follow the Quickstart Guide if you have any questions. 2. Create a separate configuration for the Bindplane OP server as well. 3. When an agent is running on the Bindplane OP server and it is added to a configuration, we will select a source. We can leave the settings at their defaults for this example. 4. After we hit the 'Save' button, we can click 'Start Rollout' to push the configuration to the agent. All of your Bindplane OP logs will flow into the destination that you have configured. Next, we can set up agent monitoring.]]>https://observiq.com/docs/going-to-production/bindplane/monitoring-bindplanehttps://observiq.com/docs/going-to-production/bindplane/monitoring-bindplaneMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Single Instance]]><![CDATA[The BindPlane OP server's default architecture is monolithic. In this mode, BindPlane is not reliant upon external services. All components are included in the installation. BindPlane manages several sub-processes: - Prometheus: For recording agent throughput metrics - Transform Agent: For Live Preview The Prometheus and Transform Agent software are included with the BindPlane server installation and do not require configuration by the user. Installing BindPlane on Linux is as simple as running the installation script and following the initialization prompts. Read more by checking out the Quick Start Guide. Event Bus Local The Local event bus is the default event bus used by BindPlane. Unless operating BindPlane in high availability mode, the Local event bus is sufficient. The configuration will look like this by default: Store Bolt Store Bolt Store (bbolt) is the recommended storage backend when operating BindPlane in single instance mode. It stores information in a file on the local filesystem. Because the database is local to the server, it is very fast and does not incur the latency that can be observed with other network-based systems. The configuration is very simple. Default installations of BindPlane will look like this: The file bindplane.db is owned by the bindplane user and group, with 0640 permissions. Prometheus All BindPlane OP installations include a bundled version of Prometheus. BindPlane will use the bundled Prometheus as its default measurement metrics storage backend. It is unnecessary to configure Prometheus when using the bundled option. This documentation can be used as a reference for the default Prometheus installation. Configuration The configuration file at /etc/bindplane/config.yaml will contain the following prometheus block after the installation is configured. Directory Structure Once BindPlane OP is started, the /var/lib/bindplane/prometheus directory structure will look like this: Prometheus's configuration and storage are located at /var/lib/bindplane/prometheus. Process BindPlane OP manages the Prometheus process directly as a subprocess. When viewing the process list with ps, you will notice the following: The Prometheus process is executed with the following flags: - config.file prometheus.yml: The main Prometheus configuration file, managed by BindPlane. - web.config.file web.yml: The Prometheus web configuration file, managed by BindPlane. - storage.tsdb.retention.time 2d: The retention time, managed by BindPlane. BindPlane uses rollup metrics for tracking agent measurements over time and does not require Prometheus to store data for longer than two days. 
- web.listen-address localhost:9090: Listen address, managed by BindPlane. Prometheus is not reachable outside of the BindPlane system. - web.enable-remote-write-receiver: BindPlane uses remote write to push metrics to Prometheus.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/single-instancehttps://observiq.com/docs/going-to-production/bindplane/architecture/single-instanceMon, 16 Sep 2024 17:31:39 GMT<![CDATA[Prometheus]]><![CDATA[Prometheus is required when operating BindPlane in high availability. BindPlane stores time series metrics in Prometheus. These time series metrics allow BindPlane to track agent throughput measurements over time. When viewing the summary page or a specific configuration, the measurement data being displayed is the work of the measurement database. When operating BindPlane in single instance mode, a bundled version of Prometheus will be used. The user does not need to install or configure Prometheus. Prerequisites Sizing Prometheus can be scaled vertically based on the number of managed agents. The volume of time series metrics being pushed to Prometheus will scale linearly with the number of agents.
Agent Count | CPU Cores | Memory
1 - 40,000 | 2 | 8GB
40,000 - 80,000 | 4 | 16GB
Installation Prometheus should be installed on a dedicated system. The installation will be accessed by each BindPlane server in your environment. Follow the installation documentation for instructions on deploying a shared Prometheus instance. Configuration BindPlane will use a bundled version of Prometheus by default. The configuration must be updated to use a remote Prometheus instance. See the configuration documentation for instructions on configuring your BindPlane servers to use the shared Prometheus instance.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/prometheushttps://observiq.com/docs/going-to-production/bindplane/architecture/prometheusThu, 11 Apr 2024 12:15:06 GMT<![CDATA[PostgreSQL]]><![CDATA[PostgreSQL is required when operating BindPlane in high availability. PostgreSQL allows multiple BindPlane servers to share and access data concurrently, enabling the ability to operate BindPlane in high availability. Prerequisites Version BindPlane OP supports the following PostgreSQL versions: - 14 - 15 - 16 User and Database Create a database named bindplane and a user that has full access to that database. The BindPlane user does not require permission to do anything outside of its database. The following psql commands can be used to create the database and user. Make sure to replace your_password with a secure password. Switch to the bindplane database with \c bindplane, or reconfigure your Postgres client to reconnect to the bindplane database. Once connected, execute the following GRANT commands. Sizing Postgres can be scaled vertically based on the number of managed agents. See the PostgreSQL Prerequisites documentation for sizing details. Installation It is up to the user to deploy and manage Postgres. If operating in a cloud environment, it is recommended that you use one of the following services: - Google Cloud SQL - Amazon Relational Database Service (RDS) - Azure Database for PostgreSQL Configuration The configuration looks like this: Make sure to update host, username, and password.
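A minimal sketch of that store section is shown below; field names beyond host, username, and password are illustrative and should be checked against the Postgres Configuration section referenced just below.

```yaml
# Sketch of the Postgres store section in /etc/bindplane/config.yaml.
store:
  type: postgres
  postgres:
    host: bindplane-postgres.internal   # update to your Postgres host
    port: 5432
    database: bindplane
    username: bindplane                 # update
    password: <your_password>           # update
    sslmode: disable                    # see the Postgres TLS guide before enabling TLS
```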
For a list of supported options, see the Postgres Configuration section in the configuration documentation.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/postgresqlhttps://observiq.com/docs/going-to-production/bindplane/architecture/postgresqlFri, 12 Apr 2024 09:57:08 GMT<![CDATA[Load Balancer]]><![CDATA[A load balancer is required when operating BindPlane OP in high availability mode. Incoming agent and client requests will be distributed among all BindPlane servers. Prerequisites The following requirements must be met by your load balancer choice: - Support WebSocket connections - Support TCP or HTTP load balancing - Support TCP or HTTP health checks Installation It is up to the user to deploy and manage the load balancer. If operating in a cloud environment, it is recommended that you use one of the following services. - Google Cloud Load Balancing - Amazon Elastic Load Balancing - Azure Load Balancer If running on-premise, we recommend: - HAProxy - NGINX Configuration The load balancer should be configured to balance requests between all of your BindPlane servers on port 3001. If your load balancer supports sticky connections, enable it. If not, round-robin load balancing is supported. Configure the health checks to target /health on port 3001. When BindPlane is healthy, it will return a status 200 OK.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/load-balancerhttps://observiq.com/docs/going-to-production/bindplane/architecture/load-balancerTue, 30 Jan 2024 20:47:57 GMT<![CDATA[High Availability]]><![CDATA[BindPlane OP is capable of running in high availability (HA) mode. BindPlane OP runs in an active-active HA mode, meaning that the workload of the application is spread across all nodes in the cluster. When operating BindPlane OP in high availability, several user-managed components are required. - Load balancer - PostgreSQL - Prometheus Measurement Database - Event Bus One or more BindPlane OP servers can be installed. Each server is stateless. All data is stored in the Postgres and Prometheus databases. Communication between servers is handled by the event bus. The configuration between all BindPlane servers should be the same. Example Implementations Google Cloud See the Multi Node Architecture On Google Cloud guide for details on how to deploy BindPlane server's distributed architecture on Google Cloud using services such as Cloud Load Balancer, Pub/Sub, and CloudSQL.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/high-availabilityhttps://observiq.com/docs/going-to-production/bindplane/architecture/high-availabilityThu, 10 Oct 2024 18:27:30 GMT<![CDATA[Event Bus]]><![CDATA[BindPlane uses an event bus to communicate between components within BindPlane. When operating BindPlane in high availability mode, the event bus can be used to send events between BindPlane servers. When operating in high availability, the following event bus options are available: - NATS - Google Pub/Sub NATS The NATS event bus is BindPlane's embedded event bus, suitable for high availability without the need for external infrastructure. See the NATS Configuration documentation for more information. Google Pub/Sub Google Cloud Pub/Sub is an excellent event bus choice for users with access to Google Cloud. Setup is simple, and the maintenance overhead of Pub/Sub is very low. 
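As a loose sketch only (key names are assumptions; the Google Pub/Sub section referenced below is authoritative), the event bus portion of the server configuration takes roughly this shape:

```yaml
# Illustrative only; confirm key names against the configuration documentation.
eventBus:
  type: googlePubSub
  googlePubSub:
    projectID: my-gcp-project     # GCP project that hosts the topic
    topic: bindplane-event-bus    # Pub/Sub topic shared by all BindPlane servers
```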
For a list of supported options, see the Google Pub/Sub section in the configuration documentation.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/eventbushttps://observiq.com/docs/going-to-production/bindplane/architecture/eventbusTue, 30 Jan 2024 20:47:57 GMT<![CDATA[Architecture]]><![CDATA[BindPlane OP Server is made up of several components: the store, the measurements database, and the event bus. BindPlane can be configured in single instance or high availability mode. By default, BindPlane server operates in single instance mode, where all components are included in the installation. Optionally, users can operate BindPlane server in high availability mode. This documentation will cover each component and where they fit within both operation modes. Single Instance Single instance is suitable for small, medium, and large deployments of BindPlane. This mode is the easiest way to get started as it requires a single Linux server or virtual machine. Single instance mode is the recommended deployment model for users who prefer simplicity at the expense of fault tolerance. When fault tolerance is required, BindPlane can be configured in high availability mode. When the BindPlane server is down due to maintenance or an outage, managed agents will continue to operate as normal. Agents do not depend on BindPlane to be available. When operating in single instance mode, BindPlane does not depend on third-party systems such as a remote database. For more details, see the Single Instance documentation. High Availability High availability is a great choice for users who can support a more complex deployment to gain fault tolerance. When operating BindPlane in high availability mode, an individual BindPlane server can go offline for maintenance without causing an outage. The high availability architecture is significantly more complex, as it requires the following supporting systems: - 2 or more BindPlane OP servers - Load balancer - PostgreSQL database - Dedicated Prometheus deployment - Event bus For more details, see the High Availability documentation.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/architecturehttps://observiq.com/docs/going-to-production/bindplane/architecture/architectureTue, 30 Jan 2024 20:47:57 GMT<![CDATA[Prometheus]]><![CDATA[BindPlane OP uses Prometheus as its agent measurement time series backend. BindPlane supports connecting to a remote Prometheus installation as an alternative to its bundled Prometheus. Remote installations are great for operating BindPlane with high availability. The Prometheus instance should be dedicated to BindPlane, and not shared with other services. Prometheus Installation and Configuration There are two methods for installing Prometheus: Linux package and manual installation. The recommended installation method is to use the Linux package. This package installs Prometheus pre-configured to work with BindPlane. - Linux Package - Manual Configuration After installing Prometheus on a dedicated instance, you will need to update your BindPlane server(s) configuration file to connect to the remote Prometheus server. 
- Configuration - Authentication and TLS]]>https://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/prometheushttps://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/prometheusMon, 22 Jan 2024 17:01:07 GMT<![CDATA[Prometheus Package Installation]]><![CDATA[Each BindPlane OP release includes a matching version of Prometheus that can be used to deploy a dedicated Prometheus server. The package simplifies installation because it handles user creation and configuration management. If you wish to install and configure Prometheus on your own, you can follow the manual installation documentation. The recommended approach is to use the provided Linux package. Download Each BindPlane release includes packages for Debian and RHEL-based distributions. Ubuntu and CentOS users should use the deb and rpm packages, respectively. Install Install the package with your package manager. Once the package is installed, it must be enabled and started. Configuration See the configuration documentation for configuration details. Uninstall The package can be removed using the package manager.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/install-packagehttps://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/install-packageThu, 04 Apr 2024 08:43:26 GMT<![CDATA[Prometheus Manual Installation]]><![CDATA[Installing Prometheus manually is accomplished by following the steps outlined on this page. The recommended approach is to install Prometheus using a Linux package. See the Linux Package documentation for details. Prerequisites Version Prometheus version 2.47.2 or newer. Create User and Group Create a Prometheus user and group. This user will be used to execute the Prometheus process. Download Release Download the v2.47.2 release. Extract the archive to your working directory. Binary Installation Install binaries to /usr/bin. Configuration Configure the Prometheus configuration directories and files. Configure the Prometheus storage directories and files. Populate prometheus.yml. Populate rules.yml. Leave web.yml alone for now. See the TLS section for details on how to secure communication between BindPlane OP and Prometheus. Systemd Service Create the Systemd service. Enable and start Prometheus. Validate You can validate that Prometheus is running with the following curl command.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/install-manualhttps://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/install-manualThu, 04 Apr 2024 08:43:26 GMT<![CDATA[Prometheus Configuration]]><![CDATA[When operating a self-managed Prometheus instance, BindPlane's server configuration must be updated to connect to the remote Prometheus instance. BindPlane Configuration After installing BindPlane OP, update the configuration file at /etc/bindplane/config.yaml using the editor of your choice. - Set prometheus.enableRemote to true - Set prometheus.host to the IP address or hostname of your Prometheus server. Once enableRemote and host are configured, restart the BindPlane server process. At this point, BindPlane OP is installed and configured to use the remote Prometheus instance. Security Prometheus supports several options for security: Basic authentication (Basic auth), Transport Layer Security (TLS), and Mutual TLS (mTLS). Basic Authentication Follow the Prometheus Basic Auth Password Hashing documentation to generate a password hash. 
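The next step updates /etc/prometheus/web.yml. For reference, Prometheus's web configuration expresses basic auth with a basic_auth_users block; a minimal sketch, with a placeholder username and bcrypt hash, looks like this:

```yaml
# /etc/prometheus/web.yml - basic auth sketch; replace with your own username and hash.
basic_auth_users:
  bindplane: $2y$10$REPLACE_WITH_YOUR_BCRYPT_HASH
```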
Once you have your hash, update /etc/prometheus/web.yml with your basic auth username and password hash. Restart the Prometheus service. Test by making a curl request, without basic auth. You should expect a "401 Unauthorized" response. Test by making a curl request with your username and password. You should expect a "200 OK" response. This will indicate that basic auth is working correctly. Next, we need to update BindPlane with the new credentials. Edit /etc/bindplane/config.yaml on all of your BindPlane servers. Restart the BindPlane service. Transport Layer Security (TLS) Copy the certificate keypair to /etc/prometheus/tls. The example commands assume that you have a certificate key pair in your working directory named prometheus.crt and prometheus.key Server side TLS can be configured by editing the web configuration file at /etc/prometheus/web.yml and configuring the certificate file and private key file paths. Restart the Prometheus service. You can test if Prometheus is using TLS by using curl. You should expect a "200 OK" response. This will indicate that server side TLS is working correctly. Next, we need to update BindPlane to use TLS when communicating with Prometheus. On all of your servers, perform the following steps. Copy the certificate authority to /etc/bindplane/tls. The example commands assume that you have a certificate authority public key named ca.crt in your working directory. Edit /etc/bindplane/config.yaml on all of your BindPlane servers and add the tls.tlsCa parameter. Make sure prometheus.host matches the hostname of the Prometheus server's certificate. If the hostname does not match, you can set prometheus.tls.tlsSkipVerify to true to skip TLS verification. Skipping TLS verification is not recommended in a production environment. Restart the BindPlane service. Mutual TLS Copy the certificate keypair and certificate authority to /etc/prometheus/tls. The example commands assume that you have a certificate key pair in your working directory named prometheus.crt and prometheus.key and a certificate authority named ca.crt. Mutual TLS can be configured by editing the web configuration file at /etc/prometheus/web.yml and configuring the certificate file, private key file paths and certificate authority paths. Restart the Prometheus service. You can test if Prometheus is using TLS by using curl on the Prometheus system. You should expect a "200 OK" response. This will indicate that mutual TLS is working correctly. Next, we need to update BindPlane to use mutual TLS when communicating with Prometheus. On all of your servers, perform the following steps. Copy the certificate authority and client keypair to /etc/bindplane/tls. The example commands assume that you have a certificate key pair in your working directory named bindplane.crt and bindplane.key and a certificate authority named ca.crt. Edit /etc/bindplane/config.yaml on all of your BindPlane servers and add the tls parameters. Make sure prometheus.host matches the hostname of the Prometheus server's certificate. If the hostname does not match, you can set prometheus.tls.tlsSkipVerify to true to skip TLS verification. Skipping TLS verification is not recommended in a production environment. 
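Put together, the prometheus block in /etc/bindplane/config.yaml ends up looking roughly like the sketch below. The enableRemote, host, tlsCa, and tlsSkipVerify settings come from the steps above; the port and the client certificate key names are assumptions for the mutual TLS case.

```yaml
# Rough sketch of the remote Prometheus settings with mutual TLS.
prometheus:
  enableRemote: true
  host: prometheus.example.internal             # should match the server certificate's hostname
  port: 9090
  tls:
    tlsCa: /etc/bindplane/tls/ca.crt
    tlsCert: /etc/bindplane/tls/bindplane.crt   # client keypair, mutual TLS only
    tlsKey: /etc/bindplane/tls/bindplane.key
    tlsSkipVerify: false                        # true disables verification; avoid in production
```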
Restart the BindPlane service.]]>https://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/configurationhttps://observiq.com/docs/going-to-production/bindplane/architecture/prometheus/configurationMon, 09 Sep 2024 09:32:31 GMT<![CDATA[Agent Sizing and Scaling]]><![CDATA[Agent When the collector is running as an agent, you must be mindful of resource consumption in order to avoid starving other services. Generally, agent collectors consume very few resources because they handle the telemetry of an individual system. You can reference this table as a starting point for agent system requirements. The resource recommendations do not consider multiple exporters or processors. The addition of processors can impact performance significantly.

| Telemetry Throughput | Logs / second | Cores | Process Memory (MB)* |
| --- | --- | --- | --- |
| 200 MiB/m | 10,000 | 0.25 | 300 |
| 400 MB/m | 20,000 | 0.5 | 300 |
| 1 GB/m | 50,000 | 1 | 300 |
| 2 GB/m | 100,000 | 2 | 500 |

* Process memory is the amount of memory the agent is expected to consume. The host system should have enough memory to satisfy the agent and all other services. Gateway Gateway collectors receive telemetry over the network. Pairing them with a load balancer is recommended in order to provide fault tolerance and the ability to scale horizontally. Horizontal scaling is preferable because it provides fault tolerance and can eliminate exporter bottlenecks. Gateway best practices: - Minimum two collectors behind a load balancer - Minimum 2 cores per collector - Minimum 8GB memory per collector - 60GB usable space for persistent queue per collector When deciding how many collectors your workload requires, take the expected throughput or log rate and use this table as a starting point. The table assumes that each collector has four CPU cores and 16GB of memory. The table does not account for processors. When adding processors, the compute requirements will increase.

| Telemetry Throughput | Logs / second | Collectors |
| --- | --- | --- |
| 5 GB/m | 250,000 | 2 |
| 10 GB/m | 500,000 | 3 |
| 20 GB/m | 1,000,000 | 5 |
| 100 GB/m | 5,000,000 | 25 |

It is important to over-provision your collector fleet in order to provide fault tolerance. If one or more collector systems fail or are brought offline for maintenance, the remaining collectors must have enough available capacity to handle the telemetry throughput. When dealing with a fixed number of collectors, you can scale their CPU and memory vertically in order to increase throughput. See the agent sizing table at the beginning of this page.]]>https://observiq.com/docs/going-to-production/agent/sizing-and-scalinghttps://observiq.com/docs/going-to-production/agent/sizing-and-scalingMon, 18 Nov 2024 16:25:12 GMT<![CDATA[Agent Resilience]]><![CDATA[Reliable collector architecture can be obtained with the combination of retry, queue, and load balancing. Retry BindPlane destinations have the ability to retry sending telemetry batches when there is an error or a network outage. Configuration Retry is enabled by default on all destinations that support it. By default, failed requests will be retried after five seconds and progressively back off for up to 30 seconds. After five minutes, requests will be permanently dropped. Best Practices For workloads that cannot afford to have telemetry dropped, the five-minute maximum elapsed time should be increased significantly. Keep in mind that a large max elapsed time combined with an extended backend outage will cause the collector to "buffer" a significant amount of telemetry to disk.
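In raw OpenTelemetry collector terms, these destination settings correspond to the exporter's retry_on_failure options. A minimal sketch (the endpoint and values are illustrative, not a recommendation) looks like:

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317
    retry_on_failure:
      enabled: true
      initial_interval: 5s   # first retry after five seconds
      max_interval: 30s      # cap on the backoff between attempts
      max_elapsed_time: 1h   # raised well above the five-minute default so batches survive longer outages
```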
Gateway collectors should be provisioned with disks large enough to sustain an outage lasting hours or days. If overwhelming the backend during an outage recovery is not a concern, reducing the max interval to match the initial interval can decrease the time it will take to recover from an outage, as telemetry sending will be retried more frequently. Sending Queue When telemetry requests are retried, they are first stored in a sending queue. This sending queue is stored on disk in order to guarantee persistence in the event of a collector system crash. Configuration The sending queue has three options - Number of consumers - Queue size - Persistent queuing Number of Consumers This option determines how many batches will be retried in parallel. For example, 10 consumers will retry 10 batches at a time. If each batch contains 100 logs, the collector will retry 1,000 logs. Generally, the default value of 10 is suitable for low and high-volume systems. Decreasing this number will cause the collector to recover from large outages slower, but will keep resource consumption low. Alternatively, increasing this number will mean that the collector is going to put more strain on the backend because it will be retrying more batches in parallel. Queue Size The queue size option determines how many batches are stored in the queue. When the queue is at capacity, additional batches will be dropped. Keep in mind that the queue size is the number of batches. You can calculate the number of metrics, traces, and logs by taking the batch size and multiplying it by the queue size. You can use the Batch processor to configure batch sizes. Persistent Queuing Persistent queue is a feature that allows the BindPlane agent to buffer telemetry batches to disk when a request to the backend fails. The BindPlane agent supports persistent queue by default and it is recommended that it be enabled at all times. Persistent queue protects against data loss if the agent system is suddenly shut down due to a crash or other outside factors. If persistent queue is disabled, failed telemetry batches will be buffered in memory. This will increase performance on high throughput systems, at the expense of reliability. During an outage, memory buffering will increase memory consumption drastically and can cause the BindPlane agent to crash if the system runs out of memory. Load Balancing Load balancing allows you to operate a fleet of gateway agents for increased performance and redundancy. Load balancers allow you to scale your gateway fleet horizontally and sustain failures without ensuring an outage. The BindPlane collector can work with a wide range of load balancers when operating in gateway mode. This documentation will not discuss any particular option, as most popular load-balancing solutions support the required options for operating multiple collectors reliably. Load balancing best practices - Health checks. The load balancer should be configured to ensure the collector is ready to receive traffic. - Even connection distribution. Connections should be distributed evenly among collectors. - Protocol support: OpenTelemetry has a wide range of network-based receivers. In order to support all of them, the load balancer should support transport protocols TCP and UDP as well as application protocols HTTP and gRPC. 
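For the health-check recommendation above, one common approach is to enable the collector's Health Check extension (covered later in this document) and point the load balancer's HTTP probe at it. A quick manual check, assuming the extension's default port and path, would be:

```bash
# Probe a gateway collector's health endpoint (extension defaults: port 13133, path /).
curl -s -o /dev/null -w "%{http_code}\n" http://gateway-1.internal:13133/
# A healthy collector is expected to return 200.
```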
Use Cases The following source types can be used with a load balancer: - OTLP - Syslog - TCP / UDP - Splunk HEC - Fluent Forward Any source type that receives telemetry from remote systems over the network is a suitable candidate for load balancing.]]>https://observiq.com/docs/going-to-production/agent/resiliencehttps://observiq.com/docs/going-to-production/agent/resilienceMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Monitor the Bindplane Agent]]><![CDATA[To monitor agent logs, we will set up the Bindplane Agent source that will send log files from the Agent itself. These logs contain information about the health of your Bindplane Agent. For this, we will need an already deployed agent from any existing configuration you already have set up. No additional server configuration is needed, we will just go into any of the configurations you would like to gather Agent logs from and click 'Add Source'. From there select the 'Bindplane Agent' source like in the example below: We can leave this on default as well for this example, and simply click 'Save': All that is left is to push out the configuration to the Agents by running a "Start Rollout". With that source rolled out to the Agent machines, your Bindplane Agent logs will now be sent to the destination of your choice. Below is an example of those logs on a Google Cloud Destination: Important Adding processors to this agent could cause problems, as it would create entries in this same log file, which could lead to infinite error messages. Add any processors sparingly and thoroughly test afterward to ensure it is following the intended behavior. If you haven't yet, you can also set up monitoring of the Bindplane OP server itself.]]>https://observiq.com/docs/going-to-production/agent/monitoring-bp-agenthttps://observiq.com/docs/going-to-production/agent/monitoring-bp-agentMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Agent Architecture]]><![CDATA[The BindPlane OP collector supports operating in two modes: Agent and Gateway. The mode is not configurable, it is implicit based on the sources configured. For example, a collector configured with the Nginx source is running in agent mode, while a collector configured with the OTLP source (receiving telemetry from multiple collectors) is running in aggregation mode. Agent Agent mode is used for collecting telemetry from an individual system (e.g. Database host, API server). Agents are used for collecting, processing, and shipping telemetry from an individual host to a destination. This destination may be your monitoring backend or an additional set of collectors (Gateways) which may perform additional processing and routing. Collectors running in agent mode do not require additional configuration. Once a collector is installed, you can attach a configuration which gathers local logs, metrics, and traces from the system. Use Cases A collector is running in agent mode anytime it is deployed to an endpoint system. The following are examples, and do not cover all use cases. - NGINX web server - PostgreSQL database server Gateway Gateway mode is used for receiving telemetry from one or more collectors over the network, optionally performing additional processing, and routing to a destination. Gateway collectors are optional, as agent collectors can ship telemetry directly to your telemetry backend. Use Cases 1. Isolating Backend Credentials Instead of deploying credentials to all of your agent systems, you can keep credentials exclusively on the gateway collectors. 
This simplifies credential rotation and reduces the security attack surface as credentials are deployed to a subset of your systems. 2. Offloading Processing Overhead Generally, you want your agent collectors to perform as little work as possible. If you have heavy processing requirements, it can be useful to offload that processing to a fleet of gateway collectors. For example, instead of filtering telemetry with an expensive regex operation, you can have the gateway collectors perform that task. Generally, gateway collectors are running on a dedicated system. The processing overhead can be justified because it does not rob the compute power of other services running on the same system, unlike an agent collector that may be running on a database server. 3. Network Security Gateway collectors could be located within a DMZ, firewalled from the internal network. You can configure your network to allow your agent collectors to forward to the gateway collectors while blocking the gateway collectors from reaching into your application network. This will allow you to send telemetry to a cloud-based backend without granting your endpoints access to the internet. Supported Source Types Collectors are running in gateway mode when they are configured with a source type that receives telemetry from multiple remote systems. Gateway source examples: - OTLP - Syslog - TCP / UDP Any source type which handles telemetry from one or more remote agents is considered to be n gateway.]]>https://observiq.com/docs/going-to-production/agent/collector-deployment-architectureshttps://observiq.com/docs/going-to-production/agent/collector-deployment-architecturesFri, 26 Apr 2024 22:44:21 GMT<![CDATA[Quickstart Guide]]><![CDATA[What is BindPlane OP? BindPlane Observability Pipeline gives you the ability to collect, refine, and ship metrics, logs, and traces to any destination. BindPlane OP provides the controls you need to reduce observability costs and simplify the deployment and management of telemetry agents at scale. To get started with BindPlane, see our Getting Started guide for a quick and easy deployment. If you need more advanced deployment configurations such as Kubernetes, TLS or if you will be using a proxy, check out the advanced installation page. Features - Manage the lifecycle of telemetry agents, starting with the BindPlane Agent, a distribution of OpenTelemetry. - Build, deploy, and manage telemetry configurations for different Sources and deploy them to your agents - Ship metric, log, and trace data to one or many Destinations - Utilize flow controls to adjust the flow of your data in real-time Architecture BindPlane OP is a lightweight web server (no dependencies) that you can deploy anywhere in your environment. It's composed of the following components: - GraphQL Server: provides configuration and agent details via GraphQL - REST Server: BindPlane CLI and UI make requests to the server via REST - WebSocket Server: Agents connect to receive configuration updates via OpAMP - Store: pluggable storage manages configuration and Agent state - Manager: dispatches configuration changes to Agents]]>https://observiq.com/docs/getting-started/quickstart-guidehttps://observiq.com/docs/getting-started/quickstart-guideWed, 15 Mar 2023 09:52:05 GMT<![CDATA[Google Marketplace Deployments]]><![CDATA[This guide will walk you through deploying BindPlane OP Enterprise on a GCE instance using the Google Cloud Deployment Manager. 
Deploying and Configuring BindPlane OP Enterprise Navigate to BindPlane OP Enterprise offering in the Google Marketplace Here. From the Overview tab, click Launch to start configuring your deployment. BindPlane OP Enterprise requires an enterprise license. Contact the observIQ sales team for more information. Deploying the image From the New BindPlane Enterprise deployment page, provide a friendly Deployment name and choose a Zone: Configure your Machine type and Storage Under the Machine type and Boot Disk sections, leave the default recommended values for Series, Machine Type, Book disk type, and Boot disk in size in GB\. Note: these are observIQs recommended settings; the machine type and boot disk size can be changed based on the size and scale of your monitored environment. Configure Networking Under the Networking section, select the default interface, verify the Allow TCP port 3001 from the internet is checked, and enter 0.0.0.0/0 as the Source IP range. Deploy the BindPlane OP Enterprise Image Review the GCP Marketplace and observIQ Terms of Service, and deploy the image. This will kick off the deployment within the Cloud Deployment Manager. A typical BindPlane OP Enterprise deployment takes about 2 minutes. Accessing BindPlane OP Enterprise After deploying the image, follow the on-screen instructions to SSH into the GCE VM hosting BindPlane OP, view and change the default password, and assign a static IP for your instance. After following the settings in the marketplace, the BindPlane OP interface will be available with the Ports you configured. For next steps, follow our Quickstart guide, starting on Step 2 for Accessing the BindPlane UI]]>https://observiq.com/docs/getting-started/google-marketplace-deploymenthttps://observiq.com/docs/getting-started/google-marketplace-deploymentFri, 10 Nov 2023 14:57:59 GMT<![CDATA[3. Install Your First Agent]]><![CDATA[Now that BindPlane OP is running, we'll install our first agent to start collecting telemetry. BindPlane OP is built around OpenTelemetry and uses the BindPlane Agent distribution for OpenTelemetry. To learn more, check out the GitHub page. Installation 1. Navigate to the Agents tab and select the Install Your First Agent button inside the Agent table. 2. Our Agent Installation wizard will walk you through installing an Agent. The first step is to select the platform you'll be installing an agent on. When you've completed the form click the Next button. Installation for containerized agents on Kubernetes and Openshift first requires you to specify the assigned configuration for the agent. If that's the case you must first create a configuration for that platform. 3. Now it's time to install your agent. Installation differs depending on the platform specified. In the case of Windows, macOS, or Linux installation you'll need to copy the installation command. Installed agents will appear in the table on this page. When you're done you can view all agents with the \\Return to all Agents\\ button. Upon successfully installing the agent, it will appear in the table below the install script.]]>https://observiq.com/docs/getting-started/quickstart-guide/install-first-agenthttps://observiq.com/docs/getting-started/quickstart-guide/install-first-agentThu, 13 Jun 2024 19:03:39 GMT<![CDATA[1. Install BindPlane OP Server]]><![CDATA[Welcome to BindPlane OP This guide is broken into four steps to help you get up and running with BindPlane OP. BindPlane OP will run on Linux or as a container using Docker. 
This guide will walk you through installing BindPlane OP on a Linux or Docker system, installing an agent, and collecting and routing your telemetry. Step 1: Installing and Configuring BindPlane OP Installation Check out our advanced setup page if you need additional installation options such as Kubernetes, TLS or if you will be using a Proxy. The first step is to download Bindplane OP. The download page has a few steps we need to cover first: 1. The first is to select the platform you will be running it on. You can choose Linux or Docker. 2. The next step will be selecting the version, it is recommended to use the latest available version. 3. If you don't already have a license, you will have the option to generate a free license as well, save this we will need it soon. 4. The next step will be the command you can run in your terminal for installing Bindplane. - If you use Docker, make the appropriate changes that are mentioned in the instructions below the script. Run the modified script in your terminal. Congratulations, you are done. You can go to the next section Access Bindplane OP UI - If you are installing it on Linux, the command you will run will look like the example below: Configuration Initialize Server Type y to continue the installation process. This will initialize the server with some configuration parameters, which updates the fields in the config.yaml located by default at /etc/bindplane/config.yaml: - License Key: A license is required to initialize the server configuration. If you do not have a license, you can request one on the Download page. - Server Host: Set to the instance's IP address, or 0.0.0.0 to bind to all IP addresses. - Server Port: Set to 3001 (the default value) unless you have a reason to change it. - Remote URL: Set to the URL that should be used to communicate with BindPlane externally. Generally, this is your server's hostname or IP address followed by the port. If BindPlane is behind a load balancer please follow the High Availability instructions. - Authentication Method: Choose the authentication type you would like to configure. (Free Edition users will not be prompted, instead, basic auth is configured automatically) - LDAP and Active Directory (Google Edition or Enterprise) - Enable TLS: If enabled, TLS will be used when communicating with the directory server. - Enable Mutual TLS: If enabled, mutual TLS authentication will be used when communicating with the directory server. - TLS Certificate: Path to the X509 PEM TLS certificate to use when mutual TLS is enabled. - Private Key: Path to the X509 PEM TLS private key to use when mutual TLS is enabled. - Certificate Authority: Optional path to the X509 PEM TLS certificate authority that should be used to validate the directory server's certificate. - Insecure Skip Verify: Choose "n" here. It is not recommended to skip certificate verification outside of a development environment. - Server Address: Set to the IP address or hostname of the directory server. - Server Port: Set to the port of the directory server. - Base DN: Set to the distinguished name that should be used to search for users. - Search Filter: Set to the search filter that should be used to search for users. - Bind Username: Set to the username that should be used when authenticating with the directory server. - Bind Password: Set to the password that should be used when authenticating with the directory server. 
- Single User - Username: Set to your desired basic auth username - Password: Set to your desired basic auth password - Store Type: Choose what storage method BindPlane should use. - PostgreSQL (Enterprise): Provide connection parameters for the PostgreSQL database to connect to. - Host: Set to the IP address or hostname of the PostgreSQL instance. - Port: Set to the port that the PostgreSQL instance is reachable on. - Database Name: Set to the name of the database to use for storage. BindPlane will create the database at startup if it does not already exist. - SSL Mode: Set to the preferred SSL Mode for connecting to the PostgreSQL instance. - Username: Set to the PostgreSQL user to authenticate as. - Password: Set to the password for the chosen PostgreSQL user. - BBolt: Use a BBolt database that BindPlane will manage for storage. BBolt is a simple database which is stored on the BindPlane system at /var/lib/bindplane/storage/bindplane.db Restart Server At the end of initialization, you'll be prompted to automatically restart BindPlane to have the changes take effect. If you choose not to restart automatically, use the following command to restart the server manually. That's it; you've successfully installed BindPlane OP. Next, we'll show you how to access the BindPlane OP UI in your browser.]]>https://observiq.com/docs/getting-started/quickstart-guide/install-bindplane-op-serverhttps://observiq.com/docs/getting-started/quickstart-guide/install-bindplane-op-serverSun, 27 Oct 2024 17:07:00 GMT<![CDATA[4. Build Your First Configuration]]><![CDATA[Now that you have an agent running, we'll configure it to start collecting telemetry and shipping it to your preferred destination. To set up an agent configuration, navigate to the Configurations tab and click Create Configuration. You'll now be in the configuration wizard. 1. Give your config a name (see naming rules below). 2. Choose a platform for it to run on that corresponds to your agent(s) 3. You can optionally add a description for the config, then click Next Rules for naming configs: - must be 63 characters or less - must begin and end with an alphanumeric character ( [a-z0-9A-Z] ) - can contain dashes ( - ), underscores ( \_ ), dots ( . ), and alphanumerics between Add a Source Next, we'll add sources to our configuration. Sources are where you'd like to collect metrics, logs, or traces. We will start by collecting some host metrics using the Host source. 1. Click Add Source 2. Choose "Host" 3. Choose the metrics you'd like to collect 4. Click Save when you're all done Click Save when you're done with the source configuration. You can add more sources, click on existing ones to see their configuration and edit or remove them, or click Next to move on to adding a destination. Add a Destination The last step is to add a destination. This is where you'd like to ship your telemetry for storage or analysis. BindPlane OP supports the most popular destinations out of the box. You can find a full list here. For this example, we will show you how to configure a Google Cloud destination. 1. Click Add Destination 2. Select "Google Cloud" from the list of destinations 3. Enter a name (corresponding with the same naming rules listed above) 4. Fill in your Project ID 5. Select the desired authentication method If the VM running your BindPlane agent is already in Google Cloud, then you can leave the authentication method as auto. 
Creating a credentials file for Google Cloud A Google Cloud Service Account can used for authentication by creating a service account and key. Create a service account with the following roles: - Metrics: roles/monitoring.metricWriter - Logs: roles/logging.logWriter - Traces: roles/cloudtrace.agent Create a service account JSON key and place it on the system that is running the collector. 6. Click Save to save the destination 7. Click Save again to finish building your configuration Apply Configuration The next page is the Details page for the config you just created. 1. Click the Add Agents button to add an agent 2. Select which agents you'd like to apply the config to 3. Click Apply Rollout The Configuration Now that you've built your configuration and specified the agents it should be applied to, you need to roll out the configuration. Rollouts are how we deploy configuration changes to our agents. Click "Start Rollout", and your configuration will be sent to your agent(s)! Next Steps Congratulations! You've successfully configured BindPlane OP, and you should now see telemetry flowing into your destination. If you run into any issues during setup, don't hesitate to contact us on Slack; we'd be happy to help. Next, you should take some time to explore the integrations available in BindPlane OP on our Sources page and Destinations page. Once you've configured your first pipeline, we can begin exploring the real power of BindPlane OP: routing, transforming, and reducing your telemetry data.]]>https://observiq.com/docs/getting-started/quickstart-guide/build-your-first-configurationhttps://observiq.com/docs/getting-started/quickstart-guide/build-your-first-configurationThu, 13 Jun 2024 12:58:52 GMT<![CDATA[2. Access BindPlane OP UI]]><![CDATA[The BindPlane UI can be accessed with your browser on port 3001. The URL will be http://:3001 with IP Address being the IP of the BindPlane server. To log in, use the credentials you specified when running the init command. Overview of the UI You'll find the following pages once you've accessed the BindPlane OP UI. Overview The Overview page summarizes the flow and consumption of your configurations and destinations. It's used to quickly understand your overall throughput for metrics, logs, and traces. The ability to sort to the "Top Three" will let you see the Configurations and Destinations with the most consumption at a glance. Agents This is the first stop to view your fleet of agents quickly. From this page, you can navigate to view agent configuration, status, or errors directly by clicking on the agent name. Alternatively, you can see which configuration resources are deployed to these agents in the "Configuration" column. Quickly narrow your view with the search bar and suggested filters. Configurations Here is the entry point to view all your current configurations. From here, you can select a configuration to edit or create a new configuration with the button on the top right. Destinations Here is where you'll find all your reusable destination resources. In particular, any Destinations you have created inside configurations can be viewed, edited, and deleted on this tab. 
Now that you've accessed the UI, let's help you install your first agent.]]>https://observiq.com/docs/getting-started/quickstart-guide/access-bindplane-op-uihttps://observiq.com/docs/getting-started/quickstart-guide/access-bindplane-op-uiSun, 27 Oct 2024 17:07:00 GMT<![CDATA[Snapshots]]><![CDATA[Snapshots Snapshots provide a way to view logs, metrics, and traces recently collected by an agent. Viewing Snapshots can be viewed by clicking the View Recent Telemetry button found below the details table on an agent page. Logs Metrics If no metrics are available, it's possible none have been collected yet. For example, if an agent has been running for 30 seconds and collecting metrics with a 60-second interval, it won't have any recent metrics to show. Limitations For recent telemetry to be available for an agent, there are two requirements: 1. The agent must be connected to BindPlane 2. The agent must be using a managed configuration]]>https://observiq.com/docs/feature-guides/snapshotshttps://observiq.com/docs/feature-guides/snapshotsFri, 24 May 2024 17:06:13 GMT<![CDATA[Rollouts]]><![CDATA[This is an overview of Rollouts using the Standard Rollout. See Progressive Rollouts What are Rollouts? Rollouts are the way BindPlane OP manages deploying configuration changes to agents. Rollouts allow you to stage and compare a new version of a configuration before deploying the changes to agents. Rollouts also apply changes to agents incrementally, so that errors due to any configuration changes are isolated to only a few agents. Rolling out a new Version The currently deployed version of the configuration is displayed by default and is editable. Edits are not instantly applied to agents. When an edit is made, it is added to a new version that is created as a draft. When the new version is ready to be deployed, press the Start Rollout button to begin deploying the new version to agents. After pressing the Start Rollout button, the new configuration begins to roll out incrementally to agents. First, the configuration is rolled out to 3 agents. Every five seconds, another batch of agents will be rolled out. The batch size will multiply by 5 each time, up to a cap of 100. If any agents experience errors when the new configuration is applied, the rollout will be automatically paused so that the configuration may be corrected. While making edits, you can switch to the live tab or click the compare button to view the currently deployed version. You may also discard your draft entirely. Permissions (RBAC) RBAC is only available in BindPlane OP Enterprise or BindPlane for Google. Learn more here. The following table shows what actions each role is able to perform. Can Start Rollouts Can Edit New Versions Can View Staged Configurations : : : : Admin User Viewer Resolving Rollout Errors Sometimes, rolling out a configuration will result in agent errors. When an agent reports an error due to a configuration rollout, the rollout will automatically pause. You can click the red error text to open a new tab that shows the agents that are errored. Clicking on one of the agents will show the specific error message. In order to resolve the error, edit the configuration and roll out a new version that corrects the error.]]>https://observiq.com/docs/feature-guides/rolloutshttps://observiq.com/docs/feature-guides/rolloutsWed, 17 Jul 2024 13:47:01 GMT<![CDATA[Role-based Access Control]]><![CDATA[Overview This document outlines BindPlane OP Role-Based Access Control (RBAC). 
BindPlane is organized by Organization and Project, where one organization can contain one or many projects. Prerequisites Before configuring RBAC, ensure the following prerequisites are met. License A Google or Enterprise license is required for using RBAC. Authentication Mode BindPlane must be configured to use LDAP, Active Directory, or other multi-user authentication mode. The default System authentication mode does not support multiple users. BindPlane Cloud supports multi-user by default and does not require additional configuration. RBAC Roles Organization Roles Organizations have two RBAC roles: Organization Admin - Full control over the organization. - Can create new projects. Organization User - View organization details. Project Roles Projects have three RBAC roles: Project Admin - Full control over the project. - Can add and remove users within the project. - Can modify configurations and trigger rollouts. Project User - Install and assign agents to configurations. - Can modify configurations within the project. - Cannot trigger rollouts. - Cannot invite or manage other users within the project. Project Viewer - Read-only access to the project. Role Assignment Users can be invited to a project by using the Invite Users button on the Project page. When users are added to a Project, they are implicitly added to the organization. Users can be invited by email or with an invite link. In both cases, a role must be selected. An Admin can modify a user's role by navigating to the Users tab on the Project page. From there, the user can be selected and their role can be modified.]]>https://observiq.com/docs/feature-guides/rbachttps://observiq.com/docs/feature-guides/rbacFri, 26 Jul 2024 14:51:31 GMT<![CDATA[Progressive Rollouts]]><![CDATA[Progressive Rollouts are a BindPlane OP Enterprise Edition feature. What are Progressive Rollouts? A Progressive Rollout allows you to rollout a configuration to a pre-defined subset of agents and pause before continuing the rollout to other groups of agents. Defining Stages To use a progressive rollout, go to the rollout options which are accessible in the configuration details section. Each stage is given a name and a set of labels. The stage will include all agents whose labels include every label specified by the stage. In Action Once the stages are defined and the agents are added, Start Rollout will rollout to _only_ the agents captured by the first stage. The rollout will continue only when continue rollout is clicked. Each stage can be hovered to see its name and how many agents have completed. The third stage matches agents that have the env:prod label. Agents that have already been counted are not added to this stage despite matching the label. Unassigned There are still some agents that have not been matched by any of the stages. There is always a final stage added to capture these agents. Empty stages Sometimes a stage does not match any agents. The rollout can be continued through this stage, doing nothing.]]>https://observiq.com/docs/feature-guides/progressive-rolloutshttps://observiq.com/docs/feature-guides/progressive-rolloutsThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Processors]]><![CDATA[What Are Processors? Processors can be inserted into your telemetry pipeline to transform your data before it arrives at your destination. Adding attributes, filtering, and converting logs to metrics are all the types of transformations we can do using processors in BindPlane OP. 
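For readers who think in raw OpenTelemetry terms, the first two of those transformations roughly correspond to collector processors like the sketch below (illustrative only; in BindPlane OP you configure the equivalent through the UI rather than writing YAML):

```yaml
processors:
  attributes/add-env:          # add an attribute to every record
    actions:
      - key: deployment.environment
        value: production
        action: insert
  filter/drop-debug:           # drop log records below INFO severity
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'
```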
Processors can be added from within any configuration and are indicated by the nodes that exist between the Sources and Destinations in your pipeline. You can insert processors at two points in your pipeline: 1. After Sources: Processors inserted after a source will only be applied to that particular sources data. They won't impact any other data stream in your pipeline. 2. Before Destinations: Processors inserted before a destination will be applied to _all_ data flowing into it from all sources. The topology visualization makes it very clear how your data is flowing and what is being impacted by a particular processor. You should use this visualization as part of your workflow when making decisions about routing and transforming your data. It's what makes BindPlane OP great. How to Insert a Processor To insert a new processor, simply click on one of the processor nodes in your pipeline. As shown below, you'll be presented with a modal that let's you begin adding processors. Processors are executed from top to bottom and can be rearranged by simply dragging and dropping them into the desired order of operations. When a processor has been added, a count will appear on the processor node to indicate the number of added processors. If no count is shown that means there are no processors at that node. Bringing it All Together Processors in BindPlane OP are incredibly powerful and will transform the way you think about your Observability. Take a look at the next section below to learn how to use processors in combination with Snapshots to reduce the volume of logs you're sending to your analysis tools.]]>https://observiq.com/docs/feature-guides/processorshttps://observiq.com/docs/feature-guides/processorsThu, 23 May 2024 18:15:38 GMT<![CDATA[Pausing Telemetry]]><![CDATA[Pausing Sources and Destinations By default, telemetry is collected from all sources in a configuration and sent to all supported destinations in that configuration. However, there may be times when you don't want to collect from a certain source or export to a destination. To support those situations, BindPlane allows you to "Pause" sources and destinations. When a source is paused, agents won't try to collect from it. When a destination is paused, no collected telemetry will be sent to it. An important detail to remember is that pausing or resuming a source or destination in a configuration will update all agents using that configuration to pause/resume that source/destination. Pausing a Source Sources can be paused from the page of either an Agent or its configuration. To pause a source, click on the card for the source you want to pause in the topology view. Its current status will be shown in the bottom left corner, either Running or Paused. If running, clicking the Pause button will pause collection of that source. After clicking "Pause", the topology reflects that the Active Directory source has been paused. Pausing a Destination Destinations can be paused like sources, by clicking on the appropriate card in the topology view of a configuration. Its current status will be shown in the bottom left corner, either Running or Paused. If running, clicking the Pause button will pause the sending of telemetry to that destination. If the only destination in a configuration is paused, the agent will also pause collecting all sources as the telemetry has nowhere to go. 
A major difference between pausing sources and destinations is that while pausing a source only affects that configuration, pausing a destination will pause it in all configurations including it. For example, imagine a single Google Cloud Monitoring destination, example-gcp-project, is being used in several configurations to send telemetry to that GCP project. If you need to stop all telemetry from being sent, pausing the example-gcp-project destination in one configuration will pause it in all other configurations. After clicking "Pause", the topology reflects that the Gcloud-qa destination has been paused.]]>https://observiq.com/docs/feature-guides/pause-resumehttps://observiq.com/docs/feature-guides/pause-resumeFri, 24 May 2024 20:18:30 GMT<![CDATA[Metric Filtering]]><![CDATA[Metric Filtering Sources that collect metrics can be configured to filter out any number of metrics. When a metric is filtered out, it will not be sent to any destination. Configuration Metric filtering can be configured for a source when initially creating it, or an existing source can be edited to change its filtering. Once saved all agents collecting that source will be updated to use the updated filter settings. The controls for filtering metrics are found in the Advanced section of the source configuration form. Available metrics are organized in groups, allowing you to quickly enable or disable filtering for multiple related metrics. The checkbox next to each metric indicates whether it will be sent to destinations. That is, unchecking the box for a metric will filter it out. In the above image, all metrics in the Virtual Server group will be filtered out, as well as bigip.pool.availability and bigip.pool.packet.count.]]>https://observiq.com/docs/feature-guides/metric-filteringhttps://observiq.com/docs/feature-guides/metric-filteringFri, 24 May 2024 17:06:13 GMT<![CDATA[Live Preview]]><![CDATA[Live Preview Live Preview provides a real-time preview of changes you make to your telemetry, giving you the power to validate the impact before rolling out the change to your agents. Viewing From a configuration page, simply click on any processor node in your pipeline. You'll then be presented with a full-screen editing experience. Use Snapshots on the left to inspect your data, add processors to transform it, and view the results in real-time with Live Preview on the right. This becomes an excellent sandbox to experiment with changes before you commit them to your production agents. _This shows Live Preview with a Severity Filter processor applied._ Limitations While Live Preview works with most processors, you will not see a preview when you configure the following processors: - Batch - Count Telemetry - Deduplicate Logs - Extract Metric - Compute Metric Statistics - Resource Detection]]>https://observiq.com/docs/feature-guides/live-previewhttps://observiq.com/docs/feature-guides/live-previewFri, 24 May 2024 20:18:30 GMT<![CDATA[Library]]><![CDATA[Create and edit library resources Library resources can be created from the library or added from one of your configurations. They are marked with a filled bookmark and are each given a unique (case-insensitive) name. They can be edited from the library or a configuration. Editing one of these resources will be reflected everywhere it is used. If a resource is in more than one configuration, you will be prompted to confirm the update. Configurations containing the resource must still be rolled out afterward. 
Use a library resource To use a Host source you added in the library, create or go to an existing configuration and add a Host source. You will be prompted with a list of existing library Host sources. Add an existing resource to the library If you have an existing resource that you want to add to the library, edit the resource and click the bookmark to give it a name and add it. It must be fully saved in a configuration for this option to appear. Unlink a resource from the library You can also unlink a resource by clicking the bookmark while a resource is in the library. This will save the resource as-is in your configuration and it will no longer be linked to the library resource. Destinations are always library resources, and cannot be unlinked. Delete resources in the library You can delete resources from the library as long as they are not being used in a configuration. Select the resources you want to delete, and the button will appear. Any resources not that cannot be deleted will show a list of the configurations they are in.]]>https://observiq.com/docs/feature-guides/libraryhttps://observiq.com/docs/feature-guides/libraryFri, 24 May 2024 20:18:30 GMT<![CDATA[Audit Trail]]><![CDATA[Audit Trail is a BindPlane OP Enterprise Edition feature. What is Audit Trail? Audit Trail is a feature for BindPlane OP Enterprise Edition that creates a log of events that can be used for auditing resources that are created and modified within BindPlane OP. With the audit trail, you can keep track of changes to configurations, rollouts, and users in your project. Configuration The audit trail feature is automatically enabled in BindPlane OP Enterprise Edition. Retention may be configured in your server config, by setting the auditTrail.retentionDays configuration option. In this example, events are configured to be retained for 60 days: By default, the audit trail will retain audit events for 30 days. Viewing Audit Events Audit events can be viewed through either the UI or the CLI. UI The audit logs can be accessed by admins of the project by clicking the gear icon in the top right of the BindPlane UI, then on the Audit Logs option. On the Audit Logs page, you will see the following: 1. You can filter by the affected configuration. This input accepts both the configuration name, as well as the configuration name + version (e.g. myconfig:3 would filter out all logs except for ones affecting version 3 of myconfig). 2. You can filter by the user whose action created the log. 3. You can set the minimum date of logs to view. 4. You can set the maximum date of logs to view. 5. You can export and download the current view with all active filters to a CSV file. Below, you will see a table of all audit events that match the current filters. CLI To retrieve audit events, the bindplane get audit-events command can be used. In addition to the standard options for bindplane get, there are some extra parameters that may optionally be specified in order to filter the retrieved audit events: Flag Description configuration The name of the configuration to filter by max-date The maximum date for the events filter, in the format of YYYYMMDDHHMMSS min-date The minimum date for the events filter, in the format of YYYYMMDDHHMMSS user The display name of the user who made the change to filter by For a full list of configuration flags, run the bindplane get audit-events help command. 
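As a sketch of typical usage (the user and configuration names are hypothetical; flag spellings follow the table above and should be confirmed with the help command):

```bash
# Audit events for version 3 of "myconfig" made by a specific user, limited to one day.
# Dates use the YYYYMMDDHHMMSS format described above.
bindplane get audit-events \
  --configuration myconfig:3 \
  --user "Jane Doe" \
  --min-date 20240601000000 \
  --max-date 20240602000000
```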
CLI Examples Output Audit Events as CSV Get All Audit Events Generated by a Specific User Get All Audit Events for a Specific Configuration Get All Audit Events for a Specific Configuration (with version) Get All Audit Events for the Past Day Types of Events Currently, there are three categories of events that are logged to the audit trail. Configuration Events When a configuration is created or modified, an event is logged specifying which resource of the config was modified, along with the user that modified it. The following events may be emitted: Action Resource Kind Description Created Source A new source of the type specified by the resource name has been added to the configuration. Created Processor A new processor of the type specified by the resource name has been added to the configuration. Created Destination A new destination of the type specified by the resource name has been added to the configuration. Modified Source A source of the type specified by the resource name has been modified for the configuration. Modified Processor A processor of the type specified by the resource name has been modified for the configuration. Modified Destination A destination of the type specified by the resource name has been modified for the configuration. Deleted Source A source of the type specified by the resource name has been removed from the configuration Deleted Processor A processor of the type specified by the resource name has been removed from the configuration. Deleted Destination A destination of the type specified by the resource name has been removed from the configuration. Rollout Events When a new rollout is created, started, paused, or resumed, an event is logged for the configuration. The following events may be emitted: Action Resource Kind Description Pending Rollout A new rollout has been created in a Pending state for the configuration. Started Rollout A rollout has been started for the configuration. Paused Rollout An in-progress rollout has been paused for the configuration. Resumed Rollout A previously paused rollout has been resumed for the configuration. User Events When users are added, removed, or modified to an project, an audit event is logged for that user. The following events may be emitted: Action Resource Kind Description Created User The user specified by the resource name has been added to the project. Modified User The user specified by the resource name has had their role changed to the role specified in the resource name. Deleted User The user specified by the resource name has been removed from the project.]]>https://observiq.com/docs/feature-guides/audit-trailhttps://observiq.com/docs/feature-guides/audit-trailThu, 13 Jun 2024 17:42:05 GMT<![CDATA[BindPlane OP Editions]]><![CDATA[BindPlane OP Editions - Feature Comparison BindPlane OP is available in three editions: - Free: A robust free tier for when you're sending less than 10GB/Day or managing fewer than 10 agents. This is a great way to get started with BindPlane OP. - BindPlane OP for Google: We've partnered with Google to provide an edition of BindPlane OP for Google Cloud Operations users. Collect on-prem telemetry and route to any Google destination at no cost. The Google edition does not include data reduction, advanced transformation capabilities, or the ability to send to non-Google destinations. - Enterprise: The enterprise tier is perfect for when you're ready to go to production. It includes everything in the free edition, plus RBAC, LDAP, Multi-Project, and much more. 
A full comparison of each edition can be found at https://observiq.com/solutions]]>https://observiq.com/docs/bindplane-editions/bindplane-enterprise-edition-featureshttps://observiq.com/docs/bindplane-editions/bindplane-enterprise-edition-featuresThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Extensions]]><![CDATA[What are Extensions? Extensions are components in a Configuration that add capabilities to the Agent. They use OpenTelemetry extensions under the hood. A common use case for extensions could be to configure a local server to return the Agent's healthy status. Managing Extensions Extensions are configured on a per Configuration basis. To view, edit, or modify extensions for a Configuration, select the Advanced Configuration icon on the bottom right-hand side of the Topology Graph on any Configuration detail page. From the opened menu select "Manage Extensions" and the Extension Editor will be brought up. Extensions Available for BindPlane OP Extension BP OP Enterprise Only : : Go Performance Profiler Health Check Custom]]>https://observiq.com/docs/agent-configuration/extensionshttps://observiq.com/docs/agent-configuration/extensionsTue, 09 Jan 2024 11:00:00 GMT<![CDATA[Go Performance Profiler]]><![CDATA[Go Performance Profiler Extension The Go Performance Profiler Extension can be used to enable the Go Performance Profiler, also known as pprof, for an Agent. It configures an HTTP server that exposes runtime profiling data in the format expected by the pprof visualization tool. Configuration Table Parameter Type Default Description : : : : listen_address string 127.0.0.1 The IP address or hostname to bind the profiler to. Setting to 0.0.0.0 will listen to all network interfaces. tcp_port int 1777 The TCP port to bind the profiler to. block_profile_fraction fraction 0 The fraction of blocking events that are profiled, must be a number between 0 and 1. A value of zero will profile no blocking events. mutex_profile_fraction fraction 0 The fraction of mutex contention events that are profiled, must be a number between 0 and 1. A value of zero will profile no mutex contention events should_write_file bool false If true, the agent will write the CPU profile to a file on shutdown. cpu_profile_file_name string $OIQ_OTEL_COLLECTOR_HOME/observiq-otel-collector.pprof The file name to write the CPU Profile. The default is observiq-otel-collector.pprof written in the Agent's home directory. The CPU profile file is only written once the Agent has been stopped and the should_write_file parameter is set to true. Example Configuration Web Interface Standalone Extension Configuration with Embedded Extension]]>https://observiq.com/docs/agent-configuration/extensions/pprofhttps://observiq.com/docs/agent-configuration/extensions/pprofMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Health Check]]><![CDATA[Health Check Extension Extension The Health Check Extension enables an HTTP URL that can be probed to check the status of the BindPlane Agent. Configuration Table Parameter Type Default Description : : : : listen_address string 0.0.0.0 Hostname or IP address where the agent will publish the health check status. listen_port int 13133 HTTP port on which to publish the health check status. path string / the path to be configured for the health check server healthy_response_body string A static body that overrides the default response returned by health check service when the agent is healthy. 
unhealthy_response_body string A static body that overrides the default response returned by the health check service when the agent is unhealthy. enable_tls bool false Whether or not to use TLS. cert_file string A TLS certificate used for authentication. key_file Private Key File A TLS private key used for authentication. mutual_tls bool false Whether or not to use mutual TLS authentication client_ca_file string Certificate authority used to validate the client TLS certificates. While the check_collector_pipeline configuration exists for the OpenTelemetry Health Check Extension, it's configuration is not exposed because its not working as expected. The BindPlane Health Check extension will be updated once a new extension is available as a replacement. More details can be found on the OpenTelemetry Collector Contrib issue. Example Configuration Basic Configuration For a basic configuration, we need to specify the listen_address, listen_port, and path parameters. Web Interface Standalone Extension Configuration with Embedded Extension]]>https://observiq.com/docs/agent-configuration/extensions/health_checkhttps://observiq.com/docs/agent-configuration/extensions/health_checkMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Custom]]><![CDATA[Custom Extension The Custom extension can be used to inject a custom OTel extension into a Configuration. A List of supported extensions can be found here. Configuration Table Parameter Type Default Description : : : : telemetry_types telemetrySelector [] Choose Telemetry Type. configuration yaml required Enter any supported Extension and the YAML will be inserted into the configuration. Example Configuration In this example, we use the Custom extension type to inject the following health check extension. Web Interface Standalone Extension Configuration with Embedded Extension]]>https://observiq.com/docs/agent-configuration/extensions/customhttps://observiq.com/docs/agent-configuration/extensions/customMon, 10 Jun 2024 12:50:01 GMT<![CDATA[Agent Configuration Encryption]]><![CDATA[Sensitive values (e.g. passwords, API keys, credential blobs) in the BindPlane Agent on-disc configuration file can be encrypted using the AES credential provider. The agent needs to be configured with the environment variable OTEL_AES_CREDENTIAL_PROVIDER set to a valid AES encryption key in base64 format. An AES 32-byte (AES-256) key can be generated using the following command: Caveats Once the agent is configured with an encryption key, the key must be provided to the agent on startup. If the key is lost, the agent will be unable to decrypt the configuration file, and the agent will fail to start. In order to safely rotate the key the agent is using, either reinstall the agent, providing the new key at that time, or configure the agent without any sensitive parameters by pausing all destinations in the configuration. The agent can then be restarted with the new key, the destinations restarted, and the configuration with sensitive parameters can be rolled out. Configuration In all these examples, replace with the base64 encoded AES key, for example n0joqT/sBPaOiudEovYiW3oM51SegcuyY6c0TACG/yQ=. Linux You can configure the OTEL_AES_CREDENTIAL_PROVIDER environment variable by using a Systemd override. 
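A minimal sketch of the Linux setup follows. It assumes the agent runs as the observiq-otel-collector systemd service (the usual service name for the BindPlane Agent); adjust the name if your installation differs, and substitute your own base64 key.

```bash
# Generate a 32-byte (AES-256) key in base64 -- keep it somewhere safe.
openssl rand -base64 32

# Create a systemd override for the agent service and add the key.
sudo systemctl edit observiq-otel-collector
# In the editor, add:
#   [Service]
#   Environment=OTEL_AES_CREDENTIAL_PROVIDER=<your-base64-key>

# Reload systemd and restart the agent so the key takes effect.
sudo systemctl daemon-reload
sudo systemctl restart observiq-otel-collector
```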
In short: create an override for the agent service, set the OTEL_AES_CREDENTIAL_PROVIDER environment variable in that override, then reload the systemd configuration and restart the service. Windows Start PowerShell as administrator and set a system-wide OTEL_AES_CREDENTIAL_PROVIDER environment variable, then restart the service. Alternatively, the key can be set in the Windows Registry Editor by adding a new environment variable named OTEL_AES_CREDENTIAL_PROVIDER with your key as the value, and then restarting the service using the Services application. macOS Add OTEL_AES_CREDENTIAL_PROVIDER to the EnvironmentVariables dict in the launchd service file /Library/LaunchDaemons/com.observiq.collector.plist, then restart the agent. External Links - AES Credential Provider]]>https://observiq.com/docs/agent-configuration/advanced/config_encryptionhttps://observiq.com/docs/agent-configuration/advanced/config_encryptionThu, 14 Nov 2024 10:08:15 GMT<![CDATA[Terraform]]><![CDATA[Terraform can be used to manage BindPlane OP resources. Free, Enterprise, and Cloud editions are supported. Terraform enables managing BindPlane OP resources with configuration as code. In addition to BindPlane OP's robust resource versioning, Terraform configuration can be saved to source control for additional versioning and change management. Documentation The Terraform Provider documentation can be found on HashiCorp's documentation site. Example Usage - Provider configuration - Destination example - Processor example - Source example - Configuration example External Links - BindPlane Provider - BindPlane Provider Releases]]>https://observiq.com/docs/advanced-setup/terraformhttps://observiq.com/docs/advanced-setup/terraformWed, 13 Dec 2023 11:59:08 GMT<![CDATA[Monitoring]]><![CDATA[BindPlane OP is instrumented with OpenTelemetry metrics, allowing server operators to track BindPlane's health using the monitoring backend of their choice. Metrics See Metrics for a comprehensive list of metrics. Monitoring BindPlane Server BindPlane can be configured to export metrics about its health, for example allowing you to see a sudden drop in connected agents or dangerously high memory utilization. These metrics can be collected by two methods: scraping a Prometheus endpoint on the server, or sending to an OTLP endpoint. For both methods, we recommend collecting the metrics with the BindPlane Agent, which can then perform processing and forward metrics to your monitoring platform. By default, your config.yaml file will contain a base metrics block that doesn't export anything. OTLP Configure the metrics block of your config.yaml for OTLP export. With this configuration, metrics will be exported every 60 seconds to an OTLP endpoint without TLS. To have your BindPlane Agent collect these metrics, add an OTLP source listening on port 4317. Prometheus Configure the metrics block of your config.yaml for Prometheus. With this configuration, metrics will be available in Prometheus format at the /metrics endpoint of the BindPlane server. The endpoint will be available without authentication.
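A sketch of what that might look like in config.yaml (the type value is an assumption; confirm the option names against the Metrics options in the Configuration reference below):

```yaml
# /etc/bindplane/config.yaml (excerpt) -- illustrative sketch
metrics:
  type: prometheus   # expose BindPlane's own metrics at /metrics
```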
To protect the endpoint with basic auth, provide the username and password parameters: We recommend collecting the metrics using the BindPlane Agent.]]>https://observiq.com/docs/advanced-setup/monitoringhttps://observiq.com/docs/advanced-setup/monitoringMon, 04 Nov 2024 15:36:35 GMT<![CDATA[Installation]]><![CDATA[- Prerequisites - Install BindPlane OP Server - Install and Uninstall Agents - Upgrade or Uninstall BindPlane OP Server - Installing the BindPlane OP Client - All Available Package Downloads - Configure the BindPlane OP Server - How to use the CLI]]>https://observiq.com/docs/advanced-setup/installationhttps://observiq.com/docs/advanced-setup/installationFri, 10 Nov 2023 14:57:59 GMT<![CDATA[Configuration]]><![CDATA[Options BindPlane server configuration can be found at /etc/bindplane/config.yaml. BindPlane will look for flags, environment variables, and a configuration file, with precedence: flags > environment variables > configuration file. Server and client configurations can be bootstrapped using the init command. See the initialization section. For detailed examples, see the configurations section. Host IP Address the BindPlane server binds to. This can be a single address or 0.0.0.0 for all interfaces. Option Flag Environment Variable Default network.host host BINDPLANE_HOST 127.0.0.1 Port TCP port the BindPlane server binds to. This must be an unprivileged port when running BindPlane as a non-root user. Option Flag Environment Variable Default network.port port BINDPLANE_PORT 3001 Remote URL URL used to reach the BindPlane server. This must be set in all client and server configurations and must be a valid URL with a protocol (HTTP / HTTPS), hostname or IP address, and port. If the server is behind a proxy or load balancer, the proxy URL can be used. Option Flag Environment Variable Default network.remoteURL remote-url BINDPLANE_REMOTE_URL http://127.0.0.1:3001 CorsAllowedOrigins A list of origin domains allowed to make requests to BindPlane OP. It should at least contain the domain of the hosted UI. An empty or null value matches all origins. A wildcard "\" is also allowed to match all origins. In most cases, this value can be the same as network.remoteURL. Option Flag Environment Variable Default network.corsAllowedOrigins cors-allowed-origins BINDPLANE_CORS_ALLOWED_ORIGINS Logging Log output (file or stdout). When log output is set to file, a log file path can be specified. Option Flag Environment Variable Default logging.output logging-output BINDPLANE_LOGGING_OUTPUT file logging.filePath logging-file-path BINDPLANE_LOGGING_FILE_PATH /.bindplane/bindplane.log Server installations will use /var/log/bindplane/bindplane.log, which is set using an environment variable in the systemd service configuration. Log files are rotated and gzip compressed, and cleaned up automatically by BindPlane. Log files have a max size of 100mb and up to 10 rotates or 30 days of age, whichever comes first. Using an external utility such as logrotate is not recommended. Metrics BindPlane can be configured to forward metrics to an OpenTelemetry collector. The easiest way to get up and running is to deploy an agent on the same machine BindPlane is installed on. The agent should be configured with the OpenTelemetry source. Configure BindPlane to send metrics over localhost. Once configured, the managed agent can forward the metrics to the destination of your choice. Metrics are sent to OpenTelemetry collectors using the gRPC protocol. 
Option Flag Environment Variable Default metrics.type metrics-type BINDPLANE_METRICS_TYPE metrics.interval metrics-interval BINDPLANE_METRICS_INTERVAL 1m0s metrics.otlp.endpoint metrics-otlp-endpoint BINDPLANE_METRICS_OTLP_ENDPOINT metrics.otlp.insecure metrics-otlp-insecure BINDPLANE_METRICS_OTLP_INSECURE Tracing BindPlane supports configuration to enable tracing. tracing.type can be set to google or otlp. Option Flag Environment Variable Default tracing.type tracing-type BINDPLANE_TRACING_TYPE When tracing.type is set to otlp, some more configuration is possible. Option Flag Environment Variable Default tracing.otlp.endpoint tracing-otlp-endpoint BINDPLANE_TRACING_OTLP_ENDPOINT tracing.otlp.insecure tracing-otlp-insecure BINDPLANE_TRACING_OTLP_INSECURE FALSE TLS BindPlane supports server side TLS and mutual TLS. See the tls examples for detailed usage. Option Flag Environment Variable network.tlsCert tls-cert BINDPLANE_TLS_CERT network.tlsKey tls-key BINDPLANE_TLS_KEY network.tlsCA tls-ca BINDPLANE_TLS_CA network.tlsSkipVerify tls-skip-verify BINDPLANE_TLS_SKIP_VERIFY network.tlsMinVersion tls-min-version BINDPLANE_TLS_MIN_VERSION Server - network.tlsCert: Enables server-side TLS - network.tlsKey: Enables server-side TLS - network.tlsCA: Enables mutual TLS Client - network.tlsCA: Allows the client to trust the server certificate. Not required if the host operating system already trusts the server certificate. - network.tlsCert: Enables mutual TLS - network.tlsKey: Enables mutual TLS - network.tlsSkipVerify: Skip server certificate verification Storage Backend BindPlane supports two storage backends, bbolt and postgres. Option Flag Environment Variable Default store.type store-type BINDPLANE_STORE_TYPE bbolt BBolt Option Flag Environment Variable Default store.bbolt.path \store-bbolt-path BINDPLANE_STORE_BBOLT_PATH /.bindplane/storage Postgres Postgres can be used as a local or remote storage backend. Postgres storage is enabled when store.type is set to postgres. Postgres is a BindPlane OP Enterprise feature. Option Flag Environment Variable Default store.postgres.host postgres-host BINDPLANE_POSTGRES_HOST localhost store.postgres.port postgres-port BINDPLANE_POSTGRES_PORT 5432 store.postgres.database postgres-database BINDPLANE_POSTGRES_DATABASE bindplane store.postgres.sslmode postgres-ssl-mode BINDPLANE_POSTGRES_SSL_MODE disable store.postgres.sslrootcert postgres-ssl-root-cert BINDPLANE_POSTGRES_SSL_ROOT_CERT Optional store.postgres.sslcert postgres-ssl-cert BINDPLANE_POSTGRES_SSL_CERT Optional store.postgres.sslkey postgres-ssl-key BINDPLANE_POSTGRES_SSL_KEY Optional store.postgres.username postgres-username BINDPLANE_POSTGRES_USERNAME store.postgres.password postgres-password BINDPLANE_POSTGRES_PASSWORD postgres.maxConnections postgres-max-connections BINDPLANE_POSTGRES_MAX_CONNECTIONS 100 Example Postgres configuration: Event Bus BindPlane uses an event bus to communicate between components within BindPlane. When operating BindPlane with multiple servers, the event bus can be used to send events between BindPlane servers. Option Flag Environment Variable eventBus.type event-bus-type BINDPLANE_EVENT_BUS_TYPE The event bus type supports the following options: - local - nats - googlePubSub Local Event Bus The local event bus is the default event bus used by BindPlane. The local event bus does not have a configuration. It can be used by setting the event bus type to local. Google Pub/Sub Event Bus The Google Pub/Sub event bus can be used when operating multiple BindPlane servers. 
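A hedged sketch of the corresponding config.yaml block, assuming the YAML nesting mirrors the dotted option names documented below; the project and topic values are placeholders:
eventBus:
  type: googlePubSub
  googlePubSub:
    projectID: my-gcp-project    # placeholder
    topic: bindplane-events      # placeholder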
Option Flag Environment Variable eventBus.googlePubSub.projectID google-pub-sub-project-id BINDPLANE_GOOGLE_PUB_SUB_PROJECT_ID eventBus.googlePubSub.credentialsFile google-pub-sub-credentials-file BINDPLANE_GOOGLE_PUB_SUB_CREDENTIALS_FILE eventBus.googlePubSub.topic google-pub-sub-topic BINDPLANE_GOOGLE_PUB_SUB_TOPIC When operating BindPlane on Google Compute Engine with the Pub/Sub OAuth scopes enabled, BindPlane will handle authentication automatically. The configuration is simple and requires only the projectID and topic options. When running outside of Google Cloud, or without the Pub/Sub OAuth scopes, you can use a Google Service Account credential by setting the credentialsFile option. This credentials file must be installed on the BindPlane server's filesystem and be readable by the bindplane user. BindPlane will manage its own Pub/Sub subscription. Subscriptions are created and named based on the server's hostname. BindPlane will attempt to clean up its subscription on shutdown. Subscriptions are automatically cleaned up by Google Cloud if they have been disconnected for more than one day. NATS Event Bus NATS can be used as the event bus for BindPlane OP Enterprise and is a good option for distributed on-prem deployments. NATS is embedded into BindPlane and does not require external infrastructure. See the NATS Configuration documentation for more information. Server Session Secret A UUIDv4 is used for encoding web UI login cookies. This should be a new random UUIDv4. This value should be different from auth.secretKey. Option Flag Environment Variable auth.sessionSecret session-secret BINDPLANE_SESSION_SECRET Prometheus It is not necessary to make changes to the BindPlane Prometheus configuration when using BindPlane's bundled Prometheus. Base Configuration. Option Description prometheus.enable Whether or not to enable Prometheus as the measurement backend. prometheus.enableRemote Whether or not to use a remote Prometheus instance. When disabled, BindPlane will manage a local Prometheus child process. prometheus.localFolder The directory where the Prometheus binary and dependencies are located. prometheus.host The hostname or IP address of the Prometheus instance. prometheus.port The port of the Prometheus instance's API. prometheus.queryPathPrefix The path prefix of the query endpoint. This parameter is useful if using a Prometheus-compatible system such as Mimir. Authentication. Authentication is supported for remote Prometheus deployments. BindPlane OP supports two authentication modes. - No authentication - Basic authentication Prometheus does not use authentication by default. Follow the Prometheus Basic Auth Password Hashing documentation for more information. Option Description prometheus.auth.type The authentication type to use. Supported options are none and basic (Basic Authentication). prometheus.auth.username The username to use when basic authentication is enabled. prometheus.auth.password The password to use when basic authentication is enabled. TLS BindPlane supports connecting to Prometheus with TLS and mutual TLS. Option Description prometheus.enableTLS Whether or not to use TLS when communicating with Prometheus. prometheus.tls.tlsSkipVerify Whether or not to skip verification of the Prometheus server's TLS certificate. It is not recommended to enable this option. prometheus.tls.tlsCa The x509 PEM encoded certificate authority file to use to verify the Prometheus server's TLS certificate.
Alternatively, the CA certificate can be imported into the host's trust store, instead of configuring this option. prometheus.tls.tlsCert The x509 PEM encoded client certificate file to use for mutual TLS...]]>https://observiq.com/docs/advanced-setup/configurationhttps://observiq.com/docs/advanced-setup/configurationWed, 09 Oct 2024 10:41:55 GMT<![CDATA[CLI]]><![CDATA[You can access the BindPlane CLI by using the bindplane command from the install directory or preceded by the absolute path of the install directory. Installing BindPlane Client (Remote CLI) See the Installation page for install instructions. CLI Commands Command Description : : apply Apply resources completion Generate the autocompletion script for the specified shell copy Make a copy of a resource delete Delete bindplane resources get Display one or more resources help Help about any command init Initialize an installation install Install a new agent label List or modify the labels of a resource profile Profile commands. serve Starts the server sync Sync an agent-version from github update Update an existing agent upload Upload an offline agent upgrade package version Prints BindPlane version rollout Manage one or more rollout Flags Description : : : : -c, config string full path to configuration file env string BindPlane environment. One of test development production (default "production") -h, help help for bindplane host string domain on which the BindPlane server will run (default "localhost") logging-file-path string full path of the BindPlane log file, defaults to $HOME/.bindplane/bindplane.log logging-output string output of the log. One of: file stdout tracing-otlp-endpoint string endpoint to send OTLP traces to tracing-otlp-insecure set true to allow insecure TLS -o, output string output format. One of: json\table\yaml\raw (default "table") password string password to use with Basic auth (default "admin") port string port on which the rest server is listening (default "3001") profile string configuration profile name to use remote-url string http url that clients use to connect to the server tls-ca strings TLS certificate authority file(s) for mutual TLS authentication tls-cert string TLS certificate file tls-key string TLS private key file tls-skip-verify Whether to verify the server's certificate chain and host name when making client requests tracing-type string type of trace to use for tracing requests, either 'otlp' or 'google' username string username to use with Basic auth (default "admin")]]>https://observiq.com/docs/advanced-setup/clihttps://observiq.com/docs/advanced-setup/cliFri, 10 Nov 2023 14:23:29 GMT<![CDATA[Metrics]]><![CDATA[BindPlane OP can be configured to expose metrics using OpenTelemetry Protocol or Prometheus. See Monitoring for configuration details. Key Performance Indicators Metrics denoted with "KPI" are key performance indicators. BindPlane OP administrators should pay close attention to KPIs to ensure BindPlane is operating normally. Event Bus The BindPlane Event Bus is responsible for publishing and consuming messages between BindPlane components. When operating BindPlane in High Availability, the Event Bus is responsible for sharing messages between BindPlane servers. Event Bus metrics can be used to gain visability into the health of the Event Bus. A misbehaving Event Bus will cause issues with configuration rollouts, Live Preview, and Recent Telemetry Snapshots. NATS nats.clientProducer (KPI) Number of times the NATS client producer handler was called. 
It is important to watch for the error attribute. Consistent producer errors indicate that the event bus is not functioning normally. - Type: Counter - Attributes - error: The error returned by the producer handler. One of "none", "max_payload", "unknown". - client_name: NATS client name. - cluster_name: NATS cluster name. - subject: NATS subject name. Example: nats.clientConsumer (KPI) Number of times the NATS client consumer handler was called. It is important to watch for the error attribute. Consistent consumer errors indicate that the event bus is not functioning normally. - Type: Counter - Attributes - error: The error returned by the consumer handler. One of "none", "max_payload", "unknown". - client_name: NATS client name. - cluster_name: NATS cluster name. - subject: NATS subject name. Example: nats.slowConsumer (KPI) Number of slow messages processed by the NATS client. See Slow Consumers for more information. When the slow consumer count is greater than 0, event bus messages are being dropped. - Type: Counter - Attributes - client_name: NATS client name. - subject: NATS subject name. Example: nats.clientProducerSize Size of the payload sent by the NATS client. - Type: Histogram - Attributes - client_name - cluster_name - error: The error returned by the producer handler. One of "none", "max_payload", "unknown". - This error will match the error on nats.clientProducer. - subject: NATS subject name - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 Example: nats.server.active_peers_count (KPI) Number of active peers connected to the NATS server. When operating BindPlane in High Availability, all BindPlane instances should report the same number of active peers, e.g. a three-node deployment should report 2 active peers. If the active peer count is not consistent between BindPlane servers, a configuration or network issue is the likely culprit. - Type: Gauge Example: PubSub pubsub.messages Number of PubSub messages sent and received. - Type: Counter - Attributes - direction: One of received, sent. Example: pubsub.io Amount of PubSub data sent and received. - Type: Counter - Attributes - direction: One of received, sent. Example: pubsub.errors (KPI) Number of PubSub errors. PubSub errors indicate an issue with publishing or consuming messages from Google PubSub. - Type: Counter Example: OpAMP agent.wait (KPI) Time spent waiting due to the maxConcurrency configuration option. This option prevents too many agents from reconnecting at the same time. During a BindPlane server restart, it is expected that agent.wait will increase temporarily as agents reconnect. If you experience significant agent.wait time, BindPlane is likely having an issue with agent connections. High agent.wait times can result in agents appearing "disconnected" and degraded overall performance. - Type: Histogram - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 Example: agent.connecting (KPI) Number of times agents have attempted to connect. The result attribute is critical to understanding if agents are connecting properly. - Type: Counter - Attributes - result: - conflict: The agent connection was rejected because an agent with the same agent_id is already connected. - connected: The agent connected successfully. - disconnected: The agent disconnected. - error: An error occurred during agent connection. - limited: The agent connection was rejected because the agent's account has reached the maximum allowed number of agents.
- unauthorized: The agent connection was rejected because an account matching the agent's secret-key was not found. Example: agent.configure Number of times agents have been configured (push). - Type: Counter - Attributes - result - configuring: Successfully set agent status to configuring during agent configuration. - disconnected: Failed to push update to agent because it is disconnected. - error An error occurred during agent configuration. - readonly: Agent does not support remote configuration. Example: agent.verify Number of times agents have been verified (pull). - Type: Counter - Attributes - result: - configuring: Agent status was changed to configuring because it was not running the correct configuration. - error An error occurred during agent configuration verification. - missing - readonly: Agent does not support remote configuration. - validated: Agent update skipped because agent already has the correct configuration or does not have a configuration assigned. - validated-hash: Agent configuration hash matches the hash previously pushed to the agent. - waiting: Agent configuration cannot be validated because the agent is applying the configuration. Example: agent.upgrade Number of times agents have been upgraded. - Type: Counter - Attributes - error: An error occurred while upgrading an agent. - upgrading: An upgrade request occurred. Example: agent.report Number of agent snapshot requests. - Type: Counter - Attributes - result: - disconnected: Snapshot request failed because the agent is disconnected. - error: An error occurred while requesting a snapshot from an agent. - sent: Snapshot request was successfully sent to the agent. Example: agent_messages Number of agent messages received. - Type: Counter - Attributes - components: A list of agent components. Example: agent.heartbeat Number of agent heartbeats received. - Type: Counter - Attributes Example: connected_agents (KPI) Number of connected agents. If connected_agents is less than the total number of agents, it is possible some agents are experiencing connectivity issues. This metric should be tracked with the agent.connecting metric. - Type: Gauge custom_messages.processed Number of custom messages that have been processed. - Type: Counter - Attributes - message_capability: Message capability. - message_type: Message type. Example: custom_messages.process_time Time spent processing custom messages. - Type: Histogram - Attributes - message_capability: Message capability. - message_type: Message type. - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 Example: throughput_metrics.processed Total amount of throughput metric datapoints that have been processed and batched. - Type: Counter - Attributes - message_capability: Message capability. - message_type: Message type. Example: measurements.process_time This metric is deprecated. Use custom_messages.process_time instead. Amount of time spent processing measurements. - Type: Histogram - Attributes - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 throughput_metrics.processed Total amount of throughput metric datapoints that have been processed and batched. - Type: Counter Example: Web Server requests (KPI) Number of HTTP requests. The status attribute will include the HTTP status code. It is important to monitor for 4xx and 5xx status codes. An excessive number of 4xx status codes could indicate agent authentication issues. 
Any number of 5xx status codes is unexpected, and could indicate a configuration issue or bug within BindPlane. - Type: Counter - Attributes - method: Request method. - status: Request status. - url: Request URL path. Example: request_duration Time taken to process the request in seconds. - Type: Histogram - Attributes - method: Request method. - status: Request status. - url: Request URL path. - Unit: Seconds - Buckets: - 0.1 - 0.5 - 1 - 2 - 5 - 10 Example: request_size Size of the request received. - Type: Histogram - Attributes - method: Request method. - status: Request status. - url: Request URL path. - Unit: Bytes - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 Example: Store eventbus.latency Time between sending an event to the event bus and the handler receiving it. - Type: Histogram - Attributes - event: One of received or handled. - handler: The component handling the event, one of manager or graphql. - type: The event type is always updates. Other values may exist in the future. - Buckets: - 0 - 5 - 10 - 25 - 50 - 76 - 100 - 250 - 500 - 750 - 1000 - 2500 - 5000 - 7500 - 10000 Example: store.updateRollout Number of times the store updateRollout method was called. - Type: Counter Example: Postgres wait_count Number of times a query had to wait for a connection. - Type: Counter Example: wait_time (KPI) Total time spent waiting for connections. If wait_time is consistently above 0, it could mean BindPlane's Postgres max connections configuration option is set too low or Postgres is experiencing performance issues. - Type: Counter - Unit: milliseconds Example: active_connections (KPI) Number of open active connections. If active_connections is consistently at 100% of the configured max connections, BindPlane may be experiencing performance issues. Generally, BindPlane's max connections should not exceed 100 (default). Increasing max connections might mask an underlying issue and is not recommended. - Type: Gauge Example: idle_connections Number of open idle connectio...]]>https://observiq.com/docs/advanced-setup/monitoring/metricshttps://observiq.com/docs/advanced-setup/monitoring/metricsThu, 07 Nov 2024 13:54:57 GMT<![CDATA[Multi Project Migration]]><![CDATA[Multi Project Multi-Project BindPlane is an Enterprise feature that allows users to create multiple tenants (projects). Each project can have its own users, configurations, and managed agents. Multi-Project supports user invitations in order to support collaboration between multiple users. Objective Export resources from an individual BindPlane OP project and import them into another project on the same system or a different BindPlane OP system. Prerequisites The following requirements must be met: - You are running BindPlane OP Enterprise. Contact the observIQ sales team for more information about upgrading to BindPlane OP Enterprise. - BindPlane OP v1.25.0 or newer. If on an older version, upgrade before attempting a migration. - BindPlane OP is configured with multi-project enabled. Procedure The migration procedure has the following steps: 1. Create API Keys 2. Configure BindPlane CLI profiles 3. Export resources from a project 4. Import resources to the new project 5. Validate 6. Migrate Agents Create API Keys Following the API Keys documentation, create an API key for your source and destination projects. We are going to refer to them as "export" and "import". Note the API keys, but do not create their profiles. Follow the next section for profile instructions.
Configure CLI Profiles Configure your BindPlane CLI with two profiles: one for the source project and server, and another for the target project and server. Create the export profile: Create the import profile: Replace the values for the remote URL and API key with your BindPlane OP server remote URL(s) and API keys. Export Resources Switch to the export profile: Export destinations, sources, processors, and configurations from the project: Import Resources Switch to the import profile: Import destinations, sources, processors, and configurations using the apply command: Validate Log into the BindPlane OP project and ensure the configurations and destinations are present. If everything looks right, the migration is finished. Migrate Agents Previously connected agents will need to be updated with the new project's secret key. Follow the Migrate Agents documentation.]]>https://observiq.com/docs/advanced-setup/migration/postgres-multi-account-bolt-store-to-multi-account-postgreshttps://observiq.com/docs/advanced-setup/migration/postgres-multi-account-bolt-store-to-multi-account-postgresThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Migrate Agents]]><![CDATA[Objective You have migrated resources from one BindPlane OP installation to another. All managed agents need to be updated in order to connect to the new BindPlane OP server or project. Prerequisites The following requirements must be met: - SSH (Linux) or Remote Desktop (Windows) access to the managed agents - Sudo or root access (Linux) or Administrator permissions (Windows) Procedure 1. Update Manager Configuration 2. Restart Agent 3. Validate Update Manager Configuration Edit the manager configuration with your editor of choice. On Linux, the path is /opt/observiq-otel-collector/manager.yaml and on Windows it is C:/Program Files/observIQ OpenTelemetry Collector/manager.yaml. The configuration will look similar to this: Modify the endpoint value to reflect the new IP address or hostname of the BindPlane OP server. Be sure to keep the protocol (ws / wss) and path (/v1/opamp) the same. If migrating between projects on the same BindPlane OP server, nothing needs to be changed here. Modify the secret_key to match the secret key of the new BindPlane OP server or project. You can find your secret key on the agent install page by selecting "Install Agent". Once the manager configuration is updated, save the file and close your editor. Restart Agent Restart the agent after modifying the manager configuration. On Linux: On Windows, use the "Services" app or the following command: Validate Once the agent(s) are restarted, log into the BindPlane OP server's web interface. The agents will now be connected to the new server or project.]]>https://observiq.com/docs/advanced-setup/migration/migrate-agentshttps://observiq.com/docs/advanced-setup/migration/migrate-agentsThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Upgrade BindPlane OP Server]]><![CDATA[Upgrade Chart Version You can update your version of the Helm chart with the helm repo update command. Upgrade BindPlane Version BindPlane OP Server can be upgraded by updating the image version tag in your values.yaml file. Once the new image tag is in place, run the upgrade command.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/upgradehttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/upgradeTue, 16 Jan 2024 23:50:36 GMT<![CDATA[Uninstall BindPlane OP Server]]><![CDATA[Helm When BindPlane OP is managed by Helm, it can be uninstalled with the helm uninstall command.
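A minimal sketch, assuming the release name "bindplane" and the bindplane namespace used elsewhere in these docs:
helm uninstall bindplane -n bindplane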
This example assumes that BindPlane was deployed with Helm using the application name "bindplane". Manually BindPlane can be cleaned up manually by deleting the namespace it was deployed to.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/uninstallhttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/uninstallTue, 16 Jan 2024 23:23:50 GMT<![CDATA[Single Instance]]><![CDATA[Architecture When BindPlane is deployed as a StatefulSet, it has the following architecture. - BindPlane as a Single pod. - Deployed as a StatefulSet. - BBolt storage backend using a persistent volume claim. - Prometheus time series database - Deployed as a StatefulSet. - Persistent storage using a persistent volume claim. - Prometheus is deployed and managed by the chart using observIQ's Prometheus image. - Single transform agent pod, for live preview. BindPlane uses Prometheus as a storage backend for agent throughput metrics. It is unnecessary to manage Prometheus outside of the Helm chart. Prerequisites System Requirements - Storage class which supports persistent volume claims (When running as a StatefulSet). - See the instance sizing guidelines for recommended disk capacity. Installation Add the BindPlane OP Helm chart to your workstation. Create a values.yaml file, which will be used to configure your Helm deployment. Add the initial options. Make sure to set the following: - config.username: Your basic auth username for the Administrator project. - config.password: Your basic auth password for the Administrator project. - config.sessions_secret: A random uuid. You can use uuidgen to create one. Follow the the instance sizing guidelines when modifying the resource requests and limits. Deploy BindPlane to the bindplane namespace using Helm and your previously created values.yaml configuration file. After a few moments, check the namespace by running kubectl -n bindplane get pod. You will see three pods. - BindPlane - Prometheus - Live Preview transform agent. Frequently Asked Questions Q: Why is the StatefulSet limited to one pod? A: BindPlane is limited to a single instance when configured with local storage. See the BindPlane Deployment Installation documentation for details on how to deploy BindPlane OP using a scalable architecture.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/statefulsethttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/statefulsetThu, 13 Jun 2024 17:42:05 GMT<![CDATA[BindPlane OP Server]]><![CDATA[BindPlane OP is fully supported on Kubernetes. Prerequisites Helm BindPlane OP is deployed with Helm. Make sure you have Helm installed on your workstation. The BindPlane OP Helm Chart source can be found here. Supported Distributions The following Kubernetes distributions are officially supported: - Google Kubernetes Engine (GKE) - Amazon Elastic Kubernetes Service (EKS) - Azure Kubernetes Service (AKS) - OpenShift 4.x Self-managed Kubernetes clusters are supported. See the System Requirements section for details. Installation BindPlane OP supports two architectures for Kubernetes. Single instance (StatefulSet) and high availability (Deployment). The StatefulSet supports running BindPlane as a single pod without dependencies. It does not require a dedicated database or event bus. The StatefulSet is suitable for simple environments where BindPlane can be scaled vertically and 100% uptime is not a requirement. The Deployment supports running BindPlane with multiple pods. 
This provides resiliency and load balancing among the BindPlane instances. When using the Deployment architecture, a dedicated database and event bus are required. The Deployment is suitable for environments where horizontal scaling and uptime are requirements. - Installation Guide - High Availability Installation Guide Usage Once BindPlane OP Server is deployed, it can be reached at the remote URL endpoint. By default, the remote URL is set to the service endpoint. This remote URL is suitable for deploying agents within the cluster. If you would like to reach BindPlane from outside of the cluster, see the Next Steps section. When not exposing BindPlane with ingress, you can use port forwarding to connect to the web interface. Navigate to http://localhost:3001 on your workstation. Next Steps Agent Installation With BindPlane deployed, you can move on to installing agents in your cluster. See the Kubernetes Agent Installation documentation for details. Ingress If you would like to reach BindPlane from outside of the cluster (web interface, agent connections, etc.), follow the BindPlane Server Ingress documentation.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/installhttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/installTue, 16 Jan 2024 23:57:15 GMT<![CDATA[High Availability]]><![CDATA[This feature is only available in BindPlane OP Enterprise. Learn more here. Architecture When BindPlane is deployed as a Deployment, it has the following architecture. - BindPlane with multiple replicas - Deployed as a Deployment. - Prometheus time series database - Deployed as a StatefulSet. - Prometheus is deployed and managed by the chart using observIQ's Prometheus image. - One or more Transform agent pods, for live preview - PostgreSQL storage backend BindPlane uses Prometheus as a storage backend for agent throughput metrics. It is unnecessary to manage Prometheus outside of the Helm chart. PostgreSQL is not deployed by the BindPlane Helm chart and must be deployed as a prerequisite. Prerequisites Licensing An Enterprise license is required when operating BindPlane in High Availability. Learn more here. PostgreSQL PostgreSQL must be deployed and reachable from the cluster. Postgres requirements - Database named bindplane - User with full permission to the bindplane database - Reachable from BindPlane's Kubernetes cluster Event Bus BindPlane requires an external event bus when operating with more than one pod. See the Event Bus documentation for details. Installation Add the BindPlane OP Helm chart to your workstation. Create a values.yaml file, which will be used to configure your Helm deployment. Add the initial options. Make sure to set the following: - license: Your Enterprise license. - config.username: Your basic auth username for the Administrator project. - config.password: Your basic auth password for the Administrator project. - config.sessions_secret: A random UUID. You can use uuidgen to create one. - config.eventbus.type: The event bus type to use. This example will use Google Pub/Sub. See the Helm Event Bus Configuration doc for available options. - backend.postgres.host: The hostname or IP address of the PostgreSQL server. - backend.postgres.port: The PostgreSQL server's port. - backend.postgres.username: The username the BindPlane server should use to connect to Postgres. - backend.postgres.password: The password the BindPlane server should use to connect to Postgres.
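A hedged values.yaml sketch covering the options above. The key nesting follows the dotted names, the host matches the example PostgreSQL host used later in these docs, and the event bus type string is an assumption; check the Helm Event Bus Configuration doc for the exact value:
license: "<your enterprise license>"
config:
  username: admin                        # placeholder
  password: "<strong password>"
  sessions_secret: "<uuid from uuidgen>"
  eventbus:
    type: googlePubSub                   # assumption; see the Helm chart docs
backend:
  postgres:
    host: postgres.mycorp.net
    port: 5432
    username: bindplane
    password: "<postgres password>"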
Deploy BindPlane to the bindplane namespace using Helm and your previously created values.yaml configuration file. After a few moments, check the namespace by running kubectl -n bindplane get pod. You will see three pods. - BindPlane - Prometheus - Live Preview transform agent.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/deploymenthttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/deploymentThu, 13 Jun 2024 17:42:05 GMT<![CDATA[PostgreSQL]]><![CDATA[When operating BindPlane in a distributed architecture, a shared PostgreSQL instance is required. Basic Example This example will configure BindPlane to connect to the host postgres.mycorp.net on port 5432.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/postgreshttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/postgresTue, 16 Jan 2024 23:50:36 GMT<![CDATA[Kubernetes Ingress]]><![CDATA[Make sure BindPlane Server is configured with a secure password before exposing it. Basic Example BindPlane OP can be exposed by Kubernetes ingress. This example will expose BindPlane on the host bindplane.local using the nginx ingress class. It is recommended that TLS be configured when exposing BindPlane with ingress. TLS Example This example will expose BindPlane on the host bindplane.mycorp.net using the nginx ingress class. It will also set the Cert Manager Annotation cert-manager.io/cluster-issuer, which will trigger Cert Manager to retrieve a TLS certificate and store it in the secret named bindplane-nginx-tls.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/ingresshttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/ingressTue, 16 Jan 2024 23:50:36 GMT<![CDATA[Event Bus]]><![CDATA[When operating BindPlane in a distributed architecture, an external event bus must be configured. NATS The NATS event bus is BindPlane's embedded event bus, suitable for high availability without the need for external infrastructure. NATS is configured by setting eventbus.type to nats. Resource Tuning When using NATS, three dedicated StatefulSet pods are deployed. You can set their resource allocation by setting nats.resources. Google Pub/Sub Automatic Authentication Google Pub/Sub can be configured without credentials when using Google Application Default Credentials. When running on a Google Kubernetes Engine cluster, BindPlane can authenticate to Pub/Sub without the use of a service account as long as the GKE node pool has the Required Scopes enabled. Service Account Credentials If operating outside of Google Cloud, a service account JSON credential can be used. This example creates a secret named bindplane-pubsub which contains the service account JSON key.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/eventbushttps://observiq.com/docs/advanced-setup/kubernetes-installation/server/components/eventbusTue, 16 Jan 2024 23:50:36 GMT<![CDATA[Upgrade Kubernetes Agents]]><![CDATA[Upgrade BindPlane OP does not support upgrading container-based agents when using the web interface. This is because the container is immutable and operates with a read-only filesystem. Volumes are used for any data that needs to be written at runtime. You can upgrade the agent version by re-deploying the agents using the process outlined in the Install section. 
Alternatively, you can modify the image tag and re-deploy with kubectl apply.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/agent/upgradehttps://observiq.com/docs/advanced-setup/kubernetes-installation/agent/upgradeWed, 17 Jan 2024 00:05:32 GMT<![CDATA[Uninstall Kubernetes Agents]]><![CDATA[Uninstall Agents can be uninstalled by using the kubectl delete command against the previously downloaded YAML manifest. If the yaml file is not available, you can clean up all resources with the following commands:]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/agent/uninstallhttps://observiq.com/docs/advanced-setup/kubernetes-installation/agent/uninstallWed, 17 Jan 2024 00:05:32 GMT<![CDATA[Install Kubernetes Agents]]><![CDATA[Install Kubernetes Agent installation has a different flow than normal agents. Steps 1. Create a configuration for a Kubernetes platform 1. Kubernetes Node: Deploys an agent to each node in the cluster using a DaemonSet. 2. Kubernetes Cluster: Deploys an agent as a single pod Deployment. 3. Kubernetes Gateway: Deploys a scalable set of agents using a StatefulSet. 4. OpenShift Daemonset: Deploys an agent to each node in the cluster. 5. OpenShift Deployment: Deploys an agent as a single pod deployment. 2. Navigate to the agent's page and select "Install Agents" 3. Choose a Kubernetes Platform 4. Select your configuration from step 1 5. Copy the YAML manifest to a file 6. Deploy the YAML manifest with kubectl apply -f The agents will be deployed to the cluster in the bindplane-agent namespace and connect to BindPlane OP automatically. Example Installation Create a configuration using a Kubernetes-compatible source. This example uses the Kubernetes Event Logs source. Once the configuration has been created, navigate to the Agents page and select "Install Agents". Select your Kubernetes platform and configuration. You will be prompted to copy the YAML manifest. Copy it and save it to a file. Ensure that the OPAMP_ENDPOINTenvironment variable has the correct value for your server. If you did not configure ingress, this value should match your deployment clusterIP service name and namespace. In this example, the service name is "my-bindplane" and the namespace is "default". If you configured ingress, your OPAMP_ENDPOINT should contain the ingress hostname and port. The port should be 80 for non-TLS ingress, and 443 if ingress TLS is enabled. Similarly, the protocol should be ws (websocket) when TLS is not configured, and wss (secure web socket) when TLS is enabled. Deploy the YAML manifest with kubectl apply -f . Once deployed, your agent(s) will appear on the Agents page, and they will be bound to your configuration. TLS Kubernetes agents can be configured to connect to BindPlane using TLS. If the BindPlane TLS certificate is publicly signed, no action is required. If the certificate is signed by an internal certificate authority, the agent can be configured with a custom certificate authority for verifying the BindPlane certificate. Your certificate authority file (ca.crt) can be added to a secret in the bindplane-agent namespace using the following command. Once the secret is created, you can modify your agent YAML manifest. Specifically, you need to append to the volumes, volumeMounts, and env sections of the agent container. Using this example, the CA certificate ca.crt will be mounted to /opt/tls/ca.crt. The OpAMP client will be configured to use this certificate authority when validating CA certificates. 
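In that case, a hedged sketch of the endpoint value on the agent container, assuming the default port 3001 and the /v1/opamp path used by the agent's manager configuration:
env:
  - name: OPAMP_ENDPOINT
    value: ws://my-bindplane.default.svc.cluster.local:3001/v1/opamp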
You can learn more about the various OpAMP environment variables here. Mutual TLS When using mutual TLS, the same process is used. In this case, a client keypair is provided. This example uses client.crt and client.key.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/agent/installhttps://observiq.com/docs/advanced-setup/kubernetes-installation/agent/installWed, 07 Aug 2024 19:04:49 GMT<![CDATA[BindPlane Agent Container Images]]><![CDATA[The BindPlane Agent can be pulled from several container registries and offers multiple formats suitable for a variety of requirements. Registries The BindPlane Agent container images can be pulled from the following registries: Registry Image Architecture Support GitHub Container Registry (default) ghcr.io/observiq/bindplane-agent amd64, arm64 Docker Hub observiq/bindplane-agent amd64, arm64 Google Artifact Registry us-central1-docker.pkg.dev/observiq-containers/agent/bindplane-agent amd64, arm64 Image Tags There are two image tags available for each version: Standard Image - Format: {{major.minor.patch}} - Example: 1.55.0 - Base: Ubuntu The standard image is used by default and supports all agent receivers as it contains the required Systemd libraries (Journald receiver) and Java runtime (JMX receiver). Minimal Image - Format: {{major.minor.patch}}-minimal - Example: 1.55.0-minimal - Base: Scratch The minimal image is a scratch-based container, suitable for environments requiring low surface area images. It does not contain the required dependencies to support journald or JMX receivers. It does support all Kubernetes sources, except the container logs source's journald input.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/agent/imageshttps://observiq.com/docs/advanced-setup/kubernetes-installation/agent/imagesThu, 13 Jun 2024 17:42:05 GMT<![CDATA[BindPlane Agent architecture]]><![CDATA[BindPlane manages three different Kubernetes Agent types. Node, Cluster, and Gateway. Each agent type serves a unique purpose within the cluster. They can be used together or independently. Node The BindPlane Node agents are deployed as a DaemonSet. The Node agent scales up and down with the size of the cluster. Each agent is responsible for collecting logs and metrics from the node it is running on. Additionally, the Node agent supports receiving OTLP metrics, traces, and logs from other services running in the cluster, via clusterIP service. Supported Sources - Kubernetes Container Logs - Kubernetes Kubelet Metrics - Kubernetes Prometheus - OpenTelemetry (OTLP) - BindPlane Gateway Persistence The Node agent makes use of Host Path volumes to persist the agent configuration file and exporter persistent queue directories. The Host Path /var/lib/observiq/otelcol/containeris mounted at /etc/otel/storage within the agent container. Persistence allows the agents to operate during a BindPlane or backend outage. Cluster The BindPlane Cluster agent is deployed as a Deployment. The Cluster agent operates a single pod and is responsible for collecting cluster-level metrics and events (logs). The Cluster agent is not intended to scale above one pod, therefore it is limited to sources that should run on a single replica to avoid duplicate telemetry. Supported Sources - Kubernetes Events - Kubernetes Cluster Metrics Gateway The BindPlane Gateway agents are deployed as a Deployment or StatefulSet. 
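As a sketch of the secret described above (the secret name bindplane-ca is hypothetical, and the exact volume, volumeMount, and environment variable entries are covered by the OpAMP environment variable reference linked next):
kubectl -n bindplane-agent create secret generic bindplane-ca --from-file=ca.crt=./ca.crt
# then mount the secret into the agent container at /opt/tls so the CA is available at /opt/tls/ca.crt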
The Gateway agent is intended to operate as an aggregation layer, allowing other agents or services to forward metrics, traces, and logs to the gateway for additional processing before being shipped to a backend. The Gateway agent is optional for monitoring a Kubernetes cluster. It should be used during the following situations: - Offload processing from the Node and Cluster agent - Limit backend access to a dedicated set of agents. - You already have a solution for logs and metrics, and need a solution to receive telemetry from services running in the cluster. Supported Sources - OpenTelemetry (OTLP) - Splunk TCP - Splunk HEC Persistence The Gateway agent uses a Volume Claim Template to generate and assign volumes to each Gateway agent pod. The volume contains the agent's configuration file and exporter persistent queue directories. If an agent pod is killed or restarted for any reason, the volume assigned to that pod will be attached to the new instance of the pod, allowing the new pod to continue where the previous instance left off. Persistence allows the agents to operate during a BindPlane or backend outage. Limitations Change Configurations Kubernetes agents are bound to a single configuration. Changes to that configuration are supported, however, changing to a new configuration is not supported. To change to a new configuration, you can re-deploy the agents by using the Install Agents page and selecting your new configuration. Non Kubernetes Agents Agents running outside of Kubernetes can be installed as long as ingress has been configured. See the getting started guide for agent installation instructions.]]>https://observiq.com/docs/advanced-setup/kubernetes-installation/agent/architecturehttps://observiq.com/docs/advanced-setup/kubernetes-installation/agent/architectureThu, 13 Jun 2024 17:42:05 GMT<![CDATA[Upgrade, Downgrade or Uninstall BindPlane OP Server]]><![CDATA[We recommend backing up your environment prior to an upgrade. See our Backup and Disaster Recovery Guide Upgrading BindPlane OP Server Upgrading the BindPlane Server is as simple as re-running the install script without the init flag. A convenient piped one liner is below. Additionally, if you want to upgrade to a specific version, you can do it using the below command. Replace 1.72.1 with the specific version you want. Downgrading BindPlane OP Server Downgrading is generally not recommended. If you need to downgrade, please contact the observIQ suppport team. Uninstall BindPlane OP Server 1. Stop the process: 2. Remove the package - Debian and Ubuntu: - CentOS and RHEL 8 and newer (use yum for anything older) 3. Optionally remove leftover data Docker & Kubernetes To upgrade on container platforms, simply change the version numbers in your docker-compose.yaml or values.yaml, as appropriate, and reapply.]]>https://observiq.com/docs/advanced-setup/installation/uninstall-bindplane-op-serverhttps://observiq.com/docs/advanced-setup/installation/uninstall-bindplane-op-serverMon, 16 Sep 2024 19:44:54 GMT<![CDATA[Prerequisites]]><![CDATA[BindPlane Instance Sizing BindPlane OP's resource requirements will differ based on the number of managed agents. CPU, Memory, Disk throughput / IOPS, and network consumption will increase as the number of managed agents increases. Follow this table for CPU, memory, and storage capacity sizing. 
Agent Count BindPlane Nodes Fault Tolerance CPU Cores Memory Database 1-100 1 N/A 2 4GB bbolt 100-25,000 1 N/A 4 16GB postgres 1-60,000 3 1 2 8GB postgres 60,000-125,000 5 1 2 8GB postgres 125,000-250,000 10 2 2 8GB postgres Postgres is required for production deployments of BindPlane OP. Bbolt should only be used for proof of concept deployments. When exceeding 25,000 agents, it is recommended to operate BindPlane in High Availability. High Availability and Fault Tolerance When operating BindPlane in High Availability, you need to consider how many agents you expect a single BindPlane instance to handle. Take the total number of BindPlane instances, and subtract the maximum number of nodes you expect to become unavailable due to maintenance. It is important to make sure each node is not responsible for more than 30,000 agents during a node outage. See Load Balancer Connection Constraints for details. Load Balancer Connection Constraints Most load balancers will be limited to roughly 65,535 connections per backend instance. When sizing your BindPlane cluster, you must consider how many agents each node will be responsible for during maximum fault tolerance. A good rule of thumb is to not exceed 30,000 agents. This is because each agent will open two connections to BindPlane: one for OpAMP remote management, and one for publishing throughput metrics. If you have 100,000 agents, a cluster size of three would be insufficient, as each node would be responsible for roughly 33,000 agents. 33,000 agents × 2 connections results in 66,000 TCP connections to each BindPlane instance. This situation gets worse if you bring one node down for maintenance, as each BindPlane instance would become responsible for 50,000 agents, or 100,000 TCP connections. Postgres Sizing When using the PostgreSQL storage backend, performance is generally limited by the number of CPU cores and the memory available. It is recommended that the storage backing Postgres be low latency (SSD) and capable of high throughput. Agent Count CPU Cores Memory 1-60,000 4 16GB 60,000-125,000 8 32GB 125,000-250,000 16 64GB Network Requirements Bandwidth BindPlane OP maintains network connections for the following: - Agent Management - Agent Throughput Measurements - Command line and Web user interfaces Maximum network throughput scales linearly with the number of connected agents. As a rule of thumb, expect to consume 265 B/s for every connected agent, or 2.12 Mbps per 1,000 agents. Firewall BindPlane OP can run on a local area network and behind a firewall. BindPlane OP does not need to be reachable from the internet; however, if agents or users outside of your WAN require access, a VPN or inbound firewall rules must be configured to allow access. Ports BindPlane OP listens on port 3001 by default. This port is configurable. See the configuration documentation. The BindPlane port is used for: - Agent command and control using the Open Agent Management Protocol (OpAMP) (WebSocket) - Agent throughput measurement requests (HTTP POST request) - Browser and CLI users (HTTP and WebSocket) Browsers and API Clients The firewall must allow HTTP traffic to reach BindPlane OP on the configured port. Agents Agents must be able to initiate connections to BindPlane OP for OpAMP (WebSocket) and throughput measurements (HTTP). BindPlane OP will never initiate connections to the agent. The firewall can be configured to prevent BindPlane OP from reaching the agent networks; however, agent networks must be able to reach BindPlane OP on the configured port.
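For example, inbound access on the default port can be opened on the BindPlane host itself (a sketch; adjust the port if you changed network.port):
# firewalld (RHEL-style hosts)
sudo firewall-cmd --permanent --add-port=3001/tcp && sudo firewall-cmd --reload
# ufw (Debian/Ubuntu hosts)
sudo ufw allow 3001/tcp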
Agent Updates BindPlane OP will reach out to github.com/observIQ/bindplane-agent/releases to detect new agent releases. This feature is optional. You can disable GitHub polling by setting agentVersions.syncInterval to 0 in your BindPlane configuration.]]>https://observiq.com/docs/advanced-setup/installation/prerequisiteshttps://observiq.com/docs/advanced-setup/installation/prerequisitesTue, 06 Aug 2024 14:58:44 GMT<![CDATA[Install BindPlane OP Server]]><![CDATA[BindPlane OP Server runs on Linux and supports the following distributions: - Red Hat, CentOS, Oracle Linux 7, 8, 9 - Debian 11 and 12 - Ubuntu LTS 20.04, 22.04 - SUSE Linux 12 and 15 - Alma and Rocky Linux Prerequisites You should verify that your system meets the recommended Resource Requirements. While BindPlane OP Server will generally run on any modern distribution of Linux, systemd is the only supported init system. Install BindPlane OP Server Debian and RHEL-style packages are available for BindPlane Server. An installation script is available to simplify installation. Additionally, you can download packages directly; see our Downloads page. Once installed and initialized, you can check the service. For upgrade, downgrade, and uninstall instructions, please see Upgrade, Downgrade or Uninstall BindPlane OP Server. Docker BindPlane OP can run as a container using Docker. See the Download page for instructions. Container Image Repositories BindPlane OP container images can be found in the following locations: - GitHub Packages: ghcr.io/observiq/bindplane-ee - Google Artifact Registry: us-central1-docker.pkg.dev/observiq-containers/bindplane/bindplane-ee - Docker Hub: observiq/bindplane-ee Container images are tagged with the release version. For example, release "v1.35.0" will have the tag "observiq/bindplane-ee:1.35.0".]]>https://observiq.com/docs/advanced-setup/installation/install-bindplane-op-serverhttps://observiq.com/docs/advanced-setup/installation/install-bindplane-op-serverMon, 16 Sep 2024 19:42:09 GMT<![CDATA[Install and Uninstall Agents]]><![CDATA[BindPlane OP works in conjunction with the BindPlane Agent, which runs on Linux or Windows and supports the following distributions: - AlmaLinux 8, 9 - CentOS 7, 8 Stream, 9 Stream - Debian 10, 11, 12 - Red Hat Enterprise Linux 7, 8, 9 - Rocky Linux 7, 8, 9 - Scientific Linux 7 - SuSE Enterprise Linux 12 and 15 - Ubuntu LTS 18.04, 20.04, 22.04 - Windows Server 2012 R2, 2016, 2019, 2022 Additionally, the following are supported only in BindPlane Enterprise: - CentOS 6 - Red Hat Enterprise Linux 6 - Scientific Linux 6 - SuSE Enterprise Linux 11 SP4 (earlier SPs are not supported) - Windows 7, 10, 11 - Windows Server 2008 Other Operating Systems Other operating systems, especially modern Linux variants using systemd, will likely work. These are still considered unsupported, as they have not been tested and vetted by observIQ. Installation Script Install Agents from BindPlane OP To install the collector, you should get the installation script from BindPlane OP Server as outlined in Step 3 of our Quickstart Guide. Doing so ensures the agent instantly connects to BindPlane OP and can be managed without additional configuration. Agent or Collector: What's the difference? We often use the terms Agent and Collector interchangeably.
When you see either throughout the product or the documentation, we're always referring to the BindPlane Agent Ansible Ansible can be used as an alternative to the agent install script. The BindPlane Agent Ansible Role can be integrated into your Ansible playbooks to manage the installation of BindPlane OP-managed agents. Ansible is useful for installing agents at scale, where the installation script would be cumbersome to manage on hundreds or thousands of systems. Usage The BindPlane Agent role must be cloned to your workstation and added to your playbook before it can be deployed. Clone Repo Clone the Ansible Role Git repository to your roles directory. The following command will clone the repository to the directory roles/bindplane_agent. Update Playbook Update your Ansible Playbook to include the bindplane_agent role. The role requires the following parameters: - version: BindPlane Agent version - endpoint: The remote URL of the BindPlane OP server - secret_key: The secret key of the BindPlane OP server. You can find the secret key on the install agent page, or with the CLI command bindplane secret get. Deploy Once the role is configured in your playbook, you can deploy the agents to your host group. For example: This command assumes you have a playbook file playbook.yml and a site file site.yml in your working directory, along with the roles/bindplane_agent directory. Additional Documentation A comprehensive list of configuration options can be found in the BindPlane Agent Role Github repository. Uninstall The BindPlane Agent On Linux, macOS, or Windows, run the following command to uninstall the collector: Optionally, on Windows, you can uninstall the collector via the control panel. Simply follow the steps below. 1. Navigate to the control panel, then to the "Uninstall a program" dialog. 2. Locate the observIQ OpenTelemetry Collector entry, and select uninstall. 3. Follow the wizard to complete the removal of the collector. Data that persists after uninstalling The agents configuration files and log files are not removed when an agent is uninstalled. If you will not be re-installing the agent at a later time you may want to remove this folder. It will contain log files, the BindPlane OP configuration yaml, and any queue storage. Those files are located at "/opt/observiq-otel-collector" on Linux and "C:\Program Files\observIQ OpenTelemetry Collector" on Windows.]]>https://observiq.com/docs/advanced-setup/installation/install-agenthttps://observiq.com/docs/advanced-setup/installation/install-agentTue, 15 Oct 2024 19:34:04 GMT<![CDATA[Package Downloads]]><![CDATA[The recommended way to install BindPlane OP is using the install commands found on the Installation page. Alternatively, these are direct downloads to server and client packages. Direct Download Links BindPlane OP release are uploaded [here](). Server Linux Packages DEB - AMD64 - ARM64 RPM - AMD64 - ARM64 Client Binaries macOS - Intel - Apple Silicon Linux - AMD64 - ARM64 Windows - AMD64]]>https://observiq.com/docs/advanced-setup/installation/downloadshttps://observiq.com/docs/advanced-setup/installation/downloadsWed, 29 Nov 2023 17:29:30 GMT<![CDATA[TLS]]><![CDATA[BindPlane OP supports TLS. This guide will focus on using Step CLI to create certificates, however, you can acquire certificates using your preferred method. Certificates must be x509 PEM encoded. TLS with Step CLI Step CLI can be used to create your own certificate authority and server certificates. Step provides an easy-to-use interface. 
Alternatively, you could use OpenSSL. Prerequisites This guide assumes you will be deploying BindPlane and its agents to a network that has a working Domain Name System (DNS). It is expected that agent systems will be able to connect to BindPlane using its fully qualified domain name (FQDN). If you do not have working DNS, it is possible to use /etc/hosts as a workaround. See this guide for details. Environment For this demonstration, we have four compute instances running on Google Cloud. The objective is to configure BindPlane OP to use a server TLS certificate, and have all clients and collectors connect using TLS. The following instances are deployed: - bindplane: Instance that hosts the BindPlane OP server. - collector-debian: Debian-based instance that will host a BindPlane OP agent. - collector-centos: CentOS-based instance that will host a BindPlane OP agent. - collector-windows: Windows Server instance that will host a BindPlane OP agent. Each instance belongs to a VPC in the project bindplane, which means each instance has a DNS name with the following format: {{instance name}}.c.bindplane.internal. Each instance has the following fully qualified domain name (FQDN): - bindplane: bindplane.c.bindplane.internal - collector-debian: collector-debian.c.bindplane.internal - collector-centos: collector-centos.c.bindplane.internal - collector-windows: collector-windows.c.bindplane.internal All instances within the network can resolve each other using their FQDN. DNS plays a critical role when using TLS, as it allows certificates to be verified against their hostname. If the hostname does not match the certificate, the connection will be rejected unless steps are taken to disable TLS verification. Deploy and Configure BindPlane Follow the BindPlane OP Server Install Guide to install BindPlane OP. Once installed, modify /etc/bindplane/config.yaml to look like this: Note that auth.secretKey and auth.sessionSecret should be random UUID values. You can generate your own with the uuidgen command. Make sure network.remoteURL uses the correct FQDN. You can check your server's FQDN using the hostname command: Once BindPlane is configured, restart the server. Verify that BindPlane OP is working by connecting to the public IP address on port 3001. In this example, that would be http://bindplane.c.bindplane.internal:3001. Create Certificates with Step On the instance running your BindPlane OP server, install the step command line. Instructions for installing step can be found here. Create Certificate Authority The following commands will write a certificate and private key to tls-ca/ca.crt and tls-ca/ca.key in your working directory. Create BindPlane Server Certificate The following commands will generate a server certificate signed by the CA previously created. The certificate and private key will be written to /etc/bindplane/tls/bindplane.crt and /etc/bindplane/tls/bindplane.key. Configure BindPlane to use TLS With the server certificate created, make the following changes to /etc/bindplane/config.yaml: 1. Modify network.remoteURL to use https 2. Add tlsCert and tlsKey Your configuration will look similar to this: With the configuration updated, restart BindPlane OP: To verify that BindPlane OP is using TLS, navigate to your server's IP address using https. For example, https://bindplane.c.bindplane.internal:3001. You should expect your browser to present a warning screen. This is because your workstation does not trust the certificate; you have not yet imported the certificate authority into your trust store. At this time, it is safe to skip the warning and continue. Note that this warning should never be ignored in production, or in areas where it is not expected.
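For reference, the TLS-related portion of /etc/bindplane/config.yaml ends up looking roughly like the sketch below. The hostname comes from this guide's example environment; treat the exact field layout as an approximation for your BindPlane version, and keep the auth values you generated earlier.

```yaml
# Approximate sketch of the TLS-related settings -- field layout may vary by version.
network:
  remoteURL: https://bindplane.c.bindplane.internal:3001
  tlsCert: /etc/bindplane/tls/bindplane.crt
  tlsKey: /etc/bindplane/tls/bindplane.key
auth:
  secretKey: <random-uuid>      # generated with uuidgen
  sessionSecret: <random-uuid>  # generated with uuidgen
```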
Import Certificate Authority on Collector Systems In all instances that will be running a BindPlane OP agent, we need to import the certificate authority. This will allow the collector software to trust the BindPlane server certificate. 1. Copy tls-ca/ca.crt to all systems that will be running a BindPlane agent. 2. Import the ca.crt into the trust store on all agent systems. 3. Install agents. For instructions on how to import a certificate authority, see this blog. Once all agent systems have the certificate authority imported, you can install agents using the command generated in the BindPlane OP web interface. Example Linux install command: Note that the command uses the value from network.remoteURL in /etc/bindplane/config.yaml as the endpoint that the agent should connect to. The wss protocol indicates that TLS should be used. Once installed, the manager configuration at /opt/observiq-otel-collector/manager.yaml will look something like this: Finished! Agents appear in the web interface, indicating that TLS is working.]]>https://observiq.com/docs/advanced-setup/configuration/tlshttps://observiq.com/docs/advanced-setup/configuration/tlsMon, 24 Jun 2024 14:05:16 GMT<![CDATA[Retry and Queueing]]><![CDATA[Sending Queue A sending queue is a buffer that stores telemetry data temporarily before sending it to the destination. The sending queue ensures that telemetry data is not lost due to network connectivity issues or server outages and helps to minimize the number of network connections required for efficient transmission.

| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| sending_queue_enabled | bool | true | Enable to buffer telemetry data temporarily before sending. |
| sending_queue_num_consumers | int | 10 | The number of consumers that dequeue batches. |
| sending_queue_queue_size | int | 5000 | Maximum number of batches kept in memory before dropping. |

Persistent Queue In addition to the sending queue, the persistent queue may be enabled. When enabled, telemetry data is persisted to the disk, which provides data resiliency in cases where the collector restarts. The sending queue must be enabled to enable persistent queuing.

| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| persistent_queue_enabled | bool | true | Enable to buffer telemetry data to disk instead of in memory. |
| persistent_queue_directory | string | $OIQ_OTEL_COLLECTOR_HOME/storage | The path to a directory where telemetry data will be buffered. |

Retry on Failure Retry on failure settings are used to determine whether the exporter should attempt to resend telemetry data that has failed to be transmitted to the destination endpoint. When this setting is enabled, the exporter will automatically retry failed transmissions at a configurable interval until the data is successfully transmitted. This helps to ensure that telemetry data is not lost due to temporary network connectivity issues or server outages.

| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| retry_on_failure_enabled | bool | true | Attempt to resend telemetry data that has failed to be transmitted to the destination. |
| retry_on_failure_initial_interval | int | 5 | Time (in seconds) to wait after the first failure before retrying. |
| retry_on_failure_max_interval | int | 30 | The upper bound (in seconds) on backoff. |
| retry_on_failure_max_elapsed_time | int | 300 | The maximum amount of time (in seconds) spent trying to send a batch, used to avoid a never-ending retry loop. When set to 0, retries are never stopped. |
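These options surface the standard OpenTelemetry Collector exporter queue and retry settings. For readers who also maintain raw collector configurations, a hedged sketch of roughly equivalent settings on an OTLP exporter (the endpoint is a placeholder, the values mirror the defaults above, and the persistent queue would additionally require a storage extension) looks like this:

```yaml
# Rough OpenTelemetry Collector equivalent of the defaults above (sketch only).
exporters:
  otlp:
    endpoint: otlp.example.com:4317   # placeholder destination
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
```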
]]>https://observiq.com/docs/advanced-setup/configuration/retry-and-queue-settingshttps://observiq.com/docs/advanced-setup/configuration/retry-and-queue-settingsThu, 18 Jan 2024 20:24:51 GMT<![CDATA[Proxy Configuration]]><![CDATA[Forward Proxy BindPlane OP and BindPlane Agent support the use of an HTTP forward proxy for egress connections. The proxy is configured using the HTTP_PROXY and HTTPS_PROXY environment variables. Configure BindPlane OP You can configure the proxy environment variables by using a Systemd override. Run the following command: Modify the unit file's override to look like this: Note that this example is using http for both HTTP_PROXY and HTTPS_PROXY. This is because the proxy server is not configured to use TLS. Connections to https sites (such as github.com and the Google Cloud API) are still encrypted with TLS. See TLS for more details. After saving the file, you can reload systemd and restart BindPlane. BindPlane will now proxy outgoing requests using the configured proxy. Configure BindPlane Agent The process for BindPlane Agent is identical to BindPlane OP. Create a Systemd override. Configure the HTTP_PROXY and HTTPS_PROXY environment variables. Reload systemd and restart the service. Authentication Username and password authentication is supported using the following form: TLS TLS is always used between the proxy and the destination when connecting to a TLS-secured endpoint, such as https://logging.googleapis.com or https://otlp-gateway-prod-us-central-0.grafana.net/otlp. This is often confusing because TLS is not required for the connection between BindPlane / BindPlane Agent and the proxy. If your proxy has a TLS listener, you can use TLS for the connection between BindPlane / BindPlane Agent and the proxy like this: This will proxy http and https requests using TLS between your proxy client and server. Note that your BindPlane OP server and your BindPlane Agents must trust the certificate that is in use by the proxy. You can read more about adding CA certificates to your servers by reviewing the following: - Debian-based systems - RHEL-based systems - Windows]]>https://observiq.com/docs/advanced-setup/configuration/proxyhttps://observiq.com/docs/advanced-setup/configuration/proxyThu, 17 Oct 2024 09:35:11 GMT<![CDATA[Offline Agent Package Installation and Upgrades]]><![CDATA[This feature is only available in BindPlane OP Enterprise or BindPlane for Google. Learn more here. Enable Offline Agent Package Hosting and Upgrades This feature allows BindPlane OP to host the agent packages. This is used in environments where either BindPlane or the agent system does not have external network access to GitHub. BPOP offline agent configuration To use offline agent package hosting and upgrades, the offline option must first be enabled. The folder where agent upgrade artifacts will be stored when uploaded may also be configured. By default, agent upgrade artifacts are stored in /var/lib/bindplane/agent-upgrades. Here is an example config enabling offline mode, which has 'offline: true' added right after the 'apiVersion' section.
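As an illustration, that snippet would look something like the sketch below. Only the offline flag itself comes from this page; the apiVersion value shown is an assumption about your existing config file, and the artifact folder is left at its default.

```yaml
# /etc/bindplane/config.yaml (sketch) -- enable offline agent package hosting.
apiVersion: bindplane.observiq.com/v1   # assumed to already be present in your config
offline: true
# Uploaded agent upgrade artifacts are stored in /var/lib/bindplane/agent-upgrades by default.
```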
Upload an Agent Upgrade Artifact Package Agent artifact packages can be uploaded to the BindPlane OP server to allow agents to upgrade to new versions, as well as to allow agents to be installed through BindPlane OP while in offline mode. These packages can be found and downloaded from the releases page of the observIQ Distro for OpenTelemetry GitHub repository. You can download the artifact package to the BindPlane OP server over SSH, as in the example below: To upload an agent upgrade artifact package, use the bindplane upload agent-upgrade command. This requires that you first set up the CLI. If you have not done so previously, you can set up a profile like the example below: The artifact package should be downloaded onto the machine from which you are running the bindplane CLI, which may or may not be the BindPlane OP server. In this example, version 1.59.1 of the collector is being uploaded to BindPlane OP: If the file has been renamed, you must specify the version flag with the version you are uploading: Delete Old Agent Artifact Packages Agent versions and agent artifact packages can be removed using the bindplane delete agent-version command: This will delete the version from BindPlane OP and remove the unpacked artifact package from the disk of the BindPlane OP server.]]>https://observiq.com/docs/advanced-setup/configuration/offline-agent-upgradeshttps://observiq.com/docs/advanced-setup/configuration/offline-agent-upgradesFri, 06 Sep 2024 15:57:06 GMT<![CDATA[NATS as Event Bus]]><![CDATA[This feature is only available in BindPlane OP Enterprise. Learn more here. NATS can be used as the event bus for BindPlane OP Enterprise and is a good option for distributed on-prem deployments. NATS is embedded into BindPlane and does not require external infrastructure. Configuration In order to use NATS as the event bus, the eventBus.type field must be set to nats and the eventBus.nats config must be filled out. On Linux, the path to the configuration file is /etc/bindplane/config.yaml. Here is an example configuration snippet using NATS as the event bus. In this example, there are three BindPlane OP servers named bindplane-0, bindplane-1, and bindplane-2. Each BindPlane server is operating the NATS client and server. Each NATS client will connect to its local server over localhost. Each NATS server will connect to other servers using their hostname and port.
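For one node (bindplane-0) of that layout, the eventBus section might look roughly like the sketch below. The option paths follow the Configuration Parameters table that comes next; the hostnames and the nats:// route format are assumptions for illustration.

```yaml
# Sketch for bindplane-0 -- repeat on each node, adjusting client.name and server.name.
eventBus:
  type: nats
  nats:
    client:
      name: bindplane-0
      endpoint: nats://localhost:4222
      subject: bindplane-event-bus
    server:
      enable: true
      name: bindplane-0
      cluster:
        name: bindplane
        host: 0.0.0.0
        port: 6222
        routes:
          - nats://bindplane-0:6222
          - nats://bindplane-1:6222
          - nats://bindplane-2:6222
```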
Configuration Parameters NATS Event Bus can be configured with the following configuration options, flags, and environment variables.

| Option | Flag | Environment Variable |
| :--- | :--- | :--- |
| eventBus.nats.client.name | nats-client-name | BINDPLANE_NATS_CLIENT_NAME |
| eventBus.nats.client.endpoint | nats-client-endpoint | BINDPLANE_NATS_CLIENT_ENDPOINT |
| eventBus.nats.client.subject | nats-client-subject | BINDPLANE_NATS_CLIENT_SUBJECT |
| eventBus.nats.server.enable | nats-server-enable | BINDPLANE_NATS_SERVER_ENABLE |
| eventBus.nats.server.name | nats-server-name | BINDPLANE_NATS_SERVER_NAME |
| eventBus.nats.server.client.host | nats-server-client-host | BINDPLANE_NATS_SERVER_CLIENT_HOST |
| eventBus.nats.server.client.port | nats-server-client-port | BINDPLANE_NATS_SERVER_CLIENT_PORT |
| eventBus.nats.server.http.host | nats-server-http-host | BINDPLANE_NATS_SERVER_HTTP_HOST |
| eventBus.nats.server.http.port | nats-server-http-port | BINDPLANE_NATS_SERVER_HTTP_PORT |
| eventBus.nats.server.cluster.name | nats-server-cluster-name | BINDPLANE_NATS_SERVER_CLUSTER_NAME |
| eventBus.nats.server.cluster.host | nats-server-cluster-host | BINDPLANE_NATS_SERVER_CLUSTER_HOST |
| eventBus.nats.server.cluster.port | nats-server-cluster-port | BINDPLANE_NATS_SERVER_CLUSTER_PORT |
| eventBus.nats.server.cluster.advertise | nats-server-cluster-advertise | BINDPLANE_NATS_SERVER_CLUSTER_ADVERTISE |
| eventBus.nats.server.cluster.routes | nats-server-cluster-routes | BINDPLANE_NATS_SERVER_CLUSTER_ROUTES |
| eventBus.nats.tls.enableTLS | nats-enable-tls | BINDPLANE_NATS_ENABLE_TLS |
| eventBus.nats.tls.tlsCert | nats-tls-cert | BINDPLANE_NATS_TLS_CERT |
| eventBus.nats.tls.tlsKey | nats-tls-key | BINDPLANE_NATS_TLS_KEY |
| eventBus.nats.tls.tlsCA | nats-tls-ca | BINDPLANE_NATS_TLS_CA |
| eventBus.nats.tls.tlsSkipVerify | nats-tls-skip-verify | BINDPLANE_NATS_TLS_SKIP_VERIFY |

Default installations of BindPlane will include the following configuration. Notice that the event bus type is local; NATS is disabled by default. Client Name The NATS client name can be set with eventBus.nats.client.name. It is required that clients have unique names. It is safe for this value to match the NATS server's name when BindPlane is operating the NATS client and server. Default value: System's hostname. Client Endpoint The endpoint used by the client to connect to a NATS server can be set with eventBus.nats.client.endpoint. The endpoint should be a URI containing the nats scheme as well as the hostname and port of the NATS server. Generally, localhost is used to target the server operating on the same node. Default value: nats://localhost:4222. Client Subject The eventBus.nats.client.subject option configures the NATS subject used to publish and consume events from the event bus. All clients should have the same subject. Default value: bindplane-event-bus. Server Enable The eventBus.nats.server.enable option enables the embedded NATS server. For small BindPlane deployments (3 to 5 nodes), it is recommended to operate the NATS client and server on all BindPlane OP nodes. For large deployments (more than 5 nodes), it is recommended to enable the NATS server on three nodes. Default value: false. Server Name The NATS server name can be set with eventBus.nats.server.name. It is required that servers have unique names. It is safe for this value to match the NATS client's name when BindPlane is operating the NATS client and server. Default value: System's hostname. Server Client Host The eventBus.nats.server.client.host option is used to configure the network interface used by the NATS server to receive incoming connections from clients. This can be localhost if the server is only receiving connections from the local NATS client, in situations where BindPlane is operating the client and server. Default value: localhost.
Server Client Port The eventBus.nats.server.client.port option is used to configure the TCP port used by the NATS server to receive incoming connections from clients. Default value: 4222. Server HTTP Host The eventBus.nats.server.http.host option is used to configure the network interface used to expose the NATS server Monitoring API. You can find documentation for the API here. This should be set to localhost, with any monitoring tools running on the server system. Default value: localhost. Server HTTP Port The eventBus.nats.server.http.port option is used to configure the TCP port used by the NATS server to expose the Monitoring API. Default value: 8222. Server Cluster Name The eventBus.nats.server.cluster.name option sets the name of the NATS cluster. All nodes within the NATS cluster should have the same cluster name. Default value: bindplane. Server Cluster Host The eventBus.nats.server.cluster.host option is used to configure the network interface used to expose the NATS server's cluster interface. When operating more than one NATS server, it should be set to 0.0.0.0 or a specific IP address that is reachable by all other NATS servers. Default value: localhost. Server Cluster Port The eventBus.nats.server.cluster.port option is used to configure the TCP port used by the NATS server's cluster interface. Default value: 6222. Server Cluster Advertise The eventBus.nats.server.cluster.advertise option can be used to advertise the endpoint other servers in the cluster should use to reach the NATS server. This option should be considered advanced and is generally not required. The configured value should be of the form host:port; it should not contain a URI scheme. Default value: Unset. Server Cluster Routes The eventBus.nats.server.cluster.routes option is used to define a list of servers that the NATS server should connect to. This list can contain the local server. In this example, there are three BindPlane servers. All three servers will make connections to each endpoint in the list of routes. The servers will detect if they are connected to themselves, and automatically remove the route as it is unnecessary. Default value: Unset. Authentication Authentication is supported by configuring TLS. The NATS event bus uses mutual TLS to authenticate the client and server. TLS Configuration The following options can be set under eventBus.nats.tls. When TLS is enabled, NATS will use mutual TLS to authenticate the NATS clients and servers. A certificate authority file is required to enforce the use of mutual TLS.

| Option | Description | Default |
| :--- | :--- | :--- |
| enableTLS | Enable or disable TLS | false |
| tlsCert | File path to TLS x509 PEM encoded certificate | required |
| tlsKey | File path to TLS x509 PEM encoded private key | required |
| tlsCA | File path(s) to TLS x509 PEM encoded certificate authority | required |
| tlsSkipVerify | Enable or disable strict hostname verification | false |

The following example enables TLS by setting enableTLS, tlsCert, tlsKey, and tlsCA. Generating Certificates You can use Step CLI, OpenSSL, or other tools to generate certificates. Certificates do not need to be publicly signed. The following examples will use step to generate a certificate authority and a signed certificate suitable for use with NATS. Create the certificate authority: Modify the san flag values to the hostnames of your BindPlane servers. If you have more than three servers, add additional san flags. You can also issue unique certificates for each server. Copy ca.crt, nats.crt, nats.key to /etc/bindplane on all of your servers. After copying them, set the filesystem permissions.
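A hedged sketch of what those permission changes typically look like on a package-based install, assuming the service runs as the bindplane user (adjust the owner if your install differs):

```bash
# Assumption: the BindPlane service account is "bindplane".
sudo chown bindplane:bindplane /etc/bindplane/ca.crt /etc/bindplane/nats.crt /etc/bindplane/nats.key
sudo chmod 0644 /etc/bindplane/ca.crt /etc/bindplane/nats.crt
sudo chmod 0600 /etc/bindplane/nats.key
```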
Update your NATS configuration section to include the TLS options. - eventBus.nats.tls.enableTLS - eventBus.nats.tls.tlsCert - eventBus.nats.tls.tlsKey - eventBus.nats.tls.tlsCA]]>https://observiq.com/docs/advanced-setup/configuration/nats-as-eventbushttps://observiq.com/docs/advanced-setup/configuration/nats-as-eventbusMon, 16 Sep 2024 16:10:03 GMT<![CDATA[Increase Max Open Files Limit]]><![CDATA[File Handles Linux processes are limited to 1024 open file handles by default. BindPlane's file handles consist of network connections and open files. You can use the following calculation to estimate the number of open file handles used by BindPlane: 500 + (2 × Number of Agents). For example, if you have 200 agents, you can expect to see up to 900 file handles. The number of file handles will differ between BindPlane configurations. For example, when using PostgreSQL as a storage backend, BindPlane will use up to 100 network connections by default. When using Bolt Store, BindPlane will consume one file handle. Using 500 file handles as a base allows the calculation to account for all BindPlane OP configurations. Configure Max File Handles BindPlane OP relies on the systemd option LimitNOFILE to limit the maximum number of open files. By default, this value is 55000. You can configure the max open files by using a Systemd override. Run the following command: Modify the unit file's override to look like this: After saving the file, you can reload systemd and restart BindPlane.]]>https://observiq.com/docs/advanced-setup/configuration/increase-max-open-files-limithttps://observiq.com/docs/advanced-setup/configuration/increase-max-open-files-limitTue, 11 Jun 2024 20:29:42 GMT<![CDATA[Active Directory Authentication]]><![CDATA[This feature is only available in BindPlane OP Enterprise and BindPlane for Google. Learn more here. BindPlane OP supports Active Directory for authentication. Active Directory allows users to offload authentication and authorization duties to their Active Directory server. BindPlane's Role-Based Access Control works in conjunction with Active Directory. 1. Prerequisites Before you begin, make sure the following requirements are met. - You have a BindPlane Enterprise or BindPlane for Google license key - The BindPlane server has network access to the Active Directory server's hostname or IP address - You know the Base Distinguished Name (base DN) of your Active Directory server - You understand that the first user to log into BindPlane will become the BindPlane administrator - Additional users will need to be invited by the administrator 2. Configuration Active Directory configuration will differ depending on the platform BindPlane is deployed to. Linux users should follow the Linux section. Kubernetes Helm users should follow the Kubernetes section. 2.1. Linux If you have not previously installed BindPlane, review the installation procedure here. On the Linux server hosting BindPlane, execute the init command to reconfigure BindPlane. Respond to the prompts until you reach the "Choose an authentication method" prompt. Select "Active Directory". In this example, the Active Directory server's IP address is 192.168.1.2. The Bind username is bindplane-ldap. For "Base DN", we are using dc=corp,dc=net, which will allow any Active Directory user to authenticate to BindPlane using their sAMAccountName or userPrincipalName. The configuration file at /etc/bindplane/config.yaml will look like this.
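The init command writes this file for you, so you should not need to author it by hand. As an illustration only, built from the example values above, the authentication portion might look roughly like the following; the exact field names under auth.ldap vary by BindPlane version and are assumptions here.

```yaml
# Illustrative sketch only -- let the init command generate this section.
# Field names under auth.ldap are assumptions; values come from the example above.
auth:
  type: active-directory
  ldap:
    server: 192.168.1.2
    port: 389
    baseDN: dc=corp,dc=net
    bindUser: bindplane-ldap
    bindPassword: <bind-user-password>
```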
Once BindPlane is configured and restarted, log into BindPlane to become the Organization Administrator. If you have trouble logging in, proceed to the Troubleshooting section. 2.1.1. TLS TLS is supported. To enable it, re-run the init command from step 2.1 and select yes when prompted to enable TLS. 2.2. Kubernetes BindPlane is deployed to Kubernetes using the BindPlane OP Helm Chart. If you have not previously deployed BindPlane, review the Kubernetes Installation guide before proceeding. The Helm chart supports Active Directory by configuring the auth.type and auth.ldap value options. In this example, the values file contains the same values used in the Linux example. Deploy or update your existing Helm deployment to include the new authentication options. 2.2.1. TLS The BindPlane Helm chart supports TLS and mutual TLS. Before configuring TLS, you must create a Kubernetes secret containing the TLS certificate authority and an optional mutual TLS client certificate key pair. In this example, the CA certificate is located at ca.crt and the (optional) client key pair is located at client.crt and client.key. Update the namespace and file names to match your environment. Once the secret ldap-tls is created, update your values file to include the TLS options. For TLS, configure the TLS certificate authority. For mutual TLS, configure the TLS certificate authority and client key pair. 2.3. Restrict Access Despite being able to authenticate, users require an invitation before they can successfully log into BindPlane. This means you do not need to restrict which LDAP users and groups can authenticate. If you wish to restrict the user base, you can do so by updating your search filter to include an Active Directory group. The default search filter will attempt to match the user's username to sAMAccountName or userPrincipalName. You can restrict the search filter by including a memberOf filter. In this example, we are requiring that the user be part of the bindplane group. Working with search filters can be difficult and error-prone; see the Troubleshooting section for example usage of the ldapsearch command. 3. Troubleshooting 3.1 LDAP Search The ldapsearch utility is useful for interacting with Active Directory. You can use it to describe a user or group. If you are using TLS, set the following environment variable. If you are using mutual TLS, set the following environment variables in addition to LDAPTLS_CACERT. 3.2 Third Party Documentation - Search filter names: https://learn.microsoft.com/en-us/windows/win32/ad/naming-properties - Understanding search filters: https://confluence.atlassian.com/kb/how-to-write-ldap-search-filters-792496933.html]]>https://observiq.com/docs/advanced-setup/configuration/active-directoryhttps://observiq.com/docs/advanced-setup/configuration/active-directoryThu, 15 Aug 2024 14:04:24 GMT<![CDATA[Installation]]><![CDATA[The BindPlane OP Client allows you to manage your BindPlane OP server remotely. It lets you view agents, modify configurations, and create custom resource types. Once installed, the bindplane command will be available and can be used to connect to a BindPlane OP Server. See the Configuration page for configuration instructions. Installing Client Linux The client can be installed by downloading the correct package and placing the binary in your path. Installing Client (AMD64) Uninstalling Client macOS BindPlane OP supports any macOS version 10.13 or newer.
Installing Client (AMD64) Installing Client (ARM64) BindPlane OP Server on macOS The macOS client includes some server configuration; however, BindPlane OP Server is not officially supported on macOS. Uninstalling Client]]>https://observiq.com/docs/advanced-setup/cli/installationhttps://observiq.com/docs/advanced-setup/cli/installationFri, 15 Dec 2023 19:01:01 GMT<![CDATA[API Keys]]><![CDATA[API Keys are available in BindPlane OP Enterprise edition and BindPlane OP Cloud. Create an API Key API Keys can be created from the UI by visiting the Project page, found in the top-right settings menu. Navigate to the API Key tab and click "Generate New API Key". A window will show you your new API Key. This will be the only time this key is available to you. If you do not copy the key now, you will have to generate another API key. Each user can have one API key per project. Using your API Key with the CLI You can set the API Key value in your configuration by using the bindplane profile command, e.g.: This sets your "default" profile to use the API key you just received. You may also need to set your remote-url to the correct endpoint, for app.bindplane.com users: Make sure you are using the default profile. Now verify you can retrieve resources, say Source Types. Using your API Key with the REST API You can use your API key with the REST API by setting the X-Bindplane-Api-Key header. An example using curl with BindPlane OP Cloud:]]>https://observiq.com/docs/advanced-setup/cli/api-keyshttps://observiq.com/docs/advanced-setup/cli/api-keysWed, 19 Jun 2024 20:25:36 GMT<![CDATA[Resources]]><![CDATA[BindPlane OP resources can be backed up using the CLI. This method should be used in addition to backing up Bolt Store or PostgreSQL. Prerequisites - BindPlane OP v1.25.0 or newer. If on an older version, upgrade before attempting a migration. Profile The backup and restore process requires that you have configured your BindPlane CLI profile. The profile allows you to connect to the BindPlane OP server. In this example, the BindPlane OP server has a remote URL of http://192.168.1.10:3001 and the profile name is "example". Configure Profile with username and password: If using BindPlane OP Enterprise with multi-project, you can use an API key instead of username and password: Backup After running the CLI commands, a file with the name migration-resources-.yaml will exist in your working directory. It is recommended that the resource YAML file be moved to a remote system, such as a backup server or a secure object storage service like Google Cloud Storage or Amazon S3. Restore BindPlane resources can be restored by using the apply command. After configuring your CLI profile, run the following command: BindPlane OP will create new resources, or update existing resources if they already exist. If applying resources to an in-use system, configurations that are updated will have pending rollouts that must be triggered by the user.]]>https://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/resourceshttps://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/resourcesThu, 13 Jun 2024 17:42:05 GMT<![CDATA[PostgreSQL]]><![CDATA[When BindPlane OP is configured to use PostgreSQL as the storage backend, all data is stored in a database on the PostgreSQL system. Tooling The PostgreSQL ecosystem is rich with backup tools. This guide will focus on the simplest approach, pg_dump. You can use your favorite backup tooling as an alternative to pg_dump. Prerequisites - Command line access to the PostgreSQL database Backup Using the pg_dump command, export the database named bindplane to a file in the working directory.
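A hedged sketch of such an export is shown below; the connection details (host and user) are assumptions for your environment, and the filename simply mirrors the timestamped example that follows.

```bash
# Sketch: dump the "bindplane" database to a timestamped file in the working directory.
pg_dump -h localhost -U bindplane -d bindplane > "bindplane-$(date +%F_%H:%M:%S).pgsql"
```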
Once finished, a file with the date will exist in the working directory. For example, bindplane-2023-08-03_15:16:47.pgsql. It is recommended that the exported database files be moved to a remote system, such as a backup server or a secure object storage service like Google Cloud Storage or Amazon S3. Restore To restore a backup of the PostgreSQL database, use the following process: 1. Stop the server: sudo systemctl stop bindplane 2. Ensure the target database exists 3. Use psql to restore the backup: psql -d bindplane < bindplane-2023-08-03_15:16:47.pgsql 4. Start BindPlane: sudo systemctl start bindplane]]>https://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/postgresqlhttps://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/postgresqlWed, 15 Nov 2023 16:10:29 GMT<![CDATA[Bolt Store]]><![CDATA[When BindPlane OP is configured to use Bolt Store as the storage backend, a bbolt database is created on the filesystem. Bbolt is a high-performance database suitable for operating BindPlane OP in a single-node configuration. Backup The Bolt Store database file is constantly written to. In order to guarantee consistency and avoid corruption of the backup file, BindPlane OP must be stopped before the database file can be copied. After copying the database file, the storage directory will look something like this: It is recommended that the copied database files be moved to a remote system, such as a backup server or a secure object storage service like Google Cloud Storage or Amazon S3. Restore To restore a backup of Bolt Store, use the following process: 1. Stop the server: sudo systemctl stop bindplane 2. Back up the current database file 3. Copy a previous database backup file to /var/lib/bindplane/storage/bindplane.db 4. Start BindPlane: sudo systemctl start bindplane]]>https://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/bolt-storehttps://observiq.com/docs/advanced-setup/backup-and-disaster-recovery/bolt-storeThu, 02 Nov 2023 16:23:52 GMT
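Putting the Bolt Store backup steps above into commands gives a sketch like the following; the database path comes from this page, while the backup destination directory is a placeholder to adjust for your environment.

```bash
# Sketch: stop BindPlane, copy the bbolt database to a safe location, then start it again.
sudo systemctl stop bindplane
sudo cp /var/lib/bindplane/storage/bindplane.db "/var/backups/bindplane-$(date +%F).db"
sudo systemctl start bindplane
```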