Crate opentelemetry_otlp
The OTLP Exporter supports exporting logs, metrics and traces in the OTLP format to the OpenTelemetry collector or other compatible backend.
The OpenTelemetry Collector offers a vendor-agnostic implementation on how to receive, process, and export telemetry data. In addition, it removes the need to run, operate, and maintain multiple agents/collectors in order to support open-source telemetry data formats (e.g. Jaeger, Prometheus, etc.) sending to multiple open-source or commercial back-ends.
Currently, this crate only supports sending telemetry in OTLP via gRPC and HTTP (in binary format). Support for other formats and protocols will be added in the future. The details of what this crate currently offers can be found in this doc.
§Quickstart
First, make sure you have a running instance of the OpenTelemetry Collector you want to send data to:
$ docker run -p 4317:4317 otel/opentelemetry-collector:latest
Then install a new pipeline with the recommended defaults to start exporting telemetry. You will have to build an OTLP exporter first.
Exporting pipelines can be started with new_pipeline().tracing(), new_pipeline().metrics(), and new_pipeline().logging(), respectively, for traces, metrics and logs. A sketch of the metrics pipeline follows the trace example below.
use opentelemetry::global;
use opentelemetry::trace::Tracer;

fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    // First, create an OTLP exporter builder. Configure it as you need.
    let otlp_exporter = opentelemetry_otlp::new_exporter().tonic();
    // Then pass it into the pipeline builder.
    let _ = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(otlp_exporter)
        .install_simple()?;
    let tracer = global::tracer("my_tracer");
    tracer.in_span("doing_work", |cx| {
        // Traced app logic here...
    });

    Ok(())
}
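The metrics pipeline is built the same way. The following is a minimal sketch, assuming the metrics and grpc-tonic features are enabled and a Tokio runtime is running; the endpoint is the collector's default gRPC port and is only illustrative:

use opentelemetry_otlp::WithExportConfig;

// A sketch: build and install an OTLP metrics pipeline over tonic/gRPC.
// Assumes the `metrics` and `grpc-tonic` features and a Tokio runtime.
let meter_provider = opentelemetry_otlp::new_pipeline()
    .metrics(opentelemetry_sdk::runtime::Tokio)
    .with_exporter(
        opentelemetry_otlp::new_exporter()
            .tonic()
            // Optional; this is the collector's default gRPC endpoint.
            .with_endpoint("http://localhost:4317"),
    )
    .build()?;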
§Performance
For optimal performance, a batch exporter is recommended as the simple exporter will export each span synchronously on dropping. You can enable the rt-tokio, rt-tokio-current-thread or rt-async-std features and specify a runtime on the pipeline builder to have a batch exporter configured for you automatically.
[dependencies]
opentelemetry_sdk = { version = "*", features = ["rt-async-std"] }
opentelemetry-otlp = { version = "*", features = ["grpc-tonic"] }
let tracer = opentelemetry_otlp::new_pipeline()
    .tracing()
    .with_exporter(opentelemetry_otlp::new_exporter().tonic())
    .install_batch(opentelemetry_sdk::runtime::AsyncStd)?;
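The same pattern applies to Tokio. A sketch, assuming the rt-tokio feature is enabled on opentelemetry_sdk and the application already runs inside a Tokio runtime (here provided by #[tokio::main]):

// A sketch of the batch pipeline on the Tokio runtime.
// Assumes `rt-tokio` on opentelemetry_sdk and `grpc-tonic` on opentelemetry-otlp.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    let tracer = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(opentelemetry_otlp::new_exporter().tonic())
        // install_batch configures a batch span processor driven by the given runtime.
        .install_batch(opentelemetry_sdk::runtime::Tokio)?;
    Ok(())
}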
§Feature Flags
The following feature flags can enable exporters for different telemetry signals:
- trace: Includes the trace exporters (enabled by default).
- metrics: Includes the metrics exporters.
- logs: Includes the logs exporters.

The following feature flags generate additional code and types:

- serialize: Enables serialization support for types defined in this crate via serde.
The following feature flags offer additional configurations on gRPC:
For users using tonic as the gRPC layer:

- grpc-tonic: Use tonic as the gRPC layer. This is enabled by default.
- gzip-tonic: Use gzip compression for the tonic gRPC layer (see the sketch after this list).
- tls-tonic: Enable TLS.
- tls-roots: Adds system trust roots to rustls-based gRPC clients using the rustls-native-certs crate.
- tls-webkpi-roots: Embeds Mozilla's trust roots into rustls-based gRPC clients using the webpki-roots crate.
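For instance, gzip compression over tonic is configured on the exporter builder. A minimal sketch, assuming the gzip-tonic feature is enabled and that the builder's with_compression method (gated behind that feature) is available:

use opentelemetry_otlp::{Compression, WithExportConfig};

// A sketch: tonic exporter with gzip-compressed gRPC payloads.
// Assumes the `gzip-tonic` feature; `with_compression` is gated behind it.
let exporter = opentelemetry_otlp::new_exporter()
    .tonic()
    .with_endpoint("http://localhost:4317")
    .with_compression(Compression::Gzip);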
The following feature flags offer additional configurations on HTTP:

- http-proto: Use HTTP as the transport layer, protobuf as the body format (see the sketch after this list).
- reqwest-blocking-client: Use the reqwest blocking HTTP client.
- reqwest-client: Use the reqwest HTTP client.
- reqwest-rustls: Use reqwest with TLS, with system trust roots via the rustls-native-certs crate.
- reqwest-rustls-webkpi-roots: Use reqwest with TLS, with Mozilla's trust roots via the webpki-roots crate.
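As an illustration of the HTTP transport, a minimal sketch assuming the http-proto feature plus one of the reqwest client features; the 4318 endpoint is the collector's default HTTP port and is only illustrative:

use opentelemetry_otlp::{Protocol, WithExportConfig};

// A sketch: OTLP over HTTP with a binary protobuf body.
// Assumes the `http-proto` and `reqwest-client` features.
let tracer = opentelemetry_otlp::new_pipeline()
    .tracing()
    .with_exporter(
        opentelemetry_otlp::new_exporter()
            .http()
            .with_endpoint("http://localhost:4318/v1/traces")
            .with_protocol(Protocol::HttpBinary),
    )
    .install_simple()?;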
§Kitchen Sink Full Configuration
Example showing how to override all configuration options.
Generally there are two parts to the configuration. One is the signal-specific configuration (trace or metrics config), which users can set via OtlpTracePipeline or OtlpMetricPipeline. The other is the export configuration, which users can set via OtlpExporterPipeline based on the choice of exporter.
use opentelemetry::{global, KeyValue, trace::Tracer};
use opentelemetry_sdk::{trace::{self, RandomIdGenerator, Sampler}, Resource};
use opentelemetry_sdk::metrics::reader::{DefaultAggregationSelector, DefaultTemporalitySelector};
use opentelemetry_otlp::{Protocol, WithExportConfig, ExportConfig};
use std::time::Duration;
use tonic::metadata::*;

fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    let mut map = MetadataMap::with_capacity(3);
    map.insert("x-host", "example.com".parse().unwrap());
    map.insert("x-number", "123".parse().unwrap());
    map.insert_bin("trace-proto-bin", MetadataValue::from_bytes(b"[binary data]"));

    let tracer_provider = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint("http://localhost:4317")
                .with_timeout(Duration::from_secs(3))
                .with_metadata(map)
        )
        .with_trace_config(
            trace::config()
                .with_sampler(Sampler::AlwaysOn)
                .with_id_generator(RandomIdGenerator::default())
                .with_max_events_per_span(64)
                .with_max_attributes_per_span(16)
                .with_max_links_per_span(16)
                .with_resource(Resource::new(vec![KeyValue::new("service.name", "example")])),
        )
        .install_batch(opentelemetry_sdk::runtime::Tokio)?;
    global::set_tracer_provider(tracer_provider);
    let tracer = global::tracer("tracer-name");

    let export_config = ExportConfig {
        endpoint: "http://localhost:4317".to_string(),
        timeout: Duration::from_secs(3),
        protocol: Protocol::Grpc
    };

    let meter_provider = opentelemetry_otlp::new_pipeline()
        .metrics(opentelemetry_sdk::runtime::Tokio)
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_export_config(export_config),
            // You can also configure the exporter with with_* functions,
            // as in the tracing example above.
        )
        .with_resource(Resource::new(vec![KeyValue::new("service.name", "example")]))
        .with_period(Duration::from_secs(3))
        .with_timeout(Duration::from_secs(10))
        .with_aggregation_selector(DefaultAggregationSelector::new())
        .with_temporality_selector(DefaultTemporalitySelector::new())
        .build()?;

    tracer.in_span("doing_work", |cx| {
        // Traced app logic here...
    });

    Ok(())
}
Structs§
- ExportConfig: Configuration for the OTLP exporter.
- HttpExporterBuilder (http-proto or http-json): Configuration for the OTLP HTTP exporter.
- LogExporter (logs): OTLP exporter that sends log data.
- MetricsExporter (metrics): Export metrics in OTEL format.
- OtlpExporterPipeline: Build an OTLP metrics or tracing exporter builder. See functions below to understand what's currently supported.
- OtlpLogPipeline (logs): Recommended configuration for an OTLP exporter pipeline.
- OtlpMetricPipeline (metrics): Pipeline to build the OTLP metrics exporter.
- OtlpPipeline: General builder for both tracing and metrics.
- OtlpTracePipeline (trace): Recommended configuration for an OTLP exporter pipeline.
- SpanExporter (trace): OTLP exporter that sends tracing information.
- TonicConfig (grpc-tonic): Configuration for tonic.
- TonicExporterBuilder (grpc-tonic): Configuration for the tonic OTLP gRPC exporter.
Enums§
- Compression: The compression algorithm to use when sending data.
- Error: Wrap type for errors from this crate.
- LogExporterBuilder: OTLP log exporter builder.
- MetricsExporterBuilder (metrics): OTLP metrics exporter builder.
- Protocol: The communication protocol to use when exporting data.
- SpanExporterBuilder (trace): OTLP span exporter builder.
Constants§
- OTEL_EXPORTER_OTLP_COMPRESSION: Compression algorithm to use, defaults to none.
- OTEL_EXPORTER_OTLP_ENDPOINT: Target to which the exporter is going to send signals, defaults to https://localhost:4317. Learn about the relationship between this constant and metrics/spans/logs at https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md#endpoint-urls-for-otlphttp
- OTEL_EXPORTER_OTLP_ENDPOINT_DEFAULT: Default target to which the exporter is going to send signals.
- OTEL_EXPORTER_OTLP_HEADERS: Key-value pairs to be used as headers associated with gRPC or HTTP requests. Example: k1=v1,k2=v2. Note: as of now, this is only supported for HTTP requests.
- OTEL_EXPORTER_OTLP_LOGS_COMPRESSION: Compression algorithm to use, defaults to none.
- OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: Target to which the exporter is going to send logs.
- OTEL_EXPORTER_OTLP_LOGS_HEADERS: Key-value pairs to be used as headers associated with gRPC or HTTP requests for sending logs. Example: k1=v1,k2=v2. Note: this is only supported for HTTP.
- OTEL_EXPORTER_OTLP_LOGS_TIMEOUT: Maximum time the OTLP exporter will wait for each batch logs export.
- OTEL_EXPORTER_OTLP_METRICS_COMPRESSION: Compression algorithm to use, defaults to none.
- OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: Target to which the exporter is going to send metrics, defaults to https://localhost:4317/v1/metrics. Learn about the relationship between this constant and default/spans/logs at https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md#endpoint-urls-for-otlphttp
- OTEL_EXPORTER_OTLP_METRICS_HEADERS: Key-value pairs to be used as headers associated with gRPC or HTTP requests for sending metrics. Example: k1=v1,k2=v2. Note: this is only supported for HTTP.
- OTEL_EXPORTER_OTLP_METRICS_TIMEOUT: Max waiting time for the backend to process each metrics batch, defaults to 10s.
- OTEL_EXPORTER_OTLP_PROTOCOL: Protocol the exporter will use. Either http/protobuf or grpc.
- OTEL_EXPORTER_OTLP_PROTOCOL_DEFAULT: Default protocol, using http-json.
- OTEL_EXPORTER_OTLP_TIMEOUT: Max waiting time for the backend to process each signal batch, defaults to 10 seconds.
- OTEL_EXPORTER_OTLP_TIMEOUT_DEFAULT: Default max waiting time for the backend to process each signal batch.
- OTEL_EXPORTER_OTLP_TRACES_COMPRESSION: Compression algorithm to use, defaults to none.
- OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: Target to which the exporter is going to send spans, defaults to https://localhost:4317/v1/traces. Learn about the relationship between this constant and default/metrics/logs at https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md#endpoint-urls-for-otlphttp
- OTEL_EXPORTER_OTLP_TRACES_HEADERS: Key-value pairs to be used as headers associated with gRPC or HTTP requests for sending spans. Example: k1=v1,k2=v2. Note: this is only supported for HTTP.
- OTEL_EXPORTER_OTLP_TRACES_TIMEOUT: Max waiting time for the backend to process each spans batch, defaults to 10s.
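These constants name the standard OTLP exporter environment variables. As a minimal sketch, the endpoint can be overridden through the environment before the pipeline is built; the URL is illustrative, and whether a particular builder reads the variable automatically or requires explicit opt-in depends on the crate version and configuration:

use opentelemetry_otlp::OTEL_EXPORTER_OTLP_ENDPOINT;

// A sketch: point the exporter at a non-default collector via the environment.
// The URL is illustrative; env handling depends on version and configuration.
std::env::set_var(OTEL_EXPORTER_OTLP_ENDPOINT, "http://collector.example.com:4317");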
Traits§
- HasExportConfig: Provide access to the export config field within the exporter builders.
- WithExportConfig: Expose methods to override the export configuration.
Functions§
- new_exporter: Create a builder to build an OTLP metrics exporter or tracing exporter.
- new_pipeline: Create a new pipeline builder with the recommended configuration.