

#Kafka exporter how to

I'm trying to use the Kafka Exporter packaged by Bitnami together with the Bitnami image for Kafka. I'm trying to run the following docker-compose.yml:

version: '2'
KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181

However, if I run this with docker-compose up, I get the following error:

bitnami-docker-kafka-kafka-exporter-1 | F0103 17:44:12.545739 1 kafka_exporter.go:865] Error Init Kafka Client: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

I've tried to use the answer to "How to pass arguments to entrypoint in docker-compose.yml" to specify a command for the kafka-exporter service which, assuming the entrypoint is defined in exec form, should append additional flags to the invocation of the Kafka Exporter binary. However, it seems that either the value of kafka:9092 is not right for the value of the kafka.server flag, or the flag is not getting picked up, or perhaps there is some kind of race condition where the exporter fails and exits before Kafka is up and running. Any ideas on how to get this example to work?

It would appear that this is just caused by a race condition, with the Kafka Exporter trying to connect to Kafka before Kafka has started up. If I just run docker-compose up, allow the Kafka Exporter to fail, and then separately run the danielqsj/kafka-exporter container, it works:

> docker run -it -p 9308:9308 --network bitnami-docker-kafka_app-tier danielqsj/kafka-exporter
I0103 18:49:04.703058 1 kafka_exporter.go:934] Listening on HTTP :9308

and I can see metrics:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
go_gc_duration_seconds 0

A more reliable way to do this is to use a wrapper script to perform an application-specific health check. The following Dockerfile does this:

FROM bitnami/kafka-exporter:latest
COPY run.sh /opt/bitnami/kafka-exporter/bin

together with a run.sh that waits for the broker (echo "Waiting for the Kafka cluster to come up.") and a matching docker-compose file.

The Kafka exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages, so it should be used with the batch and queued retry processors for higher throughput and resiliency. Message payload encoding is configurable.

The following settings can be optionally configured:

protocol_version (no default): Kafka protocol version.
brokers (default = localhost:9092): The list of kafka brokers.
topic (default = otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs): The name of the kafka topic to export to.
encoding (default = otlp_proto): The encoding of the traces sent to kafka. otlp_proto: payload is Protobuf serialized from ExportTraceServiceRequest if set as a traces exporter, ExportMetricsServiceRequest for metrics, or ExportLogsServiceRequest for logs. otlp_json (** EXPERIMENTAL **): payload is JSON serialized from ExportTraceServiceRequest if set as a traces exporter, ExportMetricsServiceRequest for metrics, or ExportLogsServiceRequest for logs.
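A docker-compose-level alternative to the wrapper script is to gate the exporter on a Kafka healthcheck. The following fragment is only a sketch under stated assumptions: it uses the 2.1 compose file format (which supports depends_on conditions), it assumes nc is available inside the Kafka image, and the service names and image tags are illustrative rather than taken from the original files:

```yaml
version: '2.1'
services:
  kafka:
    image: bitnami/kafka:latest
    healthcheck:
      # Treat Kafka as healthy once its client port accepts TCP connections.
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 5s
      timeout: 5s
      retries: 12
  kafka-exporter:
    image: danielqsj/kafka-exporter:latest
    command: ["--kafka.server=kafka:9092"]
    depends_on:
      kafka:
        condition: service_healthy   # start the exporter only after Kafka is up
```

With this in place the exporter is not started until the healthcheck passes, which removes the race condition without modifying the exporter image.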

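For the OpenTelemetry Collector settings listed above, a minimal exporter configuration might look like the following sketch; the broker address, topic, and protocol_version value are illustrative, and the otlp receiver and batch processor are assumed to be configured elsewhere in the same file:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092       # default broker list
    topic: otlp_spans        # default topic for traces
    encoding: otlp_proto     # or otlp_json (experimental)
    protocol_version: 2.0.0

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]    # recommended, since the producer does not batch
      exporters: [kafka]
```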