

Quickstart

This tutorial shows the quickest way to get started with the Prometheus Java metrics
library.

Dependencies

We use the following dependencies:

prometheus-metrics-core is the actual metrics library.
prometheus-metrics-instrumentation-jvm provides out-of-the-box JVM metrics.
prometheus-metrics-exporter-httpserver is a standalone HTTP server for exposing Prometheus metrics.

Maven

<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-core</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-instrumentation-jvm</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-exporter-httpserver</artifactId>
<version>1.0.0</version>
</dependency>

There are alternative exporters as well. For example, if you are using a Servlet container like Tomcat or Undertow you might want to use prometheus-metrics-exporter-servlet-jakarta rather than a standalone HTTP server.

Example Application

import io.prometheus.metrics.core.metrics.Counter;
import io.prometheus.metrics.exporter.httpserver.HTTPServer;
import io.prometheus.metrics.instrumentation.jvm.JvmMetrics;

import java.io.IOException;

public class App {

    public static void main(String[] args) throws InterruptedException, IOException {

        JvmMetrics.builder().register(); // initialize the out-of-the-box JVM metrics

        Counter counter = Counter.builder()
                .name("my_count_total")
                .help("example counter")
                .labelNames("status")
                .register();

        counter.labelValues("ok").inc();
        counter.labelValues("ok").inc();
        counter.labelValues("error").inc();

        HTTPServer server = HTTPServer.builder()
                .port(9400)
                .buildAndStart();

        System.out.println("HTTPServer listening on port http://localhost:" + server.getPort() + "/metrics");

        Thread.currentThread().join(); // sleep forever
    }
}

Result

Run the application and view http://localhost:9400/metrics with your browser to see the raw metrics. You should see the my_count_total metric as shown below plus the jvm_ and process_ metrics coming from JvmMetrics.

# HELP my_count_total example counter
# TYPE my_count_total counter
my_count_total{status="error"} 1.0
my_count_total{status="ok"} 2.0

Prometheus Configuration

To scrape the metrics with a Prometheus server, download the latest Prometheus server
release, and configure the prometheus.yml file as follows:

global:
  scrape_interval: 10s # short interval for manual testing

scrape_configs:
  - job_name: "java-example"
    static_configs:
      - targets: ["localhost:9400"]

Registry

In order to expose metrics, you need to register them with a PrometheusRegistry. We are using a counter as an example here, but the register() method is the same for all metric types.

Registering a Metric with the Default Registry


Counter eventsTotal = Counter.builder()
    .name("events_total")
    .help("Total number of events")
    .register(); // <-- implicitly uses PrometheusRegistry.defaultRegistry

The register() call above builds the counter and registers it with the global static
PrometheusRegistry.defaultRegistry . Using the default registry is recommended.

Registering a Metric with a Custom Registry

You can also register your metric with a custom registry:

PrometheusRegistry myRegistry = new PrometheusRegistry();

Counter eventsTotal = Counter.builder()
    .name("events_total")
    .help("Total number of events")
    .register(myRegistry);

Registering a Metric with Multiple Registries

As an alternative to calling register() directly, you can build() metrics without registering them, and register them later:

// create a counter that is not registered with any registry
Counter eventsTotal = Counter.builder()
    .name("events_total")
    .help("Total number of events")
    .build(); // <-- this will create the metric but not register it

// register the counter with the default registry
PrometheusRegistry.defaultRegistry.register(eventsTotal);

// register the counter with a custom registry.
// This is ok, you can register a metric with multiple registries.
PrometheusRegistry myRegistry = new PrometheusRegistry();
myRegistry.register(eventsTotal);

Custom registries are useful if you want to maintain different scopes of metrics, like a
debug registry with a lot of metrics, and a default registry with only a few metrics.

IllegalArgumentException: Duplicate Metric Name in Registry

While it is ok to register the same metric with multiple registries, it is illegal to register
the same metric name multiple times with the same registry. The following code will
throw an IllegalArgumentException :

Counter eventsTotal1 = Counter.builder()
    .name("events_total")
    .help("Total number of events")
    .register();

Counter eventsTotal2 = Counter.builder()
    .name("events_total")
    .help("Total number of events")
    .register(); // <-- IllegalArgumentException, because a metric with that name is already registered

Unregistering a Metric

There is no automatic expiry of unused metrics (yet); once a metric is registered, it will remain registered forever.

However, you can programmatically unregister an obsolete metric like this:

PrometheusRegistry.defaultRegistry.unregister(eventsTotal);


Labels

The following shows an example of a Prometheus metric in text format:

# HELP payments_total total number of payments
# TYPE payments_total counter
payments_total{status="error",type="paypal"} 1.0
payments_total{status="success",type="credit card"} 3.0
payments_total{status="success",type="paypal"} 2.0

The example shows a counter metric named payments_total with two labels: status
and type . Each individual data point (each line in text format) is identified by the
unique combination of its metric name and its label name/value pairs.

Creating a Metric with Labels

Labels are supported for all metric types. We are using counters in this example; however, the labelNames() and labelValues() methods are the same for the other metric types.

The following code creates the counter above.

Counter counter = Counter.builder()
    .name("payments_total")
    .help("total number of payments")
    .labelNames("type", "status")
    .register();

counter.labelValues("credit card", "success").inc(3.0);
counter.labelValues("paypal", "success").inc(2.0);
counter.labelValues("paypal", "error").inc(1.0);

The label names have to be specified when the metric is created and cannot change.
The label values are created on demand when values are observed.


Creating a Metric without Labels

Labels are optional. The following example shows a metric without labels:

Counter counter = Counter.builder()
    .name("payments_total")
    .help("total number of payments")
    .register();

counter.inc(3.0);

Cardinality Explosion

Each combination of label names and values will result in a new data point, i.e. a new
line in text format. Therefore, a good label should have only a small number of possible
values. If you select labels with many possible values, like unique IDs or timestamps, you
may end up with an enormous number of data points. This is called cardinality explosion.

Here’s a bad example, don’t do this:

Counter loginCount = Counter.builder()
    .name("logins_total")
    .help("total number of logins")
    .labelNames("user_id", "timestamp") // not a good idea, this will result in a cardinality explosion
    .register();

String userId = UUID.randomUUID().toString();
String timestamp = Long.toString(System.currentTimeMillis());

loginCount.labelValues(userId, timestamp).inc();

Initializing Label Values

If you register a metric without labels, it will show up immediately with an initial value of zero.


However, metrics with labels only show up after the label values are first used. In the
example above

counter.labelValues("paypal", "error").inc();

The data point

payments_total{status="error",type="paypal"} 1.0

will jump from non-existent to value 1.0. You will never see it with value 0.0.

This is usually not an issue. However, if you find this annoying and want to see all
possible label values from the start, you can initialize label values with
initLabelValues() like this:

Counter counter = Counter.builder()
    .name("payments_total")
    .help("total number of payments")
    .labelNames("type", "status")
    .register();

counter.initLabelValues("credit card", "success");
counter.initLabelValues("credit card", "error");
counter.initLabelValues("paypal", "success");
counter.initLabelValues("paypal", "error");

Now the four combinations will be visible from the start with initial value zero.

# HELP payments_total total number of payments
# TYPE payments_total counter
payments_total{status="error",type="credit card"} 0.0
payments_total{status="error",type="paypal"} 0.0
payments_total{status="success",type="credit card"} 0.0
payments_total{status="success",type="paypal"} 0.0

Expiring Unused Label Values


There is no automatic expiry of unused label values (yet). Once a set of label values is
used, it will remain there forever.

However, you can programmatically remove label values like this:

counter.remove("paypal", "error");
counter.remove("paypal", "success");

Const Labels

If you have label values that never change, you can specify them in the builder as constLabels():

Counter counter = Counter.builder()
    .name("payments_total")
    .help("total number of payments")
    .constLabels(Labels.of("env", "dev"))
    .labelNames("type", "status")
    .register();

However, most use cases for constLabels() are better covered by target labels set by
the scraping Prometheus server, or by one specific metric (e.g. a build_info or a
machine_role metric). See also target labels, not static scraped labels.

Metric Types

The Prometheus Java metrics library implements the metric types defined in the
OpenMetrics standard:

Counter
Gauge
Histogram
Summary
Info
StateSet
GaugeHistogram and Unknown

Counter

Counter is the most common and useful metric type. Counters can only increase, but
never decrease. In the Prometheus query language, the rate() function is often used for
counters to calculate the average increase per second.

Counter values do not need to be integers. In many cases counters represent a number of events (like the number of requests), and in that case the counter value is an integer. However, counters can also be used for something like “total time spent doing something”, in which case the counter value is a floating point number.

Here’s an example of a counter:

Counter serviceTimeSeconds = Counter.builder()
    .name("service_time_seconds_total")
    .help("total time spent serving requests")
    .unit(Unit.SECONDS)
    .register();

serviceTimeSeconds.inc(Unit.millisToSeconds(200));

The resulting counter has the value 0.2. As SECONDS is the standard time unit in Prometheus, the Unit utility class has methods to convert other time units to seconds.

As defined in OpenMetrics, counter metric names must have the _total suffix. If you create a counter without the _total suffix, the suffix will be appended automatically.
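
For example (a minimal sketch of the rule above), the counter below is created without the _total suffix, so it is exposed as requests_total:

Counter requests = Counter.builder()
    .name("requests") // the library appends the _total suffix, exposing requests_total
    .help("total number of requests")
    .register();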

Gauge

Gauges are current measurements, such as the current temperature in Celsius.

Gauge temperature = Gauge.builder()
    .name("temperature_celsius")
    .help("current temperature")
    .labelNames("location")
    .unit(Unit.CELSIUS)
    .register();

temperature.labelValues("Berlin").set(22.3);

Histogram

Histograms are for observing distributions, like latency distributions for HTTP services
or the distribution of request sizes. Unlike with counters and gauges, each histogram
data point has a complex data structure representing different aspects of the
distribution:

Count: The total number of observations.
Sum: The sum of all observed values, e.g. the total time spent serving requests.
Buckets: The histogram buckets representing the distribution.

Prometheus supports two flavors of histograms:

Classic histograms: Bucket boundaries are explicitly defined when the histogram is
created.
Native histograms (exponential histograms): Infinitely many virtual buckets.

By default, histograms maintain both flavors. Which one is used depends on the scrape
request from the Prometheus server.

By default, the Prometheus server will scrape metrics in OpenMetrics format and get the classic histogram flavor.
If the Prometheus server is started with --enable-feature=native-histograms, it will request metrics in Prometheus protobuf format and ingest the native histogram.
If the Prometheus server is started with --enable-feature=native-histograms and the scrape config has the option scrape_classic_histograms: true, it will request metrics in Prometheus protobuf format and ingest both the classic and the native flavor. This is great for migrating from classic histograms to native histograms.

See examples/example-native-histogram for an example.



Histogram duration = Histogram.builder()
    .name("http_request_duration_seconds")
    .help("HTTP request service time in seconds")
    .unit(Unit.SECONDS)
    .labelNames("method", "path", "status_code")
    .register();

long start = System.nanoTime();
// do something
duration.labelValues("GET", "/", "200").observe(Unit.nanosToSeconds(System.nanoTime() - start));

Histograms implement the TimerApi interface, which provides convenience methods for
measuring durations.
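
For example, instead of calling System.nanoTime() yourself as above, you can let the histogram measure the duration. This is a sketch assuming TimerApi exposes a startTimer() method returning a Timer with an observeDuration() method; check the TimerApi Javadoc for the exact API:

Timer timer = duration.labelValues("GET", "/", "200").startTimer();
try {
    // handle the request
} finally {
    timer.observeDuration(); // records the elapsed time in seconds
}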

The histogram builder provides a lot of configuration options for fine-tuning the histogram behavior. In most cases you don’t need them; the defaults are good. The following is an incomplete list showing the most important options:

nativeOnly() / classicOnly(): Create a histogram with one representation only.
classicBuckets(...): Set the classic bucket boundaries. Default buckets are .005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, and 10. The default bucket boundaries are designed for measuring request durations in seconds.
nativeMaxNumberOfBuckets(): Upper limit for the number of native histogram buckets. Default is 160. When the maximum is reached, the native histogram automatically reduces resolution to stay below the limit.

See Javadoc for Histogram.Builder for a complete list of options. Some options can be
configured at runtime, see config.

Histograms and summaries are both used for observing distributions. Therefore, they both implement the DistributionDataPoint interface. Using the DistributionDataPoint interface directly gives you the option to switch between histograms and summaries later with minimal code changes.

Example of using the DistributionDataPoint interface for a histogram without labels:

DistributionDataPoint eventDuration = Histogram.builder()
    .name("event_duration_seconds")
    .help("event duration in seconds")
    .unit(Unit.SECONDS)
    .register();

// The following still works perfectly fine if eventDuration
// is backed by a summary rather than a histogram.
eventDuration.observe(0.2);

Example of using the DistributionDataPoint interface for a histogram with labels:

Histogram eventDuration = Histogram.builder()
    .name("event_duration_seconds")
    .help("event duration in seconds")
    .labelNames("status")
    .unit(Unit.SECONDS)
    .register();

DistributionDataPoint successfulEvents = eventDuration.labelValues("ok");
DistributionDataPoint erroneousEvents = eventDuration.labelValues("error");

// Like in the example above, the following still works perfectly fine
// if successfulEvents and erroneousEvents are backed by a summary rather than a histogram.
successfulEvents.observe(0.7);
erroneousEvents.observe(0.2);

Summary

Like histograms, summaries are for observing distributions. Each summary data point has a count and a sum like a histogram data point. However, rather than histogram buckets, summaries maintain quantiles.

Summary requestLatency = Summary.builder()
    .name("request_latency_seconds")
    .help("Request latency in seconds.")
    .unit(Unit.SECONDS)
    .quantile(0.5, 0.01)
    .quantile(0.95, 0.005)
    .quantile(0.99, 0.005)
    .labelNames("status")
    .register();


requestLatency.labelValues("ok").observe(2.7);

The example above creates a summary with the 50th percentile (median), the 95th percentile, and the 99th percentile. Quantiles are optional; you can create a summary without quantiles if all you need is the count and the sum.

The terms “percentile” and “quantile” mean the same thing. We use percentile when
we express it as a number in [0, 100], and we use quantile when we express it as a
number in [0.0, 1.0].

The second parameter to quantile() is the maximum acceptable error. The call
.quantile(0.5, 0.01) means that the actual quantile is somewhere in [0.49, 0.51].
Higher precision means higher memory usage.

The 0.0 quantile (min value) and the 1.0 quantile (max value) are special cases because
you can get the precise values (error 0.0) with almost no memory overhead.

Quantile values are calculated based on a 5-minute moving time window. The default time window can be changed with maxAgeSeconds() and numberOfAgeBuckets().
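
For example (a sketch using the builder methods named above), the following shortens the moving time window to 2 minutes, rotated in 5 age buckets:

Summary requestLatency = Summary.builder()
    .name("request_latency_seconds")
    .help("Request latency in seconds.")
    .quantile(0.95, 0.005)
    .maxAgeSeconds(120)    // length of the moving time window
    .numberOfAgeBuckets(5) // the window is rotated in 5 steps
    .register();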

Some options can be configured at runtime, see config.

In general you should prefer histograms over summaries. The Prometheus query
language has a function histogram_quantile() for calculating quantiles from histograms.
The advantage of query-time quantile calculation is that you can aggregate histograms
before calculating the quantile. With summaries you must use the quantile with all its
labels as it is.

Info

Info metrics are used to expose textual information which should not change during
process lifetime. The value of an Info metric is always 1 .

Info info = Info.builder()
    .name("jvm_runtime_info")
    .help("JVM runtime info")
    .labelNames("version", "vendor", "runtime")
    .register();


String version = System.getProperty("java.runtime.version", "unknown");
String vendor = System.getProperty("java.vm.vendor", "unknown");
String runtime = System.getProperty("java.runtime.name", "unknown");

info.setLabelValues(version, vendor, runtime);

The info above looks as follows in OpenMetrics text format:

# TYPE jvm_runtime info
# HELP jvm_runtime JVM runtime info
jvm_runtime_info{runtime="OpenJDK Runtime Environment",vendor="Oracle Corpo

The example is taken from the prometheus-metrics-instrumentation-jvm module, so if you have JvmMetrics registered you should have a jvm_runtime_info metric out-of-the-box.

As defined in OpenMetrics, info metric names must have the _info suffix. If you create an info metric without the _info suffix, the suffix will be appended automatically.

StateSet

StateSet is a niche metric type in the OpenMetrics standard that is rarely used. The main use case is to signal which feature flags are enabled.

StateSet stateSet = StateSet.builder()
    .name("feature_flags")
    .help("Feature flags")
    .labelNames("env")
    .states("feature1", "feature2")
    .register();

stateSet.labelValues("dev").setFalse("feature1");
stateSet.labelValues("dev").setTrue("feature2");

The OpenMetrics text format looks like this:

# TYPE feature_flags stateset
# HELP feature_flags Feature flags
feature_flags{env="dev",feature_flags="feature1"} 0
feature_flags{env="dev",feature_flags="feature2"} 1

GaugeHistogram and Unknown

These types are defined in the OpenMetrics standard but not implemented in the
prometheus-metrics-core API. However, prometheus-metrics-model implements
the underlying data model for these types. To use these types, you need to implement
your own Collector where the collect() method returns an UnknownSnapshot or a
HistogramSnapshot with .gaugeHistogram(true) .
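
The following is a sketch of such a custom Collector exposing an unknown-typed metric. The UnknownSnapshot builder names are assumed by analogy with the CounterSnapshot and GaugeSnapshot builders used in the multi-target example below; check the prometheus-metrics-model Javadoc for the exact API:

import io.prometheus.metrics.model.registry.Collector;
import io.prometheus.metrics.model.registry.PrometheusRegistry;
import io.prometheus.metrics.model.snapshots.MetricSnapshot;
import io.prometheus.metrics.model.snapshots.UnknownSnapshot;

public class LegacyValueCollector implements Collector {

    @Override
    public MetricSnapshot collect() {
        // Build a fresh snapshot on every scrape.
        return UnknownSnapshot.builder()
                .name("legacy_value")
                .help("value imported from a legacy system")
                .dataPoint(UnknownSnapshot.UnknownDataPointSnapshot.builder()
                        .value(42.0)
                        .build())
                .build();
    }
}

// register the collector like any other metric
PrometheusRegistry.defaultRegistry.register(new LegacyValueCollector());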

Callbacks

The section on metric types showed how to use metrics that actively maintain their
state.

This section shows how to create callback-based metrics, i.e. metrics that invoke a
callback at scrape time to get the current values.

For example, let’s assume we have two instances of a Cache , a coldCache and a
hotCache . The following implements a callback-based cache_size_bytes metric:

Cache coldCache = new Cache();
Cache hotCache = new Cache();

GaugeWithCallback.builder()
.name("cache_size_bytes")
.help("Size of the cache in Bytes.")
.unit(Unit.BYTES)
.labelNames("state")
.callback(callback -> {
callback.call(coldCache.sizeInBytes(), "cold");
callback.call(hotCache.sizeInBytes(), "hot");
})
.register();


The resulting text format looks like this:

# TYPE cache_size_bytes gauge
# UNIT cache_size_bytes bytes
# HELP cache_size_bytes Size of the cache in Bytes.
cache_size_bytes{state="cold"} 78.0
cache_size_bytes{state="hot"} 83.0

Better examples of callback metrics can be found in the prometheus-metrics-instrumentation-jvm module.

The available callback metric types are:

GaugeWithCallback for gauges.
CounterWithCallback for counters.
SummaryWithCallback for summaries.
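
For example, a callback-based counter looks almost the same as the gauge above. This is a sketch: requestCount() is a hypothetical method on the Cache class from the example, and the CounterWithCallback builder is assumed to follow the same pattern as GaugeWithCallback:

CounterWithCallback.builder()
    .name("cache_requests_total")
    .help("Total number of cache requests.")
    .labelNames("state")
    .callback(callback -> {
        callback.call(coldCache.requestCount(), "cold");
        callback.call(hotCache.requestCount(), "hot");
    })
    .register();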

The API for gauges and counters is very similar. For summaries the callback has a few
more parameters, because it accepts a count, a sum, and quantiles:

SummaryWithCallback.builder()
    .name("example_callback_summary")
    .help("help message.")
    .labelNames("status")
    .callback(callback -> {
        callback.call(cache.getCount(), cache.getSum(), Quantiles.EMPTY, "ok");
    })
    .register();

Performance

This section has tips on how to use the Prometheus Java client in high performance
applications.

Specify Label Values Only Once


For high performance applications, we recommend specifying label values only once, and then using the data point directly.

This applies to all metric types. Let’s use a counter as an example here:

Counter requestCount = Counter.builder()
    .name("requests_total")
    .help("total number of requests")
    .labelNames("path", "status")
    .register();

You could increment the counter above like this:

requestCount.labelValues("/", "200").inc();

However, the line above does not only increment the counter; it also looks up the label values to find the right data point.

In high performance applications you can optimize this by looking up the data point only
once:

CounterDataPoint successfulCalls = requestCount.labelValues("/", "200");

Now, you can increment the data point directly, which is a highly optimized operation:

successfulCalls.inc();

Enable Only One Histogram Representation

By default, histograms maintain two representations under the hood: The classic
histogram representation with static buckets, and the native histogram representation
with dynamic buckets.

While this default provides the flexibility to scrape different representations at runtime,
it comes at a cost, because maintaining multiple representations causes performance
overhead.

In performance critical applications we recommend using either the classic representation or the native representation, but not both.

You can either configure this in code for each histogram by calling classicOnly() or nativeOnly(), or you can use the corresponding config options.
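
In code, this is a single call on the histogram builder (using the nativeOnly() / classicOnly() options listed in the histogram section):

Histogram duration = Histogram.builder()
    .name("http_request_duration_seconds")
    .help("HTTP request service time in seconds")
    .nativeOnly() // maintain only the native representation; use classicOnly() for the opposite
    .register();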

One way to do this is with system properties in the command line when you start your
application

java -Dio.prometheus.metrics.histogramClassicOnly=true my-app.jar

or

java -Dio.prometheus.metrics.histogramNativeOnly=true my-app.jar

If you don’t want to add a command line parameter every time you start your
application, you can add a prometheus.properties file to your classpath (put it in the
src/main/resources/ directory so that it gets packed into your JAR file). The
prometheus.properties file should have the following line:

io.prometheus.metrics.histogramClassicOnly=true

or

io.prometheus.metrics.histogramNativeOnly=true

Future releases will add more configuration options, like support for configuration via
environment variable IO_PROMETHEUS_METRICS_HISTOGRAM_NATIVE_ONLY=true .

Multi-Target Pattern

This is for the upcoming release 1.1.0.


To support the multi-target pattern, you can create a custom collector that overrides the dedicated internal method of ExtendedMultiCollector; see SampleExtendedMultiCollector in io.prometheus.metrics.examples.httpserver:

public class SampleExtendedMultiCollector extends ExtendedMultiCollector {

    public SampleExtendedMultiCollector() {
        super();
    }

    @Override
    protected MetricSnapshots collectMetricSnapshots(PrometheusScrapeRequest scrapeRequest) {

        GaugeSnapshot.Builder gaugeBuilder = GaugeSnapshot.builder();
        gaugeBuilder.name("x_load").help("process load");

        CounterSnapshot.Builder counterBuilder = CounterSnapshot.builder();
        counterBuilder.name(PrometheusNaming.sanitizeMetricName("x_calls_total"));

        String[] targetNames = scrapeRequest.getParameterValues("target");
        String targetName;
        String[] procs = scrapeRequest.getParameterValues("proc");
        if (targetNames == null || targetNames.length == 0) {
            targetName = "defaultTarget";
            procs = null; // ignore procs param
        } else {
            targetName = targetNames[0];
        }
        Builder counterDataPointBuilder = CounterSnapshot.CounterDataPointSnapshot.builder();
        io.prometheus.metrics.model.snapshots.GaugeSnapshot.GaugeDataPointSnapshot.Builder gaugeDataPointBuilder = GaugeSnapshot.GaugeDataPointSnapshot.builder();
        Labels lbls = Labels.of("target", targetName);

        if (procs == null || procs.length == 0) {
            counterDataPointBuilder.labels(lbls.merge(Labels.of("proc", "default")));
            gaugeDataPointBuilder.labels(lbls.merge(Labels.of("proc", "default")));
            counterDataPointBuilder.value(70);
            gaugeDataPointBuilder.value(Math.random());

            counterBuilder.dataPoint(counterDataPointBuilder.build());
            gaugeBuilder.dataPoint(gaugeDataPointBuilder.build());
        } else {
            for (int i = 0; i < procs.length; i++) {
                counterDataPointBuilder.labels(lbls.merge(Labels.of("proc", procs[i])));
                gaugeDataPointBuilder.labels(lbls.merge(Labels.of("proc", procs[i])));
                counterDataPointBuilder.value(Math.random());
                gaugeDataPointBuilder.value(Math.random());

                counterBuilder.dataPoint(counterDataPointBuilder.build());
                gaugeBuilder.dataPoint(gaugeDataPointBuilder.build());
            }
        }
        Collection<MetricSnapshot> snaps = new ArrayList<MetricSnapshot>();
        snaps.add(counterBuilder.build());
        snaps.add(gaugeBuilder.build());
        MetricSnapshots msnaps = new MetricSnapshots(snaps);
        return msnaps;
    }

    public List<String> getPrometheusNames() {
        List<String> names = new ArrayList<String>();
        names.add("x_calls_total");
        names.add("x_load");
        return names;
    }
}

PrometheusScrapeRequest provides methods to access HTTP-related information from the request originally received by the endpoint:

public interface PrometheusScrapeRequest {

    String getRequestURI();

    String[] getParameterValues(String name);
}

Sample Prometheus scrape_config

- job_name: "multi-target"

  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  params:
    proc: [proc1, proc2]
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: localhost:9401
  static_configs:
    - targets: ["target1", "target2"]

It’s up to the specific MultiCollector implementation how to interpret the target parameter. It might be an explicit real target (i.e. a host name or IP address) or an alias in some internal configuration. The latter is more suitable when the MultiCollector implementation is a proxy (see https://github.com/prometheus/snmp_exporter). In this case, invoking the real target might require extra parameters (e.g. credentials) that might be complex to manage in the Prometheus configuration (not considering the case where the proxy might become an “open relay”).

Exporters


Formats

All exporters support the following exposition formats:

OpenMetrics text format
Prometheus text format
Prometheus protobuf format

Moreover, gzip encoding is supported for each of these formats.

Scraping with a Prometheus server

The Prometheus server sends an Accept header to specify which format is requested. By default, the Prometheus server will scrape OpenMetrics text format with gzip encoding. If the Prometheus server is started with --enable-feature=native-histograms, it will scrape Prometheus protobuf format instead.

Viewing with a Web Browser

If you view the /metrics endpoint with your Web browser you will see Prometheus
text format. For quick debugging of the other formats, exporters provide a debug URL
parameter:

/metrics?debug=openmetrics : View OpenMetrics text format.
/metrics?debug=text : View Prometheus text format.
/metrics?debug=prometheus-protobuf : View a text representation of the Prometheus protobuf format.

Filter

All exporters support a name[] URL parameter for querying only specific metric names.
Examples:

/metrics?name[]=jvm_threads_current will query the metric named jvm_threads_current.
/metrics?name[]=jvm_threads_current&name[]=jvm_threads_daemon will query two metrics, jvm_threads_current and jvm_threads_daemon.


Add the following to the scrape job configuration in prometheus.yml to make the Prometheus server send the name[] parameter:

params:
  name[]:
    - jvm_threads_current
    - jvm_threads_daemon

HTTPServer

The HTTPServer is a standalone server for exposing a metric endpoint. A minimal example application for HTTPServer can be found in the examples directory.

HTTPServer server = HTTPServer.builder()
    .port(9400)
    .buildAndStart();

By default, HTTPServer binds to any IP address; you can change this with hostname() or inetAddress().
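
For example (a sketch using the hostname() option mentioned above), the following binds the server to the loopback interface only:

HTTPServer server = HTTPServer.builder()
    .hostname("localhost") // bind to the loopback interface instead of all addresses
    .port(9400)
    .buildAndStart();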

HTTPServer is configured with three endpoints:

/metrics for Prometheus scraping.
/-/healthy for simple health checks.
/ the default handler is a static HTML page.

The default handler can be changed with defaultHandler().

Authentication and HTTPS

authenticator() is for configuring authentication.
httpsConfigurator() is for configuring HTTPS.

You can find an example of authentication and SSL in the jmx_exporter.


Properties

See config section (todo) on runtime configuration options.

io.prometheus.exporter.httpServer.port : The port to bind to.

Servlet

The PrometheusMetricsServlet is a Jakarta Servlet for exposing a metric endpoint.

web.xml

The old-school way of configuring a servlet is in a web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="https://jakarta.ee/xml/ns/jakartaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="https://jakarta.ee/xml/ns/jakartaee https://jakarta.ee/xml/ns/jakartaee/web-app_5_0.xsd"
         version="5.0">
    <servlet>
        <servlet-name>prometheus-metrics</servlet-name>
        <servlet-class>io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>prometheus-metrics</servlet-name>
        <url-pattern>/metrics</url-pattern>
    </servlet-mapping>
</web-app>

Programmatic

Today, most Servlet applications use an embedded Servlet container and configure Servlets programmatically rather than via web.xml. The API for that depends on the Servlet container. The examples directory has an example of an embedded Tomcat container with the PrometheusMetricsServlet configured.

Spring

You can use the PrometheusMetricsServlet in Spring applications. See our Spring doc.

Pushgateway

The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their
metrics to Prometheus. Since these kinds of jobs may not exist long enough to be
scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then
exposes these metrics to Prometheus.

The PushGateway Java class allows you to push metrics to a Prometheus Pushgateway.

Example

Maven

<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-core</artifactId>
<version>1.3.0</version>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-exporter-pushgateway</artifactId>
<version>1.3.0</version>
</dependency>

public class ExampleBatchJob {

    private static PushGateway pushGateway = PushGateway.builder()
            .address("localhost:9091") // not needed as localhost:9091 is the default
            .job("example")
            .build();

    private static Gauge dataProcessedInBytes = Gauge.builder()
            .name("data_processed")
            .help("data processed in the last batch job run")
            .unit(Unit.BYTES)
            .register();

    public static void main(String[] args) throws Exception {
        try {
            long bytesProcessed = processData();
            dataProcessedInBytes.set(bytesProcessed);
        } finally {
            pushGateway.push();
        }
    }

    public static long processData() {
        // Imagine a batch job here that processes data
        // and returns the number of Bytes processed.
        return 42;
    }
}

Basic Auth

The PushGateway supports basic authentication.

PushGateway pushGateway = PushGateway.builder()
    .job("example")
    .basicAuth("my_user", "my_password")
    .build();

The PushGatewayTestApp in integration-tests/it-pushgateway has a complete example of this.

Bearer token

The PushGateway supports Bearer token authentication.

PushGateway pushGateway = PushGateway.builder()
    .job("example")
    .bearerToken("my_token")
    .build();

The PushGatewayTestApp in integration-tests/it-pushgateway has a complete example of this.

SSL

The PushGateway supports SSL.

PushGateway pushGateway = PushGateway.builder()
    .job("example")
    .scheme(Scheme.HTTPS)
    .build();

However, this requires that the JVM can validate the server certificate.

If you want to skip certificate verification, you need to provide your own HttpConnectionFactory. The PushGatewayTestApp in integration-tests/it-pushgateway has a complete example of this.

Configuration Properties

The PushGateway supports a couple of properties that can be configured at runtime. See config.

Spring

Alternative: Use Spring’s Built-in Metrics Library



Spring Boot has a built-in metric library named Micrometer, which supports Prometheus
exposition format and can be set up in three simple steps:

1. Add the org.springframework.boot:spring-boot-starter-actuator dependency.
2. Add io.micrometer:micrometer-registry-prometheus as a runtime dependency.
3. Enable the Prometheus endpoint by adding the line management.endpoints.web.exposure.include=prometheus to application.properties.

Note that Spring’s default Prometheus endpoint is /actuator/prometheus, not /metrics.

In most cases the built-in Spring metrics library will work for you and you don’t need the
Prometheus Java library in Spring applications.

Use the Prometheus Metrics Library in Spring

However, you may have your reasons why you want to use the Prometheus metrics
library in Spring anyway. Maybe you want full support for all Prometheus metric types,
or you want to use the new Prometheus native histograms.

The easiest way to use the Prometheus metrics library in Spring is to configure the
PrometheusMetricsServlet to expose metrics.

Dependencies:

prometheus-metrics-core: The core metrics library.
prometheus-metrics-exporter-servlet-jakarta: For providing the /metrics endpoint.
prometheus-metrics-instrumentation-jvm: Optional, provides the JVM metrics.

The following is the complete source code of a Spring Boot REST service using the
Prometheus metrics library:

import io.prometheus.metrics.core.metrics.Counter;
import io.prometheus.metrics.exporter.servlet.jakarta.PrometheusMetricsServlet;
import io.prometheus.metrics.instrumentation.jvm.JvmMetrics;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    private static final Counter requestCount = Counter.builder()
            .name("requests_total")
            .register();

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
        JvmMetrics.builder().register();
    }

    @GetMapping("/")
    public String sayHello() throws InterruptedException {
        requestCount.inc();
        return "Hello, World!\n";
    }

    @Bean
    public ServletRegistrationBean<PrometheusMetricsServlet> createPrometheusMetricsServlet() {
        return new ServletRegistrationBean<>(new PrometheusMetricsServlet(), "/metrics");
    }
}

The important part is the last three lines: they configure the PrometheusMetricsServlet to expose metrics on /metrics:

@Bean
public ServletRegistrationBean<PrometheusMetricsServlet> createPrometheusMetricsServlet() {
    return new ServletRegistrationBean<>(new PrometheusMetricsServlet(), "/metrics");
}

The example provides a Hello, world! endpoint on http://localhost:8080, and Prometheus metrics on http://localhost:8080/metrics.


JVM

The JVM instrumentation module provides a variety of out-of-the-box JVM and process
metrics. To use it, add the following dependency:

Maven

<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-instrumentation-jvm</artifactId>
<version>1.0.0</version>
</dependency>

Now, you can register the JVM metrics as follows:

JvmMetrics.builder().register();

The line above will initialize all JVM metrics and register them with the default registry.
If you want to register the metrics with a custom PrometheusRegistry , you can pass
the registry as parameter to the register() call.

The sections below describe the individual classes providing JVM metrics. If you don’t
want to register all JVM metrics, you can register each of these classes individually
rather than using JvmMetrics .
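
For example (a sketch assuming each class offers the same builder().register() pattern as JvmMetrics), registering only memory and thread metrics would look like this:

JvmMemoryMetrics.builder().register();
JvmThreadsMetrics.builder().register();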

JVM Buffer Pool Metrics

JVM buffer pool metrics are provided by the JvmBufferPoolMetrics class. The data is
coming from the BufferPoolMXBean. Example metrics:

# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
# TYPE jvm_buffer_pool_capacity_bytes gauge
jvm_buffer_pool_capacity_bytes{pool="direct"} 8192.0
jvm_buffer_pool_capacity_bytes{pool="mapped"} 0.0
# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_buffers gauge
jvm_buffer_pool_used_buffers{pool="direct"} 1.0
jvm_buffer_pool_used_buffers{pool="mapped"} 0.0
# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
# TYPE jvm_buffer_pool_used_bytes gauge
jvm_buffer_pool_used_bytes{pool="direct"} 8192.0
jvm_buffer_pool_used_bytes{pool="mapped"} 0.0

JVM Class Loading Metrics

JVM class loading metrics are provided by the JvmClassLoadingMetrics class. The data is
coming from the ClassLoadingMXBean. Example metrics:

# HELP jvm_classes_currently_loaded The number of classes that are currently loaded in the JVM
# TYPE jvm_classes_currently_loaded gauge
jvm_classes_currently_loaded 1109.0
# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
# TYPE jvm_classes_loaded_total counter
jvm_classes_loaded_total 1109.0
# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
# TYPE jvm_classes_unloaded_total counter
jvm_classes_unloaded_total 0.0

JVM Compilation Metrics

JVM compilation metrics are provided by the JvmCompilationMetrics class. The data is
coming from the CompilationMXBean. Example metrics:

# HELP jvm_compilation_time_seconds_total The total time in seconds taken for HotSpot JIT compilation
# TYPE jvm_compilation_time_seconds_total counter
jvm_compilation_time_seconds_total 0.152

JVM Garbage Collector Metrics


JVM garbage collector metrics are provided by the JvmGarbageCollectorMetrics class. The data is coming from the GarbageCollectorMXBean. Example metrics:

# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="PS MarkSweep"} 0
jvm_gc_collection_seconds_sum{gc="PS MarkSweep"} 0.0
jvm_gc_collection_seconds_count{gc="PS Scavenge"} 0
jvm_gc_collection_seconds_sum{gc="PS Scavenge"} 0.0

JVM Memory Metrics

JVM memory metrics are provided by the JvmMemoryMetrics class. The data is coming
from the MemoryMXBean and the MemoryPoolMXBean. Example metrics:

# HELP jvm_memory_committed_bytes Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_committed_bytes gauge
jvm_memory_committed_bytes{area="heap"} 4.98597888E8
jvm_memory_committed_bytes{area="nonheap"} 1.1993088E7
# HELP jvm_memory_init_bytes Initial bytes of a given JVM memory area.
# TYPE jvm_memory_init_bytes gauge
jvm_memory_init_bytes{area="heap"} 5.20093696E8
jvm_memory_init_bytes{area="nonheap"} 2555904.0
# HELP jvm_memory_max_bytes Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_max_bytes gauge
jvm_memory_max_bytes{area="heap"} 7.38983936E9
jvm_memory_max_bytes{area="nonheap"} -1.0
# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue.
# TYPE jvm_memory_objects_pending_finalization gauge
jvm_memory_objects_pending_finalization 0.0
# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_committed_bytes gauge
jvm_memory_pool_collection_committed_bytes{pool="PS Eden Space"} 1.30023424
jvm_memory_pool_collection_committed_bytes{pool="PS Old Gen"} 3.47078656E8
jvm_memory_pool_collection_committed_bytes{pool="PS Survivor Space"} 2.1495
# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_init_bytes gauge
jvm_memory_pool_collection_init_bytes{pool="PS Eden Space"} 1.30023424E8
jvm_memory_pool_collection_init_bytes{pool="PS Old Gen"} 3.47078656E8
jvm_memory_pool_collection_init_bytes{pool="PS Survivor Space"} 2.1495808E7
# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_max_bytes gauge
jvm_memory_pool_collection_max_bytes{pool="PS Eden Space"} 2.727870464E9
jvm_memory_pool_collection_max_bytes{pool="PS Old Gen"} 5.542248448E9
jvm_memory_pool_collection_max_bytes{pool="PS Survivor Space"} 2.1495808E7
# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool.
# TYPE jvm_memory_pool_collection_used_bytes gauge
jvm_memory_pool_collection_used_bytes{pool="PS Eden Space"} 0.0
jvm_memory_pool_collection_used_bytes{pool="PS Old Gen"} 1249696.0
jvm_memory_pool_collection_used_bytes{pool="PS Survivor Space"} 0.0
# HELP jvm_memory_pool_committed_bytes Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_committed_bytes gauge
jvm_memory_pool_committed_bytes{pool="Code Cache"} 4128768.0
jvm_memory_pool_committed_bytes{pool="Compressed Class Space"} 917504.0
jvm_memory_pool_committed_bytes{pool="Metaspace"} 6946816.0
jvm_memory_pool_committed_bytes{pool="PS Eden Space"} 1.30023424E8
jvm_memory_pool_committed_bytes{pool="PS Old Gen"} 3.47078656E8
jvm_memory_pool_committed_bytes{pool="PS Survivor Space"} 2.1495808E7
# HELP jvm_memory_pool_init_bytes Initial bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_init_bytes gauge
jvm_memory_pool_init_bytes{pool="Code Cache"} 2555904.0
jvm_memory_pool_init_bytes{pool="Compressed Class Space"} 0.0
jvm_memory_pool_init_bytes{pool="Metaspace"} 0.0
jvm_memory_pool_init_bytes{pool="PS Eden Space"} 1.30023424E8
jvm_memory_pool_init_bytes{pool="PS Old Gen"} 3.47078656E8
jvm_memory_pool_init_bytes{pool="PS Survivor Space"} 2.1495808E7
# HELP jvm_memory_pool_max_bytes Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_max_bytes gauge
jvm_memory_pool_max_bytes{pool="Code Cache"} 2.5165824E8
jvm_memory_pool_max_bytes{pool="Compressed Class Space"} 1.073741824E9
jvm_memory_pool_max_bytes{pool="Metaspace"} -1.0
jvm_memory_pool_max_bytes{pool="PS Eden Space"} 2.727870464E9
jvm_memory_pool_max_bytes{pool="PS Old Gen"} 5.542248448E9
jvm_memory_pool_max_bytes{pool="PS Survivor Space"} 2.1495808E7
# HELP jvm_memory_pool_used_bytes Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_used_bytes gauge
jvm_memory_pool_used_bytes{pool="Code Cache"} 4065472.0
jvm_memory_pool_used_bytes{pool="Compressed Class Space"} 766680.0
jvm_memory_pool_used_bytes{pool="Metaspace"} 6659432.0
jvm_memory_pool_used_bytes{pool="PS Eden Space"} 7801536.0
jvm_memory_pool_used_bytes{pool="PS Old Gen"} 1249696.0
jvm_memory_pool_used_bytes{pool="PS Survivor Space"} 0.0
# HELP jvm_memory_used_bytes Used bytes of a given JVM memory area.
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap"} 9051232.0


jvm_memory_used_bytes{area="nonheap"} 1.1490688E7

JVM Memory Pool Allocation Metrics

JVM memory pool allocation metrics are provided by the JvmMemoryPoolAllocationMetrics class. The data is obtained by adding a NotificationListener to the GarbageCollectorMXBean. Example metrics:

# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="Code Cache"} 4336448.0
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space"} 875016
jvm_memory_pool_allocated_bytes_total{pool="Metaspace"} 7480456.0
jvm_memory_pool_allocated_bytes_total{pool="PS Eden Space"} 1.79232824E8
jvm_memory_pool_allocated_bytes_total{pool="PS Old Gen"} 1428888.0
jvm_memory_pool_allocated_bytes_total{pool="PS Survivor Space"} 4115280.0

JVM Runtime Info Metric

The JVM runtime info metric is provided by the JvmRuntimeInfoMetric class. The data is
obtained via system properties and will not change throughout the lifetime of the
application. Example metric:

# TYPE jvm_runtime info
# HELP jvm_runtime JVM runtime info
jvm_runtime_info{runtime="OpenJDK Runtime Environment",vendor="Oracle Corpo

JVM Thread Metrics

JVM thread metrics are provided by the JvmThreadsMetrics class. The data is coming
from the ThreadMXBean. Example metrics:


# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 10.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 8.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 10.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 10.0
# HELP jvm_threads_state Current count of threads by state
# TYPE jvm_threads_state gauge
jvm_threads_state{state="BLOCKED"} 0.0
jvm_threads_state{state="NEW"} 0.0
jvm_threads_state{state="RUNNABLE"} 5.0
jvm_threads_state{state="TERMINATED"} 0.0
jvm_threads_state{state="TIMED_WAITING"} 2.0
jvm_threads_state{state="UNKNOWN"} 0.0
jvm_threads_state{state="WAITING"} 3.0

Process Metrics

Process metrics are provided by the ProcessMetrics class. The data is coming from the
OperatingSystemMXBean, the RuntimeMXBean, and from the /proc/self/status file
on Linux. The metrics with prefix process_ are not specific to Java, but should be
provided by every Prometheus client library, see Process Metrics in the Prometheus
writing client libraries documentation. Example metrics:

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1.63
# HELP process_max_fds Maximum number of open file descriptors.


# TYPE process_max_fds gauge
process_max_fds 524288.0
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 28.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.8577664E7
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.693829439767E9
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.2683624448E10

OTLP

The Prometheus Java client library allows you to push metrics to an OpenTelemetry
endpoint using the OTLP protocol.

To use this, you need to add prometheus-metrics-exporter-opentelemetry as a dependency:

Maven

<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-exporter-opentelemetry</artifactId>
<version>1.0.0</version>
</dependency>

Initialize the OpenTelemetryExporter in your Java code:



OpenTelemetryExporter.builder()
// optional: call configuration methods here
.buildAndStart();

By default, the OpenTelemetryExporter will push metrics every 60 seconds to localhost:4317 using the gRPC protocol. You can configure this in code using the OpenTelemetryExporter.Builder, or at runtime via io.prometheus.exporter.opentelemetry.* properties.
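
For example, pushing to a remote collector every 30 seconds could look like the sketch below; endpoint() and intervalSeconds() are assumed builder method names, check the OpenTelemetryExporter.Builder Javadoc for the exact API:

OpenTelemetryExporter.builder()
    .endpoint("http://otel-collector:4317") // assumed setter for the OTLP endpoint
    .intervalSeconds(30)                    // assumed setter for the push interval
    .buildAndStart();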

In addition to the Prometheus Java client configuration, the exporter also recognizes
standard OpenTelemetry configuration. For example, you can set the
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT environment variable to configure the
endpoint. The Javadoc for OpenTelemetryExporter.Builder shows which settings have
corresponding OTel configuration. The intended use case is that if you attach the
OpenTelemetry Java agent for tracing, and use the Prometheus Java client for metrics, it
is sufficient to configure the OTel agent because the Prometheus library will pick up the
same configuration.

The examples/example-exporter-opentelemetry folder has a Docker Compose setup with a complete end-to-end example, including a Java app, the OTel collector, and a Prometheus server.

Tracing

OpenTelemetry’s vision statement says that telemetry should be loosely coupled, allowing end users to pick and choose from the pieces they want without having to bring in the rest of the project, too. In that spirit, you might choose to instrument your Java application with the Prometheus Java client library for metrics, and attach the OpenTelemetry Java agent to get distributed tracing.

First, if you attach the OpenTelemetry Java agent you might want to turn off OTel’s built-
in metrics, because otherwise you get metrics from both the Prometheus Java client
library and the OpenTelemetry agent (technically it’s no problem to get both metrics, it’s
just not a common use case).


# This will tell the OpenTelemetry agent not to send metrics, just traces and logs
export OTEL_METRICS_EXPORTER=none

Now, start your application with the OpenTelemetry Java agent attached for traces and
logs.

java -javaagent:path/to/opentelemetry-javaagent.jar -jar myapp.jar

With the OpenTelemetry Java agent attached, the Prometheus client library will do a lot
of magic under the hood.

service.name and service.instance.id are used in OpenTelemetry to uniquely identify a service instance. The Prometheus client library will automatically use the same service.name and service.instance.id as the agent when pushing metrics in OpenTelemetry format. That way the monitoring backend will see that the metrics and the traces are coming from the same instance.
Exemplars are added automatically if a Prometheus metric is updated in the context of a distributed OpenTelemetry trace.
If a Span is used as an Exemplar, the Span is marked with the Span attribute exemplar="true". This can be used in OpenTelemetry's sampling policy to make sure Exemplars are always sampled.

Here’s more context on the exemplar="true" Span attribute: Many users of tracing
libraries don’t keep 100% of their trace data, because traces are very repetitive. It is very
common to sample only 10% of traces and discard 90%. However, this can be an issue
with Exemplars: In 90% of the cases Exemplars would point to a trace that has been
thrown away.

To solve this, the Prometheus Java client library annotates each Span that has been used
as an Exemplar with the exemplar="true" Span attribute.

The sampling policy in the OpenTelemetry collector can be configured to keep traces
with this attribute. There’s no risk that this results in a significant increase in trace data,
because new Exemplars are only selected every minRetentionPeriodSeconds seconds.

Here’s an example of how to configure OpenTelemetry’s tail sampling processor to sample all Spans marked with exemplar="true", and then discard 90% of the traces:


policies:
  [
    {
      name: keep-exemplars,
      type: string_attribute,
      string_attribute: { key: "exemplar", values: [ "true" ] }
    },
    {
      name: keep-10-percent,
      type: probabilistic,
      probabilistic: { sampling_percentage: 10 }
    },
  ]

The examples/example-exemplar-tail-sampling/ directory has a complete end-to-end example, with a distributed Java application with two services, an OpenTelemetry collector, Prometheus, Tempo as a trace database, and Grafana dashboards. Use docker-compose as described in the example's README to run the example and explore the results.

Names

OpenTelemetry naming conventions are different from Prometheus naming conventions. The mapping from OpenTelemetry metric names to Prometheus metric names is well defined in OpenTelemetry's Prometheus and OpenMetrics Compatibility spec, and the OpenTelemetryExporter implements that specification.

The goal is that if you set up a pipeline as illustrated below, you will see the same metric names in the Prometheus server as if you had exposed Prometheus metrics directly.


The main steps when converting OpenTelemetry metric names to Prometheus metric
names are:

Replace dots with underscores.
If the metric has a unit, append the unit to the metric name, like _seconds.
If the metric type has a suffix, append it, like _total for counters.

Dots in Metric and Label Names

OpenTelemetry defines not only a line protocol, but also semantic conventions, i.e.
standardized metric and label names. For example, OpenTelemetry’s Semantic
Conventions for HTTP Metrics say that if you instrument an HTTP server with
OpenTelemetry, you must have a histogram named http.server.duration .

Most names defined in semantic conventions use dots. In the Prometheus server, the
dot is an illegal character (this might change in future versions of the Prometheus
server).

The Prometheus Java client library allows dots, so that you can use metric names and
label names as defined in OpenTelemetry’s semantic conventions. The dots will
automatically be replaced with underscores if you expose metrics in Prometheus format,
but you will see the original names with dots if you push your metrics in OpenTelemetry
format.

That way, you can use OTel-compliant metric and label names today when
instrumenting your application with the Prometheus Java client, and you are prepared in
case your monitoring backend adds features in the future that require OTel-compliant
instrumentation.
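
As a small sketch (the dotted names below are made up for illustration, not taken from any semantic convention), a counter registered with dots is exposed with underscores in Prometheus format, while the OpenTelemetry exporter keeps the original dots:

import io.prometheus.metrics.core.metrics.Counter;

// A counter with dotted, OTel-style metric and label names.
Counter requests = Counter.builder()
        .name("my.app.requests")                  // dots are allowed by the Java client
        .help("example counter with dotted names")
        .labelNames("http.response.status.code")  // dotted label names work as well
        .register();

requests.labelValues("200").inc();

// The Prometheus text format (e.g. via HTTPServer) would show something like:
//   my_app_requests_total{http_response_status_code="200"} 1.0
// while the OpenTelemetry exporter pushes the original dotted names.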

Config

Location of the Properties File
Metrics Properties
Exemplar Properties
Exporter Properties
Exporter Filter Properties
Exporter HTTPServer Properties
Exporter OpenTelemetry Properties
Exporter PushGateway Properties

The Prometheus metrics library provides multiple options for overriding configuration at runtime:

Properties file
System properties

Future releases will add more options, like configuration via environment variables.

Example:

io.prometheus.exporter.httpServer.port = 9401

The property above changes the port for the HTTPServer exporter to 9401.

Properties file: Add the line above to the properties file.

System properties: Use the command line parameter -Dio.prometheus.exporter.httpServer.port=9401 when starting your application.

Location of the Properties File

The properties file is searched in the following locations:

/prometheus.properties in the classpath. This is for bundling a properties file with your application.
System property -Dprometheus.config=/path/to/prometheus.properties.
Environment variable PROMETHEUS_CONFIG=/path/to/prometheus.properties.
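
For example, the system property variant could be passed on the command line like this (the JAR name is only a placeholder):

java -Dprometheus.config=/path/to/prometheus.properties -jar my-application.jar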

Metrics Properties


| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.metrics.exemplarsEnabled | Counter.Builder.withExemplars() | (1) (2) |
| io.prometheus.metrics.histogramNativeOnly | Histogram.Builder.nativeOnly() | (2) |
| io.prometheus.metrics.histogramClassicOnly | Histogram.Builder.classicOnly() | (2) |
| io.prometheus.metrics.histogramClassicUpperBounds | Histogram.Builder.classicUpperBounds() | (3) |
| io.prometheus.metrics.histogramNativeInitialSchema | Histogram.Builder.nativeInitialSchema() | |
| io.prometheus.metrics.histogramNativeMinZeroThreshold | Histogram.Builder.nativeMinZeroThreshold() | |
| io.prometheus.metrics.histogramNativeMaxZeroThreshold | Histogram.Builder.nativeMaxZeroThreshold() | |
| io.prometheus.metrics.histogramNativeMaxNumberOfBuckets | Histogram.Builder.nativeMaxNumberOfBuckets() | |
| io.prometheus.metrics.histogramNativeResetDurationSeconds | Histogram.Builder.nativeResetDuration() | |
| io.prometheus.metrics.summaryQuantiles | Summary.Builder.quantile(double) | (4) |
| io.prometheus.metrics.summaryQuantileErrors | Summary.Builder.quantile(double, double) | (5) |
| io.prometheus.metrics.summaryMaxAgeSeconds | Summary.Builder.maxAgeSeconds() | |
| io.prometheus.metrics.summaryNumberOfAgeBuckets | Summary.Builder.numberOfAgeBuckets() | |

Notes

(1) withExemplars() and withoutExemplars() are available for all metric types, not just for counters.
(2) Boolean value. Format: property=true or property=false.
(3) Comma-separated list. Example: .005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10.
(4) Comma-separated list. Example: 0.5, 0.95, 0.99.
(5) Comma-separated list. If specified, the list must have the same length as io.prometheus.metrics.summaryQuantiles. Example: 0.01, 0.005, 0.005.

There’s one special feature about metric properties: You can set a property for one
specific metric only by specifying the metric name. Example: Let’s say you have a
histogram named latency_seconds .

io.prometheus.metrics.histogramClassicUpperBounds = 0.2, 0.4, 0.8, 1.0

The line above sets histogram buckets for all histograms. However:

io.prometheus.metrics.latency_seconds.histogramClassicUpperBounds = 0.2, 0.4, 0.8, 1.0

The line above sets histogram buckets only for the histogram named latency_seconds .

This works for all Metrics properties.
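
Putting a few of these together, a prometheus.properties file might look like the following sketch. The property names come from the table above; the metric name latency_seconds and all values are examples, not recommendations:

io.prometheus.metrics.exemplarsEnabled = true
io.prometheus.metrics.summaryQuantiles = 0.5, 0.95, 0.99
io.prometheus.metrics.histogramClassicUpperBounds = .005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10
io.prometheus.metrics.latency_seconds.histogramClassicUpperBounds = 0.2, 0.4, 0.8, 1.0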

Exemplar Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exemplars.minRetentionPeriodSeconds | ExemplarsProperties.getMinRetentionPeriodSeconds() | |
| io.prometheus.exemplars.maxRetentionPeriodSeconds | ExemplarsProperties.getMaxRetentionPeriodSeconds() | |
| io.prometheus.exemplars.sampleIntervalMilliseconds | ExemplarsProperties.getSampleIntervalMilliseconds() | |
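
As a sketch, exemplar sampling could be tuned in prometheus.properties like this; the values are arbitrary examples, not defaults or recommendations:

io.prometheus.exemplars.minRetentionPeriodSeconds = 60
io.prometheus.exemplars.maxRetentionPeriodSeconds = 120
io.prometheus.exemplars.sampleIntervalMilliseconds = 50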

Exporter Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exporter.includeCreatedTimestamps | ExporterProperties.getIncludeCreatedTimestamps() | (1) |
| io.prometheus.exporter.exemplarsOnAllMetricTypes | ExporterProperties.getExemplarsOnAllMetricTypes() | (1) |

(1) Boolean value, true or false. See the Javadoc for the default.

Exporter Filter Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exporter.filter.metricNameMustBeEqualTo | ExporterFilterProperties.getAllowedMetricNames() | (1) |
| io.prometheus.exporter.filter.metricNameMustNotBeEqualTo | ExporterFilterProperties.getExcludedMetricNames() | (2) |
| io.prometheus.exporter.filter.metricNameMustStartWith | ExporterFilterProperties.getAllowedMetricNamePrefixes() | (3) |
| io.prometheus.exporter.filter.metricNameMustNotStartWith | ExporterFilterProperties.getExcludedMetricNamePrefixes() | (4) |

(1) Comma-separated list of allowed metric names. Only these metrics will be exposed.
(2) Comma-separated list of excluded metric names. These metrics will not be exposed.
(3) Comma-separated list of prefixes. Only metrics starting with these prefixes will be exposed.
(4) Comma-separated list of prefixes. Metrics starting with these prefixes will not be exposed.
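
For example, a filter sketch that exposes only JVM metrics and metrics with an application-specific prefix (the prefixes are placeholders for your own naming scheme):

io.prometheus.exporter.filter.metricNameMustStartWith = jvm_, my_app_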

Exporter HTTPServer Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exporter.httpServer.port | HTTPServer.Builder.port() | |

Exporter OpenTelemetry Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exporter.opentelemetry.protocol | OpenTelemetryExporter.Builder.protocol() | (1) |
| io.prometheus.exporter.opentelemetry.endpoint | OpenTelemetryExporter.Builder.endpoint() | |
| io.prometheus.exporter.opentelemetry.headers | OpenTelemetryExporter.Builder.headers() | (2) |
| io.prometheus.exporter.opentelemetry.intervalSeconds | OpenTelemetryExporter.Builder.intervalSeconds() | |
| io.prometheus.exporter.opentelemetry.timeoutSeconds | OpenTelemetryExporter.Builder.timeoutSeconds() | |
| io.prometheus.exporter.opentelemetry.serviceName | OpenTelemetryExporter.Builder.serviceName() | |
| io.prometheus.exporter.opentelemetry.serviceNamespace | OpenTelemetryExporter.Builder.serviceNamespace() | |
| io.prometheus.exporter.opentelemetry.serviceInstanceId | OpenTelemetryExporter.Builder.serviceInstanceId() | |
| io.prometheus.exporter.opentelemetry.serviceVersion | OpenTelemetryExporter.Builder.serviceVersion() | |
| io.prometheus.exporter.opentelemetry.resourceAttributes | OpenTelemetryExporter.Builder.resourceAttributes() | (3) |

(1) Protocol can be grpc or http/protobuf.
(2) Format: key1=value1,key2=value2
(3) Format: key1=value1,key2=value2

Many of these attributes can alternatively be configured via OpenTelemetry environment variables, like OTEL_EXPORTER_OTLP_ENDPOINT. The Prometheus metrics library has support for OpenTelemetry environment variables. See Javadoc for details.
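
As a sketch, an OTLP push configuration in prometheus.properties could look like this; the endpoint and service name are placeholders:

io.prometheus.exporter.opentelemetry.protocol = grpc
io.prometheus.exporter.opentelemetry.endpoint = http://localhost:4317
io.prometheus.exporter.opentelemetry.intervalSeconds = 10
io.prometheus.exporter.opentelemetry.serviceName = my-java-service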

Exporter PushGateway Properties

| Name | Javadoc | Note |
|------|---------|------|
| io.prometheus.exporter.pushgateway.address | PushGateway.Builder.address() | |
| io.prometheus.exporter.pushgateway.scheme | PushGateway.Builder.scheme() | |
| io.prometheus.exporter.pushgateway.job | PushGateway.Builder.job() | |
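
These properties correspond to the PushGateway builder. Here is a minimal Java sketch, assuming the prometheus-metrics-exporter-pushgateway module; the address and job name are placeholders:

import io.prometheus.metrics.core.metrics.Gauge;
import io.prometheus.metrics.exporter.pushgateway.PushGateway;

public class BatchJob {

    public static void main(String[] args) throws Exception {

        Gauge duration = Gauge.builder()
                .name("my_batch_job_duration_seconds")
                .help("duration of my batch job in seconds")
                .register();

        long start = System.nanoTime();
        // ... run the batch job ...
        duration.set((System.nanoTime() - start) / 1e9);

        // Corresponds to io.prometheus.exporter.pushgateway.address and .job
        PushGateway pushGateway = PushGateway.builder()
                .address("localhost:9091") // placeholder
                .job("my_batch_job")       // placeholder
                .build();
        pushGateway.push();
    }
}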
