diff --git a/CLAUDE.md b/CLAUDE.md
index 492b11dc0..9c1c60571 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -69,19 +69,46 @@ Pre-built instrumentations: `prometheus-metrics-instrumentation-jvm`, `-caffeine
## Code Style
- **Formatter**: Google Java Format (enforced via Spotless)
-- **Line length**: 100 characters
+- **Line length**: 100 characters (enforced for ALL files including Markdown, Java, YAML, etc.)
- **Indentation**: 2 spaces
- **Static analysis**: Error Prone with NullAway (`io.prometheus.metrics` package)
- **Logger naming**: Logger fields must be named `logger` (not `log`, `LOG`, or `LOGGER`)
- **Assertions in tests**: Use static imports from AssertJ (`import static org.assertj.core.api.Assertions.assertThat`)
- **Empty catch blocks**: Use `ignored` as the exception variable name
+- **Markdown code blocks**: Always specify language (e.g., ` ```java`, ` ```bash`, ` ```text`)
## Linting and Validation
-- **IMPORTANT**: Always run `mise run build` after modifying Java files to ensure all lints, code formatting (Spotless), static analysis (Error Prone), and checkstyle checks pass
-- **IMPORTANT**: Always run `mise run lint:super-linter` after modifying non-Java files (YAML, Markdown, shell scripts, JSON, etc.)
-- Super-linter is configured to only show ERROR-level messages via `LOG_LEVEL=ERROR` in `.github/super-linter.env`
-- Local super-linter version is pinned to match CI (see `.mise/tasks/lint/super-linter.sh`)
+**CRITICAL**: These checks MUST be run before creating any commits. CI will fail if these checks fail.
+
+### Java Files
+
+- **ALWAYS** run `mise run build` after modifying Java files to ensure:
+ - Code formatting (Spotless with Google Java Format)
+ - Static analysis (Error Prone with NullAway)
+ - Checkstyle validation
+ - Build succeeds (tests are skipped; run `mise run test` or `mise run test-all` to execute tests)
+
+### Non-Java Files (Markdown, YAML, JSON, shell scripts, etc.)
+
+- **ALWAYS** run `mise run lint:super-linter` after modifying non-Java files
+- Super-linter will **auto-fix** many issues (formatting, trailing whitespace, etc.)
+- It only reports ERROR-level issues (configured via `LOG_LEVEL=ERROR` in `.github/super-linter.env`)
+- Common issues caught:
+ - Lines exceeding 100 characters in Markdown files
+ - Missing language tags in fenced code blocks
+ - Table formatting issues
+ - YAML/JSON syntax errors
+
+### Running Linters
+
+```bash
+# After modifying Java files (run BEFORE committing)
+mise run build
+
+# After modifying non-Java files (run BEFORE committing)
+mise run lint:super-linter
+```
## Testing
diff --git a/docs/content/getting-started/metric-types.md b/docs/content/getting-started/metric-types.md
index 46d53ece1..844d63a9c 100644
--- a/docs/content/getting-started/metric-types.md
+++ b/docs/content/getting-started/metric-types.md
@@ -121,6 +121,94 @@ for [Histogram.Builder](/client_java/api/io/prometheus/metrics/core/metrics/Hist
for a complete list of options. Some options can be configured at runtime,
see [config]({{< relref "../config/config.md" >}}).
+### Custom Bucket Boundaries
+
+The default bucket boundaries are designed for measuring request durations in seconds. For other
+use cases, you may want to define custom bucket boundaries. The histogram builder provides three
+methods for this:
+
+**1. Arbitrary Custom Boundaries**
+
+Use `classicUpperBounds(...)` to specify arbitrary bucket boundaries:
+
+```java
+Histogram responseSize = Histogram.builder()
+ .name("http_response_size_bytes")
+ .help("HTTP response size in bytes")
+ .classicUpperBounds(100, 1000, 10000, 100000, 1000000) // bytes
+ .register();
+```
+
+**2. Linear Boundaries**
+
+Use `classicLinearUpperBounds(start, width, count)` for equal-width buckets:
+
+```java
+Histogram queueSize = Histogram.builder()
+ .name("queue_size")
+ .help("Number of items in queue")
+ .classicLinearUpperBounds(10, 10, 10) // 10, 20, 30, ..., 100
+ .register();
+```
+
+**3. Exponential Boundaries**
+
+Use `classicExponentialUpperBounds(start, factor, count)` for exponential growth:
+
+```java
+Histogram dataSize = Histogram.builder()
+ .name("data_size_bytes")
+ .help("Data size in bytes")
+ .classicExponentialUpperBounds(100, 10, 5) // 100, 1k, 10k, 100k, 1M
+ .register();
+```
+
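If it helps to see the semantics spelled out, the boundary generation performed by the two
helper methods above can be sketched in plain Java. The class and method names below are
illustrative only, not the library's internals:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative re-implementation of the boundary generation described above;
// these helper names are not part of the client_java API.
public class BucketBounds {

  // Equal-width boundaries: start, start + width, ... (count values in total)
  static List<Double> linearUpperBounds(double start, double width, int count) {
    List<Double> bounds = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      bounds.add(start + i * width);
    }
    return bounds;
  }

  // Exponentially growing boundaries: start, start * factor, ... (count values in total)
  static List<Double> exponentialUpperBounds(double start, double factor, int count) {
    List<Double> bounds = new ArrayList<>();
    double bound = start;
    for (int i = 0; i < count; i++) {
      bounds.add(bound);
      bound *= factor;
    }
    return bounds;
  }

  public static void main(String[] args) {
    System.out.println(linearUpperBounds(10, 10, 10)); // 10, 20, ..., 100
    System.out.println(exponentialUpperBounds(100, 10, 5)); // 100, 1k, 10k, 100k, 1M
  }
}
```

Note that a `+Inf` bucket is always added implicitly, so you never need to specify it yourself.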
+### Native Histograms with Custom Buckets (NHCB)
+
+Prometheus supports a special mode called Native Histograms with Custom Buckets (NHCB) that uses
+schema -53. In this mode, custom bucket boundaries from classic histograms are preserved when
+converting to native histograms.
+
+The Java client library automatically supports NHCB:
+
+1. By default, histograms maintain both classic (with custom buckets) and native representations
+2. The classic representation with custom buckets is exposed to Prometheus
+3. Prometheus servers can convert these to NHCB upon ingestion when configured with the
+ `convert_classic_histograms_to_nhcb` scrape option
+
+Example:
+
+```java
+// This histogram will work seamlessly with NHCB
+Histogram apiLatency = Histogram.builder()
+ .name("api_request_duration_seconds")
+ .help("API request duration")
+ .classicUpperBounds(0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0) // custom boundaries
+ .register();
+```
+
+On the Prometheus side, configure the scrape job:
+
+```yaml
+scrape_configs:
+ - job_name: "my-app"
+ scrape_protocols: ["PrometheusProto"]
+ convert_classic_histograms_to_nhcb: true
+ static_configs:
+ - targets: ["localhost:9400"]
+```
+
+{{< hint type=note >}}
+NHCB is useful when:
+
+- You need precise bucket boundaries for your specific use case
+- You're migrating from classic histograms and want to preserve bucket boundaries
+- Exponential bucketing from standard native histograms isn't a good fit for your distribution
+ {{< /hint >}}
+
+See [examples/example-custom-buckets](https://github.com/prometheus/client_java/tree/main/examples/example-custom-buckets)
+for a complete example with Prometheus and Grafana.
+
Histograms and summaries are both used for observing distributions. Therefore, they both implement
the `DistributionDataPoint` interface. Using the `DistributionDataPoint` interface directly gives
you the option to switch between histograms and summaries later with minimal code changes.
diff --git a/examples/example-custom-buckets/README.md b/examples/example-custom-buckets/README.md
new file mode 100644
index 000000000..a7a6a8564
--- /dev/null
+++ b/examples/example-custom-buckets/README.md
@@ -0,0 +1,170 @@
+# Native Histograms with Custom Buckets (NHCB) Example
+
+This example demonstrates how to use native histograms with custom bucket boundaries (NHCB) in
+Prometheus Java client. It shows three different types of custom bucket configurations and how
+Prometheus converts them to native histograms with schema -53.
+
+## What are Native Histograms with Custom Buckets?
+
+Native Histograms with Custom Buckets (NHCB) is a Prometheus feature that combines the benefits of:
+
+- **Custom bucket boundaries**: Precisely defined buckets optimized for your specific use case
+- **Native histograms**: Efficient storage and querying capabilities of native histograms
+
+When you configure Prometheus with `convert_classic_histograms_to_nhcb: true`, it converts classic
+histograms with custom buckets into native histograms using schema -53, preserving the custom
+bucket boundaries.
+
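As a back-of-the-envelope illustration of the storage difference: a classic histogram is exposed
as one `_bucket` series per boundary (plus `+Inf`), plus `_sum` and `_count`, while after NHCB
conversion the whole histogram is stored as a single native-histogram series:

```java
public class SeriesCount {

  // Classic exposition: one _bucket series per boundary, one for +Inf, plus _sum and _count.
  static int classicSeries(int customBoundaries) {
    return (customBoundaries + 1) + 2;
  }

  public static void main(String[] args) {
    // The API latency example below uses 7 custom boundaries:
    System.out.println(classicSeries(7)); // 10 classic series vs. 1 NHCB series
  }
}
```
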
+## Example Metrics
+
+This example application generates three different histogram metrics demonstrating different
+bucket configuration strategies:
+
+### 1. API Latency - Arbitrary Custom Boundaries
+
+```java
+Histogram apiLatency = Histogram.builder()
+ .name("api_request_duration_seconds")
+ .classicUpperBounds(0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0)
+ .register();
+```
+
+**Use case**: Optimized for typical API response times in seconds.
+
+### 2. Queue Size - Linear Boundaries
+
+```java
+Histogram queueSize = Histogram.builder()
+ .name("message_queue_size")
+ .classicLinearUpperBounds(10, 10, 10) // 10, 20, 30, ..., 100
+ .register();
+```
+
+**Use case**: Equal-width buckets for monitoring queue depth or other discrete values.
+
+### 3. Response Size - Exponential Boundaries
+
+```java
+Histogram responseSize = Histogram.builder()
+ .name("http_response_size_bytes")
+ .classicExponentialUpperBounds(100, 10, 6) // 100, 1k, 10k, 100k, 1M, 10M
+ .register();
+```
+
+**Use case**: Data spanning multiple orders of magnitude (bytes, milliseconds, etc).
+
+## Build
+
+This example is built as part of the `client_java` project:
+
+```shell
+./mvnw package
+```
+
+This creates `./examples/example-custom-buckets/target/example-custom-buckets.jar`.
+
+## Run
+
+With the JAR file present, run:
+
+```shell
+cd ./examples/example-custom-buckets/
+docker-compose up
+```
+
+This starts three Docker containers:
+
+- **[http://localhost:9400/metrics](http://localhost:9400/metrics)** - Example application
+- **[http://localhost:9090](http://localhost:9090)** - Prometheus server (with NHCB enabled)
+- **[http://localhost:3000](http://localhost:3000)** - Grafana (user: _admin_, password: _admin_)
+
+You might need to replace `localhost` with `host.docker.internal` on macOS or Windows.
+
+## Verify NHCB Conversion
+
+### 1. Check Prometheus Configuration
+
+The Prometheus configuration enables NHCB conversion:
+
+```yaml
+scrape_configs:
+ - job_name: "custom-buckets-demo"
+ scrape_protocols: ["PrometheusProto"]
+ convert_classic_histograms_to_nhcb: true
+ scrape_classic_histograms: true
+```
+
+### 2. Verify in Prometheus
+
+Visit [http://localhost:9090](http://localhost:9090) and run queries:
+
+```promql
+# Check overall series count (NHCB stores one series per histogram, not one per bucket)
+prometheus_tsdb_head_series
+
+# Calculate quantiles from custom buckets
+histogram_quantile(0.95, rate(api_request_duration_seconds[1m]))
+
+# View raw histogram structure
+api_request_duration_seconds
+```
+
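For intuition, what `histogram_quantile` computes over classic custom buckets can be sketched in
plain Java: find the bucket that contains the target rank, then interpolate linearly within it.
This is a simplified illustration only; the real PromQL implementation additionally handles the
`+Inf` bucket, NaN values, and various edge cases:

```java
public class QuantileSketch {

  // upperBounds must be sorted ascending; cumulativeCounts[i] is the number of
  // observations <= upperBounds[i], with the last entry equal to the total count.
  static double quantile(double q, double[] upperBounds, double[] cumulativeCounts) {
    double total = cumulativeCounts[cumulativeCounts.length - 1];
    double rank = q * total;
    for (int i = 0; i < upperBounds.length; i++) {
      if (cumulativeCounts[i] >= rank) {
        double lower = (i == 0) ? 0 : upperBounds[i - 1];
        double prevCount = (i == 0) ? 0 : cumulativeCounts[i - 1];
        double bucketCount = cumulativeCounts[i] - prevCount;
        if (bucketCount == 0) {
          return lower;
        }
        // Linear interpolation within the bucket that contains the rank.
        return lower + (upperBounds[i] - lower) * (rank - prevCount) / bucketCount;
      }
    }
    return upperBounds[upperBounds.length - 1];
  }

  public static void main(String[] args) {
    // Hypothetical cumulative counts for the 7 custom boundaries used in this example.
    double[] bounds = {0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0};
    double[] counts = {10, 40, 70, 90, 95, 99, 100};
    System.out.println(quantile(0.95, bounds, counts)); // ~1.0 (p95 at a bucket boundary)
  }
}
```

This also shows why precise custom boundaries matter: the estimate is only as accurate as the
bucket that the requested rank falls into.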
+### 3. View in Grafana
+
+The Grafana dashboard at [http://localhost:3000](http://localhost:3000) shows:
+
+- p95 and p50 latencies for API endpoints (arbitrary custom buckets)
+- Queue size distribution (linear buckets)
+- Response size distribution (exponential buckets)
+
+## Key Observations
+
+1. **Custom Buckets Preserved**: The custom bucket boundaries you define are preserved when
+ converted to NHCB (schema -53).
+
+2. **Dual Representation**: By default, histograms maintain both classic and native
+ representations, allowing gradual migration.
+
+3. **Efficient Storage**: Native histograms provide more efficient storage than classic histograms
+ while preserving your custom bucket boundaries.
+
+4. **Flexible Bucket Strategies**: You can choose arbitrary, linear, or exponential buckets based
+ on your specific monitoring needs.
+
+## When to Use Custom Buckets
+
+Consider using custom buckets (and NHCB) when:
+
+- **Precise boundaries needed**: You know the expected distribution and want specific bucket edges
+- **Migrating from classic histograms**: You want to preserve existing bucket boundaries
+- **Specific use cases**: Default exponential bucketing doesn't fit your distribution well
+ - Temperature ranges (might include negative values)
+ - Queue depths (discrete values with linear growth)
+ - File sizes (exponential growth but with specific thresholds)
+ - API latencies (specific SLA boundaries)
+
+## Differences from Standard Native Histograms
+
+| Feature           | Standard Native Histograms       | NHCB (Schema -53)                 |
+| ----------------- | -------------------------------- | --------------------------------- |
+| Bucket boundaries | Exponential (base 2^(2^-schema)) | Custom boundaries                 |
+| Use case          | General-purpose                  | Specific distributions            |
+| Mergeability      | Can merge with same schema       | Cannot merge different boundaries |
+| Configuration     | Schema level (-4 to 8)           | Explicit boundary list            |
+
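The "base 2^(2^-schema)" row can be made concrete with a small sketch that computes standard
native histogram bucket boundaries for a given schema. This is a simplified illustration of the
spec's formula, ignoring the zero bucket and negative-value buckets:

```java
public class NativeBuckets {

  // Upper bound of bucket index i for schema s: base^i, where base = 2^(2^-s).
  // Simplified: ignores the zero bucket and negative-value buckets.
  static double upperBound(int schema, int index) {
    double base = Math.pow(2, Math.pow(2, -schema));
    return Math.pow(base, index);
  }

  public static void main(String[] args) {
    System.out.println(upperBound(0, 3)); // schema 0: base 2, bounds 2, 4, 8, ...
    System.out.println(upperBound(3, 8)); // schema 3: 8 buckets per power of two, ~2.0
  }
}
```

Higher schemas give finer resolution (more buckets per power of two), whereas NHCB (schema -53)
replaces this formula entirely with your explicit boundary list.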
+## Cleanup
+
+Stop the containers:
+
+```shell
+docker-compose down
+```
+
+## Further Reading
+
+- [Prometheus Native Histograms Specification](https://prometheus.io/docs/specs/native_histograms/)
+- [Prometheus Java Client Documentation](https://prometheus.github.io/client_java/)
+- [OpenTelemetry Exponential Histograms](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exponentialhistogram)
diff --git a/examples/example-custom-buckets/docker-compose.yaml b/examples/example-custom-buckets/docker-compose.yaml
new file mode 100644
index 000000000..7579faa3f
--- /dev/null
+++ b/examples/example-custom-buckets/docker-compose.yaml
@@ -0,0 +1,26 @@
+version: "3"
+services:
+ example-application:
+ image: eclipse-temurin:25.0.1_8-jre@sha256:9d1d3068b16f2c4127be238ca06439012ff14a8fdf38f8f62472160f9058464a
+ network_mode: host
+ volumes:
+ - ./target/example-custom-buckets.jar:/example-custom-buckets.jar
+ command:
+ - /opt/java/openjdk/bin/java
+ - -jar
+ - /example-custom-buckets.jar
+ prometheus:
+ image: prom/prometheus:v3.9.1@sha256:1f0f50f06acaceb0f5670d2c8a658a599affe7b0d8e78b898c1035653849a702
+ network_mode: host
+ volumes:
+ - ./docker-compose/prometheus.yml:/prometheus.yml
+ command:
+ - --enable-feature=native-histograms
+ - --config.file=/prometheus.yml
+ grafana:
+ image: grafana/grafana:12.3.2@sha256:ba93c9d192e58b23e064c7f501d453426ccf4a85065bf25b705ab1e98602bfb1
+ network_mode: host
+ volumes:
+ - ./docker-compose/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/grafana-datasources.yaml
+ - ./docker-compose/grafana-dashboards.yaml:/etc/grafana/provisioning/dashboards/grafana-dashboards.yaml
+ - ./docker-compose/grafana-dashboard-custom-buckets.json:/etc/grafana/grafana-dashboard-custom-buckets.json
diff --git a/examples/example-custom-buckets/docker-compose/grafana-dashboard-custom-buckets.json b/examples/example-custom-buckets/docker-compose/grafana-dashboard-custom-buckets.json
new file mode 100644
index 000000000..11ae25775
--- /dev/null
+++ b/examples/example-custom-buckets/docker-compose/grafana-dashboard-custom-buckets.json
@@ -0,0 +1,349 @@
+{
+ "annotations": {
+ "list": []
+ },
+ "editable": true,
+ "fiscalYearStartMonth": 0,
+ "graphTooltip": 0,
+ "id": null,
+ "links": [],
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "description": "API request duration with custom bucket boundaries (0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0 seconds). Shows how custom buckets are preserved in NHCB (schema -53).",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "barWidthFactor": 0.6,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "tooltip": false,
+ "viz": false,
+ "legend": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ }
+ ]
+ },
+ "unit": "s"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 24,
+ "x": 0,
+ "y": 0
+ },
+ "id": 1,
+ "options": {
+ "legend": {
+ "calcs": ["mean", "max"],
+ "displayMode": "table",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.95, rate(api_request_duration_seconds[1m]))",
+ "instant": false,
+ "legendFormat": "{{endpoint}} {{status}} (p95)",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.5, rate(api_request_duration_seconds[1m]))",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "{{endpoint}} {{status}} (p50)",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "API Latency - Custom Buckets (Arbitrary Boundaries)",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "description": "Queue size with linear bucket boundaries (10, 20, 30, ..., 100). Demonstrates equal-width buckets for discrete values.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "barWidthFactor": 0.6,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "tooltip": false,
+ "viz": false,
+ "legend": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ }
+ ]
+ },
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 0,
+ "y": 8
+ },
+ "id": 2,
+ "options": {
+ "legend": {
+ "calcs": ["mean", "max"],
+ "displayMode": "table",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.95, rate(message_queue_size[1m]))",
+ "instant": false,
+ "legendFormat": "{{queue_name}} (p95)",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.5, rate(message_queue_size[1m]))",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "{{queue_name}} (p50)",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Queue Size - Linear Buckets",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "description": "HTTP response size with exponential bucket boundaries (100, 1k, 10k, 100k, 1M, 10M bytes). Shows exponential growth for data spanning multiple orders of magnitude.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "barWidthFactor": 0.6,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "tooltip": false,
+ "viz": false,
+ "legend": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 8,
+ "w": 12,
+ "x": 12,
+ "y": 8
+ },
+ "id": 3,
+ "options": {
+ "legend": {
+ "calcs": ["mean", "max"],
+ "displayMode": "table",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.95, rate(http_response_size_bytes[1m]))",
+ "instant": false,
+ "legendFormat": "{{endpoint}} (p95)",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "prometheus"
+ },
+ "editorMode": "code",
+ "expr": "histogram_quantile(0.5, rate(http_response_size_bytes[1m]))",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "{{endpoint}} (p50)",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Response Size - Exponential Buckets",
+ "type": "timeseries"
+ }
+ ],
+ "refresh": "5s",
+ "schemaVersion": 39,
+ "tags": ["custom-buckets", "nhcb", "native-histogram"],
+ "templating": {
+ "list": []
+ },
+ "time": {
+ "from": "now-5m",
+ "to": "now"
+ },
+ "timepicker": {},
+ "timezone": "browser",
+ "title": "Native Histograms with Custom Buckets (NHCB)",
+ "uid": "custom-buckets-nhcb",
+ "version": 1,
+ "weekStart": ""
+}
diff --git a/examples/example-custom-buckets/docker-compose/grafana-dashboards.yaml b/examples/example-custom-buckets/docker-compose/grafana-dashboards.yaml
new file mode 100644
index 000000000..3225b88ae
--- /dev/null
+++ b/examples/example-custom-buckets/docker-compose/grafana-dashboards.yaml
@@ -0,0 +1,8 @@
+apiVersion: 1
+
+providers:
+ - name: "Custom Buckets (NHCB) Example"
+ type: file
+ options:
+ path: /etc/grafana/grafana-dashboard-custom-buckets.json
+ foldersFromFilesStructure: false
diff --git a/examples/example-custom-buckets/docker-compose/grafana-datasources.yaml b/examples/example-custom-buckets/docker-compose/grafana-datasources.yaml
new file mode 100644
index 000000000..d442d28d2
--- /dev/null
+++ b/examples/example-custom-buckets/docker-compose/grafana-datasources.yaml
@@ -0,0 +1,7 @@
+apiVersion: 1
+
+datasources:
+ - name: Prometheus
+ type: prometheus
+ uid: prometheus
+ url: http://localhost:9090
diff --git a/examples/example-custom-buckets/docker-compose/prometheus.yml b/examples/example-custom-buckets/docker-compose/prometheus.yml
new file mode 100644
index 000000000..5c5782023
--- /dev/null
+++ b/examples/example-custom-buckets/docker-compose/prometheus.yml
@@ -0,0 +1,14 @@
+---
+global:
+ scrape_interval: 5s # very short interval for demo purposes
+
+scrape_configs:
+ - job_name: "custom-buckets-demo"
+ # Use protobuf format to receive native histogram data
+ scrape_protocols: ["PrometheusProto"]
+ # Convert classic histograms with custom buckets to NHCB (schema -53)
+ convert_classic_histograms_to_nhcb: true
+ # Also scrape classic histograms for comparison
+ scrape_classic_histograms: true
+ static_configs:
+ - targets: ["localhost:9400"]
diff --git a/examples/example-custom-buckets/pom.xml b/examples/example-custom-buckets/pom.xml
new file mode 100644
index 000000000..b7e104e5a
--- /dev/null
+++ b/examples/example-custom-buckets/pom.xml
@@ -0,0 +1,62 @@
+/**
+ * This example shows three different types of custom bucket configurations:
+ *
+ * <p>These histograms maintain both classic (with custom buckets) and native representations. When
+ * Prometheus is configured with {@code convert_classic_histograms_to_nhcb: true}, the custom bucket
+ * boundaries are preserved in the native histogram format (schema -53).
+ */
+public class Main {
+
+ public static void main(String[] args) throws IOException, InterruptedException {
+
+ JvmMetrics.builder().register();
+
+ // Example 1: API latency with arbitrary custom boundaries
+ // Optimized for typical API response times in seconds
+ Histogram apiLatency =
+ Histogram.builder()
+ .name("api_request_duration_seconds")
+ .help("API request duration with custom buckets")
+ .unit(Unit.SECONDS)
+ .classicUpperBounds(0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0)
+ .labelNames("endpoint", "status")
+ .register();
+
+ // Example 2: Queue size with linear boundaries
+ // Equal-width buckets for monitoring queue depth
+ Histogram queueSize =
+ Histogram.builder()
+ .name("message_queue_size")
+ .help("Number of messages in queue with linear buckets")
+ .classicLinearUpperBounds(10, 10, 10) // 10, 20, 30, ..., 100
+ .labelNames("queue_name")
+ .register();
+
+ // Example 3: Response size with exponential boundaries
+ // Exponential growth for data spanning multiple orders of magnitude
+ Histogram responseSize =
+ Histogram.builder()
+ .name("http_response_size_bytes")
+ .help("HTTP response size in bytes with exponential buckets")
+ .classicExponentialUpperBounds(100, 10, 6) // 100, 1k, 10k, 100k, 1M, 10M
+ .labelNames("endpoint")
+ .register();
+
+ HTTPServer server = HTTPServer.builder().port(9400).buildAndStart();
+
+ System.out.println(
+ "HTTPServer listening on port http://localhost:" + server.getPort() + "/metrics");
+ System.out.println("\nGenerating metrics with custom bucket configurations:");
+ System.out.println("1. API latency: custom boundaries optimized for response times");
+ System.out.println("2. Queue size: linear boundaries (10, 20, 30, ..., 100)");
+ System.out.println("3. Response size: exponential boundaries (100, 1k, 10k, ..., 10M)");
+ System.out.println("\nPrometheus will convert these to NHCB (schema -53) when configured.\n");
+
+ Random random = new Random(0);
+
+ while (true) {
+ // Simulate API latency observations
+ // Fast endpoint: mostly < 100ms, occasionally slow
+ double fastLatency = Math.abs(random.nextGaussian() * 0.03 + 0.05);
+ String status = random.nextInt(100) < 95 ? "200" : "500";
+ apiLatency.labelValues("/api/fast", status).observe(fastLatency);
+
+ // Slow endpoint: typically 1-3 seconds
+ double slowLatency = Math.abs(random.nextGaussian() * 0.5 + 2.0);
+ apiLatency.labelValues("/api/slow", status).observe(slowLatency);
+
+ // Simulate queue size observations
+ // Queue oscillates between 20-80 items
+ int queueDepth = 50 + (int) (random.nextGaussian() * 15);
+ queueDepth = Math.max(0, Math.min(100, queueDepth));
+ queueSize.labelValues("default").observe(queueDepth);
+
+ // Priority queue: usually smaller
+ int priorityQueueDepth = 10 + (int) (random.nextGaussian() * 5);
+ priorityQueueDepth = Math.max(0, Math.min(50, priorityQueueDepth));
+ queueSize.labelValues("priority").observe(priorityQueueDepth);
+
+ // Simulate response size observations
+ // Small responses: mostly < 10KB
+ double smallResponse = Math.abs(random.nextGaussian() * 2000 + 5000);
+ responseSize.labelValues("/api/summary").observe(smallResponse);
+
+ // Large responses: can be up to several MB
+ double largeResponse = Math.abs(random.nextGaussian() * 200000 + 500000);
+ responseSize.labelValues("/api/download").observe(largeResponse);
+
+ Thread.sleep(1000);
+ }
+ }
+}
diff --git a/examples/pom.xml b/examples/pom.xml
index 5b93c068f..d0c364067 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -30,6 +30,7 @@
+/**
+ * According to the Prometheus specification
+ * (https://prometheus.io/docs/specs/native_histograms/), native histograms with custom buckets
+ * (schema -53) are exposed as classic histograms with custom bucket boundaries. Prometheus servers
+ * can then convert these to NHCB upon ingestion when configured with
+ * {@code convert_classic_histograms_to_nhcb}.
+ *
+ * <p>These tests verify that:
+ *
+ * <p>See issue #1838 for more context.
+ */
+class CustomBucketsHistogramTest {
+
+ @Test
+ void testCustomBucketsWithArbitraryBoundaries() {
+ // Create a histogram with arbitrary custom bucket boundaries
+ Histogram histogram =
+ Histogram.builder()
+ .name("http_request_duration_seconds")
+ .help("HTTP request duration with custom buckets")
+ .classicUpperBounds(0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0)
+ .build();
+
+ // Observe some values
+ histogram.observe(0.008);
+ histogram.observe(0.045);
+ histogram.observe(0.3);
+ histogram.observe(2.5);
+ histogram.observe(7.8);
+
+ HistogramSnapshot snapshot = histogram.collect();
+ HistogramSnapshot.HistogramDataPointSnapshot data = snapshot.getDataPoints().get(0);
+
+ // Verify custom bucket boundaries are set correctly
+ List