31 March 2022
by Andreas Grub | 1651 words | ~8 min read
cloud observability grafana tempo loki prometheus spring boot spring kubernetes exemplars tracing telemetry instrumentation
Cloud observability is a crucial component of any serious Kubernetes deployment. Metrics tell operations that something is running too slowly, and logs help developers debug tricky issues that only occur in production environments. However, have you ever looked at a 0.9-percentile request latency graph and wondered why it spiked at certain times? Of course, those spikes only happen on the busy production environment and are not reproducible by simpler means. You, as the poor soul tasked with debugging the issue, start combing the logs for interesting errors around that time span. Usually, that is cumbersome: there are so many logs and so few hints about where to look.
Here, exemplars may help: In short, they are shown together with metrics and tell you about example events contributing to that 0.9 percentile bucket. Those events (or requests) are uniquely identified with a trace id, which makes it quick and easy to correlate them with logs or look at the corresponding request trace across the whole cluster.
This post presents a demo in Minikube which instruments and observes a simple Spring Boot application using Grafana with Prometheus, Loki and Tempo. It focuses on how to use the new and heavily promoted exemplars feature, which enables a quick transition from metrics to traces and logs.
If you’re impatient, check out how to run the demo locally.
Being able to observe an application or workload within the cloud is crucial for a reliable operation and for debugging potential issues. To this end, observability usually consists of metrics, logs and traces, as detailed in the following:
Metrics are usually counters and gauges exposed by the application. They may indicate the application's health or enable looking at "utilization/saturation/rate/error/duration" (USE/RED) type information. Metrics are limited in cardinality and certainly do not allow per-request investigation.
Logs are written by the application and, depending on the log level, may contain very detailed human-readable information about the application. They may contain structured data, such as a trace id, to correlate them with particular requests made against the application.
Traces are collected during a single request, uniquely identified with a trace id, and are typically propagated across many applications communicating within a cluster. They show application internals such as database accesses and record the duration of each such operation as separate spans.
There is a large number of tools and SaaS providers that support aggregating the above-mentioned diagnosability data. Here, we propose the following Grafana-based stack for a Kubernetes cluster, which consists entirely of free software: Prometheus for metrics (including exemplar storage), Loki for logs shipped by Promtail, Tempo for traces, and Grafana as the common UI on top of all of them.
In order to enable observability for a specific workload, the workload needs to be instrumented. How the instrumentation works depends on the chosen programming language and frameworks the application is using. Here, we use the OpenTelemetry Java Agent to instrument a Spring Boot application written in Java.
This section highlights the accompanying demo for this blog post. Now is a good time to spin up the demo locally and then follow this guide interactively on your machine. See below for a thorough explanation of the design decisions made for the demo and the technical details taken care of.
Open the locally running Grafana and browse to the Spring Boot Demo dashboard. You’ll see the following panel:
It uses the default Spring Boot http_server_requests metric to display the average, 0.7-percentile and 0.9-percentile latency of all HTTP requests made against the application.
For demo purposes, a cron job runs inside the cluster every 5 minutes, executing a request against the /trigger-me endpoint of the Spring Boot application.
So it might take a while to actually see metrics if you’ve just spun up the demo.
Hovering over one of the “exemplar dots” shows:
Note how exemplars represent the bridge from aggregated metrics to request-based traces and subsequent detailed logs. Then, clicking the “View in Tempo” button takes you to the trace of the recorded exemplar:
For a more complex application, the trace would be much more feature-rich thanks to the automatic instrumentation by the OpenTelemetry agent: it would show database accesses as well as external calls to other applications or remote endpoints. The deployed demo app, however, is quite simple.
Next, we can have a look at the log output of the application pod by clicking the indicated button above:
The above Loki query is limited to the time range of the trace and focuses on the pod which produced the trace (or span). It would be possible to include the trace id as well to narrow down the shown logs even more, but that approach might miss logs not properly reported with the trace id.
Finally, whenever you encounter a log reporting a trace id, you can view the trace in Tempo:
Hopefully, this visual guide has given you a good impression of what the demo is capable of. In particular, the exemplar support should appear quite useful if you have ever been asked by operations why a metric triggered an alert and needed to figure out the root cause. Read on if you are interested in more technical details.
Please always refer to the full implementation of the demo. The next sections highlight some interesting code snippets only.
Injecting the OpenTelemetry agent and building the Spring Boot application is done using Docker:
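The demo's actual Dockerfile lives in the repository; a minimal sketch of the approach might look like the following (base images, build tool and agent version are assumptions, not the demo's exact choices):

```dockerfile
# Build stage: compile the Spring Boot application (Gradle assumed here)
FROM eclipse-temurin:17-jdk AS build
WORKDIR /workspace
COPY . .
RUN ./gradlew bootJar

# Runtime stage: bundle the OpenTelemetry Java agent and attach it at startup
FROM eclipse-temurin:17-jre
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v1.12.0/opentelemetry-javaagent.jar /otel/opentelemetry-javaagent.jar
COPY --from=build /workspace/build/libs/*.jar /app/app.jar
ENTRYPOINT ["java", "-javaagent:/otel/opentelemetry-javaagent.jar", "-jar", "/app/app.jar"]
```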
The OpenTelemetry agent then injects the trace_id into Logback’s Mapped Diagnostic Context (MDC), which can then be added to all log statements as part of the application.yaml:
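A sketch of how such a logging pattern could look; the exact pattern used by the demo may differ:

```yaml
# application.yaml: include the MDC keys injected by the OpenTelemetry agent in every log line
logging:
  pattern:
    level: "%5p [trace_id=%X{trace_id} span_id=%X{span_id}]"
```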
This enables correlation of log statements to a trace, which may involve many applications during one request.
Also, the Spring Boot Helm chart tells the OpenTelemetry agent to deliver traces to Tempo using the following environment variables (see deployment.yaml):
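Roughly, the environment section could look like this sketch; the Tempo service URL and the idea of passing the pod name as a resource attribute are assumptions about the demo setup:

```yaml
# deployment.yaml (sketch): point the OpenTelemetry agent at Tempo's OTLP endpoint
env:
  - name: KUBE_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: OTEL_TRACES_EXPORTER
    value: otlp
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://tempo.monitoring.svc:4317  # assumed Tempo service name
  - name: OTEL_METRICS_EXPORTER
    value: none
  - name: OTEL_RESOURCE_ATTRIBUTES
    # attach the pod name to every span so traces can be correlated with the pod's logs
    value: pod=$(KUBE_POD_NAME)
```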
The KUBE_POD_NAME variable is important for showing the logs of the pod that produced a given trace (or span).
Finally, getting the rather new exemplar support working with Spring Boot Actuator currently requires manually importing the 1.9.0-SNAPSHOT version of Micrometer and wiring up the PrometheusMeterRegistry with a DefaultExemplarSampler. This little hack will become obsolete once Micrometer 1.9.0 is released and this PR against Spring Boot is merged.
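The wiring could look roughly like the following sketch; apart from PrometheusMeterRegistry and DefaultExemplarSampler, the class and bean names are assumptions about how the demo does it:

```java
import io.micrometer.core.instrument.Clock;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exemplars.DefaultExemplarSampler;
import io.prometheus.client.exemplars.tracer.otel_agent.OpenTelemetryAgentSpanContextSupplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExemplarConfig {

    @Bean
    public PrometheusMeterRegistry prometheusMeterRegistry(CollectorRegistry collectorRegistry) {
        // The sampler reads the trace id of the currently active span (provided by the
        // OpenTelemetry Java agent) and attaches it as an exemplar to histogram buckets.
        DefaultExemplarSampler exemplarSampler =
                new DefaultExemplarSampler(new OpenTelemetryAgentSpanContextSupplier());
        return new PrometheusMeterRegistry(
                PrometheusConfig.DEFAULT, collectorRegistry, Clock.SYSTEM, exemplarSampler);
    }
}
```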
Note that exemplars for timer metrics, such as http_server_requests, are only reported for histogram bucket counters, from which the quantiles are calculated. That means you need to explicitly enable distribution statistics within your application.yaml as follows:
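A minimal sketch of that configuration:

```yaml
# application.yaml: publish histogram buckets for the http_server_requests timer so that
# exemplars are attached to the bucket counters and quantiles can be computed in Prometheus
management:
  metrics:
    distribution:
      percentiles-histogram:
        http.server.requests: true
```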
The Prometheus deployment needs to explicitly enable the feature flag for exemplar support. This is done in the kube-prometheus-stack Helm chart using the following values.yaml config part:
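Roughly like this, using the chart's prometheusSpec passthrough and Prometheus' exemplar-storage feature flag:

```yaml
# values.yaml (kube-prometheus-stack): turn on Prometheus' exemplar storage
prometheus:
  prometheusSpec:
    enableFeatures:
      - exemplar-storage
```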
In order to make Grafana show the various buttons to automatically go from logs to traces, from traces to logs and also from exemplars to traces, the data sources need to be configured as follows. See also additionalDataSources in the values.yaml of the kube-prometheus-stack Helm chart.
Tempo data source
See documentation.
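A sketch of such an entry under grafana.additionalDataSources; the service URL, UIDs and the pod tag mapping are assumptions about the demo, while the jsonData keys are Grafana's tracesToLogs options:

```yaml
grafana:
  additionalDataSources:
    - name: Tempo
      type: tempo
      uid: tempo
      access: proxy
      url: http://tempo.monitoring.svc:3100  # assumed Tempo service
      jsonData:
        tracesToLogs:
          datasourceUid: loki
          # build the Loki query from the span's pod tag (see KUBE_POD_NAME above)
          tags:
            - pod
          # widen the queried time range a bit around the span
          spanStartTimeShift: "-5m"
          spanEndTimeShift: "5m"
```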
You might consider adding one or both of the following lines here:
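Presumably these are the tracesToLogs filter options, which restrict the Loki query to log lines carrying the matching ids (with the caveat about missing trace ids mentioned earlier):

```yaml
          # inside the tracesToLogs block shown above
          filterByTraceID: true
          filterBySpanID: true
```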
Loki data source
See documentation.
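A sketch following Grafana's documented derived-fields provisioning; the matcher regex assumes the trace_id=... log pattern shown earlier, and the service URL is an assumption:

```yaml
# a further entry under grafana.additionalDataSources
- name: Loki
  type: loki
  uid: loki
  access: proxy
  url: http://loki.monitoring.svc:3100  # assumed Loki service
  jsonData:
    derivedFields:
      # turn a trace_id found in a log line into a link to the Tempo data source
      - name: TraceID
        matcherRegex: "trace_id=(\\w+)"
        url: "$${__value.raw}"  # $$ escapes Grafana's env-var interpolation in provisioning files
        datasourceUid: tempo
```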
Prometheus data source
See documentation.
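A sketch of the relevant data source entry; whether it replaces or complements the chart's default Prometheus data source depends on the demo's setup, and the service URL is an assumption. The trace_id name matches the exemplar label written by the Prometheus client library:

```yaml
# a further entry under grafana.additionalDataSources
- name: Prometheus
  type: prometheus
  uid: prometheus
  access: proxy
  url: http://prometheus-operated.monitoring.svc:9090  # assumed Prometheus service
  jsonData:
    exemplarTraceIdDestinations:
      # link the trace_id exemplar label to the Tempo data source
      - name: trace_id
        datasourceUid: tempo
        urlDisplayLabel: View in Tempo
```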
For Prometheus, the urlDisplayLabel wasn’t really documented in Grafana, and omitting it makes the “View in Tempo” button for exemplars disappear. It can be found in the source, though.
Currently, the demo just deploys some carefully configured Helm charts into Minikube.
This could be improved by developing a cloud-observability-stack Helm chart combining Loki, Tempo, Prometheus, Promtail and Grafana.
Furthermore, one could build a Kubernetes operator that injects Java instrumentation into running workloads during startup based on an opt-in Kubernetes annotation. Such a solution exists in the form of the OpenTelemetry operator. Unfortunately, this operator is less well integrated with Grafana, as it uses the OpenTelemetry collector and requires deploying the Cert Manager Operator, which causes additional operational overhead.
Feel free to raise issues in the demo project or contact the author via mail if you have further ideas or suggestions for improvement.
This blog post was sponsored by my current employer, QAware. If you enjoy working on challenging topics such as engineering proper cloud observability, then join us!