Chapter 12. Monitoring your application
This section contains information about monitoring your Thorntail-based application running on OpenShift.
12.1. Accessing JVM metrics for your application on OpenShift
12.1.1. Accessing JVM metrics using Jolokia on OpenShift
Jolokia is a built-in lightweight solution for accessing JMX (Java Management Extensions) metrics over HTTP on OpenShift. Jolokia allows you to access CPU, storage, and memory usage data collected by JMX over an HTTP bridge. It uses a REST interface and JSON-formatted message payloads. It is suitable for monitoring cloud applications thanks to its comparatively high speed and low resource requirements.
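For illustration, a raw Jolokia read request for JVM heap usage has the following shape. This is only a sketch: the pod address, port, and authentication depend on how the image configures the Jolokia agent, and on OpenShift the agent is usually reached through the Web console or the API proxy rather than directly.
$ curl http://MY_APP_POD_IP:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
{"request":{"mbean":"java.lang:type=Memory","attribute":"HeapMemoryUsage","type":"read"},"value":{"init":...,"used":...,"committed":...,"max":...},"status":200}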
For Java-based applications, the OpenShift Web console provides the integrated hawt.io console that collects and displays all relevant metrics output by the JVM running your application.
Prerequisites
- the oc client authenticated
- a Java-based application container running in a project on OpenShift
- the latest JDK 1.8.0 image
Procedure
List the deployment configurations of the pods inside your project and select the one that corresponds to your application.
oc get dc
NAME          REVISION   DESIRED   CURRENT   TRIGGERED BY
MY_APP_NAME   2          1         1         config,image(my-app:6)
...
Open the YAML deployment template of the pod running your application for editing.
oc edit dc/MY_APP_NAME
Add the following entry to the ports section of the template and save your changes:
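A minimal sketch of such an entry, assuming the conventional Jolokia port name and protocol used by the OpenShift Java images:
- containerPort: 8778
  name: jolokia
  protocol: TCP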
Redeploy the pod running your application.
oc rollout latest dc/MY_APP_NAME
The pod is redeployed with the updated deployment configuration and exposes port 8778.
- Log in to the OpenShift Web console.
- In the sidebar, navigate to Applications > Pods, and click on the name of the pod running your application.
- In the pod details screen, click Open Java Console to access the hawt.io console.
12.2. Application metrics
Thorntail provides ways to expose application metrics so that you can track performance and service availability.
12.2.1. What are metrics
In a microservices architecture, where multiple services are invoked to serve a single user request, diagnosing performance issues or reacting to service outages can be difficult. To make solving problems easier, applications must expose machine-readable data about their behavior, such as:
- How many requests are currently being processed.
- How many connections to the database are currently in use.
- How long service invocations take.
These kinds of data are referred to as metrics. Collecting metrics, visualizing them, setting alerts on them, and discovering trends in them are all important for keeping a service healthy.
Thorntail provides a fraction for Eclipse MicroProfile Metrics, an easy-to-use API for exposing metrics. Among other formats, it supports exporting data in the native format of Prometheus, a popular monitoring solution. Inside the application, you need nothing except this fraction; outside the application, Prometheus typically runs and periodically collects the exposed data.
Additional resources
- The MicroProfile Metrics GitHub page.
- The Prometheus homepage.
- A popular solution to visualize metrics stored in Prometheus is Grafana. For more information, see the Grafana homepage.
12.2.2. Exposing application metrics
In this example, you:
- Configure your application to expose metrics.
- Collect and view the data using Prometheus.
Note that Prometheus actively connects to a monitored application to collect data; the application does not actively send metrics to a server.
Prerequisites
Prometheus configured to collect metrics from the application:
Download and extract the archive with the latest Prometheus release:
$ wget https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz
$ tar -xvf prometheus-2.4.3.linux-amd64.tar.gz
Navigate to the directory with Prometheus:
$ cd prometheus-2.4.3.linux-amd64
Append the following snippet to the scrape_configs section of the prometheus.yml file to make Prometheus automatically collect metrics from your application:
- job_name: 'thorntail'
  static_configs:
  - targets: ['localhost:8080']
The default behavior of Thorntail-based applications is to expose metrics at the /metrics endpoint. This is what the MicroProfile Metrics specification requires, and also what Prometheus expects.
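For illustration, scraping the endpoint of a locally running application returns Prometheus text format similar to the following. This is a sketch; the exact metric names and values vary with the application and the MicroProfile Metrics version.
$ curl http://localhost:8080/metrics
# TYPE base:memory_used_heap_bytes gauge
base:memory_used_heap_bytes 3.4567891E7
...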
The Prometheus server started on localhost:
Start Prometheus and wait until the Server is ready to receive web requests message is displayed in the console.
$ ./prometheus
Procedure
Include the microprofile-metrics fraction in the pom.xml file in your application:
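A minimal sketch of the dependency entry, assuming the io.thorntail group ID used by Thorntail 2.x fractions:
<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>microprofile-metrics</artifactId>
  <!-- the version is typically managed by the Thorntail BOM -->
</dependency>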
Annotate methods or classes with the metrics annotations, for example:
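A minimal sketch of such an annotated JAX-RS resource, assuming MicroProfile Metrics 1.x annotations as used by Thorntail and the hello-count and hello-time metric names that the Prometheus queries below rely on; the class and package names are hypothetical:
package com.example.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    // Counts invocations of this method; "absolute" omits the class-name prefix,
    // and "monotonic" (MicroProfile Metrics 1.x) makes the counter only increase.
    @Counted(name = "hello-count", absolute = true, monotonic = true)
    // Records how long each invocation takes.
    @Timed(name = "hello-time", absolute = true)
    public String hello() {
        return "Hello from counted and timed endpoint";
    }
}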
Here, the @Counted annotation is used to keep track of how many times this method was invoked. The @Timed annotation is used to keep track of how long the invocations took. In this example, a JAX-RS resource method was annotated directly, but you can annotate any CDI bean in your application as well.
Launch your application:
$ mvn thorntail:run
Invoke the annotated endpoint several times:
$ curl http://localhost:8080/
Hello from counted and timed endpoint
Wait at least 15 seconds for the collection to happen, and then view the metrics in the Prometheus UI:
- Open the Prometheus UI at http://localhost:9090/ and type hello into the Expression box.
- From the suggestions, select for example application:hello_count and click Execute.
- In the table that is displayed, you can see how many times the resource method was invoked.
- Alternatively, select application:hello_time_mean_seconds to see the mean time of all the invocations.
Note that all metrics you created are prefixed with application:. There are other metrics, automatically exposed by Thorntail as the MicroProfile Metrics specification requires. Those metrics are prefixed with base: and vendor: and expose information about the JVM in which the application runs.
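As a further sketch, once the counter is in Prometheus you can derive trends from it with a PromQL expression, for example the per-second request rate over the last minute of the application:hello_count counter mentioned above:
rate(application:hello_count[1m])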
Additional resources
- For additional types of metrics, see the Eclipse MicroProfile Metrics documentation.