Chapter 12. Monitoring your application
This section contains information about monitoring your Thorntail-based application running on OpenShift.
12.1. Accessing JVM metrics for your application on OpenShift
12.1.1. Accessing JVM metrics using Jolokia on OpenShift
Jolokia is a built-in lightweight solution for accessing JMX (Java Management Extensions) metrics over HTTP on OpenShift. Jolokia allows you to access CPU, storage, and memory usage data collected by JMX over an HTTP bridge. Jolokia uses a REST interface and JSON-formatted message payloads. It is suitable for monitoring cloud applications thanks to its comparatively high speed and low resource requirements.
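For example, a running Jolokia agent can be asked for a single MBean attribute with one HTTP request. The following command is a sketch that assumes you have a shell inside the pod and that the agent accepts plain HTTP on its default port 8778; on OpenShift the agent is more commonly reached through the web console or the cluster API proxy, as described in the procedure below:

curl http://localhost:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage

The response is a JSON document containing the current heap usage values reported by the java.lang:type=Memory MBean.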
For Java-based applications, the OpenShift Web console provides the integrated hawt.io console that collects and displays all relevant metrics output by the JVM running your application.
Prerequisites
- the oc client authenticated
- a Java-based application container running in a project on OpenShift
- latest JDK 1.8.0 image
Procedure
List the deployment configurations of the pods inside your project and select the one that corresponds to your application.
oc get dc

NAME          REVISION   DESIRED   CURRENT   TRIGGERED BY
MY_APP_NAME   2          1         1         config,image(my-app:6)
...

Open the YAML deployment template of the pod running your application for editing.
oc edit dc/MY_APP_NAME

Add the following entry to the ports section of the template and save your changes:
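The entry below is a minimal sketch; the name and protocol values are illustrative, but the containerPort must be 8778 so that it matches the Jolokia port referenced in the next step:

  - containerPort: 8778
    name: jolokia
    protocol: TCP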
Redeploy the pod running your application.
oc rollout latest dc/MY_APP_NAME

The pod is redeployed with the updated deployment configuration and exposes port 8778.

- Log into the OpenShift Web console.
- In the sidebar, navigate to Applications > Pods, and click on the name of the pod running your application.
- In the pod details screen, click Open Java Console to access the hawt.io console.
Additional resources
12.2. Application metrics
Thorntail provides ways of exposing application metrics in order to track performance and service availability.
12.2.1. What are metrics
In a microservices architecture, where multiple services are invoked to serve a single user request, diagnosing performance issues or reacting to service outages can be hard. To make solving problems easier, applications must expose machine-readable data about their behavior, such as:
- How many requests are currently being processed.
- How many connections to the database are currently in use.
- How long service invocations take.
These kinds of data are referred to as metrics. Collecting metrics, visualizing them, setting alerts, discovering trends, and so on are all important for keeping a service healthy.
Thorntail provides a fraction for Eclipse MicroProfile Metrics, an easy-to-use API for exposing metrics. Among other formats, it supports exporting data in the native format of Prometheus, a popular monitoring solution. Inside the application, you need nothing except this fraction; outside of the application, Prometheus typically runs and periodically collects the exposed data.
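The exported data is plain text in the Prometheus exposition format. The excerpt below is only an illustration of what such output can look like; the exact metric names and values depend on the JVM and on the MicroProfile Metrics version in use:

# TYPE base:memory_used_heap_bytes gauge
base:memory_used_heap_bytes 4.9283072E7
# TYPE base:cpu_available_processors gauge
base:cpu_available_processors 8.0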
Additional resources
- The MicroProfile Metrics GitHub page.
- The Prometheus homepage.
- A popular solution to visualize metrics stored in Prometheus is Grafana. For more information, see the Grafana homepage.
12.2.2. Exposing application metrics
In this example, you:
- Configure your application to expose metrics.
- Collect and view the data using Prometheus.
Note that Prometheus actively connects to a monitored application to collect data; the application does not actively send metrics to a server.
Prerequisites
- Prometheus configured to collect metrics from the application:

Download and extract the archive with the latest Prometheus release:

$ wget https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz
$ tar -xvf prometheus-2.4.3.linux-amd64.tar.gz

Navigate to the directory with Prometheus:
$ cd prometheus-2.4.3.linux-amd64

Append the following snippet to the prometheus.yml file to make Prometheus automatically collect metrics from your application:

  - job_name: 'thorntail'
    static_configs:
    - targets: ['localhost:8080']

The default behavior of Thorntail-based applications is to expose metrics at the /metrics endpoint. This is what the MicroProfile Metrics specification requires, and also what Prometheus expects.
- The Prometheus server started on localhost:

Start Prometheus and wait until the "Server is ready to receive web requests" message is displayed in the console.

$ ./prometheus
Procedure
Include the microprofile-metrics fraction in the pom.xml file in your application:
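The coordinates below are a sketch based on the standard Thorntail fraction naming; the version is typically managed by the Thorntail BOM, so no explicit version element is shown here:

pom.xml

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>microprofile-metrics</artifactId>
</dependency>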
Annotate methods or classes with the metrics annotations, for example:
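The resource class below is a minimal sketch consistent with the rest of this example; the class name and annotation attributes are illustrative, and the annotations follow the MicroProfile Metrics 1.x API used by Thorntail:

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/")
@ApplicationScoped
public class HelloResource {

    @GET
    // Counts how many times this endpoint is invoked; with these names the
    // counter appears as application:hello_count in the Prometheus export.
    @Counted(name = "hello-count", absolute = true, monotonic = true)
    // Records how long each invocation takes; the mean appears as
    // application:hello_time_mean_seconds in the Prometheus export.
    @Timed(name = "hello-time", absolute = true)
    public String get() {
        return "Hello from counted and timed endpoint";
    }
}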
Here, the @Counted annotation is used to keep track of how many times this method was invoked. The @Timed annotation is used to keep track of how long the invocations took. In this example, a JAX-RS resource method was annotated directly, but you can annotate any CDI bean in your application as well.
Launch your application:
$ mvn thorntail:run

Invoke the traced endpoint several times:

$ curl http://localhost:8080/
Hello from counted and timed endpoint

Wait at least 15 seconds for the collection to happen, and see the metrics in the Prometheus UI:
- Open the Prometheus UI at http://localhost:9090/ and type hello into the Expression box.
- From the suggestions, select for example application:hello_count and click Execute.
- In the table that is displayed, you can see how many times the resource method was invoked.
- Alternatively, select application:hello_time_mean_seconds to see the mean time of all the invocations.

Note that all metrics you created are prefixed with application:. There are other metrics, automatically exposed by Thorntail as the MicroProfile Metrics specification requires. Those metrics are prefixed with base: and vendor: and expose information about the JVM in which the application runs.
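The same query can also be issued from the command line through the Prometheus HTTP API; this is a sketch assuming the default Prometheus port and the metric name used above:

$ curl 'http://localhost:9090/api/v1/query?query=application:hello_count'

The response is a JSON document containing the current value of the counter.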
Additional resources
- For additional types of metrics, see the Eclipse MicroProfile Metrics documentation.