Chapter 2. Monitoring your own services
You can use OpenShift Monitoring for your own services in addition to monitoring the cluster. This way, you do not need an additional monitoring solution and monitoring remains centralized. Additionally, you can extend access to the metrics of your services beyond cluster administrators, enabling developers and arbitrary users to access them.
Custom Prometheus instances and the Prometheus Operator installed through Operator Lifecycle Manager (OLM) can cause issues with user-defined workload monitoring if it is enabled. Custom Prometheus instances are not supported in OpenShift Container Platform.
Monitoring your own services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
2.1. Enabling monitoring of your own services
You can enable monitoring of your own services by setting the techPreviewUserWorkload/enabled flag in the cluster monitoring config map.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
- Start editing the cluster-monitoring-config ConfigMap object:

  $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

- Set the techPreviewUserWorkload setting to true under data/config.yaml:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      techPreviewUserWorkload:
        enabled: true
- Save the file to apply the changes. Monitoring your own services is enabled automatically.
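  If you prefer a non-interactive approach, you can make the same change with oc patch; a minimal sketch, assuming data/config.yaml contains no other settings, because the merge patch replaces that key entirely:

  $ oc -n openshift-monitoring patch configmap cluster-monitoring-config \
      --type merge -p '{"data":{"config.yaml":"techPreviewUserWorkload:\n  enabled: true\n"}}'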
- Optional: You can check that the prometheus-user-workload pods were created:

  $ oc -n openshift-user-workload-monitoring get pod

  Example output

  NAME                                   READY   STATUS    RESTARTS   AGE
  prometheus-operator-6f7b748d5b-t7nbg   2/2     Running   0          3h
  prometheus-user-workload-0             5/5     Running   1          3h
  prometheus-user-workload-1             5/5     Running   1          3h
  thanos-ruler-user-workload-0           3/3     Running   0          3h
  thanos-ruler-user-workload-1           3/3     Running   0          3h
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
2.2. Deploying a sample service
To test monitoring your own services, you can deploy a sample service.
Procedure
- Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml. Fill the file with the configuration for deploying the service:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: ns1
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: prometheus-example-app
    name: prometheus-example-app
    namespace: ns1
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: prometheus-example-app
    template:
      metadata:
        labels:
          app: prometheus-example-app
      spec:
        containers:
        - image: ghcr.io/rhobs/prometheus-example-app:0.3.0
          imagePullPolicy: IfNotPresent
          name: prometheus-example-app
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: prometheus-example-app
    name: prometheus-example-app
    namespace: ns1
  spec:
    ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: web
    selector:
      app: prometheus-example-app
    type: ClusterIP
  This configuration deploys a service named prometheus-example-app in the ns1 project. This service exposes the custom version metric.

- Apply the configuration file to the cluster:

  $ oc apply -f prometheus-example-app.yaml
  It will take some time to deploy the service.

- You can check that the service is running:

  $ oc -n ns1 get pod

  Example output

  NAME                                      READY   STATUS    RESTARTS   AGE
  prometheus-example-app-7857545cb7-sbgwq   1/1     Running   0          81m
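  To verify that the service exposes the version metric, you can query its /metrics endpoint from inside the cluster; a minimal sketch, assuming the curlimages/curl image can be pulled in your environment:

  $ oc -n ns1 run curl-test --rm -i --restart=Never --image=curlimages/curl -- \
      curl -s http://prometheus-example-app.ns1.svc:8080/metrics

  The output should include a line for the custom version metric.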
2.3. Granting user permissions using the web console
This procedure shows how to grant users permissions for monitoring their own services using the web console.
Prerequisites
- Have a user created.
- Log in to the web console as a cluster administrator.
Procedure
- In the web console, navigate to User Management → Role Bindings → Create Binding.
- In Binding Type, select the "Namespace Role Binding" type.
- In Name, enter a name for the binding.
- In Namespace, select the namespace where you want to grant the access. For example, select ns1.
- In Role Name, enter monitoring-rules-view, monitoring-rules-edit, or monitoring-edit.

  - monitoring-rules-view allows reading PrometheusRule custom resources within the namespace.
  - monitoring-rules-edit allows creating, modifying, and deleting PrometheusRule custom resources matching the permitted namespace.
  - monitoring-edit gives the same permissions as monitoring-rules-edit. Additionally, it allows creating scraping targets for services or pods. It also allows creating, modifying, and deleting ServiceMonitor and PodMonitor resources.

  Important: Whichever role you choose, you must bind it against a specific namespace as a cluster administrator.

  For example, enter monitoring-edit.
- In Subject, select User.
- In Subject Name, enter the name of the user. For example, enter johnsmith.
- Confirm the role binding. If you followed the example, the user johnsmith has been assigned the permissions for setting up metrics collection and creating alerting rules in the ns1 namespace.
2.4. Granting user permissions using the CLI
This procedure shows how to grant users permissions for monitoring their own services using the CLI.
Whichever role you choose, you must bind it against a specific namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Have a user created.
- Log in using the oc command.
Procedure
- Run this command to assign a role to a user in a defined namespace:

  $ oc policy add-role-to-user <role> <user> -n <namespace>

  Substitute <role> with monitoring-rules-view, monitoring-rules-edit, or monitoring-edit.

  - monitoring-rules-view allows reading PrometheusRule custom resources within the namespace.
  - monitoring-rules-edit allows creating, modifying, and deleting PrometheusRule custom resources matching the permitted namespace.
  - monitoring-edit gives the same permissions as monitoring-rules-edit. Additionally, it allows creating scraping targets for services or pods. It also allows creating, modifying, and deleting ServiceMonitor and PodMonitor resources.

  As an example, substitute the role with monitoring-edit, the user with johnsmith, and the namespace with ns1. This assigns the user johnsmith the permissions for setting up metrics collection and creating alerting rules in the ns1 namespace.
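  Following the example, the command is:

  $ oc policy add-role-to-user monitoring-edit johnsmith -n ns1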
2.5. Setting up metrics collection
To use the metrics exposed by your service, you must configure OpenShift Monitoring to scrape metrics from the /metrics endpoint. You can do this by using a ServiceMonitor custom resource definition (CRD) that specifies how to monitor a service, or a PodMonitor CRD that specifies how to monitor a pod. The former requires a Service object, while the latter does not, which allows Prometheus to scrape metrics directly from the metrics endpoint exposed by a pod.

This procedure shows how to create a ServiceMonitor resource for the service.
Prerequisites
- Log in as a cluster administrator or a user with the monitoring-edit role.
Procedure
- Create a YAML file for the ServiceMonitor resource configuration. In this example, the file is called example-app-service-monitor.yaml. Fill the file with the configuration for creating the ServiceMonitor resource:

  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    labels:
      k8s-app: prometheus-example-monitor
    name: prometheus-example-monitor
    namespace: ns1
  spec:
    endpoints:
    - interval: 30s
      port: web
      scheme: http
    selector:
      matchLabels:
        app: prometheus-example-app
  This configuration makes OpenShift Monitoring scrape the metrics exposed by the sample service deployed in "Deploying a sample service", which includes the single version metric.

- Apply the configuration file to the cluster:

  $ oc apply -f example-app-service-monitor.yaml
  It will take some time to deploy the ServiceMonitor resource.

- You can check that the ServiceMonitor resource is running:

  $ oc -n ns1 get servicemonitor

  Example output

  NAME                         AGE
  prometheus-example-monitor   81m
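If there is no Service object in front of your pods, you can use a PodMonitor resource instead, which lets Prometheus scrape the pods directly. A minimal sketch, assuming the pod template carries the app: prometheus-example-app label and names its metrics container port web; note that the sample deployment above does not declare a named container port, so you would need to add one:

  apiVersion: monitoring.coreos.com/v1
  kind: PodMonitor
  metadata:
    labels:
      k8s-app: prometheus-example-monitor
    name: prometheus-example-monitor
    namespace: ns1
  spec:
    podMetricsEndpoints:
    - interval: 30s
      port: web
    selector:
      matchLabels:
        app: prometheus-example-app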
Additional resources
- See the Prometheus Operator API documentation for more information on ServiceMonitor and PodMonitor resources.
2.6. Creating alerting rules
You can create alerting rules, which fire alerts based on the values of chosen metrics.
Viewing and managing your rules and alerts is not yet integrated into the web console. A cluster administrator can instead use the Alertmanager UI or the Thanos Ruler. See the respective sections for instructions.
Prerequisites
- Log in as a user that has the monitoring-rules-edit role for the namespace where you want to create the alerting rule.
Procedure
- Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml. Fill the file with the configuration for the alerting rules:

  Note: When you create an alerting rule, a namespace label is enforced on it if a rule with the same name exists in another namespace.

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: example-alert
    namespace: ns1
  spec:
    groups:
    - name: example
      rules:
      - alert: VersionAlert
        expr: version{job="prometheus-example-app"} == 0
  This configuration creates an alerting rule named example-alert, which fires an alert when the version metric exposed by the sample service becomes 0.

  Important: For every namespace, you can use metrics of that namespace and cluster metrics, but not metrics of another namespace. For example, an alerting rule for ns1 can have metrics from ns1 and cluster metrics, such as the CPU and memory metrics. However, the rule cannot include metrics from ns2. Additionally, you cannot create alerting rules for the openshift-* core OpenShift namespaces. OpenShift Container Platform Monitoring by default provides a set of alerting rules for these namespaces.

- Apply the configuration file to the cluster:

  $ oc apply -f example-app-alerting-rule.yaml
It will take some time to create the alerting rules.
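In practice, you might also want the alert to fire only after the condition has persisted and to carry a severity label; a sketch extending the example, where for, labels, and annotations are standard PrometheusRule fields and the values shown are illustrative:

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: example-alert
    namespace: ns1
  spec:
    groups:
    - name: example
      rules:
      - alert: VersionAlert
        expr: version{job="prometheus-example-app"} == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          message: The sample service has been reporting version 0 for 10 minutes.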
2.7. Removing alerting rules
You can remove an alerting rule.
Prerequisites
- Log in as a user that has the monitoring-rules-edit role for the namespace where you want to remove an alerting rule.
Procedure
To remove a rule in a namespace, run:
$ oc -n <namespace> delete prometheusrule <rule>
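For example, to remove the example-alert rule created in "Creating alerting rules":

$ oc -n ns1 delete prometheusrule example-alert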
2.8. Accessing alerting rules for your project
You can list existing alerting rules for your project.
Prerequisites
- Log in as a user with the monitoring-rules-view role against your project.
Procedure
To list alerting rules in a project, run:
$ oc -n <project> get prometheusrule
To list the configuration of an alerting rule, run:
$ oc -n <project> get prometheusrule <rule> -o yaml
2.9. Accessing alerting rules for all namespaces
As a cluster administrator, you can access alerting rules from all namespaces together in a single view.
In a future release, the route to the Thanos Ruler UI will be deprecated in favor of the web console.
Prerequisites
- Have the oc command installed.
- Log in as a cluster administrator.
Procedure
- List routes for the openshift-user-workload-monitoring namespace:

  $ oc -n openshift-user-workload-monitoring get routes

  The output shows the URL for the Thanos Ruler UI:

  NAME           HOST/PORT                                                                                ...
  thanos-ruler   thanos-ruler-openshift-user-workload-monitoring.apps.example.devcluster.openshift.com
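  To extract just the host from the route, you can optionally use a JSONPath query; a small sketch, assuming the route is named thanos-ruler as in the example output:

  $ oc -n openshift-user-workload-monitoring get route thanos-ruler -o jsonpath='{.spec.host}'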
- Navigate to the listed URL. Here you can see user alerting rules from all namespaces.
2.10. Accessing the metrics of your service as a developer
After you have enabled monitoring your own services, deployed a service, and set up metrics collection for the service, you can access the metrics of the service as a developer or as a user with view permissions for the project.
The Grafana instance shipped within OpenShift Container Platform Monitoring is read-only and displays only infrastructure-related dashboards.
Prerequisites
- Deploy the service that you want to monitor.
- Enable monitoring of your own services.
- Have metrics scraping set up for the service.
- Log in as a developer or as a user with view permissions for the project.
Procedure
- Go to the OpenShift Container Platform web console, switch to the Developer Perspective, then click Advanced → Metrics.
- Select the project you want to see the metrics for.

  Note: Developers can only use the Developer Perspective and not the Administrator Perspective. They can only query metrics from a single project. They cannot access the third-party UIs provided with OpenShift Container Platform Monitoring.
- Use the PromQL interface to run queries for your services.
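  For example, to display the metric exposed by the sample service from "Deploying a sample service", you could run the following query; the job label matches the one used in "Creating alerting rules":

  version{job="prometheus-example-app"}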
2.11. Accessing metrics of all services as a cluster administrator
If you are a cluster administrator or a user with view permissions for all namespaces, you can access metrics of all services from all namespaces together in a single view.
Prerequisites
- Log in to the web console as a cluster administrator or a user with view permissions for all namespaces.
- Optionally, log in with the oc command as well.
Procedure
Using the Metrics web interface:
- Go to the OpenShift Container Platform web console, switch to the Administrator Perspective, and click Monitoring → Metrics.

  Note: Cluster administrators, when using the Administrator Perspective, have access to all cluster metrics and to custom service metrics from all projects.
  Note: Only cluster administrators have access to the third-party UIs provided with OpenShift Container Platform Monitoring.
- Use the PromQL interface to run queries for your services.
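  For example, to get an overview of scrape-target health across all namespaces, you could run a query such as the following; up is a standard Prometheus metric recorded for every scrape target:

  count by (namespace) (up)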
Using the Thanos Querier UI:
Note: In a future release, the route to the Thanos Querier UI will be deprecated in favor of the web console.
- List routes for the openshift-monitoring namespace:

  $ oc -n openshift-monitoring get routes

  The output shows the URL for the Thanos Querier UI:

  NAME             HOST/PORT                                                                   ...
  thanos-querier   thanos-querier-openshift-monitoring.apps.example.devcluster.openshift.com
- Navigate to the listed URL. Here you can see all metrics from all namespaces.