Chapter 31. Enabling Cluster Metrics
31.1. Overview
The kubelet exposes metrics that can be collected and stored in back-ends by Heapster.
As an OpenShift Container Platform administrator, you can view a cluster’s metrics from all containers and components in one user interface. These metrics are also used by horizontal pod autoscalers in order to determine when and how to scale.
This topic describes using Hawkular Metrics as a metrics engine which stores the data persistently in a Cassandra database. When this is configured, CPU, memory and network-based metrics are viewable from the OpenShift Container Platform web console and are available for use by horizontal pod autoscalers.
Heapster retrieves a list of all nodes from the master server, then contacts each node individually through the /stats
endpoint. From there, Heapster scrapes the metrics for CPU, memory and network usage, then exports them into Hawkular Metrics.
Browsing individual pods in the web console displays separate sparkline charts for memory and CPU. The time range displayed is selectable, and these charts automatically update every 30 seconds. If there are multiple containers on the pod, then you can select a specific container to display its metrics.
If resource limits are defined for your project, then you can also see a donut chart for each pod. The donut chart displays usage against the resource limit. For example: 145 Available of 200 MiB, with the donut chart showing 55 MiB Used.
31.2. Before You Begin
An Ansible playbook is available to deploy and upgrade cluster metrics. You should familiarize yourself with the Advanced Installation section, which provides information on preparing to use Ansible and includes information about configuration. Parameters are added to the Ansible inventory file to configure various areas of cluster metrics.
The following sections describe the areas and the parameters that can be added to the Ansible inventory file in order to modify the defaults:
31.3. Metrics Project
The components for cluster metrics must be deployed to the openshift-infra project in order for autoscaling to work. Horizontal pod autoscalers specifically use this project to discover the Heapster service and use it to retrieve metrics. The metrics project can be changed by adding openshift_metrics_project
to the inventory file.
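For example, the following inventory entry is a minimal sketch showing the variable set to its default, the openshift-infra project named above; only change this value if you do not rely on horizontal pod autoscaling:
[OSEv3:vars]
openshift_metrics_project=openshift-infra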
31.4. Metrics Data Storage
You can store the metrics data either in persistent storage or in a temporary pod volume.
31.4.1. Persistent Storage
Running OpenShift Container Platform cluster metrics with persistent storage means that your metrics will be stored to a persistent volume and be able to survive a pod being restarted or recreated. This is ideal if you require your metrics data to be guarded from data loss. For production environments it is highly recommended to configure persistent storage for your metrics pods.
The size requirement of the Cassandra storage is dependent on the number of pods. It is the administrator’s responsibility to ensure that the size requirements are sufficient for their setup and to monitor usage to ensure that the disk does not become full. The size of the persisted volume claim is specified with the openshift_metrics_cassandra_pvc_size
ansible variable which is set to 10 GB by default.
If you would like to use dynamically provisioned persistent volumes, set the openshift_metrics_cassandra_storage_type variable to dynamic in the inventory file.
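For example, the following inventory snippet is a minimal sketch that requests dynamically provisioned persistent storage for Cassandra; the 20Gi claim size is illustrative only and must be sized for your pod count:
[OSEv3:vars]
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_size=20Gi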
31.4.2. Capacity Planning for Cluster Metrics
After running the openshift_metrics
Ansible role, the output of oc get pods
should resemble the following:
# oc get pods -n openshift-infra
NAME                         READY   STATUS    RESTARTS   AGE
hawkular-cassandra-1-l5y4g   1/1     Running   0          17h
hawkular-metrics-1t9so       1/1     Running   0          17h
heapster-febru               1/1     Running   0          17h
OpenShift Container Platform metrics are stored using the Cassandra database, which is deployed with settings of openshift_metrics_cassandra_limits_memory: 2G
; this value could be adjusted further based upon the available memory as determined by the Cassandra start script. This value should cover most OpenShift Container Platform metrics installations, but using environment variables you can modify the MAX_HEAP_SIZE
along with heap new generation size, HEAP_NEWSIZE
, in the Cassandra Dockerfile prior to deploying cluster metrics.
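As an illustration only, such an override in the Cassandra Dockerfile could look like the following; the variable names are the ones mentioned above, and the values are placeholders, not tuning recommendations:
ENV MAX_HEAP_SIZE 512M
ENV HEAP_NEWSIZE 100M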
By default, metrics data is stored for seven days. After seven days, Cassandra begins to purge the oldest metrics data. Metrics data for deleted pods and projects is not automatically purged; it is only removed once the data is more than seven days old.
Example 31.1. Data Accumulated by 10 Nodes and 1000 Pods
In a test scenario including 10 nodes and 1000 pods, a 24 hour period accumulated 2.5 GB of metrics data. Therefore, the capacity planning formula for metrics data in this scenario is:
(((2.5 × 10^9) ÷ 1000) ÷ 24) ÷ 10^6 = ~0.125 MB/hour per pod.
Example 31.2. Data Accumulated by 120 Nodes and 10000 Pods
In a test scenario including 120 nodes and 10000 pods, a 24 hour period accumulated 25 GB of metrics data. Therefore, the capacity planning formula for metrics data in this scenario is:
(((11.410 × 10^9) ÷ 1000) ÷ 24) ÷ 10^6 = 0.475 MB/hour
 | 1000 pods | 10000 pods
---|---|---
Cassandra storage data accumulated over 24 hours (default metrics parameters) | 2.5 GB | 11.4 GB
If the default values of 7 days for openshift_metrics_duration and 10 seconds for openshift_metrics_resolution are preserved, then weekly storage requirements for the Cassandra pod would be:
 | 1000 pods | 10000 pods
---|---|---
Cassandra storage data accumulated over seven days (default metrics parameters) | 20 GB | 90 GB
In the previous table, an additional 10 percent was added to the expected storage space as a buffer for unexpected monitored pod usage.
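As a rough check, the weekly figures follow from the 24 hour values in the earlier table multiplied by seven days plus that 10 percent buffer:
2.5 GB × 7 × 1.1 = ~19.3 GB, rounded up to 20 GB
11.4 GB × 7 × 1.1 = ~87.8 GB, rounded up to 90 GB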
If the Cassandra persisted volume runs out of sufficient space, then data loss will occur.
For cluster metrics to work with persistent storage, ensure that the persistent volume has the ReadWriteOnce access mode. If this mode is not active, then the persistent volume claim cannot locate the persistent volume, and Cassandra fails to start.
To use persistent storage with the metric components, ensure that a persistent volume of sufficient size is available. The creation of persistent volume claims is handled by the OpenShift Ansible openshift_metrics
role.
OpenShift Container Platform metrics also supports dynamically-provisioned persistent volumes. To use this feature with OpenShift Container Platform metrics, it is necessary to set the value of openshift_metrics_cassandra_storage_type
to dynamic
. You can use EBS, GCE, and Cinder storage back-ends to dynamically provision persistent volumes.
For information on configuring the performance and scaling the cluster metrics pods, see the Scaling Cluster Metrics topic.
Number of Nodes | Number of Pods | Cassandra storage growth speed | Cassandra storage growth per day | Cassandra storage growth per week
---|---|---|---|---
210 | 10500 | 500 MB per hour | 15 GB | 75 GB
990 | 11000 | 1 GB per hour | 30 GB | 210 GB
In the above calculation, approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value.
If the METRICS_DURATION and METRICS_RESOLUTION values are kept at the default (7 days and 15 seconds respectively), it is safe to plan Cassandra storage size requirements for a week, as in the values above.
Because OpenShift Container Platform metrics uses the Cassandra database as a datastore for metrics data, if USE_PERSISTENT_STORAGE=true is set during the metrics setup process, the PV will be on top of the network storage, with NFS as the default. However, using network storage in combination with Cassandra is not recommended, as per the Cassandra documentation.
Recommendations for OpenShift Container Platform Version 3.5
- Run metrics pods on dedicated OpenShift Container Platform infrastructure nodes.
- Use persistent storage when configuring metrics. Set USE_PERSISTENT_STORAGE=true.
- Keep the METRICS_RESOLUTION=30 parameter in OpenShift Container Platform metrics deployments. Using a value lower than the default value of 30 for METRICS_RESOLUTION is not recommended. When using the Ansible metrics installation procedure, this is the openshift_metrics_resolution parameter.
- Closely monitor OpenShift Container Platform nodes with host metrics pods to detect early capacity shortages (CPU and memory) on the host system. These capacity shortages can cause problems for metrics pods.
- In OpenShift Container Platform version 3.5 testing, test cases up to 25,000 pods were monitored in an OpenShift Container Platform cluster.
Known Issues and Limitations
Testing found that the heapster
metrics component is capable of handling up to 25,000 pods. If the number of pods exceeds that limit, Heapster begins to fall behind in metrics processing, resulting in the possibility of metrics graphs no longer appearing. Work is ongoing to increase the number of pods that Heapster can gather metrics on, as well as upstream development of alternate metrics-gathering solutions.
31.4.3. Non-Persistent Storage
Running OpenShift Container Platform cluster metrics with non-persistent storage means that any stored metrics will be deleted when the pod is deleted. While it is much easier to run cluster metrics with non-persistent data, running with non-persistent data does come with the risk of permanent data loss. However, metrics can still survive a container being restarted.
In order to use non-persistent storage, you must set the openshift_metrics_cassandra_storage_type
variable to emptydir
in the inventory file.
When using non-persistent storage, metrics data will be written to /var/lib/origin/openshift.local.volumes/pods on the node where the Cassandra pod is running. Ensure /var has enough free space to accommodate metrics storage.
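For example, a minimal inventory sketch for non-persistent storage looks like the following; only the storage type line is strictly required for this choice:
[OSEv3:vars]
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=emptydir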
31.5. Metrics Ansible Role
The OpenShift Ansible openshift_metrics
role configures and deploys all of the metrics components using the variables from the Configuring Ansible inventory file.
31.5.1. Specifying Metrics Ansible Variables
The openshift_metrics
role included with OpenShift Ansible defines the tasks to deploy cluster metrics. The following is a list of role variables that can be added to your inventory file if it is necessary to override them.
Variable | Description
---|---
openshift_metrics_install_metrics | Deploy metrics if true.
openshift_metrics_start_cluster | Start the metrics cluster after deploying the components.
openshift_metrics_image_prefix | The prefix for the component images.
openshift_metrics_image_version | The version for the component images.
openshift_metrics_startup_timeout | The time, in seconds, to wait until Hawkular Metrics and Heapster start up before attempting a restart.
openshift_metrics_duration | The number of days to store metrics before they are purged.
openshift_metrics_resolution | The frequency that metrics are gathered. Defined as a number and time identifier: seconds (s), minutes (m), hours (h).
openshift_metrics_cassandra_pvc_prefix | The persistent volume claim prefix created for Cassandra. A serial number is appended to the prefix starting from 1.
openshift_metrics_cassandra_pvc_size | The persistent volume claim size for each of the Cassandra nodes.
openshift_metrics_cassandra_storage_type | Use emptydir for non-persistent storage, pv for persistent volumes (which must be created before the installation), or dynamic for dynamically provisioned persistent volumes.
openshift_metrics_cassandra_replicas | The number of Cassandra nodes for the metrics stack. This value dictates the number of Cassandra replication controllers.
openshift_metrics_cassandra_limits_memory | The memory limit for the Cassandra pod.
openshift_metrics_cassandra_limits_cpu | The CPU limit for the Cassandra pod.
 | The number of replicas for Cassandra.
openshift_metrics_cassandra_requests_memory | The amount of memory to request for the Cassandra pod.
openshift_metrics_cassandra_requests_cpu | The CPU request for the Cassandra pod.
openshift_metrics_cassandra_storage_group | The supplemental storage group to use for Cassandra.
openshift_metrics_cassandra_nodeselector | Set to the desired, existing node selector to ensure that pods are placed onto nodes with specific labels.
openshift_metrics_hawkular_ca | An optional certificate authority (CA) file used to sign the Hawkular certificate.
openshift_metrics_hawkular_cert | The certificate file used for re-encrypting the route to Hawkular metrics. The certificate must contain the host name used by the route. If unspecified, the default router certificate is used.
openshift_metrics_hawkular_key | The key file used with the Hawkular certificate.
openshift_metrics_hawkular_limits_memory | The amount of memory to limit the Hawkular pod.
openshift_metrics_hawkular_limits_cpu | The CPU limit for the Hawkular pod.
openshift_metrics_hawkular_replicas | The number of replicas for Hawkular metrics.
openshift_metrics_hawkular_requests_memory | The amount of memory to request for the Hawkular pod.
openshift_metrics_hawkular_requests_cpu | The CPU request for the Hawkular pod.
openshift_metrics_hawkular_nodeselector | Set to the desired, existing node selector to ensure that pods are placed onto nodes with specific labels.
 | A comma-separated list of CN to accept. By default, this is set to allow the OpenShift service proxy to connect.
openshift_metrics_heapster_limits_memory | The amount of memory to limit the Heapster pod.
openshift_metrics_heapster_limits_cpu | The CPU limit for the Heapster pod.
openshift_metrics_heapster_requests_memory | The amount of memory to request for the Heapster pod.
openshift_metrics_heapster_requests_cpu | The CPU request for the Heapster pod.
openshift_metrics_heapster_standalone | Deploy only Heapster, without the Hawkular Metrics and Cassandra components.
openshift_metrics_heapster_nodeselector | Set to the desired, existing node selector to ensure that pods are placed onto nodes with specific labels.
See Compute Resources for further discussion on how to specify requests and limits.
If you are using persistent storage with Cassandra, it is the administrator’s responsibility to set a sufficient disk size for the cluster using the openshift_metrics_cassandra_pvc_size
variable. It is also the administrator’s responsibility to monitor disk usage to make sure that it does not become full.
Data loss will result if the Cassandra persisted volume runs out of sufficient space.
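One way to spot-check disk usage is to run df inside the Cassandra pod. The following sketch assumes the pod name shown in the earlier oc get pods output; substitute the name of your own Cassandra pod:
$ oc exec -n openshift-infra hawkular-cassandra-1-l5y4g -- df -h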
All of the other variables are optional and allow for greater customization. For instance, if you have a custom install in which the Kubernetes master is not available under https://kubernetes.default.svc:443, you can specify the value to use instead with the openshift_metrics_master_url parameter. To deploy a specific version of the metrics components, modify the openshift_metrics_image_version variable.
It is highly recommended not to use latest for openshift_metrics_image_version. The latest version corresponds to the very latest version available and can cause issues if it brings in a newer version not meant to function on the version of OpenShift Container Platform you are currently running.
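For example, the following inventory lines override both parameters; the master URL and the version tag are hypothetical values and must be matched to your own environment:
[OSEv3:vars]
openshift_metrics_master_url=https://master.example.com:8443
openshift_metrics_image_version=v3.9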
31.5.2. Using Secrets
The OpenShift Ansible openshift_metrics
role will auto-generate self-signed certificates for use between its components and will generate a re-encrypting route to expose the Hawkular Metrics service. This route is what allows the web console to access the Hawkular Metrics service.
In order for the browser running the web console to trust the connection through this route, it must trust the route’s certificate. This can be accomplished by providing your own certificates signed by a trusted Certificate Authority. The openshift_metrics
role allows you to specify your own certificates which it will then use when creating the route.
The router’s default certificate is used if you do not provide your own.
31.5.2.1. Providing Your Own Certificates
To provide your own certificate which will be used by the re-encrypting route, you can set the openshift_metrics_hawkular_cert
, openshift_metrics_hawkular_key
, and openshift_metrics_hawkular_ca
variables in your inventory file.
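For example, the inventory entries below point the role at your own certificate files; the paths are illustrative placeholders:
[OSEv3:vars]
openshift_metrics_hawkular_cert=/path/to/hawkular-metrics.pem
openshift_metrics_hawkular_key=/path/to/hawkular-metrics.key
openshift_metrics_hawkular_ca=/path/to/hawkular-metrics-ca.cert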
The hawkular-metrics.pem
value needs to contain the certificate in its .pem format. You may also need to provide the certificate for the Certificate Authority which signed this pem file via the hawkular-metrics-ca.cert
secret.
For more information, please see the re-encryption route documentation.
31.6. Deploying the Metric Components
Because deploying and configuring all the metric components is handled with OpenShift Ansible, you can deploy everything in one step.
The following examples show you how to deploy metrics with and without persistent storage using the default parameters.
In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface of eth0.
Example 31.3. Deploying with Persistent Storage
The following command sets the Hawkular Metrics route to use hawkular-metrics.example.com and deploys the metrics components using persistent storage.
You must have a persistent volume of sufficient size available.
$ ansible-playbook [-i </path/to/inventory>] <OPENSHIFT_ANSIBLE_DIR>/byo/openshift-cluster/openshift-metrics.yml \
   -e openshift_metrics_install_metrics=True \
   -e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com \
   -e openshift_metrics_cassandra_storage_type=pv
Example 31.4. Deploying without Persistent Storage
The following command sets the Hawkular Metrics route to use hawkular-metrics.example.com and deploys the metrics components without persistent storage.
$ ansible-playbook [-i </path/to/inventory>] <OPENSHIFT_ANSIBLE_DIR>/byo/openshift-cluster/openshift-metrics.yml \
   -e openshift_metrics_install_metrics=True \
   -e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com
Because this is being deployed without persistent storage, metric data loss can occur.
31.6.1. Metrics Diagnostics
There are some diagnostics for metrics to assist in evaluating the state of the metrics stack. To execute diagnostics for metrics:
$ oc adm diagnostics MetricsApiProxy
31.7. Deploying the Hawkular OpenShift Agent
The Hawkular OpenShift Agent is currently in Technology Preview.
The Hawkular OpenShift Agent can be used to gather application metrics from pods running within the OpenShift Container Platform cluster. These metrics can then be viewed from the console or accessed via the Hawkular Metrics REST API.
In order for these metrics to be gathered, your pod needs to expose a Prometheus or Jolokia endpoint and create a special ConfigMap, which defines where the metric endpoint is located and how the metrics should be gathered. More information can be found under the Hawkular OpenShift Agent documentation.
The agent runs as a daemon set in your cluster and is deployed to the default
project. By deploying to the default
project, the agent continues to monitor all your pods even if the ovs_multitenant
plug-in is enabled.
To deploy the agent, you will need to gather two configuration files:
$ wget https://raw.githubusercontent.com/openshift/origin-metrics/enterprise/hawkular-openshift-agent/hawkular-openshift-agent-configmap.yaml
$ wget https://raw.githubusercontent.com/openshift/origin-metrics/enterprise/hawkular-openshift-agent/hawkular-openshift-agent.yaml
To set up and deploy the agent into your OpenShift Container Platform cluster, run:
$ oc create -f hawkular-openshift-agent-configmap.yaml -n default
$ oc process -f hawkular-openshift-agent.yaml | oc create -n default -f -
$ oc adm policy add-cluster-role-to-user hawkular-openshift-agent system:serviceaccount:default:hawkular-openshift-agent
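To confirm that the agent is running on each node, you can list the objects it creates. This sketch relies on the metrics-infra=agent label that is also used by the undeploy command in the next section:
$ oc get daemonset,pods --selector=metrics-infra=agent -n default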
31.7.1. Undeploying the Hawkular OpenShift Agent
To undeploy the Hawkular Metrics Agent, run:
$ oc delete all,secrets,sa,templates,configmaps,daemonsets,clusterroles --selector=metrics-infra=agent -n default
31.8. Setting the Metrics Public URL
The OpenShift Container Platform web console uses the data coming from the Hawkular Metrics service to display its graphs. The URL for accessing the Hawkular Metrics service must be configured with the metricsPublicURL
option in the master configuration file (/etc/origin/master/master-config.yaml). This URL corresponds to the route created with the openshift_metrics_hawkular_hostname
inventory variable used during the deployment of the metrics components.
You must be able to resolve the openshift_metrics_hawkular_hostname
from the browser accessing the console.
For example, if your openshift_metrics_hawkular_hostname
corresponds to hawkular-metrics.example.com
, then you must make the following change in the master-config.yaml file:
assetConfig:
  ...
  metricsPublicURL: "https://hawkular-metrics.example.com/hawkular/metrics"
Once you have updated and saved the master-config.yaml file, you must restart your OpenShift Container Platform instance.
When your OpenShift Container Platform server is back up and running, metrics will be displayed on the pod overview pages.
If you are using self-signed certificates, remember that the Hawkular Metrics service is hosted under a different host name and uses different certificates than the console. You may need to explicitly open a browser tab to the value specified in metricsPublicURL
and accept that certificate.
To avoid this issue, use certificates which are configured to be acceptable by your browser.
31.9. Accessing Hawkular Metrics Directly
To access and manage metrics more directly, use the Hawkular Metrics API.
When accessing Hawkular Metrics from the API, you will only be able to perform reads. Writing metrics has been disabled by default. If you want individual users to also be able to write metrics, you must set the openshift_metrics_hawkular_user_write_access variable to true.
However, it is recommended to use the default configuration and only have metrics enter the system via Heapster. If write access is enabled, any user will be able to write metrics to the system, which can affect performance and cause Cassandra disk usage to unpredictably increase.
The Hawkular Metrics documentation covers how to use the API, but there are a few differences when dealing with the version of Hawkular Metrics configured for use on OpenShift Container Platform:
31.9.1. OpenShift Container Platform Projects and Hawkular Tenants
Hawkular Metrics is a multi-tenanted application. It is configured so that a project in OpenShift Container Platform corresponds to a tenant in Hawkular Metrics.
As such, when accessing metrics for a project named MyProject you must set the Hawkular-Tenant header to MyProject.
There is also a special tenant named _system which contains system level metrics. This requires either a cluster-reader or cluster-admin level privileges to access.
31.9.2. Authorization
The Hawkular Metrics service will authenticate the user against OpenShift Container Platform to determine whether the user has access to the project they are trying to access.
Hawkular Metrics accepts a bearer token from the client and verifies that token with the OpenShift Container Platform server using a SubjectAccessReview. If the user has proper read privileges for the project, they are allowed to read the metrics for that project. For the _system tenant, the user requesting to read from this tenant must have cluster-reader permission.
When accessing the Hawkular Metrics API, you must pass a bearer token in the Authorization header.
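For example, a read against the MyProject tenant could look like the following sketch. It assumes the hawkular-metrics.example.com route used elsewhere in this topic, uses the session token from oc whoami -t as the bearer token, and queries the Hawkular Metrics endpoint that lists metric definitions:
$ curl -H "Authorization: Bearer $(oc whoami -t)" \
       -H "Hawkular-Tenant: MyProject" \
       -H "Content-Type: application/json" \
       https://hawkular-metrics.example.com/hawkular/metrics/metrics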
31.10. Scaling OpenShift Container Platform Cluster Metrics Pods
Information about scaling cluster metrics capabilities is available in the Scaling and Performance Guide.
31.11. Cleanup
You can remove everything deployed by the OpenShift Ansible openshift_metrics
role by performing the following steps:
$ ansible-playbook [-i </path/to/inventory>] <OPENSHIFT_ANSIBLE_DIR>/byo/openshift-cluster/openshift-metrics.yml \
   -e openshift_metrics_install_metrics=False