Chapter 2. Planning for operational measurements


You can use Ceilometer or collectd to collect telemetry data for autoscaling or Service Telemetry Framework (STF).

2.1. Collectd measurements

The following are the default collectd measurements:

  • cpu
  • disk free
  • disk usage
  • hugepages
  • interface
  • load
  • memory
  • unixsock
  • uptime
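These measurements correspond to upstream collectd plugins. As an illustrative sketch only (in RHOSP, collectd is configured through director environment files, not by editing this file directly), a minimal collectd.conf fragment enabling the equivalent plugins might look like this; the mapping of "disk free" to the df plugin and "disk usage" to the disk plugin is an assumption based on upstream collectd plugin names:

```
# Illustrative collectd.conf fragment; RHOSP manages this configuration
# through tripleo environment files rather than direct edits.
LoadPlugin cpu
LoadPlugin df          # disk free
LoadPlugin disk        # disk usage
LoadPlugin hugepages
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin unixsock
LoadPlugin uptime
```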

2.2. Planning for data storage

Gnocchi stores a collection of data points, where each data point is an aggregate. The storage format is compressed using different techniques. As a result, to calculate the size of a time-series database, you must estimate the size based on the worst-case scenario.

Warning

The use of Red Hat OpenStack Platform (RHOSP) Object Storage (swift) for time series database (Gnocchi) storage is only supported for small and non-production environments.

Procedure

  1. Calculate the number of data points:

    number of data points = timespan / granularity

    For example, if you want to retain a year of data with one-minute resolution, use the formula:

    number of data points = (365 days X 24 hours X 60 minutes) / 1 minute

    number of data points = 525600

  2. Calculate the size of the time-series database:

    size in bytes = number of data points X 8 bytes

    If you apply this formula to the example, the result is approximately 4.2 MB:

    size in bytes = 525600 points X 8 bytes = 4204800 bytes = 4.2 MB

    This value is an estimated storage requirement for a single aggregated time-series database. If your archive policy uses multiple aggregation methods (min, max, mean, sum, std, count), multiply this value by the number of aggregation methods you use.
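The two-step calculation above can be reproduced with shell arithmetic. The numbers are the one-year, one-minute-granularity example from the procedure:

```shell
# Step 1: number of data points for one year at one-minute granularity.
points=$(( 365 * 24 * 60 ))

# Step 2: worst-case size of a single aggregated time series, 8 bytes per point.
bytes=$(( points * 8 ))

echo "points=${points} bytes=${bytes}"

# If the archive policy uses all six aggregation methods
# (min, max, mean, sum, std, count), multiply by 6.
echo "six-method estimate: $(( bytes * 6 )) bytes"
```

Running this prints 525600 points and 4204800 bytes, matching the worked example.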

2.3. Planning and managing archive policies

You can use an archive policy to configure how you aggregate the metrics and for how long you store the metrics in the time-series database. An archive policy is defined as the number of points over a timespan.

If your archive policy defines a policy of 10 points with a granularity of 1 second, the time-series archive keeps up to 10 points, each representing an aggregation over 1 second. This means that the time series retains, at a maximum, 10 seconds of data between the most recent point and the oldest point. The archive policy also defines the aggregation method to use. The default is set by the default_aggregation_methods parameter, whose default values are mean, min, max, sum, std, and count. Depending on the use case, the archive policy and the granularity can vary.

To plan an archive policy, ensure that you are familiar with the following concepts:

2.3.1. Metrics

Gnocchi provides an object type called metric. A metric is anything that you can measure, for example, the CPU usage of a server, the temperature of a room, or the number of bytes sent by a network interface. A metric has the following properties:

  • A UUID to identify it
  • A name
  • The archive policy used to store and aggregate the measures

2.3.2. Creating custom measures

A measure is an incoming tuple that the API sends to Gnocchi. It consists of a timestamp and a value. You can create your own custom measures.

Procedure

  • Create a custom measure:

    $ openstack metric measures add -m <MEASURE1> -m <MEASURE2> .. -r <RESOURCE_NAME> <METRIC_NAME>
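For example, assuming a hypothetical resource named server-01 with a metric named cpu.util, two measures could be pushed using the timestamp@value syntax that the -m option accepts:

```console
$ openstack metric measures add \
  -m 2024-01-01T12:00:00@42 \
  -m 2024-01-01T12:01:00@43 \
  -r server-01 cpu.util
```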

2.3.3. Verifying the metric status

You can use the openstack metric command to verify a successful deployment.

Procedure

  • Verify the deployment:

    (overcloud) [stack@undercloud-0 ~]$ openstack metric status
    +-----------------------------------------------------+-------+
    | Field                                               | Value |
    +-----------------------------------------------------+-------+
    | storage/number of metric having measures to process | 0     |
    | storage/total number of measures to process         | 0     |
    +-----------------------------------------------------+-------+

If there are no error messages, your deployment is successful.

2.3.4. Creating an archive policy

You can create an archive policy to define how you aggregate the metrics and for how long you store the metrics in the time-series database.

Procedure

  • Create an archive policy. Replace <archive-policy-name> with the name of the policy and replace <aggregation-method> with the method of aggregation.

    $ openstack metric archive-policy create <archive-policy-name> \
    --definition <definition> \
    --aggregation-method <aggregation-method>

    Note

    <definition> is the policy definition. Separate multiple attributes with a comma (,). Separate the name and value of the archive policy definition with a colon (:).
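For example, a hypothetical policy named short that keeps 3600 points at 1-second granularity, aggregated with the mean method, could be created as follows; the policy name and values here are illustrative only:

```console
$ openstack metric archive-policy create short \
  --definition granularity:1s,points:3600 \
  --aggregation-method mean
```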

2.3.5. Viewing an archive policy

Use the following steps to examine your archive policies.

Procedure

  1. List the archive policies:

    $ openstack metric archive-policy list
  2. View the details of an archive policy:

    $ openstack metric archive-policy show <archive-policy-name>

2.3.6. Deleting an archive policy

Use the following step if you want to delete an archive policy.

Procedure

  • Delete the archive policy. Replace <archive-policy-name> with the name of the policy that you want to delete.

    $ openstack metric archive-policy delete <archive-policy-name>

Verification

  • Check that the archive policy that you deleted is absent from the list of archive policies.

    $ openstack metric archive-policy list

2.3.7. Creating an archive policy rule

You can use an archive policy rule to configure the mapping between a metric and an archive policy.

Procedure

  • Create an archive policy rule. Replace <rule-name> with the name of the rule and replace <archive-policy-name> with the name of the archive policy:

    $ openstack metric archive-policy-rule create <rule-name> \
    --archive-policy-name <archive-policy-name>
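For example, a hypothetical rule named cpu-rule that maps every metric whose name matches cpu_* to an archive policy named low could be created as follows; the --metric-pattern option is assumed from the upstream gnocchiclient, and the rule and policy names are illustrative:

```console
$ openstack metric archive-policy-rule create cpu-rule \
  --archive-policy-name low \
  --metric-pattern 'cpu_*'
```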