Chapter 2. Enabling RHOSO observability logging


Enable and configure Red Hat OpenStack Services on OpenShift (RHOSO) observability logging to collect, store, and access logs from your RHOSO environment. When observability logging is enabled on the control plane, you can enable the RHOSO observability logging service on the data plane.

2.1. Prerequisites

  • The Loki Operator is installed, and a LokiStack instance is created. For more information, see Installing the Loki Operator by using the CLI in the Red Hat OpenShift Logging Installing Logging guide.
  • The Red Hat OpenShift Logging Operator is installed. For more information, see Installing Red Hat OpenShift Logging Operator by using the CLI in the Red Hat OpenShift Logging Installing Logging guide.
  • An instance of the Red Hat OpenShift Logging Operator is started for the control plane by creating a ClusterLogForwarder that is configured for application logs retrieval. For an example of how to configure the ClusterLogForwarder instance for application logs retrieval, see the Application logs retrieval configuration example.
  • An instance of the Red Hat OpenShift Logging Operator is started for the data plane by creating a ClusterLogForwarder that is configured with a syslog receiver. For an example of how to configure the ClusterLogForwarder instance with a syslog receiver, see the Syslog receiver configuration example.
  • The logging plugin is installed to enable the logging tab in the observability dashboard. For more information, see Installing the logging UI plugin by using the CLI.
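The following is a minimal sketch of a ClusterLogForwarder with a syslog receiver input, written against the observability.openshift.io/v1 API. The resource name, service account, port, and LokiStack target shown here are assumptions for illustration; your values will differ. Refer to the Syslog receiver configuration example for the supported configuration.

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector                  # assumed name
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector                # assumed service account with log collection permissions
  inputs:
    - name: syslog-receiver
      type: receiver
      receiver:
        type: syslog
        port: 10514                # assumed port that rsyslog on the data plane sends to
  outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        target:
          name: logging-loki       # assumed LokiStack instance name
          namespace: openshift-logging
        authentication:
          token:
            from: serviceAccount
  pipelines:
    - name: syslog-to-lokistack
      inputRefs:
        - syslog-receiver
      outputRefs:
        - default-lokistack
```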

2.2. Enabling observability logging on the control plane

To enable and configure observability logging on the control plane, edit the Telemetry service in your OpenStackControlPlane custom resource (CR) file.

Procedure

  1. Open the OpenStackControlPlane CR definition file, openstack_control_plane.yaml, on your workstation.
  2. Update the telemetry section based on the needs of your environment:

     telemetry:
        enabled: true
        template:
          ...
          logging:
            enabled: true
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
    • logging.enabled: Set to true to enable observability logging.
    • logging.annotations.metallb.universe.tf/address-pool: Set to the RHOSO network that you want to use to transport the logs from the Compute nodes to the control plane.
    • logging.annotations.metallb.universe.tf/loadBalancerIPs: Set to the IP address that rsyslog sends messages to. Ensure that the IP address is reachable from the Compute node. The default IP address is the default VIP for internalapi, which is 172.17.0.80.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
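For example:

```
$ oc get openstackcontrolplane -n openstack -w
```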

Verification

  1. Open the logging pane in the OpenShift Console.
  2. Click Observe and then click Logs.
  3. From the drop-down menu, choose Application.
  4. Verify that pod logs from the RHOSO control plane services are present.

2.3. Enabling observability logging on the data plane

You can enable Red Hat OpenStack Services on OpenShift (RHOSO) observability logging on the data plane by adding the logging OpenStackDataPlaneService to the services list of each OpenStackDataPlaneNodeSet custom resource (CR) that is defined for the data plane.

Prerequisites

  • RHOSO observability logging is enabled on the control plane.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
  2. Add the services field and include all the required services, including the default services. Add logging after telemetry:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      tlsEnabled: true
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
      services:
        - redhat
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - ovn
        - neutron-metadata
        - libvirt
        - nova
        - telemetry
        - logging
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  5. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message. Otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lowercase alphanumeric characters, hyphens (-), or periods (.), and must start and end with an alphanumeric character.
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
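Combining the two previous steps, a complete OpenStackDataPlaneDeployment CR might look like the following. The deployment name openstack-data-plane-logging is an assumed example; the node set name must match the OpenStackDataPlaneNodeSet CR that you modified:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane-logging   # assumed unique, descriptive deployment name
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane             # the node set modified in this procedure
```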
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  10. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete

    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

  11. If you added a new node to the node set, then map the node to the Compute cell it is connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.

    Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

Verification

  1. Open the logging pane in the OpenShift Console.
  2. Click Observe and then click Logs.
  3. From the drop-down menu, choose Infrastructure.
  4. Verify that Journald logs from the Compute nodes are present.