Chapter 4. Customizing the data plane


The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. You use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. You can use pre-provisioned nodes, or provision bare-metal nodes as part of the data plane creation and deployment process.

You can add additional node sets to your data plane by using the procedures in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

You can also modify existing OpenStackDataPlaneNodeSet CRs, add Compute cells to your data plane, and customize your data plane by creating custom services.

Warning

To prevent database corruption, do not edit the name of any cell in the custom resource definition (CRD).

4.1. Prerequisites

  • The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.

4.2. Modifying an OpenStackDataPlaneNodeSet CR

You can modify an existing OpenStackDataPlaneNodeSet custom resource (CR), for example, to add a new node or update node configuration. You can include each node in only one OpenStackDataPlaneNodeSet CR, and you can connect each node set to only one Compute cell. By default, node sets are connected to cell1. If your control plane includes additional Compute cells, you must specify the cell to which the node set is connected.

To apply the OpenStackDataPlaneNodeSet CR modifications to the data plane, you create an OpenStackDataPlaneDeployment CR that deploys the modified OpenStackDataPlaneNodeSet CR.

Note

When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs from your environment before creating a new one so that the new OpenStackDataPlaneDeployment can run Ansible with an updated Secret.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
  2. Update or add the configuration you require. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  5. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message. Otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. If there are any failed OpenStackDataPlaneDeployment CRs in your environment, remove them to allow a new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
  7. Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lowercase alphanumeric characters, - (hyphen), or . (period), and must start and end with an alphanumeric character.
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
  9. Save the OpenStackDataPlaneDeployment CR deployment file.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete

    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

  12. If you added a new node to the node set, then map the node to the Compute cell it is connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.

    Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

4.3. Data plane services

A data plane service is an Ansible execution that manages the installation, configuration, and execution of a software deployment on data plane nodes. Each service is a resource instance of the OpenStackDataPlaneService custom resource definition (CRD), which combines Ansible content and configuration data from ConfigMap and Secret CRs. You specify the Ansible execution for your service with Ansible play content, which can be an Ansible playbook from edpm-ansible or any other Ansible play content. The ConfigMap and Secret CRs can contain any configuration data that the Ansible content needs to consume.

The OpenStack Operator provides core services that are deployed by default on data plane nodes. If you omit the services field from the OpenStackDataPlaneNodeSet specification, then the following services are applied by default in the following order:

services:
- redhat
- bootstrap
- download-cache
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
- install-certs
- ovn
- neutron-metadata
- libvirt
- nova
- telemetry

The OpenStack Operator also includes the following services that are not enabled by default:


ceph-client

Include this service to configure data plane nodes as clients of a Red Hat Ceph Storage server. Include this service between the install-os and configure-os services. The OpenStackDataPlaneNodeSet CR must include the following configuration to access the Red Hat Ceph Storage secrets:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
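For example, based on this placement rule, the services list of the node set includes ceph-client between install-os and configure-os (a sketch; the surrounding entries are the default services):

services:
...
- install-os
- ceph-client
- configure-os
...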

ceph-hci-pre

Include this service to prepare data plane nodes to host Red Hat Ceph Storage in an HCI configuration. For more information, see Deploying a Hyperconverged Infrastructure environment.

neutron-dhcp

Include this service to run a Neutron DHCP agent on the data plane nodes.

neutron-ovn

Include this service to run the Neutron OVN agent on the data plane nodes. This agent is required to provide QoS to hardware offloaded ports on the Compute nodes.

neutron-sriov

Include this service to run a Neutron SR-IOV NIC agent on the data plane nodes.

telemetry-power-monitoring

Include this service to gather metrics for power consumption on the data plane nodes.

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

For more information about the available default services, see https://github.com/openstack-k8s-operators/openstack-operator/tree/main/config/services.

You can enable and disable services for an OpenStackDataPlaneNodeSet resource.

Note

Do not change the order of the default service deployments.

You can use the OpenStackDataPlaneService CRD to create a custom service that you can deploy on your data plane nodes. You add your custom service to the default list of services at the point where the service must be executed. For more information, see Creating and enabling a custom service.

You can view the details of a service by viewing the YAML representation of the resource:

$ oc get openstackdataplaneservice configure-network -o yaml -n openstack

4.3.1. Creating and enabling a custom service

You can use the OpenStackDataPlaneService CRD to create custom services to deploy on your data plane nodes.

Note

Do not create a custom service with the same name as one of the default services. If a custom service name matches a default service name, the default service values overwrite the custom service values during OpenStackDataPlaneNodeSet reconciliation.

You specify the Ansible execution for your service either by referencing an Ansible playbook or by including the free-form play contents directly in the playbookContents field of the service.

Note

You cannot include both an Ansible playbook and playbookContents in the same service.

Procedure

  1. Create an OpenStackDataPlaneService CR and save it to a YAML file on your workstation, for example, custom-service.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
  2. Specify the Ansible execution for the custom service by referencing an Ansible playbook or by including the Ansible play in the playbookContents field:

    • Specify the Ansible playbook to use:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneService
      metadata:
        name: custom-service
      spec:
        playbook: osp.edpm.configure_os
    • Specify the Ansible play in the playbookContents field as a string that uses Ansible playbook syntax:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneService
      metadata:
        name: custom-service
      spec:
        playbookContents: |
          - hosts: all
            tasks:
              - name: Hello World!
                shell: "echo Hello World!"
                register: output
              - name: Show output
                debug:
                  msg: "{{ output.stdout }}"
              - name: Hello World role
                import_role: hello_world

      For information about how to create an Ansible playbook, see Creating a playbook.

  3. Specify the edpmServiceType field for the service. Different custom services can use the same Ansible content to manage the same data plane service, for example, ovn or nova. The dataSources, TLS certificates, and CA certificates must be mounted at the same locations so that the Ansible content can locate them and reuse the same paths for a custom service. You use the edpmServiceType field to create this association. The value is the name of the default service that uses the same Ansible content as the custom service. For example, if you have a custom service that uses the edpm_ovn Ansible content from edpm-ansible, set edpmServiceType to ovn, which matches the default ovn service name provided by the OpenStack Operator.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      ...
      edpmServiceType: ovn
    Note

    The acronym edpm used in field names stands for "External Data Plane Management".

  4. Optional: To override the default container image used by the ansible-runner execution environment with a custom image that uses additional Ansible content for a custom service, build and include a custom ansible-runner image. For information, see Building a custom ansible-runner image.
  5. Optional: Specify the names of Secret or ConfigMap resources to use to pass secrets or configurations into the OpenStackAnsibleEE job:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      ...
      playbookContents: |
        ...
      dataSources:
        - configMapRef:
            name: hello-world-cm-0
        - secretRef:
            name: hello-world-secret-0
        - secretRef:
            name: hello-world-secret-1
            optional: true
    • dataSources.secretRef.optional: An optional field that, when set to true, marks the referenced resource as optional so that no error occurs if it does not exist.

      A mount is created for each Secret and ConfigMap CR in the OpenStackAnsibleEE pod with a filename that matches the resource value. The mounts are created under /var/lib/openstack/configs/<service name>. You can then use Ansible content to access the configuration or secret data.
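      For example, a play in playbookContents can read one of these mounted files from the execution environment (a sketch; custom-service is the service name from this procedure, and greeting.conf is a hypothetical key in the hello-world-cm-0 ConfigMap from the example above; the exact file layout under the mount path depends on the keys in the resource):

        playbookContents: |
          - hosts: all
            tasks:
              - name: Read a mounted ConfigMap key on the execution environment
                ansible.builtin.debug:
                  msg: "{{ lookup('file', '/var/lib/openstack/configs/custom-service/greeting.conf') }}"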

  6. Optional: Set the deployOnAllNodeSets field to true if the service must run on all node sets in the OpenStackDataPlaneDeployment CR, even if the service is not listed as a service in every node set in the deployment:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      ...
      playbookContents: |
        ...
      deployOnAllNodeSets: true
  7. Create the custom service:

    $ oc apply -f custom-service.yaml -n openstack
  8. Verify that the custom service is created:

    $ oc get openstackdataplaneservice <custom_service_name> -o yaml -n openstack
  9. Add the custom service to the services field in the definition file for the node sets that the service applies to. Add the service name in the order that it should be executed relative to the other services. If the deployOnAllNodeSets field is set to true, then you need to add the service to only one of the node sets in the deployment.

    Note

    When adding your custom service to the services list in a node set definition, you must include all the required services, including the default services. If you include only your custom service in the services list, then that is the only service that is deployed.
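    For example, a node set services list that runs custom-service after the default services might look like the following (a sketch; place your service at the point in the order where it must execute):

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneNodeSet
      metadata:
        name: openstack-data-plane
      spec:
        ...
        services:
          - redhat
          - bootstrap
          - download-cache
          - configure-network
          - validate-network
          - install-os
          - configure-os
          - ssh-known-hosts
          - run-os
          - reboot-os
          - install-certs
          - ovn
          - neutron-metadata
          - libvirt
          - nova
          - telemetry
          - custom-service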

4.3.2. Building a custom ansible-runner image

You can override the default container image used by the ansible-runner execution environment with your own custom image when you need additional Ansible content for a custom service.

Procedure

  1. Create a Containerfile that adds the custom content to the default image:

    FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    COPY my_custom_role /usr/share/ansible/roles/my_custom_role
  2. Build and push the image to a container registry:

    $ podman build -t quay.io/example_user/my_custom_image:latest .
    $ podman push quay.io/example_user/my_custom_image:latest
  3. Specify your new container image as the image that the ansible-runner execution environment must use to add the additional Ansible content that your custom service requires, such as Ansible roles or modules:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      label: dataplane-deployment-custom-service
      openStackAnsibleEERunnerImage: quay.io/example_user/my_custom_image:latest
      playbookContents: |
    • openStackAnsibleEERunnerImage: The container image that the ansible-runner execution environment uses to execute Ansible.

4.4. Configuring a node set for a feature or workload

You can designate a node set for a particular feature or workload. To designate and configure a node set for a feature or workload, complete the following tasks:

  1. Create the ConfigMap custom resources (CRs) to configure the nodes for the feature.
  2. Create a custom service for the node set that runs the playbook for the service.
  3. Include the ConfigMap CRs in the custom service.
Note

The Compute service (nova) provides a default ConfigMap CR named nova-extra-config, where you can add generic configuration that applies to all the node sets that use the default nova service. If you use this default nova-extra-config ConfigMap to add generic configuration to be applied to all the node sets, then you do not need to create a custom service.

Procedure

  1. Create a ConfigMap CR that defines a new configuration file for the feature:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: feature-configmap
      namespace: openstack
    data:
      <integer>-<feature>.conf: |
        <[config_grouping]>
        <config_option> = <value>
        <config_option> = <value>
    Note

    If you are using the default ConfigMap CR for the Compute service named nova-extra-config or any other ConfigMap or Secret intended to pass configuration options to the nova-compute service on the EDPM node, you must configure the target configuration filename to match nova.conf, for example, <integer>-nova-<feature>.conf. For more information, see Configuring the Compute service (nova) in Configuring the Compute service for instance creation.

    • Replace <integer> with a number that indicates when to apply the configuration. The control plane services apply every file in their service directory, /etc/<service>/<service>.conf.d/, in lexicographical order. Therefore, configurations defined in later files override the same configurations defined in an earlier file. Each service operator generates the default configuration file with the name 01-<service>.conf. For example, the default configuration file for the nova-operator is 01-nova.conf.

      Note

      Numbers below 25 are reserved for the OpenStack services and Ansible configuration files.

    • Replace <feature> with a string that indicates the feature being configured.

      Note

      Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the transport_url.

    • Replace <[config_grouping]> with the name of the group that the configuration options belong to in the service configuration file, for example, [compute] or [database].
    • Replace <config_option> with the option you want to configure, for example, cpu_shared_set.
    • Replace <value> with the value for the configuration option, for example, 2,6.

      When the service is deployed, it adds the configuration to the /etc/<service>/<service>.conf.d/ directory in the service container. For example, for a Compute feature, the configuration file is added to /etc/nova/nova.conf.d/ in the nova_compute container.

      For more information on creating ConfigMap objects, see Creating and using config maps in the RHOCP Nodes guide.

    Tip

    You can use a Secret to create the custom configuration instead if the configuration includes sensitive information, such as passwords or certificates.
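    For example, the generic template above might be filled in as follows to set the cpu_shared_set option for the Compute service (a sketch; nova-cpu-pinning-configmap and the filename 25-nova-cpu-pinning.conf are hypothetical names that follow the naming rules in this step):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nova-cpu-pinning-configmap
      namespace: openstack
    data:
      25-nova-cpu-pinning.conf: |
        [compute]
        cpu_shared_set = 2,6

    Because 25 sorts after 01 lexicographically, this file is applied after the default 01-nova.conf.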

  2. Create a custom service for the node set. For information about how to create a custom service, see Creating and enabling a custom service.
  3. Add the ConfigMap CR to the custom service:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nodeset>-service
    spec:
      ...
      dataSources:
        - configMapRef:
            name: feature-configmap
  4. Specify the Secret CR of the Compute cell that the node set running this service connects to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nodeset>-service
    spec:
      ...
      dataSources:
        - configMapRef:
            name: feature-configmap
        - secretRef:
            name: nova-migration-ssh-key
        - secretRef:
            name: nova-cell1-compute-config

4.5. Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you added additional Compute cells to your control plane, you must specify the cell to which the node set connects.

Procedure

  1. Create a custom nova service that includes the Secret custom resource (CR) for the cell to connect to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nova_cell_custom>
    spec:
      playbook: osp.edpm.nova
      ...
      dataSources:
      - secretRef:
          name: <cell_secret_ref>
      edpmServiceType: nova
    • Replace <nova_cell_custom> with a name for the custom service, for example, nova-cell1-custom.
    • Replace <cell_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-compute-config.

    For information about how to create a custom service, see Creating and enabling a custom service.

  2. If you configured each cell with a dedicated nova metadata API service, create a custom neutron-metadata service for each cell that includes the Secret CR for connecting to the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <neutron_cell_metadata_custom>
    spec:
      playbook: osp.edpm.neutron_metadata
      ...
      dataSources:
      - secretRef:
          name: neutron-ovn-metadata-agent-neutron-config
      - secretRef:
          name: <cell_metadata_secret_ref>
      edpmServiceType: neutron-metadata
    • Replace <neutron_cell_metadata_custom> with a name for the custom service, for example, neutron-cell1-metadata-custom.
    • Replace <cell_metadata_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-metadata-neutron-config.
  3. Open the OpenStackDataPlaneNodeSet CR file for the cell node set, for example, openstack_cell1_node_set.yaml.
  4. Replace the nova service in your OpenStackDataPlaneNodeSet CR with your custom nova service for the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-cell1
    spec:
      services:
        - download-cache
        - redhat
        - bootstrap
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - ovn
        - libvirt
        - nova-cell1-custom
        - telemetry
    Note

    Do not change the order of the default services.

  5. If you created a custom neutron-metadata service, add it to the list of services or replace the neutron-metadata service with your custom service for the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-cell1
    spec:
      services:
        - download-cache
        - redhat
        - bootstrap
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - ovn
        - libvirt
        - nova-cell1-custom
        - neutron-cell1-metadata-custom
        - telemetry
  6. Complete the configuration of your OpenStackDataPlaneNodeSet CR. For more information, see Creating the data plane.
  7. Save the OpenStackDataPlaneNodeSet CR definition file.
  8. Create the data plane resources:

    $ oc create -f openstack_cell1_node_set.yaml
  9. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-cell1 --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message. Otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  10. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-cell1
    dataplanenodeset-openstack-cell1   Opaque   1   3m50s
  11. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack | grep nova-cell1-custom
  12. Create an OpenStackDataPlaneDeployment CR to deploy the OpenStackDataPlaneNodeSet CR. For more information, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

4.6. Creating a custom OpenStackProvisionServer CR

The OpenStackProvisionServer custom resource (CR) is created automatically by default during the installation and deployment of your Red Hat OpenStack Services on OpenShift (RHOSO) environment. By default, the OpenStackProvisionServer CR uses the port range 6190-6220. You can create a custom OpenStackProvisionServer CR to limit the ports that must be opened.

Procedure

  1. Create a file on your workstation to define the OpenStackProvisionServer CR, for example, my_os_provision_server.yaml:

    apiVersion: baremetal.openstack.org/v1beta1
    kind: OpenStackProvisionServer
    metadata:
      name: my-os-provision-server
    spec:
      interface: enp1s0
      port: 6195
      osImage: edpm-hardened-uefi.qcow2
    • port: Specifies the port that you want to open. Must be in the OpenStackProvisionServer CR range: 6190 - 6220.
  2. Create the OpenStackProvisionServer CR:

    $ oc create -f my_os_provision_server.yaml -n openstack

4.7. Registering third-party nodes with the RHOSO DNS service

The Red Hat OpenStack Services on OpenShift (RHOSO) DNS server is configured only for data plane nodes. If the data plane nodes must resolve third-party nodes that cannot be resolved by the upstream DNS server that the dnsmasq service is configured to forward requests to, you can register the third-party nodes with the same DNS instance that the data plane nodes are configured with.

To register third-party nodes, you create DNSData custom resources (CRs). Creating a DNSData CR updates the DNS configuration and restarts the dnsmasq pods that can then read and resolve the DNS information in the associated DNSData CR.

All nodes must be able to resolve the hostnames of the Red Hat OpenShift Container Platform (RHOCP) pods, for example, by using the external IP of the dnsmasq service.

Procedure

  1. Create a file on your workstation named host_dns_data.yaml to define the DNSData CR:

    apiVersion: network.openstack.org/v1beta1
    kind: DNSData
    metadata:
      name: my-dnsdata
      namespace: openstack
  2. Define the hostnames and IP addresses of each host:

    spec:
      hosts:
      - hostnames:
        - my-host.some.domain
        - same-host.some.domain
        ip: 10.1.1.1
      - hostnames:
        - my-other-host.some.domain
        ip: 10.1.1.2
    • hosts.hostnames: Lists the hostnames that can be used to access the third-party node.
    • hosts.ip: Defines the IP address of the third-party node to which the hostname resolves.
  3. Create the DNSData CR:

    $ oc apply -f host_dns_data.yaml -n openstack