
Chapter 14. Adding metadata to instances


The Compute (nova) service uses metadata to pass configuration information to instances on launch. The instance can access the metadata by using a config drive or the metadata service.

Config drive
By default, every instance has a config drive. Config drives are special drives that you can attach to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. To disable the config drive, see Disabling config drive.
Metadata service
The Compute service provides the metadata service as a REST API that instances can query to retrieve instance-specific data. Instances access the service at the link-local address 169.254.169.254, or over IPv6 at fe80::a9fe:a9fe.
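For example, a script running inside an instance can query the metadata service and extract the Compute-provided fields. The following is a minimal sketch; the hostname and availability_zone keys follow the standard layout of meta_data.json:

```python
import json
from urllib.request import urlopen

# Link-local metadata endpoint; reachable only from inside the instance.
METADATA_BASE = "http://169.254.169.254/openstack/latest"

def fetch_metadata(path="meta_data.json"):
    # Must run inside a launched instance; the address is not routable elsewhere.
    with urlopen(f"{METADATA_BASE}/{path}") as resp:
        return json.loads(resp.read())

def summarize(meta):
    # Extract two of the fields the Compute service provides by default.
    return {"hostname": meta.get("hostname"),
            "availability_zone": meta.get("availability_zone")}
```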

14.1. Types of instance metadata

Cloud users, cloud administrators, and the Compute service can pass metadata to instances:

Cloud user provided data
Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can pass data to instances by using the user data feature, and by passing key-value pairs as required properties when creating or updating an instance.
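As an illustration, user data is typically a first-boot script that the Compute API accepts as a base64-encoded string. The script content below is a made-up example:

```python
import base64

# Hypothetical first-boot script; cloud-init inside the guest runs it once.
USER_DATA = """#!/bin/bash
echo "configured by user data" > /var/log/firstboot.log
"""

def encode_user_data(script: str) -> str:
    # The Compute API expects the user_data field as base64-encoded text.
    return base64.b64encode(script.encode()).decode()
```

From the command line, the same script can be passed as a file with `openstack server create --user-data <file>`, which handles the encoding.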
Cloud administrator provided data

The Red Hat OpenStack Services on OpenShift (RHOSO) administrator uses the vendordata feature to pass data to instances. The Compute service provides the vendordata modules StaticJSON and DynamicJSON to allow administrators to pass metadata to instances:

  • StaticJSON: (Default) Use for metadata that is the same for all instances.
  • DynamicJSON: Use for metadata that is different for each instance. This module makes a request to an external REST service to determine what metadata to add to an instance.

The instance can read the vendordata from the following read-only files:

  • /openstack/{version}/vendor_data.json: metadata from the StaticJSON module.
  • /openstack/{version}/vendor_data2.json: metadata from the DynamicJSON module.
Compute service provided data
The Compute service uses its internal implementation of the metadata service to pass information to the instance, such as the requested hostname for the instance, and the availability zone the instance is in. This happens by default and requires no configuration by the cloud user or administrator.

14.2. Disabling config drive

To disable the attachment of a config drive when launching an instance, you must set the force_config_drive parameter to false.

Warning

You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet custom resource (CR) that defines the nodes for which you want to disable config drive. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config and set the value of force_config_drive under [DEFAULT] to false:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nova-extra-config
      namespace: openstack
    data:
      35-nova-config-drive.conf: |
        [DEFAULT]
        force_config_drive = false

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_config_drive_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
        name: compute-config-drive
  3. In the compute_config_drive_deploy.yaml CR, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines which nodes you are disabling config drive on.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses the nova-extra-config.yaml ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: compute-config-drive
    spec:
      nodeSets:
        - openstack-edpm
        - compute-config-drive
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  4. Save the compute_config_drive_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_config_drive_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME                   STATUS   MESSAGE
    compute-config-drive   True     Deployed
    Tip

    Append the -w option to the end of the get command to track deployment progress.

  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list

14.3. Configuring dynamic metadata for instances

By configuring dynamic metadata, you can provide instance-specific or deployment-specific metadata, generated by an external service, to individual instances. The instance can access the metadata through both the config drive and the metadata service. To ensure that the metadata content is the same regardless of how it is accessed, you must configure both the data plane and the control plane.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet custom resource (CR) that defines the nodes for which you want to configure dynamic metadata. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.
  • The external HTTP service that provides the dynamic metadata is accessible from both the control plane and the data plane Compute nodes.

Procedure

  1. Configure the data plane:

    1. Create or update the ConfigMap CR named nova-extra-config and add the following configuration to the data section of the ConfigMap to enable the DynamicJSON provider and define your metadata targets:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nova-extra-config
        namespace: openstack
      data:
        40-nova-vendordata.conf: |
          [api]
          vendordata_providers=DynamicJSON,StaticJSON
          vendordata_dynamic_targets=<name1>@<url1>,<name2>@<url2>
      • Replace each <name>@<url> pair with a name of your choice and the URL of the external service that provides the dynamic metadata, for example, target@http://127.0.0.1:125. The vendordata_dynamic_targets option takes a comma-separated list, so you can configure multiple external services. The Compute (nova) service gathers the dynamic metadata for an instance from each external service, merges the results, and provides the merged metadata to the instance.

        For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.
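The external REST service itself is outside the Compute service: for each instance, the DynamicJSON provider sends it a POST request with a JSON body describing the instance, and the JSON reply becomes that target's entry in the merged metadata. The following is a minimal sketch of such a service; the request field hostname and the reply keys tier and echoed-hostname are illustrative assumptions, not a fixed contract:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class VendordataHandler(BaseHTTPRequestHandler):
    """Answers the per-instance POST sent by the DynamicJSON provider."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # The provider POSTs a JSON document describing the instance;
        # parse defensively because the exact fields can vary.
        instance = json.loads(self.rfile.read(length) or b"{}")
        # The reply body becomes this target's entry in vendor_data2.json.
        # These keys are made up for the sketch.
        reply = {
            "tier": "general",
            "echoed-hostname": instance.get("hostname"),
        }
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Keep the sketch quiet instead of logging every request to stderr.
        pass

def start_service(port=0):
    # port=0 picks a free port; a real deployment binds a fixed one.
    server = HTTPServer(("127.0.0.1", port), VendordataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Registering this service as, for example, target1@http://127.0.0.1:<port> would make its reply appear under the target1 key of the merged metadata.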

    2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_vendordata_deploy.yaml on your workstation:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneDeployment
      metadata:
        name: compute-vendordata

      For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in Deploying Red Hat OpenStack Services on OpenShift.

    3. In the compute_vendordata_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes for which you want to configure dynamic metadata.

      Warning

      In certain deployment configurations, when you modify the nova-extra-config.yaml ConfigMap, you might directly affect more than one node set. To check if a node set uses the nova-extra-config ConfigMap and is affected by the reconfiguration, complete the following steps:

      1. Check the services list of the node set and find the name of the DataPlaneService that points to the Compute (nova) service.
      2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

        If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and is affected by the configuration changes in this ConfigMap. You must create a new DataPlaneService pointing to a separate ConfigMap for the node sets that you do not want to reconfigure.

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneDeployment
      metadata:
        name: compute-vendordata
      spec:
        nodeSets:
          - openstack-edpm
          - vendordata
          - ...
          - <nodeSet_name>
      • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
    4. Save the compute_vendordata_deploy.yaml deployment file.
    5. Deploy the data plane:

      $ oc create -f compute_vendordata_deploy.yaml
    6. Verify that the data plane is deployed:

      $ oc get openstackdataplanenodeset
      NAME                 STATUS   MESSAGE
      compute-vendordata   True     Deployed
    7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

      $ oc rsh -n openstack openstackclient
      $ openstack resource provider list
  2. Configure the control plane:

    1. Open your OpenStackControlPlane custom resource file, openstack_control_plane.yaml.
    2. In openstack_control_plane.yaml, identify the location for the customServiceConfig:

      • If nova.template.metadataServiceTemplate.enabled is True, add the configuration under nova.template.metadataServiceTemplate.
      • If nova.template.metadataServiceTemplate.enabled is False, add the configuration under nova.template.cellTemplates.<cell_name>.metadataServiceTemplate.
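For example, with the top-level metadata service enabled, the placement looks like the following trimmed sketch of the CR; only the fields relevant to this step are shown, and the metadata name is a placeholder:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  nova:
    template:
      metadataServiceTemplate:
        enabled: true
        customServiceConfig: |
          [api]
          vendordata_providers=DynamicJSON,StaticJSON
```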
    3. Add the following configuration to the appropriate customServiceConfig section based on your metadata service deployment:

       [api]
       vendordata_providers=DynamicJSON,StaticJSON
       vendordata_dynamic_targets=<name1>@<url1>,<name2>@<url2>
      • Replace each <name>@<url> pair with a name of your choice and the URL of the external service that provides the dynamic metadata, for example, target@http://127.0.0.1:125. The vendordata_dynamic_targets option takes a comma-separated list, so you can configure multiple external services. The Compute (nova) service gathers the dynamic metadata for an instance from each external service, merges the results, and provides the merged metadata to the instance.
    4. Update the control plane:

      $ oc apply -f openstack_control_plane.yaml -n openstack
    5. Check if Red Hat OpenShift Container Platform (RHOCP) created the resources related to the OpenStackControlPlane CR:

      $ oc get openstackcontrolplane -n openstack

      The OpenStackControlPlane resources are created when the status is "Setup complete".

      Tip

      Use the -w option with the get command to track deployment progress:

      $ oc get -w openstackcontrolplane -n openstack
    6. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of your cells:

      $ oc get pods -n openstack

      The control plane is deployed when all the pods are either completed or running.

Verification

  1. Create a new instance:

    $ openstack server create --flavor <flavor> --image <image> --nic <nic> --use-config-drive vm1
  2. Use a remote console or SSH to access the instance. Within the instance, use one of the following methods to query the dynamic metadata:

    • The metadata service from the address http://169.254.169.254/openstack/latest/vendor_data2.json, for example:

      $ curl http://169.254.169.254/openstack/latest/vendor_data2.json
      {"target1": {...}, "target2":{...}}
    • The config drive by mounting the extra disk drive provided to the instance and accessing the file openstack/latest/vendor_data2.json on it, for example:

      $ lsblk
      NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sr0      11:0    1  474K  0 rom
      vda     252:0    0    1G  0 disk
      |-vda1  252:1    0 1015M  0 part /
      `-vda15 252:15   0    8M  0 part
      vdb     252:16   0    1G  0 disk /mnt
      $ mkdir /tmp/conf
      $ mount /dev/sr0 /tmp/conf
      $ cat /tmp/conf/openstack/latest/vendor_data2.json
      {"target1": {...}, "target2":{...}}

