Chapter 14. Adding metadata to instances
The Compute (nova) service uses metadata to pass configuration information to instances on launch. The instance can access the metadata by using a config drive or the metadata service.
- Config drive
- By default, every instance has a config drive. Config drives are special drives that you can attach to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. To disable the config drive, see Disabling config drive.
- Metadata service
- The Compute service provides the metadata service as a REST API, which can be used to retrieve data specific to an instance. Instances access this service at `169.254.169.254` or at `fe80::a9fe:a9fe`.
14.1. Types of instance metadata
Cloud users, cloud administrators, and the Compute service can pass metadata to instances:
- Cloud user provided data
- Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can pass data to instances by using the user data feature, and by passing key-value pairs as required properties when creating or updating an instance.
- Cloud administrator provided data
- The Red Hat OpenStack Services on OpenShift (RHOSO) administrator uses the vendordata feature to pass data to instances. The Compute service provides the vendordata modules `StaticJSON` and `DynamicJSON` to allow administrators to pass metadata to instances:
    - `StaticJSON`: (Default) Use for metadata that is the same for all instances.
    - `DynamicJSON`: Use for metadata that is different for each instance. This module makes a request to an external REST service to determine what metadata to add to an instance.

    Vendordata configuration is located in one of the following read-only files on the instance:

    - `/openstack/{version}/vendor_data.json`
    - `/openstack/{version}/vendor_data2.json`
- Compute service provided data
- The Compute service uses its internal implementation of the metadata service to pass information to the instance, such as the requested hostname for the instance, and the availability zone the instance is in. This happens by default and requires no configuration by the cloud user or administrator.
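To make the `DynamicJSON` flow concrete: the Compute service sends a request carrying instance context to each configured external REST target and records the JSON the target returns. The following is a minimal, hedged sketch of such a target, not a nova-defined interface; the endpoint behavior and the response fields (`backup-policy`, `seen-instance-id`) are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class VendordataHandler(BaseHTTPRequestHandler):
    """Answer every POST with a small JSON document.

    A real DynamicJSON target can inspect the instance context in the
    request body and tailor its response per instance; this sketch
    echoes one field back to show the idea.
    """

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        context = json.loads(self.rfile.read(length) or b"{}")
        # Hypothetical response fields -- not a nova-defined schema.
        body = json.dumps({
            "backup-policy": "daily",
            "seen-instance-id": context.get("instance-id", "unknown"),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass


def serve_once(port=0):
    """Start the sketch server on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), VendordataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]


if __name__ == "__main__":
    server, port = serve_once()
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps({"instance-id": "abc-123"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["seen-instance-id"])  # abc-123
    server.shutdown()
```

In a deployment, a service like this would run at the URL you configure in `vendordata_dynamic_targets` (see the procedures below), and its response would appear under its target name in `vendor_data2.json`.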
14.2. Disabling config drive
To disable the attachment of a config drive when launching an instance, you must set the `force_config_drive` parameter to `false`.
You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.
Prerequisites

- The `oc` command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with `cluster-admin` privileges.
- You have selected the `OpenStackDataPlaneNodeSet` custom resource (CR) that defines the nodes for which you want to disable the config drive. For more information about creating an `OpenStackDataPlaneNodeSet` CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in Deploying Red Hat OpenStack Services on OpenShift.
Procedure

1. Create or update the `ConfigMap` CR named `nova-extra-config.yaml` and set the value of `force_config_drive` under `[DEFAULT]` to `false`:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nova-extra-config
      namespace: openstack
    data:
      35-nova-config-drive.conf: |
        [DEFAULT]
        force_config_drive = false
    ```

    For more information about creating `ConfigMap` objects, see Creating and using config maps in Nodes.

2. Create a new `OpenStackDataPlaneDeployment` CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named `compute_config_drive_deploy.yaml` on your workstation:

    ```yaml
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: compute-config-drive
    ```

3. In the `compute_config_drive_deploy.yaml` CR, specify `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs that you want to deploy. Ensure that you include the `OpenStackDataPlaneNodeSet` CR that you selected as a prerequisite. That `OpenStackDataPlaneNodeSet` CR defines the nodes on which you are disabling the config drive.

    Warning: If your deployment has more than one node set, changes to the `nova-extra-config.yaml` `ConfigMap` might directly affect more than one node set, depending on how the node sets and the `DataPlaneServices` are configured. To check whether a node set uses the `nova-extra-config` `ConfigMap`, and is therefore affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the `DataPlaneService` that points to nova.
    2. Ensure that the value of the `edpmServiceType` field of the `DataPlaneService` is set to `nova`.
    3. If the `dataSources` list of the `DataPlaneService` contains a `configMapRef` named `nova-extra-config`, then this node set uses the `nova-extra-config.yaml` `ConfigMap` and is therefore affected by the configuration changes in this `ConfigMap`. If some of the affected node sets should not be reconfigured, you must create a new `DataPlaneService` that points to a separate `ConfigMap` for these node sets.

    ```yaml
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: compute-config-drive
    spec:
      nodeSets:
        - openstack-edpm
        - compute-config-drive
        - ...
        - <nodeSet_name>
    ```

    - Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

4. Save the `compute_config_drive_deploy.yaml` deployment file.

5. Deploy the data plane:

    ```console
    $ oc create -f compute_config_drive_deploy.yaml
    ```

6. Verify that the data plane is deployed:

    ```console
    $ oc get openstackdataplanenodeset
    NAME                   STATUS   MESSAGE
    compute-config-drive   True     Deployed
    ```

    Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

7. Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

    ```console
    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
    ```
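The drop-in file that the `ConfigMap` adds (`35-nova-config-drive.conf` in the example above) is plain INI-style configuration. As a quick workstation sanity check, hedged as an illustrative sketch (nova itself reads its configuration through its own option-parsing machinery, not this code), you can confirm that the snippet parses to the value you intend before deploying it:

```python
import configparser

# The same snippet the ConfigMap example places in
# 35-nova-config-drive.conf.
SNIPPET = """\
[DEFAULT]
force_config_drive = false
"""

parser = configparser.ConfigParser()
parser.read_string(SNIPPET)

# getboolean() accepts false/no/0, so a typo such as "flase" would
# raise an error here instead of surfacing after a redeployment.
print(parser.getboolean("DEFAULT", "force_config_drive"))  # False
```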
14.3. Configuring dynamic metadata for instances
By configuring dynamic metadata, you can provide instance-specific or deployment-specific metadata, generated by an external service, to individual instances. The instance can access the metadata through both the config drive and the metadata service. To ensure that the metadata has the same content regardless of how it is accessed, you must configure both the data plane and the control plane.
Prerequisites

- The `oc` command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) with `cluster-admin` privileges.
- You have selected the `OpenStackDataPlaneNodeSet` custom resource (CR) that defines the nodes for which you want to configure dynamic metadata. For more information about creating an `OpenStackDataPlaneNodeSet` CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.
- The external HTTP service is accessible from both the control plane and the data plane Compute nodes.
Procedure

1. Configure the data plane:

    1. Create or update the `ConfigMap` CR named `nova-extra-config` and add the following configuration to the `data` section of the `ConfigMap` to enable the `DynamicJSON` provider and define your metadata targets:

        ```yaml
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: nova-extra-config
          namespace: openstack
        data:
          40-nova-vendordata.conf: |
            [api]
            vendordata_providers=DynamicJSON,StaticJSON
            vendordata_dynamic_targets=<name>@<external-http-service-url>
            vendordata_dynamic_targets=<name>@<external-http-service-url>
        ```

        Replace `<name>@<external-http-service-url>` with the external service that provides the dynamic metadata, for example, `target@http://127.0.0.1:125`. You can configure multiple different external services. The Compute (nova) service gathers the dynamic metadata for an instance from each external service, merges the results, and provides the merged metadata to the instance.

        For more information about creating `ConfigMap` objects, see Creating and using config maps in Nodes.

    2. Create a new `OpenStackDataPlaneDeployment` CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named `compute_vendordata_deploy.yaml` on your workstation:

        ```yaml
        apiVersion: dataplane.openstack.org/v1beta1
        kind: OpenStackDataPlaneDeployment
        metadata:
          name: compute-vendordata
        ```

        For more information about creating an `OpenStackDataPlaneDeployment` CR, see Deploying the data plane in Deploying Red Hat OpenStack Services on OpenShift.

    3. In `compute_vendordata_deploy.yaml`, specify `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs that you want to deploy. Ensure that you include the `OpenStackDataPlaneNodeSet` CR that you selected as a prerequisite. That `OpenStackDataPlaneNodeSet` CR defines the nodes for which you want to configure dynamic metadata.

        Warning: In certain deployment configurations, when you modify the `nova-extra-config.yaml` `ConfigMap`, you might directly affect more than one node set. To check whether a node set uses the `nova-extra-config` `ConfigMap` and is affected by the reconfiguration, complete the following steps:

        1. Check the services list of the node set and find the name of the `DataPlaneService` that points to the Compute (nova) service.
        2. Ensure that the value of the `edpmServiceType` field of the `DataPlaneService` is set to `nova`.
        3. If the `dataSources` list of the `DataPlaneService` contains a `configMapRef` named `nova-extra-config`, then this node set uses this `ConfigMap` and is affected by the configuration changes in this `ConfigMap`. You must create a new `DataPlaneService` that points to a separate `ConfigMap` for the node sets that you do not want to reconfigure.

        ```yaml
        apiVersion: dataplane.openstack.org/v1beta1
        kind: OpenStackDataPlaneDeployment
        metadata:
          name: compute-vendordata
        spec:
          nodeSets:
            - openstack-edpm
            - vendordata
            - ...
            - <nodeSet_name>
        ```

        - Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

    4. Save the `compute_vendordata_deploy.yaml` deployment file.

    5. Deploy the data plane:

        ```console
        $ oc create -f compute_vendordata_deploy.yaml
        ```

    6. Verify that the data plane is deployed:

        ```console
        $ oc get openstackdataplanenodeset
        NAME                 STATUS   MESSAGE
        compute-vendordata   True     Deployed
        ```

    7. Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

        ```console
        $ oc rsh -n openstack openstackclient
        $ openstack resource provider list
        ```
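Each `vendordata_dynamic_targets` entry pairs a target name with the URL of its external service, separated by the first `@`; the name later becomes the key under which that service's response appears in `vendor_data2.json`. As an illustrative sketch only (this is not the Compute service's own parsing code), splitting and validating such an entry could look like:

```python
def split_target(entry: str) -> tuple[str, str]:
    """Split a vendordata_dynamic_targets entry of the form <name>@<url>.

    The name is everything before the first '@'; the remainder is the
    URL of the external HTTP service. Illustrative only -- not nova's
    parser.
    """
    name, sep, url = entry.partition("@")
    if not sep or not name or not url:
        raise ValueError(f"malformed target entry: {entry!r}")
    return name, url


if __name__ == "__main__":
    name, url = split_target("target@http://127.0.0.1:125")
    print(name, url)  # target http://127.0.0.1:125
```

A check like this can catch a malformed entry, such as a missing `@`, before you roll the `ConfigMap` out to a node set.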
2. Configure the control plane:

    1. Open your `OpenStackControlPlane` custom resource file, `openstack_control_plane.yaml`.

    2. In `openstack_control_plane.yaml`, identify the location for the `customServiceConfig`:

        - If `nova.template.metadataServiceTemplate.enabled` is `true`, add the configuration under `nova.template.metadataServiceTemplate`.
        - If `nova.template.metadataServiceTemplate.enabled` is `false`, add the configuration under `nova.template.cellTemplates.<cell_name>.metadataServiceTemplate`.

    3. Add the following configuration to the appropriate `customServiceConfig` section based on your metadata service deployment:

        ```ini
        [api]
        vendordata_providers=DynamicJSON,StaticJSON
        vendordata_dynamic_targets=<name>@<external-http-service-url>
        vendordata_dynamic_targets=<name>@<external-http-service-url>
        ```

        - Replace `<name>@<external-http-service-url>` with the external service that provides the dynamic metadata, for example, `target@http://127.0.0.1:125`. You can configure multiple external services. The Compute (nova) service gathers the dynamic metadata for an instance from each external service, merges the results, and provides the merged metadata to the instance.

    4. Update the control plane:

        ```console
        $ oc apply -f openstack_control_plane.yaml -n openstack
        ```

    5. Check whether Red Hat OpenShift Container Platform (RHOCP) created the resources related to the `OpenStackControlPlane` CR:

        ```console
        $ oc get openstackcontrolplane -n openstack
        ```

        The `OpenStackControlPlane` resources are created when the status is "Setup complete".

        Tip: Use the `-w` option with the `get` command to track deployment progress:

        ```console
        $ oc get -w openstackcontrolplane -n openstack
        ```

    6. Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of your cells:

        ```console
        $ oc get pods -n openstack
        ```

        The control plane is deployed when all the pods are either completed or running.
Verification

1. Create a new instance:

    ```console
    $ openstack server create --flavor <flavor> --image <image> --nic <nic> --use-config-drive vm1
    ```

2. Use a remote console or SSH to access the instance. Within the instance, use one of the following methods to query the dynamic metadata:

    - The metadata service at the address `http://169.254.169.254/openstack/latest/vendor_data2.json`, for example:

        ```console
        $ curl http://169.254.169.254/openstack/latest/vendor_data2.json
        {"target1": {...}, "target2":{...}}
        ```

    - The config drive, by mounting the extra disk drive provided to the instance and accessing the file `openstack/latest/vendor_data2.json` on it, for example:

        ```console
        $ lsblk
        NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        sr0      11:0    1  474K  0 rom
        vda     252:0    0    1G  0 disk
        |-vda1  252:1    0 1015M  0 part /
        `-vda15 252:15   0    8M  0 part
        vdb     252:16   0    1G  0 disk /mnt
        $ mkdir /tmp/conf
        $ mount /dev/sr0 /tmp/conf
        $ cat /tmp/conf/openstack/latest/vendor_data2.json
        {"target1": {...}, "target2":{...}}
        ```
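Workloads inside the instance can also consume `vendor_data2.json` programmatically rather than with `curl`. A hedged sketch follows, using an inline stand-in for the real document; the target names (`target1`, `target2`) and payload fields are placeholders, not values any deployment is guaranteed to produce.

```python
import json

# Stand-in for the document served at
# http://169.254.169.254/openstack/latest/vendor_data2.json, or found
# at openstack/latest/vendor_data2.json on the mounted config drive.
SAMPLE = '{"target1": {"backup-policy": "daily"}, "target2": {"tier": "gold"}}'

doc = json.loads(SAMPLE)

# Each top-level key is one configured vendordata_dynamic_targets
# name; its value is the JSON object that external service returned.
for target, payload in sorted(doc.items()):
    print(target, payload)
```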