Chapter 4. Enabling TLS on a deployed RHOSO environment
TLS is enabled by default in Red Hat OpenStack Services on OpenShift (RHOSO) environments. If you disabled TLS when you deployed your RHOSO environment, or if you adopted your Red Hat OpenStack Platform 17.1 deployment to a RHOSO environment, then you can reenable TLS after deployment.
Enabling TLS on a deployed RHOSO environment involves some data plane downtime, because connectivity from the control plane to RabbitMQ and OVS is lost during the redeployment.
- If your deployment uses the default configuration where no floating IP connectivity is directed through the control plane, then this downtime does not affect the workload hosted on the RHOSO environment.
- If your deployment routes traffic through the control plane, then the downtime will impact the workload hosted on the RHOSO environment.
- New workloads cannot be created and existing workloads cannot be managed with the OpenStack API while the control plane and data plane are being updated.
4.1. Enabling TLS on a deployed RHOSO environment error messages
The following error messages are logged when connectivity from the control plane to RabbitMQ and OVS is lost during the redeployment to enable TLS:
Extract from the nova-compute log:

  Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.037 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [98752a36-cf06-4d26-aee8-f5b21bf55aef] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: <RecoverableConnectionError: unknown error>. Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: <RecoverableConnectionError: unknown error>
  Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.566 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [8c795961-cb17-4a6d-82ee-25c862316b40] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: timed out. Trying again in 32 seconds.: socket.timeout: timed out

Extract from the OVN controller log:

  ovn_controller[55433]: 2024-08-09T11:35:47Z|00452|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
  Aug 09 11:35:47 edpm-compute-0 ovn_controller[55433]: 2024-08-09T11:35:47Z|00453|jsonrpc|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: error parsing stream: line 0, column 0, byte 0: invalid character U+0015
  Aug 09 11:35:47 edpm-compute-0 ovn_controller[55433]: 2024-08-09T11:35:47Z|00454|reconnect|WARN|tcp:ovsdbserver-sb.openstack.svc:6642: connection dropped (Protocol error)
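These errors are transient and clear once the redeployment completes. As a quick triage aid, a saved journal extract can be scanned with standard tools. The following is a sketch, not part of the official procedure: the file name is illustrative, and the sample content reproduces the extract above; on a data plane node you could capture the journal first (for example with journalctl) and point the grep at that file.

```shell
# Sketch: count transient AMQP reconnect errors in a saved log extract.
# The file name is illustrative; the sample lines mirror the extract above.
cat > nova_compute.log <<'EOF'
Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.037 2 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: <RecoverableConnectionError: unknown error>. Trying again in 1 seconds.
Aug 09 11:35:49 edpm-compute-0 nova_compute[105613]: 2024-08-09 11:35:49.566 2 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on rabbitmq-cell1.openstack.svc:5672 is unreachable: timed out. Trying again in 32 seconds.
EOF
# Count lines reporting an unreachable AMQP server.
amqp_errors=$(grep -c 'AMQP server .* is unreachable' nova_compute.log)
echo "transient AMQP errors: $amqp_errors"
```

A rising count during the redeployment is expected; a count that keeps growing after the control plane reports Ready warrants investigation.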
4.2. Enabling TLS on a RHOSO environment after deployment
If TLS is disabled in your deployed Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can reenable it on an operational RHOSO environment with minimal disruption.
Prerequisites
- The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation. Add the following spec.tls configuration, if not already present:

  spec:
    tls:
      ingress:
        ca:
          duration: 87600h0m0s
        cert:
          duration: 43800h0m0s
        enabled: true
      podLevel:
        enabled: true
        internal:
          ca:
            duration: 87600h0m0s
          cert:
            duration: 43800h0m0s
        libvirt:
          ca:
            duration: 87600h0m0s
          cert:
            duration: 43800h0m0s
        ovn:
          ca:
            duration: 87600h0m0s
          cert:
            duration: 43800h0m0s
- If the tls configuration is already present in the CR file, then ensure that podLevel is enabled.
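A quick local check can confirm the podLevel setting before you apply the update. This sketch is not part of the official procedure: it writes a minimal stand-in file so it runs anywhere; in practice you would point the grep at your real openstack_control_plane.yaml, which contains the full control plane spec.

```shell
# Sketch: confirm podLevel TLS is enabled in a CR file before applying it.
# A minimal stand-in file is written here so the check is runnable anywhere;
# substitute your real openstack_control_plane.yaml.
cat > sample_control_plane.yaml <<'EOF'
spec:
  tls:
    ingress:
      enabled: true
    podLevel:
      enabled: true
EOF
# The "enabled:" key directly follows "podLevel:" in the layout shown above.
if grep -A1 'podLevel:' sample_control_plane.yaml | grep -q 'enabled: true'; then
  echo "podLevel TLS: enabled"
else
  echo "podLevel TLS: NOT enabled"
fi
```

Note that this grep relies on the key ordering shown in the example; a YAML-aware tool is more robust if your file is laid out differently.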
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  The rabbitmq pods cannot change the TLS configuration on a running environment; therefore, you must delete the existing rabbitmq pods so that the control plane recreates them with TLS enabled:

  $ oc delete pod -n openstack -l app.kubernetes.io/component=rabbitmq

- Wait for the control plane to be ready:
  $ oc wait openstackcontrolplane -n openstack --for=condition=Ready --timeout=400s -l core.openstack.org/openstackcontrolplane

  While waiting for the control plane to be ready, new workloads cannot be created and existing workloads cannot be managed with the OpenStack API. The nova-compute service on the data plane nodes cannot connect to the cell1 RabbitMQ and reports as down:

  $ oc rsh openstackclient
  $ openstack compute service list -c Binary -c Host -c Status -c State
  +----------------+-------------------------------------+---------+-------+
  | Binary         | Host                                | Status  | State |
  +----------------+-------------------------------------+---------+-------+
  | nova-conductor | nova-cell0-conductor-0              | enabled | up    |
  | nova-scheduler | nova-scheduler-0                    | enabled | up    |
  | nova-conductor | nova-cell1-conductor-0              | enabled | up    |
  | nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | down  |
  | nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | down  |
  +----------------+-------------------------------------+---------+-------+

  The OVN controller and the OVN metadata agent cannot connect to the southbound database:

  $ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
  +------------------------------+-------------------------------------+-------+-------+
  | Agent Type                   | Host                                | Alive | State |
  +------------------------------+-------------------------------------+-------+-------+
  | OVN Controller Gateway agent | crc                                 | :-)   | UP    |
  | OVN Controller agent         | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
  | OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | XXX   | UP    |
  | OVN Controller agent         | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
  | OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | XXX   | UP    |
  +------------------------------+-------------------------------------+-------+-------+

  Note: The existing workload is not impacted if workload traffic is not routed through the control plane.
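While waiting, you can track recovery by counting how many nova-compute services still report down. The snippet below is a sketch: it filters a saved copy of the table output shown above; on a live system you would pipe the real openstack compute service list output through the same awk filter.

```shell
# Sketch: count nova-compute services reporting "down" in the service list
# table. The sample file reproduces the rows shown above; on a live system,
# pipe the real command output instead of reading a file.
cat > service_list.txt <<'EOF'
| nova-conductor | nova-cell0-conductor-0              | enabled | up    |
| nova-scheduler | nova-scheduler-0                    | enabled | up    |
| nova-conductor | nova-cell1-conductor-0              | enabled | up    |
| nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | down  |
| nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | down  |
EOF
# Field 2 is the Binary column, field 5 is the State column.
down=$(awk -F'|' '$2 ~ /nova-compute/ && $5 ~ /down/' service_list.txt | wc -l | tr -d ' ')
echo "nova-compute services down: $down"
```

When the count reaches zero after the data plane redeployment, the compute services have reconnected over TLS.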
- Open the OpenStackDataPlaneNodeSet CR file for each node on the data plane, and enable TLS in each:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: <node_set_name>
    namespace: openstack
  spec:
    tlsEnabled: true

  where:

  <node_set_name>
    Specifies the name of the OpenStackDataPlaneNodeSet CR that the node belongs to.
- Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:

  $ oc apply -f openstack_data_plane.yaml -n openstack

- Check that TLS is enabled on each node set:
  $ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled
  true

- Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: <node_set_deployment_name>

  where:

  <node_set_deployment_name>
    Specifies the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
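The naming rule above matches the Kubernetes RFC 1123 subdomain convention, so a candidate name can be checked locally before you create the CR. This is a sketch: the example name is hypothetical, and the regex checks only the character rules, not the maximum length.

```shell
# Sketch: validate a candidate CR name against the stated rule: lower case
# alphanumerics, '-' or '.', starting and ending with an alphanumeric.
name="tls-data-plane-deploy"   # hypothetical example name
if echo "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'; then
  echo "valid: $name"
else
  echo "invalid: $name"
fi
```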
  Tip: Give the OpenStackDataPlaneDeployment CR file a descriptive name that indicates the purpose of the modified node set.

- Add the OpenStackDataPlaneNodeSet CRs that you modified to enable TLS:

  spec:
    nodeSets:
      - <node_set_name>

  Provide the required <node_set_name> for each node on the data plane.
- Save the OpenStackDataPlaneDeployment CR deployment file.

- Deploy the modified OpenStackDataPlaneNodeSet CRs:

  $ oc create -f openstack_data_plane_deploy.yaml -n openstack

  You can view the Ansible logs while the deployment executes:
  $ oc get pod -l app=openstackansibleee -n openstack -w
  $ oc logs -l app=openstackansibleee -f --max-log-requests 10 -n openstack

  If the oc logs command returns an error similar to the following, increase the --max-log-requests value:

  error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

- Verify that the modified OpenStackDataPlaneNodeSet CRs are deployed:

  $ oc get openstackdataplanedeployment -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     Setup Complete

  $ oc get openstackdataplanenodeset -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     NodeSet Ready

  For information about the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
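Rather than guessing a --max-log-requests value when the log-stream limit error appears, one option is to size it from the current pod count. This sketch simulates the pod count so the logic runs anywhere; on a live cluster you would use the oc command shown in the comment instead of the hard-coded value.

```shell
# Sketch: derive a --max-log-requests value from the number of Ansible
# execution pods. On a live cluster, replace the hard-coded value with:
#   pods=$(oc get pod -l app=openstackansibleee -n openstack --no-headers | wc -l)
pods=19   # simulated pod count, matching the example error message above
# Print the command you would then run (echoed here rather than executed).
echo "oc logs -l app=openstackansibleee -f --max-log-requests $pods -n openstack"
```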
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Verify that the nova-compute service is connected again to the TLS-enabled RabbitMQ:

  $ oc rsh openstackclient
  $ openstack compute service list -c Binary -c Host -c Status -c State
  +----------------+-------------------------------------+---------+-------+
  | Binary         | Host                                | Status  | State |
  +----------------+-------------------------------------+---------+-------+
  | nova-conductor | nova-cell0-conductor-0              | enabled | up    |
  | nova-scheduler | nova-scheduler-0                    | enabled | up    |
  | nova-conductor | nova-cell1-conductor-0              | enabled | up    |
  | nova-compute   | edpm-compute-0.ctlplane.example.com | enabled | up    |
  | nova-compute   | edpm-compute-1.ctlplane.example.com | enabled | up    |
  +----------------+-------------------------------------+---------+-------+

- Verify that the OVN agents are running again:

  $ openstack network agent list -c 'Agent Type' -c Host -c Alive -c State
  +------------------------------+-------------------------------------+-------+-------+
  | Agent Type                   | Host                                | Alive | State |
  +------------------------------+-------------------------------------+-------+-------+
  | OVN Controller Gateway agent | crc                                 | :-)   | UP    |
  | OVN Controller agent         | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
  | OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
  | OVN Controller agent         | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
  | OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
  +------------------------------+-------------------------------------+-------+-------+
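The agent verification can be reduced to a pass/fail check: in the Alive column, ":-)" means the agent is alive and "XXX" means it has not reconnected. This sketch runs against a saved copy of the output (the sample file reproduces the healthy rows above); on a live system you would pipe the real command output through the same grep.

```shell
# Sketch: fail if any agent still shows "XXX" in the Alive column. The sample
# file reproduces the healthy output above.
cat > agent_list.txt <<'EOF'
| OVN Controller Gateway agent | crc                                 | :-)   | UP    |
| OVN Controller agent         | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
| OVN Metadata agent           | edpm-compute-1.ctlplane.example.com | :-)   | UP    |
| OVN Controller agent         | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
| OVN Metadata agent           | edpm-compute-0.ctlplane.example.com | :-)   | UP    |
EOF
not_alive=$(grep -c 'XXX' agent_list.txt || true)
if [ "$not_alive" -eq 0 ]; then
  echo "all agents alive"
else
  echo "$not_alive agents not alive"
fi
```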
4.3. Deploying RHOSO with TLS disabled
TLS is enabled by default when you deploy Red Hat OpenStack Services on OpenShift (RHOSO), but you can disable it if you need to.
You can re-enable TLS on an operational RHOSO environment with minimal disruption.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation. Add the following spec.tls configuration, if not already present:

  spec:
    tls:
      ingress:
        enabled: false
      podLevel:
        enabled: false

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml

- Open the OpenStackDataPlaneNodeSet CR file for each node on the data plane, and disable TLS by setting spec.tlsEnabled to false:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: <node_set_name>
    namespace: openstack
  spec:
    tlsEnabled: false

  where:

  <node_set_name>
    Specifies the name of the OpenStackDataPlaneNodeSet CR that the node belongs to.
- Save the updated OpenStackDataPlaneNodeSet CR files and apply the updates:

  $ oc apply -f openstack_data_plane.yaml

- Verify that TLS is disabled on every node set:

  $ oc get openstackdataplanenodeset <node_set_name> -n openstack -o json | jq .spec.tlsEnabled
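A complementary pre-check can be done on disk before you apply the files. This sketch is not part of the official procedure: it writes a minimal stand-in node-set file so it is runnable anywhere; in practice you would grep your real openstack_data_plane.yaml (or each node-set file) instead.

```shell
# Sketch: confirm a node-set CR file sets tlsEnabled: false before applying.
# A stand-in file is written here; substitute your real openstack_data_plane.yaml.
cat > sample_data_plane.yaml <<'EOF'
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  tlsEnabled: false
EOF
if grep -q 'tlsEnabled: false' sample_data_plane.yaml; then
  echo "TLS disabled in sample_data_plane.yaml"
else
  echo "TLS still enabled in sample_data_plane.yaml"
fi
```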