Chapter 3. Creating the control plane
The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.
3.1. Prerequisites
- The RHOCP cluster is prepared for RHOSO network isolation. For more information, see Preparing RHOCP for RHOSO network isolation.
- The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
3.2. Creating the control plane
Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:
- Create the control plane.
- Enable the core, mandatory Red Hat OpenStack Services on OpenShift (RHOSO) services.
Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane
$ oc explain openstackcontrolplane.spec
Procedure
Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  secret: osp-secret
Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  secret: osp-secret
  storageClass: your-RHOCP-storage-class

Note: For information about storage classes, see Creating a storage class.
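Before you apply the CR, you can sanity-check that the top-level fields are present. The following is a minimal sketch, not part of the product tooling; it models the manifest as a plain Python dict (in practice you would load openstack_control_plane.yaml with a YAML parser), and the field names follow the snippets above.

```python
# Minimal sketch: sanity-check top-level OpenStackControlPlane fields
# before running "oc create". Hypothetical helper, not a Red Hat tool.
def check_control_plane(doc: dict) -> list:
    """Return a list of problems found in the manifest."""
    problems = []
    if doc.get("kind") != "OpenStackControlPlane":
        problems.append("kind must be OpenStackControlPlane")
    spec = doc.get("spec", {})
    for field in ("secret", "storageClass"):
        if not spec.get(field):
            problems.append("spec.%s is missing" % field)
    return problems

doc = {
    "apiVersion": "core.openstack.org/v1beta1",
    "kind": "OpenStackControlPlane",
    "metadata": {"name": "openstack-control-plane"},
    "spec": {"secret": "osp-secret",
             "storageClass": "your-RHOCP-storage-class"},
}
print(check_control_plane(doc))  # []
```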
Add configuration for the following core, mandatory services:
Block Storage service (cinder):
cinder:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    secret: osp-secret
    cinderAPI:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    cinderScheduler:
      replicas: 1
    cinderBackup:
      networkAttachments:
      - storage
      replicas: 0 # backend needs to be configured
    cinderVolumes:
      volume1:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured
Compute service (nova):
nova:
  apiOverride:
    route: {}
  template:
    apiServiceTemplate:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    metadataServiceTemplate:
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    secret: osp-secret
Note: A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

A Galera cluster for use by all RHOSO services (openstack), and a Galera cluster for use by the Compute service for cell1 (openstack-cell1):

galera:
  templates:
    openstack:
      storageRequest: 5000M
      secret: osp-secret
      replicas: 3
    openstack-cell1:
      storageRequest: 5000M
      secret: osp-secret
      replicas: 3
Identity service (keystone):

keystone:
  apiOverride:
    route: {}
  template:
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    secret: osp-secret
Image service (glance):
glance:
  apiOverrides:
    default:
      route: {}
  template:
    databaseInstance: openstack
    storageClass: ""
    storageRequest: 10G
    secret: osp-secret
    keystoneEndpoint: default
    glanceAPIs:
      default:
        type: single
        replicas: 1
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
Note: You must configure a back end for the Image service. If you do not configure a back end for the Image service, then the service is deployed but not activated (replicas: 0). For information about configuring a back end for the Image service, see the Configuring storage guide.

Key Management service (barbican):

barbican:
  apiOverride:
    route: {}
  template:
    databaseInstance: openstack
    secret: osp-secret
    barbicanAPI:
      replicas: 1
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
    barbicanWorker:
      replicas: 1
    barbicanKeystoneListener:
      replicas: 1
Memcached:
memcached:
  templates:
    memcached:
      replicas: 3
Networking service (neutron):
neutron:
  apiOverride:
    route: {}
  template:
    replicas: 3
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    secret: osp-secret
    networkAttachments:
    - internalapi
Object Storage service (swift):
swift:
  enabled: true
  proxyOverride:
    route: {}
  template:
    swiftProxy:
      networkAttachments:
      - storage
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      replicas: 2
    swiftRing:
      ringReplicas: 3
    swiftStorage:
      networkAttachments:
      - storage
      replicas: 3
      storageClass: local-storage
      storageRequest: 10Gi
OVN:
ovn:
  template:
    ovnDBCluster:
      ovndbcluster-nb:
        replicas: 3
        dbType: NB
        storageRequest: 10G
        networkAttachment: internalapi
      ovndbcluster-sb:
        dbType: SB
        storageRequest: 10G
        networkAttachment: internalapi
    ovnNorthd:
      networkAttachment: internalapi
    ovnController:
      networkAttachment: tenant
      nicMappings:
        <network_name>: <nic_name>
- Replace <network_name> with the name of the network your gateway is on.
- Replace <nic_name> with the name of the NIC connecting to the gateway network.
- Optional: Add additional <network_name>: <nic_name> pairs under nicMappings as required.
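The replacement rules above can be sketched as a quick local check. This is an illustrative helper, not product tooling; the network and NIC names are hypothetical, and the name patterns are assumptions (Linux limits interface names to 15 characters).

```python
# Minimal sketch: validate <network_name>: <nic_name> pairs before
# adding them under nicMappings. Patterns are illustrative assumptions.
import re

def valid_nic_mappings(mappings: dict) -> bool:
    """Each key is a network name, each value a host NIC name."""
    name_re = re.compile(r"^[a-z0-9][a-z0-9-]*$")   # simple sanity pattern
    nic_re = re.compile(r"^[A-Za-z0-9_.-]{1,15}$")  # kernel IFNAMSIZ limit
    return all(name_re.match(net) and nic_re.match(nic)
               for net, nic in mappings.items())

print(valid_nic_mappings({"datacentre": "enp6s0"}))  # True
print(valid_nic_mappings({"datacentre": "a" * 20}))  # False
```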
Placement service (placement):
placement:
  apiOverride:
    route: {}
  template:
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.80
          spec:
            type: LoadBalancer
    databaseInstance: openstack
    secret: osp-secret
RabbitMQ:
rabbitmq:
  templates:
    rabbitmq:
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.85
          spec:
            type: LoadBalancer
    rabbitmq-cell1:
      replicas: 3
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.86
          spec:
            type: LoadBalancer
Telemetry service (ceilometer, prometheus):
telemetry:
  enabled: true
  template:
    metricStorage:
      enabled: true
      monitoringStack:
        alertingEnabled: true
        scrapeInterval: 30s
        storage:
          strategy: persistent
          retention: 24h
          persistent:
            pvcStorageRequest: 20G
    autoscaling:
      enabled: false
      aodh:
        passwordSelectors:
        databaseUser: aodh
        databaseInstance: openstack
        memcachedInstance: memcached
        secret: osp-secret
      heatInstance: heat
    ceilometer:
      enabled: true
      secret: osp-secret
    logging:
      enabled: false
      network: internalapi
      ipaddr: <ip_address>
- Replace <ip_address> with the IP address for your environment.
Create the control plane:
$ oc create -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                                 STATUS    MESSAGE
openstack-galera-network-isolation   Unknown   Setup started
The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack
The control plane is deployed when all the pods are either completed or running.
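The "completed or running" condition can be scripted when you deploy repeatedly. The following is a minimal sketch that parses oc get pods output; the pod names in the sample are illustrative, not actual deployment output.

```python
# Minimal sketch: list pods from "oc get pods -n openstack" output that
# are neither Running nor Completed. Sample output is illustrative.
def pending_pods(oc_output: str) -> list:
    bad = []
    for line in oc_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, status = fields[0], fields[2]
        if status not in ("Running", "Completed"):
            bad.append(name)
    return bad

sample = """\
NAME                  READY   STATUS      RESTARTS   AGE
keystone-7f5c9b-abc   1/1     Running     0          5m
glance-db-create-xyz  0/1     Completed   0          5m
cinder-api-0          0/1     Pending     0          5m
"""
print(pending_pods(sample))  # ['cinder-api-0']
```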
Verification
Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient
Confirm that the internal service endpoints are registered with each service:
$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
+--------------+-----------+---------------------------------------------------------------+
| Service Name | Interface | URL                                                           |
+--------------+-----------+---------------------------------------------------------------+
| glance       | internal  | http://glance-internal.openstack.svc:9292                     |
| glance       | public    | http://glance-public-openstack.apps.ostest.test.metalkube.org |
+--------------+-----------+---------------------------------------------------------------+
Exit the OpenStackClient pod:

$ exit
3.3. Example OpenStackControlPlane CR for a core control plane
The following example OpenStackControlPlane CR is a complete core control plane configuration that includes all the key services that must always be enabled for a successful deployment.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  secret: osp-secret
  storageClass: your-RHOCP-storage-class 1
  cinder: 2
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup: 3
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured
      cinderVolumes: 4
        volume1:
          networkAttachments: 5
          - storage
          replicas: 0 # backend needs to be configured
  nova: 6
    apiOverride: 7
      route: {}
    template:
      apiServiceTemplate:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi 8
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80 9
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      secret: osp-secret
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storageClass: ""
      storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          type: single
          replicas: 1
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 1
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 1
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
        replicas: 3
  neutron:
    apiOverride:
      route: {} 10
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageClass: local-storage
        storageRequest: 10Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        networkAttachment: internalapi
      ovnController:
        networkAttachment: tenant
        nicMappings:
          <network_name>: <nic_name>
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
  rabbitmq: 11
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          passwordSelectors:
          databaseUser: aodh
          databaseInstance: openstack
          memcachedInstance: memcached
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
        network: internalapi
        ipaddr: <ip_address>
1. The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
2. Service-specific parameters for the Block Storage service (cinder).
3. The Block Storage service back end. For more information on configuring storage services, see the Configuring storage guide.
4. The Block Storage service configuration. For more information on configuring storage services, see the Configuring storage guide.
5. The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.
   Note: If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster and ovnNorthd services use the internalapi network; and the ovnController service uses the tenant network.
6. Service-specific parameters for the Compute service (nova).
7. Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
8. The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
9. The virtual IP (VIP) address for the service. The IP is shared with other services by default.
10. Customized service API route definition. For more information, see Route-specific annotations in the RHOCP Networking guide.
11. The RabbitMQ instances exposed to an isolated network.
    Note: Multiple RabbitMQ instances cannot share the same VIP because they use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.
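The distinct-VIP rule for RabbitMQ can be checked mechanically. The following is a minimal sketch over the templates shown earlier; the dictionary layout mirrors the CR structure but is hand-built here for illustration.

```python
# Minimal sketch: verify that each RabbitMQ template requests a distinct
# loadBalancerIP, since instances on the same port cannot share a VIP.
def vips_are_distinct(templates: dict) -> bool:
    ips = [t["override"]["service"]["metadata"]["annotations"]
            ["metallb.universe.tf/loadBalancerIPs"]
           for t in templates.values()]
    return len(ips) == len(set(ips))

templates = {
    "rabbitmq":       {"override": {"service": {"metadata": {"annotations":
        {"metallb.universe.tf/loadBalancerIPs": "172.17.0.85"}}}}},
    "rabbitmq-cell1": {"override": {"service": {"metadata": {"annotations":
        {"metallb.universe.tf/loadBalancerIPs": "172.17.0.86"}}}}},
}
print(vips_are_distinct(templates))  # True
```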
3.4. Adding the Bare Metal Provisioning service (ironic) to the control plane
If you want your cloud users to be able to launch bare-metal instances, you must configure the control plane with the Bare Metal Provisioning service (ironic).
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the following cellTemplates configuration to the nova service configuration:

  nova:
    apiOverride:
      route: {}
    template:
      ...
      secret: osp-secret
      cellTemplates:
        cell0:
          cellDatabaseUser: nova_cell0
          hasAPIAccess: true
        cell1:
          cellDatabaseUser: nova_cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          hasAPIAccess: true
          novaComputeTemplates:
            compute-ironic: 1
              computeDriver: ironic.IronicDriver
1. The name of the Compute service. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.
- Create the network that the ironic service pod attaches to, for example, baremetal. For more information about how to create an isolated network, see Preparing RHOCP for RHOSO network isolation.
- Enable and configure the ironic service:

  spec:
    ...
    ironic:
      enabled: true
      template:
        rpcTransport: oslo
        databaseInstance: openstack
        ironicAPI:
          replicas: 1
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
        ironicConductors:
        - replicas: 1
          storageRequest: 10G
          networkAttachments:
          - baremetal 1
          provisionNetwork: baremetal
          customServiceConfig: |
            [neutron]
            cleaning_network = provisioning
            provisioning_network = provisioning
            rescuing_network = provisioning
        ironicInspector:
          replicas: 0
          networkAttachments:
          - baremetal
          inspectionNetwork: baremetal
        ironicNeutronAgent:
          replicas: 1
        secret: osp-secret
1. The NetworkAttachmentDefinition CR for your baremetal network.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                                 STATUS    MESSAGE
openstack-network-isolation-ironic   Unknown   Setup started
The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack
The control plane is deployed when all the pods are either completed or running.
Verification
Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient
Confirm that the internal service endpoints are registered with each service:
$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service ironic
+--------------+-----------+---------------------------------------------------------------+
| Service Name | Interface | URL                                                           |
+--------------+-----------+---------------------------------------------------------------+
| ironic       | internal  | http://ironic-internal.openstack.svc:9292                     |
| ironic       | public    | http://ironic-public-openstack.apps.ostest.test.metalkube.org |
+--------------+-----------+---------------------------------------------------------------+
Exit the openstackclient pod:

$ exit
3.5. Adding Compute cells to the control plane
You can use cells to divide Compute nodes in large deployments into groups. Each cell has a dedicated message queue, runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata in a database dedicated to instances in that cell.
By default, the control plane creates two cells:
- cell0: The controller cell that manages global components and services, such as the Compute scheduler and the global conductor. This cell also contains a dedicated database to store information about instances that failed to be scheduled to a Compute node. You cannot connect Compute nodes to this cell.
- cell1: The default cell that Compute nodes are connected to when you do not create and configure additional cells.
You can add cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment when you create your control plane or at any time afterwards.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Create a database server for each new cell that you want to add to your RHOSO environment:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack-galera-network-isolation
  spec:
    secret: osp-secret
    storageClass: local-storage
    ...
    galera:
      enabled: true
      templates:
        openstack: 1
          storageRequest: 5G
          secret: cell0-secret
          replicas: 1
        openstack-cell1: 2
          storageRequest: 5G
          secret: cell1-secret
          replicas: 1
        openstack-cell2: 3
          storageRequest: 5G
          secret: cell2-secret
          replicas: 1
Create a message bus with unique IPs for the load balancer for each new cell that you want to add to your RHOSO environment:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-galera-network-isolation
spec:
  secret: osp-secret
  storageClass: local-storage
  ...
  rabbitmq:
    templates:
      rabbitmq: 1
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1: 2
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
      rabbitmq-cell2: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.87
            spec:
              type: LoadBalancer
Add the new cells to the cellTemplates configuration in the nova service configuration:

nova:
  apiOverride:
    route: {}
  template:
    ...
    secret: osp-secret
    cellTemplates:
      cell0:
        cellDatabaseUser: nova_cell0
        hasAPIAccess: true
      cell1:
        cellDatabaseInstance: openstack-cell1
        cellDatabaseUser: nova_cell1
        cellMessageBusInstance: rabbitmq-cell1
        hasAPIAccess: true
      cell2: 1
        cellDatabaseInstance: openstack-cell2
        cellDatabaseUser: nova_cell2
        cellMessageBusInstance: rabbitmq-cell2
        hasAPIAccess: true
1. The name of the new Compute cell. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.

For more information about the properties you can configure for a cell, view the definition for the Nova CRD:

$ oc describe crd nova
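The naming rule for cells and Compute services can be sketched as a simple check before you edit the CR. This is an illustrative helper only; it encodes exactly the constraint stated above (at most 20 characters, lowercase alphanumerics and the - symbol).

```python
# Minimal sketch: validate a cell or Compute service name against the
# documented rule: 1-20 characters, lowercase alphanumerics and "-" only.
import re

def valid_cell_name(name: str) -> bool:
    return bool(re.fullmatch(r"[a-z0-9-]{1,20}", name))

print(valid_cell_name("cell2"))           # True
print(valid_cell_name("compute-ironic"))  # True
print(valid_cell_name("Cell_Two"))        # False
```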
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack
NAME                                 STATUS    MESSAGE
openstack-galera-network-isolation   Unknown   Setup started
The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.

Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created:

$ oc get pods -n openstack | grep cell2
nova-cell2-conductor-0     1/1   Running   2   5d20h
nova-cell2-novncproxy-0    1/1   Running   2   5d20h
openstack-cell2-galera-0   1/1   Running   2   5d20h
rabbitmq-cell2-server-0    1/1   Running   2   5d20h
The control plane is deployed when all the pods are either completed or running.
Optional: Confirm that the new cells are created:
$ oc exec -it nova-cell0-conductor-0 /bin/bash
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                                                                    | Database Connection                                        | Disabled |
+-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | rabbit:                                                                          | mysql+pymysql://nova_cell0:****@openstack/nova_cell0       | False    |
| cell1 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell1.openstack.svc:5672 | mysql+pymysql://nova_cell1:****@openstack-cell1/nova_cell1 | False    |
| cell2 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell2.openstack.svc:5672 | mysql+pymysql://nova_cell2:****@openstack-cell2/nova_cell2 | False    |
+-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
3.6. Enabling the Dashboard service (horizon) interface
You can enable the Dashboard service (horizon) interface for cloud user access to the cloud through a web browser.
Procedure
Obtain the OpenStackControlPlane CR name:

$ oc get openstackcontrolplanes
Enable the Dashboard service in the OpenStackControlPlane CR:

$ oc patch openstackcontrolplanes/<openstackcontrolplane_name> -p='[{"op": "replace", "path": "/spec/horizon/enabled", "value": true}]' --type json
- Replace <openstackcontrolplane_name> with the name of your OpenStackControlPlane CR, for example, openstack-galera-network-isolation.
Retrieve the Dashboard service endpoint URL:
$ oc get horizons horizon -o jsonpath='{.status.endpoint}'
Use this URL to access the Horizon interface.
Verification
To log in as the admin user, obtain the admin password from the AdminPassword parameter in the osp-secret secret:

$ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d
- Open a web browser.
- Enter the Dashboard endpoint URL.
- Log in to the dashboard with your username and password.
3.7. Additional resources
- Kubernetes NMState Operator
- The Kubernetes NMState project
- Load balancing with MetalLB
- MetalLB documentation
- MetalLB in layer 2 mode
- Specify network interfaces that LB IP can be announced from
- Multiple networks
- Using the Multus CNI in OpenShift
- macvlan plugin
- whereabouts IPAM CNI plugin - Extended configuration
- About advertising for the IP address pools
- Dynamic provisioning