Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator
Deploy nodes with a spine-leaf networking architecture to replicate an extensive network topology within your environment. Current restrictions allow only one provisioning network for Metal3.
14.1. Creating or updating the OpenStackNetConfig custom resource to define all subnets
Define your OpenStackNetConfig custom resource (CR) and specify the subnets for the overcloud networks. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) then renders the configuration and creates or updates the network topology.
Prerequisites
- You have installed OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
- You have installed the oc command line tool on your workstation.
Procedure
Create a configuration file named openstacknetconfig.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp7s0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: enp6s0
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv4:
        allocationEnd: 192.168.25.250
        allocationStart: 192.168.25.100
        cidr: 192.168.25.0/24
        gateway: 192.168.25.1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.0.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.0.1
      vlan: 20
      attachConfiguration: br-osp
    - name: internal_api_leaf1
      ipv4:
        allocationEnd: 172.17.1.250
        allocationStart: 172.17.1.10
        cidr: 172.17.1.0/24
        routes:
        - destination: 172.17.0.0/24
          nexthop: 172.17.1.1
        - destination: 172.17.2.0/24
          nexthop: 172.17.1.1
      vlan: 21
      attachConfiguration: br-osp
    - name: internal_api_leaf2
      ipv4:
        allocationEnd: 172.17.2.250
        allocationStart: 172.17.2.10
        cidr: 172.17.2.0/24
        routes:
        - destination: 172.17.1.0/24
          nexthop: 172.17.2.1
        - destination: 172.17.0.0/24
          nexthop: 172.17.2.1
      vlan: 22
      attachConfiguration: br-osp
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1350
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
        routes:
        - destination: 172.18.1.0/24
          nexthop: 172.18.0.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.0.1
      vlan: 30
      attachConfiguration: br-osp
    - name: storage_leaf1
      ipv4:
        allocationEnd: 172.18.1.250
        allocationStart: 172.18.1.10
        cidr: 172.18.1.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.1.1
        - destination: 172.18.2.0/24
          nexthop: 172.18.1.1
      vlan: 31
      attachConfiguration: br-osp
    - name: storage_leaf2
      ipv4:
        allocationEnd: 172.18.2.250
        allocationStart: 172.18.2.10
        cidr: 172.18.2.0/24
        routes:
        - destination: 172.18.0.0/24
          nexthop: 172.18.2.1
        - destination: 172.18.1.0/24
          nexthop: 172.18.2.1
      vlan: 32
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1350
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
        routes:
        - destination: 172.19.1.0/24
          nexthop: 172.19.0.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.0.1
      vlan: 40
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf1
      ipv4:
        allocationEnd: 172.19.1.250
        allocationStart: 172.19.1.10
        cidr: 172.19.1.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.1.1
        - destination: 172.19.2.0/24
          nexthop: 172.19.1.1
      vlan: 41
      attachConfiguration: br-osp
    - name: storage_mgmt_leaf2
      ipv4:
        allocationEnd: 172.19.2.250
        allocationStart: 172.19.2.10
        cidr: 172.19.2.0/24
        routes:
        - destination: 172.19.0.0/24
          nexthop: 172.19.2.1
        - destination: 172.19.1.0/24
          nexthop: 172.19.2.1
      vlan: 42
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1350
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
        routes:
        - destination: 172.20.1.0/24
          nexthop: 172.20.0.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.0.1
      vlan: 50
      attachConfiguration: br-osp
    - name: tenant_leaf1
      ipv4:
        allocationEnd: 172.20.1.250
        allocationStart: 172.20.1.10
        cidr: 172.20.1.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.1.1
        - destination: 172.20.2.0/24
          nexthop: 172.20.1.1
      vlan: 51
      attachConfiguration: br-osp
    - name: tenant_leaf2
      ipv4:
        allocationEnd: 172.20.2.250
        allocationStart: 172.20.2.10
        cidr: 172.20.2.0/24
        routes:
        - destination: 172.20.0.0/24
          nexthop: 172.20.2.1
        - destination: 172.20.1.0/24
          nexthop: 172.20.2.1
      vlan: 52
      attachConfiguration: br-osp
Create the networks defined in the openstacknetconfig.yaml file:
$ oc create -f openstacknetconfig.yaml -n openstack
Verify that the resources and child resources for the OpenStackNetConfig resource are created:
$ oc get openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknetattachment -n openstack
$ oc get openstacknet -n openstack
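If a subnet does not reach the expected state, you can inspect the rendered resources more closely. The following commands are a minimal sketch of that inspection; they use only standard oc subcommands and the resource names shown above:
$ oc describe openstacknetconfig/openstacknetconfig -n openstack
$ oc get openstacknet -n openstack -o yaml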
14.2. Adding roles for leaf networks to your deployment
To add roles for the leaf networks to your deployment, update the roles_data.yaml configuration file. If the leaf network roles have different NIC configurations, you can create Ansible NIC templates for each role to configure the spine-leaf networking, register the NIC templates, and create the ConfigMap resource.
You must use roles_data.yaml as the filename.
Procedure
Update the roles_data.yaml file:
...
###############################################################################
# Role: ComputeLeaf1                                                          #
###############################################################################
- name: ComputeLeaf1
  description: |
    Basic ComputeLeaf1 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - compute
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Tenant:
      subnet: tenant_leaf1
    Storage:
      subnet: storage_leaf1
  HostnameFormatDefault: '%stackname%-novacompute-leaf1-%index%'
...
###############################################################################
# Role: ComputeLeaf2                                                          #
###############################################################################
- name: ComputeLeaf2
  description: |
    Basic ComputeLeaf2 Node role
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - compute
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_leaf2
    Tenant:
      subnet: tenant_leaf2
    Storage:
      subnet: storage_leaf2
  HostnameFormatDefault: '%stackname%-novacompute-leaf2-%index%'
...
- Create a NIC template for each Compute role. For examples of Ansible NIC templates, see https://github.com/openstack/tripleo-ansible/tree/stable/wallaby/tripleo_ansible/roles/tripleo_network_config/templates. A trimmed-down sketch of the template structure follows this step.
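The following fragment is only an illustrative sketch of the structure these templates use, not a replacement for the upstream examples. It configures the ctlplane interface and renders a VLAN for each network assigned to the role. The variable names (ctlplane_ip, ctlplane_host_routes, role_networks, networks_lower, and the per-network lookups) follow the upstream template conventions; verify them against the linked repository before you use them:
---
network_config:
# Provisioning (ctlplane) interface
- type: interface
  name: nic1
  mtu: {{ ctlplane_mtu }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
# Bridge that carries the VLANs for the leaf subnets of this role
- type: ovs_bridge
  name: br-tenant
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu }}
    primary: true
{% for network in role_networks %}
  - type: vlan
    mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
    vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
    routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}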
Add the NIC templates for the new nodes to an environment file:
parameter_defaults:
  ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf1NetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'
  ComputeLeaf2NetworkConfigTemplate: 'multiple_nics_compute_leaf_2_vlans_dvr.j2'
In the ~/custom_environment_files directory, archive the roles_data.yaml file, the environment file, and the NIC templates into a tarball:
$ tar -cvzf custom-spine-leaf-config.tar.gz *.yaml *.j2
Create the tripleo-tarball-config ConfigMap resource:
$ oc create configmap tripleo-tarball-config --from-file=custom-spine-leaf-config.tar.gz -n openstack
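To confirm that the ConfigMap exists and contains the tarball, you can use standard oc commands, for example:
$ oc get configmap tripleo-tarball-config -n openstack
$ oc describe configmap tripleo-tarball-config -n openstack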
14.3. Deploying the overcloud with multiple routed networks
To deploy the overcloud with multiple sets of routed networks, create the control plane and the Compute nodes for the spine-leaf network, and then render and apply the Ansible playbooks. To create the control plane, specify the resources for the Controller nodes in the OpenStackControlPlane custom resource. To create the Compute nodes for each leaf from bare-metal machines, include the resource specification in the OpenStackBaremetalSet custom resource.
Procedure
Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. The following example shows a specification for a control plane with one Controller node (roleCount: 1). To deploy a control plane with three Controller nodes, set roleCount to 3:
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientImageURL: registry.redhat.io/rhosp-rhel9/openstack-tripleoclient:17.1
  openStackClientNetworks:
  - ctlplane
  - external
  - internal_api
  - internal_api_leaf1 # optionally the openstackclient can also be connected to subnets
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  domainName: ostest.test.metalkube.org
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 1
      networks:
      - ctlplane
      - internal_api
      - external
      - tenant
      - storage
      - storage_mgmt
      cores: 6
      memory: 20
      rootDisk:
        diskSize: 500
        baseImageVolumeName: openstack-base-img
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Filesystem
  enableFencing: False
Create the control plane:
$ oc create -f openstack-controller.yaml -n openstack
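OSPdO responds by creating OpenStackVMSet and virtual machine resources for the Controller role. Optionally, you can watch these resources appear while you wait for the next step; the following commands use only standard oc subcommands (run each in its own terminal and press Ctrl+C to stop watching):
$ oc get openstackvmsets -n openstack -w
$ oc get virtualmachines -n openstack -w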
- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane resource.
Create a file on your workstation for each Compute leaf, for example, openstack-computeleaf1.yaml. Include the resource specification for the Compute nodes for the leaf. The following example shows a specification for one Compute leaf that includes one Compute node:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: computeleaf1
  namespace: openstack
spec:
  # How many nodes to provision
  count: 1
  # The image to install on the provisioned nodes
  baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2
  # The secret containing the SSH pub key to place on the provisioned nodes
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # The interface on the nodes that will be assigned an IP from the mgmtCidr
  ctlplaneInterface: enp7s0
  # Networks to associate with this host
  networks:
  - ctlplane
  - internal_api_leaf1
  - external
  - tenant_leaf1
  - storage_leaf1
  roleName: ComputeLeaf1
  passwordSecret: userpassword
Create the Compute nodes for each leaf:
$ oc create -f openstack-computeleaf1.yaml -n openstack
- Generate the Ansible playbooks by using OpenStackConfigGenerator and apply the overcloud configuration. A minimal sketch of an OpenStackConfigGenerator resource follows this procedure. For more information, see Configuring and deploying the overcloud with director Operator.
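The following is a minimal sketch of what an OpenStackConfigGenerator resource can look like for this scenario. It assumes the tripleo-tarball-config ConfigMap created in Section 14.2 and a heat-env-config ConfigMap for additional environment files; the ConfigMap name heat-env-config and the exact field names are assumptions, so verify them against your installed CRD with oc explain openstackconfiggenerator.spec before you apply it:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  gitSecret: git-secret
  imageURL: registry.redhat.io/rhosp-rhel9/openstack-tripleoclient:17.1
  # ConfigMap with additional heat environment files (assumed name)
  heatEnvConfigMap: heat-env-config
  # ConfigMap with the tarball created in Section 14.2
  tarballConfigMap: tripleo-tarball-config
You create the resource with the same oc create -f <file> -n openstack pattern used for the other resources in this chapter.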
Verification
View the resource for the control plane:
$ oc get openstackcontrolplane/overcloud -n openstack
View the OpenStackVMSet resources to verify the creation of the control plane virtual machine (VM) set:
$ oc get openstackvmsets -n openstack
View the VM resources to verify the creation of the control plane VMs in OpenShift Virtualization:
$ oc get virtualmachines -n openstack
Test access to the openstackclient pod remote shell:
$ oc rsh -n openstack openstackclient
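From the remote shell you can also confirm that the pod is attached to the subnets listed in openStackClientNetworks, for example by listing its interfaces. This assumes the ip utility is available in the openstackclient image:
$ oc rsh -n openstack openstackclient ip -br addr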
View the resource for each Compute leaf:
$ oc get openstackbaremetalset/computeleaf1 -n openstack
View the bare-metal machines managed by RHOCP to verify the creation of the Compute nodes:
$ oc get baremetalhosts -n openshift-machine-api
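Provisioning a bare-metal node can take some time. One way to follow the progress is to watch the BareMetalHost resources until the new Compute nodes report the provisioned state, for example:
$ oc get baremetalhosts -n openshift-machine-api -w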