Chapter 5. Scaling down director Operator resources
Before you migrate your databases to the control plane, you must scale down and remove OpenStack director Operator (OSPdO) resources so that you can use the Red Hat OpenStack Services on OpenShift (RHOSO) resources.
You must perform the following actions:
- Dump selected data from the existing RHOSP 17.1 cluster. You use this data to build the custom resources for the data plane adoption.
- After the data is extracted and saved, remove the OSPdO control plane and Operator.
Procedure
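The commands in this procedure rely on a few shell variables that are assumed to be set beforehand. The sketch below shows one way to define them; the values (the `openstack` namespace, the `openstackclient` pod, and `controller-0`) are illustrative assumptions, so adjust them to your environment:

```shell
# Namespace where OSPdO is deployed (example value; adjust as needed)
OSPDO_NAMESPACE=openstack

# Convenience wrapper for running commands inside the OSPdO openstackclient pod
OS_CLIENT="oc -n ${OSPDO_NAMESPACE} exec -t openstackclient -c openstackclient --"

# SSH command for reaching controller-0, used by the pacemaker step
CONTROLLER_SSH="oc -n ${OSPDO_NAMESPACE} exec -t openstackclient -c openstackclient -- ssh cloud-admin@controller-0"

echo "Using OSPdO namespace: ${OSPDO_NAMESPACE}"
```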
Download the NIC templates:

```
# Make temp directory if it doesn't exist
mkdir -p temp
cd temp

echo "Extract nic templates"
oc get -n "${OSPDO_NAMESPACE}" cm tripleo-tarball -ojson | \
  jq -r '.binaryData."tripleo-tarball.tar.gz"' | base64 -d | tar xzvf -

# Revert back to original directory
cd -
```

Obtain the SSH key that is used to access the data plane nodes:
```
# Make temp directory if it doesn't exist
mkdir -p temp

# Get the SSH key from the openstackclient (osp 17)
# to be used later to create the SSH secret for dataplane adoption
$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa > temp/ssh.private
$OS_CLIENT cat /home/cloud-admin/.ssh/id_rsa.pub > temp/ssh.pub
echo "SSH private and public keys saved in temp/ssh.private and temp/ssh.pub"
```

Obtain the OVN configuration from each Compute node role OpenStackBaremetalSet. You use this configuration later to build the OpenStackDataPlaneNodeSet(s). Repeat this step for each OpenStackBaremetalSet, replacing `<<OSBMS-NAME>>` with the name of the OpenStackBaremetalSet:

```
# Make temp directory if it doesn't exist
mkdir -p temp

#
# Query the first node in OSBMS
#
IP=$(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org <<OSBMS-NAME>> -ojson | \
  jq -r '.status.baremetalHosts| keys[] as $k | .[$k].ipaddresses["ctlplane"]' | awk -F'/' '{print $1}')

# Get the OVN parameters
oc -n "${OSPDO_NAMESPACE}" exec -c openstackclient openstackclient -- \
  ssh cloud-admin@${IP} sudo ovs-vsctl -f json --columns=external_ids list Open | \
  jq -r '.data[0][0][1][]|join("=")' | \
  sed -n -E 's/^(ovn.*)+=(.*)+/edpm_\1: \2/p' | \
  grep -v -e ovn-remote -e encap-tos -e openflow -e ofctrl > temp/<<OSBMS-NAME>>.txt
```

Export the node definitions from each OpenStackBaremetalSet:

```
# Create temp directory if it does not exist
mkdir -p temp
for name in $(oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org | awk 'NR > 1 {print $1}'); do
  oc -n "${OSPDO_NAMESPACE}" get openstackbaremetalsets.osp-director.openstack.org $name -ojson | \
    jq -r '.status.baremetalHosts| "nodes:", keys[] as $k | .[$k].ipaddresses as $a | "  \($k):", "    hostName: \($k)", "    ansible:", "      ansibleHost: \($a["ctlplane"] | sub("/\\d+"; ""))", "    networks:", ($a | to_entries[] | "    - name: \(.key) \n      fixedIP: \(.value | sub("/\\d+"; ""))\n      subnetName: subnet1")' \
    > temp/${name}-nodes.txt
done
```

Remove the conflicting repositories and packages from all Compute hosts.

Define the OSPdO and RHOSP 17.1 Pacemaker services that must be stopped:
```
PacemakerResourcesToStop_dataplane=(
  "galera-bundle"
  "haproxy-bundle"
  "rabbitmq-bundle")

# Stop these PCM services after adopting the control
# plane, but before starting deletion of the OSPdO (osp17) env
echo "Stopping pacemaker OpenStack services"
SSH_CMD=CONTROLLER_SSH
if [ -n "${!SSH_CMD}" ]; then
  echo "Using controller 0 to run pacemaker commands "
  for resource in "${PacemakerResourcesToStop_dataplane[@]}"; do
    if ${!SSH_CMD} sudo pcs resource config "$resource" &>/dev/null; then
      echo "Stopping $resource"
      ${!SSH_CMD} sudo pcs resource disable "$resource"
    else
      echo "Service $resource not present"
    fi
  done
fi
```

Scale the RHOSO OpenStack Operator controller-manager to 0 replicas and temporarily delete the OpenStackControlPlane OpenStackClient pod, so that you can use the OSPdO controller-manager to clean up some of its resources. The cleanup is needed to avoid a pod name conflict between the OSPdO OpenStackClient and the RHOSO OpenStackClient:

```
$ oc patch csv -n openstack-operators openstack-operator.v1.0.5 --type json \
  -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 0}]'
$ oc delete openstackclients.client.openstack.org --all
```

Replace the CSV version with the CSV version that is deployed on your cluster.
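A note on quoting: the JSON patch payload must reach `oc patch` as literal JSON, so wrap it in single quotes; double-quoting the whole payload lets the shell strip the embedded double quotes. A minimal local sketch of building and inspecting the payload, without contacting a cluster:

```shell
# Single quotes keep the embedded double quotes literal;
# replicas is a JSON number, not a string.
patch='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 0}]'
echo "$patch"
```

The same quoting applies to every `oc patch --type json` command in this procedure.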
Delete the OSPdO OpenStackControlPlane custom resource (CR):

```
$ oc delete openstackcontrolplanes.osp-director.openstack.org -n "${OSPDO_NAMESPACE}" --all
```

Delete the OSPdO OpenStackNetConfig CR to remove the associated node network configuration policies:

```
$ oc delete osnetconfig -n "${OSPDO_NAMESPACE}" --all
```

Label the RHOCP node that contains the OSPdO virtual machines (VMs):
```
$ oc label nodes <ospdo_vm_master_node> type=openstack
```

Replace `<ospdo_vm_master_node>` with the remaining master node that contains the OSPdO VMs.
Create a node network configuration policy for the third RHOCP node. For example:

```
$ cat << EOF > /tmp/node3_nncp.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  labels:
    osp/interface: enp7s0
  name: <ostest-master-node>
spec:
  desiredState:
    dns-resolver:
      config:
        search: []
        server:
        - 172.22.0.1
    interfaces:
    - description: internalapi vlan interface
      name: enp7s0.20
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: 20
        reorder-headers: true
      ipv4:
        address:
        - ip: 172.17.0.7
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
    - description: storage vlan interface
      name: enp7s0.30
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: 30
        reorder-headers: true
      ipv4:
        address:
        - ip: 172.18.0.7
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
    - description: storagemgmt vlan interface
      name: enp7s0.40
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: 40
        reorder-headers: true
      ipv4:
        address:
        - ip: 172.19.0.7
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
    - description: tenant vlan interface
      name: enp7s0.50
      state: up
      type: vlan
      vlan:
        base-iface: enp7s0
        id: 50
        reorder-headers: true
      ipv4:
        address:
        - ip: 172.20.0.7
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
    - description: Configuring Bridge br-ctlplane with interface enp7s0
      name: br-ctlplane
      mtu: 1500
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp1s0
          vlan: {}
      ipv4:
        address:
        - ip: 172.22.0.53
          prefix-length: 24
        enabled: true
        dhcp: false
      ipv6:
        enabled: false
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp6s0
      description: Linux bridge with enp6s0 as a port
      ipv4:
        enabled: false
      ipv6:
        enabled: false
      mtu: 1500
      name: br-external
      state: up
      type: linux-bridge
  nodeSelector:
    kubernetes.io/hostname: <ostest-master-node>
    node-role.kubernetes.io/worker: ""
EOF
$ oc apply -f /tmp/node3_nncp.yaml
```

Delete the remaining OSPdO resources. Do not delete the OpenStackBaremetalSet and OpenStackProvisionServer resources:

```
$ for i in $(oc get crd | grep osp-director | grep -v baremetalset | grep -v provisionserver | awk '{print $1}'); do echo Deleting $i...; oc delete $i -n "${OSPDO_NAMESPACE}" --all; done
```

Scale the OSPdO Operator to 0 replicas:
```
$ ospdo_csv_ver=$(oc get csv -n "${OSPDO_NAMESPACE}" -l operators.coreos.com/osp-director-operator.openstack -o json | jq -r '.items[0].metadata.name')
$ oc patch csv -n "${OSPDO_NAMESPACE}" $ospdo_csv_ver --type json \
  -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 0}]'
```

Remove the webhooks from OSPdO:

```
$ oc patch csv $ospdo_csv_ver -n "${OSPDO_NAMESPACE}" --type json \
  -p='[{"op": "remove", "path": "/spec/webhookdefinitions"}]'
```

Remove the finalizers from the OSPdO OpenStackBaremetalSet resources:

```
$ oc patch openstackbaremetalsets.osp-director.openstack.org -n "${OSPDO_NAMESPACE}" compute --type json \
  -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
```

Delete the OpenStackBaremetalSet and OpenStackProvisionServer resources:

```
$ oc delete openstackbaremetalsets.osp-director.openstack.org -n "${OSPDO_NAMESPACE}" --all
$ oc delete openstackprovisionservers.osp-director.openstack.org -n "${OSPDO_NAMESPACE}" --all
```

Annotate each RHOSP Compute BareMetalHost resource so that Metal3 does not start the node:

```
$ compute_bmh_list=$(oc get bmh -n openshift-machine-api | grep compute | awk '{printf $1 " "}')
$ for bmh_compute in $compute_bmh_list; do \
    oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached=""; \
    oc -n openshift-machine-api wait bmh/$bmh_compute --for=jsonpath='{.status.operationalStatus}'=detached --timeout=30s || \
      { echo "ERROR: BMH did not enter detached state"; exit 1; } \
  done
```

Delete the BareMetalHost resources after their operational status is detached:

```
$ for bmh_compute in $compute_bmh_list; do \
    oc -n openshift-machine-api delete bmh $bmh_compute; \
  done
```

Delete the OSPdO Operator Lifecycle Manager resources to remove OSPdO:

```
$ oc delete subscription osp-director-operator -n "${OSPDO_NAMESPACE}"
$ oc delete operatorgroup osp-director-operator -n "${OSPDO_NAMESPACE}"
$ oc delete catalogsource osp-director-operator-index -n "${OSPDO_NAMESPACE}"
$ oc delete csv $ospdo_csv_ver -n "${OSPDO_NAMESPACE}"
```

Scale the RHOSO OpenStack Operator controller-manager back up to 1 replica so that it reconciles the associated OpenStackControlPlane CR and re-creates its OpenStackClient pod:

```
$ oc patch csv -n openstack-operators openstack-operator.v1.0.5 --type json \
  -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 1}]'
```