Chapter 7. Configuring network settings after installing OpenStack
You can configure network settings for an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation.
7.1. Configuring application access with floating IP addresses
After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic.
You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set.
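For reference, a minimal sketch of how these values might appear in the platform.openstack section of the install-config.yaml file; the addresses are hypothetical documentation values:
platform:
  openstack:
    # Hypothetical floating IP address for the API endpoint
    apiFloatingIP: 203.0.113.23
    # Hypothetical floating IP address for application ingress
    ingressFloatingIP: 203.0.113.19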
Prerequisites
- The OpenShift Container Platform cluster is installed.
- Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation.
Procedure
After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port:
Show the port:
$ openstack port show <cluster_name>-<cluster_ID>-ingress-port
Attach the port to the IP address:
$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
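If you have not yet allocated a floating IP address to use as <apps_FIP>, you can create one from your external network first. This is a hedged sketch; <external_network> stands for the name of your RHOSP external network:
$ openstack floating ip create <external_network>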
Add a wildcard A record for *.apps.<cluster_name>.<base_domain> to your DNS zone file:
*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>
If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts:
<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>
<apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<apps_FIP> <app_name>.apps.<cluster_name>.<base_domain>
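To check that application access works, you can request the console route. This is a hedged example; the hostname must match your cluster:
$ curl -k https://console-openshift-console.apps.<cluster_name>.<base_domain>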
7.2. Enabling OVS hardware offloading
For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading.
OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization.
Prerequisites
- You installed a cluster on RHOSP that is configured for single-root input/output virtualization (SR-IOV).
- You installed the SR-IOV Network Operator on your cluster.
- You created two hw-offload type virtual function (VF) interfaces on your cluster.
Application layer gateway flows are broken in OpenShift Container Platform versions 4.10, 4.11, and 4.12. Also, you cannot offload the application layer gateway flow for OpenShift Container Platform version 4.13.
Procedure
Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster:
The first virtual function interface
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: "hwoffload9"
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: true
  nicSelector:
    pfNames:
    - ens6
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 1
  priority: 99
  resourceName: "hwoffload9"
The second virtual function interface
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: "hwoffload10"
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: true
  nicSelector:
    pfNames:
    - ens5
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 1
  priority: 99
  resourceName: "hwoffload10"
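After you save the two policies to files, you can apply them and watch the SR-IOV Network Operator reconcile the node state. The file names here are assumptions; use whatever names you saved the policies under:
$ oc apply -f hwoffload9.yaml -f hwoffload10.yaml
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator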
Create NetworkAttachmentDefinition resources for the two interfaces:
A NetworkAttachmentDefinition resource for the first interface
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9
  name: hwoffload9
  namespace: default
spec:
  config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6" }'
A NetworkAttachmentDefinition resource for the second interface
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10
  name: hwoffload10
  namespace: default
spec:
  config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5" }'
Use the interfaces that you created with a pod. For example:
A pod that uses the two OVS offload interfaces
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-testpmd
  namespace: default
  annotations:
    irq-load-balancing.crio.io: disable
    cpu-quota.crio.io: disable
    k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9
    k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10
spec:
  restartPolicy: Never
  containers:
  - name: dpdk-testpmd
    image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest
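To confirm that the offload resources are advertised as allocatable before scheduling such a pod, you can check the node status. This is a hedged check; replace <node_name> with an SR-IOV capable worker node:
$ oc get node <node_name> -o jsonpath='{.status.allocatable}'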
7.3. Attaching an OVS hardware offloading network
You can attach an Open vSwitch (OVS) hardware offloading network to your cluster.
Prerequisites
- Your cluster is installed and running.
- You provisioned an OVS hardware offloading network on Red Hat OpenStack Platform (RHOSP) to use with your cluster.
Procedure
Create a file named network.yaml from the following template:
spec:
  additionalNetworks:
  - name: hwoffload1
    namespace: cnf
    rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}'
    type: Raw
where pciBusId specifies the device that is connected to the offloading network. If you do not know this value, you can find it by running the following command:
$ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator
From a command line, enter the following command to patch your cluster with the file:
$ oc apply -f network.yaml
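To confirm that the Cluster Network Operator created the corresponding network attachment, you can list the attachment definitions in the target namespace; the cnf namespace matches the template above:
$ oc get network-attachment-definitions -n cnf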
7.4. Enabling IPv6 connectivity to pods on RHOSP
To enable IPv6 connectivity between pods that have additional networks that are on different nodes, disable port security for the IPv6 port of the server. Disabling port security obviates the need to create allowed address pairs for each IPv6 address that is assigned to pods and enables traffic on the security group.
Only the following IPv6 additional network configurations are supported:
- SLAAC and host-device
- SLAAC and MACVLAN
- DHCP stateless and host-device
- DHCP stateless and MACVLAN
Procedure
On a command line, enter the following command:
$ openstack port set --no-security-group --disable-port-security <compute_ipv6_port>
Important: This command removes security groups from the port and disables port security. Traffic restrictions are removed entirely from the port.
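If you are not sure which port to pass as <compute_ipv6_port>, you can list the ports that are attached to the compute server first. This is a hedged sketch; <server_name> stands for the RHOSP name of the compute server:
$ openstack port list --server <server_name>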
7.5. Creating pods that have IPv6 connectivity on RHOSP
After you enable IPv6 connectivity for pods and add it to them, create pods that have secondary IPv6 connections.
Procedure
Define pods that use your IPv6 namespace and the annotation k8s.v1.cni.cncf.io/networks: <additional_network_name>, where <additional_network_name> is the name of the additional network. For example, as part of a Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: ipv6
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
      annotations:
        k8s.v1.cni.cncf.io/networks: ipv6
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - hello-openshift
            # topologyKey is required for pod anti-affinity; the hostname key spreads replicas across nodes
            topologyKey: kubernetes.io/hostname
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: hello-openshift
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        image: quay.io/openshift/origin-hello-openshift
        ports:
        - containerPort: 8080
Create the pod. For example, on a command line, enter the following command:
$ oc create -f <ipv6_enabled_resource>
where <ipv6_enabled_resource> is the file that contains your resource definition.
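After the pods are running, you can verify that they landed on different nodes and inspect the secondary IPv6 interface from inside one of them. This is a hedged check; the second command works only if the container image includes the ip utility:
$ oc get pods -n ipv6 -o wide
$ oc exec -n ipv6 <pod_name> -- ip -6 addr show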
7.6. Adding IPv6 connectivity to pods on RHOSP
After you enable IPv6 connectivity in pods, add connectivity to them by using a Container Network Interface (CNI) configuration.
Procedure
To edit the Cluster Network Operator (CNO), enter the following command:
$ oc edit networks.operator.openshift.io cluster
Specify your CNI configuration under the spec field. For example, the following configuration uses a SLAAC address mode with MACVLAN:
...
spec:
  additionalNetworks:
  - name: ipv6
    namespace: ipv6
    rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4"}'
    type: Raw
Note: If you are using stateful address mode, include IP Address Management (IPAM) in the CNI configuration. DHCPv6 is not supported by Multus.
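A hedged sketch of what a stateful variant might look like with the CNI static IPAM plugin; the master interface and the address are hypothetical values:
rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4", "ipam": { "type": "static", "addresses": [{ "address": "fd00:10:10::10/64" }] } }'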
- Save your changes and quit the text editor to commit them.
Verification
On a command line, enter the following command:
$ oc get network-attachment-definitions -A
Example output
NAMESPACE   NAME   AGE
ipv6        ipv6   21h
You can now create pods that have secondary IPv6 connections.