Chapter 16. Getting Traffic into a Cluster
16.1. Getting Traffic into a Cluster
OpenShift Container Platform provides multiple methods for communicating from outside the cluster with services running in the cluster.
The procedures in this section require prerequisites performed by the cluster administrator.
Administrators can expose a service endpoint that external traffic can reach by assigning a unique external IP address to that service from a range of external IP addresses. Administrators can designate a range of addresses using CIDR notation, which allows an application user to make a request against the cluster for an external IP address.
Each IP address should be assigned to only one service to ensure that each service has a unique endpoint. Potential port clashes are handled on a first-come, first-served basis.
The recommendation, in order of preference, is:
- If you have HTTP/HTTPS, use a router.
- If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use a router.
- Otherwise, use a Load Balancer, an External IP, or a NodePort.
Method | Purpose
---|---
Use a Router | Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header).
Automatically Assign a Public IP Using a Load Balancer Service | Allows traffic to non-standard ports through an IP address assigned from a pool.
Manually Assign an External IP to a Service | Allows traffic to non-standard ports through a specific IP address.
Configure a NodePort | Exposes a service on all nodes in the cluster.
16.2. Using a Router to Get Traffic into the Cluster
16.2.1. Overview
Using a router is the most common way to allow external access to an OpenShift Container Platform cluster.
A router is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS (with SNI), and TLS (with SNI), which covers web applications.
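For reference, a route for a TLS-encrypted (non-HTTPS) service might look like the following minimal sketch; the route name, host name, and service name are hypothetical, and the host name must resolve to the router:

apiVersion: v1
kind: Route
metadata:
  name: secure-app                    # hypothetical route name
spec:
  host: secure-app.apps.example.com   # hypothetical host name; must resolve to the router
  tls:
    termination: passthrough          # the router uses the SNI header to select this route
  to:
    kind: Service
    name: secure-app                  # the service that receives the proxied traffic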
16.2.2. Administrator Prerequisites
Before starting this procedure, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows the users to set up routes within the cluster without further administrator attention (see the example record after this list).
- Make sure that the local firewall on each node permits the request to reach the IP address.
- Configure the OpenShift Container Platform cluster to use an identity provider that allows appropriate user access.
Make sure there is at least one user with the cluster admin role. To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user cluster-admin <username>
- Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
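As an illustration of the DNS wildcard feature mentioned in the first prerequisite, a wildcard record in a BIND zone file might look like the following sketch; the domain and the target IP address are hypothetical:

*.apps.example.com.   300  IN  A  192.168.120.10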
16.2.2.1. Defining the Public IP Range
The first step in allowing access to a service is to define an external IP address range in the master configuration file:
Log into OpenShift Container Platform as a user with the cluster admin role.

$ oc login
Authentication required (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default

Using project "default".
Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

networkConfig:
  externalIPNetworkCIDRs:
  - <ip_address>/<cidr>

For example:

networkConfig:
  externalIPNetworkCIDRs:
  - 192.168.120.0/24
Restart the OpenShift Container Platform master service to apply the changes.
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
The IP address pool must terminate at one or more nodes in the cluster.
16.2.3. Create a Project and Service
If the project and service that you want to expose do not exist, first create the project, then the service.
If the project and service already exist, go to the next step: Expose the Service to Create a Route.
- Log into OpenShift Container Platform.
Create a new project for your service:
$ oc new-project <project_name>
For example:
$ oc new-project external-ip
Use the oc new-app command to create a service. For example:

$ oc new-app \
    -e MYSQL_USER=admin \
    -e MYSQL_PASSWORD=redhat \
    -e MYSQL_DATABASE=mysqldb \
    registry.access.redhat.com/openshift3/mysql-55-rhel7
Run the following command to see that the new service is created:
$ oc get svc
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
mysql-55-rhel7   172.30.131.89   <none>        3306/TCP   13m
By default, the new service does not have an external IP address.
16.2.4. Expose the Service to Create a Route
You must expose the service as a route using the oc expose command.
To expose the service:
- Log into OpenShift Container Platform.
Log into the project where the service you want to expose is located.
$ oc project project1
Run the following command to expose the service:

$ oc expose service <service-name>
For example:
$ oc expose service mysql-55-rhel7
route "mysql-55-rhel7" exposed
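To confirm that the route exists and to see the host name that was generated for it, you can run:

$ oc get route mysql-55-rhel7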
On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

$ curl <cluster-ip>:<port>
For example:
$ curl 172.30.131.89:3306
The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

If you have a MySQL client, log in with the standard CLI command:

$ mysql -h 172.30.131.89 -u admin -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>
16.2.5. Configure the Router
Work with your administrator to configure a router to accept external requests and proxy them based on the configured routes.
The administrator can create a wildcard DNS entry and then set up a router. Afterward, you can set up routes yourself without having to contact the administrators.
The router has controls to allow the administrator to specify whether the users can self-provision host names or the host names require a specific pattern.
When a set of routes is created in various projects, the overall set of routes is available to the set of routers. Each router admits (or selects) routes from the set of routes. By default, all routers admit all routes.
Routers that have permission to view all of the labels in all projects can select routes to admit based on the labels. This is called router sharding. This is useful when balancing incoming traffic load among a set of routers and when isolating traffic to a specific router. For example, company A goes to one router and company B to another.
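As a sketch of how sharding might be configured, assuming the default HAProxy router is managed by a deployment configuration named router and that routes are labeled with a customer label (both assumptions), the router can be restricted to routes with a matching label:

$ oc set env dc/router ROUTE_LABELS="customer=companyA"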
Because a router runs on a specific node, ingress traffic stops when the router or the node fails. You can reduce the impact by creating redundant routers on different nodes and using high availability to switch the router IP address when a node fails.
16.2.6. Configure IP Failover using VIPs
Optionally, an administrator can configure IP failover.
IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as at least one node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there may be nodes with no VIPs and other nodes with many VIPs. If there is exactly one node, all VIPs are on it.
The VIPs must be routable from outside the cluster.
To configure IP failover:
On the master, make sure the ipfailover service account has sufficient security privileges:

$ oc adm policy add-scc-to-user privileged -z ipfailover
Run the following command to create the IP failover:
$ oc adm ipfailover --virtual-ips=<exposed-ip-address> --watch-port=<exposed-port> --replicas=<number-of-pods> --create
For example:
oc adm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create --> Creating IP failover ipfailover ... serviceaccount "ipfailover" created deploymentconfig "ipfailover" created --> Success
16.3. Using a Load Balancer to Get Traffic into the Cluster
16.3.1. Overview
If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster.
A load balancer service allocates a unique IP from a configured pool. The load balancer has a single edge router IP (which can be a virtual IP (VIP), but is still a single machine for initial load balancing).
This process involves the following:
- The administrator performs the prerequisites.
- The developer creates a project and service, if the service to be exposed does not exist.
- The developer exposes the service to create a route.
- The developer creates the Load Balancer Service.
- The network administrator configures networking to the service.
16.3.2. Administrator Prerequisites
Before starting this procedure, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows the users to set up routes within the cluster without further administrator attention.
- Make sure that the local firewall on each node permits the request to reach the IP address.
- Configure the OpenShift Container Platform cluster to use an identity provider that allows appropriate user access.
Make sure there is at least one user with the cluster admin role. To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user cluster-admin <username>
- Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
16.3.2.1. Defining the Public IP Range
The first step in allowing access to a service is to define an external IP address range in the master configuration file:
Log into OpenShift Container Platform as a user with the cluster admin role.

$ oc login
Authentication required (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default

Using project "default".
Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

networkConfig:
  externalIPNetworkCIDRs:
  - <ip_address>/<cidr>

For example:

networkConfig:
  externalIPNetworkCIDRs:
  - 192.168.120.0/24
Restart the OpenShift Container Platform master service to apply the changes.
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
The IP address pool must terminate at one or more nodes in the cluster.
16.3.3. Create a Project and Service
If the project and service that you want to expose do not exist, first create the project, then the service.
If the project and service already exist, go to the next step: Expose the Service to Create a Route.
- Log into OpenShift Container Platform.
Create a new project for your service:
$ oc new-project <project_name>
For example:
$ oc new-project external-ip
Use the oc new-app command to create a service. For example:

$ oc new-app \
    -e MYSQL_USER=admin \
    -e MYSQL_PASSWORD=redhat \
    -e MYSQL_DATABASE=mysqldb \
    registry.access.redhat.com/openshift3/mysql-55-rhel7
Run the following command to see that the new service is created:
$ oc get svc
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
mysql-55-rhel7   172.30.131.89   <none>        3306/TCP   13m
By default, the new service does not have an external IP address.
16.3.4. Expose the Service to Create a Route
You must expose the service as a route using the oc expose command.
To expose the service:
- Log into OpenShift Container Platform.
Log into the project where the service you want to expose is located.
$ oc project project1
Run the following command to expose the service:

$ oc expose service <service-name>
For example:
$ oc expose service mysql-55-rhel7
route "mysql-55-rhel7" exposed
On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

$ curl <cluster-ip>:<port>
For example:
$ curl 172.30.131.89:3306
The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

If you have a MySQL client, log in with the standard CLI command:

$ mysql -h 172.30.131.89 -u admin -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>
Then, complete the tasks in the following sections.
16.3.5. Create the Load Balancer Service
To create a load balancer service:
- Log into OpenShift Container Platform.
Switch to the project where the service you want to expose is located. If the project or service does not exist, see Create a Project and Service.

$ oc project project1
Open a text file on the master node and paste the following text, editing the file as needed:
Example 16.1. Sample load balancer configuration file
apiVersion: v1
kind: Service
metadata:
  name: egress-2                # name for the load balancer service
spec:
  ports:
  - name: db
    port: 3306                  # port that the exposed service listens on
  loadBalancerIP:
  type: LoadBalancer            # type LoadBalancer allocates an external IP from a pool
  selector:
    name: mysql                 # selector that matches the pods backing the service
- Save and exit the file.
Run the following command to create the service:
$ oc create -f <file-name>
For example:
$ oc create -f mysql-lb.yaml
Execute the following command to view the new service:
$ oc get svc
NAME       CLUSTER-IP       EXTERNAL-IP                   PORT(S)    AGE
egress-2   172.30.236.167   172.29.121.74,172.29.121.74   3306/TCP   6s
Note that the service has an external IP address automatically assigned.
On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 172.29.121.74:3306
The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

If you have a MySQL client, log in with the standard CLI command:

$ mysql -h 172.29.121.74 -u admin -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>
16.3.6. Configuring Networking
The following steps are general guidelines for configuring the networking required to access the exposed service from other nodes. As network environments vary, consult your network administrator for specific configurations that need to be made within your environment.
These steps assume that all of the systems are on the same subnet.
On the Node:
Restart the network to make sure the network is up.
$ service network restart
Restarting network (via systemctl):  [  OK  ]
If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.
Add a route between the IP address of the exposed service on the master and the IP address of the master host. If using a netmask for a networking route, use the netmask option and specify the netmask to use:

$ route add -net 172.29.0.0 netmask 255.255.0.0 gw 10.16.41.22 dev eth0
Use a tool, such as cURL, to make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 172.29.121.74:3306
If you get a string of characters with the Got packets out of order message, your service is accessible from the node.
On the system that is not in the cluster:
Restart the network to make sure the network is up.
$ service network restart
Restarting network (via systemctl):  [  OK  ]
If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.
Add a route between the IP address of the exposed service on the master and the IP address of the master host. If using a netmask for a networking route, use the netmask option and specify the netmask to use:

$ route add -net 172.29.0.0 netmask 255.255.0.0 gw 10.16.41.22 dev eth0
Make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 172.29.121.74:3306
If you get a string of characters with the Got packets out of order message, your service is accessible outside the cluster.
16.3.7. Configure IP Failover using VIPs
Optionally, an administrator can configure IP failover.
IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as at least one node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there may be nodes with no VIPs and other nodes with many VIPs. If there is exactly one node, all VIPs are on it.
The VIPs must be routable from outside the cluster.
To configure IP failover:
On the master, make sure the ipfailover service account has sufficient security privileges:

$ oc adm policy add-scc-to-user privileged -z ipfailover
Run the following command to create the IP failover:
$ oc adm ipfailover --virtual-ips=<exposed-ip-address> --watch-port=<exposed-port> --replicas=<number-of-pods> --create
For example:
oc adm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create --> Creating IP failover ipfailover ... serviceaccount "ipfailover" created deploymentconfig "ipfailover" created --> Success
16.4. Using a Service External IP to Get Traffic into the Cluster
16.4.1. Overview
One method to expose a service is to assign an external IP address directly to the service you want to make accessible from outside the cluster.

Make sure you have created a range of IP addresses to use, as shown in Defining the Public IP Range.
By setting an external IP on the service, OpenShift Container Platform sets up iptables rules to allow traffic arriving at any cluster node that is targeting that IP address to be sent to one of the internal pods. This is similar to the internal service IP addresses, but the external IP tells OpenShift Container Platform that this service should also be exposed externally at the given IP. The administrator must assign the IP address to a host (node) interface on one of the nodes in the cluster. Alternatively, the address can be used as a virtual IP (VIP).
These IPs are not managed by OpenShift Container Platform and administrators are responsible for ensuring that traffic arrives at a node with this IP.
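For reference, the external IP can also be set declaratively in the service definition rather than patched in afterward (as shown in Assigning an IP Address to the Service); a minimal sketch, assuming the MySQL service used in this chapter, an address from the example range, and a hypothetical selector label:

apiVersion: v1
kind: Service
metadata:
  name: mysql-55-rhel7
spec:
  ports:
  - port: 3306
  externalIPs:
  - 192.168.120.10        # must fall within externalIPNetworkCIDRs and be routed to a node
  selector:
    app: mysql-55-rhel7   # hypothetical label; must match the pods backing the service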
The following is a non-HA solution and does not configure IP failover. IP failover is required to make the service highly-available.
This process involves the following:
- The administrator performs the prerequisites.
- The developer creates a project and service, if the service to be exposed does not exist.
- The developer exposes the service to create a route.
- The developer assigns the IP address to the service.
- The network administrator configures networking to the service.
16.4.2. Administrator Prerequisites
Before starting this procedure, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows the users to set up routes within the cluster without further administrator attention.
- Make sure that the local firewall on each node permits the request to reach the IP address.
- Configure the OpenShift Container Platform cluster to use an identity provider that allows appropriate user access.
Make sure there is at least one user with the cluster admin role. To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user cluster-admin <username>
- Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
16.4.2.1. Defining the Public IP Range
The first step in allowing access to a service is to define an external IP address range in the master configuration file:
Log into OpenShift Container Platform as a user with the cluster admin role.

$ oc login
Authentication required (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default

Using project "default".
Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

networkConfig:
  externalIPNetworkCIDRs:
  - <ip_address>/<cidr>

For example:

networkConfig:
  externalIPNetworkCIDRs:
  - 192.168.120.0/24
Restart the OpenShift Container Platform master service to apply the changes.
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
The IP address pool must terminate at one or more nodes in the cluster.
16.4.3. Create a Project and Service
If the project and service that you want to expose do not exist, first create the project, then the service.
If the project and service already exist, go to the next step: Expose the Service to Create a Route.
- Log into OpenShift Container Platform.
Create a new project for your service:
$ oc new-project <project_name>
For example:
$ oc new-project external-ip
Use the oc new-app command to create a service. For example:

$ oc new-app \
    -e MYSQL_USER=admin \
    -e MYSQL_PASSWORD=redhat \
    -e MYSQL_DATABASE=mysqldb \
    registry.access.redhat.com/openshift3/mysql-55-rhel7
Run the following command to see that the new service is created:
$ oc get svc
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
mysql-55-rhel7   172.30.131.89   <none>        3306/TCP   13m
By default, the new service does not have an external IP address.
16.4.4. Expose the Service to Create a Route
You must expose the service as a route using the oc expose command.
To expose the service:
- Log into OpenShift Container Platform.
Log into the project where the service you want to expose is located.
$ oc project project1
Run the following command to expose the service:

$ oc expose service <service-name>
For example:
$ oc expose service mysql-55-rhel7
route "mysql-55-rhel7" exposed
On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

$ curl <cluster-ip>:<port>
For example:
$ curl 172.30.131.89:3306
The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

If you have a MySQL client, log in with the standard CLI command:

$ mysql -h 172.30.131.89 -u admin -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>
Then, complete the tasks in the following sections.
16.4.5. Assigning an IP Address to the Service
To assign an external IP address to a service:
- Log into OpenShift Container Platform.
- Switch to the project where the service you want to expose is located. If the project or service does not exist, see Create a Project and Service in the Prerequisites.
Run the following command to assign an external IP address to the service you want to access. Use an IP address from the external IP address range:
$ oc patch svc <name> -p '{"spec":{"externalIPs":["<ip_address>"]}}'
The <name> is the name of the service and -p indicates a patch to be applied to the service JSON file. The expression in the brackets assigns the specified IP address to the specified service.

For example:

$ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.168.120.10"]}}'
"mysql-55-rhel7" patched
Run the following command to see that the service has a public IP:
$ oc get svc
NAME             CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
mysql-55-rhel7   172.30.131.89   192.168.120.10   3306/TCP   13m
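If you later need to remove the external IP address, a similar patch that empties the list should work; a sketch using the same service:

$ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":[]}}'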
On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 192.168.120.10:3306
If you get a string of characters with the Got packets out of order message, you are connected to the service.

If you have a MySQL client, log in with the standard CLI command:

$ mysql -h 192.168.120.10 -u admin -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>
16.4.6. Configuring Networking
After the external IP address is assigned, you need to create routes to that IP.
The following steps are general guidelines for configuring the networking required to access the exposed service from other nodes. As network environments vary, consult your network administrator for specific configurations that need to be made within your environment.
These steps assume that all of the systems are on the same subnet.
On the master:
Restart the network to make sure the network is up.
$ service network restart
Restarting network (via systemctl):  [  OK  ]
If the network is not up, you will receive error messages such as Network is unreachable when running the following commands.
Run the following command with the external IP address of the service you want to expose and the device name associated with the host IP from the ifconfig command output:

$ ip address add <external-ip> dev <device>
For example:
$ ip address add 192.168.120.10 dev eth0
If you need to, run the following command to obtain the IP address of the host server where the master resides:
$ ifconfig
Look for the device that is listed similar to: UP,BROADCAST,RUNNING,MULTICAST.

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.16.41.22  netmask 255.255.248.0  broadcast 10.16.47.255
        ...
Add a route between the IP address of the host where the master resides and the gateway IP address of the master host. (When adding a network route with -net rather than a host route, use the netmask option and specify the netmask to use.)

$ route add -host <host_ip_address> gw <gateway_ip_address> dev <device>

For example:

$ route add -host 10.16.41.22 gw 10.16.41.254 dev eth0
The netstat -nr command provides the gateway IP address:

$ netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.16.41.254    0.0.0.0         UG        0 0          0 eth0
Add a route between the IP address of the exposed service and the IP address of the master host:
$ route add -net 192.168.120.0/24 gw 10.16.41.22 eth0
On the Node:
Restart the network to make sure the network is up.
$ service network restart
Restarting network (via systemctl):  [  OK  ]
If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.
Add a route between the IP address of the host where the node is located and the gateway IP of the node host. If using a netmask for a networking route, use the netmask option and specify the netmask to use:

$ route add -net 10.16.40.0 netmask 255.255.248.0 gw 10.16.47.254 eth0
The ifconfig command displays the host IP:

$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.16.41.71  netmask 255.255.248.0  broadcast 10.16.47.255
The netstat -nr command displays the gateway IP:

$ netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.16.41.254    0.0.0.0         UG        0 0          0 eth0
Add a route between the IP address of the exposed service and the IP address of the host system where the master resides:

$ route add -net 192.168.120.0 netmask 255.255.255.0 gw 10.16.41.22 dev eth0
Use a tool, such as cURL, to make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 192.168.120.10:3306
If you get a string of characters with the Got packets out of order message, your service is accessible from the node.
On the system that is not in the cluster:
Restart the network to make sure the network is up.
$ service network restart
Restarting network (via systemctl):  [  OK  ]
If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.
Add a route between the IP address of the remote host and the gateway IP of the remote host. If using a netmask for a networking route, use the netmask option and specify the netmask to use:

$ route add -net 10.16.64.0 netmask 255.255.248.0 gw 10.16.71.254 eno1
Add a route between the IP address of the exposed service on the master and the IP address of the master host:

$ route add -net 192.168.120.0 netmask 255.255.255.0 gw 10.16.41.22
Use a tool, such as cURL, to make sure you can reach the service using the public IP address:
$ curl <public-ip>:<port>
For example:
$ curl 192.168.120.10:3306
If you get a string of characters with the Got packets out of order message, your service is accessible outside the cluster.
16.4.7. Configure IP Failover using VIPs
Optionally, an administrator can configure IP failover.
IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as at least one node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there may be nodes with no VIPs and other nodes with many VIPs. If there is exactly one node, all VIPs are on it.
The VIPs must be routable from outside the cluster.
To configure IP failover:
On the master, make sure the ipfailover service account has sufficient security privileges:

$ oc adm policy add-scc-to-user privileged -z ipfailover
Run the following command to create the IP failover:
$ oc adm ipfailover --virtual-ips=<exposed-ip-address> --watch-port=<exposed-port> --replicas=<number-of-pods> --create
For example:
oc adm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create --> Creating IP failover ipfailover ... serviceaccount "ipfailover" created deploymentconfig "ipfailover" created --> Success
16.5. Using a NodePort to Get Traffic into the Cluster
16.5.1. Overview
Use a NodePort to expose a service on the same port on all nodes in the cluster.
Using NodePorts requires additional port resources.
A node port exposes the service on a static port on the node IP address.
NodePorts are in the 30000-32767 range by default, which means a NodePort is unlikely to match a service’s intended port (for example, 8080 may be exposed as 31020).
The administrator must ensure the external IPs are routed to the nodes and local firewall rules on all nodes allow access to the open port.
NodePorts and external IPs are independent and both can be used concurrently.
16.5.2. Administrator Prerequisites
Before starting this procedure, the administrator must:
- Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows the users to set up routes within the cluster without further administrator attention.
- Make sure that the local firewall on each node permits the request to reach the IP address.
- Configure the OpenShift Container Platform cluster to use an identity provider that allows appropriate user access.
Make sure there is at least one user with the cluster admin role. To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user cluster-admin <username>
- Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
16.5.3. Configuring the Service
You specify a port number for the nodePort when you create or modify a service. If you did not manually specify a port, the system allocates one for you.
- Log into the master node.
If the project you want to use does not exist, create a new project for your service:
$ oc new-project <project_name>
For example:
$ oc new-project external-ip
Edit the service definition to specify spec.type: NodePort and optionally specify a port in the 30000-32767 range.

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3036
      nodePort: 30036
      name: http
  selector:
    name: mysql
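If you omit the nodePort field, the system allocates a port from the 30000-32767 range when the service is created; a minimal sketch of the same service relying on automatic allocation:

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3036
      name: http          # nodePort omitted; the system allocates one automatically
  selector:
    name: mysql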
Execute the following command to create the service:
$ oc create -f <file-name>
For example:
$ oc create -f mysql.yaml
Execute the following command to see that the new service is created:
$ oc get svc
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mysql     172.30.89.219   <nodes>       3036:30036/TCP   2m
Note that the external IP is listed as <nodes> and the node ports are listed.
You should be able to access the service using the <NodeIP>:<NodePort> address.
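For example, assuming one of the cluster nodes has the IP address 10.16.41.71 (the node address used earlier in this chapter; substitute one of your own node IPs), the MySQL service above could be checked from outside the cluster with:

$ curl 10.16.41.71:30036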