Chapter 5. Networking
5.1. DNS configuration details
5.1.1. General DNS setup
The OpenShift Container Platform cluster managed by Red Hat OpenShift Local uses two DNS domain names, crc.testing and apps-crc.testing. The crc.testing domain is for core OpenShift Container Platform services. The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster.
For example, the OpenShift Container Platform API server is exposed as api.crc.testing, while the OpenShift Container Platform console is accessed as console-openshift-console.apps-crc.testing. These DNS domains are served by a dnsmasq DNS container running inside the Red Hat OpenShift Local instance.
The crc setup command detects and adjusts your system DNS configuration so that it can resolve these domains. Additional checks verify that DNS is properly configured when running crc start.
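The split between the two domains is a plain suffix match: names under crc.testing resolve to core services, names under apps-crc.testing to application routes. A minimal sketch of that classification (the hostnames are only examples):

```shell
# Illustrative sketch: classify a hostname by the CRC DNS domain that serves it.
crc_domain() {
  case "$1" in
    *.apps-crc.testing) echo "apps-crc.testing (application routes)" ;;
    *.crc.testing)      echo "crc.testing (core services)" ;;
    *)                  echo "not a CRC domain" ;;
  esac
}

crc_domain api.crc.testing                              # core services
crc_domain console-openshift-console.apps-crc.testing   # application route
```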
5.1.2. Linux
On Linux, depending on your distribution, Red Hat OpenShift Local expects the following DNS configuration:
5.1.2.1. NetworkManager + systemd-resolved
This configuration is used by default on Fedora 33 or newer, and on Ubuntu Desktop editions.
- Red Hat OpenShift Local expects NetworkManager to manage networking.
- Red Hat OpenShift Local configures systemd-resolved to forward requests for the testing domain to the 192.168.130.11 DNS server. 192.168.130.11 is the IP of the Red Hat OpenShift Local instance. systemd-resolved configuration is done with a NetworkManager dispatcher script in /etc/NetworkManager/dispatcher.d/99-crc.sh:
#!/bin/sh
export LC_ALL=C
systemd-resolve --interface crc --set-dns 192.168.130.11 --set-domain ~testing
exit 0
systemd-resolved is also available as an unsupported Technology Preview on Red Hat Enterprise Linux and CentOS 8.3. After configuring the host to use systemd-resolved, stop any running clusters and rerun crc setup.
5.1.2.2. NetworkManager + dnsmasq
This configuration is used by default on Fedora 32 or older, on Red Hat Enterprise Linux, and on CentOS.
- Red Hat OpenShift Local expects NetworkManager to manage networking.
- NetworkManager uses dnsmasq with the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf configuration file. The configuration file for this dnsmasq instance is /etc/NetworkManager/dnsmasq.d/crc.conf:
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
- The NetworkManager dnsmasq instance forwards requests for the crc.testing and apps-crc.testing domains to the 192.168.130.11 DNS server.
5.2. Reserved IP subnets
The OpenShift Container Platform cluster managed by Red Hat OpenShift Local reserves IP subnets for internal use which should not collide with your host network. Ensure that the following IP subnets are available for use:
Reserved IP subnets
- 10.217.0.0/22
- 10.217.4.0/23
- 192.168.126.0/24
Additionally, the host hypervisor may reserve another IP subnet depending on the host operating system. On Microsoft Windows, the hypervisor reserves a randomly generated IP subnet that cannot be determined ahead of time. No additional subnet is reserved on macOS. The additional reserved subnet for Linux is 192.168.130.0/24.
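To check whether a subnet already in use on your host collides with one of the reserved ranges, two IPv4 CIDR blocks can be compared with a short shell sketch like the following (the example subnets are placeholders; substitute the ones from your own routing table):

```shell
# Sketch: test whether two IPv4 CIDR blocks overlap.

ip_to_int() {              # convert a dotted-quad address to a 32-bit integer
  local IFS=. ; set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidrs_overlap() {          # usage: cidrs_overlap A.B.C.D/N E.F.G.H/M
  local n1 b1 n2 b2 bits mask
  n1=$(ip_to_int "${1%/*}"); b1=${1#*/}
  n2=$(ip_to_int "${2%/*}"); b2=${2#*/}
  bits=$(( b1 < b2 ? b1 : b2 ))                      # compare at the shorter prefix
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( n1 & mask )) -eq $(( n2 & mask )) ]
}

cidrs_overlap 10.217.0.0/22 10.217.1.0/24 && echo "collision"
cidrs_overlap 192.168.126.0/24 192.168.1.0/24 || echo "no collision"
```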
5.3. Starting Red Hat OpenShift Local behind a proxy
You can start Red Hat OpenShift Local behind a defined proxy using environment variables or configurable properties.
SOCKS proxies are not supported by OpenShift Container Platform.
Prerequisites
- To use an existing OpenShift CLI (oc) executable on your host machine, export the .testing domain as part of the no_proxy environment variable. The embedded oc executable does not require manual settings. For more information about using the embedded oc executable, see Accessing the OpenShift cluster with the OpenShift CLI.
Procedure
Define a proxy using the http_proxy and https_proxy environment variables or using the crc config set command as follows:
$ crc config set http-proxy http://proxy.example.com:<port>
$ crc config set https-proxy http://proxy.example.com:<port>
$ crc config set no-proxy <comma-separated-no-proxy-entries>
If the proxy uses a custom CA certificate file, set it as follows:
$ crc config set proxy-ca-file <path-to-custom-ca-file>
Proxy-related values set in the configuration for Red Hat OpenShift Local have priority over values set with environment variables.
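For reference, the equivalent environment-variable configuration might look like the following sketch; the proxy host name and port 3128 are placeholders, and the .testing entry in no_proxy is what a host oc binary needs to reach the cluster directly:

```shell
# Placeholder proxy endpoint; replace with your proxy's actual host and port.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# Keep cluster traffic off the proxy; .testing covers the CRC domains.
export no_proxy=.testing,localhost,127.0.0.1

echo "$no_proxy"
```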
5.4. Setting up Red Hat OpenShift Local on a remote server
Configure a remote server to run an OpenShift Container Platform cluster provided by Red Hat OpenShift Local.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS server. Run every command in this procedure on the remote server.
Perform this procedure only on a local network. Exposing an insecure server on the internet has many security implications.
Prerequisites
- Red Hat OpenShift Local is installed and set up on the remote server. For more information, see Installing Red Hat OpenShift Local and Setting up Red Hat OpenShift Local.
- Red Hat OpenShift Local is configured to use the OpenShift preset on the remote server. For more information, see Changing the selected preset.
- Your user account has sudo permissions on the remote server.
Procedure
Start the cluster:
$ crc start
Ensure that the cluster remains running during this procedure.
Install the haproxy package and other utilities:
$ sudo dnf install haproxy /usr/sbin/semanage
Modify the firewall to allow communication with the cluster:
$ sudo systemctl enable --now firewalld
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=kube-apiserver --permanent
$ sudo firewall-cmd --reload
For SELinux, allow HAProxy to listen on TCP port 6443 to serve kube-apiserver on this port:
$ sudo semanage port -a -t http_port_t -p tcp 6443
Create a backup of the default haproxy configuration:
$ sudo cp /etc/haproxy/haproxy.cfg{,.bak}
Configure haproxy for use with the cluster:
$ export CRC_IP=$(crc ip)
$ sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
global
    log /dev/log local0

defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s

listen apps
    bind 0.0.0.0:80
    server crcvm $CRC_IP:80 check

listen apps_ssl
    bind 0.0.0.0:443
    server crcvm $CRC_IP:443 check

listen api
    bind 0.0.0.0:6443
    server crcvm $CRC_IP:6443 check
EOF
Start the haproxy service:
$ sudo systemctl start haproxy
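As a quick sanity check on the configuration, the listen sections and the ports they bind can be listed with a short awk sketch; here the relevant lines are inlined rather than read from /etc/haproxy/haproxy.cfg:

```shell
# Sketch: list the port each HAProxy listen section binds (inlined config excerpt).
cfg='listen apps
    bind 0.0.0.0:80
listen apps_ssl
    bind 0.0.0.0:443
listen api
    bind 0.0.0.0:6443'

printf '%s\n' "$cfg" | awk '
  /^listen/ { name = $2 }                       # remember the section name
  /bind/    { split($2, a, ":"); print name, a[2] }'   # print its bound port
```

The expected listing is one line per frontend: apps on 80, apps_ssl on 443, and api on 6443, matching the firewall services opened above.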
5.5. Connecting to a remote Red Hat OpenShift Local instance
Use dnsmasq to connect a client machine to a remote server running an OpenShift Container Platform cluster managed by Red Hat OpenShift Local.
This procedure assumes the use of a Red Hat Enterprise Linux, Fedora, or CentOS client. Run every command in this procedure on the client.
Connect to a server that is only exposed on your local network.
Prerequisites
- A remote server is set up for the client to connect to. For more information, see Setting up Red Hat OpenShift Local on a remote server.
- You know the external IP address of the server.
- You have the latest OpenShift CLI (oc) in your $PATH on the client.
Procedure
Install the dnsmasq package:
$ sudo dnf install dnsmasq
Enable the use of dnsmasq for DNS resolution in NetworkManager:
$ sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf &>/dev/null <<EOF
[main]
dns=dnsmasq
EOF
Add DNS entries for Red Hat OpenShift Local to the dnsmasq configuration:
$ sudo tee /etc/NetworkManager/dnsmasq.d/external-crc.conf &>/dev/null <<EOF
address=/apps-crc.testing/SERVER_IP_ADDRESS
address=/api.crc.testing/SERVER_IP_ADDRESS
EOF
Note: Comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf. These entries are created by running a local instance of Red Hat OpenShift Local and will conflict with the entries for the remote cluster.
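The two address= entries above behave like wildcard records: the apps-crc.testing entry matches the domain and every name beneath it, while the api.crc.testing entry matches that exact name. A sketch of that matching behavior, with 192.0.2.1 standing in for SERVER_IP_ADDRESS:

```shell
# Sketch of how the dnsmasq address= entries above resolve names.
server_ip=192.0.2.1    # placeholder for SERVER_IP_ADDRESS

resolve() {
  case "$1" in
    apps-crc.testing|*.apps-crc.testing) echo "$server_ip" ;;  # wildcard entry
    api.crc.testing)                     echo "$server_ip" ;;  # exact entry
    *)                                   echo "NXDOMAIN" ;;    # not forwarded
  esac
}

resolve console-openshift-console.apps-crc.testing
resolve api.crc.testing
resolve example.com
```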
Reload the NetworkManager service:
$ sudo systemctl reload NetworkManager
Log in to the remote cluster as the developer user with oc:
$ oc login -u developer -p developer https://api.crc.testing:6443
The remote OpenShift Container Platform web console is available at https://console-openshift-console.apps-crc.testing.