Configuring MicroShift
Chapter 1. About the default MicroShift configuration file
The MicroShift built-in default settings are listed in a YAML file.
1.1. Configuring Red Hat Device Edge
MicroShift and Red Hat Enterprise Linux (RHEL) work together to bring a lighter-weight, single-node Kubernetes to the edge. This combination means that a single node serves as both the control plane and a worker. It also means that the operating system handles many functions. You add features by installing optional RPMs or Operators. In many cases, you must configure the operating system or other resources in addition to the MicroShift service.
Bringing many of these pieces together is the MicroShift configuration file, config.yaml. The MicroShift configuration file customizes your application platform and can enable many advanced functions. For example:
- Ingress is available by default, but you can add advanced functions such as TLS and route admission specifications by using parameters in the MicroShift configuration file.
- If you do not need storage, you can disable the built-in storage provider by using the MicroShift configuration file. If you do want to use the built-in storage provider, you must make your adjustments in the lvmd.config file. The role of the MicroShift configuration file in this case is to set whether you use the default storage provider.
- Advanced networking functions, such as using multiple networks, are also configured here. The Multus package is an installable RPM, but you set up access by using the MicroShift configuration file to set parameters. In addition, you must configure network settings on your networks through the host.
For your convenience, a config.yaml.default file is automatically installed. You can copy this file, rename the copy config.yaml, and use it as a starting point for your own custom configuration.
You can also add features that operate without changes to the MicroShift config.yaml file. For example, you can install and configure GitOps for application management without configuring MicroShift.
If you want to make configuration changes or deploy applications through the MicroShift API with tools other than kustomize manifests, you must wait until the greenboot health checks have finished. This ensures that your changes are not lost if greenboot rolls your rpm-ostree system back to an earlier state.
1.2. The MicroShift configuration file
At startup, MicroShift checks the system-wide /etc/microshift/ directory for a configuration file named config.yaml. If the configuration file does not exist in the directory, built-in default values are used to start the service.
You must use the MicroShift configuration file in combination with host and, sometimes, application and service settings. Ensure that you configure each function in tandem when you adjust settings for your MicroShift node.
For your convenience, a config.yaml.default file ready for your inputs is automatically installed.
1.2.1. Default settings
The Generic Device Plugin for MicroShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
If you do not create a config.yaml file or use a configuration snippet YAML file, default values are used. The following example shows the default configuration settings.
To see the default values, run the following command:
```
$ microshift show-config
```

Default values example output in YAML form:
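The default output resembles the following abridged sketch. Exact fields and default values vary by MicroShift version, so treat these values as illustrative only:

```yaml
# Abridged sketch of default values; run `microshift show-config`
# on your own system for the authoritative output.
apiServer:
  advertiseAddress: ""
  auditLog:
    maxFileAge: 0
    maxFileSize: 200
    maxFiles: 10
    profile: Default
  subjectAltNames: []
  tls:
    cipherSuites: []
    minVersion: VersionTLS12
debugging:
  logLevel: Normal
network:
  clusterNetwork:
  - 10.42.0.0/16
  serviceNetwork:
  - 10.43.0.0/16
  serviceNodePortRange: 30000-32767
node:
  hostnameOverride: ""
  nodeIP: ""
storage:
  driver: ""
  optionalCsiComponents: []
telemetry:
  status: Enabled
```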
Chapter 2. Customizing MicroShift by using the configuration file
Use the MicroShift YAML file to customize your preferences, settings, and parameters.
The Generic Device Plugin for MicroShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
2.1. Using custom settings
To create custom configurations, make a copy of the config.yaml.default file that is provided in the /etc/microshift/ directory, renaming it config.yaml. Keep this file in the /etc/microshift/ directory so that you can change supported settings that override the defaults before starting or restarting MicroShift.
If you have just a few changes to make to the default settings, consider using configuration drop-in snippets as an alternative method.
Restart MicroShift after changing any configuration settings to have them take effect. The config.yaml file is read only when MicroShift starts.
2.1.1. Separate restarts
Applications and other optional services used with your MicroShift node might also need to be restarted separately to apply configuration changes throughout the node. For example, when you make changes to certain networking settings, you must stop and restart service and application pods to apply those changes. For more information, see the procedure for each task that you are completing.
If you add all of the configurations you need at the same time, you can minimize system restarts.
2.1.2. Parameters and values for the MicroShift config.yaml file
The following table explains MicroShift configuration YAML parameters and valid values for each:
| Field | Type | Description |
|---|---|---|
| `apiServer.advertiseAddress` | string | A string that specifies the IP address from which the API server is advertised to members of the node. The default value is calculated based on the address of the service network. |
| `apiServer.auditLog.maxFileAge` | number | How long log files are stored before automatic deletion. The default value of `0` means a log file is never deleted based on age. You can configure this value. |
| `apiServer.auditLog.maxFileSize` | number | By default, when the `audit.log` file reaches the maximum file size, it is rotated and MicroShift begins writing to a new log file. You can configure this value. |
| `apiServer.auditLog.maxFiles` | number | The total number of log files kept. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. You can configure this value. |
| `apiServer.auditLog.profile` | string | Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the `Default` profile is used. |
| `apiServer.namedCertificates` | array | Defines externally generated certificates and domain names by using custom certificate authorities. |
| `apiServer.namedCertificates.certPath` | string | The full path to the certificate. |
| `apiServer.namedCertificates.keyPath` | string | The full path to the certificate key. |
| `apiServer.namedCertificates.names` | array | Optional. Add a list of explicit DNS names. Leading wildcards are allowed. If you do not list names, the implicit names are extracted from the certificates. |
| `apiServer.subjectAltNames` | Fully qualified domain names (FQDNs) or wildcards | Subject Alternative Names for API server certificates. SANs indicate all of the domain names and IP addresses that are secured by a certificate. |
| `apiServer.tls` | object | Defines the Transport Layer Security (TLS) protocol version used and the cipher suites allowed. Provides security for the exposed MicroShift API server and internal control plane endpoints. |
| `apiServer.tls.cipherSuites` | array | Lists the allowed cipher suites that the API server accepts and serves. Defaults to the cipher suites allowed with the TLS specification set in the `minVersion` field. |
| `apiServer.tls.minVersion` | string | Specifies the minimum version of TLS to serve from the API server. The default value is `VersionTLS12`. |
| `debugging.logLevel` | string | Log verbosity. The default value is `Normal`. |
| `dns.baseDomain` | string | Base domain of the node. All managed DNS records are subdomains of this base. |
| `genericDevicePlugin` | object | Configures the Generic Device Plugin (GDP). By default, the plugin is `Disabled`. |
| `genericDevicePlugin.devices` | array | Lists the device definitions to be exposed by the plugin. Each device entry contains a `name` and a list of `groups`. |
| `genericDevicePlugin.devices.groups` | array | Lists device groups. Devices within a group comprise a pool of devices under a common name. When you request a device from that pool, you can receive a device from different defined paths. |
| `genericDevicePlugin.devices.groups.count` | number | Specifies how many times this group of devices can be mounted concurrently. If unspecified, `count` defaults to `1`. |
| `genericDevicePlugin.devices.groups.paths` | array | Lists the host device file paths. Paths can be glob patterns. |
| `genericDevicePlugin.devices.groups.paths.limit` | number | Specifies up to how many times this device can be used in the group concurrently when other devices in the group yield more matches. For example, if one path in the group matches 5 devices and another matches 1 device but has a limit of 10, then the group provides 5 pairs of devices. When unspecified, the limit defaults to `1`. |
| `genericDevicePlugin.devices.groups.paths.mountPath` | string | The file path at which the host device should be mounted within the container. When unspecified, `mountPath` defaults to the value of `path`. |
| `genericDevicePlugin.devices.groups.paths.path` | string | The file path of a device on the host. |
| `genericDevicePlugin.devices.groups.paths.permissions` | string | The file-system permissions given to the mounted device. Applies only to mounts of type `Device`. When unspecified, the value defaults to `mrw`. |
| `genericDevicePlugin.devices.groups.paths.readOnly` | boolean | Specifies whether the path should be mounted read-only. Applies only to mounts of type `Mount`. |
| `genericDevicePlugin.devices.groups.paths.type` | string | Describes what type of file-system node this path describes, either a device node or a mount. |
| `genericDevicePlugin.devices.groups.usbs` | array | Lists the USB specifications that this device group consists of. The vendor and product IDs must always match. The serial ID must match if provided, or is skipped if the ID is empty. |
| `genericDevicePlugin.devices.groups.usbs.product` | string | The USB Product ID of the device to match on. |
| `genericDevicePlugin.devices.groups.usbs.serial` | string | The serial number of the device to match on. A USB device must match exactly on all the given attributes to pass. |
| `genericDevicePlugin.devices.groups.usbs.vendor` | string | The USB Vendor ID of the device to match on. |
| `genericDevicePlugin.devices.name` | string | A unique string representing the kind of device this specification describes. |
| `genericDevicePlugin.domain` | string | Specifies the domain prefix with which devices are advertised and present in the node. |
| `genericDevicePlugin.status` | string | Specifies the default GDP status. |
| `ingress.certificateSecret` | string | A reference to a secret that contains the default certificate that is served by the ingress controller. When routes do not specify their own certificate, this certificate is used. The secret must contain the `tls.crt` and `tls.key` keys and data. If you do not set this value, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain. Any certificate in use is automatically integrated in the MicroShift OAuth server. |
| `ingress.clientTLS` | object | Authenticates client access to the node and services. Mutual TLS authentication is enabled when using these settings. |
| `ingress.clientTLS.allowedSubjectPatterns` | array | Optional subfield that specifies a list of regular expressions that are matched against the distinguished name on a valid client certificate to filter requests. Use this parameter to cause the ingress controller to reject certificates based on the distinguished name. The Perl Compatible Regular Expressions (PCRE) syntax is required. If you configure this field, it must contain a valid expression or the MicroShift service fails. At least one pattern must match a client certificate's distinguished name; otherwise, the ingress controller rejects the certificate and denies the connection. |
| `ingress.clientTLS.clientCA` | object | Required subfield that specifies a config map in the `openshift-ingress` namespace. The config map must contain a CA certificate bundle. |
| `ingress.clientTLS.clientCertificatePolicy` | string | Required subfield that creates a secure route using reencrypt TLS termination with a custom certificate. You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. The ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes. Certificates for plain text HTTP or passthrough TLS routes are not checked with this setting. |
| `ingress.defaultHTTPVersion` | number | Determines the default HTTP version to be used for ingress. The default value is `1`. |
| `ingress.forwardedHeaderPolicy` | string | Specifies when and how the ingress controller sets the `Forwarded` and `X-Forwarded-*` HTTP headers. The default value is `Append`. |
| `ingress.httpCompression` | object | Defines a policy for HTTP traffic compression. There is no HTTP compression by default. |
| `ingress.httpCompression.mimeTypes` | array | A list of MIME types to compress. When the list is empty, the ingress controller does not apply any compression. To define a list, use the format of the Content-Type definition in RFC 1341 that specifies the type and subtype of data in the body of a message and the native encoding of the data. Not all MIME types benefit from compression. |
| `ingress.httpEmptyRequestsPolicy` | string | Describes how HTTP connections are handled if the connection times out before a request is received. The default value is `Respond`. |
| `ingress.listenAddress` | IP address, NIC name, or multiple of these | Value defaults to the entire network of the host. The valid configurable value is a list that can be either a single IP address or NIC name or multiple IP addresses and NIC names. |
| `ingress.logEmptyRequests` | string | Specifies how connections on which no request is received are logged. The default value is `Log`. |
| `ingress.ports.http` | number | The default port is `80`. Configurable. Valid value is a single, unique port in the `1-65535` range. |
| `ingress.ports.https` | number | The default port is `443`. Configurable. Valid value is a single, unique port in the `1-65535` range. |
| `ingress.routeAdmissionPolicy` | object | Defines a policy for handling new route claims, such as allowing or denying claims across namespaces. By default, allows routes to claim different paths of the same hostname across namespaces. |
| `ingress.routeAdmissionPolicy.namespaceOwnership` | string | Describes how hostname claims across namespaces should be handled. The default value is `InterNamespaceAllowed`. |
| `ingress.routeAdmissionPolicy.wildcardPolicy` | string | Describes how routes with wildcard policies are handled by the ingress controller. |
| `ingress.status` | string | Router status. The default is `Managed`. |
| `ingress.tlsSecurityProfile` | object | Specifies settings for ingress controller TLS connections. If you do not set one, the default value is based on the `Intermediate` profile. |
| `ingress.tlsSecurityProfile.type` | string | Specifies the profile type for the TLS security. The default value is `Intermediate`. When using the `Custom` profile type, you must define the TLS parameters yourself. |
| `ingress.tlsSecurityProfile.custom` | object | Specifies the TLS version and ciphers for ingress controllers when using a custom profile. |
| `ingress.tuning` | object | Specifies options for tuning the performance of ingress controller pods. |
| `ingress.tuning.clientFinTimeout` | string | Defines how long a connection is held open while waiting for a client response to the server/backend closing the connection. The default timeout is `1s`. |
| `ingress.tuning.clientTimeout` | string | Defines how long a connection is held open while waiting for a client response. The default timeout is `30s`. |
| `ingress.tuning.headerBufferBytes` | number | Describes how much memory in bytes must be reserved for ingress controller connection sessions. |
| `ingress.tuning.headerBufferMaxRewriteBytes` | number | Describes how much memory in bytes must be reserved from `headerBufferBytes` for HTTP header rewriting and appending. |
| `ingress.tuning.healthCheckInterval` | string | Defines how long the router waits between two consecutive health checks. The default value is `5s`. |
| `ingress.tuning.maxConnections` | number | Defines the maximum number of simultaneous connections that can be established per HAProxy process. The default value is `50000`. |
| `ingress.tuning.serverFinTimeout` | string | Defines how long a connection is held open while waiting for a server or backend response to the client closing the connection. The default timeout is `1s`. |
| `ingress.tuning.serverTimeout` | string | Defines how long a connection is held open while waiting for a server or backend response. The default timeout is `30s`. |
| `ingress.tuning.threadCount` | number | Defines the number of threads created per HAProxy process. The default value is `4`. |
| `ingress.tuning.tlsInspectDelay` | string | Defines how long the router can hold data to find a matching route. Setting this interval with too short a value can cause the router to revert to the default certificate for edge-terminated clients or re-encrypt routes, even when a better-matching certificate could be used. The default value is `5s`. |
| `ingress.tuning.tunnelTimeout` | string | Defines how long a tunnel connection, including websockets, is held open while the tunnel is idle. The default timeout is `1h`. |
| `kubelet` | See the MicroShift low-latency instructions | Parameter for passthrough configuration of the kubelet node agent. Used for low-latency configuration. The default value is null. |
| `manifests.kustomizePaths` | array | The locations on the file system to scan for `kustomization.yaml`, `kustomization.yml`, and `Kustomization` files. |
| `network.clusterNetwork` | IP address block | A block of IP addresses from which pod IP addresses are allocated. IPv4 is the default network. Dual-stack entries are supported. The first entry in this field is immutable after MicroShift starts. The default range is `10.42.0.0/16`. |
| `network.cni` | string | Deploys the Open Virtual Networking - Kubernetes (OVN-K) network plugin as the default container network interface (CNI) when empty or set to `ovnk`. |
| `network.multus.status` | string | Controls the deployment of the Multus Container Network Interface (CNI). The default status is `Disabled`. |
| `network.serviceNetwork` | IP address block | A block of virtual IP addresses for Kubernetes services. IP address pool for services. IPv4 is the default. Dual-stack entries are supported. The first entry in this field is immutable after MicroShift starts. The default range is `10.43.0.0/16`. |
| `network.serviceNodePortRange` | string | The port range allowed for Kubernetes services of type `NodePort`. The default range is `30000-32767`. |
| `node.hostnameOverride` | string | The name of the node. The default value is the hostname. If non-empty, this string is used to identify the node instead of the hostname. This value is immutable after MicroShift starts. |
| `node.nodeIP` | IPv4 address | The IPv4 address of the node. The default value is the IP address of the default route. |
| `node.nodeIPv6` | IPv6 address | The IPv6 address for the node for dual-stack configurations. Cannot be configured in single stack for either IPv4 or IPv6. The default is an empty value or null. |
| `storage.driver` | string | The default value is empty. An empty value or null field defaults to LVMS deployment. |
| `storage.optionalCsiComponents` | array | The default value is null or an empty array. A null or empty array defaults to deploying `snapshot-controller`. |
| `telemetry.endpoint` | string | The endpoint where telemetry data is sent. No user or private data is included in the metrics reported. |
| `telemetry.status` | string | Telemetry status, which can be `Enabled` or `Disabled`. |
2.1.3. Configuring the advertise address network flag
The apiserver.advertiseAddress flag specifies the IP address on which to advertise the API server to members of the node. This address must be reachable by the node. You can set a custom IP address here, but you must also add the IP address to a host interface. Customizing this parameter prevents MicroShift from adding a default IP address to the br-ex network interface.
If you customize the advertiseAddress IP address, make sure it is reachable by the node when MicroShift starts by adding the IP address to a host interface.
If unset, the default value is set to the next immediate subnet after the service network. For example, when the service network is 10.43.0.0/16, the advertiseAddress is set to 10.44.0.0/32.
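For example, a custom advertise address can be set as in the following sketch. The address shown is a placeholder; it must be reachable by the node and also assigned to a host interface:

```yaml
apiServer:
  # Placeholder address: assign it to a host interface before
  # MicroShift starts, or the API server is unreachable.
  advertiseAddress: 10.44.0.1
```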
2.1.4. Extending the port range for NodePort services
The serviceNodePortRange setting extends the port range available to NodePort services. This option is useful when specific standard ports under the 30000-32767 range need to be exposed. For example, extend the range if your device must expose the 1883/tcp MQ Telemetry Transport (MQTT) port on the network because client devices cannot use a different port.
NodePorts can overlap with system ports, causing a malfunction of the system or MicroShift.
Consider the following when configuring the NodePort service ranges:
- Do not create any NodePort service without an explicit `nodePort` selection. When an explicit `nodePort` is not specified, the port is assigned randomly by the `kube-apiserver` and cannot be predicted.
- Do not create any NodePort service for any system service port, MicroShift port, or other services you expose on your device `HostNetwork`.

The following table specifies ports to avoid when extending the port range:
Table 2.2. Ports to avoid

| Port | Description |
|---|---|
| 22/tcp | SSH port |
| 80/tcp | OpenShift Router HTTP endpoint |
| 443/tcp | OpenShift Router HTTPS endpoint |
| 1936/tcp | Metrics service for the openshift-router, not exposed today |
| 2379/tcp | etcd port |
| 2380/tcp | etcd port |
| 6443 | Kubernetes API |
| 8445/tcp | openshift-route-controller-manager |
| 9537/tcp | cri-o metrics |
| 10250/tcp | kubelet |
| 10248/tcp | kubelet healthz port |
| 10259/tcp | kube scheduler |
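As a sketch, extending the range to include the MQTT port mentioned earlier could look like the following. The range is illustrative; do not create NodePort services on any of the system ports listed in Table 2.2:

```yaml
network:
  # Widens the NodePort range so that 1883/tcp (MQTT) can be assigned
  # explicitly to a NodePort service.
  serviceNodePortRange: 1883-32767
```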
Chapter 3. Using configuration snippets
If you want to configure one or two settings, use the /etc/microshift/config.d/ configuration directory to drop in configuration snippet YAML files.
3.1. How configuration snippets work
If you want to configure one or two settings, such as adding subject alternative names (SANs), you can use the /etc/microshift/config.d/ configuration directory to drop in configuration snippet YAML files. You must restart MicroShift for new configurations to apply.
To return to previous values, you can delete a configuration snippet and restart MicroShift.
At runtime, the YAML files inside /etc/microshift/config.d are merged into the existing MicroShift configuration, whether that configuration is a result of default values or a user-created config.yaml file. You do not need to create a config.yaml file to use a configuration snippet.
Files in the snippet directory are sorted in lexicographical order and run sequentially. You can use numerical prefixes for snippets so that each is read in the order you want. The last-read file takes precedence when there is more than one YAML for the same parameter.
Configuration snippets take precedence over both default values and a customized config.yaml configuration file.
3.2. Examples of configuration snippet lists or arrays
Lists, or arrays, are not merged, they are overwritten. For example, you can replace a SAN or list of SANs by creating an additional snippet for the same field that is read after the first:
MicroShift configuration directory contents

- /etc/microshift/config.yaml.default or /etc/microshift/config.yaml

Example MicroShift configuration snippet directory contents

- /etc/microshift/config.d/10-san.yaml
- /etc/microshift/config.d/20-san.yaml

Example 10-san.yaml snippet

```yaml
apiServer:
  subjectAltNames:
  - host1
  - host2
```

Example 20-san.yaml snippet

```yaml
apiServer:
  subjectAltNames:
  - hostZ
```

Example configuration result

```yaml
apiServer:
  subjectAltNames:
  - hostZ
```
If you want to add a value to an existing list, you can add it to an existing snippet. For example, to add hostZ to an existing list of SANs, edit the snippet you have instead of creating a new one:
Example 10-san.yaml snippet

```yaml
apiServer:
  subjectAltNames:
  - host1
  - host2
  - hostZ
```

Example configuration result

```yaml
apiServer:
  subjectAltNames:
  - host1
  - host2
  - hostZ
```
3.3. Example configuration snippets that are objects
Objects are merged together when you use a configuration snippet.
Example 10-advertiseAddress.yaml snippet

```yaml
apiServer:
  advertiseAddress: "microshift-example"
```

Example 20-audit-log.yaml snippet

```yaml
apiServer:
  auditLog:
    maxFileAge: 12
```

Example configuration result

```yaml
apiServer:
  advertiseAddress: "microshift-example"
  auditLog:
    maxFileAge: 12
```
3.4. Examples of mixed configuration snippets
In this example, the values of both advertiseAddress and auditLog.maxFileAge fields merge into the configuration, but only the c.com and d.com subjectAltNames values are retained. This happens because the numbering in the filename indicates that the c.com and d.com values are higher priority.
Example 10-advertiseAddress.yaml snippet

```yaml
apiServer:
  advertiseAddress: "microshift-example"
```

Example 20-audit-log.yaml snippet

```yaml
apiServer:
  auditLog:
    maxFileAge: 12
```

Example 30-SAN.yaml snippet

```yaml
apiServer:
  subjectAltNames:
  - a.com
  - b.com
```

Example 40-SAN.yaml snippet

```yaml
apiServer:
  subjectAltNames:
  - c.com
  - d.com
```
Example configuration result
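Applying the merge rules described in this chapter, the combined result would be the following sketch: the object fields merge, and the last-read SAN list wins:

```yaml
apiServer:
  advertiseAddress: "microshift-example"
  auditLog:
    maxFileAge: 12
  subjectAltNames:
  - c.com
  - d.com
```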
Chapter 4. Configuring IPv6 single or dual-stack networking
You can use the IPv6 networking protocol in either single-stack or dual-stack networking modes.
4.1. IPv6 networking with MicroShift
The MicroShift service defaults to IPv4 address families node-wide. However, IPv6 single-stack and IPv4/IPv6 dual-stack networking is available on supported platforms.
- When you set the values for IPv6 in the MicroShift configuration file and restart the service, settings managed by the OVN-Kubernetes network plugin are updated automatically.
- After migrating to dual-stack networking, both new and existing pods have dual-stack networking enabled.
- If you require node-wide IPv6 access, such as for the control plane and other services, use the following configuration examples. The MicroShift Multus Container Network Interface (CNI) plugin can enable IPv6 for pods.
- For dual-stack networking, each MicroShift node network and service network supports up to two values in the node and service network configuration parameters.
Plan for IPv6 before starting MicroShift for the first time. Switching a node to and from different IP families is not supported unless you are migrating a node from default single-stack to dual-stack networking.
If you configure your networking for either IPv6 single stack or IPv4/IPv6 dual stack, you must restart application pods and services. Otherwise, pods and services remain configured with the default IP family.
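For example, a Kubernetes Service can opt in to dual-stack addressing by setting its IP family policy. This is a generic sketch; the name, selector, and port are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service      # placeholder name
spec:
  # Request both IPv4 and IPv6 addresses where the node supports them.
  ipFamilyPolicy: PreferDualStack
  selector:
    app: example             # placeholder selector
  ports:
  - port: 80
```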
4.2. Configuring IPv6 single-stack networking
You can use the IPv6 network protocol by updating the MicroShift service configuration file.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You have root access to the node.
- Your node uses the OVN-Kubernetes network plugin.
- The host has an IPv6 address and IPv6 routes, including the default.
Procedure
1. If you have not done so already, make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml. Keep the new MicroShift config.yaml in the /etc/microshift/ directory. Your config.yaml file is read every time the MicroShift service starts.

   Note: After you create it, the config.yaml file takes precedence over built-in settings.

2. Replace the default values in the `network` section of the MicroShift YAML with your valid values.

   Example single-stack IPv6 networking configuration callouts:

   1. Specify a `clusterNetwork` with a CIDR value that is less than `64`.
   2. Specify an IPv6 CIDR with a prefix of `112`. Kubernetes uses only the lowest 16 bits. For a prefix of `112`, IP addresses are assigned from `112` to `128` bits.
   3. Example node IP address. Valid values are IP addresses in the IPv6 address family. You must only specify an IPv6 address when an IPv4 network is also present. If an IPv4 network is not present, the MicroShift service automatically fills in this value upon restart.
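A sketch of such a configuration, using values consistent with the callouts above; the node IP is a placeholder:

```yaml
network:
  clusterNetwork:
  - fd01::/48          # callout 1: CIDR less than 64
  serviceNetwork:
  - fd02::/112         # callout 2: prefix of 112
node:
  nodeIP: 2001:db8::1  # callout 3: placeholder IPv6 node address
```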
3. Complete any other configurations you require, then start MicroShift by running the following command:

   ```
   $ sudo systemctl start microshift
   ```
Verification
1. Retrieve the networks defined in the node resource by running the following command:

   ```
   $ oc get node -o jsonpath='{.items[].spec.podCIDRs[]}'
   ```

   Example output:

   ```
   fd01::/48
   ```

2. Retrieve the status of the pods by running the following command:

   ```
   $ oc get pod -A -o wide
   ```

3. Retrieve the status of services by running the following command:

   ```
   $ oc get svc -A
   ```
4.3. Configuring IPv6 dual-stack networking before MicroShift starts
You can configure your MicroShift node to run on dual-stack networking that supports IPv4 and IPv6 address families by using the configuration file before starting the service.
- The first IP family in the configuration is the primary IP stack in the node.
- After the node is running with dual-stack networking, enable application pods and add-on services for dual-stack by restarting them.
The OVN-Kubernetes network plugin requires that both IPv4 and IPv6 default routes be on the same network device. IPv4 and IPv6 default routes on separate network devices are not supported.
When using dual-stack networking where IPv6 is required, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- You have root access to the node.
- Your node uses the OVN-Kubernetes network plugin.
- The host has both IPv4 and IPv6 addresses and routes, including a default for each.
- The host has at least two L3 networks, IPv4 and IPv6.
Procedure
1. If you have not done so already, make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml. Keep the new MicroShift config.yaml in the /etc/microshift/ directory. Your config.yaml file is read every time the MicroShift service starts.

   Note: After you create it, the config.yaml file takes precedence over built-in settings.

2. If you have not started MicroShift, replace the default values in the `network` section of the MicroShift YAML with your valid values.

   Example dual-stack IPv6 networking configuration callouts:

   1. Specify an IPv6 `clusterNetwork` with a CIDR value that is less than `64`.
   2. Specify an IPv6 CIDR with a prefix of `112`. Kubernetes uses only the lowest 16 bits. For a prefix of `112`, IP addresses are assigned from `112` to `128` bits.
   3. Example node IP address. Must be an IPv4 address family.
   4. Example node IP address for dual-stack configuration. Must be an IPv6 address family. Configurable only with dual-stack networking.
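A sketch of such a dual-stack configuration, consistent with the callouts above; all addresses are placeholders:

```yaml
network:
  clusterNetwork:
  - 10.42.0.0/16
  - fd01::/48              # callout 1: IPv6 CIDR less than 64
  serviceNetwork:
  - 10.43.0.0/16
  - fd02::/112             # callout 2: prefix of 112
node:
  nodeIP: 192.168.113.117  # callout 3: placeholder IPv4 node address
  nodeIPv6: 2001:db8::1    # callout 4: placeholder IPv6 node address
```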
3. Complete any other MicroShift configurations you require, then start MicroShift by running the following command:

   ```
   $ sudo systemctl start microshift
   ```

4. Reset the IP family policy for application pods and services as needed, then restart those application pods and services to enable dual-stack networking. See "Resetting the IP family policy for application pods and services" for a simple example.
Verification
You can verify that all of the system services and pods have two IP addresses, one for each family, by using the following steps:
1. Retrieve the networks defined for a pod by running the following command:

   ```
   $ oc get pod -n openshift-ingress router-default-5b75594b4-w7w6s -o jsonpath='{.status.podIPs}'
   ```

   Example output:

   ```
   [{"ip":"10.42.0.4"},{"ip":"fd01:0:0:1::4"}]
   ```

2. Retrieve the networks defined by the host network pods by running the following command:

   ```
   $ oc get pod -n openshift-ovn-kubernetes ovnkube-master-2fm2k -o jsonpath='{.status.podIPs}'
   ```

   Example output:

   ```
   [{"ip":"192.168.113.117"},{"ip":"2001:db9:ca7:ff::1db8"}]
   ```
4.4. Migrating a MicroShift node to IPv6 dual-stack networking
You can convert a single-stack node to dual-stack networking that supports IPv4 and IPv6 address families by setting two entries in the service and node network parameters in the MicroShift configuration file.
- The first IP family in the configuration is the primary IP stack in the node.
- MicroShift system pods and services are automatically updated upon MicroShift restart.
- After the node is migrated to dual-stack networking and has restarted, enable workload pods and services for dual-stack networking by restarting them.
The OVN-Kubernetes network plugin requires that both IPv4 and IPv6 default routes be on the same network device. IPv4 and IPv6 default routes on separate network devices are not supported.
When using dual-stack networking where IPv6 is required, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have root access to the node.
- Your node uses the OVN-Kubernetes network plugin.
- The host has both IPv4 and IPv6 addresses and routes, including a default for each.
- The host has at least two L3 networks, IPv4 and IPv6.
Procedure
- If you have not done so, make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml. Keep the new MicroShift config.yaml in the /etc/microshift/ directory. Your config.yaml file is read every time the MicroShift service starts.

  Note: After you create it, the config.yaml file takes precedence over built-in settings.

- Add IPv6 configurations to the network section of the MicroShift YAML with your valid values.

  Warning: You must keep the same first entry across restarts and migrations. This is true for any migration: single-to-dual stack, or dual-to-single stack. A complete wipe of the etcd database is required if a change to the first entry is needed. This might result in application data loss and is not supported.
- Add an IPv6 configuration for a second network in the network section of the MicroShift YAML with your valid values.
- Add network assignments to the network section of the MicroShift config.yaml to enable dual-stack networking with IPv6 as the secondary network.

  Example dual-stack IPv6 configuration with network assignments
- 1: The IPv6 node address.
- 2: IPv4 network. Specify a clusterNetwork with a CIDR value that is less than 24.
- 3: IPv6 network. Specify a clusterNetwork with a CIDR value that is less than 64.
- 4: Specify an IPv6 CIDR with a prefix of 112. Kubernetes uses only the lowest 16 bits. For a prefix of 112, IP addresses are assigned from 112 to 128 bits.
- 5: Example node IP address. Maintain the previous IPv4 IP address.
- 6: Example node IP address. Must be an IPv6 address family.
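The example configuration itself is not reproduced in this copy of the document. As a sketch only, a configuration matching the callouts above might look like the following; all addresses are illustrative placeholders, and the exact field names should be verified against your installed config.yaml.default:

```yaml
apiServer:
  subjectAltNames:
    - 2001:db8::1          # 1: the IPv6 node address (placeholder)
network:
  clusterNetwork:
    - 10.42.0.0/16         # 2: IPv4 network, CIDR value less than 24
    - fd01::/48            # 3: IPv6 network, CIDR value less than 64
  serviceNetwork:
    - 10.43.0.0/16
    - fd02::/112           # 4: IPv6 CIDR with a prefix of 112
node:
  nodeIP: 192.168.113.117  # 5: keep the previous IPv4 node IP address
  nodeIPv6: 2001:db8::1    # 6: must be an IPv6 address family (placeholder)
```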
- Complete any other configurations you require, then restart MicroShift by running the following command:

  $ sudo systemctl restart microshift

- Reset the IP family policy for application pods and services as needed, then restart those application pods and services to enable dual-stack networking. See "Resetting the IP family policy for application pods and services" for a simple example.
Verification
You can verify that all of the system services and pods have two IP addresses, one for each family, by using the following steps:
Retrieve the status of the pods by running the following command:
$ oc get pod -A -o wide
Example output
Retrieve the networks defined by the OVN-Kubernetes network plugin by running the following command:
$ oc get pod -n openshift-ovn-kubernetes ovnkube-master-bltk7 -o jsonpath='{.status.podIPs}'
Example output
[{"ip":"192.168.113.117"},{"ip":"2001:db9:ca7:ff::1db8"}]
Retrieve the IP addresses assigned to the default router pod by running the following command:
$ oc get pod -n openshift-ingress router-default-5b75594b4-228z7 -o jsonpath='{.status.podIPs}'
Example output
[{"ip":"10.42.0.3"},{"ip":"fd01:0:0:1::3"}]

Note: To return to single-stack networking, you can remove the second entry from the networks parameters and return to the single-stack configuration that was in place before migrating to dual-stack.
4.5. Resetting the IP family policy for application pods and services
The default ipFamilyPolicy configuration value, PreferSingleStack, does not automatically update in all services after you update your MicroShift configuration to dual-stack networking. To enable dual-stack networking in services and application pods, you must update the ipFamilyPolicy value.
Prerequisites
- You used the MicroShift config.yaml to define a dual-stack network with an IPv6 address family.
Procedure
Set the spec.ipFamilyPolicy field to a valid value for dual-stack networking in your service or pod by using the following example:

Example dual-stack network configuration for a service

- 1: Required. Valid values for dual-stack networking are PreferDualStack and RequireDualStack. The value you set depends on the requirements of your application. PreferSingleStack is the default value for the ipFamilyPolicy field.
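The service example referenced above is not reproduced in this copy of the document. The following is a minimal sketch of a standard Kubernetes Service using the field; the names my-service and app: my-app are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service       # hypothetical service name
spec:
  ipFamilyPolicy: PreferDualStack   # 1: or RequireDualStack
  selector:
    app: my-app          # hypothetical application label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```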
- Restart any application pods that do not have a hostNetwork defined. Pods that do have a hostNetwork defined do not need to be restarted to update the ipFamilyPolicy value.
MicroShift system services and pods are automatically updated when the ipFamilyPolicy value is updated.
4.6. OVN-Kubernetes IPv6 and dual-stack limitations
The OVN-Kubernetes network plugin has the following limitations:
For a cluster configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.
If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.
For a cluster configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway.
If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface

The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.
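A quick way to check the same-interface requirement on the host is to compare the device names of the IPv4 and IPv6 default routes. The following is an illustrative sketch, not a supported tool; route_dev parses the device name out of an `ip route show default` line:

```shell
#!/bin/sh
# route_dev: print the interface name from an `ip route show default` line.
route_dev() {
  echo "$1" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}' | head -n1
}

# On the host you would feed it live output, for example:
#   dev4=$(route_dev "$(ip -4 route show default)")
#   dev6=$(route_dev "$(ip -6 route show default)")
# Illustrative fixed inputs:
dev4=$(route_dev "default via 192.168.127.1 dev br-ex proto dhcp")
dev6=$(route_dev "default via fe80::1 dev br-ex proto ra")
if [ -n "$dev4" ] && [ "$dev4" = "$dev6" ]; then
  echo "OK: both default routes use $dev4"
else
  echo "WARNING: IPv4 uses '$dev4' but IPv6 uses '$dev6'"
fi
```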
- If you set the ipv6.disable parameter to 1 in the kernelArgument section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of Red Hat build of MicroShift fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1.
Chapter 5. Using ingress control for a MicroShift node
Use the ingress controller options in the MicroShift configuration file to make pods and services accessible outside the node.
5.1. Using ingress control in MicroShift
When you create your MicroShift node, each pod and service running on the node is allocated an IP address. These IP addresses are accessible to other pods and services running nearby by default, but are not accessible to external clients. MicroShift uses a minimal implementation of the OpenShift Container Platform IngressController API to enable external access to node services.
With more configuration options, you can fine-tune ingress to meet your specific needs. To use enhanced ingress control, update the parameters in the MicroShift configuration file and restart the service.
Ingress configuration is useful in a variety of ways, for example:
- Accommodate server response speed
  - If your application starts processing requests from clients but the connection closes before it can respond, you can set the ingress.tuningOptions.serverTimeout parameter in the configuration file to a higher value to accommodate the speed of the response from the server.
- Closing router connections
  - If the router has many connections open because an application running on the node does not close connections properly, you can set the ingress.tuningOptions.serverTimeout and spec.tuningOptions.serverFinTimeout parameters to a lower value, forcing those connections to close sooner.
- Verify client certificates
  - If you need to configure the ingress controller to verify client certificates, you can use the ingress.clientTLS parameter to set a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client's certificate. Optionally, you can also configure a list of certificate subject filters.
- Configure a TLS security profile
  - If you need to configure a TLS security profile for an ingress controller, you can use the ingress.tlsSecurityProfile parameter to specify a default or custom TLS security profile. The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for the ingress controllers. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.
- Create policies for new route claims
  - If you need to define a policy for handling new route claims, you can use the routeAdmission parameter to allow or deny claims across namespaces. Set the routeAdmission parameter to describe how hostname claims across namespaces should be handled and to describe how the ingress controller handles routes with wildcard policies.
- Customize error pages
- If you want more than the default error pages, which are usually empty and only return the HTTP status code, configure custom error pages.
- Capture HTTP headers or cookies
- If you want to include the capture of HTTP headers or cookies, configure them in the access logging.
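For instance, the timeout adjustments described in the first two use cases above could be expressed as a small configuration fragment. This is a sketch with illustrative values only; set the timeouts your application actually needs:

```yaml
ingress:
  tuningOptions:
    serverTimeout: 60s     # raise to accommodate a slow-responding server
    serverFinTimeout: 1s   # lower to force lingering connections closed sooner
```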
5.2. Configuring ingress control in MicroShift
You can use detailed ingress control settings by updating the MicroShift service configuration file or using a configuration snippet.
- A config.yaml configuration file takes precedence over built-in settings. The config.yaml file is read every time the MicroShift service starts.
- Configuration snippet YAMLs take precedence over both built-in settings and the config.yaml configuration file.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have root access to the node.
- Your node uses the OVN-Kubernetes Container Network Interface (CNI) plugin.
Procedure
Apply ingress control settings in one of the two following ways:

- Update the MicroShift config.yaml configuration file by making a copy of the provided config.yaml.default file in the /etc/microshift/ directory, naming it config.yaml and keeping it in that directory.
- Use a configuration snippet to apply the ingress control settings you want. To do this, create a configuration snippet YAML file and put it in the /etc/microshift/config.d/ configuration directory.

Replace the default values in the ingress section of the MicroShift YAML with your valid values, or create a configuration snippet file with the sections you need.

Ingress controller configuration fields with default values
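The default-values listing is not reproduced in this copy of the document. The following partial sketch is assembled from the defaults described in Table 5.1; verify field names and values against your installed config.yaml.default:

```yaml
ingress:
  status: Managed
  ports:
    http: 80
    https: 443
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
    wildcardPolicy: WildcardsDisallowed
  defaultHTTPVersion: 1
  forwardedHeaderPolicy: Append
  httpEmptyRequestsPolicy: Respond
  logEmptyRequests: Log
  tuningOptions:
    clientFinTimeout: 1s
    clientTimeout: 30s
    healthCheckInterval: 5s
    serverFinTimeout: 1s
    serverTimeout: 30s
```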
Table 5.1. Ingress controller configuration field definitions

ingress
- The ingress section of the MicroShift config.yaml file defines the configurable parameters for the implementation of the OpenShift Container Platform IngressController API. All of the following parameters in this table are subsections of the ingress section of the MicroShift config.yaml.

accessLogging
- This ingress subsection describes how client requests are logged. If the status field is empty, access logging is disabled. When the status field is set to Enabled, access requests are logged as configured with the accessLogging parameters, and the accessLogging.destination.type is automatically set to Container.
- When enabled, access logging is part of the openshift-router logs. The sos report procedure for MicroShift captures logs from this pod.

accessLogging.destination
- A destination for logs. The destination can be a local sidecar container or a remote endpoint. Default value is null.

accessLogging.destination.type
- The type of destination for logs. Valid values are Container or Syslog.
- Setting this value to Container specifies that logs should go to a sidecar container. When the destination type is set to Container, a container called logs is automatically created. Using container logs means that logs might be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. You must have a custom logging solution that reads logs from this sidecar.
- Setting this value to Syslog specifies that logs are sent to a Syslog endpoint. You must configure a custom Syslog instance and specify an endpoint that can receive Syslog messages. For example, see "Getting started with kernel logging".
accessLogging.destination.container
- Describes parameters for the Container logging destination type. You must configure a custom logging solution that reads logs from this sidecar.

accessLogging.destination.container.maxLength
- Optional configuration. The default value is 1024 bytes. Message length must be at least 480 and not greater than 8192 bytes.

accessLogging.destination.syslog
- Describes parameters for the Syslog logging destination type. You must configure a custom Syslog instance with an endpoint that can receive Syslog messages.

accessLogging.destination.syslog.address
- Required configuration when the Syslog destination type is set. Valid value is the IP address of the syslog endpoint that receives log messages.

accessLogging.destination.syslog.facility
- Optional configuration when the Syslog destination type is set. Specifies the syslog facility of log messages. If this field is empty, the facility is local1. Otherwise, the field must specify one of the following valid syslog facilities: kern, user, mail, daemon, auth, syslog, lpr, news, uucp, cron, auth2, ftp, ntp, audit, alert, cron2, local0, local1, local2, local3, local4, local5, local6, or local7.

accessLogging.destination.syslog.maxLength
- Optional configuration when the Syslog destination type is set. The maximum length of the Syslog message. Message length must be at least 480 and not greater than 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes.

accessLogging.destination.syslog.port
- Required configuration when the Syslog destination type is set. The UDP port number of the syslog endpoint that receives log messages. The default value is 0.

httpCaptureCookies
- Specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, access logs do not capture the cookies. Default value is empty. Configuring ingress.accessLogging.httpCaptureCookies automatically enables ingress access logging. For any cookie that you want to capture, you must also set the matchType and maxLength parameters. For example:

  httpCaptureCookies:
  - matchType: Exact
    maxLength: 128
    name: MYCOOKIE
httpCaptureCookies.matchType
- Specifies whether the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. Valid values are Exact for an exact string match and Prefix for a string prefix match.
- If you use the Exact setting, you must also specify a name in the httpCaptureCookies.name field.
- If you use the Prefix setting, you must also specify a prefix in the httpCaptureCookies.namePrefix field. For example, the setting of matchType: Prefix when the namePrefix is "mush" captures a cookie named "mush" or "mushroom", but not one named "room". The first matching cookie is captured.

httpCaptureCookies.maxLength
- Specifies the maximum length of the cookie that is logged, which includes the cookie name, cookie value, and one-character delimiter. If the log entry exceeds this length, the value is truncated in the log message. The ingress controller might impose a separate bound on the total length of HTTP headers in a request. The minimum value is 1 byte, the maximum value is 1024 bytes. The default value is 0.

httpCaptureCookies.name
- Specifies the exact name used for a cookie name match as set in the httpCaptureCookies.matchType parameter. The value must be a valid HTTP cookie name as defined in RFC 6265 section 4.1. The minimum length is 1 byte and the maximum length is 1024 bytes.

httpCaptureCookies.namePrefix
- Specifies the prefix for a cookie name match as set in the httpCaptureCookies.matchType parameter. The value must be a valid HTTP cookie name as defined in RFC 6265 section 4.1. The minimum length is 1 byte and the maximum length is 1024 bytes.

httpCaptureHeaders
- Defines the HTTP headers that should be captured in the access logs. This field is a list and allows capturing request and response headers independently. When this field is empty, headers are not captured. This option only applies to plain text HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption: for example, edge-terminated or reencrypt connections. Headers cannot be captured for TLS passthrough connections. Configuring the ingress.accessLogging.httpCaptureHeaders parameter automatically enables ingress access logging.

httpCaptureHeaders.request
- Specifies which HTTP request headers to capture. When this field is empty, no request headers are captured.

httpCaptureHeaders.request.maxLength
- Specifies a maximum length for the header value. When a header value exceeds this length, the value is truncated in the log message. The minimum required value is 1 byte. The ingress controller might impose a separate bound on the total length of HTTP headers in a request.

httpCaptureHeaders.request.name
- Specifies a header name. The value must be a valid HTTP header name as defined in RFC 2616 section 4.2. If you configure this value, you must specify maxLength and name values.

httpCaptureHeaders.response
- Specifies which HTTP response headers to capture. If this field is empty, no response headers are captured.

httpCaptureHeaders.response.maxLength
- Specifies a maximum length for the header value. If a header value exceeds this length, the value is truncated in the log message. The ingress controller might impose a separate bound on the total length of HTTP headers in a request.

httpCaptureHeaders.response.name
- Specifies a header name. The value must be a valid HTTP header name as defined in RFC 2616 section 4.2.

httpLogFormat
- Specifies the format of the log message for an HTTP request. If this field is empty, log messages use the default HTTP log format. For the HAProxy default HTTP log format, see the HAProxy documentation.

status
- Specifies whether access is logged. Valid values are Enabled and Disabled. Default value is Disabled.
- When you configure either ingress.accessLogging.httpCaptureHeaders or ingress.accessLogging.httpCaptureCookies, you must set ingress.accessLogging.status to Enabled.
- When you set the ingress.accessLogging.status field to Enabled, the accessLogging.destination.type is automatically set to Container and the router logs all requests in the logs container.
- If you set this value to Disabled, the router does not log any requests in the access log.
certificateSecret
- A reference to a kubernetes.io/tls type of secret that contains the default certificate that the MicroShift ingress controller serves. When routes do not specify their own certificate, the certificateSecret parameter is used. All secrets used must contain tls.key key file contents and tls.crt certificate file contents.
- When the certificateSecret parameter is not set, a wildcard certificate is automatically generated and used. The wildcard certificate is valid for the ingress controller default domain and its subdomains. The generated certificate authority (CA) is automatically integrated with the truststore of the node.
- In-use generated and user-specified certificates are automatically integrated with the MicroShift built-in OAuth server.

clientTLS
- Authenticates client access to the node and services. As a result, mutual TLS authentication is enabled. If this parameter is not set, then client TLS is not enabled. You must set the spec.clientTLS.clientCertificatePolicy and spec.clientTLS.clientCA parameters to use client TLS.

clientTLS.AllowedSubjectPatterns
- Optional subfield that specifies a list of regular expressions that are matched against the distinguished name on a valid client certificate to filter requests. This parameter is useful when you have client authentication. Use this parameter to cause the ingress controller to reject certificates based on the distinguished name. The Perl Compatible Regular Expressions (PCRE) syntax is required. You must set the spec.clientTLS.clientCertificatePolicy and spec.clientTLS.clientCA parameters to use clientTLS.AllowedSubjectPatterns.
- Important: When configured, this field must contain a valid expression or the MicroShift service fails. At least one pattern must match a client certificate's distinguished name; otherwise, the ingress controller rejects the certificate and denies the connection.

clientTLS.clientCA
- Specifies a required config map that is in the openshift-ingress namespace. Required to enable client TLS. The config map must contain a certificate authority (CA) bundle named ca-bundle.pem or the deployment of the default router fails.

clientTLS.clientCA.name
- The metadata.name of the config map referenced in the clientTLS.clientCA value.

clientTLS.ClientCertificatePolicy
- Required or Optional are valid values. Set to Required to enable client TLS. The ingress controller only checks client certificates for edge-terminated and re-encrypted TLS routes. The ingress controller cannot check certificates for plain text HTTP or passthrough TLS routes.

defaultHTTPVersion
- Sets the HTTP version for the ingress controller. The default value is 1 for HTTP 1.1. Setting up a load balancer for HTTP 2 and 3 is recommended.

forwardedHeaderPolicy
- Specifies when and how the ingress controller sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The following values are valid:
- Append preserves any existing headers by specifying that the ingress controller appends them. Append is the default value.
- Replace removes any existing headers by specifying that the ingress controller sets the headers.
- IfNone specifies that the ingress controller sets the headers if they are not already set.
- Never preserves any existing headers by specifying that the ingress controller never sets the headers.
httpCompression
- Defines the policy for HTTP traffic compression.

httpCompression.mimeTypes
- Defines a list of MIME types to which compression should be applied.
- For example, text/css; charset=utf-8, text/html, text/*, image/svg+xml, application/octet-stream, and X-custom/customsub, in the type/subtype; [;attribute=value] format.
- Valid types are: application, image, message, multipart, text, video, or a custom type prefaced by X-. To see the full notation for MIME types and subtypes, see RFC 1341 (IETF Datatracker documentation).

httpEmptyRequestsPolicy
- Describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore. The default value is Respond. Empty requests typically come from load-balancer health probes or preconnects and can often be safely ignored. However, network errors and port scans can also cause these requests. Therefore, setting this field to Ignore can impede detection or diagnosis of network problems and detection of intrusion attempts.
- When the policy is set to Respond, the ingress controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics.
- When the policy is set to Ignore, the http-ignore-probes parameter is added to the HAProxy process configuration. After this parameter is added, the ingress controller closes the connection without sending a response, and does not log the connection or increment metrics.
logEmptyRequests
- Specifies whether connections for which no request is received are logged. Log and Ignore are valid values. Empty requests typically come from load-balancer health probes or preconnects and can often be safely ignored. However, network errors and port scans can also cause these requests. Therefore, setting this field to Ignore can impede detection or diagnosis of network problems and detection of intrusion attempts. The default value is Log.
- Setting this value to Log indicates that an event should be logged.
- Setting this value to Ignore sets the dontlognull option in the HAProxy configuration.

httpErrorCodePages
- Describes custom error code pages. To use this setting, you must configure the httpErrorCodePages.name parameter.

httpErrorCodePages.name
- Specifies custom error code pages. You can only customize errors for the 503 and 404 page codes. To customize error code pages, specify a ConfigMap name. The ConfigMap object must be in the openshift-ingress namespace and contain keys in the error-page-<error code>.http format, where <error code> is an HTTP status code. Each value in the ConfigMap must be the full response, including HTTP headers. The default value of this parameter is null.

ports
- Defines default router ports.

ports.http
- Default router http port. Must be in range 1-65535. Default value is 80.

ports.https
- Default router https port. Must be in range 1-65535. Default value is 443.

routeAdmission
- Defines a policy for handling new route claims, such as allowing or denying claims across namespaces.
routeAdmission.namespaceOwnership
- Describes how hostname claims across namespaces are handled. The default is InterNamespaceAllowed. The following are valid values:
- Strict does not allow routes to claim the same hostname across namespaces.
- InterNamespaceAllowed allows routes to claim different paths of the same hostname across namespaces.

routeAdmission.wildcardPolicy
- Controls how the ingress controller handles routes with configured wildcard policies. WildcardsAllowed and WildcardsDisallowed are valid values. Default value is WildcardsDisallowed.
- WildcardsAllowed means that the ingress controller admits routes with any wildcard policy.
- WildcardsDisallowed means that the ingress controller admits only routes with a wildcard policy of None.
- Important: Changing the wildcard policy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. The ingress controller only readmits these routes after they are recreated with a wildcard policy of None.

status
- Default router status. Managed or Removed are valid values.

tlsSecurityProfile
- Specifies settings for TLS connections for ingress controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. The TLS 1.0 version of an Old or Custom profile is automatically converted to 1.1 by the ingress controller. Intermediate is the default setting.
- The minimum TLS version for ingress controllers is 1.1. The maximum TLS version is 1.3.
Note: The TLSProfile status shows the ciphers and the minimum TLS version of the configured security profile. Profiles are intent-based and change over time when new ciphers are developed and existing ciphers are found to be insecure. The usable list can be reduced depending on which ciphers are available to a specific process.

tlsSecurityProfile.custom
- User-defined TLS security profile. If you configure this parameter and related parameters, use extreme caution.

tlsSecurityProfile.custom.ciphers
- Specifies the cipher algorithms that are negotiated during the TLS handshake. Operators might remove entries their operands do not support.

tlsSecurityProfile.custom.minTLSVersion
- Specifies the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2, and 1.3, set the value to VersionTLS11. The highest valid value for minTLSVersion is VersionTLS12.

tlsSecurityProfile.intermediate
- You can use this TLS profile for a majority of services. Intermediate compatibility (recommended).

tlsSecurityProfile.old
- Used for backward compatibility. Old backward compatibility.

tlsSecurityProfile.type
- Valid values are Intermediate, Old, or Custom. The Modern value is not supported.

tuningOptions
- Specifies options for tuning the performance of ingress controller pods.

tuningOptions.clientFinTimeout
- Specifies how long the ingress controller holds a connection open while waiting for a client response before the server closes the connection. The default timeout is 1s.

tuningOptions.clientTimeout
- Specifies how long the ingress controller holds a connection open while waiting for a client response. The default timeout is 30s.

tuningOptions.headerBufferBytes
- Specifies how much memory is reserved, in bytes, for ingress controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the ingress controller. If not set, the default value is 32768 bytes.
- Important: Setting this field is not recommended because headerBufferBytes values that are too small can break the ingress controller. Conversely, headerBufferBytes values that are too large could cause the ingress controller to use significantly more memory than necessary.

tuningOptions.headerBufferMaxRewriteBytes
- Specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for ingress controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096. headerBufferBytes must be greater than the headerBufferMaxRewriteBytes value for incoming HTTP requests. If not set, the default value is 8192 bytes.
- Important: Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the ingress controller and values that are too large could cause the ingress controller to use significantly more memory than necessary.

tuningOptions.healthCheckInterval
- Specifies how long the router waits between health checks, set in seconds. The default is 5s.

tuningOptions.maxConnections
- Specifies the maximum number of simultaneous connections that can be established for each HAProxy process. Increasing this value allows each ingress controller pod to handle more connections at the cost of additional system resources. Permitted values are 0, -1, any value within the range 2000 and 2000000, or the field can be left empty.
- If this field is empty or has the value 0, the ingress controller uses the default value of 50000.
- If the field has the value of -1, then the HAProxy process dynamically computes a maximum value based on the available ulimits in the running container. This process results in a large computed value that incurs significant memory usage compared to the current default value of 50000.
- If the field has a value that is greater than the current operating system limit, the HAProxy processes do not start.
- If you choose a discrete value and the router pod is migrated to a new node, it is possible that the new node does not have an identical ulimit configured. In such cases, the pod fails to start.
- You can monitor memory usage for router containers with the container_memory_working_set_bytes{container="router",namespace="openshift-ingress"} metric.
- You can monitor memory usage of individual HAProxy processes in router containers with the container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"} metric.

tuningOptions.serverFinTimeout
- Specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s.

tuningOptions.serverTimeout
- Specifies how long a connection is held open while waiting for a server response. The default timeout is 30s.

tuningOptions.threadCount
- Specifies the number of threads to create per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of using more system resources. The HAProxy load balancer supports up to
64threads. If this field is empty, the ingress controller uses the default value of4threads.ImportantSetting this field is not recommended because increasing the number of
HAProxythreads allows ingress controller pods to use more CPU time under load, and prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the ingress controller to perform poorly.tuningOptions.tlsInspectDelaySpecifies how long the router can hold data to find a matching route. Setting this value too low can cause the router to fall back to the default certificate for edge-terminated, re-encrypted, or passthrough routes, even when using a better-matched certificate. The default inspect delay is
5s.tuningOptions.tunnelTimeoutSpecifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is
1h.-
When enabled, access logging is part of the
Complete any other configurations you require, then start or restart MicroShift by running one of the following commands:

$ sudo systemctl start microshift

$ sudo systemctl restart microshift
Verification
After making ingress configuration changes and restarting MicroShift, you can check the age of the router pod to ensure that changes are applied.
To check the status of the router pod, run the following command:
$ oc get pods -n openshift-ingress

Example output

NAME                              READY   STATUS    RESTARTS   AGE
router-default-8649b5bf65-w29cn   1/1     Running   0          6m10s
5.2.1. Creating a secret for the ingress controller certificateSecret
Use this procedure to create a secret that is referenced by the certificateSecret parameter value in the MicroShift configuration file. This secret contains the default certificate served by the ingress controller.
Any in-use certificate is automatically integrated with the MicroShift built-in OAuth server.
Prerequisites
- You have root access to MicroShift.
- You installed the OpenShift CLI (oc).
- Your private key is not encrypted or you have decrypted it for importing into MicroShift.
Procedure
Create a secret that contains the wildcard certificate chain and key:

$ oc create secret tls <secret> --cert=</path/to/cert.crt> --key=</path/to/cert.key> -n openshift-ingress

Important: The certificate must include the subjectAltName extension showing *.apps.<nodename>.<domain>.
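Before creating the secret, a standard OpenSSL check can confirm that a certificate carries the required subjectAltName extension. This sketch first generates a throwaway self-signed certificate with a wildcard SAN so the check is demonstrable; substitute your real certificate path in practice (the node1.example.com name is a placeholder).

```shell
# Sketch: verify the subjectAltName extension of a certificate.
# Generate a throwaway self-signed cert with a wildcard SAN for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/CN=demo" \
  -addext "subjectAltName=DNS:*.apps.node1.example.com"

# Print only the subjectAltName extension; for a real cert, point -in at it.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

The second command prints the X509v3 Subject Alternative Name block, so a missing or wrong SAN is visible before the secret is created.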
- Update the certificateSecret parameter value in the MicroShift configuration YAML with the newly created secret.
- Complete any other configurations you require, then start or restart MicroShift by running one of the following commands:

$ sudo systemctl start microshift

$ sudo systemctl restart microshift
5.2.2. Configuring the TLS security profile for the ingress controller
You can configure the TLS security profile for the ingress controller by setting the tlsSecurityProfile type in the MicroShift configuration YAML.
Prerequisites
- You have root access to the MicroShift node.
Procedure
Add the spec.tlsSecurityProfile field to the MicroShift YAML configuration file:

1. Specify the TLS security profile type (Old, Intermediate, or Custom). The default is Intermediate.
2. Specify the appropriate field for the selected type:
   - old: {}
   - intermediate: {}
   - custom:
3. For the custom type, specify a list of TLS ciphers and the minimum accepted TLS version.

Warning: If you choose a custom TLS configuration, use extreme caution. Using self-signed TLS certificates can introduce security risks.
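As a sketch of how these fields fit together, the following fragment shows a Custom profile. The placement of tlsSecurityProfile under the ingress section of /etc/microshift/config.yaml and the cipher names are assumptions for illustration; check the configuration reference for your MicroShift version for the exact schema.

```yaml
# Sketch only: field placement and cipher names are illustrative assumptions.
ingress:
  tlsSecurityProfile:
    type: Custom                       # Old | Intermediate | Custom; Modern is not supported
    custom:
      ciphers:                         # list of accepted TLS ciphers
        - ECDHE-ECDSA-AES256-GCM-SHA384
        - ECDHE-RSA-AES256-GCM-SHA384
      minTLSVersion: VersionTLS12      # VersionTLS12 is the highest valid value
```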
- Save the file to apply the changes.
Restart MicroShift by running the following command:

$ sudo systemctl restart microshift
Chapter 6. Disabling the LVMS CSI provider or CSI snapshot
You can configure MicroShift to disable the built-in logical volume manager storage (LVMS) Container Storage Interface (CSI) provider or the CSI snapshot capabilities to reduce the use of runtime resources such as RAM, CPU, and storage.
6.1. Disabling deployments that run CSI snapshot implementations
Use the following procedure to disable installation of the CSI implementation pods.
This procedure is for users who are defining the configuration file before installing and running MicroShift. If MicroShift has already started, the CSI snapshot implementation is already running, and you must remove it manually by following the uninstallation instructions.
MicroShift does not delete CSI snapshot implementation pods. You must configure MicroShift to disable installation of the CSI snapshot implementation pods during the startup process.
Procedure
Disable installation of the CSI snapshot controller by entering the optionalCsiComponents value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

# ...
storage: {}
# ...

Accepted values are:

- Not defining optionalCsiComponents.
- Specifying the optionalCsiComponents field with an empty value ([]) or a single empty string element ([""]).
- Specifying optionalCsiComponents with one of the accepted values, which are snapshot-controller or none. A value of none is mutually exclusive with all other values.

Note: If the optionalCsiComponents value is empty or null, MicroShift defaults to deploying snapshot-controller.
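As an illustration of the accepted values above, the following fragment would deploy no optional CSI components. This is a sketch based on the value list, not a verbatim example from the product documentation.

```yaml
# Sketch: /etc/microshift/config.yaml fragment.
storage:
  optionalCsiComponents:
    - none   # mutually exclusive with all other values
```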
After the optionalCsiComponents field is specified with a supported value in the config.yaml file, start MicroShift by running the following command:

$ sudo systemctl start microshift

Note: MicroShift does not redeploy the disabled components after a restart.
6.2. Disabling deployments that run the CSI driver implementations
Use the following procedure to disable installation of the CSI implementation pods. MicroShift does not delete CSI driver implementation pods. You must configure MicroShift to disable installation of the CSI driver implementation pods during the startup process.
This procedure is for defining the configuration file before installing and running MicroShift. If MicroShift is already started, then the CSI driver implementation is running. You must manually remove it by following the uninstallation instructions.
Procedure
Disable installation of the CSI driver by entering the driver value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

# ...
storage:
  driver: "none"
# ...

Valid values are none or lvms.

Note: By default, the driver value is empty or null and LVMS is deployed.

Start MicroShift after the driver field is specified with a supported value in the /etc/microshift/config.yaml file by running the following command:

$ sudo systemctl enable --now microshift

Note: MicroShift does not redeploy the disabled components after a restart operation.
Chapter 7. Checking greenboot scripts status
To deploy applications or make other changes through the MicroShift API with tools other than kustomize manifests, you must wait until the greenboot health checks have finished. This ensures that your changes are not lost if greenboot rolls your rpm-ostree system back to an earlier state.
The greenboot-healthcheck service runs one time and then exits. After greenboot has exited and the system is in a healthy state, you can proceed with configuration changes and deployments.
7.1. Checking the status of greenboot health checks
Check the status of greenboot health checks before making changes to the system and while troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running.
Procedure
To see a report of health check status, use the following command:

$ systemctl show --property=SubState --value greenboot-healthcheck.service

- An output of start means that greenboot checks are still running.
- An output of exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state.
- An output of failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system.

To see a report showing the numerical exit code of the service, where 0 means success and non-zero values mean a failure occurred, use the following command:

$ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service

To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS, use the following command:

$ cat /run/motd.d/boot-status
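The checks above can be combined into a small wait loop to run before applying configuration changes. This is a sketch, not part of the product: the SYSTEMCTL variable is introduced here only so the logic can be exercised without systemd; on a real MicroShift host it resolves to the systemctl binary.

```shell
# Sketch: block until greenboot health checks finish, then return pass/fail.
SYSTEMCTL="${SYSTEMCTL:-systemctl}"

wait_for_greenboot() {
    # SubState "start" means the health checks are still running.
    while [ "$("$SYSTEMCTL" show --property=SubState --value greenboot-healthcheck.service)" = "start" ]; do
        sleep 5
    done
    # ExecMainStatus is 0 on success and non-zero when a check failed.
    [ "$("$SYSTEMCTL" show --property=ExecMainStatus --value greenboot-healthcheck.service)" = "0" ]
}
```

Calling wait_for_greenboot before deployments mirrors the manual checks described in this section.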
Chapter 8. Node access with kubeconfig files
Learn how kubeconfig files are used with MicroShift deployments. CLI tools use kubeconfig files to communicate with the API server of a node. These files provide node details, IP addresses, and other information needed for authentication.
8.1. Kubeconfig files for configuring node access
The two categories of kubeconfig files used in MicroShift are local access and remote access. Every time MicroShift starts, a set of kubeconfig files for local and remote access to the API server are generated. These files are generated in the /var/lib/microshift/resources/kubeadmin/ directory by using preexisting configuration information.
Each access type requires a different authentication certificate signed by different Certificate Authorities (CAs). The generation of multiple kubeconfig files accommodates this need.
You can use the appropriate kubeconfig file for the access type needed in each case to provide authentication details. The contents of MicroShift kubeconfig files are determined by either default built-in values or a config.yaml file.
A kubeconfig file must exist for the cluster to be accessible. The values are applied from built-in default values or a customized config.yaml file.
Example contents of the kubeconfig files
8.2. Local access kubeconfig file
The local access kubeconfig file is written to /var/lib/microshift/resources/kubeadmin/kubeconfig. This kubeconfig file provides access to the API server by using localhost. Choose this file when you are connecting the node locally.
Example contents of kubeconfig for local access
clusters:
- cluster:
certificate-authority-data: <base64 CA>
server: https://localhost:6443
The localhost kubeconfig file can only be used from a client connecting to the API server from the same host. The certificates in the file do not work for remote connections.
8.2.1. Accessing the MicroShift node locally
Use the following procedure to access the MicroShift node locally by using a kubeconfig file.
Prerequisites
- You installed the OpenShift CLI (oc).
Procedure
Optional: To create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one, run the following command:

$ mkdir -p ~/.kube/

Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command:

$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

Update the permissions on your ~/.kube/config file by running the following command:

$ chmod go-r ~/.kube/config
Verification
Verify that MicroShift is running by entering the following command:

$ oc get pods -A

Example output

Note: This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.
8.3. Remote access kubeconfig files
When a MicroShift node connects to the API server from an external source, a certificate with all of the alternative names in the SAN field is used for validation. MicroShift generates a default kubeconfig for external access by using the hostname value. The defaults are set in the <node.hostnameOverride>, <node.nodeIP> and api.<dns.baseDomain> parameter values of the default kubeconfig file.
The /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig file uses the hostname of the machine, or node.hostnameOverride if that option is set, to reach the API server. The CA of the kubeconfig file is able to validate certificates when accessed externally.
Example contents of a default kubeconfig file for remote access
clusters:
- cluster:
certificate-authority-data: <base64 CA>
server: https://microshift-rhel9:6443
8.3.1. Remote access customization
Multiple remote access kubeconfig files can be generated for accessing the node with different IP addresses or host names. An additional kubeconfig file is generated for each entry in the apiServer.subjectAltNames parameter. You can copy remote access kubeconfig files from the host while IP connectivity is available and then use them to access the API server from other workstations.
8.4. Generating additional kubeconfig files for remote access
You can generate additional kubeconfig files to use if you need more host names or IP addresses than the default remote access file provides.
You must restart MicroShift for configuration changes to be implemented.
Prerequisites
- You have created a config.yaml file for MicroShift.
Procedure
Optional: Show the contents of the config.yaml file by running the following command:

$ cat /etc/microshift/config.yaml

Optional: Show the contents of the remote-access kubeconfig file by running the following command:

$ cat /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig

Important: Additional remote access kubeconfig files must include one of the server names listed in the Red Hat build of MicroShift config.yaml file. Additional kubeconfig files must also use the same CA for validation.

To generate additional kubeconfig files for additional DNS name SANs or external IP addresses, add the entries you need to the apiServer.subjectAltNames field. In the following example, the DNS name used is alt-name-1 and the IP address is 1.2.3.4.

Example config.yaml with additional authentication values

apiServer:
  subjectAltNames:
    - alt-name-1
    - 1.2.3.4

Restart MicroShift to apply configuration changes and auto-generate the kubeconfig files you need by running the following command:

$ sudo systemctl restart microshift

To check the contents of additional remote-access kubeconfig files, insert the name or IP address as listed in the config.yaml file into the cat command. For example, alt-name-1 is used in the following example command:

$ cat /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig

Choose the kubeconfig file that contains the SAN or IP address you want to use to connect to your node. In this example, the kubeconfig containing alt-name-1 in the cluster.server field is the correct file.

Example contents of an additional kubeconfig file

clusters:
  - cluster:
      certificate-authority-data: <base64 CA>
      server: https://alt-name-1:6443

The /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig file values are from the apiServer.subjectAltNames configuration values.
All of these parameters are included as common names (CN) and subject alternative names (SAN) in the external serving certificates for the API server.
8.4.1. Opening the firewall for remote access to the MicroShift node
Use the following procedure to open the firewall so that a remote user can access the MicroShift service. You must complete this procedure before a workstation user can access the node remotely.
For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation.
Prerequisites
- You installed the OpenShift CLI (oc).
- Your account has cluster administration privileges.
Procedure
As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server (6443/tcp) by running the following command:

[user@microshift]$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload
Verification
As user@microshift, verify that MicroShift is running by entering the following command:

$ oc get pods -A

Example output

Note: This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.
8.4.2. Accessing the MicroShift node remotely
Use the following procedure to access the MicroShift service from a remote location by using a kubeconfig file.
The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation uses to log in to the MicroShift host.
Prerequisites
- You installed the OpenShift CLI (oc).
- user@microshift has opened the firewall from the local host.
- You generated additional kubeconfig files.
Procedure
As user@workstation, create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one by running the following command:

[user@workstation]$ mkdir -p ~/.kube/

As user@workstation, set a variable for the hostname of your MicroShift host by running the following command:

[user@workstation]$ MICROSHIFT_MACHINE=<microshift_hostname>

Replace <microshift_hostname> with either the name or the IP address of the host running MicroShift.

As user@workstation, copy the generated kubeconfig file that contains the hostname or IP address you want to connect to from the RHEL machine running MicroShift to your local machine by running the following command:

[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config

Replace <user> with your SSH login credentials.

As user@workstation, update the permissions on your ~/.kube/config file by running the following command:

$ chmod go-r ~/.kube/config
Verification
As user@workstation, verify that MicroShift is running by entering the following command:

$ oc get pods -A

Example output

Note: This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.
Chapter 9. Using the Generic Device Plugin
The Generic Device Plugin for MicroShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Generic Device Plugin (GDP) for MicroShift enables your containerized applications to securely access physical host devices, such as serial ports, video cameras, or sound cards directly from within Kubernetes pods. By using GDP, you can extend capabilities of MicroShift to support applications that require direct hardware interaction, such as Internet of Things (IoT) applications.
9.1. Understanding the Generic Device Plugin
The Generic Device Plugin (GDP) is a Kubernetes device plugin that enables applications running in pods to access host devices such as serial ports, cameras, and sound cards securely. This capability is especially important for edge and IoT environments where direct hardware interaction is a common requirement. The GDP integrates with the kubelet to advertise available devices to the node and facilitate their allocation to pods without requiring elevated privileges within the container itself.
The GDP is designed to handle devices that are initialized and managed by the operating system and do not require any special initialization procedures or drivers for a pod to use them.
Here are examples of generic devices that are suitable for the GDP:
- Serial ports, for example, /dev/ttyUSB* and /dev/ttyACM*.
- Video cameras, for example, /dev/video0.
- Sound devices, for example, /dev/snd and /dev/snd/controlC0.
- USB devices specified by Vendor ID and Product ID, or, optionally, by the device serial number.
The following specialized devices are not suitable for the GDP:
- Devices that require specific initialization procedures beyond standard operating system management.
- Specialized hardware that needs additional drivers or kernel modules. Examples of this specialized hardware include GPUs and FPGAs. These types of devices typically require their own specialized device plugins.
9.2. Limitations and considerations for the Generic Device Plugin
Although the Generic Device Plugin (GDP) provides powerful capabilities for accessing host devices in MicroShift, it is important to understand its limitations and current status.
9.2.1. Devices not suited for the Generic Device Plugin
The GDP is designed for devices that are managed directly by the operating system and do not require special setup procedures. Devices that are not well-suited for the Generic Device Plugin include:
- Complex hardware requiring specialized drivers such as GPUs (graphics processing units) or FPGAs (field-programmable gate arrays). These types of hardware typically require dedicated device plugins that can perform unique initialization procedures, memory management, or queue resets before a pod can use them.
- Devices with specific vendor-supplied software stacks. Devices that require a complex software stack or proprietary APIs beyond direct file system access might require a specialized plugin.
9.2.2. Device identification and logging
When you use glob paths, for example, /dev/ttyUSB*, to expose multiple similar devices, the GDP allocates devices based on availability. However, if your application needs to connect to an exactly specified physical device, for example, serial device 3 out of 10, using broad glob paths might be insufficient. In such cases, configure individual device entries in the config.yaml file using more stable and unique identifiers such as:
- Specific device paths, for example, /dev/video0.
- Symbolic links provided by the operating system, for example, /dev/serial/by-id/ or /dev/serial/by-path/.
- USB vendor ID, product ID, and serial number combinations for precise USB device targeting.
9.2.3. Performance considerations
The count parameter in the config.yaml file enables a device group to be scheduled multiple times concurrently. While there are no explicit limits set within the GDP for the count (for example, 1000 for /dev/fuse), the actual performance depends on the host system’s capabilities and the nature of the device itself. Running a very high number of concurrent processes that access the same device might affect performance.
9.3. Configuring the Generic Device Plugin
The Generic Device Plugin for MicroShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Generic Device Plugin (GDP) is disabled by default in MicroShift. To use the GDP, you must enable it and specify which host devices your Kubernetes applications can access. To enable the GDP, you must modify the MicroShift config.yaml file or create a configuration snippet file. For example, /etc/microshift/config.d/10-gdp.yaml.
Prerequisites
- You installed MicroShift.
- You created a custom config.yaml file in the /etc/microshift directory.
- You installed the OpenShift CLI (oc).
- You have sudo privileges on the MicroShift host.
- You have identified the specific host devices that you want to expose to your MicroShift node. For example, /dev/video0, /dev/ttyUSB*, or USB Vendor/Product IDs.
Procedure
- From your CLI using sudo privileges, open /etc/microshift/config.yaml in a text editor.
- Locate the genericDevicePlugin section. If it is not present, add it.
- Set the status parameter to Enabled and define the devices that should be exposed. Each device definition needs a name and one or more groups. Each group can specify devices by using paths, for file-based devices, including glob patterns, or usbs, for USB devices using Vendor/Product IDs. You cannot mix paths and usbs within the same device group.

GDP fields with default values
1. Exposes all the USB serial devices that are matched by this glob.
2. Exposes all the ACM serial devices that are matched by this glob.
3. For example, the file path for a fuse device.
4. For example, the name of the device.
5. Exposes a specific USB device by Vendor ID and Product ID.
6. For example, the Product ID for a CH340 serial converter.
7. For example, the Vendor ID for a CH340 serial converter.
8. Default domain for the GDP.
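Reading the callouts together, an enabled configuration might look like the following sketch. The device names, the count value, and the CH340 USB IDs are illustrative assumptions, not product defaults; compare with the output of microshift show-config on your system.

```yaml
# Sketch only: names, count, and USB IDs are illustrative assumptions.
genericDevicePlugin:
  status: Enabled
  domain: device.microshift.io          # default domain for the GDP
  devices:
    - name: serial                      # name of the device group
      groups:
        - paths:
            - path: /dev/ttyUSB*        # USB serial devices matched by this glob
            - path: /dev/ttyACM*        # ACM serial devices matched by this glob
    - name: fuse
      groups:
        - count: 10                     # allow concurrent allocation of the device
          paths:
            - path: /dev/fuse           # file path for a fuse device
    - name: ch340
      groups:
        - usbs:                         # a specific USB device by Vendor ID and Product ID
            - vendor: "1a86"            # assumed Vendor ID for a CH340 serial converter
              product: "7523"           # assumed Product ID for a CH340 serial converter
```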
Important-
The output of the
microshift show-configparameter might include pre-configured default paths for serial devices even if you have not explicitly configured them inconfig.yaml. These paths represent the default discovery settings if the Generic Device Plugin is enabled without specific user configuration. -
For consistency and precise device targeting, especially when dealing with multiple similar devices, consider using stable device paths like
/dev/serial/by-id/or specific USB Vendor, Product, or Serial IDs instead of broad glob patterns like/dev/ttyUSB*. -
The
countparameter in a device group allows a single device, or a set of devices matched by a glob, to be allocated multiple times concurrently to different pods. If omitted,countdefaults to1.
- Save the config.yaml file.
- Restart the MicroShift service to apply the changes:

$ sudo systemctl restart microshift

Allow some time for MicroShift to restart and for the GDP to register its devices with the kubelet.
Verification
You can check the available devices in your node by running the following command:
$ oc describe node <microshift_node_name> | grep "device.microshift.io"

Replace `<microshift_node_name>` with your node name.
Depending on your configuration, expect output that indicates that the devices are now discoverable and schedulable within your MicroShift node.
9.4. Deploying applications that use generic devices
After the Generic Device Plugin (GDP) is configured and enabled in MicroShift, you can deploy Kubernetes workloads, such as pods, deployments, or StatefulSets, that request access to the host devices that you have exposed. Devices are made available inside the container without requiring the pod to run with elevated privileges.
Prerequisites
- You installed MicroShift.
- You enabled and configured GDP.
- You installed the OpenShift CLI (`oc`).
Procedure
Define the device request in your `Pod` specification:

- 1
- Replace with your container image.
- 2
- Replace with the command for your application.
- 3
- For example, how your application might use the device.
- 4
- The resource name must follow the pattern `device.microshift.io/<device_name>`, where `<device_name>` matches the `name` that you specified in your configuration file.
- A request for one instance of the `video` device.
- Define and configure a security context with the least privilege required, to ensure that the container has only necessary permissions, such as access to the device file, and to restrict other capabilities for the container.
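Putting the callouts above together, a pod that requests one generic device might look like the following sketch; the image, command, and device name are placeholders, not MicroShift defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  containers:
    - name: app
      image: quay.io/example/app:latest              # placeholder container image
      command: ["/bin/sh", "-c", "cat /dev/video0"]  # placeholder command that uses the device
      resources:
        limits:
          device.microshift.io/video: "1"            # request one instance of the video device
      securityContext:                               # least-privilege settings
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```

Because the device is injected by the plugin, the pod does not need `privileged: true` to see the device node.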
Deploy the Kubernetes workload by applying the manifest to the MicroShift node by running the following command:
$ oc apply -f <your_workload_manifest.yaml>

Replace `<your_workload_manifest.yaml>` with the name of your workload manifest file.
Results
After the pod is running, the specified host device is available at its original path, or at `mountPath` if specified, inside the container. Your application can then interact with it as if it were a local device.
For example, if you requested `device.microshift.io/serial`, which maps to `/dev/ttyUSB*`, your application might find the device at `/dev/ttyUSB0` or a similar path inside the container.
Verification
Verify device access by running the following command inside the running pod:
$ oc exec -it <pod_name> -- ls -l /dev/video0

Replace `<pod_name>` with the name of the pod.
9.5. Generic Device Plugin configuration reference
This section provides a detailed reference for the configuration parameters available for the Generic Device Plugin within the MicroShift config.yaml file.
| Parameter | Description |
|---|---|
|
|
The |
|
|
Is a subgroup that lists the device definitions to be exposed by the plugin. Each |
|
| Lists device groups. Devices within a group comprise a pool of devices under a common name. When you request a device from that pool, you can receive a device from different defined paths. |
|
|
Specifies how many times this group of devices can be mounted concurrently. If unspecified, |
|
|
Lists the host device file paths. Paths can be glob patterns, for example, |
|
|
Specifies up to how many times this device can be used in the group concurrently when other devices in the group yield more matches. For example, if one path in the group matches 5 devices and another matches 1 device but has a limit of 10, then the group provides 5 pairs of devices. When unspecified, the limit defaults to |
|
|
The file path at which the host device should be mounted within the container. When unspecified, |
|
|
The file path of a device on the host, for example, |
|
|
The file-system permissions given to the mounted device. Applies only to mounts of type
*
*
*
When unspecified, |
|
|
Specifies whether the path should be mounted read-only. The values are |
|
|
Describes what type of file-system node this |
|
|
Lists the USB specifications that this device group consists of. The vendor and product IDs must always match. The serial ID must match if provided, and is skipped if the ID is empty. The |
|
|
The USB Product ID of the device to match on, for example, |
|
| The serial number of the device to match on. A USB device must match exactly on all the given attributes to pass. |
|
|
The USB Vendor ID of the device to match on, for example, |
|
|
A unique string representing the kind of device this specification describes, for example, |
|
|
|
|
|
|
9.5.1. Troubleshooting configuration issues
If you encounter errors such as `invalid configuration: failed to parse device` or `cannot define both path and usbs at the same time`, it means that you have incorrectly mixed `paths` and `usbs` fields within the same `groups` entry for a device. Each group must exclusively use either `paths` or `usbs` to define its devices.
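For example, the first group below is invalid because it mixes both mechanisms, while the second sketch separates them into distinct entries (device names and IDs are illustrative):

```yaml
# Invalid: paths and usbs defined in the same group
devices:
  - name: mixed
    groups:
      - paths:
          - path: /dev/ttyUSB0
        usbs:
          - vendor: "1a86"
            product: "7523"

# Valid: one group per mechanism
devices:
  - name: by-path
    groups:
      - paths:
          - path: /dev/ttyUSB0
  - name: by-usb
    groups:
      - usbs:
          - vendor: "1a86"
            product: "7523"
```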
9.5.2. Additional resources
Chapter 10. Configuring MicroShift authentication and security
10.1. Configuring custom certificate authorities
Allow and encrypt connections with external clients by replacing the MicroShift default API server certificate with a custom server certificate issued by a certificate authority (CA).
10.1.1. Using custom certificate authorities for the MicroShift API server
When MicroShift starts, an internal MicroShift node certificate authority (CA) issues the default API server certificate. By default, clients outside of the node cannot verify the MicroShift-issued API server certificate. You can grant secure access and encrypt connections between the MicroShift API server and external clients. Replace the default certificate with a custom server certificate issued externally by a CA that clients trust.
The following steps illustrate the workflow for customizing the API server certificate configuration in MicroShift:
- Copy the certificates and keys to the preferred directory in the host operating system. Ensure that the files are accessible only with root access.
Update the MicroShift configuration for each custom CA by specifying the certificate names and new fully qualified domain name (FQDN) in the MicroShift `/etc/microshift/config.yaml` configuration file.

Each certificate configuration can contain the following values:
- The certificate file location is a required value.
A single common name containing the API server DNS and IP address or IP address range.
Tip

In most cases, MicroShift generates a new `kubeconfig` file for your custom CA that includes the IP address or range that you specify. The exception is when you specify wildcards for the IP address. In this case, MicroShift generates a `kubeconfig` file with the public IP address of the server. To use wildcards, you must update the `kubeconfig` file with your specific details.

- Multiple Subject Alternative Names (SANs) containing the API server DNS and IP addresses or a wildcard certificate.
- You can list additional DNS names for each certificate.
- After the MicroShift service restarts, you must copy the generated `kubeconfig` files to the client. Configure additional CAs on the client system. For example, you can update CA bundles in the Red Hat Enterprise Linux (RHEL) truststore.
Important

Custom server certificates must be validated against CA data configured in the trust root of the host operating system. For more information, read the following documentation:
The certificates and keys are read from the specified file location on the host. You can test and validate configuration from the client.
- If any validation fails, MicroShift skips the custom configuration and uses the default certificate to start. The priority is to continue the service uninterrupted. MicroShift logs errors when the service starts. Common errors include expired certificates, missing files, or wrong IP addresses.
- External server certificates are not automatically renewed. You must manually rotate your external certificates.
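As a hedged illustration of the externally issued certificate material that this workflow expects, the following sketch creates a throwaway CA and a server certificate with a Subject Alternative Name by using `openssl`. The `~/certs` directory, file names, and the `api.example.com` FQDN are assumptions, not MicroShift defaults:

```shell
# Create a scratch directory for the illustrative CA and server certificate.
mkdir -p ~/certs && cd ~/certs

# 1. Generate the CA key and a self-signed CA certificate.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=my-custom-ca" -out ca.crt

# 2. Generate the server key and a CSR for the API server FQDN (assumed name).
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=api.example.com" -out server.csr

# 3. Sign the CSR with the CA, adding the FQDN as a Subject Alternative Name.
printf 'subjectAltName=DNS:api.example.com\n' > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extfile san.ext -out server.crt
```

The resulting `server.crt` and `server.key` are what you would reference from the MicroShift configuration, and `ca.crt` is what clients would add to their truststore.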
10.1.2. Configuring custom certificate authorities
To configure externally generated certificates and domain names by using custom certificate authorities (CAs), add them to the MicroShift /etc/microshift/config.yaml configuration file. You must also configure the host operating system trust root.
Externally generated kubeconfig files are created in the /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig directory. If you need to use localhost in addition to externally generated configurations, retain the original kubeconfig file in its default location. The localhost kubeconfig file uses the self-signed certificate authority.
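The configuration stanza that the following procedure adds takes roughly this shape; this is a sketch, and the file paths and DNS name here are placeholders:

```yaml
apiServer:
  namedCertificates:
    - certPath: /home/user/certs/api.crt   # placeholder path to the server certificate
      keyPath: /home/user/certs/api.key    # placeholder path to the private key
      names:
        - api.example.com                  # placeholder FQDN served by this certificate
```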
Prerequisites
- The OpenShift CLI (`oc`) is installed.
- You have root access to the node.
- The certificate authority has issued the custom certificates.
- A MicroShift `/etc/microshift/config.yaml` configuration file exists.
Procedure
- Copy the custom certificates you want to add to the trust root of the MicroShift host. Ensure that the certificate and private keys are only accessible to MicroShift.
For each custom CA that you need, add an `apiServer` section called `namedCertificates` to the `/etc/microshift/config.yaml` MicroShift configuration file by using the following example:

Restart MicroShift to apply the certificates by running the following command:

$ sudo systemctl restart microshift
Wait a few minutes for the system to restart and apply the custom server. New
kubeconfigfiles are generated in the/var/lib/microshift/resources/kubeadmin/directory. -
Copy the
kubeconfigfiles to the client. If you specified wildcards for the IP address, update thekubeconfigto remove the public IP address of the server and replace that IP address with the specific wildcard range you want to use. From the client, use the following steps:
Specify the `kubeconfig` to use by running the following command:

$ export KUBECONFIG=~/custom-kubeconfigs/kubeconfig

Use the location of the copied `kubeconfig` file as the path.
Check that the certificates are applied by using the following command:
$ oc --certificate-authority ~/certs/ca.ca get node

Example output

NAME                             STATUS   ROLES                         AGE   VERSION
dhcp-1-235-195.arm.example.com   Ready    control-plane,master,worker   76m   v1.33.4

Add the new CA file to the `$KUBECONFIG` environment variable by running the following command:
$ oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt

Verify that the new `kubeconfig` file contains the new CA by running the following command:

$ oc config view --flatten

Example externally generated `kubeconfig` file

- 1
- The `certificate-authority-data` section is not present in externally generated `kubeconfig` files. It is added with the `oc config set` command used previously.
Verify the `subject` and `issuer` of your customized API server certificate authority by running the following command:

$ curl --cacert /tmp/caCert.pem https://${fqdn_name}:6443/healthz -v

Example output

Important

Either replace the `certificate-authority-data` in the generated `kubeconfig` file with the new `rootCA`, or add the `certificate-authority-data` to the trust root of the operating system. Do not use both methods.

Configure additional CAs in the trust root of the operating system. For example, update the system-wide truststore on the RHEL client system.
- Updating the certificate bundle with the configuration that contains the CA is recommended.
- If you do not want to configure your certificate bundles, you can alternatively use the `oc login localhost:8443 --certificate-authority=/path/to/cert.crt` command, but this method is not preferred.
10.1.3. Custom certificates reserved name values
The following certificate problems cause MicroShift to ignore certificates dynamically and log an error:
- The certificate files do not exist on the disk or are not readable.
- The certificate is not parsable.
- The certificate overrides the internal certificates' IP addresses or DNS names in a `SubjectAlternativeNames` (SAN) field. Do not use a reserved name when configuring SANs.
| Address | Type | Comment |
|---|---|---|
|
| DNS | |
|
| IP Address | |
|
| IP Address | Node Network |
|
| IP Address | Service Network |
| 169.254.169.2/29 | IP Address | br-ex Network |
|
| DNS | |
|
| DNS | |
|
| DNS |
10.1.4. Troubleshooting custom certificates
To troubleshoot the implementation of custom certificates, you can take the following steps.
Procedure
From MicroShift, ensure that the certificate is served by the `kube-apiserver` and verify that the certificate path is appended to the `--tls-sni-cert-key` flag by running the following command:

$ journalctl -u microshift -b0 | grep tls-sni-cert-key

Example output
Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099 45313 flags.go:64] FLAG: --tls-sni-cert-key="[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key
From the client, ensure that the `kube-apiserver` is serving the correct certificate by running the following command:

$ openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 "Alternative\|CN"
10.1.5. Cleaning up and recreating the custom certificates
To stop the MicroShift services, clean up the custom certificates, and recreate the custom certificates, use the following steps.
Procedure
Stop the MicroShift services and clean up the custom certificates by running the following command:
$ sudo microshift-cleanup-data --cert

Example output
Stopping MicroShift services
Removing MicroShift certificates
MicroShift service was stopped
Cleanup succeeded

Restart the MicroShift services to recreate the custom certificates by running the following command:
$ sudo systemctl start microshift
10.1.6. Additional resources
10.2. Configuring TLS security profiles
Use transport layer security (TLS) protocols to help prevent known insecure protocols, ciphers, or algorithms from accessing the applications you run on MicroShift.
10.2.1. Using TLS with MicroShift
Transport layer security (TLS) profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. Using TLS helps to ensure that MicroShift applications use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. You can use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift.
MicroShift API server cipher suites apply automatically to the following internal control plane components:
- API server
- Kubelet
- Kube controller manager
- Kube scheduler
- etcd
- Route controller manager
The API server uses the configured minimum TLS version and the associated cipher suites. If you leave the cipher suites parameter empty, the defaults for the configured minimum version are used automatically.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
10.2.2. Configuring TLS for MicroShift
You can choose to use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift for system hardening.
Prerequisites
- You have access to the node as a root user.
- MicroShift either has not started for the first time or is stopped.
- The OpenShift CLI (`oc`) is installed.
- The certificate authority (CA) has issued the custom certificates.
Procedure
- Make a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, renaming it `config.yaml`. Keep the new MicroShift `config.yaml` in the `/etc/microshift/` directory. Your `config.yaml` file is read every time the MicroShift service starts.

Note

After you create it, the `config.yaml` file takes precedence over built-in settings.

- Optional: Use a configuration snippet if you are using an existing MicroShift YAML. See "Using configuration snippets" in the Additional resources section for more information.
Replace the default values in the `tls` section of the MicroShift YAML with your valid values.

Example TLS 1.2 configuration

- 1
- Defaults to the suites of the configured `minVersion`. If `minVersion` is not configured, the default value is TLS 1.2.
- 2
- Specify the cipher suites you want to use from the list of supported cipher suites. If you do not configure this list, all of the supported cipher suites are used. All clients connecting to the API server must support the configured cipher suites or the connections fail during the TLS handshake phase. Be sure to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts.
- 3
- Specify `VersionTLS12` or `VersionTLS13`.
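The callouts above annotate a `tls` section like the following sketch; the two suites shown are illustrative picks from the supported list:

```yaml
apiServer:
  tls:
    cipherSuites:                                  # suites that connecting clients must support
      - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
      - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    minVersion: VersionTLS12                       # VersionTLS12 or VersionTLS13
```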
Important

When you choose TLS 1.3 as the minimum TLS version, only the default MicroShift cipher suites can be used. Additional cipher suites are not configurable. If other cipher suites are configured for use with TLS 1.3, those suites are ignored and overwritten by the MicroShift defaults.
Complete any other additional configurations that you require, then restart MicroShift by running the following command:
$ sudo systemctl restart microshift
10.2.2.1. Default cipher suites
Default cipher suites are included with MicroShift for both TLS 1.2 and TLS 1.3. The cipher suites for TLS 1.3 cannot be customized.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
10.3. Configuring audit logging policies
You can control MicroShift audit log file rotation and retention by using configuration values.
10.3.1. About setting limits on audit log files
Controlling the rotation and retention of the MicroShift audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or node workloads, potentially causing the device to stop working. Setting audit log policies can help ensure that critical processing space is continually available.
The values you set to limit MicroShift audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization.
You can set fields in combination to define a maximum storage limit for retained logs. For example:
- Set both `maxFileSize` and `maxFiles` to create a log storage upper limit.
- Set a `maxFileAge` value to automatically delete files older than the timestamp in the file name, regardless of the `maxFiles` value.
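For example, the following hedged sketch caps retained logs at roughly 500 MB (5 rotated files of 100 MB each) plus the live log, and ages out anything older than 30 days; the numbers are illustrative:

```yaml
apiServer:
  auditLog:
    maxFileSize: 100   # rotate the live log at 100 MB
    maxFiles: 5        # keep at most 5 rotated files: about a 500 MB ceiling
    maxFileAge: 30     # delete rotated files older than 30 days
```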
10.3.1.1. Default audit log values
MicroShift includes the following default audit log rotation values:
| Audit log parameter | Default setting | Definition |
|---|---|---|
| `maxFileAge` | `0` | How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured. |
| `maxFiles` | `10` | The total number of log files retained. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured. |
| `maxFileSize` | `200` | By default, when the `audit.log` file reaches the `maxFileSize` limit, it is rotated and MicroShift begins writing to a new `audit.log` file. This value, in megabytes, can be configured. |
| `profile` | `Default` | The `Default` profile setting logs only metadata for read and write requests; request bodies are not logged except for OAuth access token requests. |
The maximum default storage usage for audit log retention is 2000 MB if there are 10 or fewer files.
If you do not specify a value for a field, the default value is used. If you remove a previously set field value, the default value is restored after the next MicroShift service restart.
You must configure audit log retention and rotation in Red Hat Enterprise Linux (RHEL) for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the RHEL /var/log/audit/audit.log file to maintain MicroShift node health.
Additional resources
- Configuring auditd for a secure environment
- Understanding Audit log files
- How to use logrotate utility to rotate log files (Solutions, dated 7 August 2024)
10.3.2. About audit log policy profiles
Audit log profiles define how to log requests that come to the OpenShift API server and the Kubernetes API server.
MicroShift supports the following predefined audit policy profiles:
| Profile | Description |
|---|---|
| `Default` | Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. |
| `WriteRequestBodies` | In addition to logging metadata for all requests, logs request bodies for every write request to the API servers (`create`, `update`, `patch`, `delete`, `deletecollection`). This profile has more resource overhead than the `Default` profile. |
| `AllRequestBodies` | In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers (`get`, `list`, `create`, `update`, `patch`, `delete`, `deletecollection`). This profile has the most resource overhead. |
| `None` | No requests are logged, including OAuth access token requests and OAuth authorize token requests. Warning: Do not disable audit logging by using the `None` profile unless you are aware of the risks of not logging data that can be reviewed later. |
- Sensitive resources, such as `Secret`, `Route`, and `OAuthClient` objects, are only logged at the metadata level.
By default, MicroShift uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage such as CPU, memory, and I/O.
10.3.3. Configuring audit log values
You can configure audit log settings by using the MicroShift service configuration file.
Procedure
- Make a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, renaming it `config.yaml`. Keep the new MicroShift `config.yaml` you create in the `/etc/microshift/` directory. The new `config.yaml` is read whenever the MicroShift service starts. After you create it, the `config.yaml` file takes precedence over built-in settings.

Replace the default values in the
`auditLog` section of the YAML with your desired valid values.

Example default `auditLog` configuration

- 1
- Specifies the maximum time in days that log files are kept. Files older than this limit are deleted. In this example, after a log file is more than 7 days old, it is deleted. The files are deleted regardless of whether or not the live log has reached the maximum file size specified in the `maxFileSize` field. File age is determined by the timestamp written in the name of the rotated log file, for example, `audit-2024-05-16T17-03-59.994.log`. When the value is `0`, the limit is disabled.
- 2
- The maximum audit log file size in megabytes. In this example, the file is rotated as soon as the live log reaches the 200 MB limit. When the value is set to `0`, the limit is disabled.
- 3
- The maximum number of rotated audit log files retained. After the limit is reached, the log files are deleted in order from oldest to newest. In this example, the value `1` results in only 1 file of size `maxFileSize` being retained in addition to the current active log. When the value is set to `0`, the limit is disabled.
- 4
- Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the `Default` profile is used.
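The callouts above describe an `auditLog` section like this sketch:

```yaml
apiServer:
  auditLog:
    maxFileAge: 7      # 1
    maxFileSize: 200   # 2
    maxFiles: 1        # 3
    profile: Default   # 4
```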
Optional: To specify a new directory for logs, you can stop MicroShift, and then move the `/var/log/kube-apiserver` directory to your desired location:

Stop MicroShift by running the following command:
$ sudo systemctl stop microshift

Move the
`/var/log/kube-apiserver` directory to your desired location by running the following command:

$ sudo mv /var/log/kube-apiserver <~/kube-apiserver>

Replace `<~/kube-apiserver>` with the path to the directory that you want to use.
If you specified a new directory for logs, create a symlink to your custom directory at `/var/log/kube-apiserver` by running the following command:

$ sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver

Replace `<~/kube-apiserver>` with the path to the directory that you want to use. This enables the collection of logs in sos reports.
If you are configuring audit log policies on a running instance, restart MicroShift by entering the following command:
$ sudo systemctl restart microshift
10.3.4. Troubleshooting audit log configuration
Use the following steps to troubleshoot custom audit log settings and file locations.
Procedure
Check the current values that are configured by running the following command:
$ sudo microshift show-config --mode effective

Example output
auditLog:
  maxFileSize: 200
  maxFiles: 1
  maxFileAge: 7
  profile: AllRequestBodies
`audit.log` file permissions by running the following command:

$ sudo ls -ltrh /var/log/kube-apiserver/audit.log

Example output
-rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log

List the contents of the current log directory by running the following command:
$ sudo ls -ltrh /var/log/kube-apiserver/

Example output
total 6.0M
-rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log
-rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log
-rw-------. 1 root root 962K Mar 12 10:57 audit.log
10.4. Verifying container signatures for supply chain security
You can enhance supply chain security by using the sigstore signing methodology.
sigstore support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
10.4.1. Understanding how to use sigstore to verify container signatures
You can configure the container runtime used by MicroShift to verify image integrity by using the sigstore signing methodology. With the sigstore project, developers can digitally sign what they build, creating a safer chain of custody that traces software back to the source. Administrators can then verify signatures and monitor workflows at scale. By using sigstore, you can store signatures in the same registry as the build images.
- For user-specific images, you must update the configuration file to point to the appropriate public key, or disable signature verification for those image sources.
- For disconnected or offline configurations, you must embed the public key contents into the operating system image.
10.4.2. Verifying container signatures using sigstore
Verify container signatures for MicroShift by configuring the container runtime to use sigstore. Signature verification uses the public key that corresponds to the Red Hat key pair used to sign the images. To use sigstore, edit the default /etc/containers/policy.json file that is installed as part of the container runtime package.
You can access the Red Hat public keys from the Red Hat Customer Portal.
You must use release key 3 to verify MicroShift container signatures.
Prerequisites
- You have admin access to the MicroShift host.
- You installed MicroShift.
Procedure
Download the relevant public key and save it as /etc/containers/RedHat_ReleaseKey3.pub by running the following command:
$ sudo curl -sL https://access.redhat.com/security/data/63405576.txt -o /etc/containers/RedHat_ReleaseKey3.pub
To configure the container runtime to verify images from Red Hat sources, edit the /etc/containers/policy.json file to contain the following configuration:
Example policy JSON file
Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage, by editing the /etc/containers/registries.d/registry.redhat.io.yaml file to contain the following configuration:
$ cat /etc/containers/registries.d/registry.redhat.io.yaml
docker:
  registry.redhat.io:
    use-sigstore-attachments: true
Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage, by editing the /etc/containers/registries.d/quay.io.yaml file to contain the following configuration:
$ cat /etc/containers/registries.d/quay.io.yaml
docker:
  quay.io/openshift-release-dev:
    use-sigstore-attachments: true
- Create user-specific registry configuration files if your use case requires signature verification for those image sources. You can use the examples here as a starting point and add your own requirements.
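The shipped policy JSON content is not reproduced above. As a sketch only, entries that enforce sigstore verification for the Red Hat registries generally follow the containers-policy.json syntax shown here; verify against the file installed on your system, and note that the key path assumes the location used in the download step:

```json
{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "registry.redhat.io": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
          "signedIdentity": {"type": "matchRepoDigestOrExact"}
        }
      ],
      "quay.io/openshift-release-dev": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
          "signedIdentity": {"type": "matchRepoDigestOrExact"}
        }
      ]
    }
  }
}
```

The sigstoreSigned policy type rejects any image from these sources that does not carry a signature verifiable with the named key.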
Next steps
- If you are using a mirror registry, enable sigstore attachments.
- Otherwise, proceed to wiping the local container storage clean.
10.4.2.1. Enabling sigstore attachments for mirror registries
If you are using mirror registries, you must apply additional configuration to enable sigstore attachments and mirroring by digest.
Prerequisites
- You have admin access to the MicroShift host.
- You completed the steps in "Verifying container signatures using sigstore."
Procedure
Enable sigstore attachments by creating the /etc/containers/registries.d/mirror.registry.local.yaml file:
$ cat /etc/containers/registries.d/<mirror.registry.local.yaml> 1
docker:
  mirror.registry.local:
    use-sigstore-attachments: true
- 1
- Name the <mirror.registry.local.yaml> file after your mirror registry URL.
Enable mirroring by digest by creating the /etc/containers/registries.conf.d/999-microshift-mirror.conf file with the following contents:
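The file uses the containers-registries.conf TOML syntax. The following is a sketch only; the quay.io source location and the mirror.registry.local hostname are illustrative and must match your deployment:

```toml
# Redirect pulls for the source registry to the local mirror, by digest only
[[registry]]
prefix = ""
location = "quay.io/openshift-release-dev"
mirror-by-digest-only = true

[[registry.mirror]]
location = "mirror.registry.local/openshift-release-dev"
```

Mirroring by digest ensures that signed images resolved on the mirror are identical to the images the signatures were created for.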
Next steps
- Wipe the local container storage clean.
10.4.2.2. Wiping local container storage clean
When you apply the configuration to an existing system, you must wipe the local container storage clean. Cleaning the container storage ensures that container images with signatures are properly downloaded.
Prerequisites
- You have administrator access to the MicroShift host.
- You enabled sigstore on your mirror registries.
Procedure
Stop the CRI-O container runtime service and MicroShift by running the following command:
$ sudo systemctl stop crio microshift
Wipe the CRI-O container runtime storage clean by running the following command:
$ sudo crio wipe --force
Restart the CRI-O container runtime service and MicroShift by running the following command:
$ sudo systemctl start crio microshift
Verification
Verify that all pods are running in a healthy state by entering the following command:
$ oc get pods -A
Example output
This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.
Chapter 11. Configuring low latency
11.1. Configuring low latency
You can configure and tune low latency capabilities to improve application performance on edge devices.
11.1.1. Lowering latency in MicroShift applications
Latency is defined as the time from an event to the response to that event. You can use low latency configurations and tuning in a MicroShift node running in an operational or software-defined control system where an edge device has to respond quickly to an external event. You can fully optimize low latency performance by combining MicroShift configurations with operating system tuning and workload partitioning.
The CPU set for management applications, such as the MicroShift service, OVS, CRI-O, and MicroShift pods, together with the isolated cores, must include all online CPUs.
11.1.1.1. Workflow for configuring low latency for MicroShift applications
To configure low latency for applications running in a MicroShift node, you must complete the following tasks:
Required
- Install the microshift-low-latency RPM.
- Configure workload partitioning.
- Configure the kubelet section of the config.yaml file in the /etc/microshift/ directory.
- Configure and activate a TuneD profile. TuneD is a Red Hat Enterprise Linux (RHEL) service that monitors the host system and optimizes performance under certain workloads.
- Restart the host.
Optional
- If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time 9.
11.1.2. Installing the MicroShift low latency RPM package
When you install MicroShift, the low latency RPM package is not installed by default. You can install the low latency RPM as an optional package.
Prerequisites
- You installed the MicroShift RPM.
- You configured workload partitioning for MicroShift.
Procedure
Install the low latency RPM package by running the following command:
$ sudo dnf install -y microshift-low-latency
Tip
Wait to restart the host until after activating your TuneD profile. Restarting the host restarts MicroShift and CRI-O, which applies the low latency manifests and activates the TuneD profile.
Next steps
- Configure the kubelet parameters for low latency in the MicroShift config.yaml file.
- Tune your operating system, for example, configure and activate a TuneD profile.
- Optional: Configure automatic activation of your TuneD profile.
- Optional: If you are using the x86_64 architecture, install Red Hat Enterprise Linux for Real Time (real-time kernel).
- Prepare your workloads for low latency.
11.1.3. Configuring kubelet parameters and values in MicroShift
The first step in enabling low latency in a MicroShift node is to add configurations to the MicroShift config.yaml file.
Prerequisites
- You installed the OpenShift CLI (oc).
- You have root access to the node.
- You made a copy of the provided config.yaml.default file in the /etc/microshift/ directory, and renamed it config.yaml.
Procedure
Add the kubelet configuration to the MicroShift config.yaml file:
Example passthrough kubelet configuration
- 1
- If you change the CPU or memory managers in the kubelet configuration, you must remove the files that cache the previous configuration. Restart the host to remove them automatically, or manually remove the /var/lib/kubelet/cpu_manager_state and /var/lib/kubelet/memory_manager_state files.
- 2
- The name of the policy to use. Valid values are none and static. Requires the CPUManager feature gate to be enabled. Default value is none.
- 3
- A set of key=value pairs for setting extra options that fine-tune the behavior of the CPUManager policies. The default value is null. Requires both the CPUManager and CPUManagerPolicyOptions feature gates to be enabled.
- 4
- The name of the policy used by Memory Manager. Case-sensitive. The default value is none. Requires the MemoryManager feature gate to be enabled.
- 5
- Required. The reservedSystemCPUs value must be the inverse of the offlined CPUs because both values combined must account for all of the CPUs on the system. This parameter is essential to dividing the management and application workloads. Use this parameter to define a static CPU set for the host-level system and Kubernetes daemons, plus interrupts and timers. The rest of the CPUs on the system can then be used exclusively for workloads.
- 6
- The value in reservedMemory[0].limits.memory, 1100Mi in this example, is equal to kubeReserved.memory + systemReserved.memory + evictionHard.memory.available.
- 7
- The evictionHard parameters define the conditions under which the kubelet evicts pods. When you change the default value of only one parameter in the evictionHard stanza, the default values of the other parameters are not inherited and are set to zero. Provide all the threshold values even when you want to change just one.
- 8
- The imagefs is a filesystem that container runtimes use to store container images and container writable layers. In this example, the evictionHard.imagefs.available parameter means that the pod is evicted when the available space of the image filesystem is less than 15%.
- 9
- In this example, the evictionHard.memory.available parameter means that pods are evicted when the available memory of the node drops below 100MiB.
- 10
- In this example, the evictionHard.nodefs.available parameter means that pods are evicted when the main filesystem of the node has less than 10% available space.
- 11
- In this example, the evictionHard.nodefs.inodesFree parameter means that pods are evicted when less than 15% of the inodes of the node's main filesystem are free.
- 12
- For container garbage collection: the duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes.
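For orientation, the callouts above describe a passthrough kubelet configuration of roughly the following shape. Treat this as a sketch: the CPU set, NUMA node, and memory values are illustrative and must be sized for your own hardware so that the relationships in callouts 5 and 6 hold.

```yaml
# /etc/microshift/config.yaml (kubelet section only; values are illustrative)
kubelet:
  cpuManagerPolicy: static                 # 2
  cpuManagerPolicyOptions:                 # 3
    full-pcpus-only: "true"
  memoryManagerPolicy: Static              # 4
  reservedSystemCPUs: 0,6,7                # 5: inverse of the offlined CPUs
  reservedMemory:
    - numaNode: 0
      limits:
        memory: 1100Mi                     # 6: kubeReserved + systemReserved + evictionHard.memory.available
  kubeReserved:
    memory: 500Mi
  systemReserved:
    memory: 500Mi
  evictionHard:                            # 7: provide every threshold
    imagefs.available: 15%                 # 8
    memory.available: 100Mi                # 9
    nodefs.available: 10%                  # 10
    nodefs.inodesFree: 15%                 # 11
  evictionPressureTransitionPeriod: 0s     # 12
```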
Verification
- After you complete the next steps and restart the host, you can use a root-access account to check that your settings are in the config.yaml file in the /var/lib/microshift/resources/kubelet/config/ directory.
Next steps
- Enable workload partitioning.
- Tune your operating system. For example, configure and activate a TuneD profile.
- Optional: Configure automatic enablement of your TuneD profile.
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
- Prepare your MicroShift workloads for low latency.
11.1.4. Tuning Red Hat Enterprise Linux 9
As a Red Hat Enterprise Linux (RHEL) system administrator, you can use the TuneD service to optimize the performance profile of RHEL for a variety of use cases. TuneD monitors and optimizes system performance under certain workloads, including latency performance.
- Use TuneD profiles to tune your system for different use cases, such as deploying a low-latency MicroShift node.
- You can modify the rules defined for each profile and customize tuning for a specific device.
- When you switch to another profile or deactivate TuneD, all changes made to the system settings by the previous profile revert to their original state.
- You can also configure TuneD to react to changes in device usage, adjusting settings to improve the performance of active devices and reduce the power consumption of inactive devices.
11.1.4.1. Configuring the MicroShift TuneD profile
Configure a TuneD profile for your host to use low latency with MicroShift workloads by using the microshift-baseline-variables.conf configuration file provided in the Red Hat Enterprise Linux (RHEL) /etc/tuned/ host directory after you install the microshift-low-latency RPM package.
Prerequisites
- You have root access to the node.
- You installed the microshift-low-latency RPM package.
- Your RHEL host has TuneD installed. See Getting started with TuneD (RHEL documentation).
Procedure
You can use the default microshift-baseline-variables.conf file in the /etc/tuned/ directory, or create your own to add more tunings.
Example microshift-baseline-variables.conf TuneD profile
- 1
- Controls which cores should be isolated. By default, 1 core per socket is reserved in MicroShift for housekeeping. The other cores are isolated. Valid values are a core list or range. You can isolate any range, for example: isolated_cores=2,4-7 or isolated_cores=2-23.
Important
You must keep only one isolated_cores= variable.
Note
The Kubernetes CPU manager can use any CPU to run the workload except the reserved CPUs defined in the kubelet configuration. For this reason it is best that:
- The sum of the kubelet's reserved CPUs and isolated cores includes all online CPUs.
- Isolated cores are complementary to the reserved CPUs defined in the kubelet configuration.
- 2
- Size of the hugepages. Valid values are 2M or 1G.
- 3
- Additional kernel arguments, for example, additional_args=console=tty0 console=ttyS0,115200.
- 4
- The CPU set to be offlined.
Important
Must not overlap with isolated_cores.
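The variables file is a plain key=value TuneD variables file. The following is a sketch matching the callouts above; the core numbers are illustrative, and variable names other than isolated_cores should be checked against the file shipped in the microshift-low-latency package:

```ini
# Cores to isolate from general scheduling (callout 1); keep only one isolated_cores= line
isolated_cores=2-7

# Hugepage size, 2M or 1G (callout 2), and the number of pages to allocate
hugepages_size=2M
hugepages=10

# Extra kernel command-line arguments (callout 3)
additional_args=

# CPUs to take offline (callout 4); must not overlap isolated_cores
offline_cpu_set=
```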
Enable the profile or make changes active by running the following command:
$ sudo tuned-adm profile microshift-baseline
- Reboot the host to make kernel arguments active.
Verification
Optional: You can read the /proc/cmdline file, which contains the arguments given to the currently running kernel on start:
$ cat /proc/cmdline
Example output
BOOT_IMAGE=(hd0,msdos2)/ostree/rhel-7f82ccd9595c3c70af16525470e32c6a81c9138c4eae6c79ab86d5a2d108d7fc/vmlinuz-5.14.0-427.31.1.el9_4.x86_64+rt crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M rd.lvm.lv=rhel/root fips=0 console=ttyS0,115200n8 root=/dev/mapper/rhel-root rw ostree=/ostree/boot.1/rhel/7f82ccd9595c3c70af16525470e32c6a81c9138c4eae6c79ab86d5a2d108d7fc/0 skew_tick=1 tsc=reliable rcupdate.rcu_normal_after_boot=1 nohz=on nohz_full=2,4-5 rcu_nocbs=2,4-5 tuned.non_isolcpus=0000000b intel_pstate=disable nosoftlockup hugepagesz=2M hugepages=10
Next steps
- Prepare your MicroShift workloads for low latency.
- Optional: Configure automatic enablement of your TuneD profile.
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
11.1.4.2. Automatically enabling the MicroShift TuneD profile
Included in the microshift-low-latency RPM package is a systemd service that you can configure to automatically enable a TuneD profile when the system starts. This ability is particularly useful if you are installing MicroShift in a large fleet of devices.
Prerequisites
- You installed the microshift-low-latency RPM package on the host.
- You enabled low latency in the MicroShift config.yaml file.
- You created a TuneD profile.
- You configured the microshift-baseline-variables.conf file.
Procedure
Configure the tuned.yaml file in the /etc/microshift/ directory, for example:
Example tuned.yaml
profile: microshift-baseline 1
reboot_after_apply: True 2
- 1
- Controls which TuneD profile is activated. In this example, the name of the profile is microshift-baseline.
- 2
- Controls whether the host must be rebooted after applying the profile. Valid values are True and False. For example, use the True setting to automatically restart the host after a new ostree commit is deployed.
Important
The host is restarted when the microshift-tuned.service runs, but the system is not restarted when a new commit is deployed. You must restart the host to enable a new commit; the system then starts again when the microshift-tuned.service runs on that boot and detects changes to profiles and variables.
This double boot can affect rollbacks. Ensure that you adjust the number of reboots allowed in greenboot before rollback when using automatic profile activation. For example, if 3 reboots are allowed before a rollback in greenboot, increase that number to 4. See the "Additional resources" list for more information.
Enable the microshift-tuned.service to run on each system start by entering the following command:
$ sudo systemctl enable microshift-tuned.service
Important
If you set reboot_after_apply to True, ensure that a TuneD profile is active and that no other profiles have been activated outside of the MicroShift service. Otherwise, starting the microshift-tuned.service results in a host reboot.
Start the microshift-tuned.service by running the following command:
$ sudo systemctl start microshift-tuned.service
Note
The microshift-tuned.service uses collected checksums to detect changes to selected TuneD profiles and variables. If there are no checksums on the disk, the service activates the TuneD profile and restarts the host. Expect a host restart when first starting the microshift-tuned.service.
Next steps
- Optional: If you are using the x86_64 architecture, you can install Red Hat Enterprise Linux for Real Time (real-time kernel).
11.1.5. Using Red Hat Enterprise Linux for Real Time
If your workload has stringent low-latency determinism requirements for core kernel features such as interrupt handling and process scheduling in the microsecond (μs) range, you can use the Red Hat Enterprise Linux for Real Time (real-time kernel). The goal of the real-time kernel is consistent, low-latency determinism that offers predictable response times.
When considering system tuning, consider the following factors:
- System tuning is just as important when using the real-time kernel as it is for the standard kernel.
- Installing the real-time kernel on an untuned system running the standard kernel supplied as part of the RHEL 9 release is not likely to result in any noticeable benefit.
- Tuning the standard kernel yields 90% of possible latency gains.
- The real-time kernel provides the last 10% of latency reduction required by the most demanding workloads.
11.1.5.1. Installing the Red Hat Enterprise Linux for Real Time (real-time kernel)
Although the real-time kernel is not necessary for low latency workloads, using the real-time kernel can optimize low latency performance. You can install it on a host by using RPM packages, and include it in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image deployment.
Prerequisites
- You have a Red Hat subscription that includes Red Hat Enterprise Linux for Real Time (real-time kernel). For example, your host machine is registered and Red Hat Enterprise Linux (RHEL) is attached to a RHEL for Real Time subscription.
- You are using x86_64 architecture.
Procedure
Enable the real-time kernel repository by running the following command:
$ sudo subscription-manager repos --enable rhel-9-for-x86_64-rt-rpms
Install the real-time kernel by running the following command:
$ sudo dnf install -y kernel-rt
Query the real-time kernel version by running the following command:
$ RTVER=$(rpm -q --queryformat '%{version}-%{release}.%{arch}' kernel-rt | sort | tail -1)
Make a persistent change in GRUB that designates the real-time kernel as the default kernel by running the following command:
$ sudo grubby --set-default="/boot/vmlinuz-${RTVER}+rt"
- Restart the host to activate the real-time kernel.
Next steps
- Prepare your MicroShift workloads for low latency.
- Optional: Use a blueprint to install the real-time kernel in a RHEL for Edge image.
You can include the real-time kernel in a RHEL for Edge image deployment using image builder. The following example blueprint sections include references gathered from the previous steps required to configure low latency for a MicroShift node.
Prerequisites
- You have a Red Hat subscription enabled on the host that includes Red Hat Enterprise Linux for Real Time (real-time kernel).
- You are using the x86_64 architecture.
- You configured osbuild to use the kernel-rt repository.
A subscription that includes the real-time kernel must be enabled on the host used to build the commit.
Procedure
Add the following example blueprint sections to your complete installation blueprint for installing the real-time kernel in a RHEL for Edge image:
Example blueprint snippet for the real-time kernel
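As a minimal sketch of the blueprint customization involved, image builder selects the real-time kernel through the kernel customization (the surrounding blueprint fields are omitted here):

```toml
# Blueprint snippet: install and boot the real-time kernel in the image
[customizations.kernel]
name = "kernel-rt"
```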
Next steps
- Complete the image building process.
- If you have not completed the previous steps for enabling low latency for your MicroShift cluster, do so now. Update the blueprint with the information gathered in those steps.
- If you have not configured workload partitioning, do so now.
- Prepare your MicroShift workloads for low latency.
11.1.6. Building the Red Hat Enterprise Linux for Edge (RHEL for Edge) image with the real-time kernel
Complete the build process by starting with the following procedure to embed MicroShift in a RHEL for Edge image. Then complete the remaining steps in the installation documentation for installing MicroShift in a RHEL for Edge image.
11.1.7. Preparing a MicroShift workload for low latency
To take advantage of low latency, workloads running on MicroShift must have the microshift-low-latency container runtime configuration set by using the RuntimeClass feature. The CRI-O RuntimeClass object is installed with the microshift-low-latency RPM, so only the pod annotations need to be configured.
Prerequisites
- You installed the microshift-low-latency RPM package.
- You configured workload partitioning.
Procedure
Set the following annotations in the pod spec:
cpu-load-balancing.crio.io: "disable"
irq-load-balancing.crio.io: "disable"
cpu-quota.crio.io: "disable"
cpu-c-states.crio.io: "disable"
cpu-freq-governor.crio.io: "<governor>"
Example pod that runs the oslat test:
- 1
- Disables the CPU load balancing for the pod.
- 2
- Opts the pod out of interrupt request (IRQ) load balancing.
- 3
- Disables the CPU completely fair scheduler (CFS) quota at the pod run time.
- 4
- Enables or disables C-states for each CPU. Set the value to disable to provide the best performance for a high-priority pod.
- 5
- Sets the cpufreq governor for each CPU. The performance governor is recommended for high-priority workloads.
- 6
- The runtimeClassName must match the name of the performance profile configured in the node. For example, microshift-low-latency.
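Putting the annotations, runtime class, and Guaranteed QoS requirements together, a pod of roughly this shape is a reasonable sketch; the image reference and the CPU and memory values are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oslat
  annotations:
    cpu-load-balancing.crio.io: "disable"     # 1
    irq-load-balancing.crio.io: "disable"     # 2
    cpu-quota.crio.io: "disable"              # 3
    cpu-c-states.crio.io: "disable"           # 4
    cpu-freq-governor.crio.io: "performance"  # 5
spec:
  runtimeClassName: microshift-low-latency    # 6
  containers:
    - name: oslat
      image: quay.io/example/oslat:latest     # illustrative image reference
      resources:
        requests:
          cpu: "2"
          memory: 256Mi
        limits:                               # equal to requests for Guaranteed QoS
          cpu: "2"
          memory: 256Mi
```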
Note
Disable CPU load balancing only when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the node.
Important
For the pod to have the Guaranteed QoS class, it must have the same values of CPU and memory in requests and limits. See Guaranteed (Kubernetes upstream documentation).
11.1.8. Reference blueprint for installing Red Hat Enterprise Linux for Real Time (real-time kernel) in a RHEL for Edge image
An image blueprint is a persistent definition of the required image customizations that enable you to create multiple builds. Instead of reconfiguring the blueprint for each image build, you can edit, rebuild, delete, and save the blueprint so that you can keep rebuilding images from it.
Example blueprint used to install the real-time kernel in a RHEL for Edge image
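The reference blueprint itself is not reproduced above. As a hedged sketch of what such a blueprint typically contains (the name, version, and enabled services are illustrative; adapt it to your deployment):

```toml
name = "microshift-low-latency"
description = "RHEL for Edge image with MicroShift and the real-time kernel"
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "microshift"
version = "*"

[[packages]]
name = "microshift-low-latency"
version = "*"

# Requires the rhel-9-for-x86_64-rt-rpms repository to be available to osbuild
[customizations.kernel]
name = "kernel-rt"

[customizations.services]
enabled = ["microshift", "microshift-tuned"]
```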
11.2. Workload partitioning
Workload partitioning divides the node CPU resources into distinct CPU sets. The primary objective is to limit the amount of CPU usage by all control plane components, reserving the rest of the device CPU resources for user workloads.
Workload partitioning allocates a reserved set of CPUs to MicroShift services, cluster management workloads, and infrastructure pods, ensuring that the remaining CPUs in the cluster deployment are untouched and available exclusively for non-platform workloads.
11.2.1. Enabling workload partitioning
To enable workload partitioning on MicroShift, make the following configuration changes:
- Update the MicroShift config.yaml file to include the kubelet configuration file.
- Create the CRI-O systemd and configuration files.
- Create and update the systemd configuration files for the MicroShift and CRI-O services, respectively.
Procedure
Update the MicroShift config.yaml file to include the kubelet configuration file to enable and configure CPU Manager for the workloads:
Create the kubelet configuration file in the path /etc/kubernetes/openshift-workload-pinning. The kubelet configuration directs the kubelet to modify the node resources based on the capacity and allocatable CPUs.
kubelet configuration example
- 1
- The cpuset applies to a machine with 8 VCPUs (4 cores) and is valid throughout the document.
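The workload-pinning file follows this shape; a sketch, with the cpuset matching the 8-VCPU example used throughout this section:

```json
{
  "management": {
    "cpuset": "0,6,7"
  }
}
```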
Update the MicroShift config.yaml file in the path /etc/microshift/config.yaml. Embed the kubelet configuration in the MicroShift config.yaml file to enable and configure CPU Manager for the workloads.
MicroShift config.yaml example
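A sketch of the embedded kubelet section for this example; the CPU set mirrors the workload-pinning file, and the reconcile period is illustrative:

```yaml
# /etc/microshift/config.yaml (kubelet section only)
kubelet:
  cpuManagerPolicy: static
  cpuManagerReconcilePeriod: 5s
  reservedSystemCPUs: 0,6,7
```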
Create the CRI-O systemd and configuration files:
Create the CRI-O configuration file in the path /etc/crio/crio.conf.d/20-microshift-workload-partition.conf, which overrides the default configuration that already exists in the 11-microshift-ovn.conf file.
CRI-O configuration example
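CRI-O expresses workload partitioning through a workloads section in its TOML configuration. The following is a sketch using the same management CPU set; verify the section names against your installed CRI-O documentation:

```toml
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { cpushares = 0, cpuset = "0,6,7" }
```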
Create the systemd file for CRI-O in the path /etc/systemd/system/crio.service.d/microshift-cpuaffinity.conf.
CRI-O systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Create and update the systemd configuration files with the CPUAffinity value for the MicroShift and CRI-O services:
Create the MicroShift services systemd file in the path /etc/systemd/system/microshift.service.d/microshift-cpuaffinity.conf. MicroShift is pinned by using the systemd CPUAffinity value.
MicroShift services systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Update the CPUAffinity value in the MicroShift ovs-vswitchd systemd file in the path /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf.
MicroShift ovs-vswitchd systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...
Update the CPUAffinity value in the MicroShift ovsdb-server systemd file in the path /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf.
MicroShift ovsdb-server systemd configuration example
# ...
[Service]
CPUAffinity=0,6,7
# ...