Chapter 7. Configuring MicroShift authentication and security
7.1. Configuring custom certificate authorities
Allow and encrypt connections with external clients by replacing the MicroShift default API server certificate with a custom server certificate issued by a certificate authority (CA).
7.1.1. Using custom certificate authorities for the MicroShift API server
When MicroShift starts, an internal MicroShift cluster certificate authority (CA) issues the default API server certificate. By default, clients outside of the cluster cannot verify the MicroShift-issued API server certificate. You can grant secure access and encrypt connections between the MicroShift API server and external clients. Replace the default certificate with a custom server certificate issued externally by a CA that clients trust.
The following steps illustrate the workflow for customizing the API server certificate configuration in MicroShift:
- Copy the certificates and keys to the preferred directory in the host operating system. Ensure that the files are accessible only with root access.
- Update the MicroShift configuration for each custom CA by specifying the certificate names and new fully qualified domain name (FQDN) in the MicroShift /etc/microshift/config.yaml configuration file. Each certificate configuration can contain the following values:
  - The certificate file location is a required value.
  - A single common name containing the API server DNS and IP address or IP address range.
    Tip: In most cases, MicroShift generates a new kubeconfig file for your custom CA that includes the IP address or range that you specify. The exception is when you specify wildcards for the IP address. In this case, MicroShift generates a kubeconfig file with the public IP address of the server. To use wildcards, you must update the kubeconfig file with your specific details.
  - Multiple Subject Alternative Names (SANs) containing the API server DNS and IP addresses or a wildcard certificate.
  - You can list additional DNS names for each certificate.
- After the MicroShift service restarts, you must copy the generated kubeconfig files to the client. Configure additional CAs on the client system. For example, you can update CA bundles in the Red Hat Enterprise Linux (RHEL) truststore.
  Important: Custom server certificates must be validated against CA data configured in the trust root of the host operating system.
- The certificates and keys are read from the specified file location on the host. You can test and validate the configuration from the client.
- If any validation fails, MicroShift skips the custom configuration and uses the default certificate to start. The priority is to continue the service uninterrupted. MicroShift logs errors when the service starts. Common errors include expired certificates, missing files, or wrong IP addresses.
- External server certificates are not automatically renewed. You must manually rotate your external certificates.
7.1.2. Configuring custom certificate authorities
To configure externally generated certificates and domain names by using custom certificate authorities (CAs), add them to the MicroShift /etc/microshift/config.yaml configuration file. You must also configure the host operating system trust root.
Externally generated kubeconfig files are created in the /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig directory. If you need to use localhost in addition to externally generated configurations, retain the original kubeconfig file in its default location. The localhost kubeconfig file uses the self-signed certificate authority.
Prerequisites
- The OpenShift CLI (oc) is installed.
- You have access to the cluster as a user with the cluster administration role.
- The certificate authority has issued the custom certificates.
- A MicroShift /etc/microshift/config.yaml configuration file exists.
Procedure
- Copy the custom certificates you want to add to the trust root of the MicroShift host. Ensure that the certificate and private keys are only accessible to MicroShift.
- For each custom CA that you need, add an apiServer section called namedCertificates to the /etc/microshift/config.yaml MicroShift configuration file by using the following example.
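  This is a minimal sketch of a namedCertificates entry; the certPath, keyPath, and names field names and the example paths and domain names are illustrative assumptions, so substitute your own values:

  apiServer:
    namedCertificates:
      - certPath: /home/user/certs/custom-ca/server.crt    # location of the certificate file (required)
        keyPath: /home/user/certs/custom-ca/server.key     # location of the matching private key
        names:                                             # SANs or additional DNS names served with this certificate
          - custom-api.example.com
          - '*.apps.example.com'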
- Restart MicroShift to apply the certificates by running the following command:

  $ sudo systemctl restart microshift

- Wait a few minutes for the system to restart and apply the custom server certificate. New kubeconfig files are generated in the /var/lib/microshift/resources/kubeadmin/ directory.
- Copy the kubeconfig files to the client. If you specified wildcards for the IP address, update the kubeconfig to remove the public IP address of the server and replace that IP address with the specific wildcard range you want to use. From the client, use the following steps:
  - Specify the kubeconfig to use by running the following command, using the location of the copied kubeconfig file as the path:

    $ export KUBECONFIG=~/custom-kubeconfigs/kubeconfig
  - Check that the certificates are applied by using the following command:

    $ oc --certificate-authority ~/certs/ca.ca get node

    Example output

    NAME                             STATUS   ROLES                         AGE   VERSION
    dhcp-1-235-195.arm.example.com   Ready    control-plane,master,worker   76m   v1.32.3
  - Add the new CA file to the kubeconfig file referenced by the $KUBECONFIG environment variable by running the following command:

    $ oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt
  - Verify that the new kubeconfig file contains the new CA by running the following command:

    $ oc config view --flatten

    Example externally generated kubeconfig file
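    A minimal sketch of what the flattened kubeconfig might look like; the cluster name, server URL, user entry, and the placeholder values are illustrative assumptions:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /tmp/certificate-authority-data-new.crt   # set by the previous oc config set command
        server: https://custom-api.example.com:6443                      # assumed external FQDN
      name: microshift
    contexts:
    - context:
        cluster: microshift
        user: user
      name: microshift
    current-context: microshift
    kind: Config
    users:
    - name: user
      user:
        client-certificate-data: <client_certificate_data>
        client-key-data: <client_key_data>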
    The certificate-authority-data section is not present in externally generated kubeconfig files. It is added with the oc config set command used previously.
  - Verify the subject and issuer of your customized API server certificate authority by running the following command:

    $ curl --cacert /tmp/caCert.pem https://${fqdn_name}:6443/healthz -v

    Important: Either replace the certificate-authority-data in the generated kubeconfig file with the new rootCA, or add the certificate-authority-data to the trust root of the operating system. Do not use both methods.

- Configure additional CAs in the trust root of the operating system, for example, in the RHEL Client truststore (the system-wide truststore) on the client system.
- Updating the certificate bundle with the configuration that contains the CA is recommended.
- If you do not want to configure your certificate bundles, you can alternatively use the oc login localhost:8443 --certificate-authority=/path/to/cert.crt command, but this method is not preferred.
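For example, on a RHEL client you might add the custom CA to the system-wide truststore with commands similar to the following sketch, which assumes the CA certificate is saved locally as ca.crt:

  $ sudo cp ca.crt /etc/pki/ca-trust/source/anchors/
  $ sudo update-ca-trust extract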
7.1.3. Custom certificates reserved name values
The following certificate problems cause MicroShift to ignore certificates dynamically and log an error:
- The certificate files do not exist on the disk or are not readable.
- The certificate is not parsable.
- The certificate overrides the internal certificates' IP addresses or DNS names in a SubjectAlternativeNames (SAN) field. Do not use a reserved name when configuring SANs.
Address | Type | Comment
---|---|---
| DNS |
| IP Address |
| IP Address | Cluster Network
| IP Address | Service Network
169.254.169.2/29 | IP Address | br-ex Network
| DNS |
| DNS |
| DNS |
7.1.4. Troubleshooting custom certificates
To troubleshoot the implementation of custom certificates, you can take the following steps.
Procedure
- From MicroShift, ensure that the certificate is served by the kube-apiserver and verify that the certificate path is appended to the --tls-sni-cert-key flag by running the following command:

  $ journalctl -u microshift -b0 | grep tls-sni-cert-key

  Example output
Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099 45313 flags.go:64] FLAG: --tls-sni-cert-key="[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key
- From the client, ensure that the kube-apiserver is serving the correct certificate by running the following command:

  $ openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 "Alternative\|CN"
7.1.5. Cleaning up and recreating the custom certificates
To stop the MicroShift services, clean up the custom certificates, and recreate them, use the following steps.
Procedure
- Stop the MicroShift services and clean up the custom certificates by running the following command:

  $ sudo microshift-cleanup-data --cert

  Example output

  Stopping MicroShift services
  Removing MicroShift certificates
  MicroShift service was stopped
  Cleanup succeeded

- Restart the MicroShift services to recreate the custom certificates by running the following command:

  $ sudo systemctl start microshift
7.1.6. Additional resources
7.2. Configuring TLS security profiles
Use transport layer security (TLS) protocols to help prevent known insecure protocols, ciphers, or algorithms from accessing the applications you run on MicroShift.
7.2.1. Using TLS with MicroShift
Transport layer security (TLS) profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. Using TLS helps to ensure that MicroShift applications use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. You can use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift.
MicroShift API server cipher suites apply automatically to the following internal control plane components:
- API server
- Kubelet
- Kube controller manager
- Kube scheduler
- etcd
- Route controller manager
The API server uses the configured minimum TLS version and the associated cipher suites. If you leave the cipher suites parameter empty, the defaults for the configured minimum version are used automatically.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
7.2.2. Configuring TLS for MicroShift
You can choose to use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift for system hardening.
Prerequisites
- You have access to the cluster as a root user.
- MicroShift has either not started for the first time, or is stopped.
- The OpenShift CLI (oc) is installed.
- The certificate authority has issued the custom certificates (CAs).
Procedure
- Make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml.
- Keep the new MicroShift config.yaml in the /etc/microshift/ directory. Your config.yaml file is read every time the MicroShift service starts.
  Note: After you create it, the config.yaml file takes precedence over built-in settings.
- Optional: Use a configuration snippet if you are using an existing MicroShift YAML. See "Using configuration snippets" in the Additional resources section for more information.
- Replace the default values in the tls section of the MicroShift YAML with your valid values.

  Example TLS 1.2 configuration
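  A minimal sketch of the tls section; the cipherSuites and minVersion field names under apiServer, the example suites, and the callout placement are assumptions based on the descriptions that follow:

  apiServer:
    tls:
      cipherSuites:                                  # <1>
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256    # <2>
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
      minVersion: VersionTLS12                       # <3>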
  1. Defaults to the suites of the configured minVersion. If minVersion is not configured, the default value is TLS 1.2.
  2. Specify the cipher suites you want to use from the list of supported cipher suites. If you do not configure this list, all of the supported cipher suites are used. All clients connecting to the API server must support the configured cipher suites or the connections fail during the TLS handshake phase. Be sure to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts.
  3. Specify VersionTLS12 or VersionTLS13.
  Important: When you choose TLS 1.3 as the minimum TLS version, only the default MicroShift cipher suites can be used. Additional cipher suites are not configurable. If other cipher suites to use with TLS 1.3 are configured, those suites are ignored and overwritten by the MicroShift defaults.
- Complete any other configurations that you require, then restart MicroShift by running the following command:

  $ sudo systemctl restart microshift
7.2.2.1. Default cipher suites
Default cipher suites are included with MicroShift for both TLS 1.2 and TLS 1.3. The cipher suites for TLS 1.3 cannot be customized.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
7.3. Configuring audit logging policies
You can control MicroShift audit log file rotation and retention by using configuration values.
7.3.1. About setting limits on audit log files
Controlling the rotation and retention of the MicroShift audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or cluster workloads, potentially causing the device to stop working. Setting audit log policies can help ensure that critical processing space is continually available.
The values you set to limit MicroShift audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization.
You can set fields in combination to define a maximum storage limit for retained logs. For example:
- Set both maxFileSize and maxFiles to create a log storage upper limit.
- Set a maxFileAge value to automatically delete files older than the timestamp in the file name, regardless of the maxFiles value.
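For example, assuming maxFileSize: 200 and maxFiles: 10, the retained rotated logs use at most about 200 MB x 10 = 2000 MB, in addition to the active audit.log file.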
7.3.1.1. Default audit log values
MicroShift includes the following default audit log rotation values:
Audit log parameter | Default setting | Definition
---|---|---
maxFileAge | 0 | How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured.
maxFiles | 10 | The total number of log files retained. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured.
maxFileSize | 200 | By default, when the audit.log file reaches the maxFileSize limit, in megabytes, it is rotated and MicroShift begins writing to a new audit.log file. This value can be configured.
profile | Default | The Default profile logs only metadata for read and write requests; request bodies are not logged, except for OAuth access token requests. This value can be configured.
The maximum default storage usage for audit log retention is 2000 MB if there are 10 or fewer files.
If you do not specify a value for a field, the default value is used. If you remove a previously set field value, the default value is restored after the next MicroShift service restart.
You must configure audit log retention and rotation in Red Hat Enterprise Linux (RHEL) for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the RHEL /var/log/audit/audit.log file to maintain MicroShift cluster health.
Additional resources
- Configuring auditd for a secure environment
- Understanding Audit log files
- How to use logrotate utility to rotate log files (Solutions, dated 7 August 2024)
7.3.2. About audit log policy profiles
Audit log profiles define how to log requests that come to the OpenShift API server and the Kubernetes API server.
MicroShift supports the following predefined audit policy profiles:
Profile | Description
---|---
Default | Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy.
WriteRequestBodies | In addition to logging metadata for all requests, logs request bodies for every write request to the API servers.
AllRequestBodies | In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers. This profile has the most resource overhead.
None | No requests are logged, including OAuth access token requests and OAuth authorize token requests. Warning: Do not disable audit logging by using the None profile unless you are aware of the risks of not logging data that can be reviewed later if problems arise.
- Sensitive resources, such as Secret, Route, and OAuthClient objects, are only logged at the metadata level.
By default, MicroShift uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage such as CPU, memory, and I/O.
7.3.3. Configuring audit log values
You can configure audit log settings by using the MicroShift service configuration file.
Procedure
- Make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml. Keep the new MicroShift config.yaml you create in the /etc/microshift/ directory. The new config.yaml is read whenever the MicroShift service starts. After you create it, the config.yaml file takes precedence over built-in settings.
file takes precedence over built-in settings. Replace the default values in the
auditLog
section of the YAML with your desired valid values.Example default
auditLog
configurationCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
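  A minimal sketch of the auditLog section matching the callouts below; the nesting under apiServer is an assumption about the config.yaml layout:

  apiServer:
    auditLog:
      maxFileAge: 7     # <1>
      maxFileSize: 200  # <2>
      maxFiles: 1       # <3>
      profile: Default  # <4>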
  1. Specifies the maximum time in days that log files are kept. Files older than this limit are deleted. In this example, after a log file is more than 7 days old, it is deleted. The files are deleted regardless of whether or not the live log has reached the maximum file size specified in the maxFileSize field. File age is determined by the timestamp written in the name of the rotated log file, for example, audit-2024-05-16T17-03-59.994.log. When the value is 0, the limit is disabled.
  2. The maximum audit log file size in megabytes. In this example, the file is rotated as soon as the live log reaches the 200 MB limit. When the value is set to 0, the limit is disabled.
  3. The maximum number of rotated audit log files retained. After the limit is reached, the log files are deleted in order from oldest to newest. In this example, the value 1 results in only 1 file of size maxFileSize being retained in addition to the current active log. When the value is set to 0, the limit is disabled.
  4. Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the Default profile is used.
- Optional: To specify a new directory for logs, you can stop MicroShift, and then move the /var/log/kube-apiserver directory to your desired location:

  - Stop MicroShift by running the following command:

    $ sudo systemctl stop microshift
  - Move the /var/log/kube-apiserver directory to your desired location by running the following command, replacing <~/kube-apiserver> with the path to the directory that you want to use:

    $ sudo mv /var/log/kube-apiserver <~/kube-apiserver>
- If you specified a new directory for logs, create a symlink to your custom directory at /var/log/kube-apiserver by running the following command, replacing <~/kube-apiserver> with the path to the directory that you want to use. This enables the collection of logs in sos reports.

  $ sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver
- If you are configuring audit log policies on a running instance, restart MicroShift by entering the following command:

  $ sudo systemctl restart microshift
7.3.4. Troubleshooting audit log configuration
Use the following steps to troubleshoot custom audit log settings and file locations.
Procedure
- Check the current values that are configured by running the following command:

  $ sudo microshift show-config --mode effective

  Example output
  auditLog:
    maxFileSize: 200
    maxFiles: 1
    maxFileAge: 7
    profile: AllRequestBodies

- Check the audit.log file permissions by running the following command:

  $ sudo ls -ltrh /var/log/kube-apiserver/audit.log
  Example output

  -rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log

- List the contents of the current log directory by running the following command:

  $ sudo ls -ltrh /var/log/kube-apiserver/

  Example output
  total 6.0M
  -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log
  -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log
  -rw-------. 1 root root 962K Mar 12 10:57 audit.log
7.4. Verifying container signatures for supply chain security
You can enhance supply chain security by using the sigstore signing methodology.
sigstore support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.4.1. Understanding how to use sigstore to verify container signatures
You can configure the MicroShift container runtime to verify image integrity by using the sigstore signing methodology. With the sigstore project, developers can digitally sign what they build, creating a safer chain of custody that traces software back to the source. Administrators can then verify signatures and monitor workflows at scale. By using sigstore, you can store signatures in the same registry as the build images.
- For user-specific images, you must update the configuration file to point to the appropriate public key, or disable signature verification for those image sources.
- For disconnected or offline configurations, you must embed the public key contents into the operating system image.
7.4.2. Verifying container signatures using sigstore
Verify container signatures for MicroShift by configuring the container runtime to use sigstore. Container signature verification uses the public key from the Red Hat keypair that was used to sign the images. To use sigstore, edit the default /etc/containers/policy.json file that is installed as part of the container runtime package.
You must use the Red Hat release key 3 for verifying MicroShift container signatures.
Prerequisites
- You have admin access to the MicroShift host.
- You installed MicroShift.
Procedure
- Download the relevant public key and save it as /etc/containers/RedHat_ReleaseKey3.pub by running the following command:

  $ sudo curl -sL https://access.redhat.com/security/data/63405576.txt -o /etc/containers/RedHat_ReleaseKey3.pub

- To configure the container runtime to verify images from Red Hat sources, edit the /etc/containers/policy.json file to contain the following configuration:

  Example policy JSON file
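  A minimal sketch of a policy.json with sigstoreSigned entries for the two Red Hat sources used in this procedure; the default rule shown here and the signedIdentity setting are assumptions, so merge these entries into your existing policy as needed:

  {
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {
      "docker": {
        "registry.redhat.io": [
          {
            "type": "sigstoreSigned",
            "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
            "signedIdentity": {"type": "matchRepoDigestOrExact"}
          }
        ],
        "quay.io/openshift-release-dev": [
          {
            "type": "sigstoreSigned",
            "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
            "signedIdentity": {"type": "matchRepoDigestOrExact"}
          }
        ]
      }
    }
  }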
- Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage by editing the /etc/containers/registries.d/registry.redhat.io.yaml file to contain the following configuration:

  $ cat /etc/containers/registries.d/registry.redhat.io.yaml
  docker:
    registry.redhat.io:
      use-sigstore-attachments: true
- Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage by editing the /etc/containers/registries.d/registry.quay.io.yaml file to contain the following configuration:

  $ cat /etc/containers/registries.d/quay.io.yaml
  docker:
    quay.io/openshift-release-dev:
      use-sigstore-attachments: true
- Create user-specific registry configuration files if your use case requires signature verification for those image sources. You can use the example here to start with and add your own requirements.
Next steps
- If you are using a mirror registry, enable sigstore attachments.
- Otherwise, proceed to wiping the local container storage clean.
7.4.2.1. Enabling sigstore attachments for mirror registries
If you are using mirror registries, you must apply additional configuration to enable sigstore attachments and mirroring by digest.
Prerequisites
- You have admin access to the MicroShift host.
- You completed the steps in "Verifying container signatures using sigstore."
Procedure
- Enable sigstore attachments by creating the /etc/containers/registries.d/<mirror.registry.local.yaml> file, where you name the <mirror.registry.local.yaml> file after your mirror registry URL:

  $ cat /etc/containers/registries.d/<mirror.registry.local.yaml>
  docker:
    mirror.registry.local:
      use-sigstore-attachments: true
- Enable mirroring by digest by creating the /etc/containers/registries.conf.d/999-microshift-mirror.conf file with the following contents:
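  A minimal sketch in containers-registries.conf TOML format; the mirrored source registries and the mirror.registry.local host name are assumptions, so substitute the registries and mirror that apply to your deployment:

  # Assumed source registries mirrored to the local mirror; adjust for your environment
  [[registry]]
      prefix = ""
      location = "registry.redhat.io"
      mirror-by-digest-only = true

      [[registry.mirror]]
          location = "mirror.registry.local"

  [[registry]]
      prefix = ""
      location = "quay.io/openshift-release-dev"
      mirror-by-digest-only = true

      [[registry.mirror]]
          location = "mirror.registry.local"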
Next steps
- Wipe the local container storage clean.
7.4.2.2. Wiping local container storage clean
When you apply the configuration to an existing system, you must wipe the local container storage clean. Cleaning the container storage ensures that container images with signatures are properly downloaded.
Prerequisites
- You have administrator access to the MicroShift host.
- You enabled sigstore on your mirror registries.
Procedure
- Stop the CRI-O container runtime service and MicroShift by running the following command:

  $ sudo systemctl stop crio microshift

- Wipe the CRI-O container runtime storage clean by running the following command:

  $ sudo crio wipe --force

- Restart the CRI-O container runtime service and MicroShift by running the following command:

  $ sudo systemctl start crio microshift
Verification
- Verify that all pods are running in a healthy state by entering the following command:

  $ oc get pods -A

  Example output

  This example output shows basic MicroShift. If you have installed optional RPMs, the status of pods running those services is also expected to be shown in your output.