Chapter 11. Configuring MicroShift authentication and security
11.1. Configuring custom certificate authorities
You can allow and encrypt connections with external clients by replacing the MicroShift default API server certificate with a custom server certificate issued by a certificate authority (CA).
11.1.1. Using custom certificate authorities for the MicroShift API server
To enable external clients to verify the MicroShift API server and maintain encrypted connections, you can replace the default internal certificate with a custom server certificate issued by a trusted certificate authority (CA).
By default, clients outside of the node cannot verify the MicroShift-issued API server certificate. You must update the configuration file with the certificate location and relevant domain names to ensure secure access across your network.
The following steps illustrate the workflow for customizing the API server certificate configuration in MicroShift:
- Copy the certificates and keys to the preferred directory in the host operating system. Ensure that the files are accessible only with root access.
- Update the MicroShift configuration for each custom CA by specifying the certificate names and the new fully qualified domain name (FQDN) in the MicroShift `/etc/microshift/config.yaml` configuration file. Each certificate configuration can contain the following values:
  - The certificate file location is a required value.
  - A single common name containing the API server DNS and IP address or IP address range.

    Tip: In most cases, MicroShift generates a new `kubeconfig` file for your custom CA that includes the IP address or range that you specify. The exception is when you specify wildcards for the IP address. In this case, MicroShift generates a `kubeconfig` file with the public IP address of the server. To use wildcards, you must update the `kubeconfig` file with your specific details.
  - Multiple Subject Alternative Names (SANs) containing the API server DNS and IP addresses or a wildcard certificate.
  - You can list additional DNS names for each certificate.
- After the MicroShift service restarts, you must copy the generated `kubeconfig` files to the client. Configure additional CAs on the client system. For example, you can update CA bundles in the Red Hat Enterprise Linux (RHEL) truststore.
  Important: Custom server certificates must be validated against CA data configured in the trust root of the host operating system.
- The certificates and keys are read from the specified file location on the host. You can test and validate the configuration from the client.
- If any validation fails, MicroShift skips the custom configuration and uses the default certificate to start. The priority is to continue the service uninterrupted. MicroShift logs errors when the service starts. Common errors include expired certificates, missing files, or wrong IP addresses.
- External server certificates are not automatically renewed. You must manually rotate your external certificates.
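Because external server certificates are not renewed automatically, a periodic expiry check helps you schedule rotation before clients see failures. The following is a minimal sketch; the `/tmp/rotate-demo.*` file names and the 30-day window are assumptions, and the self-signed certificate is generated here only so the check can be demonstrated end to end:

```shell
# Generate a throwaway certificate to stand in for your custom API server cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/rotate-demo.key \
  -out /tmp/rotate-demo.crt -days 90 -subj "/CN=api.demo.local" 2>/dev/null

# `-checkend N` exits 0 while the certificate remains valid for N more seconds.
if openssl x509 -checkend $((30 * 86400)) -noout -in /tmp/rotate-demo.crt > /dev/null; then
  echo "certificate valid for more than 30 days"
else
  echo "certificate expires within 30 days: rotate it"
fi
```

Running a check like this from cron or a systemd timer gives you lead time to issue and install a replacement certificate.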
11.1.2. Configuring custom certificate authorities
To configure externally generated certificates and domain names by using custom certificate authorities (CAs), add them to the MicroShift /etc/microshift/config.yaml configuration file. You must also configure the host operating system trust root.
Externally generated kubeconfig files are created in the /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig directory. If you need to use localhost in addition to externally generated configurations, retain the original kubeconfig file in its default location. The localhost kubeconfig file uses the self-signed certificate authority.
Prerequisites
- The OpenShift CLI (`oc`) is installed.
- You have root access to the node.
- The certificate authority has issued the custom certificates.
- A MicroShift `/etc/microshift/config.yaml` configuration file exists.
Procedure
- Copy the custom certificates you want to add to the trust root of the MicroShift host. Ensure that the certificate and private keys are only accessible to MicroShift.
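One way to satisfy the root-only access requirement is to stage the files with `install`, which copies and sets permissions in one step. This sketch uses a scratch directory and placeholder files; on a real host you would target a location such as `/etc/microshift/certs`, which is an assumption for this example, not a mandated path:

```shell
# Scratch directory standing in for the real certificate directory on the host.
rm -rf /tmp/microshift-certs-demo
install -d -m 0700 /tmp/microshift-certs-demo   # directory readable by root only

touch /tmp/demo.crt /tmp/demo.key               # placeholder certificate files

# Copy the files and set 0600 permissions in a single step.
install -m 0600 /tmp/demo.crt /tmp/demo.key /tmp/microshift-certs-demo/

# Confirm the resulting modes.
stat -c '%a %n' /tmp/microshift-certs-demo/demo.crt /tmp/microshift-certs-demo/demo.key
```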
- For each custom CA that you need, add an `apiServer` section called `namedCertificates` to the `/etc/microshift/config.yaml` MicroShift configuration file by using the following example:

  ```yaml
  apiServer:
    namedCertificates:
      - certPath: ~/certs/api_fqdn_1.crt
        keyPath: ~/certs/api_fqdn_1.key
      - certPath: ~/certs/api_fqdn_2.crt
        keyPath: ~/certs/api_fqdn_2.key
        names:
          - api_fqdn_1
          - "*.apps.external.com"
  ```

  where:
  - `apiServer.namedCertificates.certPath`: Add the full path to the certificate.
  - `apiServer.namedCertificates.keyPath`: Add the full path to the certificate key.
  - `apiServer.namedCertificates.names`: Optional. Add a list of explicit DNS names. Leading wildcards are allowed. If no names are listed, the implicit names are extracted from the certificates.
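Before restarting the service, it can help to confirm that each `certPath`/`keyPath` pair actually belongs together, because a mismatched pair is one of the validation failures that makes MicroShift fall back to the default certificate. A sketch using a generated demo pair; substitute your real file paths:

```shell
# Create a matching demo pair so the check can be demonstrated.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/api_demo.key \
  -out /tmp/api_demo.crt -days 365 -subj "/CN=api_fqdn_1" 2>/dev/null

# A certificate and key match when they carry the same public key.
cert_pub=$(openssl x509 -in /tmp/api_demo.crt -noout -pubkey)
key_pub=$(openssl pkey -in /tmp/api_demo.key -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "mismatched pair: fix certPath/keyPath"
fi
```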
- Restart MicroShift to apply the certificates by running the following command:

  ```shell
  $ sudo systemctl restart microshift
  ```
- Wait a few minutes for the system to restart and apply the custom server certificate. New `kubeconfig` files are generated in the `/var/lib/microshift/resources/kubeadmin/` directory.
- Copy the `kubeconfig` files to the client. If you specified wildcards for the IP address, update the `kubeconfig` to remove the public IP address of the server and replace that IP address with the specific wildcard range you want to use. From the client, use the following steps:
  - Specify the `kubeconfig` to use by running the following command:

    ```shell
    $ export KUBECONFIG=~/custom-kubeconfigs/kubeconfig
    ```

    Use the location of the copied `kubeconfig` file as the path.

  - Check that the certificates are applied by using the following command:

    ```shell
    $ oc --certificate-authority ~/certs/ca.ca get node
    ```

    Example output:

    ```
    NAME                             STATUS   ROLES                         AGE   VERSION
    dhcp-1-235-195.arm.example.com   Ready    control-plane,master,worker   76m   v1.34.2
    ```

  - Add the new CA file to the `$KUBECONFIG` environment variable by running the following command:

    ```shell
    $ oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt
    ```

  - Verify that the new `kubeconfig` file contains the new CA by running the following command:

    ```shell
    $ oc config view --flatten
    ```

    Example externally generated `kubeconfig` file:

    ```yaml
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /tmp/certificate-authority-data-new.crt
        server: https://api.ci-ln-k0gim2b-76ef8.aws-2.ci.openshift.org:6443
      name: ci-ln-k0gim2b-76ef8
    contexts:
    - context:
        cluster: ci-ln-k0gim2b-76ef8
        user:
      name:
    current-context:
    kind: Config
    preferences: {}
    ```

    where:

    - `clusters.cluster.certificate-authority`: The `certificate-authority-data` section is not present in externally generated `kubeconfig` files. It is added with the `oc config set` command used previously.
  - Verify the `subject` and `issuer` of your customized API server certificate authority by running the following command:

    ```shell
    $ curl --cacert /tmp/caCert.pem https://${fqdn_name}:6443/healthz -v
    ```

    Example output:

    ```
    Server certificate:
      subject: CN=kas-test-cert_server
      start date: Mar 12 11:39:46 2024 GMT
      expire date: Mar 12 11:39:46 2025 GMT
      subjectAltName: host "dhcp-1-235-3.arm.eng.rdu2.redhat.com" matched cert's "dhcp-1-235-3.arm.eng.rdu2.redhat.com"
      issuer: CN=kas-test-cert_ca
    SSL certificate verify ok.
    ```

    Important: Either replace the `certificate-authority-data` in the generated `kubeconfig` file with the new `rootCA`, or add the `certificate-authority-data` to the trust root of the operating system. Do not use both methods.

  - Configure additional CAs in the trust root of the operating system, for example, in the system-wide RHEL truststore on the client system.
    - Updating the certificate bundle with the configuration that contains the CA is recommended.
    - If you do not want to configure your certificate bundles, you can alternatively use the `oc login localhost:8443 --certificate-authority=/path/to/cert.crt` command, but this method is not preferred.
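What trust-root validation entails can be reproduced with `openssl verify` against scratch files. Everything here, including the `demo-ca` name and the `api.demo.local` common name, is a throwaway assumption for illustration:

```shell
# Create a demo CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 365 -subj "/CN=demo-ca" 2>/dev/null

# Create a server certificate request and sign it with the demo CA.
openssl req -newkey rsa:2048 -nodes -keyout /tmp/demo-server.key \
  -out /tmp/demo-server.csr -subj "/CN=api.demo.local" 2>/dev/null
openssl x509 -req -in /tmp/demo-server.csr -CA /tmp/demo-ca.crt \
  -CAkey /tmp/demo-ca.key -CAcreateserial -out /tmp/demo-server.crt \
  -days 365 2>/dev/null

# This is the check the client's trust root must be able to perform.
openssl verify -CAfile /tmp/demo-ca.crt /tmp/demo-server.crt
```

If the CA certificate is missing from the bundle passed to `-CAfile`, the same command reports a verification error, which mirrors what clients see when the trust root is not configured.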
11.1.3. Custom certificates reserved name values
MicroShift dynamically ignores a certificate and logs an error when any of the following problems occur:

- The certificate files do not exist on the disk or are not readable.
- The certificate is not parsable.
- The certificate overrides the internal certificate's IP addresses or DNS names in a `SubjectAlternativeNames` (SAN) field. Do not use a reserved name when configuring SANs.
| Address | Type | Comment |
|---|---|---|
|  | DNS |  |
|  | IP Address |  |
|  | IP Address | Node Network |
|  | IP Address | Service Network |
| `169.254.169.2/29` | IP Address | br-ex Network |
|  | DNS |  |
|  | DNS |  |
|  | DNS |  |
11.1.4. Troubleshooting custom certificates
To troubleshoot the implementation of custom certificates, you can take the following steps.
Procedure
- From MicroShift, ensure that the certificate is served by the `kube-apiserver` and verify that the certificate path is appended to the `--tls-sni-cert-key` flag by running the following command:

  ```shell
  $ journalctl -u microshift -b0 | grep tls-sni-cert-key
  ```

  Example output:

  ```
  Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099 45313 flags.go:64] FLAG: --tls-sni-cert-key="[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key
  ```

- From the client, ensure that the `kube-apiserver` is serving the correct certificate by running the following command:

  ```shell
  $ openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 "Alternative\|CN"
  ```
11.1.5. Cleaning up and recreating the custom certificates
You can stop the MicroShift service, clean up the custom certificates, and then recreate them to ensure that your system uses the most recent certificate data.
Procedure
- Stop the MicroShift services and clean up the custom certificates by running the following command:

  ```shell
  $ sudo microshift-cleanup-data --cert
  ```

  Example output:

  ```
  Stopping MicroShift services
  Removing MicroShift certificates
  MicroShift service was stopped
  Cleanup succeeded
  ```

- Restart the MicroShift services to recreate the custom certificates by running the following command:

  ```shell
  $ sudo systemctl start microshift
  ```
11.2. Configuring TLS security profiles
Use transport layer security (TLS) protocols to help prevent known insecure protocols, ciphers, or algorithms from accessing the applications you run on MicroShift.
11.2.1. Using TLS with MicroShift
Transport layer security (TLS) profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. Using TLS helps to ensure that MicroShift applications use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. You can use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift.
MicroShift API server cipher suites apply automatically to the following internal control plane components:
- API server
- Kubelet
- Kube controller manager
- Kube scheduler
- etcd
- Route controller manager
The API server uses the configured minimum TLS version and the associated cipher suites. If you leave the cipher suites parameter empty, the defaults for the configured minimum version are used automatically.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- `TLS_AES_128_GCM_SHA256`
- `TLS_AES_256_GCM_SHA384`
- `TLS_CHACHA20_POLY1305_SHA256`
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`
- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- `TLS_AES_128_GCM_SHA256`
- `TLS_AES_256_GCM_SHA384`
- `TLS_CHACHA20_POLY1305_SHA256`
11.2.2. Configuring TLS for MicroShift
You can choose to use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift for system hardening.
Prerequisites
- You have access to the node as a root user.
- MicroShift has either not started for the first time, or is stopped.
- The OpenShift CLI (`oc`) is installed.
- The certificate authority (CA) has issued the custom certificates.
Procedure
- Make a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, renaming it `config.yaml`. Keep the new MicroShift `config.yaml` in the `/etc/microshift/` directory. Your `config.yaml` file is read every time the MicroShift service starts.

  Note: After you create it, the `config.yaml` file takes precedence over built-in settings.

- Optional: Use a configuration snippet if you are using an existing MicroShift YAML. See "Using configuration snippets" in the Additional resources section for more information.
- Replace the default values in the `tls` section of the MicroShift YAML with your valid values.

  Example TLS 1.2 configuration:

  ```yaml
  apiServer:
  # ...
    tls:
      cipherSuites:
      - <cipher_suite_1>
      - ...
      minVersion: VersionTLS12
  # ...
  ```

  where:

  - `apiServer.tls.cipherSuites`: Defaults to the suites of the configured `minVersion`. If `minVersion` is not configured, the default value is TLS 1.2. You can specify the cipher suites you want to use from the list of supported cipher suites. All clients connecting to the API server must support the configured cipher suites or the connections fail during the TLS handshake phase. Be sure to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts.
  - `apiServer.tls.minVersion`: Specify `VersionTLS12` or `VersionTLS13`.

  Important: When you choose TLS 1.3 as the minimum TLS version, only the default MicroShift cipher suites can be used. Additional cipher suites are not configurable. If other cipher suites to use with TLS 1.3 are configured, those suites are ignored and overwritten by the MicroShift defaults.
- Complete any other additional configurations that you require, then restart MicroShift by running the following command:

  ```shell
  $ sudo systemctl restart microshift
  ```
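The procedure above can be sketched end to end by writing the `tls` section to a scratch file first and inspecting it before copying it into `/etc/microshift/config.yaml`. The two cipher suites chosen here are examples from the supported TLS 1.2 list, not a recommendation:

```shell
# Write a candidate tls section to a scratch file for review.
cat > /tmp/microshift-tls-demo.yaml <<'EOF'
apiServer:
  tls:
    cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    minVersion: VersionTLS12
EOF

# Quick sanity checks before merging into /etc/microshift/config.yaml.
grep -c 'minVersion: VersionTLS12' /tmp/microshift-tls-demo.yaml
grep -c 'TLS_ECDHE' /tmp/microshift-tls-demo.yaml
```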
11.2.2.1. Default cipher suites
Default cipher suites are included with MicroShift for both TLS 1.2 and TLS 1.3. The cipher suites for TLS 1.3 cannot be customized.
Default cipher suites for TLS 1.2
The following list specifies the default cipher suites for TLS 1.2:
- `TLS_AES_128_GCM_SHA256`
- `TLS_AES_256_GCM_SHA384`
- `TLS_CHACHA20_POLY1305_SHA256`
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`
- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`
Default cipher suites for TLS 1.3
The following list specifies the default cipher suites for TLS 1.3:
- `TLS_AES_128_GCM_SHA256`
- `TLS_AES_256_GCM_SHA384`
- `TLS_CHACHA20_POLY1305_SHA256`
11.3. Configuring audit logging policies
You can control MicroShift audit log file rotation and retention by using configuration values.
11.3.1. About setting limits on audit log files
To prevent logging data from exceeding the storage capacity of far-edge devices, you can set rotation and retention limits for MicroShift audit log files. Configuring the size, number, and age values ensures that host systems maintain the processing space required for node workloads.
The values you set to limit MicroShift audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization.
You can set fields in combination to define a maximum storage limit for retained logs. For example:
- Set both `maxFileSize` and `maxFiles` to create a log storage upper limit.
- Set a `maxFileAge` value to automatically delete files older than the timestamp in the file name, regardless of the `maxFiles` value.
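The combined effect of these fields on disk usage can be estimated with simple arithmetic; the values below mirror the documented defaults:

```shell
# Back-of-the-envelope upper bound for rotated audit log storage.
max_file_size=200   # megabytes per rotated file (maxFileSize)
max_files=10        # rotated files retained (maxFiles)

echo "rotated log storage cap: $((max_file_size * max_files)) MB"
```

The live `audit.log` adds up to one more `maxFileSize` worth of data on top of this cap while it fills.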
11.3.1.1. Default audit log values
MicroShift includes the following default audit log rotation values:
| Audit log parameter | Default setting | Definition |
|---|---|---|
| `maxFileAge` | `0` | How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured. |
| `maxFiles` | `10` | The total number of log files retained. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured. |
| `maxFileSize` | `200` | By default, when the `audit.log` file reaches the `maxFileSize` limit, it is rotated and MicroShift begins writing to a new `audit.log` file. This value is in megabytes and can be configured. |
| `profile` | `Default` | The `Default` profile setting only logs metadata for read and write requests; request bodies are not logged except for OAuth access token requests. This value can be configured. |
The maximum default storage usage for audit log retention is 2000 MB if there are 10 or fewer files.
If you do not specify a value for a field, the default value is used. If you remove a previously set field value, the default value is restored after the next MicroShift service restart.
You must configure audit log retention and rotation in Red Hat Enterprise Linux (RHEL) for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the RHEL /var/log/audit/audit.log file to maintain MicroShift node health.
11.3.2. About audit log policy profiles
To monitor activity and maintain compliance, you can apply audit log profiles that define the level of detail recorded for API server requests. While more comprehensive profiles provide request bodies for troubleshooting, they also increase resource overhead on the host system.
Audit log profiles define how to log requests that come to the OpenShift API server and the Kubernetes API server.
MicroShift supports the following predefined audit policy profiles:
| Profile | Description |
|---|---|
| `Default` | Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. |
| `WriteRequestBodies` | In addition to logging metadata for all requests, logs request bodies for every write request to the API servers (`create`, `update`, `patch`). This profile has more resource overhead than the `Default` profile. |
| `AllRequestBodies` | In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers (`get`, `list`, `create`, `update`, `patch`). This profile has the most resource overhead. |
| `None` | No requests are logged, including OAuth access token requests and OAuth authorize token requests. Warning: Do not disable audit logging by using the `None` profile unless you are fully aware of the risks of not logging data that can be reviewed later. |

- Sensitive resources, such as `Secret`, `Route`, and `OAuthClient` objects, are only logged at the metadata level.
By default, MicroShift uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage such as CPU, memory, and I/O.
11.3.3. Configuring audit log values
To manage disk space, you can customize the audit log retention settings in the MicroShift configuration file. Adjusting values such as file age and size ensures that the system retains critical event data without exhausting local storage.
Procedure
- Make a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, renaming it `config.yaml`. Keep the new MicroShift `config.yaml` you create in the `/etc/microshift/` directory. The new `config.yaml` is read whenever the MicroShift service starts. After you create it, the `config.yaml` file takes precedence over built-in settings.
auditLogsection of the YAML with your desired valid values.Example default
auditLogconfigurationapiServer: # .... auditLog: maxFileAge: 7 maxFileSize: 200 maxFiles: 1 profile: Default # ....where:
apiServer.auditLog.maxFileAge-
Specifies the maximum time in days that log files are kept. Files older than this limit are deleted. In this example, after a log file is more than 7 days old, it is deleted. The files are deleted regardless of whether or not the live log has reached the maximum file size specified in the
maxFileSizefield. File age is determined by the timestamp written in the name of the rotated log file, for example,audit-2024-05-16T17-03-59.994.log. When the value is0, the limit is disabled. apiServer.auditLog.maxFileSize-
The maximum audit log file size in megabytes. In this example, the file is rotated as soon as the live log reaches the 200 MB limit. When the value is set to
0, the limit is disabled. apiServer.auditLog.maxFiles-
The maximum number of rotated audit log files retained. After the limit is reached, the log files are deleted in order from oldest to newest. In this example, the value
1results in only 1 file of sizemaxFileSizebeing retained in addition to the current active log. When the value is set to0, the limit is disabled. apiServer.auditLog.profile-
Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the
Defaultprofile is used.
- Optional: To specify a new directory for logs, you can stop MicroShift, and then move the `/var/log/kube-apiserver` directory to your desired location:

  - Stop MicroShift by running the following command:

    ```shell
    $ sudo systemctl stop microshift
    ```

  - Move the `/var/log/kube-apiserver` directory to your desired location by running the following command:

    ```shell
    $ sudo mv /var/log/kube-apiserver <~/kube-apiserver>
    ```

    Replace `<~/kube-apiserver>` with the path to the directory that you want to use.

  - If you specified a new directory for logs, create a symlink to your custom directory at `/var/log/kube-apiserver` by running the following command:

    ```shell
    $ sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver
    ```

    Replace `<~/kube-apiserver>` with the path to the directory that you want to use. This enables the collection of logs in sos reports.
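The relocation steps above can be exercised safely against scratch paths before touching the real host; on the host, the source directory is `/var/log/kube-apiserver` and the commands run with `sudo` while MicroShift is stopped:

```shell
# Clean up any artifacts from a previous run of this sketch.
rm -rf /tmp/demo-log-root /tmp/demo-custom-logs

mkdir -p /tmp/demo-log-root/kube-apiserver            # stands in for /var/log/kube-apiserver
mv /tmp/demo-log-root/kube-apiserver /tmp/demo-custom-logs        # move logs to the new home
ln -s /tmp/demo-custom-logs /tmp/demo-log-root/kube-apiserver     # symlink back for sos reports

readlink /tmp/demo-log-root/kube-apiserver
```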
- If you are configuring audit log policies on a running instance, restart MicroShift by entering the following command:

  ```shell
  $ sudo systemctl restart microshift
  ```
11.3.4. Troubleshooting audit log configuration
You can use the following steps to troubleshoot MicroShift custom audit log settings and file locations.
Procedure
- Check the current values that are configured by running the following command:

  ```shell
  $ sudo microshift show-config --mode effective
  ```

  Example output:

  ```yaml
  auditLog:
    maxFileSize: 200
    maxFiles: 1
    maxFileAge: 7
    profile: AllRequestBodies
  ```

- Check the `audit.log` file permissions by running the following command:

  ```shell
  $ sudo ls -ltrh /var/log/kube-apiserver/audit.log
  ```

  Example output:

  ```
  -rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log
  ```

- List the contents of the current log directory by running the following command:

  ```shell
  $ sudo ls -ltrh /var/log/kube-apiserver/
  ```

  Example output:

  ```
  total 6.0M
  -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log
  -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log
  -rw-------. 1 root root 962K Mar 12 10:57 audit.log
  ```
11.4. Verifying container signatures for supply chain security
You can enhance supply chain security by using the sigstore signing methodology.
sigstore support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
11.4.1. Understanding how to use sigstore to verify container signatures
To verify image integrity within your MicroShift environment, you can configure the container runtime to use the sigstore signing methodology. This ensures a safer chain of custody by enabling the digital signing and verification of build artifacts.
- For user-specific images, you must update the configuration file to point to the appropriate public key, or disable signature verification for those image sources.
- For disconnected or offline configurations, you must embed the public key contents into the operating system image.
11.4.2. Verifying container signatures using sigstore
To secure your MicroShift environment against unauthorized image deployments, you can configure the container runtime to verify container signatures. By using sigstore with Red Hat public keys, you ensure that only authentic, signed images from trusted registries are used.
You can access Red Hat public keys from the Red Hat Product Security site. You must use release key 3 for verifying MicroShift container signatures.
Prerequisites
- You have admin access to the MicroShift host.
- You installed MicroShift.
Procedure
- Download the relevant public key and save it as `/etc/containers/RedHat_ReleaseKey3.pub` by running the following command:

  ```shell
  $ sudo curl -sL https://access.redhat.com/security/data/63405576.txt -o /etc/containers/RedHat_ReleaseKey3.pub
  ```

- To configure the container runtime to verify images from Red Hat sources, edit the `/etc/containers/policy.json` file to contain the following configuration:

  Example policy JSON file:

  ```json
  {
    "default": [
      { "type": "reject" }
    ],
    "transports": {
      "docker": {
        "quay.io/openshift-release-dev": [
          {
            "type": "sigstoreSigned",
            "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
            "signedIdentity": { "type": "matchRepoDigestOrExact" }
          }
        ],
        "registry.redhat.io": [
          {
            "type": "sigstoreSigned",
            "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
            "signedIdentity": { "type": "matchRepoDigestOrExact" }
          }
        ]
      }
    }
  }
  ```

- Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage by editing the `/etc/containers/registries.d/registry.redhat.io.yaml` file to contain the following configuration:

  ```yaml
  docker:
    registry.redhat.io:
      use-sigstore-attachments: true
  ```

- Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage by editing the `/etc/containers/registries.d/quay.io.yaml` file to contain the following configuration:

  ```yaml
  docker:
    quay.io/openshift-release-dev:
      use-sigstore-attachments: true
  ```

- Create user-specific registry configuration files if your use case requires signature verification for those image sources. You can use the example here to start with and add your own requirements.
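A malformed `policy.json` can block image pulls, so it is worth syntax-checking a candidate file before installing it to `/etc/containers/policy.json`. This sketch writes a trimmed example to `/tmp` and uses `python3` purely as a convenient JSON syntax checker; both choices are assumptions for the demonstration:

```shell
# Write a trimmed candidate policy to a scratch path.
cat > /tmp/policy-demo.json <<'EOF'
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.redhat.io": [{
        "type": "sigstoreSigned",
        "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
        "signedIdentity": { "type": "matchRepoDigestOrExact" }
      }]
    }
  }
}
EOF

# json.tool exits non-zero on a syntax error, failing the check loudly.
python3 -m json.tool /tmp/policy-demo.json > /dev/null && echo "policy.json parses cleanly"
```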
Next steps
- If you are using a mirror registry, enable sigstore attachments.
- Otherwise, proceed to wiping the local container storage clean.
11.4.2.1. Enabling sigstore attachments for mirror registries
If you are using mirror registries, you must apply additional configuration to enable sigstore attachments and mirroring by digest.
Prerequisites
- You have admin access to the MicroShift host.
- You completed the steps in "Verifying container signatures using sigstore."
Procedure
- Enable sigstore attachments by creating the `/etc/containers/registries.d/<mirror.registry.local.yaml>` file with the following contents:

  ```yaml
  docker:
    mirror.registry.local:
      use-sigstore-attachments: true
  ```

  Name the `<mirror.registry.local.yaml>` file after your mirror registry URL.

- Enable mirroring by digest by creating the `/etc/containers/registries.conf.d/999-microshift-mirror.conf` file with the following contents:

  ```
  [[registry]]
      prefix = "quay.io/openshift-release-dev"
      location = "mirror.registry.local"
      mirror-by-digest-only = true

  [[registry]]
      prefix = "registry.redhat.io"
      location = "mirror.registry.local"
      mirror-by-digest-only = true
  ```
Next steps
- Wipe the local container storage clean.
11.4.2.2. Wiping local container storage clean
To ensure that container images with sigstore signatures are correctly downloaded and verified, you must clear existing local storage. Removing previous container data prevents configuration conflicts when you update security policies for MicroShift.
Prerequisites
- You have administrator access to the MicroShift host.
- You enabled sigstore on your mirror registries.
Procedure
- Stop the CRI-O container runtime service and MicroShift by running the following command:

  ```shell
  $ sudo systemctl stop crio microshift
  ```

- Wipe the CRI-O container runtime storage clean by running the following command:

  ```shell
  $ sudo crio wipe --force
  ```

- Restart the CRI-O container runtime service and MicroShift by running the following command:

  ```shell
  $ sudo systemctl start crio microshift
  ```
Verification
Verify that all pods are running in a healthy state by entering the following command:
$ oc get pods -A
Example output
NAMESPACE NAME READY STATUS RESTARTS AGE
default i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr 1/1 Running 0 46m
kube-system csi-snapshot-controller-5c6586d546-lprv4 1/1 Running 0 51m
openshift-dns dns-default-45jl7 2/2 Running 0 50m
openshift-dns node-resolver-7wmzf 1/1 Running 0 51m
openshift-ingress router-default-78b86fbf9d-qvj9s 1/1 Running 0 51m
openshift-ovn-kubernetes ovnkube-master-5rfhh 4/4 Running 0 51m
openshift-ovn-kubernetes ovnkube-node-gcnt6 1/1 Running 0 51m
openshift-service-ca service-ca-bf5b7c9f8-pn6rk 1/1 Running 0 51m
openshift-storage topolvm-controller-549f7fbdd5-7vrmv 5/5 Running 0 51m
openshift-storage topolvm-node-rht2m 3/3 Running 0 50m
This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.