Chapter 7. Configuring MicroShift authentication and security


7.1. Configuring custom certificate authorities

Allow and encrypt connections with external clients by replacing the MicroShift default API server certificate with a custom server certificate issued by a certificate authority (CA).

When MicroShift starts, an internal MicroShift cluster certificate authority (CA) issues the default API server certificate. By default, clients outside of the cluster cannot verify the MicroShift-issued API server certificate. You can grant secure access and encrypt connections between the MicroShift API server and external clients. Replace the default certificate with a custom server certificate issued externally by a CA that clients trust.

The following steps illustrate the workflow for customizing the API server certificate configuration in MicroShift:

  1. Copy the certificates and keys to the preferred directory on the host operating system. Ensure that the files are accessible only with root access, as shown in the sketch after this workflow.
  2. Update the MicroShift configuration for each custom CA by specifying the certificate names and new fully qualified domain name (FQDN) in the MicroShift /etc/microshift/config.yaml configuration file.

    Each certificate configuration can contain the following values:

    • The certificate file location is a required value.
    • A single common name containing the API server DNS and IP address or IP address range.

      Tip

      In most cases, MicroShift generates a new kubeconfig file for your custom CA that includes the IP address or range that you specify. The exception is when you specify wildcards for the IP address. In this case, MicroShift generates a kubeconfig file with the public IP address of the server. To use wildcards, you must update the kubeconfig file with your specific details.

    • Multiple Subject Alternative Names (SANs) containing the API server DNS and IP addresses or a wildcard certificate.
    • You can list additional DNS names for each certificate.
  3. After the MicroShift service restarts, you must copy the generated kubeconfig files to the client.
  4. Configure additional CAs on the client system. For example, you can update CA bundles in the Red Hat Enterprise Linux (RHEL) truststore.

    Important

    Custom server certificates must be validated against CA data configured in the trust root of the host operating system.

  5. The certificates and keys are read from the specified file location on the host. You can test and validate the configuration from the client.

    • If any validation fails, MicroShift skips the custom configuration and uses the default certificate to start, so that the service continues uninterrupted. MicroShift logs errors when the service starts. Common errors include expired certificates, missing files, or incorrect IP addresses.
  6. External server certificates are not automatically renewed. You must manually rotate your external certificates.
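
For example, a minimal sketch of step 1 in this workflow, assuming the hypothetical directory /etc/pki/microshift-custom/ and hypothetical certificate file names; adjust the paths to your environment:

    # Create a root-only directory for the custom certificate and key
    $ sudo mkdir -p /etc/pki/microshift-custom
    $ sudo cp api_fqdn_1.crt api_fqdn_1.key /etc/pki/microshift-custom/
    # Restrict access to root
    $ sudo chown -R root:root /etc/pki/microshift-custom
    $ sudo chmod 700 /etc/pki/microshift-custom
    $ sudo chmod 600 /etc/pki/microshift-custom/api_fqdn_1.key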

7.1.2. Configuring custom certificate authorities

To configure externally generated certificates and domain names by using custom certificate authorities (CAs), add them to the MicroShift /etc/microshift/config.yaml configuration file. You must also configure the host operating system trust root.

Note

Externally generated kubeconfig files are created in the /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig directory. If you need to use localhost in addition to externally generated configurations, retain the original kubeconfig file in its default location. The localhost kubeconfig file uses the self-signed certificate authority.
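
For example, you can list the generated kubeconfig files on the host; the <hostname> directory, and a directory for each additional configured name, typically appear here:

    $ sudo ls -R /var/lib/microshift/resources/kubeadmin/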

Prerequisites

  • The OpenShift CLI (oc) is installed.
  • You have access to the cluster as a user with the cluster administration role.
  • The certificate authority has issued the custom certificates.
  • A MicroShift /etc/microshift/config.yaml configuration file exists.

Procedure

  1. Copy the custom certificates you want to add to the trust root of the MicroShift host. Ensure that the certificate and private keys are only accessible to MicroShift.
  2. For each custom CA that you need, add a namedCertificates entry to the apiServer section of the /etc/microshift/config.yaml MicroShift configuration file, as shown in the following example:

    apiServer:
      namedCertificates:
      - certPath: ~/certs/api_fqdn_1.crt 1
        keyPath: ~/certs/api_fqdn_1.key 2
      - certPath: ~/certs/api_fqdn_2.crt
        keyPath: ~/certs/api_fqdn_2.key
        names: 3
        - api_fqdn_1
        - "*.apps.external.com"

    1 Add the full path to the certificate.
    2 Add the full path to the certificate key.
    3 Optional: Add a list of explicit DNS names. Leading wildcards are allowed. If no names are listed, the implicit names are extracted from the certificates.
  3. Restart MicroShift to apply the certificates by running the following command:

    $ sudo systemctl restart microshift
  4. Wait a few minutes for the system to restart and apply the custom server certificate. New kubeconfig files are generated in the /var/lib/microshift/resources/kubeadmin/ directory.
  5. Copy the kubeconfig files to the client. If you specified wildcards for the IP address, update the kubeconfig to remove the public IP address of the server and replace that IP address with the specific wildcard range you want to use.
  6. From the client, use the following steps:

    1. Specify the kubeconfig to use by running the following command:

      $ export KUBECONFIG=~/custom-kubeconfigs/kubeconfig 1

      1 Use the location of the copied kubeconfig file as the path.
    2. Check that the certificates are applied by using the following command:

      $ oc --certificate-authority ~/certs/ca.ca get node

      Example output

      NAME                             STATUS   ROLES                         AGE   VERSION
      dhcp-1-235-195.arm.example.com   Ready    control-plane,master,worker   76m   v1.32.3

    3. Add the new CA file to the $KUBECONFIG environment variable by running the following command:

      $ oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt
    4. Verify that the new kubeconfig file contains the new CA by running the following command:

      $ oc config view --flatten

      Example externally generated kubeconfig file

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority: /tmp/certificate-authority-data-new.crt 1
          server: https://api.ci-ln-k0gim2b-76ef8.aws-2.ci.openshift.org:6443
        name: ci-ln-k0gim2b-76ef8
      contexts:
      - context:
          cluster: ci-ln-k0gim2b-76ef8
          user:
        name:
      current-context:
      kind: Config
      preferences: {}

      1 The certificate-authority-data section is not present in externally generated kubeconfig files. It is added with the oc config set command used previously.
    5. Verify the subject and issuer of your customized API server certificate authority by running the following command:

      $ curl --cacert /tmp/caCert.pem https://${fqdn_name}:6443/healthz -v

      Example output

      Server certificate:
        subject: CN=kas-test-cert_server
        start date: Mar 12 11:39:46 2024 GMT
        expire date: Mar 12 11:39:46 2025 GMT
        subjectAltName: host "dhcp-1-235-3.arm.eng.rdu2.redhat.com" matched cert's "dhcp-1-235-3.arm.eng.rdu2.redhat.com"
        issuer: CN=kas-test-cert_ca
        SSL certificate verify ok.

      Important

      Either replace the certificate-authority-data in the generated kubeconfig file with the new rootCA or add the certificate-authority-data to the trust root of the operating system. Do not use both methods.

    6. Configure additional CAs in the trust root of the operating system, for example, in the system-wide RHEL truststore on the client system. A minimal sketch follows this procedure.

      • Updating the certificate bundle with the configuration that contains the CA is recommended.
      • If you do not want to configure your certificate bundles, you can alternatively use the oc login localhost:8443 --certificate-authority=/path/to/cert.crt command, but this method is not preferred.
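
For example, on a RHEL client you can add the custom CA certificate to the system-wide truststore by using the standard update-ca-trust workflow. This is a minimal sketch; the file names are illustrative:

    # Copy the custom CA certificate into the trust anchors directory
    $ sudo cp ~/certs/ca.ca /etc/pki/ca-trust/source/anchors/microshift-custom-ca.crt
    # Rebuild the consolidated system truststore
    $ sudo update-ca-trust extract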

7.1.3. Custom certificates reserved name values

The following certificate problems cause MicroShift to dynamically ignore the certificate and log an error:

  • The certificate files do not exist on the disk or are not readable.
  • The certificate is not parsable.
  • The certificate overrides the IP addresses or DNS names of the internal certificates in a SubjectAlternativeNames (SAN) field. Do not use a reserved name when configuring SANs.
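
Before you configure a certificate, you can inspect its Subject Alternative Name entries to confirm that none of the reserved names in the following table are present. A minimal sketch, assuming the certificate is at ~/certs/api_fqdn_1.crt:

    $ openssl x509 -in ~/certs/api_fqdn_1.crt -noout -text | grep -A1 "Subject Alternative Name"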
Table 7.1. Reserved name values

Address                      Type        Comment
localhost                    DNS
127.0.0.1                    IP address
10.42.0.0                    IP address  Cluster network
10.43.0.0/16,10.44.0.0/16    IP address  Service network
169.254.169.2/29             IP address  br-ex network
kubernetes.default.svc       DNS
openshift.default.svc        DNS
svc.cluster.local            DNS

7.1.4. Troubleshooting custom certificates

To troubleshoot the implementation of custom certificates, you can take the following steps.

Procedure

  1. From MicroShift, ensure that the certificate is served by the kube-apiserver and verify that the certificate path is appended to the --tls-sni-cert-key flag by running the following command:

    $ journalctl -u microshift -b0 | grep tls-sni-cert-key

    Example output

    Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099   45313 flags.go:64] FLAG: --tls-sni-cert-key="[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key

  2. From the client, ensure that the kube-apiserver is serving the correct certificate by running the following command:

    $ openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 "Alternative\|CN"

To stop the MicroShift services, clean up the custom certificates, and recreate them, use the following steps.

Procedure

  1. Stop the MicroShift services and clean up the custom certificates by running the following command:

    $ sudo microshift-cleanup-data --cert

    Example output

    Stopping MicroShift services
    Removing MicroShift certificates
    MicroShift service was stopped
    Cleanup succeeded

  2. Restart the MicroShift services to recreate the custom certificates by running the following command:

    $ sudo systemctl start microshift

7.2. Configuring TLS security profiles

Use transport layer security (TLS) protocols to help prevent known insecure protocols, ciphers, or algorithms from accessing the applications you run on MicroShift.

7.2.1. Using TLS with MicroShift

Transport layer security (TLS) profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. Using TLS helps to ensure that MicroShift applications use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. You can use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift.

MicroShift API server cipher suites apply automatically to the following internal control plane components:

  • API server
  • Kubelet
  • Kube controller manager
  • Kube scheduler
  • etcd
  • Route controller manager

The API server uses the configured minimum TLS version and the associated cipher suites. If you leave the cipher suites parameter empty, the defaults for the configured minimum version are used automatically.
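
For example, a minimal sketch of an apiServer configuration that sets TLS 1.3 as the minimum version and omits cipherSuites so that the TLS 1.3 defaults apply:

    apiServer:
      tls:
        minVersion: VersionTLS13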

7.2.2. Configuring TLS for MicroShift

You can choose to use either the TLS 1.2 or TLS 1.3 security profiles with MicroShift for system hardening.

Prerequisites

  • You have access to the cluster as a root user.
  • MicroShift either has not started for the first time or is stopped.
  • The OpenShift CLI (oc) is installed.
  • The certificate authority (CA) has issued the custom certificates.

Procedure

  1. Make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml.
  2. Keep the new MicroShift config.yaml in the /etc/microshift/ directory. Your config.yaml file is read every time the MicroShift service starts.

    Note

    After you create it, the config.yaml file takes precedence over built-in settings.

  3. Optional: Use a configuration snippet if you are using an existing MicroShift YAML. See "Using configuration snippets" in the Additional resources section for more information.
  4. Replace the default values in the tls section of the MicroShift YAML with your valid values.

    Example TLS 1.2 configuration

    apiServer:
    # ...
      tls:
        cipherSuites: 1
        - <cipher_suite_1> 2
        - ...
        minVersion: VersionTLS12 3
    # ...

    1 Defaults to the suites of the configured minVersion. If minVersion is not configured, the default value is TLS 1.2.
    2 Specify the cipher suites that you want to use from the list of supported cipher suites. If you do not configure this list, all of the supported cipher suites are used. All clients connecting to the API server must support the configured cipher suites or the connections fail during the TLS handshake phase. Be sure to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts.
    3 Specify VersionTLS12 or VersionTLS13.
    Important

    When you choose TLS 1.3 as the minimum TLS version, only the default MicroShift cipher suites can be used. Additional cipher suites are not configurable. If other cipher suites to use with TLS 1.3 are configured, those suites are ignored and overwritten by the MicroShift defaults.

  5. Complete any other additional configurations that you require, then restart MicroShift by running the following command:

    $ sudo systemctl restart microshift
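
Optionally, you can probe the API server from a client to confirm that the minimum TLS version is enforced. A minimal sketch, assuming minVersion: VersionTLS13 is configured; substitute your API server host name. The first handshake is expected to be rejected and the second to succeed:

    # TLS 1.2 handshake is expected to fail when the minimum version is TLS 1.3
    $ openssl s_client -connect <api_fqdn>:6443 -tls1_2 < /dev/null
    # TLS 1.3 handshake is expected to succeed
    $ openssl s_client -connect <api_fqdn>:6443 -tls1_3 < /dev/null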

7.2.2.1. Default cipher suites

Default cipher suites are included with MicroShift for both TLS 1.2 and TLS 1.3. The cipher suites for TLS 1.3 cannot be customized.

Default cipher suites for TLS 1.2

The following list specifies the default cipher suites for TLS 1.2:

  • TLS_AES_128_GCM_SHA256
  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

Default cipher suites for TLS 1.3

The following list specifies the default cipher suites for TLS 1.3:

  • TLS_AES_128_GCM_SHA256
  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256

7.3. Configuring audit logging policies

You can control MicroShift audit log file rotation and retention by using configuration values.

7.3.1. About setting limits on audit log files

Controlling the rotation and retention of the MicroShift audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or cluster workloads, potentially causing the device to stop working. Setting audit log policies can help ensure that critical processing space is continually available.

The values you set to limit MicroShift audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization.

You can set fields in combination to define a maximum storage limit for retained logs. For example:

  • Set both maxFileSize and maxFiles to create a log storage upper limit, as shown in the sketch after this list.
  • Set a maxFileAge value to automatically delete files older than the timestamp in the file name, regardless of the maxFiles value.
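
The following minimal sketch combines these fields; the values are illustrative and cap retained rotated logs at roughly 1000 MB (10 files of 100 MB each) in addition to the active audit.log, with age-based deletion disabled:

    apiServer:
      auditLog:
        maxFileSize: 100
        maxFiles: 10
        maxFileAge: 0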

7.3.1.1. Default audit log values

MicroShift includes the following default audit log rotation values:

Table 7.2. MicroShift default audit log values

  • maxFileAge (default: 0)
    How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured.
  • maxFiles (default: 10)
    The total number of log files retained. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured.
  • maxFileSize (default: 200)
    By default, when the audit.log file reaches the maxFileSize limit, the audit.log file is rotated and MicroShift begins writing to a new audit.log file. This value is in megabytes and can be configured.
  • profile (default: Default)
    The Default profile setting only logs metadata for read and write requests; request bodies are not logged except for OAuth access token requests. If you do not specify this field, the Default profile is used.

The maximum default storage usage for audit log retention is 2000 MB if there are 10 or fewer files.

If you do not specify a value for a field, the default value is used. If you remove a previously set field value, the default value is restored after the next MicroShift service restart.

Important

You must configure audit log retention and rotation in Red Hat Enterprise Linux (RHEL) for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the RHEL /var/log/audit/audit.log file to maintain MicroShift cluster health.
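
For example, a minimal sketch of RHEL audit daemon settings in /etc/audit/auditd.conf that rotate /var/log/audit/audit.log at 100 MB and keep 5 rotated files; these values are illustrative and are separate from the MicroShift auditLog settings:

    # /etc/audit/auditd.conf (excerpt)
    max_log_file = 100
    num_logs = 5
    max_log_file_action = ROTATE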

7.3.2. About audit log policy profiles

Audit log profiles define how to log requests that come to the OpenShift API server and the Kubernetes API server.

MicroShift supports the following predefined audit policy profiles:

  • Default
    Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy.
  • WriteRequestBodies
    In addition to logging metadata for all requests, logs request bodies for every write request to the API servers (create, update, patch, delete, deletecollection). This profile has more resource overhead than the Default profile. [1]
  • AllRequestBodies
    In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers (get, list, create, update, patch). This profile has the most resource overhead. [1]
  • None
    No requests are logged, including OAuth access token requests and OAuth authorize token requests.

Warning

Do not disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue to troubleshoot properly.

  1. Sensitive resources, such as Secret, Route, and OAuthClient objects, are only logged at the metadata level.

By default, MicroShift uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage such as CPU, memory, and I/O.
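
For example, a minimal sketch that selects the WriteRequestBodies profile; the full configuration procedure follows in the next section:

    apiServer:
      auditLog:
        profile: WriteRequestBodies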

7.3.3. Configuring audit log values

You can configure audit log settings by using the MicroShift service configuration file.

Procedure

  1. Make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml. Keep the new MicroShift config.yaml you create in the /etc/microshift/ directory. The new config.yaml is read whenever the MicroShift service starts. After you create it, the config.yaml file takes precedence over built-in settings.
  2. Replace the default values in the auditLog section of the YAML with your desired valid values.

    Example default auditLog configuration

    apiServer:
    # ....
      auditLog:
        maxFileAge: 7 1
        maxFileSize: 200 2
        maxFiles: 1 3
        profile: Default 4
    # ....

    1 Specifies the maximum time in days that log files are kept. Files older than this limit are deleted. In this example, after a log file is more than 7 days old, it is deleted. The files are deleted regardless of whether or not the live log has reached the maximum file size specified in the maxFileSize field. File age is determined by the timestamp written in the name of the rotated log file, for example, audit-2024-05-16T17-03-59.994.log. When the value is 0, the limit is disabled.
    2 The maximum audit log file size in megabytes. In this example, the file is rotated as soon as the live log reaches the 200 MB limit. When the value is set to 0, the limit is disabled.
    3 The maximum number of rotated audit log files retained. After the limit is reached, the log files are deleted in order from oldest to newest. In this example, the value 1 results in only 1 file of size maxFileSize being retained in addition to the current active log. When the value is set to 0, the limit is disabled.
    4 Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the Default profile is used.
  3. Optional: To specify a new directory for logs, you can stop MicroShift, and then move the /var/log/kube-apiserver directory to your desired location:

    1. Stop MicroShift by running the following command:

      $ sudo systemctl stop microshift
    2. Move the /var/log/kube-apiserver directory to your desired location by running the following command:

      $ sudo mv /var/log/kube-apiserver <~/kube-apiserver> 1

      1 Replace <~/kube-apiserver> with the path to the directory that you want to use.
    3. If you specified a new directory for logs, create a symlink to your custom directory at /var/log/kube-apiserver by running the following command:

      $ sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver 1

      1 Replace <~/kube-apiserver> with the path to the directory that you want to use. This enables the collection of logs in sos reports.
  4. If you are configuring audit log policies on a running instance, restart MicroShift by entering the following command:

    $ sudo systemctl restart microshift

7.3.4. Troubleshooting audit log configuration

Use the following steps to troubleshoot custom audit log settings and file locations.

Procedure

  • Check the current values that are configured by running the following command:

    $ sudo microshift show-config --mode effective

    Example output

    auditLog:
        maxFileSize: 200
        maxFiles: 1
        maxFileAge: 7
        profile: AllRequestBodies

  • Check the audit.log file permissions by running the following command:

    $ sudo ls -ltrh /var/log/kube-apiserver/audit.log

    Example output

    -rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log

  • List the contents of the current log directory by running the following command:

    $ sudo ls -ltrh /var/log/kube-apiserver/

    Example output

    total 6.0M
    -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log
    -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log
    -rw-------. 1 root root 962K Mar 12 10:57 audit.log

7.4. Verifying container image signatures

You can enhance supply chain security by using the sigstore signing methodology.

Important

sigstore support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can configure the MicroShift container runtime to verify image integrity by using the sigstore signing methodology. With the sigstore project, developers can digitally sign what they build, creating a safer chain of custody that traces software back to the source. Administrators can then verify signatures and monitor workflows at scale. By using sigstore, you can store signatures in the same registry as the build images.

  • For user-specific images, you must update the configuration file to point to the appropriate public key, or disable signature verification for those image sources.
Important

For disconnected or offline configurations, you must embed the public key contents into the operating system image.

Verify container signatures for MicroShift by configuring the container runtime to use sigstore. Container signature verification uses the public key from the Red Hat key pair that signed the images. To use sigstore, edit the default /etc/containers/policy.json file that is installed as part of the container runtime package.

You must use the Red Hat release key 3 to verify MicroShift container signatures. The following procedure downloads this public key from the Red Hat Customer Portal.

Prerequisites

  • You have admin access to the MicroShift host.
  • You installed MicroShift.

Procedure

  1. Download the relevant public key and save it as /etc/containers/RedHat_ReleaseKey3.pub by running the following command:

    $ sudo curl -sL https://access.redhat.com/security/data/63405576.txt -o /etc/containers/RedHat_ReleaseKey3.pub
  2. To configure the container runtime to verify images from Red Hat sources, edit the /etc/containers/policy.json file to contain the following configuration:

    Example policy JSON file

    {
        "default": [
            {
                "type": "reject"
            }
        ],
        "transports": {
            "docker": {
                "quay.io/openshift-release-dev": [{
                    "type": "sigstoreSigned",
                    "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
                    "signedIdentity": {
                        "type": "matchRepoDigestOrExact"
                    }
                }],
                "registry.redhat.io": [{
                    "type": "sigstoreSigned",
                    "keyPath": "/etc/containers/RedHat_ReleaseKey3.pub",
                    "signedIdentity": {
                        "type": "matchRepoDigestOrExact"
                    }
                }]
            }
        }
    }

  3. Configure Red Hat remote registries to use sigstore attachments when pulling images to the local storage by editing the /etc/containers/registries.d/registry.redhat.io.yaml file to contain the following configuration:

    $ cat /etc/containers/registries.d/registry.redhat.io.yaml
    docker:
      registry.redhat.io:
        use-sigstore-attachments: true
  4. Configure the quay.io/openshift-release-dev registry to use sigstore attachments when pulling images to the local storage by editing the /etc/containers/registries.d/quay.io.yaml file to contain the following configuration:

    $ cat /etc/containers/registries.d/quay.io.yaml
    docker:
      quay.io/openshift-release-dev:
        use-sigstore-attachments: true
  5. Create user-specific registry configuration files if your use case requires signature verification for those image sources. You can start from the sketch after this procedure and add your own requirements.
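
The following is a minimal sketch of a user-specific entry that you might add under the docker transport in /etc/containers/policy.json; the registry name example.registry.io/myorg and the key path /etc/containers/my-org-key.pub are hypothetical placeholders:

    "example.registry.io/myorg": [{
        "type": "sigstoreSigned",
        "keyPath": "/etc/containers/my-org-key.pub",
        "signedIdentity": {
            "type": "matchRepoDigestOrExact"
        }
    }]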

Next steps

  1. If you are using a mirror registry, enable sigstore attachments.
  2. Otherwise, proceed to wiping the local container storage clean.

If you are using mirror registries, you must apply additional configuration to enable sigstore attachments and mirroring by digest.

Prerequisites

  • You have admin access to the MicroShift host.
  • You completed the steps in "Verifying container signatures using sigstore."

Procedure

  1. Enable sigstore attachments by creating the /etc/containers/registries.d/<mirror.registry.local.yaml> file:

    $ cat /etc/containers/registries.d/<mirror.registry.local.yaml> 1
    docker:
      mirror.registry.local:
        use-sigstore-attachments: true

    1 Name the <mirror.registry.local.yaml> file after your mirror registry URL.
  2. Enable mirroring by digest by creating the /etc/containers/registries.conf.d/999-microshift-mirror.conf with the following contents:

    $ cat /etc/containers/registries.conf.d/999-microshift-mirror.conf
    [[registry]]
        prefix = "quay.io/openshift-release-dev"
        location = "mirror.registry.local"
        mirror-by-digest-only = true
    
    [[registry]]
        prefix = "registry.redhat.io"
        location = "mirror.registry.local"
        mirror-by-digest-only = true

Next steps

  1. Wipe the local container storage clean.

7.4.2.2. Wiping local container storage clean

When you apply the configuration to an existing system, you must wipe the local container storage clean. Cleaning the container storage ensures that container images with signatures are properly downloaded.

Prerequisites

  • You have administrator access to the MicroShift host.
  • You enabled sigstore on your mirror registries.

Procedure

  1. Stop the CRI-O container runtime service and MicroShift by running the following command:

    $ sudo systemctl stop crio microshift
  2. Wipe the CRI-O container runtime storage clean by running the following command:

    $ sudo crio wipe --force
  3. Restart the CRI-O container runtime service and MicroShift by running the following command:

    $ sudo systemctl start crio microshift

Verification

Verify that all pods are running in a healthy state by entering the following command:

$ oc get pods -A

Example output

NAMESPACE                   NAME                                                     READY   STATUS   RESTARTS  AGE
default                     i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr  1/1     Running  0		    46m
kube-system                 csi-snapshot-controller-5c6586d546-lprv4                 1/1     Running  0		    51m
openshift-dns               dns-default-45jl7                                        2/2     Running  0		    50m
openshift-dns               node-resolver-7wmzf                                      1/1     Running  0		    51m
openshift-ingress           router-default-78b86fbf9d-qvj9s                          1/1     Running  0		    51m
openshift-ovn-kubernetes    ovnkube-master-5rfhh                                     4/4     Running  0		    51m
openshift-ovn-kubernetes    ovnkube-node-gcnt6                                       1/1     Running  0		    51m
openshift-service-ca        service-ca-bf5b7c9f8-pn6rk                               1/1     Running  0		    51m
openshift-storage           topolvm-controller-549f7fbdd5-7vrmv                      5/5     Running  0		    51m
openshift-storage           topolvm-node-rht2m                                       3/3     Running  0		    50m

Note

This example output shows basic MicroShift. If you have installed optional RPMs, the status of pods running those services is also expected to be shown in your output.
