Troubleshooting


Red Hat build of MicroShift 4.17

Troubleshooting common issues

Red Hat OpenShift Documentation Team

Abstract

Information about troubleshooting common Red Hat build of MicroShift issues.

Chapter 1. Checking version information

To begin troubleshooting, you must know which version of Red Hat build of MicroShift you have installed.

1.1. Checking the MicroShift version using the CLI

To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the command-line interface (CLI).

Procedure

  • Run the following command to check the version information:

    $ microshift version

    Example output

    MicroShift Version: 4.17-0.microshift-e6980e25
    Base OCP Version: 4.17

1.2. Checking the MicroShift version using the API

To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the API.

Procedure

  • To get the version number using the OpenShift CLI (oc), view the kube-public/microshift-version config map by running the following command:

    $ oc get configmap -n kube-public microshift-version -o yaml

    Example output

    apiVersion: v1
    data:
      major: "4"
      minor: "20"
      version: 4.20.0-0.microshift-fa441af87431
    kind: ConfigMap
    metadata:
      creationTimestamp: "2025-11-03T21:06:11Z"
      name: microshift-version
      namespace: kube-public
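
    If you only need the version string itself, you can extract it from the same config map with a jsonpath query, for example:

    $ oc get configmap -n kube-public microshift-version -o jsonpath='{.data.version}'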

1.3. Checking the etcd version

You can get the version information for the etcd database included with your MicroShift by using one or both of the following methods, depending on the level of information that you need.

Procedure

  • To display the base database version information, run the following command:

    $ microshift-etcd version

    Example output

    microshift-etcd Version: 4.20.0
    Base etcd Version: 3.5.13

  • To display the full database version information, run the following command:

    $ microshift-etcd version -o json

    Example output

    {
      "major": "4",
      "minor": "20",
      "gitVersion": "4.20.0",
      "gitCommit": "140777711962eb4e0b765c39dfd325fb0abb3622",
      "gitTreeState": "clean",
      "buildDate": "2025-11-03T16:37:53Z",
      "goVersion": "go1.21.9"
      "compiler": "gc",
      "platform": "linux/amd64",
      "patch": "",
      "etcdVersion": "3.5.13"
    }

Chapter 2. Troubleshooting a node

To begin troubleshooting a MicroShift node, first access the node status.

2.1. Checking the status of a node

You can check the status of a MicroShift node or see active pods. You can choose to run any or all of the following commands to help you get the information you need to troubleshoot the node.

Procedure

  • Check the system status, which returns the node status, by running the following command:

    $ sudo systemctl status microshift

    If MicroShift fails to start, this command returns the logs from the previous run.

    Example healthy output

    ● microshift.service - MicroShift
         Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; preset: disabled)
         Active: active (running) since <day> <date> 12:39:06 UTC; 47min ago
       Main PID: 20926 (microshift)
          Tasks: 14 (limit: 48063)
         Memory: 542.9M
            CPU: 2min 41.185s
         CGroup: /system.slice/microshift.service
                 └─20926 microshift run
    
    <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876001   20926 controll>
    <Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876574   20926 controll>
    # ...

  • Optional: Get comprehensive logs by running the following command:

    $ sudo journalctl -u microshift
    Note

    The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size, for example with a journald drop-in file as shown in the following sketch.
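
    A minimal sketch of enabling persistent journal storage with a size cap follows; the drop-in file name and the 1G limit are example values, not required settings:

    $ sudo mkdir -p /etc/systemd/journald.conf.d

    Example /etc/systemd/journald.conf.d/microshift.conf drop-in

    [Journal]
    Storage=persistent
    SystemMaxUse=1G

    $ sudo systemctl restart systemd-journald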

  • Optional: If MicroShift is running, check the status of active pods by entering the following command:

    $ oc get pods -A

    Example output

    NAMESPACE                   NAME                                                     READY   STATUS   RESTARTS  AGE
    default                     i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr  1/1     Running  0		    46m
    kube-system                 csi-snapshot-controller-5c6586d546-lprv4                 1/1     Running  0		    51m
    kube-system                 csi-snapshot-webhook-6bf8ddc7f5-kz6k9                    1/1     Running  0		    51m
    openshift-dns               dns-default-45jl7                                        2/2     Running  0		    50m
    openshift-dns               node-resolver-7wmzf                                      1/1     Running  0		    51m
    openshift-ingress           router-default-78b86fbf9d-qvj9s                          1/1     Running  0		    51m
    openshift-ovn-kubernetes    ovnkube-master-5rfhh                                     4/4     Running  0		    51m
    openshift-ovn-kubernetes    ovnkube-node-gcnt6                                       1/1     Running  0		    51m
    openshift-service-ca        service-ca-bf5b7c9f8-pn6rk                               1/1     Running  0		    51m
    openshift-storage           topolvm-controller-549f7fbdd5-7vrmv                      5/5     Running  0		    51m
    openshift-storage           topolvm-node-rht2m                                       3/3     Running  0		    50m

    Note

    This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also expected in your output.

Chapter 3. Troubleshooting installation issues

To troubleshoot a failed MicroShift installation, you can run an sos report. Use the sos report command to generate a detailed report that shows all of the enabled plugins and data from the different components and applications in a system.

3.1. Gathering data from an sos report

Prerequisites

  • You must have the sos package installed.
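
If the sos package is not already present on the host, you can install it from the standard RHEL repositories, for example:

    $ sudo dnf install -y sos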

Procedure

  1. Log in to the failing host as a root user.
  2. Perform the debug report creation procedure by running the following command:

    $ microshift-sos-report

    Example output

    sosreport (version 4.5.1)
    
    This command will collect diagnostic and configuration information from
    this Red Hat Enterprise Linux system and installed applications.
    
    An archive containing the collected information will be generated in
    /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support
    representative.
    
    Any information provided to Red Hat will be treated in accordance with
    the published support policies at:
    
            Distribution Website : https://www.redhat.com/
            Commercial Support   : https://www.access.redhat.com/
    
    The generated archive may contain data considered sensitive and its
    content should be reviewed by the originating organization before being
    passed to any third party.
    
    No changes will be made to system configuration.
    
    
     Setting up archive ...
     Setting up plugins ...
     Running plugins. Please wait ...
    
      Starting 1/2   microshift      [Running: microshift]
      Starting 2/2   microshift_ovn  [Running: microshift microshift_ovn]
      Finishing plugins              [Running: microshift]
    
      Finished running plugins
    
    Found 1 total reports to obfuscate, processing up to 4 concurrently
    
    sosreport-microshift-rhel9-2023-03-31-axjbyxw :    Beginning obfuscation...
    sosreport-microshift-rhel9-2023-03-31-axjbyxw :    Obfuscation completed
    
    Successfully obfuscated 1 report(s)
    
    Creating compressed archive...
    
    A mapping of obfuscated elements is available at
    	/var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map
    
    Your sosreport has been generated and saved in:
    	/var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz
    
     Size	444.14KiB
     Owner	root
     sha256	922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269
    
    Please send this file to your support representative.

Chapter 4. Troubleshooting data backup and restore

To troubleshoot failed data backups and restorations, check the basics first: for example, user permissions, system health and configuration, and storage capacity.
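
For example, a quick first pass with standard commands can confirm that you have the expected privileges, that the MicroShift service is healthy, and that the data directory has free space:

    $ whoami
    $ sudo systemctl status microshift
    $ df -h /var/lib/microshift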

4.1. Data backup failure

Data backups are automatic on rpm-ostree systems. If you are not using an rpm-ostree system and attempted to create a manual backup, the following reasons can cause the backup to fail:

  • Not waiting several minutes after a system start before stopping MicroShift. The system must complete health checks and any other background processes before a backup can succeed.
  • If MicroShift stopped running because of an error, you cannot perform a backup of the data.

    • Make sure the system is healthy.
    • Stop MicroShift while the system is in a healthy state before attempting a backup.
  • If you do not have enough storage for the data, the backup fails. Ensure that you have enough storage for MicroShift data.
  • If you do not have the required user permissions, a backup can fail. Ensure that you have the correct user permissions to create a backup and perform the required configurations. A minimal manual backup sequence is sketched after this list.
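
For example, on a non-rpm-ostree system, a minimal manual backup sequence might look like the following; the target directory name is illustrative:

    $ sudo systemctl stop microshift
    $ sudo microshift backup /var/lib/microshift-backups/<backup_name>
    $ sudo systemctl start microshift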

4.2. Checking backup logs

Backup logs can help you identify where backups are and what processes occurred during manual and automatic backups.

  • Logs print to the terminal console during manual backups.
  • Logs are automatically generated for rpm-ostree system automated backups as part of the MicroShift journal logs.

Procedure

  • Check the logs by running the following command:

    $ sudo journalctl -u microshift

4.3. Data restoration failure

The restoration of data can fail for many reasons, including storage and permission issues. Mismatched data versions can cause failures when MicroShift restarts.

4.3.1. Image-based systems data restore failed

Data restorations are automatic on rpm-ostree systems, but they can fail for reasons that include the following:

  • The only backups that are restored on rpm-ostree systems are backups from the current deployment or a rollback deployment. Backups are not taken on an unhealthy system.

    • Only the latest backups that have corresponding deployments are retained. Outdated backups that do not have a matching deployment are automatically removed.
    • Data is usually not restored from a newer version of MicroShift.
    • Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is older than the version of the MicroShift data you are currently using, the restoration can fail.

4.3.2. RPM-based manual data restore failed

If you are using an RPM system that is not rpm-ostree and tried to restore a manual backup, the following reasons can cause the restoration to fail:

  • If MicroShift stopped running because of an error, you cannot restore data.

    • Make sure the system is healthy.
    • Start it in a healthy state before attempting to restore data.
  • If you do not have enough storage space allocated for the incoming data, the restoration fails.

    • Make sure that your current system storage is configured to accept the restored data.
  • You are attempting to restore data from a newer version of MicroShift.

    • Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is older than the version of the MicroShift data you are attempting to use, the restoration can fail. A minimal manual restore sequence is sketched after this list.
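
For example, a minimal manual restore sequence on an RPM-based system might look like the following; confirm first that the installed MicroShift version is not older than the version that produced the backup, and note that the backup path shown is illustrative:

    $ microshift version
    $ sudo systemctl stop microshift
    $ sudo microshift restore /var/lib/microshift-backups/<backup_name>
    $ sudo systemctl start microshift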

4.4. Storage migration failure

Storage migration failures are typically caused by significant changes in custom resources (CRs) from one MicroShift version to the next.

  • If a storage migration fails, there is usually an unresolvable discrepancy between versions that requires manual review.

Chapter 5. Troubleshooting updates

To troubleshoot MicroShift updates, you can check update paths, review journal and greenboot health check logs, and use other techniques to help you solve update problems.

5.1. Troubleshooting MicroShift updates

In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them.

RPM dependency errors result if a MicroShift update is incompatible with the version of Red Hat Enterprise Linux for Edge (RHEL for Edge) or Red Hat Enterprise Linux (RHEL). Check the following compatibility table:

Red Hat Device Edge release compatibility matrix

Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases for each together as listed in the following table:

RHEL Version(s)    MicroShift Version    Supported MicroShift Version → Version Updates
9.4                4.17                  4.17.1 → 4.17.z
9.4                4.16                  4.16.0 → 4.16.z, 4.16 → 4.17
9.2, 9.3           4.15                  4.15.0 → 4.15.z, 4.15 → 4.16 on RHEL 9.4
9.2, 9.3           4.14                  4.14.0 → 4.14.z, 4.14 → 4.15 or 4.14 → 4.16 on RHEL 9.4

5.1.1. Version compatibility

Check the following update paths:

Red Hat build of MicroShift update paths

  • Generally Available Version 4.17.1 to 4.17.z on RHEL for Edge 9.4
  • Generally Available Version 4.15.0 from RHEL 9.2 to 4.16.0 on RHEL 9.4
  • Generally Available Version 4.14.0 from RHEL 9.2 to 4.15.0 on RHEL 9.4
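
Before planning an update, you can confirm the currently installed MicroShift RPM and the host operating system release by using standard RHEL tooling, for example:

    $ rpm -q microshift
    $ grep -E '^(NAME|VERSION)=' /etc/os-release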

5.1.2. RHEL for Edge update failed

If you updated on an rpm-ostree system, the greenboot health check automatically logs and acts on system health. A system rollback by greenboot can indicate an update failure. In cases where the update failed, but greenboot did not complete a system rollback, you can troubleshoot using the RHEL for Edge documentation linked in the "Additional resources" section.

  • Manually check the greenboot logs to verify system health by running the following command:

    $ sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck

5.1.3. Manual RPM update failed

If you updated by using RPMs on a non-OSTree system, greenboot can indicate an update failure, but the health checks are only informative. Checking the system logs is the next step in troubleshooting a manual RPM update failure. You can use greenboot and the sos report tool to check both the MicroShift update and the host system.

5.2. Checking journal logs after updates

Journal logs can assist in diagnosing MicroShift update failures. The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size.

Procedure

  • Get comprehensive MicroShift journal logs by running the following command:

    $ sudo journalctl -u microshift
  • Check the greenboot journal logs by running the following command:

    $ sudo journalctl -u greenboot-healthcheck
  • Examining the comprehensive logs of a specific boot takes a few steps: first list the boots, then check the logs of the boot that you want, optionally limited to a specific service:

    • List the boots present in the journal logs by running the following command:

      $ sudo journalctl --list-boots

      Example output

      IDX  BOOT ID                          	FIRST ENTRY                 LAST ENTRY
       0   681ece6f5c3047e183e9d43268c5527f 	<Day> <Date> 12:27:58 UTC 	<Day> <Date> 13:39:41 UTC
      #....

    • Check the journal logs for the specific boot you want by running the following command:

      $ sudo journalctl --boot <idx_or_boot_id>

      Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
    • Check the journal logs for the boot of a specific service by running the following command:

      $ sudo journalctl --boot <idx_or_boot_id> -u <service_name>

      Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check, and replace <service_name> with the name of the service that you want to check. A concrete example follows this list.
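
      For example, to view only the MicroShift service logs from the most recent boot (IDX 0), run:

      $ sudo journalctl --boot 0 -u microshift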

5.3. Checking the status of greenboot health checks

Check the status of greenboot health checks before making changes to the system and while troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running. A small polling sketch follows the command list.

Procedure

  • To see a report of health check status, use the following command:

    $ systemctl show --property=SubState --value greenboot-healthcheck.service
    • An output of start means that greenboot checks are still running.
    • An output of exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state.
    • An output of failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system.
  • To see a report showing the numerical exit code of the service where 0 means success and non-zero values mean a failure occurred, use the following command:

    $ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
  • To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS, use the following command:

    $ cat /run/motd.d/boot-status
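
If you are scripting maintenance tasks, a small loop such as the following sketch waits until the health checks are no longer running and then prints the exit status. The 5-second polling interval is an arbitrary choice:

    $ while [ "$(systemctl show --property=SubState --value greenboot-healthcheck.service)" = "start" ]; do sleep 5; done
    $ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service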

Chapter 6. Checking audit logs

You can use audit logs to identify pod security violations.

You can identify pod security admission violations on a workload by viewing the server audit logs. The following procedure shows you how to access the audit logs and parse them to find pod security admission violations in a workload.

Prerequisites

  • You have installed jq.
  • You have root access to the node.

Procedure

  1. To retrieve the node name, run the following command:

    $ oc get node -o jsonpath='{.items[0].metadata.name}'
  2. To view the audit logs, run the following command:

    $ oc adm node-logs <node_name> --path=kube-apiserver/

    Replace <node_name> with the name of the node retrieved from the previous step.

    Example output

    rhel-94.lab.local audit-2024-10-18T18-25-41.663.log
    rhel-94.lab.local audit-2024-10-19T11-21-29.225.log
    rhel-94.lab.local audit-2024-10-20T04-16-09.622.log
    rhel-94.lab.local audit-2024-10-20T21-11-41.163.log
    rhel-94.lab.local audit-2024-10-21T14-06-10.402.log
    rhel-94.lab.local audit-2024-10-22T06-35-10.392.log
    rhel-94.lab.local audit-2024-10-22T23-26-27.667.log
    rhel-94.lab.local audit-2024-10-23T16-52-15.456.log
    rhel-94.lab.local audit-2024-10-24T07-31-55.238.log

  3. To parse the affected audit logs, enter the following command:

    $ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
      | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name + " " + .objectRef.resource' \
      | sort | uniq -c

    Replace <node_name> with the name of the node retrieved in the first step. A variation that prints the violation messages themselves follows this procedure.
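
To see the violation text rather than only a count of affected pods, you can adjust the jq filter. The following sketch keeps the same selection logic and prints the audit-violations annotation for each matching entry:

    $ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
      | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + "/" + .objectRef.name + ": " + .annotations["pod-security.kubernetes.io/audit-violations"]' \
      | sort -u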

Chapter 7. Troubleshooting etcd

To troubleshoot etcd and improve performance, configure the memory allowance for the service.

By default, etcd uses as much memory as necessary to handle the load on the system. In memory-constrained systems, you might need to limit the amount of memory etcd uses.

Procedure

  • Edit the /etc/microshift/config.yaml file to set the memoryLimitMB value.

    etcd:
      memoryLimitMB: 128
    Note

    The minimum required value for memoryLimitMB on MicroShift is 128 MB. Values close to the minimum value are more likely to impact etcd performance. The lower the limit, the longer etcd takes to respond to queries. If the limit is too low or the etcd usage is high, queries time out.

Verification

  1. After modifying the memoryLimitMB value in /etc/microshift/config.yaml, restart MicroShift by running the following command:

    $ sudo systemctl restart microshift
  2. Verify the new memoryLimitMB value is in use by running the following command:

    $ systemctl show --property=MemoryHigh microshift-etcd.scope

Chapter 8. Responses to system configuration changes

MicroShift responds to system configuration changes by restarting after it detects them, including device IP address changes, system clock adjustments, and security certificate aging.

8.1. IP address changes or clock adjustments

MicroShift depends on device IP addresses and system-wide clock settings to remain consistent during its runtime. However, these settings might occasionally change on edge devices.

For example, DHCP or Network Time Protocol (NTP) updates can change times. When these changes occur, some MicroShift components might stop functioning properly. To mitigate this situation, MicroShift monitors the IP address and system time and restarts if either setting changes.

The threshold for clock changes is a time change of greater than 10 seconds in either direction. Smaller drifts from the regular time adjustments performed by the Network Time Protocol (NTP) service do not cause a restart.
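
To confirm that the system clock is kept in sync by an NTP service, and is therefore unlikely to jump past the 10-second threshold, you can check the time synchronization status with standard RHEL tooling; look for "System clock synchronized: yes" and an active NTP service in the output:

    $ timedatectl status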

8.2. Security certificate lifetime

MicroShift certificates are digital certificates that secure communication over protocols such as HTTPS. They fall into two basic categories:

Short-lived certificates
Have a certificate validity of one year. Most server or leaf certificates are short-lived.
Long-lived certificates
Have a certificate validity of 10 years. An example of a long-lived certificate is the client certificate for system:admin user authentication, or the certificate of the signer of the kube-apiserver external serving certificate.

MicroShift restarts automatically in certain cases, depending on certificate age.
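
To check when a particular certificate expires, you can inspect it with openssl. The path below is illustrative; MicroShift typically stores its certificates under the /var/lib/microshift/certs/ directory, but verify the location on your system:

    $ sudo openssl x509 -noout -enddate -in /var/lib/microshift/certs/<path_to_certificate>.crt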

8.3. Certificate rotation

Certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation. This rotation can be an automatic process.

When MicroShift restarts for any reason, certificates that are close to expiring are rotated. A certificate that expires soon, or has already expired, can also cause an automatic MicroShift restart to perform a rotation.

Important

If the rotated certificate is a MicroShift certificate authority (CA), then all of the certificates it signed also rotate. If you created any custom CAs, ensure that you rotate them manually.

8.3.1. Short-term certificate rotation

Short-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.

The following situations describe MicroShift actions during short-term certificate lifetimes:

No rotation
When a short-term certificate is up to 5 months old, no rotation occurs.
Rotation at restart
When a short-term certificate is 5 to 8 months old, it is rotated when MicroShift starts or restarts.
Automatic restart for rotation
When a short-term certificate is more than 8 months old, MicroShift can automatically restart to rotate and apply a new certificate.

8.3.2. Long-term certificate rotation

Long-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.

The following situations describe MicroShift actions during long-term certificate lifetimes:

No rotation
When a long-term certificate is up to 8.5 years old, no rotation occurs.
Rotation at restart
When a long-term certificate is 8.5 to 9 years old, it is rotated when MicroShift starts or restarts.
Automatic restart for rotation
When a long-term certificate is more than 9 years old, MicroShift might automatically restart so that it can rotate and apply a new certificate.

Chapter 9. Cleaning up data with support

MicroShift provides the microshift-cleanup-data script for various troubleshooting tasks, such as deleting all data, certificates, and container images.

Warning

Do not run this script without the guidance of product Support. Contact Support by submitting a support case.

9.1. Data cleanup script overview

You can see the usage and list available options of the microshift-cleanup-data script by running the script without arguments. Running the script without arguments does not delete any data or stop the MicroShift service.

Procedure

  1. See the usage and list the available options of the microshift-cleanup-data script by entering the following command:

    Warning

    Some of the options in the following script operations are destructive and can cause data loss. See the procedure for each argument for its warnings.

    $ microshift-cleanup-data

    Example output

    Stop all MicroShift services, also cleaning their data
    
    Usage: microshift-cleanup-data <--all [--keep-images] | --ovn | --cert>
       --all         Clean all MicroShift and OVN data
       --keep-images Keep container images when cleaning all data
       --ovn         Clean OVN data only
       --cert        Clean certificates only

9.2. Cleaning all data and configuration

You can clean up all the MicroShift data and configuration by running the microshift-cleanup-data script.

When you run the script with the --all argument, you perform the following clean up actions:

  • Stop and disable all MicroShift services
  • Delete all MicroShift pods
  • Delete all container image storage
  • Reset network configuration
  • Delete the /var/lib/microshift data directory
  • Delete OVN-K networking configuration

Prerequisites

  • You are logged into MicroShift as an administrator with root-user access.
  • You have filed a support case.

Procedure

  1. Clean up all the MicroShift data and configuration by running the microshift-cleanup-data script with the --all argument by entering the following command:

    Warning

    This option deletes all MicroShift data and user workloads. Use with caution.

    $ sudo microshift-cleanup-data --all
    Tip

    The script prompts you with a message to confirm the operation. Type 1 or Yes to continue. Any other entries cancel the clean up.

    Example output when you continue the clean up

    DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
    1) Yes
    2) No
    #? 1
    Stopping MicroShift services
    Disabling MicroShift services
    Removing MicroShift pods
    Removing crio image storage
    Deleting the br-int interface
    Killing conmon, pause and OVN processes
    Removing MicroShift configuration
    Removing OVN configuration
    MicroShift service was stopped
    MicroShift service was disabled
    Cleanup succeeded

    Example output when you cancel the clean up

    DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
    1) Yes
    2) No
    #? no
    Aborting cleanup

    Important

    The MicroShift service is stopped and disabled after you run the script.

  2. Restart the MicroShift service by running the following command:

    $ sudo systemctl enable --now microshift

9.3. Cleaning all data while keeping the container images

You can retain the MicroShift container images while cleaning all data by running the microshift-cleanup-data script with the --all and --keep-images arguments.

Keeping the container images helps speed up MicroShift restart after data clean up because the necessary container images are already present locally when you start the service.

When you run the script with the --all and --keep-images arguments, you perform the following clean up actions:

  • Stop and disable all MicroShift services
  • Delete all MicroShift pods
  • Reset network configuration
  • Delete the /var/lib/microshift data directory
  • Delete OVN-K networking configuration
Warning

This option deletes all MicroShift data and user workloads. Use with caution.

Prerequisites

  • You are logged into MicroShift as an administrator with root-user access.
  • You have filed a support case.

Procedure

  1. Clean up all data and user workloads while retaining the MicroShift container images by running the microshift-cleanup-data script with the --all and --keep-images arguments by entering the following command:

    $ sudo microshift-cleanup-data --all --keep-images

    Example output

    DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
    1) Yes
    2) No
    #? Yes
    Stopping MicroShift services
    Disabling MicroShift services
    Removing MicroShift pods
    Deleting the br-int interface
    Killing conmon, pause and OVN processes
    Removing MicroShift configuration
    Removing OVN configuration
    MicroShift service was stopped
    MicroShift service was disabled
    Cleanup succeeded

  2. Verify that the container images are still present by running the following command:

    $ sudo crictl images | awk '{print $1}'

    Example output

    IMAGE
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    quay.io/openshift-release-dev/ocp-v4.0-art-dev
    registry.redhat.io/lvms4/topolvm-rhel9
    registry.redhat.io/openshift4/ose-csi-external-provisioner
    registry.redhat.io/openshift4/ose-csi-external-resizer
    registry.redhat.io/openshift4/ose-csi-livenessprobe
    registry.redhat.io/openshift4/ose-csi-node-driver-registrar
    registry.redhat.io/ubi9

    Important

    The MicroShift service is stopped and disabled after you run the script.

  3. Restart the MicroShift service by running the following command:

    $ sudo systemctl enable --now microshift

9.4. Cleaning the OVN-Kubernetes data

You can clean up the OVN-Kubernetes (OVN-K) data by running the microshift-cleanup-data script. Use the script to reset OVN-K network configurations.

When you run the script with the --ovn argument, you perform the following clean up actions:

  • Stop all MicroShift services
  • Delete all MicroShift pods
  • Delete OVN-K networking configuration

Prerequisites

  • You are logged into MicroShift as an administrator with root-user access.
  • You have filed a support case.

Procedure

  1. Clean up the OVN-K data by running the microshift-cleanup-data script with the --ovn argument by entering the following command:

    $ sudo microshift-cleanup-data --ovn

    Example output

    Stopping MicroShift services
    Removing MicroShift pods
    Killing conmon, pause and OVN processes
    Removing OVN configuration
    MicroShift service was stopped
    Cleanup succeeded

    Important

    The MicroShift service is stopped after you run the script.

  2. Restart the MicroShift service by running the following command:

    $ sudo systemctl start microshift

9.5. Cleaning custom certificates data

You can use the microshift-cleanup-data script to reset MicroShift custom certificates so that they are recreated when the MicroShift service restarts.

When you run the script with the --cert argument, you perform the following clean up actions:

  • Stop all MicroShift services
  • Delete all MicroShift pods
  • Delete all MicroShift certificates

Prerequisites

  • You are logged into MicroShift as an administrator with root-user access.
  • You have filed a support case.

Procedure

  1. Clean up the MicroShift certificates by running the microshift-cleanup-data script with the --cert argument by entering the following command:

    $ sudo microshift-cleanup-data --cert

    Example output

    Stopping MicroShift services
    Removing MicroShift pods
    Removing MicroShift certificates
    MicroShift service was stopped
    Cleanup succeeded

    Important

    The MicroShift service is stopped after you run the script.

  2. Restart the MicroShift service by running the following command:

    $ sudo systemctl start microshift
    Copy to Clipboard Toggle word wrap

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.