Troubleshooting
Troubleshooting common issues
Chapter 1. Checking which version you have installed
To begin troubleshooting, determine which version of Red Hat build of MicroShift you have installed.
1.1. Checking the version using the command-line interface
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the CLI.
Procedure
Run the following command to check the version information:
$ microshift version
Example output
Red Hat build of MicroShift Version: 4.17-0.microshift-e6980e25
Base OCP Version: 4.17
1.2. Checking the MicroShift version using the API
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the API.
Procedure
To get the version number using the OpenShift CLI (oc), view the kube-public/microshift-version config map by running the following command:
$ oc get configmap -n kube-public microshift-version -o yaml
Example output
apiVersion: v1
data:
  major: "4"
  minor: "13"
  version: 4.13.8-0.microshift-fa441af87431
kind: ConfigMap
metadata:
  creationTimestamp: "2023-08-03T21:06:11Z"
  name: microshift-version
  namespace: kube-public
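If you only need the version string rather than the full config map, you can extract the data.version field with a jsonpath query. The following is a minimal sketch using standard oc output options; the output corresponds to the version field shown in the example above:
$ oc get configmap -n kube-public microshift-version -o jsonpath='{.data.version}'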
1.3. Checking the etcd version
You can get the version information for the etcd database included with your MicroShift installation by using one or both of the following methods, depending on the level of information that you need.
Procedure
To display the base database version information, run the following command:
$ microshift-etcd version
Example output
microshift-etcd Version: 4.17.1
Base etcd Version: 3.5.13
To display the full database version information, run the following command:
$ microshift-etcd version -o json
Example output
{
  "major": "4",
  "minor": "16",
  "gitVersion": "4.17.1~rc.1",
  "gitCommit": "140777711962eb4e0b765c39dfd325fb0abb3622",
  "gitTreeState": "clean",
  "buildDate": "2024-05-10T16:37:53Z",
  "goVersion": "go1.21.9",
  "compiler": "gc",
  "platform": "linux/amd64",
  "patch": "",
  "etcdVersion": "3.5.13"
}
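If you only need a single field from the JSON output, you can pipe it through jq, which is also used later in this guide for audit-log parsing. For example, a minimal sketch that prints only the embedded etcd version:
$ microshift-etcd version -o json | jq -r '.etcdVersion'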
Chapter 2. Troubleshooting a cluster
To begin troubleshooting a MicroShift cluster, first assess the cluster status.
2.1. Checking the status of a cluster
You can check the status of a MicroShift cluster or see active pods. The following procedure includes three different commands that you can use to check the cluster status. You can choose to run one, two, or all of the commands to get the information that you need to troubleshoot the cluster.
Procedure
Check the system status, which returns the cluster status, by running the following command:
$ sudo systemctl status microshift
If MicroShift fails to start, this command returns the logs from the previous run.
Example healthy output
● microshift.service - MicroShift
     Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; preset: disabled)
     Active: active (running) since <day> <date> 12:39:06 UTC; 47min ago
   Main PID: 20926 (microshift)
      Tasks: 14 (limit: 48063)
     Memory: 542.9M
        CPU: 2min 41.185s
     CGroup: /system.slice/microshift.service
             └─20926 microshift run

<Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876001 20926 controll>
<Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876574 20926 controll>
# ...
Optional: Get comprehensive logs by running the following command:
$ sudo journalctl -u microshift
Note: The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size.
Optional: If MicroShift is running, check the status of active pods by entering the following command:
$ oc get pods -A
Example output
NAMESPACE                  NAME                                                       READY   STATUS    RESTARTS   AGE
default                    i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr   1/1     Running   0          46m
kube-system                csi-snapshot-controller-5c6586d546-lprv4                   1/1     Running   0          51m
kube-system                csi-snapshot-webhook-6bf8ddc7f5-kz6k9                      1/1     Running   0          51m
openshift-dns              dns-default-45jl7                                          2/2     Running   0          50m
openshift-dns              node-resolver-7wmzf                                        1/1     Running   0          51m
openshift-ingress          router-default-78b86fbf9d-qvj9s                            1/1     Running   0          51m
openshift-ovn-kubernetes   ovnkube-master-5rfhh                                       4/4     Running   0          51m
openshift-ovn-kubernetes   ovnkube-node-gcnt6                                         1/1     Running   0          51m
openshift-service-ca       service-ca-bf5b7c9f8-pn6rk                                 1/1     Running   0          51m
openshift-storage          topolvm-controller-549f7fbdd5-7vrmv                        5/5     Running   0          51m
openshift-storage          topolvm-node-rht2m                                         3/3     Running   0          50m
Note: This example output shows a basic MicroShift installation. If you have installed optional RPMs, the status of pods running those services is also expected to be shown in your output.
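If you routinely check all three items, you can combine them in a small helper script. The following is a minimal sketch, not an official tool; it assumes the same commands shown above are available on the host, and the pod filter simply prints any pod whose STATUS column is not Running or Completed:
#!/usr/bin/env bash
# check-microshift.sh: quick MicroShift health sweep (sketch)
set -euo pipefail

echo "== systemd unit state =="
sudo systemctl is-active microshift || true

echo "== last 20 MicroShift journal lines =="
sudo journalctl -u microshift -n 20 --no-pager

echo "== pods not in Running or Completed state =="
oc get pods -A --no-headers | awk '$4 != "Running" && $4 != "Completed"'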
Chapter 3. Troubleshooting installation issues
To troubleshoot a failed MicroShift installation, you can generate an sos report. Use the sos report command to generate a detailed report that shows all of the enabled plugins and data from the different components and applications in the system.
3.1. Gathering data from an sos report
Prerequisites
- You must have the sos package installed.
Procedure
- Log into the failing host as a root user.
Perform the debug report creation procedure by running the following command:
$ microshift-sos-report
Example output
sosreport (version 4.5.1)

This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications.

An archive containing the collected information will be generated in /var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support representative.

Any information provided to Red Hat will be treated in accordance with the published support policies at:

        Distribution Website : https://www.redhat.com/
        Commercial Support   : https://www.access.redhat.com/

The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party.

No changes will be made to system configuration.

 Setting up archive ...
 Setting up plugins ...
 Running plugins. Please wait ...

  Starting 1/2   microshift      [Running: microshift]
  Starting 2/2   microshift_ovn  [Running: microshift microshift_ovn]
  Finishing plugins              [Running: microshift]
  Finished running plugins

Found 1 total reports to obfuscate, processing up to 4 concurrently

sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation...
sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed
Successfully obfuscated 1 report(s)
Creating compressed archive...

A mapping of obfuscated elements is available at
        /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map

Your sosreport has been generated and saved in:
        /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz

 Size   444.14KiB
 Owner  root
 sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269

Please send this file to your support representative.
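Before sending the archive, you can review its contents locally. The following is a minimal sketch that uses the archive path printed in the example output above; substitute the path printed by your own run:
$ sudo tar -tJf /var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz | head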
Chapter 4. Troubleshooting data backup and restore
To troubleshoot failed data backups and restorations, check the basics first, such as data paths, storage configuration, and storage capacity.
4.1. Backing up data failed
Data backups are automatic on rpm-ostree systems. If you are not using an rpm-ostree system and attempted to create a manual backup, the following reasons can cause the backup to fail:
- Not waiting several minutes after a system start to successfully stop MicroShift. The system must complete health checks and any other background processes before a backup can succeed.
- If MicroShift stopped running because of an error, you cannot perform a backup of the data.
  - Make sure the system is healthy.
  - Stop MicroShift in a healthy state before attempting a backup.
- If you do not have sufficient storage for the data, the backup fails. Ensure that you have enough storage for the MicroShift data; a minimal storage check is sketched after this list.
- If you do not have sufficient permissions, a backup can fail. Ensure that you have the correct user permissions to create a backup and perform the required configurations.
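The following is a minimal sketch of the storage check mentioned above, assuming the default MicroShift data directory of /var/lib/microshift. It shows the free space on the filesystem holding the data and the current size of the data itself:
$ df -h /var/lib/microshift
$ sudo du -sh /var/lib/microshift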
4.2. Backup logs
- Logs print to the terminal console during manual backups.
- Logs are automatically generated for rpm-ostree system automated backups as part of the MicroShift journal logs. You can check the logs by running the following command:
$ sudo journalctl -u microshift
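To narrow the journal output to backup-related entries, you can filter it. This is a simple sketch that assumes the relevant messages contain the word "backup"; adjust the pattern as needed:
$ sudo journalctl -u microshift | grep -i backup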
4.3. Restoring data failed
The restoration of data can fail for many reasons, including storage and permission issues. Mismatched data versions can cause failures when MicroShift restarts.
4.3.1. RPM-OSTree-based systems data restore failed
Data restorations are automatic on rpm-ostree systems, but can fail for reasons such as the following:
- The only backups that are restored on rpm-ostree systems are backups from the current deployment or a rollback deployment. Backups are not taken on an unhealthy system.
- Only the latest backups that have corresponding deployments are retained. Outdated backups that do not have a matching deployment are automatically removed.
- Data is usually not restored from a newer version of MicroShift.
- Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is an older version than the version of the MicroShift data you are currently using, the restoration can fail.
4.3.2. RPM-based manual data restore failed
If you are using an RPM system that is not rpm-ostree and tried to restore a manual backup, the following reasons can cause the restoration to fail:
- If MicroShift stopped running because of an error, you cannot restore data.
  - Make sure the system is healthy.
  - Start it in a healthy state before attempting to restore data.
- If you do not have enough storage space allocated for the incoming data, the restoration fails.
  - Make sure that your current system storage is configured to accept the restored data.
- You are attempting to restore data from a newer version of MicroShift.
  - Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is an older version than the version of the MicroShift data you are attempting to use, the restoration can fail.
4.4. Storage migration failed
Storage migration failures are typically caused by substantial changes in custom resources (CRs) from one MicroShift version to the next. If a storage migration fails, there is usually an unresolvable discrepancy between versions that requires manual review.
Chapter 5. Troubleshooting updates
To troubleshoot MicroShift updates, use the following guide.
5.1. Troubleshooting MicroShift updates
In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them.
5.1.1. Update path is blocked by version incompatibility
RPM dependency errors result if a MicroShift update is incompatible with the version of Red Hat Enterprise Linux for Edge (RHEL for Edge) or Red Hat Enterprise Linux (RHEL).
5.1.1.1. Compatibility table
Check the following compatibility table:
Red Hat Device Edge release compatibility matrix
Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases of each component together, as listed in the following table:
| RHEL Version(s) | MicroShift Version | Supported MicroShift Version → Version Updates |
|---|---|---|
| 9.4 | 4.17 | 4.17.1 → 4.17.z |
| 9.4 | 4.16 | 4.16.0 → 4.16.z, 4.16 → 4.17 |
| 9.2, 9.3 | 4.15 | 4.15.0 → 4.15.z, 4.15 → 4.16 on RHEL 9.4 |
| 9.2, 9.3 | 4.14 | 4.14.0 → 4.14.z, 4.14 → 4.15 or 4.14 → 4.16 on RHEL 9.4 |
5.1.1.2. Version compatibility
Check the following update paths:
Red Hat build of MicroShift update paths
- Generally Available Version 4.17.1 to 4.17.z on RHEL for Edge 9.4
- Generally Available Version 4.15.0 from RHEL 9.2 to 4.16.0 on RHEL 9.4
- Generally Available Version 4.14.0 from RHEL 9.2 to 4.15.0 on RHEL 9.4
5.1.2. OSTree update failed
If you updated on an OSTree system, the Greenboot health check automatically logs and acts on system health. A failed update can be indicated by a system rollback performed by Greenboot. In cases where the update failed but Greenboot did not complete a system rollback, you can troubleshoot by using the RHEL for Edge documentation linked in the "Additional resources" section that follows this content.
- Checking the Greenboot logs manually
Manually check the Greenboot logs to verify system health by running the following command:
$ sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck
5.1.3. Manual RPM update failed
If you updated by using RPMs on a non-OSTree system, an update failure can be indicated by Greenboot, but the health checks are only informative. Checking the system logs is the next step in troubleshooting a manual RPM update failure. You can use Greenboot and sos report to check both the MicroShift update and the host system.
5.2. Checking journal logs after updates
In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them. The journal logs can assist in diagnosing update failures.
The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size.
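The following is a minimal sketch of enabling persistent journal storage with a size limit. It uses standard systemd-journald mechanisms (a drop-in configuration file with the Storage and SystemMaxUse settings); the drop-in file name and the 1G limit are example values, not required ones:
$ sudo mkdir -p /var/log/journal /etc/systemd/journald.conf.d
$ sudo tee /etc/systemd/journald.conf.d/10-persistent.conf >/dev/null <<'EOF'
[Journal]
Storage=persistent
SystemMaxUse=1G
EOF
$ sudo systemctl restart systemd-journald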
Procedure
Get comprehensive MicroShift journal logs by running the following command:
$ sudo journalctl -u microshift
Check the Greenboot journal logs by running the following command:
$ sudo journalctl -u greenboot-healthcheck
To examine the comprehensive logs of a specific boot, first list the boots present in the journal, then query the journal for the boot you want, optionally limiting the output to a single service:
List the boots present in the journal logs by running the following command:
$ sudo journalctl --list-boots
Example output
IDX BOOT ID                          FIRST ENTRY                  LAST ENTRY
  0 681ece6f5c3047e183e9d43268c5527f <Day> <Date> 12:27:58 UTC    <Day> <Date> 13:39:41 UTC
#....
Check the journal logs for the specific boot you want by running the following command:
$ sudo journalctl --boot <idx_or_boot_id> 1
1. Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
Check the journal logs for the boot of a specific service by running the following command:
$ sudo journalctl --boot <idx_or_boot_id> -u <service_name> 1 2
1. Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
2. Replace <service_name> with the name of the service, for example, microshift.
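For example, to view the MicroShift service logs from the current boot, you could run the following command; journalctl treats 0 as the current boot and negative offsets, such as -1, as earlier boots:
$ sudo journalctl --boot 0 -u microshift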
5.3. Checking the status of greenboot health checks
Check the status of greenboot health checks before making changes to the system or during troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running.
Procedure
To see a report of health check status, use the following command:
$ systemctl show --property=SubState --value greenboot-healthcheck.service
- An output of start means that greenboot checks are still running.
- An output of exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state.
- An output of failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system.
To see a report showing the numerical exit code of the service, where 0 means success and non-zero values mean a failure occurred, use the following command:
$ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS, use the following command:
$ cat /run/motd.d/boot-status
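If you want to block until the health checks finish, for example in a provisioning script, a minimal sketch that polls the commands above might look like the following; the 5-second interval is an arbitrary example value:
# Wait for greenboot-healthcheck.service to leave the "start" substate, then report the result.
while [ "$(systemctl show --property=SubState --value greenboot-healthcheck.service)" = "start" ]; do
    echo "greenboot health checks still running..."
    sleep 5
done
echo "greenboot SubState: $(systemctl show --property=SubState --value greenboot-healthcheck.service)"
echo "greenboot exit code: $(systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service)"
cat /run/motd.d/boot-status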
Chapter 6. Checking audit logs
You can use audit logs to identify pod security violations.
6.1. Identifying pod security violations through audit logs
You can identify pod security admission violations on a workload by viewing the server audit logs. The following procedure shows you how to access the audit logs and parse them to find pod security admission violations in a workload.
Prerequisites
- You have installed jq.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
To retrieve the node name, run the following command:
$ oc get node -o jsonpath='{.items[0].metadata.name}'
To view the audit logs, run the following command:
$ oc adm node-logs <node_name> --path=kube-apiserver/ 1
1. Replace <node_name> with the name of the node retrieved from the previous step.
Example output
rhel-94.lab.local audit-2024-10-18T18-25-41.663.log
rhel-94.lab.local audit-2024-10-19T11-21-29.225.log
rhel-94.lab.local audit-2024-10-20T04-16-09.622.log
rhel-94.lab.local audit-2024-10-20T21-11-41.163.log
rhel-94.lab.local audit-2024-10-21T14-06-10.402.log
rhel-94.lab.local audit-2024-10-22T06-35-10.392.log
rhel-94.lab.local audit-2024-10-22T23-26-27.667.log
rhel-94.lab.local audit-2024-10-23T16-52-15.456.log
rhel-94.lab.local audit-2024-10-24T07-31-55.238.log
To parse the affected audit logs, enter the following command:
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
  | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name + " " + .objectRef.resource' \
  | sort | uniq -c 1
1. Replace <node_name> with the name of the node retrieved from the previous step.
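To see the violation messages themselves rather than only the counts, you can print the audit-violations annotation recorded for each pod. This is a sketch that reuses the same annotation key as the command above; replace <node_name> as before:
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
  | jq -r 'select(.annotations["pod-security.kubernetes.io/audit-violations"] != null and .objectRef.resource=="pods") | .objectRef.namespace + "/" + .objectRef.name + ": " + .annotations["pod-security.kubernetes.io/audit-violations"]' \
  | sort -u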
Chapter 7. Troubleshooting etcd
To troubleshoot etcd and improve performance, configure the memory allowance for the service.
7.1. Configuring the memoryLimitMB value to set parameters for the etcd server
By default, etcd uses as much memory as necessary to handle the load on the system. In memory-constrained systems, you might need to limit the amount of memory etcd uses.
Procedure
Edit the /etc/microshift/config.yaml file to set the memoryLimitMB value, for example:
etcd:
  memoryLimitMB: 128
Note: The minimum required value for memoryLimitMB on MicroShift is 128 MB. Values close to the minimum value are more likely to impact etcd performance. The lower the limit, the longer etcd takes to respond to queries. If the limit is too low or the etcd usage is high, queries time out.
Verification
After modifying the memoryLimitMB value in /etc/microshift/config.yaml, restart MicroShift by running the following command:
$ sudo systemctl restart microshift
Verify that the new memoryLimitMB value is in use by running the following command:
$ systemctl show --property=MemoryHigh microshift-etcd.scope
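To compare the configured limit with the memory that etcd is actually using, you can also query the MemoryCurrent property of the same scope unit. This is a small sketch using a standard systemd property; the value is reported in bytes:
$ systemctl show --property=MemoryCurrent --value microshift-etcd.scope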
Chapter 8. Responsive restarts and security certificates
Red Hat build of MicroShift responds to system configuration changes and restarts after detecting alterations, including IP address changes, clock adjustments, and security certificates nearing expiration.
8.1. IP address changes or clock adjustments
MicroShift depends on device IP addresses and system-wide clock settings to remain consistent during its runtime. However, these settings can occasionally change on edge devices, for example through DHCP or Network Time Protocol (NTP) updates.
When such changes occur, some MicroShift components can stop functioning properly. To mitigate this situation, MicroShift monitors the IP address and system time and restarts if a change to either setting is detected.
The threshold for clock changes is a time adjustment of greater than 10 seconds in either direction. Smaller drifts from the regular time adjustments performed by the Network Time Protocol (NTP) service do not cause a restart.
8.2. Security certificate lifetime
MicroShift certificates are separated into two basic groups:
- Short-lived certificates, with a certificate validity of one year.
- Long-lived certificates, with a certificate validity of 10 years.
Most server or leaf certificates are short-lived.
An example of a long-lived certificate is the client certificate for system:admin user authentication, or the certificate of the signer of the kube-apiserver external serving certificate.
8.2.1. Certificate rotation
Certificates that are expired or close to their expiration dates need to be rotated to ensure continued MicroShift operation. When MicroShift restarts for any reason, certificates that are close to expiring are rotated. A certificate that is set to expire imminently, or has expired, can cause an automatic MicroShift restart to perform a rotation.
If the rotated certificate is a Certificate Authority, all of the certificates it signed rotate.
8.2.1.1. Short-term certificates
The following situations describe MicroShift actions during short-term certificate lifetimes:
No rotation:
- When a short-term certificate is up to 5 months old, no rotation occurs.
Rotation at restart:
- When a short-term certificate is 5 to 8 months old, it is rotated when MicroShift starts or restarts.
Automatic restart for rotation:
- When a short-term certificate is more than 8 months old, MicroShift can automatically restart to rotate and apply a new certificate.
8.2.1.2. Long-term certificates
The following situations describe MicroShift actions during long-term certificate lifetimes:
No rotation:
- When a long-term certificate is up to 8.5 years old, no rotation occurs.
Rotation at restart:
- When a long-term certificate is 8.5 to 9 years old, it is rotated when MicroShift starts or restarts.
Automatic restart for rotation:
- When a long-term certificate is more than 9 years old, MicroShift can automatically restart to rotate and apply a new certificate.
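To check how old the on-disk certificates are and when they expire, you can inspect them with openssl. The following is a minimal sketch; it assumes the certificates live under /var/lib/microshift/certs, which might differ on your system:
$ sudo find /var/lib/microshift/certs -name '*.crt' \
    -exec sh -c 'echo "== $1"; openssl x509 -noout -enddate -in "$1"' _ {} \;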
Chapter 9. Cleaning up data with support
MicroShift provides the microshift-cleanup-data script for various troubleshooting tasks, such as deleting all data, certificates, and container images.
Do not run this script without the guidance of product Support. Contact Support by submitting a support case.
9.1. Data cleanup script overview
You can see the usage and list available options of the microshift-cleanup-data script by running the script without arguments. Running the script without arguments does not delete any data or stop the MicroShift service.
Procedure
See the usage and list the available options of the microshift-cleanup-data script by entering the following command:
Warning: Some of the options in the following script operations are destructive and can cause data loss. See the procedure of each argument for warnings.
$ microshift-cleanup-data
Example output
Stop all MicroShift services, also cleaning their data

Usage: microshift-cleanup-data <--all [--keep-images] | --ovn | --cert>
   --all           Clean all MicroShift and OVN data
   --keep-images   Keep container images when cleaning all data
   --ovn           Clean OVN data only
   --cert          Clean certificates only
9.2. Cleaning all data and configuration
You can clean up all the MicroShift data and configuration by running the microshift-cleanup-data script.
When you run the script with the --all argument, you perform the following clean up actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Delete all container image storage
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up all the MicroShift data and configuration by running the microshift-cleanup-data script with the --all argument by entering the following command:
Warning: This option deletes all MicroShift data and user workloads. Use with caution.
$ sudo microshift-cleanup-data --all
Tip: The script prompts you with a message to confirm the operation. Type 1 or Yes to continue. Any other entries cancel the clean up.
Example output when you continue the clean up
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping MicroShift services
Disabling MicroShift services
Removing MicroShift pods
Removing crio image storage
Deleting the br-int interface
Killing conmon, pause and OVN processes
Removing MicroShift configuration
Removing OVN configuration
MicroShift service was stopped
MicroShift service was disabled
Cleanup succeeded
Example output when you cancel the clean up
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? no
Aborting cleanup
Important: The MicroShift service is stopped and disabled after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
9.3. Cleaning all data and keeping the container images
You can retain the MicroShift container images while cleaning all data by running the microshift-cleanup-data script with the --all and --keep-images arguments.
Keeping the container images helps speed up MicroShift restart after data clean up because the necessary container images are already present locally when you start the service.
When you run the script with the --all and --keep-images arguments, you perform the following clean up actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
This option deletes all MicroShift data and user workloads. Use with caution.
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up all data and user workloads while retaining the MicroShift container images by running the microshift-cleanup-data script with the --all and --keep-images arguments by entering the following command:
$ sudo microshift-cleanup-data --all --keep-images
Example output
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? Yes
Stopping MicroShift services
Disabling MicroShift services
Removing MicroShift pods
Deleting the br-int interface
Killing conmon, pause and OVN processes
Removing MicroShift configuration
Removing OVN configuration
MicroShift service was stopped
MicroShift service was disabled
Cleanup succeeded
Verify that the container images are still present by running the following command:
$ sudo crictl images | awk '{print $1}'
Example output
IMAGE
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
registry.redhat.io/lvms4/topolvm-rhel9
registry.redhat.io/openshift4/ose-csi-external-provisioner
registry.redhat.io/openshift4/ose-csi-external-resizer
registry.redhat.io/openshift4/ose-csi-livenessprobe
registry.redhat.io/openshift4/ose-csi-node-driver-registrar
registry.redhat.io/ubi9
Important: The MicroShift service is stopped and disabled after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
9.4. Cleaning the OVN-Kubernetes data
You can clean up the OVN-Kubernetes (OVN-K) data by running the microshift-cleanup-data script. Use the script to reset OVN-K network configurations.
When you run the script with the --ovn argument, you perform the following clean up actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete OVN-K networking configuration
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up the OVN-K data by running the microshift-cleanup-data script with the --ovn argument by entering the following command:
$ sudo microshift-cleanup-data --ovn
Example output
Stopping MicroShift services
Removing MicroShift pods
Killing conmon, pause and OVN processes
Removing OVN configuration
MicroShift service was stopped
Cleanup succeeded
Important: The MicroShift service is stopped after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift
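After MicroShift restarts, you can confirm that the OVN-K pods were recreated by checking the openshift-ovn-kubernetes namespace shown earlier in this guide; allow some time for the pods to reach the Running state:
$ oc get pods -n openshift-ovn-kubernetes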
9.5. Cleaning custom certificates data
You can use the microshift-cleanup-data script to reset MicroShift custom certificates so that they are recreated when the MicroShift service restarts.
When you run the script with the --cert argument, you perform the following clean up actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete all MicroShift certificates
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up the MicroShift certificates by running the microshift-cleanup-data script with the --cert argument by entering the following command:
$ sudo microshift-cleanup-data --cert
Example output
Stopping MicroShift services
Removing MicroShift pods
Removing MicroShift certificates
MicroShift service was stopped
Cleanup succeeded
Important: The MicroShift service is stopped after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift