Troubleshooting
Troubleshooting common issues
Chapter 1. Checking which version you have installed
To begin troubleshooting, you must know which version of Red Hat build of MicroShift you have installed.
1.1. Checking the version using the command-line interface
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the command-line interface (CLI).
Procedure
Run the following command to check the version information:
$ microshift version
Example output
MicroShift Version: 4.17-0.microshift-e6980e25
Base OCP Version: 4.17
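On RPM-based installations, you can also confirm the installed package version directly from the package database. This is an optional sketch and assumes MicroShift was installed from the microshift RPM package:
$ rpm -q microshift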
1.2. Checking the MicroShift version using the API
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the API.
Procedure
To get the version number using the OpenShift CLI (oc), view the kube-public/microshift-version config map by running the following command:
$ oc get configmap -n kube-public microshift-version -o yaml
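If you only need the version data rather than the full YAML manifest, you can print just the data section of the config map with a JSONPath query. This is a minimal sketch and does not assume any particular key names inside data:
$ oc get configmap -n kube-public microshift-version -o jsonpath='{.data}'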
1.3. Checking the etcd version
You can get the version information for the etcd database included with your MicroShift by using one or both of the following methods, depending on the level of information that you need.
Procedure
To display the base database version information, run the following command:
$ microshift-etcd version
Example output
microshift-etcd Version: 4.20.0
Base etcd Version: 3.5.13
To display the full database version information, run the following command:
$ microshift-etcd version -o json
Chapter 2. Troubleshooting a node
To begin troubleshooting a MicroShift node, first check the node status.
2.1. Checking the status of a node
You can check the status of a MicroShift node or see active pods. You can choose to run any or all of the following commands to help you get the information you need to troubleshoot the node.
Procedure
Check the system status, which returns the node status, by running the following command:
$ sudo systemctl status microshift
If MicroShift fails to start, this command returns the logs from the previous run.
Optional: Get comprehensive logs by running the following command:
$ sudo journalctl -u microshift
Note: The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size.
Optional: If MicroShift is running, check the status of active pods by entering the following command:
$ oc get pods -A
Note: On a basic MicroShift installation, the output lists the core system pods. If you installed optional RPMs, the status of pods running those services is also expected in your output.
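To focus on problem pods only, you can filter out pods in the Running phase by using a standard field selector. This is an optional sketch; note that pods that completed successfully (Succeeded) also appear in this output:
$ oc get pods -A --field-selector=status.phase!=Running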
Chapter 3. Troubleshooting installation issues
To troubleshoot a failed MicroShift installation, you can run an sos report. Use the sos report command to generate a detailed report that shows all of the enabled plugins and data from the different components and applications in a system.
3.1. Gathering data from an sos report
Prerequisites
- You must have the sos package installed.
Procedure
- Log in to the failing host as a root user.
Perform the debug report creation procedure by running the following command:
$ microshift-sos-report
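By default, the sos tool writes the compressed report archive to the /var/tmp directory; this is an assumption based on the default sos configuration, and the exact path is printed at the end of report generation. You can list the generated archives with:
$ sudo ls -lh /var/tmp/sosreport-*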
Chapter 4. Troubleshooting data backup and restore
To troubleshoot failed data backups and restorations, check the basics first, such as user permissions, system health and configuration, and storage capacity.
4.1. Data backup failure
Data backups are automatic on rpm-ostree systems. If you are not using an rpm-ostree system and attempted to create a manual backup, the following reasons can cause the backup to fail:
- Not waiting several minutes after a system start before stopping MicroShift. The system must complete health checks and any other background processes before a backup can succeed.
If MicroShift stopped running because of an error, you cannot perform a backup of the data.
- Make sure the system is healthy.
- Stop it in a healthy state before attempting a backup.
- If you do not have enough storage for the data, the backup fails. Ensure that you have enough storage for MicroShift data.
- If you do not have the required user permissions, a backup can fail. Ensure that you have the correct user permissions to create a backup and perform the required configurations.
4.2. Checking backup logs
Backup logs can help you identify where backups are and what processes occurred during manual and automatic backups.
- Logs print to the terminal console during manual backups.
- Logs are automatically generated for rpm-ostree system automated backups as part of the MicroShift journal logs.
Procedure
Check the logs by running the following command:
$ sudo journalctl -u microshift
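To narrow the journal output to backup-related messages, you can pipe the logs through a simple text filter. This is a sketch; the exact wording of backup log messages can differ between MicroShift versions:
$ sudo journalctl -u microshift | grep -i backup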
4.3. Data restoration failure
The restoration of data can fail for many reasons, including storage and permission issues. Mismatched data versions can cause failures when MicroShift restarts.
4.3.1. Image-based systems data restore failed
Data restorations are automatic on rpm-ostree systems, but can fail for reasons such as the following:
- The only backups that are restored on rpm-ostree systems are backups from the current deployment or a rollback deployment. Backups are not taken on an unhealthy system.
- Only the latest backups that have corresponding deployments are retained. Outdated backups that do not have a matching deployment are automatically removed.
- Data is usually not restored from a newer version of MicroShift.
- Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is an older version than the version of the MicroShift data you are currently using, the restoration can fail.
4.3.2. RPM-based manual data restore failed
If you are using an RPM system that is not rpm-ostree and tried to restore a manual backup, the following reasons can cause the restoration to fail:
If MicroShift stopped running because of an error, you cannot restore data.
- Make sure the system is healthy.
- Start it in a healthy state before attempting to restore data.
If you do not have enough storage space allocated for the incoming data, the restoration fails.
- Make sure that your current system storage is configured to accept the restored data.
You are attempting to restore data from a newer version of MicroShift.
- Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is an older version than the version of the MicroShift data you are attempting to use, the restoration can fail.
4.4. Storage migration failure
Storage migration failures are typically caused by significant changes in custom resources (CRs) from one MicroShift version to the next.
- If a storage migration fails, there is usually an unresolvable discrepancy between versions that requires manual review.
Chapter 5. Troubleshooting updates
To troubleshoot MicroShift updates, you can check update paths, review journal and greenboot health check logs, and use other techniques to help you solve update problems.
5.1. Troubleshooting MicroShift updates
In some cases, MicroShift might fail to update. In these events, it is helpful to understand failure types and how to troubleshoot them.
5.1.1. Update path is blocked by version incompatibility
RPM dependency errors result if a MicroShift update is incompatible with the version of Red Hat Enterprise Linux for Edge (RHEL for Edge) or Red Hat Enterprise Linux (RHEL). Check the following compatibility table:
Red Hat Device Edge release compatibility matrix
Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases of each component together, as listed in the following table:
| RHEL Version(s) | MicroShift Version | Supported MicroShift Version → Version Updates |
|---|---|---|
| 9.4 | 4.17 | 4.17.1 → 4.17.z |
| 9.4 | 4.16 | 4.16.0 → 4.16.z, 4.16 → 4.17 |
| 9.2, 9.3 | 4.15 | 4.15.0 → 4.15.z, 4.15 → 4.16 on RHEL 9.4 |
| 9.2, 9.3 | 4.14 | 4.14.0 → 4.14.z, 4.14 → 4.15 or 4.14 → 4.16 on RHEL 9.4 |
5.1.1.1. Version compatibility
Check the following update paths:
Red Hat build of MicroShift update paths
- Generally Available Version 4.17.1 to 4.17.z on RHEL for Edge 9.4
- Generally Available Version 4.15.0 from RHEL 9.2 to 4.16.0 on RHEL 9.4
- Generally Available Version 4.14.0 from RHEL 9.2 to 4.15.0 on RHEL 9.4
5.1.2. RHEL for Edge update failed
If you updated on an rpm-ostree system, the greenboot health check automatically logs and acts on system health. A system rollback by greenboot can indicate an update failure. In cases where the update failed, but greenboot did not complete a system rollback, you can troubleshoot using the RHEL for Edge documentation linked in the "Additional resources" section.
Manually check the greenboot logs to verify system health by running the following command:
$ sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck
5.1.3. Manual RPM update failed
If you updated by using RPMs on a non-OSTree system, greenboot can indicate an update failure, but the health checks are only informative. Checking the system logs is the next step in troubleshooting a manual RPM update failure. You can use greenboot and the sos report tool to check both the MicroShift update and the host system.
5.2. Checking journal logs after updates
Journal logs can assist in diagnosing MicroShift update failures. The default configuration of the systemd journal service stores data in a volatile directory. To persist system logs across system starts and restarts, enable log persistence and set limits on the maximum journal data size.
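The following sketch shows one common way to enable persistent journal storage and cap its size by using a journald drop-in file. The drop-in file name and the size value are examples, not recommendations; adjust them to your storage budget:
$ sudo mkdir -p /etc/systemd/journald.conf.d
$ cat <<EOF | sudo tee /etc/systemd/journald.conf.d/persistent.conf
[Journal]
Storage=persistent
SystemMaxUse=1G
EOF
$ sudo systemctl restart systemd-journald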
Procedure
Get comprehensive MicroShift journal logs by running the following command:
$ sudo journalctl -u microshift
Check the greenboot journal logs by running the following command:
$ sudo journalctl -u greenboot-healthcheck
Examining the comprehensive logs of a specific boot takes three steps: list the boots, select the boot that you want from the list, and then check its logs.
List the boots present in the journal logs by running the following command:
$ sudo journalctl --list-boots
Example output
IDX BOOT ID                          FIRST ENTRY                   LAST ENTRY
  0 681ece6f5c3047e183e9d43268c5527f <Day> <Date> 12:27:58 UTC     <Day> <Date> 13:39:41 UTC
#....
Check the journal logs for the specific boot you want by running the following command:
$ sudo journalctl --boot <idx_or_boot_id>
Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
Check the journal logs for the boot of a specific service by running the following command:
$ sudo journalctl --boot <idx_or_boot_id> -u <service_name>
Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot, and replace <service_name> with the name of the service that you want to check.
5.3. Checking the status of greenboot health checks
Check the status of greenboot health checks before making changes to the system and while troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running.
Procedure
To see a report of health check status, use the following command:
$ systemctl show --property=SubState --value greenboot-healthcheck.service
- An output of start means that greenboot checks are still running.
- An output of exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state.
- An output of failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system.
To see a report showing the numerical exit code of the service, where 0 means success and non-zero values mean a failure occurred, use the following command:
$ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS, use the following command:
$ cat /run/motd.d/boot-status
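If you want a shell session or script to block until the health checks finish before continuing, a small loop over the SubState value shown above works. This is a sketch, not part of greenboot itself:
$ while [ "$(systemctl show --property=SubState --value greenboot-healthcheck.service)" = "start" ]; do sleep 5; done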
Chapter 6. Checking audit logs
You can use audit logs to identify pod security violations.
6.1. Identifying pod security violations through audit logs
You can identify pod security admission violations on a workload by viewing the server audit logs. The following procedure shows you how to access the audit logs and parse them to find pod security admission violations in a workload.
Prerequisites
- You have installed jq.
- You have root access to the node.
Procedure
To retrieve the node name, run the following command:
$ oc get node -o jsonpath='{.items[0].metadata.name}'
To view the audit logs, run the following command:
$ oc adm node-logs <node_name> --path=kube-apiserver/
Replace <node_name> with the name of the node retrieved from the previous step.
To parse the affected audit logs, enter the following command:
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
  | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name + " " + .objectRef.resource' \
  | sort | uniq -c
Replace <node_name> with the name of the node retrieved from the previous step.
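To summarize violations per namespace only, you can use a slightly simpler jq filter over the same audit log. This is a sketch built on the command above, not a separately documented option:
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
  | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace' \
  | sort | uniq -c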
Chapter 7. Troubleshoot etcd
To troubleshoot etcd and improve performance, configure the memory allowance for the service.
7.1. Configuring the memoryLimitMB value to set parameters for the etcd server
By default, etcd uses as much memory as necessary to handle the load on the system. In memory-constrained systems, you might need to limit the amount of memory etcd uses.
Procedure
Edit the /etc/microshift/config.yaml file to set the memoryLimitMB value:
etcd:
  memoryLimitMB: 128
Note: The minimum required value for memoryLimitMB on MicroShift is 128 MB. Values close to the minimum value are more likely to impact etcd performance. The lower the limit, the longer etcd takes to respond to queries. If the limit is too low or the etcd usage is high, queries time out.
Verification
After modifying the memoryLimitMB value in /etc/microshift/config.yaml, restart MicroShift by running the following command:
$ sudo systemctl restart microshift
Verify the new memoryLimitMB value is in use by running the following command:
$ systemctl show --property=MemoryHigh microshift-etcd.scope
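To compare the configured limit with what etcd is actually consuming, you can also query the current memory accounting of the scope. MemoryCurrent is a standard systemd property; its availability depends on cgroup memory accounting being enabled on the host:
$ systemctl show --property=MemoryCurrent --value microshift-etcd.scope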
Chapter 8. Responsive restarts and security certificates
MicroShift responds to system configuration changes and restarts after alterations are detected, including IP address changes, clock adjustments, and security certificate age.
8.1. IP address changes or clock adjustments
MicroShift depends on device IP addresses and system-wide clock settings to remain consistent during its runtime. However, these settings might occasionally change on edge devices.
For example, DHCP or Network Time Protocol (NTP) updates can change times. When these changes occur, some MicroShift components might stop functioning properly. To mitigate this situation, MicroShift monitors the IP address and system time and restarts if either setting changes.
The threshold for clock changes is a time change of greater than 10 seconds in either direction. Smaller drifts from regular time adjustments performed by the NTP service do not cause a restart.
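To check whether the system clock is synchronized and whether the time service is active, which can help explain unexpected restarts, you can use the standard timedatectl tool:
$ timedatectl status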
8.2. Security certificate lifetime
MicroShift certificates are digital certificates that secure communication over protocols such as HTTPS. They fall into two basic categories:
- Short-lived certificates
- Have a certificate validity of one year. Most server or leaf certificates are short-lived.
- Long-lived certificates
- Have a certificate validity of 10 years. An example of a long-lived certificate is the client certificate for system:admin user authentication, or the certificate of the signer of the kube-apiserver external serving certificate.
MicroShift restarts automatically in certain cases, depending on certificate age.
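To inspect the expiration date of a specific certificate, you can use openssl on the certificate file. The path below is a placeholder, because the location of MicroShift certificate files depends on your installation:
$ sudo openssl x509 -in <path_to_certificate_file> -noout -enddate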
8.3. Certificate rotation
Certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation. This rotation can be an automatic process.
When MicroShift restarts for any reason, certificates that are close to expiring are rotated. A certificate that expires soon, or has already expired, can also cause an automatic MicroShift restart to perform a rotation.
If the rotated certificate is a MicroShift certificate authority (CA), then all of the signed certificates rotate. If you created any custom CAs, ensure that you rotate them manually.
8.3.1. Short-term certificates rotation
Short-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.
The following situations describe MicroShift actions during short-term certificate lifetimes:
- No rotation
- When a short-term certificate is up to 5 months old, no rotation occurs.
- Rotation at restart
- When a short-term certificate is 5 to 8 months old, it is rotated when MicroShift starts or restarts.
- Automatic restart for rotation
- When a short-term certificate is more than 8 months old, MicroShift can automatically restart to rotate and apply a new certificate.
8.3.2. Long-term certificates rotation
Long-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.
The following situations describe MicroShift actions during long-term certificate lifetimes:
- No rotation
- When a long-term certificate is up to 8.5 years old, no rotation occurs.
- Rotation at restart
- When a long-term certificate is 8.5 to 9 years old, it is rotated when MicroShift starts or restarts.
- Automatic restart for rotation
- When a long-term certificate is more than 9 years old, MicroShift might automatically restart so that it can rotate and apply a new certificate.
Chapter 9. Cleaning up data with support
MicroShift provides the microshift-cleanup-data script for various troubleshooting tasks, such as deleting all data, certificates, and container images.
Do not run this script without the guidance of product Support. Contact Support by submitting a support case.
9.1. Data cleanup script overview
You can see the usage and list available options of the microshift-cleanup-data script by running the script without arguments. Running the script without arguments does not delete any data or stop the MicroShift service.
Procedure
See the usage and list the available options of the microshift-cleanup-data script by entering the following command:
Warning: Some of the options in the following script operations are destructive and can cause data loss. See the procedure of each argument for warnings.
$ microshift-cleanup-data
9.2. Cleaning all data and configuration
You can clean up all the MicroShift data and configuration by running the microshift-cleanup-data script.
When you run the script with the --all argument, you perform the following clean up actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Delete all container image storage
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up all the MicroShift data and configuration by running the microshift-cleanup-data script with the --all argument by entering the following command:
Warning: This option deletes all MicroShift data and user workloads. Use with caution.
$ sudo microshift-cleanup-data --all
Tip: The script prompts you with a message to confirm the operation. Type 1 or Yes to continue. Any other entries cancel the clean up.
Example output when you cancel the clean up
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? no
Aborting cleanup
Important: The MicroShift service is stopped and disabled after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
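After the restart, you can confirm that the service came back up; this is an optional verification step, not part of the cleanup script, and the pods can take several minutes to be recreated:
$ sudo systemctl is-active microshift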
9.3. Cleaning all data and keeping the container images
You can retain the MicroShift container images while cleaning all data by running the microshift-cleanup-data script with the --all and --keep-images arguments.
Keeping the container images helps speed up MicroShift restart after data clean up because the necessary container images are already present locally when you start the service.
When you run the script with the --all and --keep-images arguments, you perform the following clean up actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
This option deletes all MicroShift data and user workloads. Use with caution.
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up all data and user workloads while retaining the MicroShift container images by running the microshift-cleanup-data script with the --all and --keep-images arguments by entering the following command:
$ sudo microshift-cleanup-data --all --keep-images
Verify that the container images are still present by running the following command:
$ sudo crictl images | awk '{print $1}'
Important: The MicroShift service is stopped and disabled after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
9.4. Cleaning the OVN-Kubernetes data
You can clean up the OVN-Kubernetes (OVN-K) data by running the microshift-cleanup-data script. Use the script to reset OVN-K network configurations.
When you run the script with the --ovn argument, you perform the following clean up actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete OVN-K networking configuration
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up the OVN-K data by running the microshift-cleanup-data script with the --ovn argument by entering the following command:
$ sudo microshift-cleanup-data --ovn
Important: The MicroShift service is stopped after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift
9.5. Cleaning custom certificates data
You can use the microshift-cleanup-data script to reset MicroShift custom certificates so that they are recreated when the MicroShift service restarts.
When you run the script with the --cert argument, you perform the following clean up actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete all MicroShift certificates
Prerequisites
- You are logged into MicroShift as an administrator with root-user access.
- You have filed a support case.
Procedure
Clean up the MicroShift certificates by running the microshift-cleanup-data script with the --cert argument by entering the following command:
$ sudo microshift-cleanup-data --cert
Example output
Stopping MicroShift services
Removing MicroShift pods
Removing MicroShift certificates
MicroShift service was stopped
Cleanup succeeded
Important: The MicroShift service is stopped after you run the script.
Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift