Troubleshooting
Troubleshooting common issues
Chapter 1. Checking which version you have installed
To begin troubleshooting, you must know which version of Red Hat build of MicroShift you have installed.
1.1. Checking the version using the command-line interface
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the command-line interface (CLI).
Procedure
Check the version information by running the following command:
$ microshift version
Example output
MicroShift Version: 4.18-0.microshift-e6980e25
Base OCP Version: 4.18
1.2. Checking the MicroShift version using the API
To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the API.
Procedure
To get the version number using the OpenShift CLI (oc), view the kube-public/microshift-version config map by running the following command:
$ oc get configmap -n kube-public microshift-version -o yaml
Example output
apiVersion: v1
data:
  major: "4"
  minor: "20"
  version: 4.20.0-0.microshift-fa441af87431
kind: ConfigMap
metadata:
  creationTimestamp: "2025-11-03T21:06:11Z"
  name: microshift-version
  namespace: kube-public
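If you only need the version string itself, a jsonpath query keeps the output short. This is a minimal convenience variant of the command above, reading the data.version field shown in the example output:
$ oc get configmap -n kube-public microshift-version -o jsonpath='{.data.version}'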
1.3. Checking the etcd version
You can get the version information for the etcd database included with your MicroShift installation by using one or both of the following methods, depending on the level of detail that you need.
Procedure
To display the base database version information, run the following command:
$ microshift-etcd version
Example output
microshift-etcd Version: 4.20.0
Base etcd Version: 3.5.13

To display the full database version information, run the following command:
$ microshift-etcd version -o json
Example output
{
  "major": "4",
  "minor": "20",
  "gitVersion": "4.20.0",
  "gitCommit": "140777711962eb4e0b765c39dfd325fb0abb3622",
  "gitTreeState": "clean",
  "buildDate": "2025-11-03T16:37:53Z",
  "goVersion": "go1.21.9",
  "compiler": "gc",
  "platform": "linux/amd64",
  "patch": "",
  "etcdVersion": "3.5.13"
}
Chapter 2. Troubleshooting a node
To troubleshoot a MicroShift node, first check the node status.
2.1. Checking the status of a node
You can check the status of a MicroShift node and view its active pods. Run any or all of the following commands to gather the information that you need to troubleshoot the node.
Procedure
Check the system status, which returns the node status, by running the following command:
$ sudo systemctl status microshift
If MicroShift fails to start, this command returns the logs from the earlier run.
Example healthy output
● microshift.service - MicroShift
     Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; preset: disabled)
     Active: active (running) since <day> <date> 12:39:06 UTC; 47min ago
   Main PID: 20926 (microshift)
      Tasks: 14 (limit: 48063)
     Memory: 542.9M
        CPU: 2min 41.185s
     CGroup: /system.slice/microshift.service
             └─20926 microshift run

<Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876001 20926 controll>
<Month-Day> 13:23:06 i-06166fbb376f14a8b.<hostname> microshift[20926]: kube-apiserver I0528 13:23:06.876574 20926 controll>
# ...

Optional: Get comprehensive logs by running the following command:
$ sudo journalctl -u microshift

Note
The default configuration of the systemd journal service stores data in a volatile directory, which does not persist across restarts. To retain logs across system restarts, enable log persistence and set a maximum size limit for journal data. A configuration sketch follows this procedure.

If MicroShift is running, check the status of active pods by entering the following command:
$ oc get pods -A
Example output
NAMESPACE                  NAME                                                       READY   STATUS    RESTARTS   AGE
default                    i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr   1/1     Running   0          46m
kube-system                csi-snapshot-controller-5c6586d546-lprv4                  1/1     Running   0          51m
openshift-dns              dns-default-45jl7                                         2/2     Running   0          50m
openshift-dns              node-resolver-7wmzf                                       1/1     Running   0          51m
openshift-ingress          router-default-78b86fbf9d-qvj9s                           1/1     Running   0          51m
openshift-ovn-kubernetes   ovnkube-master-5rfhh                                      4/4     Running   0          51m
openshift-ovn-kubernetes   ovnkube-node-gcnt6                                        1/1     Running   0          51m
openshift-service-ca       service-ca-bf5b7c9f8-pn6rk                                1/1     Running   0          51m
openshift-storage          topolvm-controller-549f7fbdd5-7vrmv                       5/5     Running   0          51m
openshift-storage          topolvm-node-rht2m                                        3/3     Running   0          50m

Note
This example output shows a basic MicroShift installation. If you installed optional RPMs, the status of pods running those services is also displayed in the output.
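As mentioned in the note about journal persistence, the following is a minimal sketch of enabling persistent journal storage with a size cap. Storage and SystemMaxUse are standard systemd-journald options; the 1G value is an example that you should tune to your own retention needs:
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
SystemMaxUse=1G

Apply the change by restarting the journal service:
$ sudo systemctl restart systemd-journald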
Chapter 3. Troubleshooting installation issues
To troubleshoot a failed MicroShift installation, you can generate an sos report archive. Use the sos report command to generate a detailed report that shows all of the enabled plugins and data from the different components and applications in the system.
3.1. Gathering data from an sos report
You can create an sos report archive about a failing Red Hat Enterprise Linux (RHEL) host that you can share with Red Hat support for troubleshooting.
Prerequisites
- You must have the sos package installed.
- You have root access to the host.
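If the sos package is not yet installed, you can typically add it from the standard RHEL repositories, for example:
$ sudo dnf install -y sos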
Procedure
- Log in to the failing host as a root user.
Perform the debug report creation procedure by running the following command:
$ microshift-sos-report
Example output
sosreport (version 4.5.1)

This command will collect diagnostic and configuration information from
this Red Hat Enterprise Linux system and installed applications.

An archive containing the collected information will be generated in
/var/tmp/sos.o0sznf_8 and may be provided to a Red Hat support
representative.

Any information provided to Red Hat will be treated in accordance with
the published support policies at:

        Distribution Website : https://www.redhat.com/
        Commercial Support   : https://www.access.redhat.com/

The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.

No changes will be made to system configuration.

 Setting up archive ...
 Setting up plugins ...
 Running plugins. Please wait ...

  Starting 1/2   microshift      [Running: microshift]
  Starting 2/2   microshift_ovn  [Running: microshift microshift_ovn]
  Finishing plugins              [Running: microshift]
  Finished running plugins

Found 1 total reports to obfuscate, processing up to 4 concurrently

sosreport-microshift-rhel9-2023-03-31-axjbyxw : Beginning obfuscation...
sosreport-microshift-rhel9-2023-03-31-axjbyxw : Obfuscation completed

Successfully obfuscated 1 report(s)

Creating compressed archive...

A mapping of obfuscated elements is available at
/var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-private_map

Your sosreport has been generated and saved in:
/var/tmp/sosreport-microshift-rhel9-2023-03-31-axjbyxw-obfuscated.tar.xz

 Size   444.14KiB
 Owner  root
 sha256 922e5ff2db25014585b7c6c749d2c44c8492756d619df5e9838ce863f83d4269

Please send this file to your support representative.
Chapter 4. Troubleshooting data backup and restore
To troubleshoot failed data backups and restorations, check the basics first.
For example, verify the following common causes:
- User permissions
- System health and configuration
- Storage capacity (a quick storage check sketch follows this list)
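As a quick first check for the storage cause, verify that the filesystem holding the MicroShift data has free space; a minimal sketch using standard tools. On typical installations the data lives under /var/lib/microshift, and backups under /var/lib/microshift-backups, but confirm the paths on your system:
$ df -h /var/lib/microshift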
4.1. Data backup failure
Data backups are automatic on rpm-ostree systems. If you are not using an rpm-ostree system and attempted to create a manual backup, certain conditions can cause the backup to fail.
- MicroShift was stopped too soon after the system started
- Wait until the system completes health checks and background processes before stopping MicroShift.
- MicroShift stopped because of an error
- Verify that MicroShift is healthy and in a running state before you create a backup.
- Insufficient storage space
- Verify that sufficient storage is available for MicroShift data before you create a backup.
- Insufficient user permissions
- Verify that you have the correct user permissions and configurations required to create a backup.
4.2. Checking backup logs
Backup logs can help you identify the location and status of manual and automatic backups, and the processes that occurred during each backup.
- Manual backup logs are displayed in the terminal output.
- Automatic backup logs for rpm-ostree systems are available in the MicroShift journal logs.
Procedure
Check the journal logs:
$ sudo journalctl -u microshift
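To narrow the journal output to backup-related entries, you can filter it with standard tools. This sketch assumes that the relevant log messages contain the word "backup":
$ sudo journalctl -u microshift | grep -i backup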
4.3. Data restoration failure
The restoration of data can fail for many reasons, including storage and permission issues. Mismatched data versions can cause failures when MicroShift restarts.
4.3.1. Image-based systems data restore failed
Data restorations are automatic on rpm-ostree systems, but can fail for reasons such as the following:
- The only backups that are restored on rpm-ostree systems are backups from the current deployment or a rollback deployment. Backups are not taken on an unhealthy system.
- Only the latest backups that have corresponding deployments are retained. Outdated backups that do not have a matching deployment are automatically removed.
- Data is usually not restored from a newer version of MicroShift.
- Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is older than the version of the MicroShift data you are currently using, the restoration can fail.
4.3.2. RPM-based manual data restore failed
If you are using an RPM system that is not rpm-ostree and tried to restore a manual backup, the following reasons can cause the restoration to fail:
If MicroShift stopped running because of an error, you cannot restore data.
- Make sure the system is healthy.
- Start it in a healthy state before attempting to restore data.
If you do not have enough storage space allocated for the incoming data, the restoration fails.
- Make sure that your current system storage is configured to accept the restored data.
You are attempting to restore data from a newer version of MicroShift.
- Ensure that the data you are restoring follows the same versioning pattern as the update path. For example, if the destination version of MicroShift is older than the version of the MicroShift data you are attempting to use, the restoration can fail.
4.4. Storage migration failure
Storage migration failures typically result from incompatible changes to custom resources (CRs) between MicroShift versions. If a storage migration fails, the CR versions are likely incompatible and require manual review.
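If your MicroShift version includes the Kubernetes storage version migration controller, listing its resources can show which migrations succeeded or failed. This is a sketch that assumes the migration.k8s.io API is available on your cluster:
$ oc get storageversionmigrations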
Chapter 5. Troubleshooting updates
To troubleshoot MicroShift updates, you can check update paths, review journal and greenboot health check logs, and use other techniques to help you solve update problems.
5.1. Troubleshooting MicroShift updates
In some cases, MicroShift might fail to update. When this happens, it is helpful to understand the failure types and how to troubleshoot them.
5.1.1. Update path is blocked by MicroShift version sequence
Non-EUS versions of MicroShift require serial updates. For example, if you attempt to update from MicroShift 4.15.5 directly to 4.17.1, the update fails. You must first update from 4.15.5 to 4.16.z, and then you can update from 4.16.z to 4.17.1.
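For illustration, a serial RPM-based update step from 4.15.z to 4.16.z might look like the following. The repository names are assumptions based on the usual rhocp naming scheme; confirm them against your subscription before running the commands:
$ sudo subscription-manager repos \
    --disable rhocp-4.15-for-rhel-9-$(uname -m)-rpms \
    --enable rhocp-4.16-for-rhel-9-$(uname -m)-rpms
$ sudo dnf update -y microshift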
5.1.2. Update path is blocked by version incompatibility
RPM dependency errors result if a MicroShift update is incompatible with the version of Red Hat Enterprise Linux for Edge (RHEL for Edge) or Red Hat Enterprise Linux (RHEL). For more information, see "Red Hat Device Edge release compatibility matrix".
5.1.2.1. Version compatibility
Check the following update paths:
Red Hat build of MicroShift update paths
- Generally Available Version 4.18.0 to 4.18.z on RHEL 9.4
- Generally Available Version 4.17.1 to 4.17.z on RHEL 9.4
- Generally Available Version 4.15.0 from RHEL 9.2 to 4.16.0 on RHEL 9.4
- Generally Available Version 4.14.0 from RHEL 9.2 to 4.15.0 on RHEL 9.4
5.1.3. RHEL for Edge update failed
If you updated on an rpm-ostree system, the greenboot health check automatically logs and acts on system health. A system rollback by greenboot can indicate an update failure. In cases where the update failed, but greenboot did not complete a system rollback, you can troubleshoot using the RHEL for Edge documentation linked in the "Additional resources" section.
Manually check the greenboot logs to verify system health by running the following command:
$ sudo systemctl restart --no-block greenboot-healthcheck && sudo journalctl -fu greenboot-healthcheck
5.1.4. Manual RPM update failed
If you updated by using RPMs on a non-OSTree system, greenboot can indicate an update failure, but the health checks are only informative. Checking the system logs is the next step in troubleshooting a manual RPM update failure. You can use greenboot and the sos report tool to check both the MicroShift update and the host system.
5.2. Checking journal logs after updates
You can use journal logs to help diagnose MicroShift update failures. The default configuration of the systemd journal service stores data in a volatile directory, which does not persist across restarts. To retain logs across restarts, enable log persistence and set a maximum size limit for journal data.
Procedure
Get comprehensive MicroShift journal logs by running the following command:
$ sudo journalctl -u microshift
Check the greenboot journal logs by running the following command:
$ sudo journalctl -u greenboot-healthcheck
Examining the comprehensive logs of a specific boot takes three steps: list the boots, check the logs of the boot that you select, and optionally narrow those logs to a single service:
List the boots present in the journal logs by running the following command:
$ sudo journalctl --list-boots
Example output
IDX BOOT ID                          FIRST ENTRY                  LAST ENTRY
  0 681ece6f5c3047e183e9d43268c5527f <Day> <Date> 12:27:58 UTC    <Day> <Date> 13:39:41 UTC
# ...

Check the journal logs for the specific boot by running the following command:
$ sudo journalctl --boot <idx_or_boot_id>
where:
- idx_or_boot_id
- Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
Check the journal logs for the boot of a specific service by running the following command:
$ sudo journalctl --boot <idx_or_boot_id> -u <service_name>
where:
- idx_or_boot_id
- Replace <idx_or_boot_id> with the IDX or the BOOT ID number assigned to the specific boot that you want to check.
- service_name
- Replace <service_name> with the name of the service that you want to check.
5.3. Checking the status of greenboot health checks
You can check the status of greenboot health checks before making changes to the system or while troubleshooting. Use the following commands to verify that greenboot scripts have finished running.
Procedure
Check the current greenboot health check status by running the following command:
$ systemctl show --property=SubState --value greenboot-healthcheck.service
where:
- start
- Greenboot checks are still running.
- exited
- Checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state.
- failed
- Checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and restarts the system.
Check the numerical exit code of the greenboot health check service by running the following command:
$ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
An exit code of 0 means the health check succeeded. A non-zero exit code means the health check failed.

To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS, use the following command:
$ cat /run/motd.d/boot-status
Example output
Boot Status is GREEN - Health Check SUCCESS
Chapter 6. Checking audit logs
Audit logs record API requests to the MicroShift API server and can help you identify pod security violations and investigate suspicious requests.
6.1. Identifying pod security violations through audit logs
You can identify pod security admission violations in a workload by viewing and parsing the server audit logs.
Prerequisites
- You have installed the jq utility.
- You have root access to the node.
Procedure
Retrieve the node name by running the following command:
$ NODE_NAME=$(oc get node -ojsonpath='{.items[0].metadata.name}')
View the available audit logs by running the following command:
$ oc adm node-logs ${NODE_NAME} --path=kube-apiserver/
Example output
rhel-94.lab.local audit-2024-10-18T18-25-41.663.log
rhel-94.lab.local audit-2024-10-19T11-21-29.225.log
rhel-94.lab.local audit-2024-10-20T04-16-09.622.log
rhel-94.lab.local audit-2024-10-20T21-11-41.163.log
rhel-94.lab.local audit-2024-10-21T14-06-10.402.log
rhel-94.lab.local audit-2024-10-22T06-35-10.392.log
rhel-94.lab.local audit-2024-10-22T23-26-27.667.log
rhel-94.lab.local audit-2024-10-23T16-52-15.456.log
rhel-94.lab.local audit-2024-10-24T07-31-55.238.log

Parse the audit logs to find pod security violations by running the following command:
$ oc adm node-logs ${NODE_NAME} --path=kube-apiserver/audit.log \
  | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name + " " + .objectRef.resource' \
  | sort | uniq -c
Chapter 7. Troubleshooting etcd
To troubleshoot etcd and improve performance, configure the memory allowance for the service.
7.1. Configuring the memoryLimitMB value to set parameters for the etcd server
By default, etcd uses as much memory as necessary to handle the system load. On memory-constrained systems, limiting the amount of memory etcd uses might be necessary. Configure the memoryLimitMB parameter to restrict the memory consumption of the etcd server.
Procedure
Edit the /etc/microshift/config.yaml configuration file to set the memoryLimitMB value:
etcd:
  memoryLimitMB: 128

Note
The minimum required value for memoryLimitMB on MicroShift is 128 MB. Values close to the minimum are more likely to impact etcd performance. Lower limits increase the time etcd takes to respond to queries. If the limit is too low or etcd usage is high, queries might time out.
Verification
Restart MicroShift to apply the changes by running the following command:
$ sudo systemctl restart microshift
Verify that the new memoryLimitMB value is in use by running the following command:
$ systemctl show --property=MemoryHigh microshift-etcd.scope
Chapter 8. Responsive restarts and security certificates
MicroShift automatically restarts when system configuration changes are detected. These changes include IP address updates, clock adjustments, and security certificate expiration.
8.1. IP address changes or clock adjustments
MicroShift depends on device IP addresses and system-wide clock settings to remain consistent during its runtime. However, these settings might occasionally change on edge devices.
For example, a DHCP lease renewal can change the device IP address, and Network Time Protocol (NTP) updates can adjust the system clock. When these changes occur, some MicroShift components might stop functioning properly. To mitigate this situation, MicroShift monitors the IP address and system time and restarts if either setting changes.
The threshold for a clock-driven restart is a time change of greater than 10 seconds in either direction. Small drifts during regular NTP service adjustments do not trigger a restart.
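To see whether the system clock is being stepped by more than this threshold, you can inspect the time synchronization status with standard tools, for example:
$ timedatectl status
$ chronyc tracking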
8.2. Security certificate lifetime
MicroShift certificates are digital certificates that secure communication over protocols such as HTTPS. They fall into two basic categories:
- Short-lived certificates
- Valid for one year. Most server or leaf certificates are short-lived.
- Long-lived certificates
- Valid for 10 years. For example, the client certificate for system:admin user authentication, or the kube-apiserver external serving certificate signer.
MicroShift restarts automatically depending on certificate age.
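To check when a particular certificate expires, you can read it with openssl. This is a minimal sketch; the path is an assumption based on the default MicroShift data directory, so substitute the certificate file that you want to inspect:
$ sudo openssl x509 -noout -enddate -in /var/lib/microshift/certs/ca-bundle/ca-bundle.crt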
8.3. Certificate rotation
Certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation. Certificate rotation can occur automatically.
When MicroShift restarts for any reason, certificates that are close to expiring are rotated. A certificate that expires soon, or has already expired, can also cause an automatic MicroShift restart to perform a rotation.
If the rotated certificate is a MicroShift certificate authority (CA), all signed certificates are also rotated. If you created custom CAs, you must rotate them manually.
8.3.1. Short-term certificate rotation
Short-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.
The following situations describe MicroShift actions during the short-term certificate lifetime:
- No rotation
- When a short-term certificate is up to 5 months old, no rotation occurs.
- Rotation at restart
- When a short-term certificate is 5 to 8 months old, it is rotated when MicroShift starts or restarts.
- Automatic restart for rotation
- When a short-term certificate is more than 8 months old, MicroShift can automatically restart to rotate and apply a new certificate.
8.3.2. Long-term certificate rotation
Long-term certificates that are expired or close to their expiration dates must be rotated to ensure continued MicroShift operation.
The following situations describe MicroShift actions during the long-term certificate lifetime:
- No rotation
- When a long-term certificate is up to 8.5 years old, no rotation occurs.
- Rotation at restart
- When a long-term certificate is 8.5 to 9 years old, it is rotated when MicroShift starts or restarts.
- Automatic restart for rotation
- When a long-term certificate is more than 9 years old, MicroShift might automatically restart so that it can rotate and apply a new certificate.
Chapter 9. Cleaning up data with support
You can use the microshift-cleanup-data script for troubleshooting tasks such as deleting data, certificates, and container images.
Do not run this script without the guidance of product Support. Contact Support by submitting a support case.
9.1. Data cleanup script overview
You can see the usage and list available options of the microshift-cleanup-data script by running the script without arguments. Running the script without arguments does not delete any data or stop the MicroShift service.
Procedure
See the usage and list the available options of the microshift-cleanup-data script by entering the following command:

Warning
Some script operations are destructive and can cause data loss. Review the specific procedure for each argument for detailed warnings.

$ microshift-cleanup-data
Example output
Stop all MicroShift services, also cleaning their data

Usage: microshift-cleanup-data <--all [--keep-images] | --ovn | --cert>
   --all         Clean all MicroShift and OVN data
   --keep-images Keep container images when cleaning all data
   --ovn         Clean OVN data only
   --cert        Clean certificates only
9.2. Cleaning all data and configuration
You can clean up all the MicroShift data and configuration by running the microshift-cleanup-data script.
When you run the script with the --all argument, you perform the following cleanup actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Delete all container image storage
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
Prerequisites
- You are logged into MicroShift.
- You have filed a support case.
Procedure
Clean up all the MicroShift data and configuration by running the microshift-cleanup-data script with the --all argument:

Warning
This option deletes all MicroShift data and user workloads. Use with caution.

$ sudo microshift-cleanup-data --all

Tip
The script prompts you to confirm the operation. Enter 1 or Yes to continue. Any other entry cancels the cleanup.

Example output when you continue the cleanup
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping MicroShift services
Disabling MicroShift services
Removing MicroShift pods
Removing crio image storage
Deleting the br-int interface
Killing conmon, pause and OVN processes
Removing MicroShift configuration
Removing OVN configuration
MicroShift service was stopped
MicroShift service was disabled
Cleanup succeeded

Example output when you cancel the cleanup
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? no
Aborting cleanup

Important
The microshift-cleanup-data script stops and disables the MicroShift service.

Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
9.3. Cleaning all data and keeping the container images
You can retain the MicroShift container images while cleaning all data by running the microshift-cleanup-data script with the --all and --keep-images arguments.
Keeping the container images helps speed up the MicroShift restart after a data cleanup because the necessary container images are already present locally when you start the service.
When you run the script with the --all and --keep-images arguments, you perform the following cleanup actions:
- Stop and disable all MicroShift services
- Delete all MicroShift pods
- Reset network configuration
- Delete the /var/lib/microshift data directory
- Delete OVN-K networking configuration
This option deletes all MicroShift data and user workloads. Use with caution.
Prerequisites
- You are logged into MicroShift.
- You have filed a support case.
Procedure
Clean up all data and user workloads while retaining the MicroShift container images by running the following command:
$ sudo microshift-cleanup-data --all --keep-images
Example output
DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? Yes
Stopping MicroShift services
Disabling MicroShift services
Removing MicroShift pods
Deleting the br-int interface
Killing conmon, pause and OVN processes
Removing MicroShift configuration
Removing OVN configuration
MicroShift service was stopped
MicroShift service was disabled
Cleanup succeeded

Verify that the container images are still present by running the following command:
$ sudo crictl images | awk '{print $1}'
Example output
IMAGE
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
quay.io/openshift-release-dev/ocp-v4.0-art-dev
registry.redhat.io/lvms4/topolvm-rhel9
registry.redhat.io/openshift4/ose-csi-external-provisioner
registry.redhat.io/openshift4/ose-csi-external-resizer
registry.redhat.io/openshift4/ose-csi-livenessprobe
registry.redhat.io/openshift4/ose-csi-node-driver-registrar
registry.redhat.io/ubi9

Important
The microshift-cleanup-data script stops and disables the MicroShift service.

Restart the MicroShift service by running the following command:
$ sudo systemctl enable --now microshift
9.4. Cleaning the OVN-Kubernetes data
Reset OVN-Kubernetes (OVN-K) network configurations by running the microshift-cleanup-data script.
When you run the script with the --ovn argument, you perform the following cleanup actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete the OVN-K networking configuration
Prerequisites
- You are logged into MicroShift.
- You have filed a support case.
Procedure
Clean up the OVN-K data by running the microshift-cleanup-data script with the --ovn argument:
$ sudo microshift-cleanup-data --ovn
Example output
Stopping MicroShift services
Removing MicroShift pods
Killing conmon, pause and OVN processes
Removing OVN configuration
MicroShift service was stopped
Cleanup succeeded

Important
The microshift-cleanup-data script stops the MicroShift service.

Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift
9.5. Cleaning custom certificates data
To recreate MicroShift custom certificates upon service restart, reset them by using the microshift-cleanup-data script.
When you run the script with the --cert argument, you perform the following cleanup actions:
- Stop all MicroShift services
- Delete all MicroShift pods
- Delete all MicroShift certificates
Prerequisites
- You are logged into MicroShift.
- You have filed a support case.
Procedure
Clean up the MicroShift certificates by running the microshift-cleanup-data script with the --cert argument:
$ sudo microshift-cleanup-data --cert
Example output
Stopping MicroShift services
Removing MicroShift pods
Removing MicroShift certificates
MicroShift service was stopped
Cleanup succeeded

Important
Running the script stops the MicroShift service.
Restart the MicroShift service by running the following command:
$ sudo systemctl start microshift