Upgrading
Upgrading Red Hat Advanced Cluster Security for Kubernetes
Abstract
Chapter 1. Upgrading by using the Operator
Upgrades through the Red Hat Advanced Cluster Security for Kubernetes (RHACS) Operator are performed automatically or manually, depending on the Update approval option you chose at installation.
If you installed RHACS using the Operator and selected Automatic in the Update approval field, RHACS is automatically updated when a new software version is released. If you selected Manual, you must approve subsequent Operator updates by using Operator Lifecycle Manager (OLM). For more information, see Manually approving a pending Operator update.
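If you prefer to approve updates from the command line, you can list pending install plans and approve one, as in the following sketch. This is a suggestion rather than part of the documented procedure; it assumes the Operator is installed in the rhacs-operator namespace, and <install_plan_name> is a placeholder for the pending install plan:
$ oc -n rhacs-operator get installplan
$ oc -n rhacs-operator patch installplan <install_plan_name> --type merge -p '{"spec":{"approved":true}}'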
To roll back an Operator upgrade, you must perform the steps described in one of the following sections. You can roll back an Operator upgrade by using the CLI or the OpenShift Container Platform web console.
1.1. Rolling back an Operator upgrade by using the CLI
You can roll back the Operator version by using CLI commands.
Procedure
Delete the OLM subscription by running the following command:
For OpenShift Container Platform, run the following command:
$ oc -n rhacs-operator delete subscription rhacs-operator
For Kubernetes, run the following command:
$ kubectl -n rhacs-operator delete subscription rhacs-operator
Delete the cluster service version (CSV) by running the following command:
For OpenShift Container Platform, run the following command:
$ oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
For Kubernetes, run the following command:
$ kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
Determine the previous version you want to roll back to by choosing one of the following options:
If the current Central instance is running, query the RHACS API to get the rollback version by running the following command:
$ curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo
If the current Central instance is not running, perform the following steps:
Note: This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed.
Ensure the Central deployment is scaled down by running the following command:
For OpenShift Container Platform, run the following command:
$ oc scale -n <central namespace> --replicas=0 deploy/central
For Kubernetes, run the following command:
$ kubectl scale -n <central namespace> --replicas=0 deploy/central
Save the following pod spec as a YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: get-previous-db-version
spec:
  containers:
  - name: get-previous-db-version
    image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version>
    command:
    - sh
    args:
    - '-c'
    - "cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '"
    volumeMounts:
    - name: stackrox-db
      mountPath: /var/lib/stackrox
  volumes:
  - name: stackrox-db
    persistentVolumeClaim:
      claimName: stackrox-db
Create a pod in your Central namespace by running the following command using the YAML file that you saved:
For OpenShift Container Platform, run the following command:
$ oc create -n <central namespace> -f pod.yaml
For Kubernetes, run the following command:
$ kubectl create -n <central namespace> -f pod.yaml
After pod creation is complete, get the version by running the following command:
For OpenShift Container Platform, run the following command:
$ oc logs -n <central namespace> get-previous-db-version
For Kubernetes, run the following command:
$ kubectl logs -n <central namespace> get-previous-db-version
Edit the central-config.yaml ConfigMap to set the maintenance.forceRollbackVersion: <version> parameter by running the following command:
For OpenShift Container Platform, run the following command:
$ oc get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | oc -n <central namespace> apply -f -
For Kubernetes, run the following command:
$ kubectl get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | kubectl -n <central namespace> apply -f -
Set the image for the Central deployment using the version string shown in Step 3 as the image tag. For example, run the following command:
For OpenShift Container Platform, run the following command:
$ oc set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>
For Kubernetes, run the following command:
$ kubectl set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>
Verification
Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example:
Clone to Migrate ".previous", ""
- Reinstall the Operator on the rolled back channel. For example, 3.71.3 is installed on the rhacs-3.71 channel.
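If you reinstall from the CLI instead of the web console, a Subscription manifest similar to the following sketch selects the rolled back channel. The catalog source and namespace shown are assumptions based on a default installation; adjust them to match your environment:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: rhacs-operator
spec:
  channel: rhacs-3.71
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace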
1.2. Rolling back an Operator upgrade by using the web console
You can roll back the Operator version by using the OpenShift Container Platform web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Locate the RHACS Operator and click on it.
- On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates.
Determine the previous version you want to roll back to by choosing one of the following options:
If the current Central instance is running, you can query the RHACS API to get the rollback version by running the following command from a terminal window:
$ curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo
You can create a pod and extract the previous version by performing the following steps:
Note: This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed.
- Navigate to Workloads → Deployments → central.
- Under Deployment details, click the down arrow next to the pod count to scale down the pod.
Navigate to Workloads → Pods → Create Pod and paste the contents of the pod spec as shown in the following example into the editor:
apiVersion: v1
kind: Pod
metadata:
  name: get-previous-db-version
spec:
  containers:
  - name: get-previous-db-version
    image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version>
    command:
    - sh
    args:
    - '-c'
    - "cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '"
    volumeMounts:
    - name: stackrox-db
      mountPath: /var/lib/stackrox
  volumes:
  - name: stackrox-db
    persistentVolumeClaim:
      claimName: stackrox-db
- Click Create.
- After the pod is created, click the Logs tab to get the version string.
Update the rollback configuration by performing the following steps:
- Navigate to Workloads → ConfigMaps → central-config and select Edit ConfigMap from the Actions list.
- Find the forceRollbackVersion line in the value of the central-config.yaml key.
- Replace none with the rollback version, for example 3.73.3, and then save the file.
Update Central to the earlier version by performing the following steps:
- Navigate to Workloads → Deployments → central and select Edit Deployment from the Actions list.
- Update the image tag to the rollback version that you retrieved, and then save the changes.
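If you are unsure of the format, the container image entry in the Deployment YAML looks similar to the following sketch, where the tag is the rollback version string that you retrieved earlier:
containers:
  - name: central
    image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version>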
Verification
Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example:
Clone to Migrate ".previous", ""
- Reinstall the Operator on the rolled back channel. For example, 3.71.3 is installed on the rhacs-3.71 channel.
1.3. Additional resources
Chapter 2. Upgrading using Helm charts
If you have installed Red Hat Advanced Cluster Security for Kubernetes by using Helm charts, you must perform the following steps to upgrade to the latest version:
- Update the Helm chart.
- Update configuration files for the central-services Helm chart.
- Upgrade the central-services Helm chart.
- Update configuration files for the secured-cluster-services Helm chart.
- Upgrade the secured-cluster-services Helm chart.
To ensure optimal functionality, use the same version for your secured-cluster-services Helm chart and central-services Helm chart.
2.1. Updating the Helm chart repository
You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes.
Prerequisites
- You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository.
Procedure
Update the Red Hat Advanced Cluster Security for Kubernetes chart repository:
$ helm repo update
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
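After the repository is updated, the chart upgrades themselves are typically performed with helm upgrade. The following commands are a sketch only; they assume the default release names stackrox-central-services and stackrox-secured-cluster-services, the stackrox namespace, and that you reuse your existing configuration values. Adjust the release names, namespace, and values files to match your installation:
$ helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values
$ helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values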
2.2. Additional resources
Chapter 3. Manually upgrading using the roxctl CLI
You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version.
You need to perform the manual upgrade procedure only if you used the roxctl CLI to deploy RHACS.
To upgrade RHACS to the latest version, you must perform the following:
- Set the ROX_SCANNER_DB_INIT environment variable
- Back up the Central database
- Upgrade Central
- Upgrade the roxctl CLI
- Upgrade Scanner
- Verify that all secured clusters are upgraded
3.1. Set up the ROX_SCANNER_DB_INIT environment variable
The Scanner DB initContainer requires a new environment variable called ROX_SCANNER_DB_INIT. You must set its value to true before you upgrade.
Procedure
For OpenShift Container Platform, run the following command:
$ oc -n stackrox set env deploy/scanner-db -c init-db ROX_SCANNER_DB_INIT=true
For Kubernetes, run the following command:
$ kubectl -n stackrox set env deploy/scanner-db -c init-db ROX_SCANNER_DB_INIT=true
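Optionally, confirm that the variable is set by listing the environment of the init-db container. This check is a suggestion and not part of the documented procedure; if you use Kubernetes, enter kubectl instead of oc:
$ oc -n stackrox set env deploy/scanner-db -c init-db --list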
3.2. Backing up the Central database
You can back up the Central database and use that backup to roll back from a failed upgrade or to restore data in the case of an infrastructure disaster.
Prerequisites
- You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources.
- You have installed the roxctl CLI.
- You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables, as shown in the example after this list.
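For example, the environment variables are typically exported as shown in the following sketch, where the values are placeholders for your own API token and Central address (including the port number):
$ export ROX_API_TOKEN=<api_token>
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>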
Procedure
Run the backup command:
For Red Hat Advanced Cluster Security for Kubernetes 3.0.55 and newer:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup
For Red Hat Advanced Cluster Security for Kubernetes 3.0.54 and older:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" central db backup
Additional resources
3.3. Upgrading the Central cluster
After you have backed up the Central database, the next step is to upgrade the Central cluster. This step includes upgrading Central, the roxctl CLI, and the Scanner.
3.3.1. Upgrading Central
You can update Central to the latest version by downloading and deploying the updated images.
3.3.1.1. Upgrading Central on OpenShift Container Platform
If you installed Red Hat Advanced Cluster Security for Kubernetes on OpenShift Container Platform, use the following procedure to upgrade.
Procedure
Patch the local role:
$ oc -n stackrox patch role edit -p '{"rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["create","get", "list", "watch", "update", "patch", "delete","deletecollection"]}]}'
Clean up existing roles and role bindings:
$ oc -n stackrox delete RoleBinding admission-control-use-scc || true
$ oc -n stackrox delete RoleBinding sensor-use-scc || true
$ oc -n stackrox delete Role use-anyuid-scc || true
Set sensor and admission-control to the restricted[-v2] security context constraints by removing the hard-coded security context:
$ oc -n stackrox patch deploy sensor -p '{"spec":{"template":{"spec":{"securityContext":null}}}}' 1
- 1
- Red Hat Advanced Cluster Security for Kubernetes recreates the pods automatically; however, sensor can take some time to restart.
$ oc -n stackrox patch deploy admission-control -p '{"spec":{"template":{"spec":{"securityContext":null}}}}'
Run the following commands to upgrade Central:
$ oc -n stackrox patch deploy/central -p '{"spec":{"template":{"spec":{"containers":[{"name":"central","env":[{"name":"ROX_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}]}]}}}}'
$ oc -n stackrox patch deployment/scanner -p '{"spec":{"template":{"spec":{"containers":[{"name":"scanner","securityContext":{"runAsUser":65534}}]}}}}'
$ oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:3.74.9 1
- 1
- If you deploy images from a private image registry, push the new image into your private registry, and replace the image registry address here.
Important: If you have not installed Red Hat Advanced Cluster Security for Kubernetes by using Helm or Operator, and you want to enable authentication using the OpenShift OAuth server, you must run the following additional commands:
$ oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true
$ oc -n stackrox patch serviceaccount/central -p '{"metadata":{"annotations":{"serviceaccounts.openshift.io/oauth-redirecturi.main":"sso/providers/openshift/callback","serviceaccounts.openshift.io/oauth-redirectreference.main":"{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"central\"}}"}}}'
Verification
Verify that the new pods have deployed:
$ oc get deploy -n stackrox -o wide
$ oc get pod -n stackrox --watch
3.3.1.2. Upgrading Central on Kubernetes
If you installed Red Hat Advanced Cluster Security for Kubernetes on Kubernetes, use the following procedure to upgrade.
Prerequisites
- If you deploy images from a private image registry, first push the new image into your private registry, and then replace your image registry in the following commands.
Procedure
Patch the local role:
$ kubectl -n stackrox patch role edit -p '{"rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["create","get", "list", "watch", "update", "patch", "delete","deletecollection"]}]}'
Run the following commands to upgrade Central:
$ kubectl -n stackrox patch deploy/central -p '{"spec":{"template":{"spec":{"containers":[{"name":"central","env":[{"name":"ROX_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}]}]}}}}'
$ kubectl -n stackrox patch deployment/scanner -p '{"spec":{"template":{"spec":{"containers":[{"name":"scanner","securityContext":{"runAsUser":65534}}]}}}}'
$ kubectl -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:3.74.9 1
- 1
- If you deploy images from a private image registry, push the new image into your private registry, and replace the image registry address here.
Verification
Verify that the new pods have deployed:
$ kubectl get deploy -n stackrox -o wide
$ kubectl get pod -n stackrox --watch
3.3.2. Upgrading the roxctl CLI
To upgrade the roxctl
CLI to the latest version you must uninstall the existing version of roxctl
CLI and then install the latest version of the roxctl
CLI.
3.3.2.1. Uninstalling the roxctl CLI
You can uninstall the roxctl CLI binary on Linux by using the following procedure.
Procedure
Find and delete the roxctl binary:
$ ROXPATH=$(which roxctl) && rm -f $ROXPATH 1
- 1
- Depending on your environment, you might need administrator rights to delete the roxctl binary.
3.3.2.2. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
Procedure
Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.74.9/bin/Linux/roxctl
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
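For example, assuming that /usr/local/bin is on your PATH, move the binary there; depending on your environment, you might need administrator rights:
$ sudo mv roxctl /usr/local/bin/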
Verification
Verify the roxctl version you have installed:
$ roxctl version
3.3.2.3. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
Procedure
Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.74.9/bin/Darwin/roxctl
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
Verification
Verify the roxctl version you have installed:
$ roxctl version
3.3.2.4. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
Procedure
Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.74.9/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:
$ roxctl version
After you upgrade the roxctl CLI, you can upgrade Scanner.
3.3.3. Upgrading Scanner
You can update Scanner to the latest version by using the roxctl CLI.
Prerequisites
- If you deploy images from a private image registry, you must first push the new image into your private registry, then edit the commands in the following section to use the name of your private image registry.
Procedure
If you have created custom scanner configurations, you must apply those changes before updating the scanner configuration file.
Generate Scanner using the following roxctl command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" scanner generate
Apply the TLS secrets YAML file:
If you use OpenShift Container Platform, enter the following command:
$ oc apply -f scanner-bundle/scanner/02-scanner-03-tls-secret.yaml
If you use Kubernetes, enter the following command:
$ kubectl apply -f scanner-bundle/scanner/02-scanner-03-tls-secret.yaml
Apply the Scanner configuration YAML file:
If you use OpenShift Container Platform, enter the following command:
$ oc apply -f scanner-bundle/scanner/02-scanner-04-scanner-config.yaml
If you use Kubernetes, enter the following command:
$ kubectl apply -f scanner-bundle/scanner/02-scanner-04-scanner-config.yaml
Update the Scanner image:
If you use OpenShift Container Platform, enter the following command:
$ oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:3.74.9
If you use Kubernetes, enter the following command:
$ kubectl -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:3.74.9
Update the Scanner database image:
If you use OpenShift Container Platform, enter the following command:
$ oc -n stackrox set image deploy/scanner-db db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:3.74.9 init-db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:3.74.9
If you use Kubernetes, enter the following command:
$ kubectl -n stackrox set image deploy/scanner-db db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:3.74.9 init-db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:3.74.9
Verification
Check that the new pods have deployed successfully:
If you use OpenShift Container Platform, enter the following command:
$ oc get pod -n stackrox --watch
If you use Kubernetes, enter the following command:
$ kubectl get pod -n stackrox --watch
3.3.3.1. Upgrading to RHACS version 3.71
If you are upgrading to RHACS 3.71 using the roxctl CLI and YAML files, you need to perform some additional steps. The Scanner DB image no longer mounts the scanner-db-password Kubernetes Secret into the db Scanner DB container. Instead, scanner-db-password is only used in the init container, init-db. Therefore, you must add the POSTGRES_PASSWORD_FILE environment variable to the init container configuration. The init container must also mount the scanner-db-tls-volume and scanner-db-password volumes. The following section provides the upgrade steps for RHACS if you are using OpenShift Container Platform or Kubernetes. For more information about init containers, see the Kubernetes documentation.
Prerequisites
- This procedure assumes the db container in the Scanner DB configuration is at index 0, which is the first entry in the containers list, and the scanner-db-password volume mount is at index 2, which is the third entry.
While this scenario applies to most deployments, check the configuration for Scanner DB before entering these commands. If your values differ, you must adjust the …/containers/x/volumeMounts/y value in the following commands.
Procedure
Apply the patch:
If you use OpenShift Container Platform, enter the following command:
$ oc -n stackrox patch deployment.apps/scanner-db --patch '{"spec":{"template":{"spec":{"initContainers":[{"name":"init-db","env":[{"name":"POSTGRES_PASSWORD_FILE","value":"/run/secrets/stackrox.io/secrets/password"}],"command":["/usr/local/bin/docker-entrypoint.sh","postgres","-c","config_file=/etc/postgresql.conf"],"volumeMounts":[{"name":"db-data","mountPath":"/var/lib/postgresql/data"},{"name":"scanner-db-tls-volume","mountPath":"/run/secrets/stackrox.io/certs","readOnly":true},{"name":"scanner-db-password","mountPath":"/run/secrets/stackrox.io/secrets","readOnly":true}],"securityContext":{"runAsGroup":70,"runAsNonRoot":true,"runAsUser":70}}]}}}}'
If you use Kubernetes, enter the following command:
$ kubectl -n stackrox patch deployment.apps/scanner-db --patch '{"spec":{"template":{"spec":{"initContainers":[{"name":"init-db","env":[{"name":"POSTGRES_PASSWORD_FILE","value":"/run/secrets/stackrox.io/secrets/password"}],"command":["/usr/local/bin/docker-entrypoint.sh","postgres","-c","config_file=/etc/postgresql.conf"],"volumeMounts":[{"name":"db-data","mountPath":"/var/lib/postgresql/data"},{"name":"scanner-db-tls-volume","mountPath":"/run/secrets/stackrox.io/certs","readOnly":true},{"name":"scanner-db-password","mountPath":"/run/secrets/stackrox.io/secrets","readOnly":true}],"securityContext":{"runAsGroup":70,"runAsNonRoot":true,"runAsUser":70}}]}}}}'
Remove the scanner-db-password volume mount from the db container:
If you use OpenShift Container Platform, enter the following command:
$ oc -n stackrox patch deployment.apps/scanner-db --type json --patch '[{"op":"remove","path":"/spec/template/spec/containers/0/volumeMounts/2"}]'
If you use Kubernetes, enter the following command:
$ kubectl -n stackrox patch deployment.apps/scanner-db --type json --patch '[{"op":"remove","path":"/spec/template/spec/containers/0/volumeMounts/2"}]'
3.3.4. Verifying the Central cluster upgrade
After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete.
Procedure
Check the Central logs:
If you are using OpenShift Container Platform, enter the following command:
$ oc logs -n stackrox deploy/central -c central
If you are using Kubernetes, enter the following command:
$ kubectl logs -n stackrox deploy/central -c central
Sample output of a successful upgrade
No database restore directory found (this is not an error).
Migrator: 2019/10/25 17:58:54: starting DB compaction
Migrator: 2019/10/25 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact
badger 2019/10/25 17:58:54 INFO: All 1 tables opened in 2ms
badger 2019/10/25 17:58:55 INFO: Replaying file id: 0 at offset: 846357
badger 2019/10/25 17:58:55 INFO: Replay took: 50.324µs
badger 2019/10/25 17:58:55 DEBUG: Value log discard stats empty
Migrator: 2019/10/25 17:58:55: DB is up to date. Nothing to do here.
badger 2019/10/25 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
version: 2019/10/25 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We're good to go!
3.4. Upgrading all secured clusters
After upgrading Central services, you must upgrade all secured clusters.
If you are using automatic upgrades:
- Update all your secured clusters by using automatic upgrades.
- Skip the instructions in this section and follow the instructions in the Verify upgrades and Revoking the API token sections.
If you are not using automatic upgrades, you must run the instructions in this section on all secured clusters including the Central cluster.
- To ensure optimal functionality, use the same RHACS version for your secured clusters and the cluster on which Central is installed.
To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions in this section.
3.4.1. Update ValidatingWebhookConfiguration
Earlier RHACS versions included an incorrect entry in the ValidatingWebhookConfiguration. To fix it, you must update the ValidatingWebhookConfiguration.
Procedure
If you have enabled listenOnEvents in your Admission controller, you must run the following command:
$ oc patch validatingwebhookconfiguration stackrox -p '{"webhooks":[{"name": "k8sevents.stackrox.io", "rules": [{"apiGroups": ["*"], "apiVersions": ["*"], "operations": ["CONNECT"], "resources": ["pods", "pods/exec", "pods/portforward"]}]}]}' 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
3.4.2. Updating other images
You must update the Sensor, Collector, and Compliance images on each secured cluster when you are not using automatic upgrades.
If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.
Procedure
Update the Sensor image:
$ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:3.74.9 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
Update the Compliance image:
$ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:3.74.9 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
Update the Collector image:
$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:3.74.9 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
Note: If you are using the collector slim image, run the following command instead:
$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:3.74.9
Update the admission control image:
$ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:3.74.9
3.4.3. Verifying secured cluster upgrade
After you have upgraded secured clusters, verify that the updated pods are working.
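For example, you can watch the pods in the stackrox namespace on each secured cluster, as in the earlier verification steps. If you use Kubernetes, enter kubectl instead of oc:
$ oc get pod -n stackrox --watch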
3.5. Rolling back Central
You can roll back to a previous version of Central if the upgrade to a new version is unsuccessful.
3.5.1. Rolling back Central normally
You can roll back to a previous version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails.
Prerequisites
- You must be using Red Hat Advanced Cluster Security for Kubernetes 3.0.57.0 or higher.
- Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.
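One way to check the free space on the Central volume is to run df inside the Central pod, as shown in the following sketch. This check is a suggestion and not part of the documented procedure; if you use Kubernetes, enter kubectl instead of oc:
$ oc -n stackrox exec deploy/central -- df -h /var/lib/stackrox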
Procedure
Run the following command to roll back to a previous version when an upgrade fails (before the Central service starts):
$ oc -n stackrox rollout undo deploy/central 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
3.5.2. Rolling back Central forcefully
You can use forced rollback to roll back to an earlier version of Central (after the Central service starts).
Using forced rollback to switch back to a previous version might result in loss of data and functionality.
Prerequisites
- You must be using Red Hat Advanced Cluster Security for Kubernetes 3.0.58.0 or higher.
- Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.
Procedure
Run the following commands to perform a forced rollback:
To forcefully roll back to the previously installed version:
$ oc -n stackrox rollout undo deploy/central 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
To forcefully roll back to a specific version:
Edit Central's ConfigMap:
$ oc -n stackrox edit configmap/central-config 1
- 1
- If you use Kubernetes, enter kubectl instead of oc.
Update the value of the maintenance.forceRollbackVersion key:
data:
  central-config.yaml: |
    maintenance:
      safeMode: false
      compaction:
        enabled: true
        bucketFillFraction: .5
        freeFractionThreshold: 0.75
      forceRollbackVersion: <x.x.x.x> 1
    ...
- 1
- Specify the version that you want to roll back to.
Update the Central image version:
$ oc -n stackrox \ 1
  set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> 2
- 1
- If you use Kubernetes, enter kubectl instead of oc.
- 2
- Specify the version that you want to roll back to.
3.6. Verifying upgrades
The updated Sensors and Collectors continue to report the latest data from each secured cluster.
The last time Sensor contacted Central is visible in the RHACS portal.
Procedure
- On the RHACS portal, navigate to Platform Configuration → System Health.
- Check to ensure that Sensor Upgrade shows clusters up to date with Central.
3.7. Revoking the API token
For security reasons, Red Hat recommends that you revoke the API token that you have used to complete Central database backup.
Prerequisites
- After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal.
Procedure
- On the RHACS portal, navigate to Platform Configuration → Integrations.
- Scroll down to the Authentication Tokens category, and click API Token.
- Select the checkbox in front of the token name that you want to revoke.
- Click Revoke.
- On the confirmation dialog box, click Confirm.