Chapter 9. Upgrading RHACS Cloud Service
9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator
Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service.
You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service.
9.1.1. Preparing to upgrade
Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps:
- If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF. For more information, see "Changing the collection method".
9.1.1.1. Changing the collection method
If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per-node collection setting is set to CORE_BPF before you upgrade.
Procedure
- In the OpenShift Container Platform web console, go to the RHACS Operator page.
- In the top navigation menu, select Secured Cluster.
- Click the instance name, for example, stackrox-secured-cluster-services.
- Use one of the following methods to change the setting:
  - In the Form view, under Per Node Settings → Collector Settings → Collection, select CORE_BPF.
  - Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF, then change it to CORE_BPF.
- Click Save.
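For reference, after the change the relevant portion of the SecuredCluster CR looks like the following sketch (the instance name is the example used above; surrounding fields are omitted):

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  perNode:
    collector:
      collection: CORE_BPF   # previously KernelModule or EBPF
```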
9.1.2. Rolling back an Operator upgrade for secured clusters
To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console.
On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster.
9.1.2.1. Rolling back an Operator upgrade by using the CLI
You can roll back the Operator version by using command-line interface (CLI) commands.
Procedure
Delete the Operator Lifecycle Manager (OLM) subscription and cluster service version (CSV):
Note
If you use Kubernetes, enter kubectl instead of oc.
To delete the OLM subscription, run the following command:
$ oc -n rhacs-operator delete subscription rhacs-operator
The command returns the following output:
subscription.operators.coreos.com "rhacs-operator" deleted
To delete the CSV, run the following command:
$ oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
The command returns the following output:
clusterserviceversion.operators.coreos.com "rhacs-operator.v4.8.4" deleted
- Install the latest version of the Operator on the rolled back channel.
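If you prefer to recreate the subscription declaratively, the following sketch shows an OLM Subscription pinned to a channel. The channel, approval mode, and startingCSV values here are illustrative and depend on the version you are rolling back to:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: rhacs-operator
spec:
  channel: stable                      # illustrative: the rolled-back channel
  installPlanApproval: Manual          # prevents an immediate automatic re-upgrade
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: rhacs-operator.v4.8.3   # illustrative version
```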
9.1.2.2. Rolling back an Operator upgrade by using the web console
You can roll back the Operator version by using the OpenShift Container Platform web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- In the OpenShift web console, click Ecosystem → Installed Operators.
- From the list of projects, select rhacs-operator.
- Locate the Advanced Cluster Security for Kubernetes Operator.
- Click the overflow menu, and then select Uninstall Operator. The Uninstall Operator dialog is displayed.
- Ensure that the Delete all operand instances for this operator checkbox is clear to avoid uninstallation of Red Hat Advanced Cluster Security for Kubernetes (RHACS).
- Click Uninstall.
- Install the latest version of the Operator on the rolled back channel.
9.1.3. Troubleshooting Operator upgrade issues
Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator.
9.1.3.1. Central or Secured cluster fails to deploy
If the RHACS Operator meets either of the following conditions, you must check the custom resource conditions to find the issue:
- The Operator fails to deploy the secured cluster.
- The Operator fails to apply CR changes to the actual resources.
For secured clusters, run the following command to check the conditions:
$ oc -n rhacs-operator describe securedclusters.platform.stackrox.io
You can identify configuration errors from the conditions output:
Example output
Conditions:
  Last Transition Time:  2023-04-19T10:49:57Z
  Status:                False
  Type:                  Deployed
  Last Transition Time:  2023-04-19T10:49:57Z
  Status:                True
  Type:                  Initialized
  Last Transition Time:  2023-04-19T10:59:10Z
  Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
  Reason:                ReconcileError
  Status:                True
  Type:                  Irreconcilable
  Last Transition Time:  2023-04-19T10:49:57Z
  Message:               No proxy configuration is desired
  Reason:                NoProxyConfig
  Status:                False
  Type:                  ProxyConfigFailed
  Last Transition Time:  2023-04-19T10:49:57Z
  Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
  Reason:                InstallError
  Status:                True
  Type:                  ReleaseFailed
Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs:
$ oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager
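The conditions block pairs each Type with a Status, so active problems can be pulled out mechanically. A minimal sketch that filters a saved copy of the describe output (the conditions.txt file and its sample content are illustrative):

```shell
# Write a small sample of the conditions output shown above to a local file.
cat > conditions.txt <<'EOF'
  Last Transition Time:  2023-04-19T10:49:57Z
  Status:                False
  Type:                  Deployed
  Last Transition Time:  2023-04-19T10:59:10Z
  Message:               Deployment.apps "central" is invalid
  Reason:                ReconcileError
  Status:                True
  Type:                  Irreconcilable
EOF
# Pair each Status line with its Type line, then keep only active (True) types.
grep -E 'Status:|Type:' conditions.txt | paste - - | awk '$2 == "True" {print $4}'
```

In this sample only the Irreconcilable condition is active, so it is the only type printed.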
9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts
You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts.
If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command.
9.2.1. Updating the Helm chart repository
You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes.
Prerequisites
- You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository.
- You must be using Helm version 3.8.3 or newer.
Procedure
Update the Red Hat Advanced Cluster Security for Kubernetes chart repository:
$ helm repo update
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
9.2.2. Running the Helm upgrade command
You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Prerequisites
- You must have access to the values-private.yaml configuration file that you used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands.
Procedure
Run the helm upgrade command and specify the configuration files by using the -f option:
$ helm upgrade -n stackrox stackrox-secured-cluster-services \
  rhacs/secured-cluster-services --version <current-rhacs-version> \
  -f values-private.yaml
9.3. Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI
You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI.
You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters.
9.3.1. Upgrading the roxctl CLI
To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI.
9.3.1.1. Uninstalling the roxctl CLI
You can uninstall the roxctl CLI binary on Linux by using the following procedure.
Procedure
Find and delete the roxctl binary:
$ ROXPATH=$(which roxctl) && rm -f $ROXPATH
Note
Depending on your environment, you might need administrator rights to delete the roxctl binary.
9.3.1.2. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.
Procedure
Determine the roxctl architecture for the target operating system:
$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the roxctl CLI:
$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Linux/roxctl${arch}"
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
Verification
Verify the roxctl version that you have installed:
$ roxctl version
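The arch variable set in the first step simply turns the machine name into a file-name suffix: empty for x86_64, and "-<arch>" otherwise. A small sketch of that logic, runnable anywhere (the function name is illustrative):

```shell
# Derive the mirror-file suffix: empty for x86_64, "-<arch>" otherwise.
arch_suffix() {
  local a
  a="$(printf '%s' "$1" | sed 's/x86_64//')"
  printf '%s' "${a:+-$a}"
}

echo "x86_64 -> roxctl$(arch_suffix x86_64)"
echo "s390x  -> roxctl$(arch_suffix s390x)"
```

On an x86_64 machine the download URL therefore ends in plain roxctl, while on s390x it ends in roxctl-s390x.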
9.3.1.3. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure
Determine the roxctl architecture for the target operating system:
$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the roxctl CLI:
$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Darwin/roxctl${arch}"
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the roxctl binary executable:
$ chmod +x roxctl
Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
Verification
Verify the roxctl version that you have installed:
$ roxctl version
9.3.1.4. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the roxctl CLI:
$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Windows/roxctl.exe
Verification
Verify the roxctl version that you have installed:
$ roxctl version
9.3.2. Upgrading all secured clusters manually
To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters.
To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions.
9.3.2.1. Updating other images
You must update the Sensor, Collector, Compliance, and Admission controller images on each secured cluster when you are not using automatic upgrades.
If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.
Procedure
Update the Sensor image:
$ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.1
Update the Compliance image:
$ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.1
Update the Collector image:
$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.10.1
Update the Admission controller image:
$ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.1
Important
If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section.
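All four commands follow one naming pattern: only Collector uses the rhacs-collector-rhel8 repository, while the other components share rhacs-main-rhel8. A hypothetical helper for composing these references for a target version (the function name is illustrative; the registry paths are the ones used in the commands above):

```shell
# Compose the image reference for a component at a given RHACS version.
image_for() {
  local repo
  case "$1" in
    collector) repo=rhacs-collector-rhel8 ;;
    *)         repo=rhacs-main-rhel8 ;;
  esac
  printf 'registry.redhat.io/advanced-cluster-security/%s:%s\n' "$repo" "$2"
}

image_for sensor 4.10.1
image_for collector 4.10.1
```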
9.3.2.2. Migrating SCCs during the manual upgrade
By migrating the security context constraints (SCCs) during the manual upgrade by using the roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.
Procedure
List all of the RHACS services that are deployed on all secured clusters:
$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'
Example output
Name: admission-control-6f4dcc6b4c-2phwd
openshift.io/scc: stackrox-admission-control
#...
Name: central-575487bfcb-sjdx8
openshift.io/scc: stackrox-central
Name: central-db-7c7885bb-6bgbd
openshift.io/scc: stackrox-central-db
Name: collector-56nkr
openshift.io/scc: stackrox-collector
#...
Name: scanner-68fc55b599-f2wm6
openshift.io/scc: stackrox-scanner
Name: scanner-68fc55b599-fztlh
#...
Name: sensor-84545f86b7-xgdwf
openshift.io/scc: stackrox-sensor
#...
In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.
- Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs.
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps:
Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:
Example 9.1. Example YAML file
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: use-privileged-scc
  namespace: stackrox
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: collector-use-scc
  namespace: stackrox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-privileged-scc
subjects:
- kind: ServiceAccount
  name: collector
  namespace: stackrox
---
where:
kind: Role
  Specifies the type of Kubernetes resource, in this example, Role.
metadata.name: <rolename>
  Specifies the name of the role resource.
metadata.namespace
  Specifies the namespace in which the role is created.
rules
  Specifies the permissions granted by the role resource.
kind: RoleBinding
  Specifies the type of Kubernetes resource, in this example, RoleBinding.
metadata.name: <rolebindingname>
  Specifies the name of the role binding resource.
roleRef
  Specifies the role to bind; the role must be in the same namespace.
subjects
  Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:
$ oc -n stackrox create -f ./upgrade-scs.yaml
Important
You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file.
Delete the SCCs that are specific to RHACS by running the following command:
$ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor
Important
You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.
Verification
Ensure that all the pods are using the correct SCCs by running the following command:
$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'
Compare the output with the following table:

Component             Previous custom SCC         New Red Hat OpenShift 4 SCC
Central               stackrox-central            nonroot-v2
Central-db            stackrox-central-db         nonroot-v2
Scanner               stackrox-scanner            nonroot-v2
Scanner-db            stackrox-scanner            nonroot-v2
Admission Controller  stackrox-admission-control  restricted-v2
Collector             stackrox-collector          privileged
Sensor                stackrox-sensor             restricted-v2
9.3.2.2.1. Verifying secured cluster upgrade
After you have upgraded secured clusters, verify that the updated pods are working.
If you use Kubernetes, enter kubectl instead of oc.
Procedure
Check that the new pods have deployed:
$ oc get deploy,ds -n stackrox -o wide
$ oc get pod -n stackrox --watch
9.3.3. Enabling RHCOS node scanning with the StackRox Scanner
If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Use Scanner V4 for full functionality when scanning nodes. For instructions on changing to Scanner V4 if you are using the StackRox scanner, see "Enabling Scanner V4".
Prerequisites
- For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
- This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner".
Procedure
Run one of the following commands to update the compliance container.
For a default compliance container with metrics disabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
For a compliance container with Prometheus metrics enabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
Update the Collector DaemonSet (DS) by taking the following steps:
Add new volume mounts to Collector DS by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
Add the new NodeScanner container by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.10.1","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
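These patch payloads are long, single-line JSON strings, so a quoting or bracket slip is easy to make. A small sketch for checking a payload locally before handing it to oc patch (assumes python3 is available; jq would work equally well):

```shell
# Validate a patch payload as JSON before applying it to the cluster.
patch='{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
if printf '%s' "$patch" | python3 -m json.tool > /dev/null 2>&1; then
  echo "patch OK"
else
  echo "patch invalid"
fi
```

Run the check first, and apply the payload with oc patch only after it prints "patch OK".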