Chapter 3. Manually upgrading using the roxctl CLI
You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version.
- You need to perform the manual upgrade procedure only if you used the roxctl CLI to install RHACS.
- There are manual steps for each version upgrade that must be followed, for example, from version 3.74 to version 4.0, and from version 4.0 to version 4.1. Therefore, Red Hat recommends upgrading first from 3.74 to 4.0, then from 4.0 to 4.1, then from 4.1 to 4.2, until the selected version is installed. For full functionality, Red Hat recommends upgrading to the most recent version.
- Upgrading to RHACS 4.8 includes an upgrade to PostgreSQL 15 and requires additional free disk space. Before starting the upgrade, ensure that you have enough free disk space, ideally almost double the size of your existing database.
To upgrade RHACS to the latest version, perform the following steps:
3.1. Backing up the Central database
You can back up the Central database and use that backup to roll back from a failed upgrade or to restore data in the case of an infrastructure disaster.
Prerequisites
- You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources.
- You have installed the roxctl CLI.
- You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables.
Procedure
Run the backup command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup
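The prerequisites above can be folded into a small pre-flight wrapper so that a missing variable fails fast with a clear message. This is a sketch; backup_central is a hypothetical helper name, not part of the roxctl CLI.

```shell
# Sketch: refuse to run the backup unless both required environment
# variables are set, then run the documented command unchanged.
backup_central() {
  : "${ROX_API_TOKEN:?ROX_API_TOKEN must be set (see Prerequisites)}"
  : "${ROX_CENTRAL_ADDRESS:?ROX_CENTRAL_ADDRESS must be set (see Prerequisites)}"
  roxctl -e "$ROX_CENTRAL_ADDRESS" central backup
}
```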
3.2. Upgrading the roxctl CLI
To upgrade the roxctl CLI to the latest version, you must uninstall the existing version and then install the latest version.
3.2.1. Uninstalling the roxctl CLI
You can uninstall the roxctl CLI binary on Linux by using the following procedure.
Procedure
Find and delete the roxctl binary:

$ ROXPATH=$(which roxctl) && rm -f $ROXPATH

Note: Depending on your environment, you might need administrator rights to delete the roxctl binary.
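The same step can be written a little more defensively. This is a sketch, and remove_roxctl is a hypothetical helper name: command -v is the POSIX replacement for which, and quoting protects installation paths that contain spaces.

```shell
# Sketch: locate the roxctl binary on PATH and delete it. Does nothing
# (and returns nonzero) if roxctl is not found.
remove_roxctl() {
  local roxpath
  roxpath="$(command -v roxctl)" && rm -f "$roxpath"
}
```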
3.2.2. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.
Procedure
Determine the roxctl architecture for the target operating system:

$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

Download the roxctl CLI:

$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.0/bin/Linux/roxctl${arch}"

Make the roxctl binary executable:

$ chmod +x roxctl

Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
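The parameter expansion in the first step maps x86_64 to an empty suffix (the download URL then ends in .../roxctl) and every other architecture to a -<arch> suffix. The logic can be checked locally; arch_suffix is a hypothetical helper that wraps the documented expression.

```shell
# Sketch of the substitution used above: sed deletes "x86_64", and the
# ${var:+-$var} expansion prepends a dash only when something remains.
arch_suffix() {
  local arch
  arch="$(printf '%s' "$1" | sed "s/x86_64//")"
  printf '%s' "${arch:+-$arch}"
}

arch_suffix x86_64   # -> "" (plain roxctl asset)
arch_suffix ppc64le  # -> "-ppc64le"
```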
3.2.3. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure
Determine the roxctl architecture for the target operating system:

$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

Download the roxctl CLI:

$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.0/bin/Darwin/roxctl${arch}"

Remove all extended attributes from the binary:

$ xattr -c roxctl

Make the roxctl binary executable:

$ chmod +x roxctl

Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
3.2.4. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the roxctl CLI:

$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.10.0/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:

$ roxctl version
3.3. Upgrading the Central cluster
After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the next step is to upgrade the Central cluster.
This process requires upgrading the SecurityPolicy custom resource definition (CRD), Central, and Scanner.
3.3.1. Upgrading the SecurityPolicy custom resource definition
You can update the SecurityPolicy custom resource definition (CRD) to the latest version by generating the new CRD and applying it to the cluster.
If you use Kubernetes, enter kubectl instead of oc.
Procedure
Use roxctl to generate a new set of resources by entering the following command:

$ roxctl central generate k8s pvc > bundle.zip

Extract the CRD from the archive by entering the following command:

$ unzip bundle.zip central/00-securitypolicy-crd.yaml

Apply the extracted CRD to your cluster by entering the following command:

$ oc apply -f central/00-securitypolicy-crd.yaml
3.3.2. Upgrading Central
You can update Central to the latest version by downloading and deploying the updated images.
If you use Kubernetes, enter kubectl instead of oc.
Procedure
To update the Central image, run the following command:

$ oc -n stackrox set image deploy/central \
    central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.0

To update the Central-db image, run the following command:

$ oc -n stackrox set image deploy/central-db \
    central-db=registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.10.0 \
    init-db=registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.10.0

To update the config controller image, run the following command:

$ oc -n stackrox set image deploy/config-controller \
    manager=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.0
Verification
Verify that the new pods have deployed:

$ oc get deploy -n stackrox -o wide
$ oc get pod -n stackrox --watch
3.3.3. Upgrading Scanner
You can update Scanner to the latest version by downloading and deploying the updated images.
If you are using Kubernetes, enter the kubectl command instead of the oc command.
Procedure
If you have created custom Scanner configurations, you must apply these changes before updating the Scanner configuration file:

To generate Scanner, run the following command:

$ roxctl -e "$ROX_CENTRAL_ADDRESS" scanner generate

To apply the TLS secrets YAML file, run the following command:

$ oc apply -f scanner-bundle/scanner/02-scanner-03-tls-secret.yaml

To apply the Scanner configuration YAML file, run the following command:

$ oc apply -f scanner-bundle/scanner/02-scanner-04-scanner-config.yaml
To update the Scanner image, run the following command:

$ oc -n stackrox set image deploy/scanner \
    scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.10.0

To update the Scanner database image, run the following command:

$ oc -n stackrox set image deploy/scanner-db \
    db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:4.10.0 \
    init-db=registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:4.10.0
Verification
To verify that the new pods have been deployed, run the following commands:

$ oc get deploy -n stackrox -o wide
$ oc get pod -n stackrox --watch
3.3.4. Verifying the Central cluster upgrade
After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete.
If you use Kubernetes, enter kubectl instead of oc.
Procedure
Check the Central logs by running the following command:

$ oc logs -n stackrox deploy/central -c central

Example output

No database restore directory found (this is not an error).
Migrator: 2023/04/19 17:58:54: starting DB compaction
Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact
badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms
badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357
badger 2023/04/19 17:58:55 INFO: Replay took: 50.324µs
badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty
Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here.
badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We're good to go!
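If you want to check the log programmatically, you can filter for the migrator's success lines shown in the example output. This is a sketch; central_upgrade_ok is a hypothetical helper that reads log text on stdin.

```shell
# Sketch: succeed if the migrator reported an up-to-date/current database.
# On a real cluster, pipe in:
#   oc logs -n stackrox deploy/central -c central | central_upgrade_ok
central_upgrade_ok() {
  grep -Eq "DB is up to date|Version found in the DB was current"
}
```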
3.4. Upgrading all secured clusters
After upgrading Central services, you must upgrade all secured clusters. You can use automatic upgrades or manual upgrades depending on your configuration.
Be aware of these important guidelines when upgrading your secured clusters.
If you are using automatic upgrades, follow this guidance:
- Update all your secured clusters by using automatic upgrades.
- For information about troubleshooting problems with the automatic cluster upgrader, see "Troubleshooting the cluster upgrader".
- Skip the instructions in this section and follow the instructions in the "Verify upgrades" and "Revoking the API token" sections.
If you are not using automatic upgrades, you must run the instructions in this section on all secured clusters, including the Central cluster.
- To ensure optimal functionality, use the same RHACS version for your secured clusters and the cluster on which Central is installed.
To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions that are provided.
3.4.1. Updating other images
If you are not using automatic upgrades, you must update the Sensor, Collector, and Compliance images on each secured cluster.
If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.
Procedure
Update the Sensor image:

$ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.0

Update the Compliance image:

$ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.0

Update the Collector image:

$ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.10.0

Update the admission control image:

$ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.10.0

Important: If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section.
3.4.2. Adding POD_NAMESPACE to sensor and admission-control deployments
When upgrading to version 4.6 or later from a version earlier than 4.6, you must patch the sensor and admission-control deployments to set the POD_NAMESPACE environment variable.
If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.
Procedure
Patch sensor to ensure POD_NAMESPACE is set by running the following command:

$ [[ -z "$(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'

Patch admission-control to ensure POD_NAMESPACE is set by running the following command:

$ [[ -z "$(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]'
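The guard at the start of both commands makes the patch idempotent: it only fires when the deployment YAML does not already mention POD_NAMESPACE. The check can be exercised without a cluster; needs_pod_namespace is a hypothetical helper, and the manifest strings stand in for the output of oc get deployment ... -o yaml.

```shell
# Sketch of the guard: exit 0 (patch needed) when the manifest lacks
# POD_NAMESPACE, exit nonzero (skip) when it is already present.
needs_pod_namespace() {
  [ -z "$(printf '%s' "$1" | grep POD_NAMESPACE)" ]
}
```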
3.4.3. Migrating SCCs during the manual upgrade
By migrating the security context constraints (SCCs) during the manual upgrade by using roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.
Procedure
List all of the RHACS services that are deployed on Central and all secured clusters:

$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

Example output

Name: admission-control-6f4dcc6b4c-2phwd
openshift.io/scc: stackrox-admission-control
#...
Name: central-575487bfcb-sjdx8
openshift.io/scc: stackrox-central
Name: central-db-7c7885bb-6bgbd
openshift.io/scc: stackrox-central-db
Name: collector-56nkr
openshift.io/scc: stackrox-collector
#...
Name: scanner-68fc55b599-f2wm6
openshift.io/scc: stackrox-scanner
Name: scanner-68fc55b599-fztlh
#...
Name: sensor-84545f86b7-xgdwf
openshift.io/scc: stackrox-sensor
#...

In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.

Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs.
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for the Central cluster, complete the following steps:
Create a file named update-central.yaml that defines the role and role binding resources by using the following content:

Example 3.1. Example YAML file

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: central
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: use-central-db-scc
  namespace: stackrox
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - nonroot-v2
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: central
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: use-central-scc
  namespace: stackrox
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - nonroot-v2
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: scanner
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: use-scanner-scc
  namespace: stackrox
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - nonroot-v2
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: central
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: central-db-use-scc
  namespace: stackrox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-central-db-scc
subjects:
- kind: ServiceAccount
  name: central-db
  namespace: stackrox
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: central
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: central-use-scc
  namespace: stackrox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-central-scc
subjects:
- kind: ServiceAccount
  name: central
  namespace: stackrox
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: scanner
    app.kubernetes.io/instance: stackrox-central-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-central-services
    app.kubernetes.io/version: 4.4.0
  name: scanner-use-scc
  namespace: stackrox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-scanner-scc
subjects:
- kind: ServiceAccount
  name: scanner
  namespace: stackrox

where:
kind: Role
- Specifies the type of Kubernetes resource, in this example, Role.
metadata.name: <rolename>
- Specifies the name of the role resource.
metadata.namespace
- Specifies the namespace in which the role is created.
rules
- Specifies the permissions granted by the role resource.
kind: RoleBinding
- Specifies the type of Kubernetes resource, in this example, RoleBinding.
metadata.name: <rolebindingname>
- Specifies the name of the role binding resource.
roleRef
- Specifies the role to bind in the same namespace.
subjects
- Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the update-central.yaml file by running the following command:

$ oc -n stackrox create -f ./update-central.yaml
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps:
Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:

Example 3.2. Example YAML file

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: use-privileged-scc
  namespace: stackrox
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: collector-use-scc
  namespace: stackrox
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-privileged-scc
subjects:
- kind: ServiceAccount
  name: collector
  namespace: stackrox

where:
kind: Role
- Specifies the type of Kubernetes resource, in this example, Role.
metadata.name: <rolename>
- Specifies the name of the role resource.
metadata.namespace
- Specifies the namespace in which the role is created.
rules
- Specifies the permissions granted by the role resource.
kind: RoleBinding
- Specifies the type of Kubernetes resource, in this example, RoleBinding.
metadata.name: <rolebindingname>
- Specifies the name of the role binding resource.
roleRef
- Specifies the role to bind in the same namespace.
subjects
- Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:

$ oc -n stackrox create -f ./upgrade-scs.yaml

Important: You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file.
Delete the SCCs that are specific to RHACS:
To delete the SCCs that are specific to the Central cluster, run the following command:

$ oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner

To delete the SCCs that are specific to all secured clusters, run the following command:

$ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor

Important: You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.
Verification
Ensure that all the pods are using the correct SCCs by running the following command:

$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

Compare the output with the following table:

Component            | Previous custom SCC        | New Red Hat OpenShift 4 SCC
---------------------+----------------------------+----------------------------
Central              | stackrox-central           | nonroot-v2
Central-db           | stackrox-central-db        | nonroot-v2
Scanner              | stackrox-scanner           | nonroot-v2
Scanner-db           | stackrox-scanner           | nonroot-v2
Admission Controller | stackrox-admission-control | restricted-v2
Collector            | stackrox-collector         | privileged
Sensor               | stackrox-sensor            | restricted-v2
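For scripted checks, the mapping in the table can be captured in a small lookup. This is a sketch; expected_scc is a hypothetical helper keyed by workload name.

```shell
# Sketch: expected post-migration SCC for each RHACS workload,
# taken directly from the table above.
expected_scc() {
  case "$1" in
    central|central-db|scanner|scanner-db) echo nonroot-v2 ;;
    admission-control|sensor)              echo restricted-v2 ;;
    collector)                             echo privileged ;;
    *) return 1 ;;
  esac
}
```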
3.4.3.1. Verifying secured cluster upgrade
After you have upgraded secured clusters, verify that the updated pods are working.
If you use Kubernetes, enter kubectl instead of oc.
Procedure
Check that the new pods have deployed:
$ oc get deploy,ds -n stackrox -o wide
$ oc get pod -n stackrox --watch
3.5. Enabling RHCOS node scanning with the StackRox Scanner
If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Use Scanner V4 for full functionality when scanning nodes. For instructions on changing to Scanner V4 if you are using the StackRox scanner, see "Enabling Scanner V4".
Prerequisites
- For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
- This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner".
Procedure
Run one of the following commands to update the compliance container.
For a default compliance container with metrics disabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'

For a compliance container with Prometheus metrics enabled, run the following command:

$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
Update the Collector DaemonSet (DS) by taking the following steps:
Add new volume mounts to Collector DS by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'

Add the new NodeScanner container by running the following command:

$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.10.0","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
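These inline patch payloads are long and easy to break while editing. Before applying one, it can help to confirm that it still parses as JSON; this sketch shells out to python3's stdlib json.tool, so no cluster access is needed, and the payload shown is a truncated stand-in for the ones above.

```shell
# Sketch: validate a patch string locally before passing it to "oc patch".
patch='{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}}]}}}}'
if printf '%s' "$patch" | python3 -m json.tool >/dev/null 2>&1; then
  echo "patch is valid JSON"
else
  echo "patch is NOT valid JSON" >&2
fi
```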
3.6. Rolling back Central normally
You can roll back to a previous version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails.
If you use Kubernetes, enter kubectl instead of oc.
Prerequisites
- Enough disk space: Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you cannot roll back to an earlier version.
- Internal database rollback (4.8 or earlier): If you are rolling back from RHACS 4.8 to an earlier version and use the internal database (central-db), you must first restore the database from a PostgreSQL 13 backup.
  - To restore the database, add the RESTORE_BACKUP=true and FORCE_OLD_BINARIES=true environment variables to the central-db and init-db containers of the central-db component.
  - For details on injecting environment variables, see "Injecting an environment variable into the Central deployment".
Procedure
Run the following command to roll back to a previous version when an upgrade fails (before the Central service starts):
$ oc -n stackrox rollout undo deploy/central
3.6.1. Rolling back Central forcefully
You can use forced rollback to roll back to an earlier version of Central (after the Central service starts).
- Using forced rollback to switch back to a previous version might result in loss of data and functionality.
- If you use Kubernetes, enter kubectl instead of oc.
Prerequisites
- Enough disk space: Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you cannot roll back to an earlier version.
- Internal database rollback (4.8 or earlier): If you are rolling back from RHACS 4.8 to an earlier version and use the internal database (central-db), you must first restore the database from a PostgreSQL 13 backup.
  - To restore the database, add the RESTORE_BACKUP=true and FORCE_OLD_BINARIES=true environment variables to the central-db and init-db containers of the central-db component.
  - For details on injecting environment variables, see "Injecting an environment variable into the Central deployment".
Procedure
Run the following commands to perform a forced rollback:
To forcefully roll back to the previously installed version, run the following command:

$ oc -n stackrox rollout undo deploy/central

To forcefully roll back to a specific version, complete the following steps:

Edit the ConfigMap that belongs to Central:

$ oc -n stackrox edit configmap/central-config

Update the value of the maintenance.forceRollbackVersion key:

data:
  central-config.yaml: |
    maintenance:
      safeMode: false
      compaction:
        enabled: true
        bucketFillFraction: .5
        freeFractionThreshold: 0.75
      forceRollbackVersion: <x.x.x.x>
  ...

where:

<x.x.x.x>
- Specifies the version that you want to roll back to.

Update the Central image version:

$ oc -n stackrox \
    set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x>

where:

<x.x.x.x>
- Specifies the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map.
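Because the image tag must match the maintenance.forceRollbackVersion value exactly, a quick consistency check before applying the image update can save a failed rollback. This is a sketch; versions_match is a hypothetical helper.

```shell
# Sketch: compare the version configured in central-config with the tag
# of the image reference you are about to set (everything after the
# last colon in the reference).
versions_match() {
  config_version="$1"
  image_ref="$2"
  [ "${image_ref##*:}" = "$config_version" ]
}
```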
3.7. Verifying upgrades
The updated Sensors and Collectors continue to report the latest data from each secured cluster.
The last time Sensor contacted Central is visible in the RHACS portal.
Procedure
- In the RHACS portal, go to Platform Configuration → System Health.
- Check to ensure that Sensor Upgrade shows clusters up to date with Central.
3.8. Revoking the API token
For security reasons, Red Hat recommends that you revoke the API token that you used to complete the Central database backup.
Prerequisites
- After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal.
Procedure
- In the RHACS portal, go to Platform Configuration → Integrations.
- Scroll down to the Authentication Tokens category, and click API Token.
- Select the checkbox in front of the token name that you want to revoke.
- Click Revoke.
- On the confirmation dialog box, click Confirm.
3.9. Troubleshooting the cluster upgrader
If you encounter problems when using the legacy installation method for the secured cluster and enabling automatic upgrades, you can try troubleshooting the problem by viewing the Platform Configuration → Clusters page.
Procedure
Examine the messages on the Platform Configuration → Clusters page and determine the type of error:

Missing permissions: If you see the following error, the upgrader is missing the appropriate permissions:

Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage "Run preflight checks": preflight check "Kubernetes authorization" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?

Missing image: If you see the following error, the upgrader cannot pull the required image:

Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)

Unknown reasons: If you see the following error, the upgrade terminated due to an unknown reason:

Upgrade initialization error: Pod terminated: (Error)
- Follow the recommended steps to resolve the error depending on the error cause.
- If necessary, check the upgrader logs for more information.
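When monitoring many secured clusters, the three error messages above can also be classified mechanically. This is a sketch; classify_error is a hypothetical helper that matches substrings of the messages shown in the procedure.

```shell
# Sketch: map an upgrader error message to one of the documented causes.
classify_error() {
  case "$1" in
    *'preflight check "Kubernetes authorization" reported errors'*) echo missing-permissions ;;
    *'trouble pulling the new image'*)                              echo missing-image ;;
    *'Pod terminated'*)                                             echo unknown ;;
    *)                                                              echo unrecognized ;;
  esac
}
```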
3.9.1. Upgrader missing permissions
If the cluster upgrader is missing permissions, you can resolve this issue by ensuring the bundle was generated correctly or by manually configuring the required service accounts and role bindings.
Procedure
- Ensure that the bundle for the secured cluster was generated with future upgrades enabled before clicking Download YAML file and keys.
- If possible, remove that secured cluster and generate a new bundle making sure that future upgrades are enabled.
If you cannot re-create the cluster, you can take these actions:
- Ensure that the service account sensor-upgrader exists in the same namespace as Sensor.
- Ensure that a ClusterRoleBinding exists (default name: <namespace>:upgrade-sensors) that grants the cluster-admin ClusterRole to the sensor-upgrader service account.
-
Ensure that the service account
3.9.2. Upgrader cannot start due to missing image
If the upgrader cannot start because it cannot pull the required image, you can resolve this issue by ensuring the secured cluster has access to the registry and the image pull secrets are configured correctly.
Procedure
- Ensure that the secured cluster can access the registry and pull the image <image_reference:tag>.
- Ensure that the image pull secrets are configured correctly in the secured cluster.
3.9.3. Upgrader cannot start due to an unknown reason
If the upgrader cannot start and the reason is not immediately clear, you can troubleshoot by checking upgrader permissions and reviewing the logs for more information.
Procedure
- Ensure that the upgrader has enough permissions for accessing the cluster objects. For more information, see "Upgrader is missing permissions".
- Check the upgrader logs for more insights.
3.9.3.1. Getting upgrader logs
If you cannot easily determine the reason that the upgrader failed, you can get the upgrader logs and examine them to try to determine the reason for the failure.
The upgrader deployment is usually only running in the cluster for a short time while doing the upgrades. It is removed later, so accessing its logs using the orchestrator CLI can require proper timing.
Procedure
Check the upgrader logs for more insights by running the following command:

$ kubectl -n <namespace> logs deploy/sensor-upgrader

where:

<namespace>
- Specifies the namespace in which Sensor is running.
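Because the upgrader deployment is short-lived, polling improves the odds of catching its logs before the deployment is removed. This is a sketch; retry_until is a hypothetical helper, and the kubectl invocation in the comment is the documented command.

```shell
# Sketch: retry a command up to <tries> times, sleeping <delay> seconds
# between attempts; returns on the first success, or fails after the loop.
retry_until() {
  tries="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Against a cluster (not run here):
#   retry_until 30 2 kubectl -n <namespace> logs deploy/sensor-upgrader
```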