Chapter 9. Upgrading RHACS Cloud Service


9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator

Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service.

You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service.

9.1.1. Preparing to upgrade

Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps:

  • If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF. For more information, see "Changing the collection method".

9.1.1.1. Changing the collection method

If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade.

Procedure

  1. In the OpenShift Container Platform web console, go to the RHACS Operator page.
  2. In the top navigation menu, select Secured Cluster.
  3. Click the instance name, for example, stackrox-secured-cluster-services.
  4. Use one of the following methods to change the setting:

    • In the Form view, under Per Node Settings → Collector Settings → Collection, select CORE_BPF.
    • Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF, then change it to CORE_BPF.
  5. Click Save.
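For reference, the following is a minimal sketch of the relevant part of the SecuredCluster CR after the change. The instance name and namespace are examples and depend on your installation:

    apiVersion: platform.stackrox.io/v1alpha1
    kind: SecuredCluster
    metadata:
      name: stackrox-secured-cluster-services   # example instance name
      namespace: rhacs-operator                 # example namespace
    spec:
      perNode:
        collector:
          collection: CORE_BPF                  # previously KernelModule or EBPF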


9.1.2. Rolling back an Operator upgrade for secured clusters

To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console.

Note

On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster.

9.1.2.1. Rolling back an Operator upgrade by using the CLI

You can roll back the Operator version by using CLI commands.

Procedure

  1. Delete the OLM subscription by running the following command:

    • For OpenShift Container Platform, run the following command:

      $ oc -n rhacs-operator delete subscription rhacs-operator
    • For Kubernetes, run the following command:

      $ kubectl -n rhacs-operator delete subscription rhacs-operator
  2. Delete the cluster service version (CSV) by running the following command:

    • For OpenShift Container Platform, run the following command:

      $ oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
    • For Kubernetes, run the following command:

      $ kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
  3. Install the latest version of the Operator on the rolled back channel.
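If you prefer to re-create the Subscription from the CLI instead of using OperatorHub, the following is a minimal sketch. The channel and startingCSV values are placeholders and must match the version you are rolling back to:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: rhacs-operator
      namespace: rhacs-operator
    spec:
      name: rhacs-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      channel: stable                      # placeholder: the channel you are rolling back to
      installPlanApproval: Manual          # prevents an immediate automatic upgrade
      startingCSV: rhacs-operator.v4.6.0   # placeholder: the exact CSV you want installed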

9.1.2.2. Rolling back an Operator upgrade by using the web console

You can roll back the Operator version by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Go to the Operators → Installed Operators page.
  2. Click the RHACS Operator.
  3. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates.
  4. Install the latest version of the Operator on the rolled back channel.

9.1.3. Troubleshooting Operator upgrade issues

Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator.

9.1.3.1. Central or Secured cluster fails to deploy

If the RHACS Operator encounters either of the following conditions, you must check the custom resource conditions to find the issue:

  • The Operator fails to deploy the secured cluster services.
  • The Operator fails to apply CR changes to actual resources.

For secured clusters, run the following command to check the conditions:

    $ oc -n rhacs-operator describe securedclusters.platform.stackrox.io

    If you use Kubernetes, enter kubectl instead of oc.

You can identify configuration errors from the conditions output:

Example output

 Conditions:
    Last Transition Time:  2023-04-19T10:49:57Z
    Status:                False
    Type:                  Deployed
    Last Transition Time:  2023-04-19T10:49:57Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2023-04-19T10:59:10Z
    Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
    Reason:                ReconcileError
    Status:                True
    Type:                  Irreconcilable
    Last Transition Time:  2023-04-19T10:49:57Z
    Message:               No proxy configuration is desired
    Reason:                NoProxyConfig
    Status:                False
    Type:                  ProxyConfigFailed
    Last Transition Time:  2023-04-19T10:49:57Z
    Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
    Reason:                InstallError
    Status:                True
    Type:                  ReleaseFailed
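
If you prefer machine-readable output, the following is a minimal sketch that prints only the name and conditions of each SecuredCluster CR; use kubectl instead of oc on Kubernetes:

$ oc -n rhacs-operator get securedclusters.platform.stackrox.io \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.status.conditions}{"\n"}{end}'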

Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs:

$ oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager

If you use Kubernetes, enter kubectl instead of oc.

9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts

You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts.

If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command.

9.2.1. Updating the Helm chart repository

You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes.

Prerequisites

  • You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. If you have not added it yet, see the sketch after this list.
  • You must be using Helm version 3.8.3 or newer.
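
If you have not added the chart repository yet, the following is a minimal sketch that adds it under the name rhacs. The URL is the publicly documented location of the RHACS charts, but verify it against the documentation for your version:

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/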

Procedure

  • Update the Red Hat Advanced Cluster Security for Kubernetes chart repository:

    $ helm repo update

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

9.2.2. Running the Helm upgrade command

You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Prerequisites

  • You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands.

Procedure

  • Run the helm upgrade command and specify the configuration files by using the -f option:

    $ helm upgrade -n stackrox stackrox-secured-cluster-services \
      rhacs/secured-cluster-services --version <current-rhacs-version> \
      -f values-private.yaml

    Use the -f option to specify the paths for your YAML configuration files.
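
    To confirm the release name and the chart version that is currently installed before you upgrade, you can list the releases in the stackrox namespace:

    $ helm list -n stackrox

    The NAME and CHART columns show the release name and the installed chart version.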


9.3. Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI

You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI.

Important

You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters.

9.3.1. Upgrading the roxctl CLI

To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI.

9.3.1.1. Uninstalling the roxctl CLI

You can uninstall the roxctl CLI binary on Linux by using the following procedure.

Procedure

  • Find and delete the roxctl binary:

    $ ROXPATH=$(which roxctl) && rm -f $ROXPATH

    Depending on your environment, you might need administrator rights to delete the roxctl binary.

9.3.1.2. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
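
    For example, assuming /usr/local/bin is on your PATH and that you have permission to write to it:

    $ sudo mv roxctl /usr/local/bin/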

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

9.3.1.3. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

9.3.1.4. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

9.3.2. Upgrading all secured clusters manually

Important

To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters.

To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions.
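
Before you begin, you can check which version a secured cluster is currently running. The following is a minimal sketch that prints the Sensor image; use kubectl instead of oc on Kubernetes:

$ oc -n stackrox get deploy/sensor \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="sensor")].image}{"\n"}'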

9.3.2.1. Updating other images

If you are not using automatic upgrades, you must update the Sensor, Collector, Compliance, and Admission controller images on each secured cluster.

Note

If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.

Procedure

  1. Update the Sensor image:

    $ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3

    If you use Kubernetes, enter kubectl instead of oc.
  2. Update the Compliance image:

    $ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3

    If you use Kubernetes, enter kubectl instead of oc.
  3. Update the Collector image:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.3

    If you use Kubernetes, enter kubectl instead of oc.
  4. Update the admission control image:

    $ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3

Important

If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs).

For more information, see "Migrating SCCs during the manual upgrade".
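
After you update the images, you can optionally wait for each rollout to complete before migrating the SCCs. The following is a minimal sketch; use kubectl instead of oc on Kubernetes:

$ oc -n stackrox rollout status deploy/sensor
$ oc -n stackrox rollout status ds/collector
$ oc -n stackrox rollout status deploy/admission-control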

9.3.2.2. Migrating SCCs during the manual upgrade

By migrating the security context constraints (SCCs) with the roxctl CLI during the manual upgrade, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.

Procedure

  1. List all of the RHACS services that are deployed on all secured clusters:

    $ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

    Example output

    Name:      admission-control-6f4dcc6b4c-2phwd
               openshift.io/scc: stackrox-admission-control
    #...
    Name:      central-575487bfcb-sjdx8
               openshift.io/scc: stackrox-central
    Name:      central-db-7c7885bb-6bgbd
               openshift.io/scc: stackrox-central-db
    Name:      collector-56nkr
               openshift.io/scc: stackrox-collector
    #...
    Name:      scanner-68fc55b599-f2wm6
               openshift.io/scc: stackrox-scanner
    Name:      scanner-68fc55b599-fztlh
    #...
    Name:      sensor-84545f86b7-xgdwf
               openshift.io/scc: stackrox-sensor
    #...

    In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.

  2. To add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs, complete the following steps on all secured clusters:

    1. Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:

      Example 9.1. Example YAML file

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role  # 1
      metadata:
        annotations:
          email: support@stackrox.com
          owner: stackrox
        labels:
          app.kubernetes.io/component: collector
          app.kubernetes.io/instance: stackrox-secured-cluster-services
          app.kubernetes.io/name: stackrox
          app.kubernetes.io/part-of: stackrox-secured-cluster-services
          app.kubernetes.io/version: 4.4.0
          auto-upgrade.stackrox.io/component: sensor
        name: use-privileged-scc  # 2
        namespace: stackrox  # 3
      rules:  # 4
      - apiGroups:
        - security.openshift.io
        resourceNames:
        - privileged
        resources:
        - securitycontextconstraints
        verbs:
        - use
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding  # 5
      metadata:
        annotations:
          email: support@stackrox.com
          owner: stackrox
        labels:
          app.kubernetes.io/component: collector
          app.kubernetes.io/instance: stackrox-secured-cluster-services
          app.kubernetes.io/name: stackrox
          app.kubernetes.io/part-of: stackrox-secured-cluster-services
          app.kubernetes.io/version: 4.4.0
          auto-upgrade.stackrox.io/component: sensor
        name: collector-use-scc  # 6
        namespace: stackrox
      roleRef:  # 7
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: use-privileged-scc
      subjects:  # 8
      - kind: ServiceAccount
        name: collector
        namespace: stackrox
      ---

      1 The type of Kubernetes resource, in this example, Role.
      2 The name of the role resource.
      3 The namespace in which the role is created.
      4 Describes the permissions granted by the role resource.
      5 The type of Kubernetes resource, in this example, RoleBinding.
      6 The name of the role binding resource.
      7 Specifies the role to bind in the same namespace.
      8 Specifies the subjects that are bound to the role.
    2. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:

      $ oc -n stackrox create -f ./upgrade-scs.yaml

      Important

      You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. If you manage several secured clusters from one workstation, see the sketch after this procedure.

  3. Delete the SCCs that are specific to RHACS:

    1. To delete the SCCs that are specific to all secured clusters, run the following command:

      $ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor
      Important

      You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.
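
If you manage all of your secured clusters from a single workstation, the following is a minimal sketch that repeats the create and delete steps for every kubeconfig context. This is a hypothetical helper that assumes every context points at an RHACS secured cluster, so adjust the context selection for your environment:

    # Hypothetical helper loop; assumes every kubeconfig context is an RHACS secured cluster.
    for ctx in $(oc config get-contexts -o name); do
      oc --context "$ctx" -n stackrox create -f ./upgrade-scs.yaml
      oc --context "$ctx" delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor
    done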

Verification

  • Ensure that all the pods are using the correct SCCs by running the following command:

    $ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

    Compare the output with the following table:

    Component              Previous custom SCC           New Red Hat OpenShift 4 SCC
    Central                stackrox-central              nonroot-v2
    Central-db             stackrox-central-db           nonroot-v2
    Scanner                stackrox-scanner              nonroot-v2
    Scanner-db             stackrox-scanner              nonroot-v2
    Admission Controller   stackrox-admission-control    restricted-v2
    Collector              stackrox-collector            privileged
    Sensor                 stackrox-sensor               restricted-v2

9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Sensor deployment:

    $ oc -n stackrox edit deploy/sensor

    If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.
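
If you prefer a non-interactive approach, the following is a minimal sketch that copies the current GOMEMLIMIT value into ROX_MEMLIMIT and then removes GOMEMLIMIT. This is an alternative to editing the deployment by hand; the same pattern applies to deploy/collector and deploy/admission-control. Use kubectl instead of oc on Kubernetes:

    # Alternative sketch: read the current value, set ROX_MEMLIMIT, and remove GOMEMLIMIT.
    $ current=$(oc -n stackrox get deploy/sensor \
        -o jsonpath='{.spec.template.spec.containers[?(@.name=="sensor")].env[?(@.name=="GOMEMLIMIT")].value}')
    $ oc -n stackrox set env deploy/sensor ROX_MEMLIMIT="$current" GOMEMLIMIT-
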
9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Collector deployment:

    $ oc -n stackrox edit deploy/collector

    If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.

9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Admission Controller deployment:

    $ oc -n stackrox edit deploy/admission-control

    If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.

9.3.2.2.4. Verifying secured cluster upgrade

After you have upgraded secured clusters, verify that the updated pods are working.

Procedure

  • Check that the new pods have deployed:

    $ oc get deploy,ds -n stackrox -o wide

    $ oc get pod -n stackrox --watch

    If you use Kubernetes, enter kubectl instead of oc.

9.3.3. Enabling RHCOS node scanning with the StackRox Scanner

If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Prerequisites

  • For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
  • This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner".

Procedure

  1. Run one of the following commands to update the compliance container.

    • For a default compliance container with metrics disabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
    • For a compliance container with Prometheus metrics enabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
  2. Update the Collector DaemonSet (DS) by taking the following steps:

    1. Add new volume mounts to Collector DS by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
    2. Add the new NodeScanner container by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.3","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
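
To confirm that the patches were applied, the following is a minimal sketch that lists the container names in the Collector DaemonSet; the output should now include node-inventory alongside collector and compliance:

    $ oc -n stackrox get daemonset/collector \
        -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'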

