Chapter 5. Installing RHACS on other platforms


Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides security services for self-managed RHACS on platforms such as Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (Google GKE), and Microsoft Azure Kubernetes Service (Microsoft AKS).

Before you install:

  • Understand the installation methods for different platforms.
  • Understand the RHACS architecture.
  • Check the default resource requirements.

The following list provides a high-level overview of installation steps:

  1. Install Central services on a cluster by using Helm charts or the roxctl CLI.
  2. Generate and apply a cluster registration secret or an init bundle.
  3. Install secured cluster resources on each of your secured clusters.

Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters.

You can install Central by using one of the following methods:

  • Install using Helm charts
  • Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it)

5.2.1. Install Central using Helm charts

You can install Central by using Helm charts without any customizations, applying the default values, or by using Helm charts with additional customizations of configuration parameters.

You can install RHACS on your Red Hat OpenShift cluster without any customizations. To do this, add the Helm chart repository and install the central-services Helm chart, which installs the centralized components, Central and Scanner.

5.2.1.1.1. Adding the Helm chart repository

Add the RHACS Helm chart repository to access installation charts for Central services and secured cluster components.

Procedure

  • Add the RHACS charts repository.

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

    The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

  • Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).

    Note

    You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.

  • Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).

    Note

    Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

You can install the central-services Helm chart to deploy the centralized components: Central and Scanner.

The output of the installation command includes:

  • An automatically generated administrator password
  • Instructions on storing all the configuration values
  • Any warnings that Helm generates

Procedure

  • Run the following command to install Central services and expose Central using a route:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \
      --set imagePullSecrets.password=<password> \
      --set central.exposure.route.enabled=true

    where:

    <username>
    Specifies the user name for your pull secret for Red Hat Container Registry authentication.
    <password>
    Specifies the password for your pull secret for Red Hat Container Registry authentication.
  • Or, run the following command to install Central services and expose Central using a load balancer:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \
      --set imagePullSecrets.password=<password> \
      --set central.exposure.loadBalancer.enabled=true

    where:

    <username>
    Specifies the user name for your pull secret for Red Hat Container Registry authentication.
    <password>
    Specifies the password for your pull secret for Red Hat Container Registry authentication.
  • Or, run the following command to install Central services and expose Central using port forward:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \
      --set imagePullSecrets.password=<password>

    where:

    <username>
    Specifies the user name for your pull secret for Red Hat Container Registry authentication.
    <password>
    Specifies the password for your pull secret for Red Hat Container Registry authentication.
    Important
    • If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:

      env:
        proxyConfig: |
          url: http://proxy.name:port
          username: username
          password: password
          excludes:
          - some.domain
    • If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>".
    • Do not use image pull secrets in the following cases:

      • If you are pulling your images from quay.io/stackrox-io or from a registry in a private network that does not require authentication, use --set imagePullSecrets.allowNone=true instead of specifying a username and password.
      • If you already configured image pull secrets in the default service account in the namespace in which you are installing, use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password.

When installing RHACS, a certificate authority (CA) is automatically generated and stored in a Kubernetes secret on the cluster. If you later change your installation by using Helm, you might need to supply this CA. For example, enabling an RHACS component that was initially disabled at installation time requires that you provide this CA.

The automatically generated CA is stored in a secret that is usually named stackrox-generated-<suffix>, where <suffix> is a randomly generated string.

To retrieve the CA and export it to a generated-values.yaml file, for example when you need it for the helm upgrade command, run the following command:

$ kubectl -n <namespace> get secret stackrox-generated-<suffix> \
  -o go-template='{{ index .data "generated-values.yaml" }}' | \
  base64 --decode >generated-values.yaml
Important

This file might contain sensitive data, so store it in a safe place.
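The extraction pipeline above can be illustrated locally. The following sketch substitutes a made-up base64 value for the real secret data that kubectl returns; only the decoding step is demonstrated:

```shell
# Simulate the base64-encoded data field of the stackrox-generated-<suffix>
# secret with a made-up example value (a real secret holds the full
# generated-values.yaml content).
encoded="$(printf 'ca:\n  cert: <example>\n' | base64)"

# Decode it into generated-values.yaml, mirroring the kubectl pipeline above.
printf '%s\n' "$encoded" | base64 --decode > generated-values.yaml
cat generated-values.yaml
```

In the real command, the go-template expression selects the generated-values.yaml key from the secret's data map, which is exactly the base64-encoded value that this sketch simulates.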

If you are using the helm upgrade command after changing a configuration, you might need to supply this CA. For example, to update your system and enable Scanner V4, run the following command:

$ helm upgrade -n stackrox stackrox-central-services rhacs/central-services --reuse-values \
  -f <path_to_generated-values.yaml> \
  --set scannerV4.disable=false

You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.

Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:

  • Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
  • Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
  • Configuration file declarative-config-values.yaml: Create this file if you are using declarative configuration to add the declarative configuration mounts to Central.
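As a rough sketch, a minimal pair of these files might look like the following. The parameters shown (central.exposure.route.enabled, imagePullSecrets.username, imagePullSecrets.password) are the ones used elsewhere in this chapter; treat the exact split between the two files as illustrative:

```yaml
# values-public.yaml: non-sensitive configuration options
central:
  exposure:
    route:
      enabled: true
```

```yaml
# values-private.yaml: sensitive configuration options; store securely
imagePullSecrets:
  username: <username>
  password: <password>
```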
5.2.1.2.1. Private configuration file

This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters.

5.2.1.2.2. Public configuration file

This section lists the configurable parameters of the values-public.yaml file.

5.2.1.2.3. Declarative configuration values

To use declarative configuration, you must create a YAML file (in this example, named "declarative-config-values.yaml") that adds the declarative configuration mounts to Central. This file is used in a Helm installation.

Procedure

  1. Create the YAML file (in this example, named declarative-config-values.yaml) using the following example as a guideline:

    central:
      declarativeConfiguration:
        mounts:
          configMaps:
            - declarative-configs
          secrets:
            - sensitive-declarative-configs
  2. Install the Central services Helm chart as documented in "Installing the central-services Helm chart", referencing the declarative-config-values.yaml file.

After you configure the values-public.yaml and values-private.yaml files, install the central-services Helm chart to deploy the centralized components (Central and Scanner).

Procedure

  • Run the following command:

    $ helm install -n stackrox --create-namespace \
      stackrox-central-services rhacs/central-services \
      -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>

    where:

    <path_to_values_public.yaml> and <path_to_values_private.yaml>
    Specify the paths of your YAML configuration files.
    Note

    Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configuration file in Central.

You can make changes to any configuration options after you have deployed the central-services Helm chart.

When using the helm upgrade command to make changes, the following guidelines and requirements apply:

  • You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
  • Some changes, such as enabling a new component, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.

    • If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
    • If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.

Procedure

  1. Update the values-public.yaml and values-private.yaml configuration files with new values.
  2. Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-central-services rhacs/central-services \
      --reuse-values \
      -f <path_to_init_bundle_file> \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

    where:

    --reuse-values
    Specifies that Helm reuses the values from the previous release for any options that are not overridden in the values_public.yaml and values_private.yaml files.

5.2.2. Install Central using the roxctl CLI

Warning

For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl install method unless you have a specific installation need that requires using this method.

5.2.2.1. Installing the roxctl CLI

To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS.

5.2.2.1.1. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
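The architecture suffix that step 1 computes can be illustrated with fixed example values in place of the uname -m output; x86_64 maps to an empty suffix, and every other architecture is appended with a leading dash:

```shell
# Show the download file name produced for several machine types.
for m in x86_64 arm64 ppc64le s390x; do
  arch="$(printf '%s' "$m" | sed 's/x86_64//')"; arch="${arch:+-$arch}"
  echo "$m -> roxctl${arch}"
done
```

On an x86_64 host the suffix is empty, so the downloaded file is named simply roxctl; on arm64 it is roxctl-arm64, and so on.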

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
5.2.2.1.2. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
5.2.2.1.3. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

5.2.2.2. Using the interactive installer

Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment.

Procedure

  1. Run the interactive install command:

    $ roxctl central generate interactive
    Important

    Installing RHACS using the roxctl CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes versions 1.25 and newer or OpenShift Container Platform version 4.12 and newer, you must disable the PSP object creation. To do this, specify the --enable-pod-security-policies option as false for the roxctl central generate and roxctl sensor generate commands.

  2. Press Enter to accept the default value for a prompt or enter custom values as required. The following example shows the interactive installer prompts:

    Path to the backup bundle from which to restore keys and certificates (optional):
    PEM cert bundle file (optional):
    Disable the administrator password (only use this if you have already configured an IdP for your instance) (default: "false"):
    Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "false"):
    Administrator password (default: autogenerated):
    Orchestrator (k8s, openshift):
    Default container images settings (rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "rhacs"):
    The directory to output the deployment bundle to (default: "central-bundle"):
    Whether to enable telemetry (default: "true"):
    The central-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.6.0"):
    List of secrets to add as declarative configuration mounts in central (default: "[]"):
    The method of exposing Central (lb, np, none) (default: "none"):
    The main image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.6.0"):
    Whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
    List of config maps to add as declarative configuration mounts in central (default: "[]"):
    The deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
    Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
    The scanner-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-db-rhel8:4.6.0"):
    The scanner image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.6.0"):
    The scanner-v4-db image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-db-rhel8:4.6.0"):
    The scanner-v4 image to use (if unset, a default will be used according to --image-defaults) (default: "registry.redhat.io/advanced-cluster-security/rhacs-scanner-v4-rhel8:4.6.0"):
    External volume type (hostpath, pvc): hostpath
    Path on the host (default: "/var/lib/stackrox-central"):
    Node selector key (e.g. kubernetes.io/hostname):
    Node selector value:

    where:

    PEM cert bundle file
    Specifies the file path of a PEM-encoded certificate if you want to add a custom TLS certificate. When you specify a custom certificate, the interactive installer also prompts you to provide a PEM private key for that certificate.
    Create PodSecurityPolicy resources
    Specifies whether to create PodSecurityPolicy resources. If you are running Kubernetes version 1.25 or later, set this value to false.
    List of secrets to add as declarative configuration mounts in central
    Specifies the list of secrets to add as declarative configuration mounts in Central. For more information about using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
    The method of exposing Central
    Specifies the method of exposing Central. You must expose Central by using a route, a load balancer, or a node port.
    List of config maps to add as declarative configuration mounts in central
    Specifies the list of config maps to add as declarative configuration mounts in Central. For more information about using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
    Warning

    On OpenShift Container Platform, to use a hostPath volume, you must modify the SELinux policy to allow access to the directory that the host and the container share, because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command:

    $ sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>

    However, instead of modifying the SELinux policy, Red Hat recommends using a PVC when installing on OpenShift Container Platform.

    On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. It also displays on-screen instructions for the scripts that you must run to deploy additional trusted certificate authorities, Central, and Scanner, along with instructions for logging in to the RHACS portal and the autogenerated password if you did not provide one when answering the prompts.

5.2.2.3. Running the Central installation scripts

After you run the interactive installer, you can run the setup.sh script to install Central.

Procedure

  1. Run the setup.sh script to configure image registry access:

    $ ./central-bundle/central/scripts/setup.sh
  2. Create the necessary resources:

    $ oc create -R -f central-bundle/central

    If you use Kubernetes, enter kubectl instead of oc:

    $ kubectl create -R -f central-bundle/central
  3. Check the deployment progress:

    $ oc get pod -n stackrox -w

    If you use Kubernetes, enter kubectl instead of oc:

    $ kubectl get pod -n stackrox -w
  4. After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address.

    • Route: Run oc -n stackrox get route central and use the address under the HOST/PORT column in the output, for example https://central-stackrox.example.route.

    • Node port: Run oc get node -owide && oc -n stackrox get svc central-loadbalancer and use the IP address or hostname of any node, on the port shown for the service, for example https://198.51.100.0:31489.

    • Load balancer: Run oc -n stackrox get svc central-loadbalancer and use the EXTERNAL-IP or hostname shown for the service, on port 443, for example https://192.0.2.0.

    • None: Run central-bundle/central/scripts/port-forward.sh 8443 and open https://localhost:8443.

    Note

    If you selected the autogenerated password during the interactive installation, you can run the following command to display it for logging in to Central:

    $ cat central-bundle/password

Red Hat Advanced Cluster Security for Kubernetes (RHACS) uses a special artifact during installation that allows Central to communicate securely with the secured clusters that you are adding. This file is called a cluster registration secret (CRS) or an init bundle. Init bundles are still supported, but using a CRS is the preferred way to set up a secured cluster.

A cluster registration secret (CRS) is an authentication secret that offers improved security and is easier to use than an init bundle. A CRS contains a single token that you can use when installing RHACS by using either the Operator or the Helm installation method.

A CRS provides better security because it is only used for registering a new secured cluster. If leaked, the certificates and keys in an init bundle can be used to impersonate services running on a secured cluster. By contrast, the certificate and key in a CRS can only be used for registering a new cluster.

After the cluster is set up by using the CRS, service-specific certificates are issued by Central and sent to the new secured cluster. These service certificates are used for communication between Central and secured clusters. Therefore, a CRS can be revoked after the cluster is registered without disconnecting secured clusters.

Unlike init bundles, which can be re-applied on an existing secured cluster if secrets on the secured cluster need to be manually refreshed, you cannot apply a new CRS to a cluster and re-establish communication between Central and the secured cluster. If there is a problem with the CRS on a secured cluster, for example, if the certificate and keys are deleted, you must revoke the original CRS in Central and remove it from the secured cluster. Then, you create a new CRS in Central, and apply the new CRS to the secured clusters.

To establish a secure communication channel between RHACS Central and your secured clusters, you must generate and apply authentication artifacts. You can generate a cluster registration secret (CRS) or an init bundle by using the RHACS portal or CLI.

Although init bundles are still supported, using a CRS to establish the connection between Central and your secured clusters is the preferred method because it offers a reusable, time-bound token that you can use to register multiple clusters.

After creating a CRS or an init bundle, you provide the CRS or the init bundle when you run the helm install command.

Note

You must have the Admin user role to generate a CRS or an init bundle.

You can generate a cluster registration secret (CRS) by using the RHACS portal.

Note

You must have the Admin user role to generate a CRS.

Procedure

  1. Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
  2. Log in to the RHACS portal. If you do not have secured clusters or an existing CRS, the Platform Configuration → Clusters page appears.
  3. Click Create cluster registration secret.
  4. Enter a name for the CRS and click Download to generate and download it. The CRS is created in the form of a YAML file and you can use it to secure all of your clusters if you are using the same installation method.

    Important

    Store this file securely because it contains secrets.

Next steps

  1. Apply the CRS to the secured cluster.
  2. Install secured cluster services on each cluster.

5.3.2.1. Generating a CRS by using the roxctl CLI

You can generate a cluster registration secret by using the roxctl CLI.

Note

You must have the Admin user role to generate a CRS.

Prerequisites

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

    1. Set the ROX_API_TOKEN by running the following command:

      $ export ROX_API_TOKEN=<api_token>
    2. Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:

      $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
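For illustration, with a hypothetical address value, the variable carries both the address and the port in one string:

```shell
# Hypothetical example value; substitute your own Central address and port.
export ROX_CENTRAL_ADDRESS="central.example.com:443"

# The port component is everything after the last colon.
echo "${ROX_CENTRAL_ADDRESS##*:}"
```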

Procedure

  • To generate a CRS, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central crs generate <crs_name> \
      --output <file_name>

    where:

    <crs_name>
    Specifies an identifier or name for the CRS.
    <file_name>
    Specifies a file name. Use - for standard output.

Ensure that you store this file securely because it contains secrets. You can use the same file to set up more than one secured cluster. You cannot retrieve a previously generated CRS.

Depending on the output that you select, the command might return some INFO messages about the CRS and the YAML file.

Sample output

INFO:	Successfully generated new CRS
INFO:
INFO:	  Name:       test-crs
INFO:	  Created at: 2025-02-26T19:07:21Z
INFO:	  Expires at: 2026-02-26T19:07:00Z
INFO:	  Created By: sample-token
INFO:	  ID:         9214a63f-7e0e-485a-baae-0757b0860ac9
# This is a StackRox Cluster Registration Secret (CRS).
# It is used for setting up StackRox secured clusters.
# NOTE: This file contains secret data that allows connecting new secured clusters to central,
# and needs to be handled and stored accordingly.
apiVersion: v1
data:
  crs: EXAMPLEZXlKMlpYSnphVzl1SWpveExDSkRRWE1pT2xzaUxTMHRMUzFDUlVkSlRpQkRSVkpVU1VaSlEwREXAMPLE=
kind: Secret
metadata:
  annotations:
    crs.platform.stackrox.io/created-at: "2025-02-26T19:07:21.800414339Z"
    crs.platform.stackrox.io/expires-at: "2026-02-26T19:07:00Z"
    crs.platform.stackrox.io/id: 9214a63f-7e0e-485a-baae-0757b0860ac9
    crs.platform.stackrox.io/name: test-crs
  creationTimestamp: null
  name: cluster-registration-secret
INFO:	The CRS needs to be stored securely, since it contains secrets.
INFO:	It is not possible to retrieve previously generated CRSs.

Before you configure a secured cluster, you must apply the CRS to the cluster. After you have applied the CRS, the services on the secured cluster can communicate securely with Central.

Note

If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.

Prerequisites

  • You must have generated a CRS.

Procedure

To create resources, perform only one of the following steps:

  • Create resources using the OpenShift Container Platform web console:

    1. In the OpenShift Container Platform web console, go to the stackrox project or the project where you want to install the secured cluster services.
    2. In the top menu, click + to open the Import YAML page.
    3. Drag the CRS file into the editor, or copy and paste its contents, and then click Create. When the process completes, the console shows that the secret named cluster-registration-secret was created.
  • Create resources by using the Red Hat OpenShift CLI. Run the following command to create the resources:

    $ oc create -f <file_name.yaml> \
      -n <stackrox>

    where:

    <file_name.yaml>
    Specifies the file name of the CRS.
    <stackrox>
    Specifies the name of the project where secured cluster services are installed.
  • Create resources by using the kubectl CLI. Run the following commands to create the resources:

    $ kubectl create namespace stackrox

    This command creates the project where secured cluster resources will be installed. This example uses stackrox.

    $ kubectl create -f <file_name.yaml> \
      -n <stackrox>

    where:

    <file_name.yaml>
    Specifies the file name of the CRS.
    <stackrox>
    Specifies the project name that you created. This example uses stackrox.

You can establish secure communication between your secured cluster services and Red Hat Advanced Cluster Security for Kubernetes (RHACS) Central by generating and applying an init bundle. You can create a YAML file containing essential TLS secrets by using the RHACS portal or the roxctl CLI, and then apply the resources to your cluster to authorize service connections.

When you apply the init bundle to a secured cluster, it creates the necessary TLS certificate resources, including:

  • collector-tls
  • sensor-tls
  • admission-control-tls

You can generate init bundles by using either the RHACS portal or the roxctl CLI. The bundle is generated as a YAML file containing the required secrets. Because this file contains sensitive credentials, it must be stored securely.

Note

If you are installing secured cluster services by using Helm charts, the init bundle is often applied as part of the Helm installation command rather than as a separate step.

You can generate a cluster registration secret (CRS) or an init bundle containing secrets by using the RHACS portal.

Note

You must have the Admin user role to generate a CRS or an init bundle.

Procedure

  1. Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
  2. Log in to the RHACS portal. If you do not have secured clusters, or an existing CRS or init bundle, the Platform Configuration → Clusters page appears.
  3. Click Create cluster registration secret or Init bundles installation method.

    Note

    Init bundles are still supported, but using a CRS to secure clusters is the preferred method.

    Complete only one of the following actions:

    • If you chose to generate a CRS, enter a name for the CRS and click Download to generate and download it. The CRS is created in the form of a YAML file and you can use it to secure all of your clusters if you are using the same installation method.

      Important

      Store this file securely because it contains secrets.

    • If you chose to generate an init bundle, complete these steps:

      1. Enter a name for the cluster init bundle.
      2. Select your platform.
      3. Select the installation method you will use for your secured clusters: Operator or Helm chart.
      4. Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.

        Important

        Store this file securely because it contains secrets.

Next steps

  1. Apply the CRS or the init bundle to the secured cluster.
  2. Install secured cluster services on each cluster.

You can generate an init bundle with secrets by using the roxctl CLI.

Note

You must have the Admin user role to create init bundles.

Prerequisites

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

    1. Set the ROX_API_TOKEN by running the following command:

      $ export ROX_API_TOKEN=<api_token>
    2. Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:

      $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
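For example, you can combine the two steps and verify that both variables are set before running any roxctl command. This is a sketch with illustrative values; replace the token and address with your own:

```shell
# Illustrative values only; substitute your real API token and Central address
export ROX_API_TOKEN="example-token"
export ROX_CENTRAL_ADDRESS="central.example.com:443"

# Fail fast if either variable is unset or empty before calling roxctl
: "${ROX_API_TOKEN:?ROX_API_TOKEN is not set}"
: "${ROX_CENTRAL_ADDRESS:?ROX_CENTRAL_ADDRESS is not set}"

echo "Central endpoint: ${ROX_CENTRAL_ADDRESS}"
```

The `:?` parameter expansion causes the shell to exit with an error message if the variable is missing, which surfaces configuration mistakes before roxctl reports a less specific connection or authentication failure.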

Procedure

  • To generate a cluster init bundle containing secrets for Helm installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate <cluster_init_bundle_name> --output \
      cluster_init_bundle.yaml
  • To generate a cluster init bundle containing secrets for Operator installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate <cluster_init_bundle_name> --output-secrets \
      cluster_init_bundle.yaml
    Important

    Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up more than one secured cluster.
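Because the generated bundle contains secrets, one simple safeguard is to restrict its file permissions immediately after generation. The following is a sketch; the `touch` command stands in for the `cluster_init_bundle.yaml` file that the roxctl command above generates:

```shell
# The touch stands in for the cluster_init_bundle.yaml generated by roxctl
touch cluster_init_bundle.yaml

# Restrict the bundle so that only the file owner can read or write it
chmod 600 cluster_init_bundle.yaml

ls -l cluster_init_bundle.yaml
```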

Before you configure a secured cluster, you must apply the init bundle to the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with Central.

Note

If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm. See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.

Prerequisites

  • You must have generated an init bundle containing secrets. The preferred way to set up a secured cluster is by using a CRS.
  • You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.

Procedure

To create resources, perform only one of the following steps:

  • Create resources using the OpenShift Container Platform web console:

    1. In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace.
    2. In the top menu, click + to open the Import YAML page.
    3. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the process completes, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.
  • Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources:

    $ oc create -f <init_bundle.yaml> \
      -n <stackrox>

    where:

    <init_bundle.yaml>
    Specifies the file name of the init bundle containing the secrets.
    <stackrox>
    Specifies the name of the project where secured cluster services are installed.
  • Using the kubectl CLI, run the following commands to create the resources:

    $ kubectl create namespace stackrox

    This command creates the project where secured cluster resources will be installed. This example uses stackrox.

    $ kubectl create -f <init_bundle.yaml> \
      -n <stackrox>

    where:

    <init_bundle.yaml>
    Specifies the file name of the init bundle containing the secrets.
    <stackrox>
    Specifies the project name that you created. This example uses stackrox.

5.3.4. Next steps

After you have generated and applied the cluster registration secret (CRS) or the init bundle, you can proceed to install the services.

Install Red Hat Advanced Cluster Security for Kubernetes (RHACS) secured cluster services in all clusters that you want to monitor.

You can install Red Hat Advanced Cluster Security for Kubernetes (RHACS) on your secured clusters for the following platforms:

  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Google Kubernetes Engine (GKE)
  • Microsoft Azure Kubernetes Service (Microsoft AKS)

You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters.

This procedure describes how to install Red Hat Advanced Cluster Security for Kubernetes (RHACS) on secured clusters by using Helm charts with default configuration values and no customizations.

5.4.1.1.1. Adding the Helm chart repository

Add the RHACS Helm chart repository to access installation charts for Central services and secured cluster components.

Procedure

  • Add the RHACS charts repository.

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

    The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

  • Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).

    Note

    You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.

  • Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).

    Note

    Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).

Prerequisites

  • You must have generated a RHACS cluster registration secret (CRS) or an init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the address that you are exposing the Central service on.

Procedure

  • Run one of the following commands on an OpenShift Container Platform cluster:

    • If you are using a CRS, run the following command:

      $ helm install -n stackrox --create-namespace \
          stackrox-secured-cluster-services rhacs/secured-cluster-services \
          --set-file crs.file=<crs_file_name.yaml> \
          -f <path_to_pull_secret.yaml> \
          --set clusterName=<name_of_the_secured_cluster> \
          --set centralEndpoint=<endpoint_of_central_service> \
          --set scanner.disable=false

      where:

      <crs_file_name.yaml>
      Specifies the name of the file in which the generated CRS has been stored.
      <path_to_pull_secret.yaml>
      Specifies the path for the pull secret for Red Hat Container Registry authentication. Or, you can specify --set imagePullSecrets.username=<your redhat.com username> and --set imagePullSecrets.password=<your redhat.com password> in the command.
      <endpoint_of_central_service>
      Specifies the address and port number for Central. For example, acs.domain.com:443.
      --set scanner.disable=false
      Sets the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.
    • If you are using an init bundle, run the following command:

      $ helm install -n stackrox --create-namespace \
          stackrox-secured-cluster-services rhacs/secured-cluster-services \
          -f <path_to_cluster_init_bundle.yaml> \
          -f <path_to_pull_secret.yaml> \
          --set clusterName=<name_of_the_secured_cluster> \
          --set centralEndpoint=<endpoint_of_central_service> \
          --set scanner.disable=false

      where:

      <path_to_cluster_init_bundle.yaml>
      Specifies the path for the init bundle.
      <path_to_pull_secret.yaml>
      Specifies the path for the pull secret for Red Hat Container Registry authentication.
      <endpoint_of_central_service>
      Specifies the address and port number for Central. For example, acs.domain.com:443.
      --set scanner.disable=false
      Sets the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.

You can use Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.

Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:

  • Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
  • Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
Important

While using the secured-cluster-services Helm chart, do not change the values.yaml file that is part of the chart.

5.4.1.2.1. Configuration parameters

Reference of configuration parameters for customizing the secured-cluster-services Helm chart installation.

Expand
ParameterDescription

clusterName

Name of your cluster.

centralEndpoint

Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss://. When configuring multiple clusters, use the hostname for the address. For example, central.example.com.

env.grpcEnforceALPN

Use true to force application-level protocol negotiation (ALPN) during the TLS handshake.

sensor.endpoint

Address of the Sensor endpoint including port number.

sensor.imagePullPolicy

Image pull policy for the Sensor container.

sensor.serviceTLS.cert

The internal service-to-service TLS certificate that Sensor uses.

sensor.serviceTLS.key

The internal service-to-service TLS certificate key that Sensor uses.

sensor.resources.requests.memory

The memory request for the Sensor container. Use this parameter to override the default value.

sensor.resources.requests.cpu

The CPU request for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.memory

The memory limit for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.cpu

The CPU limit for the Sensor container. Use this parameter to override the default value.

sensor.nodeSelector

Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label.

sensor.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes.

image.main.name

The name of the main image.

image.collector.name

The name of the Collector image.

image.main.registry

The address of the registry you are using for the main image.

image.collector.registry

The address of the registry you are using for the Collector image.

image.scanner.registry

The address of the registry you are using for the Scanner image.

image.scannerDb.registry

The address of the registry you are using for the Scanner DB image.

image.scannerV4.registry

The address of the registry you are using for the Scanner V4 image.

image.scannerV4DB.registry

The address of the registry you are using for the Scanner V4 DB image.

image.main.pullPolicy

Image pull policy for main images.

image.collector.pullPolicy

Image pull policy for the Collector images.

image.main.tag

Tag of main image to use.

image.collector.tag

Tag of collector image to use.

collector.collectionMethod

Either CORE_BPF or NO_COLLECTION.

collector.imagePullPolicy

Image pull policy for the Collector container.

collector.complianceImagePullPolicy

Image pull policy for the Compliance container.

collector.disableTaintTolerations

If you set this parameter to false, tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you set it to true, no tolerations are applied, and the collector pods are not scheduled onto nodes with taints.

collector.resources.requests.memory

The memory request for the Collector container. Use this parameter to override the default value.

collector.resources.requests.cpu

The CPU request for the Collector container. Use this parameter to override the default value.

collector.resources.limits.memory

The memory limit for the Collector container. Use this parameter to override the default value.

collector.resources.limits.cpu

The CPU limit for the Collector container. Use this parameter to override the default value.

collector.complianceResources.requests.memory

The memory request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.requests.cpu

The CPU request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.memory

The memory limit for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.cpu

The CPU limit for the Compliance container. Use this parameter to override the default value.

collector.serviceTLS.cert

The internal service-to-service TLS certificate that Collector uses.

collector.serviceTLS.key

The internal service-to-service TLS certificate key that Collector uses.

admissionControl.enforce

This parameter determines if the admission controller has been configured to enforce policies that have enforcement enabled. For a new secured cluster deployed with RHACS 4.9, the default value is true. For secured clusters updating from RHACS versions before 4.9, previous values for the admission controller configuration parameters determine the value of this parameter. Before the update, if either of the admissionControl.enforceOnCreates or admissionControl.enforceOnUpdates parameters was set to true, the value of this parameter defaults to true after upgrade. If both of these parameters were set to false, the default value becomes false on update.

admissionControl.failurePolicy

Determines whether an API server request is allowed (fail open) or blocked (fail closed) when an error or timeout occurs during the RHACS validating webhook's evaluation. Valid values are Ignore and Fail. The default value is Ignore, which fails open.

admissionControl.listenOnCreates

This parameter is deprecated and RHACS ignores its value.

admissionControl.listenOnUpdates

This parameter is deprecated and RHACS ignores its value.

admissionControl.listenOnEvents

This parameter is deprecated and RHACS ignores its value.

admissionControl.dynamic.enforceOnCreates

This parameter is deprecated. RHACS checks its value during updates to version 4.9 and uses it to set a default value for the new admissionControl.enforce parameter. On new installations, changing this parameter has no effect.

admissionControl.dynamic.enforceOnUpdates

This parameter is deprecated. RHACS checks its value during updates to version 4.9 and uses it to set a default value for the new admissionControl.enforce parameter. On new installations, changing this parameter has no effect.

admissionControl.dynamic.scanInline

This parameter is deprecated and RHACS ignores its value.

admissionControl.dynamic.disableBypass

Set this parameter to true to disable bypassing the admission controller. The default value is false.

admissionControl.dynamic.timeout

The ability to configure this parameter is deprecated. RHACS uses a preset value for the timeout period and you cannot change it. This parameter is ignored.

admissionControl.resources.requests.memory

The memory request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.requests.cpu

The CPU request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.memory

The memory limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.cpu

The CPU limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.nodeSelector

Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label.

admissionControl.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes.

admissionControl.namespaceSelector

If the admission controller webhook needs a specific namespaceSelector, you can specify the corresponding selector here. Use this parameter to override the default, which avoids a few system namespaces.

admissionControl.serviceTLS.cert

The internal service-to-service TLS certificate that Admission Control uses.

admissionControl.serviceTLS.key

The internal service-to-service TLS certificate key that Admission Control uses.

registryOverride

Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry.

createUpgraderServiceAccount

Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions.

createSecrets

Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller.

collector.slimMode

Deprecated. Specify true if you want to use a slim Collector image for deploying Collector.

sensor.resources

Resource specification for Sensor.

admissionControl.resources

Resource specification for Admission controller.

collector.resources

Resource specification for Collector.

collector.complianceResources

Resource specification for Collector’s Compliance container.

perNode.sfa.agent

Set this option to Enabled to enable file activity monitoring in your secured cluster.

exposeMonitoring

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller.

auditLogs.disableCollection

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets.

autoLockProcessBaselines.enabled

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes enables automatically locking process baselines. The default is false.

scanner.disable

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true.

scanner.replicas

The number of replicas created for the Scanner deployment. If autoscaling is enabled, the autoscaler manages the replica count.

scanner.logLevel

Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes.

scanner.autoscaling.disable

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment.

scanner.autoscaling.minReplicas

The minimum number of replicas for autoscaling. Defaults to 2.

scanner.autoscaling.maxReplicas

The maximum number of replicas for autoscaling. Defaults to 5.

scanner.nodeSelector

Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label.

scanner.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner.

scanner.dbNodeSelector

Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label.

scanner.dbTolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB.

scanner.resources.requests.memory

The memory request for the Scanner container. Use this parameter to override the default value.

scanner.resources.requests.cpu

The CPU request for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.memory

The memory limit for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.cpu

The CPU limit for the Scanner container. Use this parameter to override the default value.

scanner.dbResources.requests.memory

The memory request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.requests.cpu

The CPU request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.memory

The memory limit for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.cpu

The CPU limit for the Scanner DB container. Use this parameter to override the default value.

monitoring.openshift.enabled

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4.

network.enableNetworkPolicies

To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to false. This is a Boolean value. The default value is true, which means the default policies are created automatically.

Warning

Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.

5.4.1.2.1.1. Environment variables

You can specify environment variables for Sensor and Admission controller in the following format:

customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"

The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.

The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
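For example, the hierarchical override might look like the following. This is a sketch: the top-level labels and envVars keys follow the format shown above, but the nested sensor scope is an assumption; verify the exact sub-keys against the values schema of your chart version:

```yaml
customize:
  # broad scope: applied to all objects created by the chart
  labels:
    owner: security-team
  envVars:
    LOG_LEVEL: "info"
  # narrower scope (assumed sub-key): applies only to the Sensor deployment
  # and overrides the broader value where keys collide
  sensor:
    labels:
      owner: sensor-team
```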

Install the secured-cluster-services Helm chart with custom configuration to deploy Sensor, Admission Controller, Collector, and Scanner components.

After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:

  • Sensor
  • Admission controller
  • Collector
  • Scanner: optional for secured clusters when the StackRox Scanner is installed
  • Scanner DB: optional for secured clusters when the StackRox Scanner is installed
  • Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed

Prerequisites

  • You must have generated a cluster registration secret (CRS) or an init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the address and the port number that you are exposing the Central service on.

Procedure

  • Run the following command:

    $ helm install -n stackrox \
      --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
      -f <name_of_cluster_init_bundle.yaml> \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml> \
      --set imagePullSecrets.username=<username> \
      --set imagePullSecrets.password=<password>

    where:

    <path_to_values_public.yaml>
    Specifies the path to your public YAML configuration file.
    <path_to_values_private.yaml>
    Specifies the path to your private YAML configuration file.
    <username>
    Specifies the user name for your pull secret for Red Hat Container Registry authentication.
    <password>
    Specifies the password for your pull secret for Red Hat Container Registry authentication.
    Note

    To deploy secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the CRS or the init bundle YAML file as an environment variable to the helm install command:

    $ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET")

    If you are using base64 encoded variables, use the helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.
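The base64 round trip in the CI pattern above can be sketched as follows. The printf output stands in for your real init bundle YAML, and the helm invocation is shown as a comment so that the runnable part does not require a cluster:

```shell
# Simulate storing the bundle base64-encoded in a CI secret variable.
# The printf stands in for the real init bundle YAML content.
INIT_BUNDLE_YAML_SECRET="$(printf 'apiVersion: v1\nkind: Secret\n' | base64)"

# Decode at install time to recover the original YAML
echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode

# helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)
```

Storing the bundle base64-encoded avoids problems with newlines and special characters in CI secret variables; the process substitution passes the decoded YAML to helm without writing it to disk.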

Change configuration options for the secured-cluster-services Helm chart after deployment by using the helm upgrade command.

You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.

When using the helm upgrade command to make changes, the following guidelines and requirements apply:

  • You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
  • Some changes, such as enabling a new component, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.

    • If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
    • If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.

Procedure

  1. Update the values-public.yaml and values-private.yaml configuration files with new values.
  2. Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-secured-cluster-services rhacs/secured-cluster-services \
      --reuse-values \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

    where:

    --reuse-values
    Reuses the values from the previous release for any options that you do not explicitly modify in the values-public.yaml and values-private.yaml files.

To install RHACS on secured clusters by using the roxctl CLI, you must first install the roxctl CLI and then install Sensor by using either the RHACS portal or the roxctl CLI.

5.4.2.1. Install the roxctl CLI

To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS.

5.4.2.1.1. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
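A minimal sketch of the last step follows, assuming ~/.local/bin as the target directory (a common but not required choice). The printf creates a placeholder that stands in for the downloaded roxctl binary:

```shell
# ~/.local/bin is an assumed PATH directory; any directory on your PATH works
mkdir -p "$HOME/.local/bin"

# Placeholder standing in for the downloaded roxctl binary
printf '#!/bin/sh\necho "roxctl placeholder"\n' > "$HOME/.local/bin/roxctl"
chmod +x "$HOME/.local/bin/roxctl"

# Make the directory take effect for the current shell session
export PATH="$HOME/.local/bin:$PATH"

command -v roxctl
```

To make the PATH change persistent, add the export line to your shell profile, for example ~/.bashrc.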

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
5.4.2.1.2. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
5.4.2.1.3. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.10.1/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

5.4.2.2. Install Sensor

Deploy Sensor to a cluster by using the manifest installation method, either with the RHACS portal or the roxctl CLI.

To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.

To perform an installation by using the manifest installation method, follow only one of the following procedures:

  • Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
  • Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.

Prerequisites

  • You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).

To help ensure your cluster reports security information to Central, you can deploy Sensor by installing a manifest from the RHACS web portal.

Procedure

  1. On your secured cluster, in the RHACS portal, go to Platform Configuration → Clusters.
  2. Select Secure a cluster → Legacy installation method.
  3. Specify a name for the cluster.
  4. Provide appropriate values for the fields based on where you are deploying the Sensor.

    • If you are deploying Sensor in the same cluster, accept the default values for all the fields.
    • If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster.
    • If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure (wss) protocol. To use wss:

      • Prefix the address with wss://.
      • Add the port number after the address, for example, wss://stackrox-central.example.com:443.
  5. Click Next to continue with the Sensor setup.
  6. Click Download YAML File and Keys to download the cluster bundle (zip archive).

    Important

    The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.

  7. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

    After RHACS deploys Sensor, Sensor contacts Central and provides cluster information.

Verification

  1. Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a checkmark and a Healthy status. If you do not see a checkmark, use the following command to check for problems:

    • On Kubernetes, enter the following command:

      $ kubectl get pod -n stackrox -w
  2. Click Finish to close the window.

After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
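When checking for problems with `kubectl get pod -n stackrox`, you are looking for pods that are not in the Running state. The following sketch shows a hypothetical helper that performs that check; the pod listing is simulated here, but on a real cluster you would pipe in the actual `kubectl get pod -n stackrox` output:

```shell
# Sketch: a hypothetical helper that exits nonzero if any pod line from
# "kubectl get pod" style output (read on stdin) is not in the Running state.
all_running() {
    awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# Simulated listing; on a real cluster use: kubectl get pod -n stackrox | all_running
printf 'NAME READY STATUS RESTARTS\nsensor-abc 1/1 Running 0\ncollector-xyz 2/2 Running 0\n' \
    | all_running && echo "all pods healthy"
```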

To monitor your cluster for policy violations, deploy the required Sensor resources. Generate the configuration bundle by using the roxctl CLI, and then run the extracted script to connect your cluster to RHACS.

Procedure

  1. Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:

    $ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT"

    where:

    <ocp_version>
    Specifies the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.
  2. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

    After RHACS deploys Sensor, Sensor contacts Central and provides cluster information.

Verification

  1. Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a checkmark and a Healthy status. If you do not see a checkmark, use the following command to check for problems:

    • On Kubernetes, enter the following command:

      $ kubectl get pod -n stackrox -w
  2. Click Finish to close the window.

After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
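As a concrete sketch of the `roxctl sensor generate` invocation from step 1, the following fills in example values; the cluster name `production` and the Central endpoint `central.example.com:443` are hypothetical placeholders for your own values:

```shell
# Sketch: step 1 with example values filled in. The cluster name and
# Central endpoint below are hypothetical placeholders.
ROX_ENDPOINT="central.example.com:443"
cmd="roxctl sensor generate openshift --openshift-version 4 --name production --central ${ROX_ENDPOINT}"
echo "$cmd"
```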

You can validate the build and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes (RHACS) by running sample applications with known vulnerabilities. You can then access the RHACS portal to view the resulting security assessments and confirm that policy violations are detected.

5.5.1. Verifying installation

After you complete the installation, run a few vulnerable applications and go to the RHACS portal to evaluate the results of security assessments and policy violations.

Note

The sample applications listed in the following section contain critical vulnerabilities and are specifically designed to verify the build and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes.

To verify installation:

  1. Find the address of the RHACS portal based on your exposure method:

    1. For a load balancer:

      $ kubectl get service central-loadbalancer -n stackrox
    2. For port forward:

      1. Run the following command:

        $ kubectl port-forward svc/central 18443:443 -n stackrox
      2. Go to https://localhost:18443/.
  2. Create a new namespace:

    $ kubectl create namespace test
  3. Start some applications with critical vulnerabilities:

    $ kubectl run shell --labels=app=shellshock,team=test-team \
      --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test
    $ kubectl run samba --labels=app=rce \
      --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test

    Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risks and policy violations as soon as they are submitted to the cluster. Go to the RHACS portal to view the violations. You can log in to the RHACS portal by using the default username admin and the generated password.
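When using the load balancer exposure method in step 1, the portal address is the EXTERNAL-IP column of the `central-loadbalancer` service. The following sketch shows a hypothetical helper that extracts it; the service listing is simulated, and 203.0.113.10 is a documentation-only example address:

```shell
# Sketch: a hypothetical helper that extracts the EXTERNAL-IP column from
# "kubectl get service" style output read on stdin.
external_ip() {
    awk 'NR == 2 { print $4 }'
}

# Simulated listing; on a real cluster use:
#   kubectl get service central-loadbalancer -n stackrox | external_ip
printf 'NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)\ncentral-loadbalancer LoadBalancer 10.0.1.5 203.0.113.10 443:30000/TCP\n' \
    | external_ip
```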
