This documentation is for a release that is no longer maintained.

Chapter 5. Installing RHACS on other platforms
5.1. High-level overview of installing RHACS on other platforms
Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides security services for self-managed RHACS on platforms such as Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (Google GKE), and Microsoft Azure Kubernetes Service (Microsoft AKS).
Before you install:
- Understand the installation methods for different platforms.
- Understand Red Hat Advanced Cluster Security for Kubernetes architecture.
- Check the default resource requirements page.
The following list provides a high-level overview of installation steps:
- Install Central services on a cluster by using Helm charts or the roxctl CLI.
- Generate and apply an init bundle.
- Install secured cluster resources on each of your secured clusters.
5.2. Installing Central services for RHACS on other platforms
Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters.
You can install Central by using one of the following methods:
- Install using Helm charts
- Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires it)
5.2.1. Install Central using Helm charts
You can install Central by using Helm charts without any customization (using the default values) or with additional customizations of configuration parameters.
5.2.1.1. Install Central using Helm charts without customization
You can install RHACS on your Red Hat OpenShift cluster without any customizations. You must add the Helm chart repository and install the central-services Helm chart to install the centralized components of Central and Scanner.
5.2.1.1.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
- Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).
  Note: You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.
- Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
  Note: Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components on all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
5.2.1.1.2. Installing the central-services Helm chart without customizations
Use the following instructions to install the central-services Helm chart to deploy the centralized components (Central and Scanner).
Prerequisites
- You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
Procedure
Run the following command to install Central services and expose Central using a route:
$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password> \
  --set central.exposure.route.enabled=true

Or, run the following command to install Central services and expose Central using a load balancer:

$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password> \
  --set central.exposure.loadBalancer.enabled=true

Or, run the following command to install Central services and expose Central using port forward:

$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password>
- If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter.
- If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>".
- Do not use image pull secrets in the following cases:
  - If you are pulling your images from quay.io/stackrox-io or a registry in a private network that does not require authentication. Use --set imagePullSecrets.allowNone=true instead of specifying a username and password.
  - If you already configured image pull secrets in the default service account in the namespace in which you are installing. Use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password.
The output of the installation command includes:
- An automatically generated administrator password.
- Instructions on storing all the configuration values.
- Any warnings that Helm generates.
5.2.1.2. Install Central using Helm charts with customizations
You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
- Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
- Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
- Configuration file declarative-config-values.yaml: Create this file if you are using declarative configuration to add the declarative configuration mounts to Central.
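The public/private split can be sketched as follows; the keys shown are examples drawn from the parameter tables in this chapter, not a complete or required set:

```yaml
# values-public.yaml: non-sensitive options
central:
  exposure:
    route:
      enabled: true
---
# values-private.yaml: sensitive options; store this file securely
central:
  adminPassword:
    value: <password>  # autogenerated by the chart if omitted
```

You pass each file to helm install with its own -f option, as shown in "Installing the central-services Helm chart".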
5.2.1.2.1. Private configuration file
This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters.
5.2.1.2.1.1. Image pull secrets
The credentials that are required for pulling images from the registry depend on the following factors:
If you are using a custom registry, you must specify these parameters:
- imagePullSecrets.username
- imagePullSecrets.password
- image.registry

If you do not use a username and password to log in to the custom registry, you must specify one of the following parameters:
- imagePullSecrets.allowNone
- imagePullSecrets.useExisting
- imagePullSecrets.useFromDefaultServiceAccount
| Parameter | Description |
|---|---|
| imagePullSecrets.username | The username of the account that is used to log in to the registry. |
| imagePullSecrets.password | The password of the account that is used to log in to the registry. |
| imagePullSecrets.allowNone | Use `true` if you are pulling images from a registry that does not require authentication, instead of specifying a username and password. |
| imagePullSecrets.useExisting | A list of existing image pull secrets as values, for example, `pull-secret-1;pull-secret-2`. |
| imagePullSecrets.useFromDefaultServiceAccount | Use `true` if you have already configured image pull secrets in the default service account in the namespace in which you are installing, instead of specifying a username and password. |
5.2.1.2.1.2. Proxy configuration
If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:
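A minimal proxyConfig sketch might look like the following; the proxy URL, credentials, and excludes are placeholders, and you should confirm the exact schema for your RHACS version:

```yaml
proxyConfig: |
  url: http://proxy.example.com:3128   # placeholder proxy endpoint
  username: <username>                 # omit if the proxy needs no authentication
  password: <password>
  excludes:                            # hosts to reach without the proxy
  - .cluster.local
```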
| Parameter | Description |
|---|---|
| proxyConfig | Your proxy configuration. |
5.2.1.2.1.3. Central
Configurable parameters for Central.
For a new installation, you can skip the following parameters:
- central.jwtSigner.key
- central.serviceTLS.cert
- central.serviceTLS.key
- central.adminPassword.value
- central.adminPassword.htpasswd
- central.db.serviceTLS.cert
- central.db.serviceTLS.key
- central.db.password.value

When you do not specify values for these parameters, the Helm chart autogenerates values for them. If you want to modify these values, you can use the helm upgrade command and specify the values using the --set option.
For setting the administrator password, you can only use either central.adminPassword.value or central.adminPassword.htpasswd, but not both.
| Parameter | Description |
|---|---|
|
| A private key which RHACS should use for signing JSON web tokens (JWTs) for authentication. |
|
| An internal certificate that the Central service should use for deploying Central. |
|
| The private key of the internal certificate that the Central service should use. |
|
| The user-facing certificate that Central should use. RHACS uses this certificate for RHACS portal.
|
|
| The private key of the user-facing certificate that Central should use.
|
|
| Connection password for Central database. |
|
| Administrator password for logging into RHACS. |
|
| Administrator password for logging into RHACS. This password is stored in hashed format using bcrypt. |
|
| An internal certificate that the Central DB service should use for deploying Central DB. |
|
| The private key of the internal certificate that the Central DB service should use. |
|
| The password used to connect to the Central DB. |
If you are using the central.adminPassword.htpasswd parameter, you must use a bcrypt encoded password hash. You can run the command htpasswd -nB admin to generate a password hash. For example:

htpasswd: |
  admin:<bcrypt-hash>
5.2.1.2.1.4. Scanner
Configurable parameters for the StackRox Scanner and Scanner V4.
For a new installation, you can skip the following parameters and the Helm chart autogenerates values for them. Otherwise, if you are upgrading to a new version, specify the values for the following parameters:
- scanner.dbPassword.value
- scanner.serviceTLS.cert
- scanner.serviceTLS.key
- scanner.dbServiceTLS.cert
- scanner.dbServiceTLS.key
- scannerV4.db.password.value
- scannerV4.indexer.serviceTLS.cert
- scannerV4.indexer.serviceTLS.key
- scannerV4.matcher.serviceTLS.cert
- scannerV4.matcher.serviceTLS.key
- scannerV4.db.serviceTLS.cert
- scannerV4.db.serviceTLS.key
| Parameter | Description |
|---|---|
|
| The password to use for authentication with Scanner database. Do not modify this parameter because RHACS automatically creates and uses its value internally. |
|
| An internal certificate that the StackRox Scanner service should use for deploying the StackRox Scanner. |
|
| The private key of the internal certificate that the Scanner service should use. |
|
| An internal certificate that the Scanner-db service should use for deploying Scanner database. |
|
| The private key of the internal certificate that the Scanner-db service should use. |
|
| The password to use for authentication with the Scanner V4 database. Do not modify this parameter because RHACS automatically creates and uses its value internally. |
|
| An internal certificate that the Scanner V4 DB service should use for deploying the Scanner V4 database. |
|
| The private key of the internal certificate that the Scanner V4 DB service should use. |
|
| An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Indexer. |
|
| The private key of the internal certificate that the Scanner V4 Indexer should use. |
|
| An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Matcher. |
|
| The private key of the internal certificate that the Scanner V4 Matcher should use. |
5.2.1.2.2. Public configuration file
This section lists the configurable parameters of the values-public.yaml file.
5.2.1.2.2.1. Image pull secrets
Image pull secrets are the credentials required for pulling images from your registry.
| Parameter | Description |
|---|---|
| imagePullSecrets.allowNone | Use `true` if you are pulling images from a registry that does not require authentication. |
| imagePullSecrets.useExisting | A list of existing image pull secrets as values, for example, `pull-secret-1;pull-secret-2`. |
| imagePullSecrets.useFromDefaultServiceAccount | Use `true` if you have already configured image pull secrets in the default service account in the namespace in which you are installing. |
5.2.1.2.2.2. Image
Image declares the configuration to set up the main registry, which the Helm chart uses to resolve images for the central.image, scanner.image, scanner.dbImage, scannerV4.image, and scannerV4.db.image parameters.
| Parameter | Description |
|---|---|
| image.registry | Address of your image registry. Either use a hostname, such as registry.redhat.io, or an IP address. |
5.2.1.2.2.3. Policy as code
Policy as code provides a way to configure RHACS to work with a continuous delivery tool such as Argo CD to track, manage, and apply policies that you have authored locally or exported from the RHACS portal and modified. You configure Argo CD or your other tool to apply policy as code resources to the same namespace in which RHACS is installed.
| Parameter | Description |
|---|---|
|
|
By default, the value is |
5.2.1.2.2.4. Environment variables
Red Hat Advanced Cluster Security for Kubernetes automatically detects your cluster environment and sets values for env.openshift, env.istio, and env.platform. Only set these values to override the automatic cluster environment detection.
| Parameter | Description |
|---|---|
|
|
Use |
|
|
Use |
|
|
The platform on which you are installing RHACS. Set its value to |
|
|
Use |
5.2.1.2.2.5. Additional trusted certificate authorities
RHACS automatically references the system root certificates to trust. When Central, the StackRox Scanner, or Scanner V4 must reach out to services that use certificates issued by an authority in your organization or a globally trusted partner organization, you can add trust for these services by specifying the root certificate authority to trust by using the following parameter:
| Parameter | Description |
|---|---|
|
| Specify the PEM encoded certificate of the root certificate authority to trust. |
5.2.1.2.2.6. Default network policies
To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled. The default value is Enabled.
Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.
| Parameter | Description |
|---|---|
|
|
Specify if RHACS creates default network policies to allow communication between components. To create your own network policies, set this parameter to |
5.2.1.2.2.7. Central
Configurable parameters for Central.
- For exposing the Central deployment for external access, you must specify one parameter: either central.exposure.loadBalancer, central.exposure.nodePort, or central.exposure.route. When you do not specify any value for these parameters, you must manually expose Central or access it by using port forwarding.
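For example, the route exposure enabled by --set central.exposure.route.enabled=true in the earlier install command corresponds to this values-public.yaml fragment (loadBalancer and nodePort are configured analogously):

```yaml
central:
  exposure:
    route:
      enabled: true
```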
The following table lists the configurable parameters for Central, including settings for an external PostgreSQL database.
| Parameter | Description |
|---|---|
|
| Mounts config maps used for declarative configurations. |
|
| Mounts secrets used for declarative configurations. |
|
| The endpoint configuration options for Central. |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. |
|
|
Specify |
|
|
A custom registry that overrides the global |
|
|
The custom image name that overrides the default Central image name ( |
|
|
| The custom image tag that overrides the default tag for Central image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the |
|
|
Full reference including registry address, image name, and image tag for the Central image. Setting a value for this parameter overrides the |
|
| The memory request for Central. |
|
| The CPU request for Central. |
|
| The memory limit for Central. |
|
| The CPU limit for Central. |
|
|
Use |
|
| The port number on which to expose Central. The default port number is 443. |
|
|
Use |
|
| The port number on which to expose Central. When you skip this parameter, OpenShift Container Platform automatically assigns a port number. Red Hat recommends that you do not specify a port number if you are exposing RHACS by using a node port. |
|
|
Use |
|
|
Use |
|
|
The connection string for Central to use to connect to the database. This is only used when
|
|
| The minimum number of connections to the database to be established. |
|
| The maximum number of connections to the database to be established. |
|
| The number of milliseconds a single query or transaction can be active against the database. |
|
| The postgresql.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". |
|
| The pg_hba.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". |
|
|
Specify a node selector label as |
|
|
A custom registry that overrides the global |
|
|
The custom image name that overrides the default Central DB image name ( |
|
|
| The custom image tag that overrides the default tag for Central DB image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the |
|
|
Full reference including registry address, image name, and image tag for the Central DB image. Setting a value for this parameter overrides the |
|
| The memory request for Central DB. |
|
| The CPU request for Central DB. |
|
| The memory limit for Central DB. |
|
| The CPU limit for Central DB. |
|
| The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. |
|
| The name of the persistent volume claim (PVC) you are using. |
|
|
Use |
|
| The size (in GiB) of the persistent volume managed by the specified claim. |
5.2.1.2.2.8. StackRox Scanner
The following table lists the configurable parameters for the StackRox Scanner. This is the scanner used for node and platform scanning. If Scanner V4 is not enabled, the StackRox Scanner also performs image scanning. Beginning with version 4.4, you can enable Scanner V4 to provide image scanning. See the next table for Scanner V4 parameters.
| Parameter | Description |
|---|---|
|
|
Use |
|
|
Specify |
|
|
The number of replicas to create for the StackRox Scanner deployment. When you use it with the |
|
|
Configure the log level for the StackRox Scanner. Red Hat recommends that you not change the default log level value ( |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. |
|
|
Use |
|
| The minimum number of replicas for autoscaling. |
|
| The maximum number of replicas for autoscaling. |
|
| The memory request for the StackRox Scanner. |
|
| The CPU request for the StackRox Scanner. |
|
| The memory limit for the StackRox Scanner. |
|
| The CPU limit for the StackRox Scanner. |
|
| The memory request for the StackRox Scanner database deployment. |
|
| The CPU request for the StackRox Scanner database deployment. |
|
| The memory limit for the StackRox Scanner database deployment. |
|
| The CPU limit for the StackRox Scanner database deployment. |
|
| A custom registry for the StackRox Scanner image. |
|
|
The custom image name that overrides the default StackRox Scanner image name ( |
|
| A custom registry for the StackRox Scanner DB image. |
|
|
The custom image name that overrides the default StackRox Scanner DB image name ( |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. |
5.2.1.2.2.9. Scanner V4
The following table lists the configurable parameters for Scanner V4.
| Parameter | Description |
|---|---|
|
|
The name of the PVC to manage persistent data for Scanner V4. By default, for Central, the system creates a PVC and uses the default value of |
|
| The size of the PVC to manage persistent data for Scanner V4. |
|
| The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. |
|
|
Use |
|
|
Specify |
|
|
The number of replicas to create for the Scanner V4 Indexer deployment. When you use it with the |
|
|
Configure the log level for the Scanner V4 Indexer. Red Hat recommends that you not change the default log level value ( |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. |
|
|
Use |
|
| The minimum number of replicas for autoscaling. |
|
| The maximum number of replicas for autoscaling. |
|
| The memory request for the Scanner V4 Indexer. |
|
| The CPU request for the Scanner V4 Indexer. |
|
| The memory limit for the Scanner V4 Indexer. |
|
| The CPU limit for the Scanner V4 Indexer. |
|
|
The number of replicas to create for the Scanner V4 Matcher deployment. When you use it with the |
|
|
Red Hat recommends that you not change the default log level value ( |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes. |
|
|
Use |
|
| The minimum number of replicas for autoscaling. |
|
| The maximum number of replicas for autoscaling. |
|
| The memory request for the Scanner V4 Matcher. |
|
| The CPU request for the Scanner V4 Matcher. |
|
| The memory request for the Scanner V4 database deployment. |
|
| The CPU request for the Scanner V4 database deployment. |
|
| The memory limit for the Scanner V4 database deployment. |
|
| The CPU limit for the Scanner V4 database deployment. |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 DB. This parameter is mainly used for infrastructure nodes. |
|
| A custom registry for the Scanner V4 DB image. |
|
|
The custom image name that overrides the default Scanner V4 DB image name ( |
|
| A custom registry for the Scanner V4 image. |
|
|
The custom image name that overrides the default Scanner V4 image name ( |
5.2.1.2.2.10. Customization
Use these parameters to specify additional attributes for all objects that RHACS creates.
| Parameter | Description |
|---|---|
|
| A custom label to attach to all objects. |
|
| A custom annotation to attach to all objects. |
|
| A custom label to attach to all deployments. |
|
| A custom annotation to attach to all deployments. |
|
| A custom environment variable for all containers in all objects. |
|
| A custom label to attach to all objects that Central creates. |
|
| A custom annotation to attach to all objects that Central creates. |
|
| A custom label to attach to all Central deployments. |
|
| A custom annotation to attach to all Central deployments. |
|
| A custom environment variable for all Central containers. |
|
| A custom label to attach to all objects that Scanner creates. |
|
| A custom annotation to attach to all objects that Scanner creates. |
|
| A custom label to attach to all Scanner deployments. |
|
| A custom annotation to attach to all Scanner deployments. |
|
| A custom environment variable for all Scanner containers. |
|
| A custom label to attach to all objects that Scanner DB creates. |
|
| A custom annotation to attach to all objects that Scanner DB creates. |
|
| A custom label to attach to all Scanner DB deployments. |
|
| A custom annotation to attach to all Scanner DB deployments. |
|
| A custom environment variable for all Scanner DB containers. |
|
| A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
|
| A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
|
| A custom environment variable for all Scanner V4 Indexer containers and the pods belonging to them. |
|
| A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
|
| A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
|
| A custom environment variable for all Scanner V4 Matcher containers and the pods belonging to them. |
|
| A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
|
| A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
|
| A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
|
| A custom environment variable for all Scanner V4 DB containers and the pods belonging to them. |
You can also use:
- the customize.other.service/*.labels and customize.other.service/*.annotations parameters to specify labels and annotations for all objects, or
- a specific service name, for example, customize.other.service/central-loadbalancer.labels and customize.other.service/central-loadbalancer.annotations, as parameters and set their values.
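As a sketch, a customization block in values-public.yaml might look like the following; the top-level labels and annotations keys are assumptions based on the parameter descriptions above, while the other.service/* form is taken from this section:

```yaml
customize:
  labels:                      # assumed key for "a custom label to attach to all objects"
    owner: security-team
  annotations:                 # assumed key for "a custom annotation to attach to all objects"
    example.com/contact: secops@example.com
  other:
    service/central-loadbalancer:
      labels:
        environment: production
```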
5.2.1.2.2.11. Advanced customization
The parameters specified in this section are for information only. Red Hat does not support RHACS instances with modified namespace and release names.
| Parameter | Description |
|---|---|
|
|
Use |
|
|
Use |
5.2.1.2.3. Declarative configuration values
To use declarative configuration, you must create a YAML file (in this example, named "declarative-config-values.yaml") that adds the declarative configuration mounts to Central. This file is used in a Helm installation.
Procedure
- Create the YAML file (in this example, named declarative-config-values.yaml).
- Install the Central services Helm chart as documented in "Installing the central-services Helm chart", referencing the declarative-config-values.yaml file.
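As a guideline, a declarative-config-values.yaml that mounts one ConfigMap and one Secret might look like the following sketch; the mount parameter names and the ConfigMap/Secret names are assumptions to verify against your chart version:

```yaml
central:
  declarativeConfiguration:
    mounts:
      configMaps:
      - declarative-configs            # name of an existing ConfigMap (assumed)
      secrets:
      - sensitive-declarative-configs  # name of an existing Secret (assumed)
```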
5.2.1.2.4. Installing the central-services Helm chart
After you configure the values-public.yaml and values-private.yaml files, install the central-services Helm chart to deploy the centralized components (Central and Scanner).
Procedure
Run the following command:
$ helm install -n stackrox --create-namespace \
  stackrox-central-services rhacs/central-services \
  -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>

Use the -f option to specify the paths for your YAML configuration files.

Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configuration file in Central.
5.2.1.3. Changing configuration options after deploying the central-services Helm chart
You can make changes to any configuration options after you have deployed the central-services Helm chart.
When using the helm upgrade command to make changes, the following guidelines and requirements apply:
- You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
- Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes:
  - If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
  - If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.
Procedure
- Update the values-public.yaml and values-private.yaml configuration files with new values.
- Run the helm upgrade command and specify the configuration files using the -f option, for example:

  $ helm upgrade -n stackrox stackrox-central-services rhacs/central-services \
    -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>

  If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter.
5.2.2. Install Central using the roxctl CLI Copy linkLink copied to clipboard!
For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl install method unless you have a specific installation need that requires using this method.
5.2.2.1. Installing the roxctl CLI Copy linkLink copied to clipboard!
To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS.
5.2.2.1.1. Installing the roxctl CLI on Linux Copy linkLink copied to clipboard!
You can install the roxctl CLI binary on Linux by using the following procedure.
roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.
Procedure
1.  Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2.  Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Linux/roxctl${arch}"

3.  Make the roxctl binary executable:

    $ chmod +x roxctl

4.  Place the roxctl binary in a directory that is on your PATH. To check your PATH, run the following command:

    $ echo $PATH

Verification

Verify the roxctl version you have installed:

    $ roxctl version
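The architecture detection in step 1 maps x86_64 to an empty suffix (the plain roxctl binary name) and any other machine type reported by uname -m to a -<arch> suffix appended to the download URL. A minimal sketch of that logic, using an illustrative helper name (arch_suffix is not part of the product):

```shell
# Sketch of the suffix logic from the install step above.
arch_suffix() {
  local arch
  arch="$(echo "$1" | sed "s/x86_64//")"   # empty string for x86_64
  echo "${arch:+-$arch}"                   # prepend "-" only if non-empty
}

arch_suffix x86_64    # empty suffix: the URL ends in "roxctl"
arch_suffix s390x     # "-s390x": the URL ends in "roxctl-s390x"
```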
5.2.2.1.2. Installing the roxctl CLI on macOS Copy linkLink copied to clipboard!
You can install the roxctl CLI binary on macOS by using the following procedure.
roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure
1.  Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2.  Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Darwin/roxctl${arch}"

3.  Remove all extended attributes from the binary:

    $ xattr -c roxctl

4.  Make the roxctl binary executable:

    $ chmod +x roxctl

5.  Place the roxctl binary in a directory that is on your PATH. To check your PATH, run the following command:

    $ echo $PATH

Verification

Verify the roxctl version you have installed:

    $ roxctl version
5.2.2.1.3. Installing the roxctl CLI on Windows Copy linkLink copied to clipboard!
You can install the roxctl CLI binary on Windows by using the following procedure.
roxctl CLI for Windows is available for the amd64 architecture.
Procedure
1.  Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Windows/roxctl.exe

Verification

Verify the roxctl version you have installed:

    $ roxctl version
5.2.2.2. Using the interactive installer Copy linkLink copied to clipboard!
Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment.
Procedure
1.  Run the interactive install command:

    $ roxctl central generate interactive

    Important: Installing RHACS using the roxctl CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes version 1.25 or later, or on OpenShift Container Platform version 4.12 or later, you must disable PSP object creation. To do this, set the --enable-pod-security-policies option to false for the roxctl central generate and roxctl sensor generate commands.

2.  Press Enter to accept the default value for a prompt, or enter custom values as required. Among other values, the interactive installer prompts you for the following:

    -   An optional custom TLS certificate: provide the file path for the PEM-encoded certificate. When you specify a custom certificate, the interactive installer also prompts you to provide a PEM private key for the custom certificate you are using.
    -   Whether to create PodSecurityPolicy objects: if you are running Kubernetes version 1.25 or later, set this value to false.
    -   An optional declarative configuration mount: for more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
    -   The method for exposing Central: to use the RHACS portal, you must expose Central by using a route, a load balancer, or a node port.

    Warning: On OpenShift Container Platform, to use a hostPath volume, you must modify the SELinux policy to allow access to the directory that the host and the container share, because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command:

    $ sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>

    However, Red Hat does not recommend modifying the SELinux policy. Instead, use a PVC when installing on OpenShift Container Platform.
On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts that you must run to deploy additional trusted certificate authorities, Central, and Scanner, along with authentication instructions for logging in to the RHACS portal and the autogenerated password if you did not provide one when answering the prompts.
5.2.2.3. Running the Central installation scripts Copy linkLink copied to clipboard!
After you run the interactive installer, you can run the setup.sh script to install Central.
Procedure
1.  Run the setup.sh script to configure image registry access:

    $ ./central-bundle/central/scripts/setup.sh

2.  Optional: To enable the policy as code feature (Technology Preview), manually apply the config.stackrox.io CRD that is located in the .zip file at helm/chart/crds/config.stackrox.io_securitypolicies.yaml.

    Important: Policy as code is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

    To apply the CRD, run one of the following commands:

    $ oc create -f helm/chart/crds/config.stackrox.io_securitypolicies.yaml

    $ kubectl create -f helm/chart/crds/config.stackrox.io_securitypolicies.yaml

3.  Create the necessary resources by running one of the following commands:

    $ oc create -R -f central-bundle/central

    $ kubectl create -R -f central-bundle/central

4.  Check the deployment progress:

    $ oc get pod -n stackrox -w

    $ kubectl get pod -n stackrox -w
After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address.
| Exposure method | Command | Address | Example |
|---|---|---|---|
| Route | oc -n stackrox get route central | The address under the HOST/PORT column in the output | https://central-stackrox.example.route |
| Node Port | oc get node -owide && oc -n stackrox get svc central-loadbalancer | IP or hostname of any node, on the port shown for the service | https://198.51.100.0:31489 |
| Load Balancer | oc -n stackrox get svc central-loadbalancer | EXTERNAL-IP or hostname shown for the service, on port 443 | https://192.0.2.0 |
| None | central-bundle/central/scripts/port-forward.sh 8443 | https://localhost:8443 | https://localhost:8443 |

If you selected an autogenerated password during the interactive install, run the following command to view it for logging in to Central:

    $ cat central-bundle/password
5.3. Generating and applying an init bundle for RHACS on other platforms Copy linkLink copied to clipboard!
Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources.
You must have the Admin user role to create an init bundle.
5.3.1. Generating an init bundle Copy linkLink copied to clipboard!
5.3.1.1. Generating an init bundle by using the RHACS portal Copy linkLink copied to clipboard!
You can create an init bundle containing secrets by using the RHACS portal.
You must have the Admin user role to create an init bundle.
Procedure
- Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
- Log in to the RHACS portal.
-   If you do not have secured clusters, the Platform Configuration → Clusters page appears.
-   Click Create init bundle.
- Enter a name for the cluster init bundle.
- Select your platform.
- Select the installation method you will use for your secured clusters: Operator or Helm chart.
Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.
ImportantStore this bundle securely because it contains secrets.
- Apply the init bundle by using it to create resources on the secured cluster.
- Install secured cluster services on each cluster.
5.3.1.2. Generating an init bundle by using the roxctl CLI Copy linkLink copied to clipboard!
You can create an init bundle with secrets by using the roxctl CLI.
You must have the Admin user role to create init bundles.
Prerequisites
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

-   Set the ROX_API_TOKEN environment variable by running the following command:

    $ export ROX_API_TOKEN=<api_token>

-   Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:

    $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>

Procedure

-   To generate a cluster init bundle containing secrets for Helm installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate --output \
      <cluster_init_bundle_name> cluster_init_bundle.yaml

-   To generate a cluster init bundle containing secrets for Operator installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate --output-secrets \
      <cluster_init_bundle_name> cluster_init_bundle.yaml

    Important: Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.
5.3.1.3. Applying the init bundle on the secured cluster Copy linkLink copied to clipboard!
Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the cluster. Applying the init bundle allows the services on the secured cluster to communicate with Central.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.
Prerequisites
- You must have generated an init bundle containing secrets.
-   You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.
Procedure
To create resources, perform only one of the following steps:
-   Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.

-   Create resources using the Red Hat OpenShift CLI: Run the following command to create the resources:

    $ oc create -f <init_bundle>.yaml \
      -n <stackrox>

-   Using the kubectl CLI, run the following commands to create the resources:

    $ kubectl create namespace stackrox
    $ kubectl create -f <init_bundle>.yaml \
      -n <stackrox>
5.3.2. Next steps Copy linkLink copied to clipboard!
- Install RHACS secured cluster services in all clusters that you want to monitor.
5.4. Installing Secured Cluster services for RHACS on other platforms Copy linkLink copied to clipboard!
You can install Red Hat Advanced Cluster Security for Kubernetes (RHACS) on your secured clusters for the following platforms:
- Amazon Elastic Kubernetes Service (Amazon EKS)
- Google Kubernetes Engine (GKE)
- Microsoft Azure Kubernetes Service (Microsoft AKS)
5.4.1. Installing RHACS on secured clusters by using Helm charts Copy linkLink copied to clipboard!
You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters.
5.4.1.1. Installing RHACS on secured clusters by using Helm charts without customizations Copy linkLink copied to clipboard!
5.4.1.1.1. Adding the Helm chart repository Copy linkLink copied to clipboard!
Procedure
Add the RHACS charts repository:

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

-   Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).

    Note: You deploy centralized components only once, and you can monitor multiple separate clusters by using the same installation.

-   Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).

    Note: Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification

Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/
5.4.1.1.2. Installing the secured-cluster-services Helm chart without customization Copy linkLink copied to clipboard!
Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
-   You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
-   You must have the address that you are exposing the Central service on.
5.4.1.2. Configuring the secured-cluster-services Helm chart with customizations Copy linkLink copied to clipboard!
This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
-   Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
-   Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart.
5.4.1.2.1. Configuration parameters Copy linkLink copied to clipboard!
| Parameter | Description |
|---|---|
|
| Name of your cluster. |
|
|
Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
|
| Address of the Sensor endpoint including port number. |
|
| Image pull policy for the Sensor container. |
|
| The internal service-to-service TLS certificate that Sensor uses. |
|
| The internal service-to-service TLS certificate key that Sensor uses. |
|
| The memory request for the Sensor container. Use this parameter to override the default value. |
|
| The CPU request for the Sensor container. Use this parameter to override the default value. |
|
| The memory limit for the Sensor container. Use this parameter to override the default value. |
|
| The CPU limit for the Sensor container. Use this parameter to override the default value. |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
|
|
The name of the |
|
| The name of the Collector image. |
|
| The address of the registry you are using for the main image. |
|
| The address of the registry you are using for the Collector image. |
|
| The address of the registry you are using for the Scanner image. |
|
| The address of the registry you are using for the Scanner DB image. |
|
| The address of the registry you are using for the Scanner V4 image. |
|
| The address of the registry you are using for the Scanner V4 DB image. |
|
|
Image pull policy for |
|
| Image pull policy for the Collector images. |
|
|
Tag of |
|
|
Tag of |
|
|
Either |
|
| Image pull policy for the Collector container. |
|
| Image pull policy for the Compliance container. |
|
|
If you specify |
|
| The memory request for the Collector container. Use this parameter to override the default value. |
|
| The CPU request for the Collector container. Use this parameter to override the default value. |
|
| The memory limit for the Collector container. Use this parameter to override the default value. |
|
| The CPU limit for the Collector container. Use this parameter to override the default value. |
|
| The memory request for the Compliance container. Use this parameter to override the default value. |
|
| The CPU request for the Compliance container. Use this parameter to override the default value. |
|
| The memory limit for the Compliance container. Use this parameter to override the default value. |
|
| The CPU limit for the Compliance container. Use this parameter to override the default value. |
|
| The internal service-to-service TLS certificate that Collector uses. |
|
| The internal service-to-service TLS certificate key that Collector uses. |
|
|
This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
|
When you set this parameter as |
|
|
This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
| This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
|
|
This setting controls the behavior of the admission control service. You must specify |
|
|
If you set this option to |
|
|
Set it to |
|
|
Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the |
|
| The memory request for the Admission Control container. Use this parameter to override the default value. |
|
| The CPU request for the Admission Control container. Use this parameter to override the default value. |
|
| The memory limit for the Admission Control container. Use this parameter to override the default value. |
|
| The CPU limit for the Admission Control container. Use this parameter to override the default value. |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
|
| The internal service-to-service TLS certificate that Admission Control uses. |
|
| The internal service-to-service TLS certificate key that Admission Control uses. |
|
|
Use this parameter to override the default |
|
|
If you specify |
|
|
Specify |
|
|
Specify |
|
|
Deprecated. Specify |
|
| Resource specification for Sensor. |
|
| Resource specification for Admission controller. |
|
| Resource specification for Collector. |
|
| Resource specification for Collector’s Compliance container. |
|
|
If you set this option to |
|
|
If you set this option to |
|
|
If you set this option to |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
|
| Resource specification for Collector’s Compliance container. |
|
| Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
|
|
If you set this option to |
|
| The minimum number of replicas for autoscaling. Defaults to 2. |
|
| The maximum number of replicas for autoscaling. Defaults to 5. |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
|
Specify a node selector label as |
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
|
| The memory request for the Scanner container. Use this parameter to override the default value. |
|
| The CPU request for the Scanner container. Use this parameter to override the default value. |
|
| The memory limit for the Scanner container. Use this parameter to override the default value. |
|
| The CPU limit for the Scanner container. Use this parameter to override the default value. |
|
| The memory request for the Scanner DB container. Use this parameter to override the default value. |
|
| The CPU request for the Scanner DB container. Use this parameter to override the default value. |
|
| The memory limit for the Scanner DB container. Use this parameter to override the default value. |
|
| The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
|
|
If you set this option to |
|
|
To provide security at the network level, RHACS creates default Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
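As an illustrative sketch, a minimal values-public.yaml for a secured cluster might set only a cluster name, the Central endpoint, and a Sensor resource override. The key names below follow the upstream secured-cluster-services chart and are assumptions here; verify them against the values.yaml shipped with your chart version.

```yaml
# Illustrative only: verify key names against your chart version.
clusterName: production-cluster              # name of your cluster
centralEndpoint: central.example.com:443     # address of the Central endpoint
sensor:
  resources:
    requests:
      memory: "2Gi"   # override the default Sensor memory request
      cpu: "1"
```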
5.4.1.2.1.1. Environment variables Copy linkLink copied to clipboard!
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
envVars:
ENV_VAR1: "value1"
ENV_VAR2: "value2"
The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical: metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
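For example, the hierarchy might look like the following sketch, where a generic environment variable is overridden for the Sensor deployment only. The sensor-scoped block is an assumption about the chart's customize schema; check your chart version for the exact supported scopes.

```yaml
customize:
  # Generic scope: applied to all workloads created by the chart
  envVars:
    LOG_LEVEL: "info"
  # Narrower scope (assumed key name): overrides the value for Sensor only
  sensor:
    envVars:
      LOG_LEVEL: "debug"
```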
5.4.1.2.2. Installing the secured-cluster-services Helm chart with customizations Copy linkLink copied to clipboard!
After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:
- Sensor
- Admission controller
- Collector
- Scanner: optional for secured clusters when the StackRox Scanner is installed
- Scanner DB: optional for secured clusters when the StackRox Scanner is installed
- Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
-   You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
-   You must have the address and the port number that you are exposing the Central service on.
Procedure
Run the following command:

    $ helm install -n stackrox --create-namespace \
      stackrox-secured-cluster-services rhacs/secured-cluster-services \
      -f <name_of_cluster_init_bundle.yaml> \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:

    $ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET")

If you are using base64 encoded variables, use the helm install … -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.
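The base64 variant works because the process substitution <(…) presents the decoded stream to helm as if it were a values file. A minimal sketch of the round trip, with an illustrative variable value:

```shell
# Encode a small values snippet, as a CI secret store might hold it.
INIT_BUNDLE_YAML_SECRET="$(printf 'clusterName: my-cluster\n' | base64)"

# helm would read the decoded stream as a values file:
#   helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)
echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode   # prints: clusterName: my-cluster
```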
5.4.1.3. Changing configuration options after deploying the secured-cluster-services Helm chart Copy linkLink copied to clipboard!
You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.
When using the helm upgrade command to make changes, the following guidelines and requirements apply:
-   You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
-   Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes:
    -   If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
    -   If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.
Procedure
1.  Update the values-public.yaml and values-private.yaml configuration files with new values.
2.  Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-secured-cluster-services rhacs/secured-cluster-services \
      --reuse-values \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

    If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter.
5.4.2. Installing RHACS on secured clusters by using the roxctl CLI Copy linkLink copied to clipboard!
To install RHACS on secured clusters by using the CLI, perform the following steps:
-   Install the roxctl CLI.
-   Install Sensor.
5.4.2.1. Installing the roxctl CLI Copy linkLink copied to clipboard!
You must first download the binary. You can install roxctl on Linux, Windows, or macOS.
5.4.2.1.1. Installing the roxctl CLI on Linux Copy linkLink copied to clipboard!
You can install the roxctl CLI binary on Linux by using the following procedure.
roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.
Procedure
- Determine the roxctl architecture for the target operating system:

  $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

- Download the roxctl CLI:

  $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Linux/roxctl${arch}"

- Make the roxctl binary executable:

  $ chmod +x roxctl

- Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

  $ echo $PATH

Verification
- Verify the roxctl version you have installed:

  $ roxctl version
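The architecture-detection one-liner in the procedure above maps the output of uname -m to the download-asset suffix: it erases x86_64 (whose asset has no suffix) and prefixes any other value with a hyphen. A minimal sketch of that mapping, simulated over sample machine types rather than read from the host:

```shell
# Simulate the suffix logic for sample "uname -m" values (not read from the host).
for m in x86_64 arm64 ppc64le s390x; do
  arch="$(printf '%s' "$m" | sed "s/x86_64//")"  # empty string for x86_64
  arch="${arch:+-$arch}"                         # add "-" prefix only if non-empty
  printf '%s -> roxctl%s\n' "$m" "$arch"
done
```

On an x86_64 machine the downloaded asset name is plain roxctl; on arm64, ppc64le, and s390x it gains the matching -<arch> suffix.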
5.4.2.1.2. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure
- Determine the roxctl architecture for the target operating system:

  $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

- Download the roxctl CLI:

  $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Darwin/roxctl${arch}"

- Remove all extended attributes from the binary:

  $ xattr -c roxctl

- Make the roxctl binary executable:

  $ chmod +x roxctl

- Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

  $ echo $PATH

Verification
- Verify the roxctl version you have installed:

  $ roxctl version
5.4.2.1.3. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
roxctl CLI for Windows is available for the amd64 architecture.
Procedure
- Download the roxctl CLI:

  $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.10/bin/Windows/roxctl.exe

Verification
- Verify the roxctl version you have installed:

  $ roxctl version
5.4.2.2. Installing Sensor
To monitor a cluster, you must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.
To perform an installation by using the manifest installation method, follow only one of the following procedures:
- Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
- Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.
Prerequisites
- You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
5.4.2.2.1. Manifest installation method by using the web portal
Procedure
- On your secured cluster, in the RHACS portal, go to Platform Configuration → Clusters.
- Select Secure a cluster → Legacy installation method.
- Specify a name for the cluster.
- Provide appropriate values for the fields based on where you are deploying the Sensor.
  - If you are deploying Sensor in the same cluster, accept the default values for all the fields.
  - If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster.
  - If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure (wss) protocol. To use wss:
    - Prefix the address with wss://.
    - Add the port number after the address, for example, wss://stackrox-central.example.com:443.
- Click Next to continue with the Sensor setup.
- Click Download YAML File and Keys to download the cluster bundle (zip archive).

  Important: The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.

- From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

  $ unzip -d sensor sensor-<cluster_name>.zip

  $ ./sensor/sensor.sh

  If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
5.4.2.2.2. Manifest installation by using the roxctl CLI
Procedure
- Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:

  $ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT"

  For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.

- From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

  $ unzip -d sensor sensor-<cluster_name>.zip

  $ ./sensor/sensor.sh

  If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
Verification
- Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:

  On Kubernetes, enter the following command:

  $ kubectl get pod -n stackrox -w
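When scanning that kubectl get pod output for problems, it can help to filter out healthy pods and show only the rest. A small sketch, run here against hard-coded sample output (the pod names and states are hypothetical, not taken from a live cluster):

```shell
# Hypothetical sample of "kubectl get pod -n stackrox" output.
sample='NAME                      READY   STATUS             RESTARTS   AGE
sensor-7c6f8d9b4-abcde    1/1     Running            0          2m
collector-xyz12           0/1     CrashLoopBackOff   4          2m
admission-control-55w9q   1/1     Running            0          2m'

# Print only pods whose STATUS is neither Running nor Completed.
unhealthy="$(printf '%s\n' "$sample" | awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1, $3}')"
echo "$unhealthy"
```

Against a real cluster you would pipe kubectl get pod -n stackrox into the same awk filter instead of the sample text.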
- Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
5.5. Verifying installation of RHACS on other platforms
This section provides steps to verify that RHACS is properly installed.
5.5.1. Verifying installation
After you complete the installation, run a few vulnerable applications and go to the RHACS portal to evaluate the results of security assessments and policy violations.
The sample applications listed in the following section contain critical vulnerabilities and are specifically designed to verify the build-time and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes.
To verify installation:
- Find the address of the RHACS portal based on your exposure method:

  - For a load balancer:

    $ kubectl get service central-loadbalancer -n stackrox

  - For port forward:

    - Run the following command:

      $ kubectl port-forward svc/central 18443:443 -n stackrox

    - Go to https://localhost:18443/.

- Create a new namespace:

  $ kubectl create namespace test

- Start some applications with critical vulnerabilities:

  $ kubectl run shell --labels=app=shellshock,team=test-team \
      --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test
  $ kubectl run samba --labels=app=rce \
      --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test
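The two kubectl run commands above differ only in the pod name, labels, and image, so deploying several samples can be scripted as a loop. In this sketch kubectl is stubbed with echo for a dry run that only prints the commands; remove the stub to deploy for real:

```shell
# Stub kubectl with echo so the loop prints the commands instead of running them.
kubectl() { echo "kubectl $*"; }   # delete this line to deploy for real

# name|labels|image triples taken from the step above.
samples='shell|app=shellshock,team=test-team|quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271
samba|app=rce|quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494'

printf '%s\n' "$samples" | while IFS='|' read -r name labels image; do
  kubectl run "$name" --labels="$labels" --image="$image" -n test
done
```

Keeping the samples in one list makes it easy to add further test images later without repeating the full command each time.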
Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risks and policy violations as soon as they are submitted to the cluster. Go to the RHACS portal to view the violations. You can log in to the RHACS portal by using the default username admin and the generated password.