Chapter 4. Installing RHACS on Red Hat OpenShift
4.1. Installing Central services for RHACS on Red Hat OpenShift
Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters.
You can install Central on your OpenShift Container Platform or Kubernetes cluster by using one of the following methods:
- Install using the Operator
- Install using Helm charts
- Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it)
4.1.1. Install Central using the Operator
4.1.1.1. Installing the Red Hat Advanced Cluster Security for Kubernetes Operator
Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install Red Hat Advanced Cluster Security for Kubernetes.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You must be using OpenShift Container Platform 4.11 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
Procedure
- In the web console, go to the Operators → OperatorHub page.
- If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page.
- Read the information about the Operator, and then click Install.
On the Install Operator page:
- Keep the default value for Installation mode as All namespaces on the cluster.
- For the Installed namespace field, choose a specific namespace in which to install the Operator. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace.
Select automatic or manual updates for Update approval.
If you choose automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator.
If you choose manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version.
Important: If you choose manual updates, you must update the RHACS Operator in all secured clusters when you update the RHACS Operator in the cluster where Central is installed. The secured clusters and the cluster where Central is installed must have the same version to ensure optimal functionality.
- Click Install.
Verification
- After the installation completes, go to Operators → Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded.
Next Step
- You installed the Operator into the rhacs-operator project. Using that Operator, install, configure, and deploy the Central custom resource into the stackrox project.
4.1.1.2. Installing Central using the Operator method
The main component of Red Hat Advanced Cluster Security for Kubernetes is called Central. You can install Central on OpenShift Container Platform by using the Central custom resource. You deploy Central only once, and you can monitor multiple separate clusters by using the same Central installation.
- When you install Red Hat Advanced Cluster Security for Kubernetes for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation depends on certificates that Central generates.
- Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Central custom resource in a dedicated project. Do not install it in the project where you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Additionally, do not install it in any projects with names that begin with kube, openshift, or redhat, or in the istio-system project.
Prerequisites
- You must be using OpenShift Container Platform 4.11 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
Procedure
- On the OpenShift Container Platform web console, go to the Operators → Installed Operators page.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators.
- If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator. Select Project: rhacs-operator → Create project.
Note: If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator.
- Enter the new project name (for example, stackrox), and click Create. Red Hat recommends that you use stackrox as the project name.
- Under the Provided APIs section, select Central. Click Create Central.
Optional: If you are using declarative configuration, next to Configure via:, click YAML view and add the information for the declarative configuration, as shown in the following example:
...
spec:
  central:
    declarativeConfiguration:
      configMaps:
      - name: "<declarative-configs>"
      secrets:
      - name: "<sensitive-declarative-configs>"
...
- Enter a name for your Central custom resource and add any labels you want to apply. Otherwise, accept the default values for the available options. You can configure available options for Central:
Central component settings:

Setting | Description
---|---
Administrator password | Secret that contains the administrator password. Use this field if you do not want RHACS to generate a password for you.
Exposure | Settings for exposing Central by using a route, load balancer, or node port. See the central.exposure.<parameter> information in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
User-facing TLS certificate secret | Use this field if you want to terminate TLS in Central and serve a custom server certificate.
Monitoring | Configures the monitoring endpoint for Central. See the central.exposeMonitoring parameter in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
Persistence | These fields configure how Central should store its persistent data. Use a persistent volume claim (PVC) for best results, especially if you are using Scanner V4. See the central.persistence.<parameter> information in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
Central DB Settings | Settings for Central DB, including data persistence. See the central.db.<parameter> information in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
Resources | Use these fields after consulting the documentation if you need to override the default settings for memory and CPU resources. For more information, see the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter.
Tolerations | Use this parameter to configure Central to run only on specific nodes. See the central.tolerations parameter in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
Host Aliases | Use this parameter to configure additional hostnames to resolve in the pod's hosts file.
- Scanner Component Settings: Settings for the default scanner, also called the StackRox Scanner. See the "Scanner" table in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
Scanner V4 Component Settings: Settings for the optional Scanner V4, available in version 4.4 and later. It is not currently enabled by default. You can enable both the StackRox Scanner and Scanner V4 for concurrent use. See the "Scanner V4" table in the "Public configuration file" section in "Installing Central services for RHACS on Red Hat OpenShift".
When Scanner V4 is enabled, you can configure the following options:
Setting | Description
---|---
Indexer | The process that indexes images and creates a report of findings. You can configure replicas and autoscaling, resources, and tolerations. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter.
Matcher | The process that performs vulnerability matching of the report from the indexer against vulnerability data stored in Scanner V4 DB. You can configure replicas and autoscaling, resources, and tolerations. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter.
DB | The database that stores information for Scanner V4, including vulnerability data and index reports. You can configure persistence, resources, and tolerations. If you are using Scanner V4, a persistent volume claim (PVC) is required on Central clusters. A PVC is strongly recommended on secured clusters for best results. Before changing the default resource values, see the "Scanner V4" sections in the "Default resource requirements for RHACS" and "Recommended resource requirements for RHACS" sections in the "Installation" chapter.
- Egress: Settings for outgoing network traffic, including whether RHACS should run in online (connected) or offline (disconnected) mode.
- TLS: Use this field to add additional trusted root certificate authorities (CAs).
- Network: To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where Central is installed. To create and manage your own network policies, in the policies section, select Disabled. By default, this option is Enabled.
Warning: Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.
Advanced configuration: You can use these fields to perform the following actions:
- Specify additional image pull secrets
- Add custom environment variables to set for managed pods' containers
- Enable Red Hat OpenShift monitoring
- Click Create.
If you are using the cluster-wide proxy, Red Hat Advanced Cluster Security for Kubernetes uses that proxy configuration to connect to the external services.
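The form-based steps above assemble a Central custom resource behind the scenes. The following is a minimal sketch of such a resource, assuming the platform.stackrox.io/v1alpha1 API served by the Operator; the metadata name and the optional settings shown are illustrative choices, not required values:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services   # illustrative name
  namespace: stackrox               # recommended dedicated project
spec:
  central:
    exposure:
      route:
        enabled: true               # expose Central through an OpenShift route
  scannerV4:
    scannerComponent: Enabled       # optional: enable Scanner V4 alongside the StackRox Scanner
  network:
    policies: Enabled               # keep the default NetworkPolicy resources (the default)
```

You can create an equivalent resource from the terminal with oc apply -f instead of using the form view.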
Next Steps
- Verify Central installation.
- Optional: Configure Central options.
- Generate an init bundle containing the cluster secrets that allow communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it.
- Install secured cluster services on each cluster you want to monitor.
4.1.1.3. Provisioning a database in your PostgreSQL instance
This step is optional. You can use your existing PostgreSQL infrastructure to provision a database for RHACS. Use the instructions in this section to configure a PostgreSQL database environment; create a user, database, schema, and role; and grant the required permissions.
Procedure
Create a new user:
CREATE USER stackrox WITH PASSWORD '<password>';
Create a database:
CREATE DATABASE stackrox;
Connect to the database:
\connect stackrox
Create user schema:
CREATE SCHEMA stackrox;
(Optional) Revoke rights on public:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE USAGE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE stackrox FROM PUBLIC;
Create a role:
CREATE ROLE readwrite;
Grant connection permission to the role:
GRANT CONNECT ON DATABASE stackrox TO readwrite;
Add required permissions to the readwrite role:
GRANT USAGE ON SCHEMA stackrox TO readwrite;
GRANT USAGE, CREATE ON SCHEMA stackrox TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA stackrox TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA stackrox TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA stackrox GRANT USAGE ON SEQUENCES TO readwrite;
Assign the readwrite role to the stackrox user:
GRANT readwrite TO stackrox;
4.1.1.4. Installing Central with an external database using the Operator method
The main component of Red Hat Advanced Cluster Security for Kubernetes is called Central. You can install Central on OpenShift Container Platform by using the Central custom resource. You deploy Central only once, and you can monitor multiple separate clusters by using the same Central installation.
When you install Red Hat Advanced Cluster Security for Kubernetes for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation depends on certificates that Central generates.
For more information about RHACS databases, see the Database Scope of Coverage.
Prerequisites
- You must be using OpenShift Container Platform 4.11 or later. For more information about supported OpenShift Container Platform versions, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix.
- You must have a database in your database instance that supports PostgreSQL 13 and a user with the following permissions:
- Connection rights to the database.
- Usage and Create on the schema.
- Select, Insert, Update, and Delete on all tables in the schema.
- Usage on all sequences in the schema.
Procedure
- On the OpenShift Container Platform web console, go to the Operators → Installed Operators page.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators.
- If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator. Select Project: rhacs-operator → Create project.
Warning: If you have installed the Operator in a different namespace, OpenShift Container Platform shows the name of that namespace rather than rhacs-operator. Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Central custom resource in a dedicated project. Do not install it in the project where you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Additionally, do not install it in any projects with names that begin with kube, openshift, or redhat, or in the istio-system project.
- Enter the new project name (for example, stackrox), and click Create. Red Hat recommends that you use stackrox as the project name.
- Create a password secret in the deployed namespace by using the OpenShift Container Platform web console or the terminal.
- On the OpenShift Container Platform web console, go to the Workloads → Secrets page. Create a Key/Value secret with the key password and the value as the path of a plain text file containing the password for the superuser of the provisioned database. Or, run the following command in your terminal:
$ oc create secret generic external-db-password \
  --from-file=password=<password.txt>
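If you prefer a declarative workflow, the same secret can be expressed as a standard Kubernetes Secret manifest and created with oc apply -f; this is a sketch, and the password value is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: external-db-password   # referenced later from Central DB Settings
  namespace: stackrox
type: Opaque
stringData:
  password: <database-superuser-password>   # placeholder; stringData is base64-encoded for you
```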
- Return to the Red Hat Advanced Cluster Security for Kubernetes operator page in the OpenShift Container Platform web console. Under the Provided APIs section, select Central. Click Create Central.
- Optional: If you are using declarative configuration, next to Configure via:, click YAML view. Add the information for the declarative configuration, as shown in the following example:
...
spec:
  central:
    declarativeConfiguration:
      configMaps:
      - name: <declarative-configs>
      secrets:
      - name: <sensitive-declarative-configs>
...
- Enter a name for your Central custom resource and add any labels you want to apply.
- Go to Central Component Settings → Central DB Settings.
- For Administrator Password, specify the referenced secret as external-db-password (or the secret name of the password created previously).
- For Connection String, specify the connection string in keyword=value format, for example, host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca.
- For Persistence → PersistentVolumeClaim → Claim Name, remove central-db. If necessary, you can specify a Certificate Authority so that there is trust between the database certificate and Central. To add this, go to the YAML view and add a TLS block under the top-level spec, as shown in the following example:
spec:
  tls:
    additionalCAs:
    - name: db-ca
      content: |
        <certificate>
- Click Create.
If you are using the cluster-wide proxy, Red Hat Advanced Cluster Security for Kubernetes uses that proxy configuration to connect to the external services.
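Taken together, the steps above correspond to a Central custom resource along the following lines. This is a hedged sketch: the host in the connection string is a placeholder, and the secret name matches the one created earlier in this procedure:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    db:
      isExternal: true                      # do not deploy the bundled Central DB
      connectionString: "host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca"
      passwordSecret:
        name: external-db-password          # secret created earlier in this procedure
  tls:
    additionalCAs:
    - name: db-ca
      content: |
        <certificate>
```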
Next Steps
- Verify Central installation.
- Optional: Configure Central options.
- Generate an init bundle containing the cluster secrets that allow communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it.
- Install secured cluster services on each cluster you want to monitor.
4.1.1.5. Verifying Central installation using the Operator method
After Central finishes installing, log in to the RHACS portal to verify the successful installation of Central.
Procedure
- On the OpenShift Container Platform web console, go to the Operators → Installed Operators page.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators.
- Select the Central tab.
- From the Centrals list, select stackrox-central-services to view its details.
- To get the password for the admin user, you can either:
  - Click the link under Admin Password Secret Reference.
  - Use the Red Hat OpenShift CLI to enter the command listed under Admin Credentials Info:
$ oc -n stackrox get secret central-htpasswd -o go-template='{{index .data "password" | base64decode}}'
Find the link to the RHACS portal by using the Red Hat OpenShift CLI command:
$ oc -n stackrox get route central -o jsonpath="{.status.ingress[0].host}"
Alternatively, you can use the Red Hat Advanced Cluster Security for Kubernetes web console to find the link to the RHACS portal by performing the following steps:
- Go to Networking → Routes.
- Find the central route and click the RHACS portal link under the Location column.
- Log in to the RHACS portal using the username admin and the password that you retrieved in a previous step. Until RHACS is completely configured (for example, you have the Central resource and at least one SecuredCluster resource installed and configured), no data is available in the dashboard. The SecuredCluster resource can be installed and configured on the same cluster as the Central resource. Clusters with the SecuredCluster resource are similar to managed clusters in Red Hat Advanced Cluster Management (RHACM).
Next Steps
- Optional: Configure central settings.
- Generate an init bundle containing the cluster secrets that allow communication between the Central and SecuredCluster resources. You need to download this bundle, use it to generate resources on the clusters you want to secure, and securely store it.
- Install secured cluster services on each cluster you want to monitor.
4.1.2. Install Central using Helm charts
You can install Central using Helm charts without any customization, using the default values, or by using Helm charts with additional customizations of configuration parameters.
4.1.2.1. Install Central using Helm charts without customization
You can install RHACS on your cluster without any customizations. You must add the Helm chart repository and install the central-services Helm chart to install the centralized components of Central and Scanner.
4.1.2.1.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
- Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).
Note: You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.
- Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
Note: Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
4.1.2.1.2. Installing the central-services Helm chart without customizations
Use the following instructions to install the central-services Helm chart to deploy the centralized components (Central and Scanner).
Prerequisites
- You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
Procedure
Run the following command to install Central services and expose Central using a route:
$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password> \
  --set central.exposure.route.enabled=true
Or, run the following command to install Central services and expose Central using a load balancer:
$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password> \
  --set central.exposure.loadBalancer.enabled=true
Or, run the following command to install Central services and expose Central using port forwarding:
$ helm install -n stackrox \
  --create-namespace stackrox-central-services rhacs/central-services \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password>
If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:
env:
  proxyConfig: |
    url: http://proxy.name:port
    username: username
    password: password
    excludes:
    - some.domain
- If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>".
- Do not use image pull secrets in the following cases:
  - If you are pulling your images from quay.io/stackrox-io or a registry in a private network that does not require authentication. Use --set imagePullSecrets.allowNone=true instead of specifying a username and password.
  - If you already configured image pull secrets in the default service account in the namespace you are installing. Use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password.
The output of the installation command includes:
- An automatically generated administrator password.
- Instructions on storing all the configuration values.
- Any warnings that Helm generates.
4.1.2.2. Install Central using Helm charts with customizations
You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
- Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
- Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
- Configuration file declarative-config-values.yaml: Create this file if you are using declarative configuration to add the declarative configuration mounts to Central.
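To illustrate how the files divide the configuration, the following is a small values-public.yaml sketch; the values shown are placeholders, and the individual parameters are documented in the sections that follow:

```yaml
# values-public.yaml — non-sensitive options only
env:
  openshift: true         # override automatic environment detection only if needed
central:
  exposure:
    route:
      enabled: true       # expose Central through an OpenShift route
```

You would then pass the files to Helm, for example with helm install -f values-public.yaml -f values-private.yaml.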
4.1.2.2.1. Private configuration file
This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters.
4.1.2.2.1.1. Image pull secrets
The credentials that are required for pulling images from the registry depend on the following factors:
- If you are using a custom registry, you must specify these parameters:
  - imagePullSecrets.username
  - imagePullSecrets.password
  - image.registry
- If you do not use a username and password to log in to the custom registry, you must specify one of the following parameters:
  - imagePullSecrets.allowNone
  - imagePullSecrets.useExisting
  - imagePullSecrets.useFromDefaultServiceAccount
Parameter | Description
---|---
imagePullSecrets.username | The username of the account that is used to log in to the registry.
imagePullSecrets.password | The password of the account that is used to log in to the registry.
imagePullSecrets.allowNone | Use true if you are using a custom registry that does not require credentials to pull images.
imagePullSecrets.useExisting | A comma-separated list of secrets as values. For example, secret1, secret2. Use this option if you have already created image pull secrets with the given names in the target namespace.
imagePullSecrets.useFromDefaultServiceAccount | Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets.
4.1.2.2.1.2. Proxy configuration
If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:
env:
  proxyConfig: |
    url: http://proxy.name:port
    username: username
    password: password
    excludes:
    - some.domain
Parameter | Description
---|---
env.proxyConfig | Your proxy configuration.
4.1.2.2.1.3. Central
Configurable parameters for Central.
For a new installation, you can skip the following parameters:
- central.jwtSigner.key
- central.serviceTLS.cert
- central.serviceTLS.key
- central.adminPassword.value
- central.adminPassword.htpasswd
- central.db.serviceTLS.cert
- central.db.serviceTLS.key
- central.db.password.value
- When you do not specify values for these parameters, the Helm chart autogenerates values for them.
- If you want to modify these values, you can use the helm upgrade command and specify the values using the --set option.
For setting the administrator password, you can only use either central.adminPassword.value or central.adminPassword.htpasswd, but not both.
Parameter | Description
---|---
central.jwtSigner.key | A private key which RHACS should use for signing JSON web tokens (JWTs) for authentication.
central.serviceTLS.cert | An internal certificate that the Central service should use for deploying Central.
central.serviceTLS.key | The private key of the internal certificate that the Central service should use.
central.defaultTLS.cert | The user-facing certificate that Central should use. RHACS uses this certificate for the RHACS portal.
central.defaultTLS.key | The private key of the user-facing certificate that Central should use.
central.adminPassword.value | Administrator password for logging in to RHACS.
central.adminPassword.htpasswd | Administrator password for logging in to RHACS. This password is stored in hashed format using bcrypt.
central.db.serviceTLS.cert | An internal certificate that the Central DB service should use for deploying Central DB.
central.db.serviceTLS.key | The private key of the internal certificate that the Central DB service should use.
central.db.password.value | The password used to connect to the Central DB.
If you are using the central.adminPassword.htpasswd parameter, you must use a bcrypt encoded password hash. You can run the command htpasswd -nB admin to generate a password hash. For example:
htpasswd: |
  admin:<bcrypt-hash>
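In the context of the values-private.yaml file, this fragment sits under central.adminPassword; a sketch, where the bcrypt hash is the output of htpasswd -nB admin:

```yaml
# values-private.yaml (sketch)
central:
  adminPassword:
    htpasswd: |
      admin:<bcrypt-hash>
```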
4.1.2.2.1.4. Scanner
Configurable parameters for the StackRox Scanner and Scanner V4.
For a new installation, you can skip the following parameters and the Helm chart autogenerates values for them. Otherwise, if you are upgrading to a new version, specify the values for the following parameters:
- scanner.dbPassword.value
- scanner.serviceTLS.cert
- scanner.serviceTLS.key
- scanner.dbServiceTLS.cert
- scanner.dbServiceTLS.key
- scannerV4.db.password.value
- scannerV4.indexer.serviceTLS.cert
- scannerV4.indexer.serviceTLS.key
- scannerV4.matcher.serviceTLS.cert
- scannerV4.matcher.serviceTLS.key
- scannerV4.db.serviceTLS.cert
- scannerV4.db.serviceTLS.key
Parameter | Description
---|---
scanner.dbPassword.value | The password to use for authentication with the Scanner database. Do not modify this parameter because RHACS automatically creates and uses its value internally.
scanner.serviceTLS.cert | An internal certificate that the StackRox Scanner service should use for deploying the StackRox Scanner.
scanner.serviceTLS.key | The private key of the internal certificate that the Scanner service should use.
scanner.dbServiceTLS.cert | An internal certificate that the Scanner-db service should use for deploying the Scanner database.
scanner.dbServiceTLS.key | The private key of the internal certificate that the Scanner-db service should use.
scannerV4.db.password.value | The password to use for authentication with the Scanner V4 database. Do not modify this parameter because RHACS automatically creates and uses its value internally.
scannerV4.db.serviceTLS.cert | An internal certificate that the Scanner V4 DB service should use for deploying the Scanner V4 database.
scannerV4.db.serviceTLS.key | The private key of the internal certificate that the Scanner V4 DB service should use.
scannerV4.indexer.serviceTLS.cert | An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Indexer.
scannerV4.indexer.serviceTLS.key | The private key of the internal certificate that the Scanner V4 Indexer should use.
scannerV4.matcher.serviceTLS.cert | An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Matcher.
scannerV4.matcher.serviceTLS.key | The private key of the internal certificate that the Scanner V4 Matcher should use.
4.1.2.2.2. Public configuration file
This section lists the configurable parameters of the values-public.yaml file.
4.1.2.2.2.1. Image pull secrets
Image pull secrets are the credentials required for pulling images from your registry.
Parameter | Description
---|---
imagePullSecrets.allowNone | Use true if you are using a custom registry that does not require credentials to pull images.
imagePullSecrets.useExisting | A comma-separated list of secrets as values. For example, secret1, secret2. Use this option if you have already created image pull secrets with the given names in the target namespace.
imagePullSecrets.useFromDefaultServiceAccount | Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets.
4.1.2.2.2.2. Image
Image declares the configuration to set up the main registry, which the Helm chart uses to resolve images for the central.image, scanner.image, scanner.dbImage, scannerV4.image, and scannerV4.db.image parameters.
Parameter | Description
---|---
image.registry | Address of your image registry. Either use a hostname, such as registry.redhat.io, or an IP address.
4.1.2.2.2.3. Environment variables
Red Hat Advanced Cluster Security for Kubernetes automatically detects your cluster environment and sets values for env.openshift, env.istio, and env.platform. Only set these values to override the automatic cluster environment detection.
Parameter | Description
---|---
env.openshift | Use true for installing on an OpenShift Container Platform cluster and overriding automatic cluster environment detection.
env.istio | Use true for installing on an Istio-enabled cluster and overriding automatic cluster environment detection.
env.platform | The platform on which you are installing RHACS. Set its value to default or gke to specify the cluster platform.
env.offlineMode | Use true to use RHACS in offline mode.
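For example, to pin the environment instead of relying on auto-detection, a values-public.yaml fragment might look like the following sketch; set these only when you need to override detection:

```yaml
env:
  openshift: true     # treat the cluster as OpenShift Container Platform
  istio: false        # cluster is not Istio-enabled
  offlineMode: false  # run in connected (online) mode
```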
4.1.2.2.2.4. Additional trusted certificate authorities
RHACS automatically references the system root certificates to trust. When Central, the StackRox Scanner, or Scanner V4 must reach out to services that use certificates issued by an authority in your organization or a globally trusted partner organization, you can add trust for these services by specifying the root certificate authority to trust by using the following parameter:
Parameter | Description
---|---
additionalCAs.<certificate_name> | Specify the PEM encoded certificate of the root certificate authority to trust.
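As an illustration, a values-public.yaml fragment that adds one custom root CA might look like the following sketch; the key is a certificate file name of your choosing, and the PEM body is a placeholder:

```yaml
additionalCAs:
  corp-root-ca.crt: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded certificate body>
    -----END CERTIFICATE-----
```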
4.1.2.2.2.5. Default network policies
To provide security at the network level, RHACS creates default NetworkPolicy
resources in the namespace where Central is installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to Disabled
. The default value is Enabled
.
Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.
Parameter | Description |
---|---|
|
Specify if RHACS creates default network policies to allow communication between components. To create your own network policies, set this parameter to |
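If you disable creation of the default policies, you must provide replacement policies yourself. The following sketch shows what such a policy might look like; the namespace, pod selector label, and port are assumptions for illustration, so match them to your actual deployment:

```yaml
# Hypothetical replacement policy (namespace, selector, and port are assumed):
# allow ingress to the Central pod on its HTTPS port after default
# network policy creation has been disabled.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: custom-allow-central-ingress
  namespace: stackrox
spec:
  podSelector:
    matchLabels:
      app: central
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8443
```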
4.1.2.2.2.6. Central
Configurable parameters for Central.
-
You must specify a persistent storage option as either
hostPath
orpersistentVolumeClaim
. -
To expose the Central deployment for external access, you must specify one parameter, either central.exposure.loadBalancer, central.exposure.nodePort, or central.exposure.route. When you do not specify any value for these parameters, you must manually expose Central or access it by using port-forwarding.
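As an illustration of the exposure and persistence options above, a values-public.yaml fragment that exposes Central through an OpenShift route and uses a PVC might look like the following. The exact key layout is an assumption, so verify it against the chart's default values:

```yaml
central:
  exposure:
    route:
      enabled: true
  persistence:
    persistentVolumeClaim:
      claimName: stackrox-db
```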
The following table includes settings for an external PostgreSQL database.
Parameter | Description |
---|---|
| Mounts config maps used for declarative configurations. |
| Mounts secrets used for declarative configurations. |
| The endpoint configuration options for Central. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. |
|
Specify |
|
A custom registry that overrides the global |
|
The custom image name that overrides the default Central image name ( |
|
The custom image tag that overrides the default tag for the Central image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the |
|
Full reference including registry address, image name, and image tag for the Central image. Setting a value for this parameter overrides the |
| The memory request for Central. |
| The CPU request for Central. |
| The memory limit for Central. |
| The CPU limit for Central. |
| The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. |
| The name of the persistent volume claim (PVC) you are using. |
|
Use |
| The size (in GiB) of the persistent volume managed by the specified claim. |
|
Use |
| The port number on which to expose Central. The default port number is 443. |
|
Use |
| The port number on which to expose Central. When you skip this parameter, OpenShift Container Platform automatically assigns a port number. Red Hat recommends that you do not specify a port number if you are exposing RHACS by using a node port. |
|
Use |
|
Use |
|
The connection string for Central to use to connect to the database. This is only used when
|
| The minimum number of connections to the database to be established. |
| The maximum number of connections to the database to be established. |
| The number of milliseconds a single query or transaction can be active against the database. |
| The postgresql.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". |
| The pg_hba.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources". |
|
Specify a node selector label as |
|
A custom registry that overrides the global |
|
The custom image name that overrides the default Central DB image name ( |
|
The custom image tag that overrides the default tag for the Central DB image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the |
|
Full reference including registry address, image name, and image tag for the Central DB image. Setting a value for this parameter overrides the |
| The memory request for Central DB. |
| The CPU request for Central DB. |
| The memory limit for Central DB. |
| The CPU limit for Central DB. |
| The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option. |
| The name of the persistent volume claim (PVC) you are using. |
|
Use |
| The size (in GiB) of the persistent volume managed by the specified claim. |
4.1.2.2.2.7. StackRox Scanner
The following table lists the configurable parameters for the StackRox Scanner. This is the scanner used for node and platform scanning. If Scanner V4 is not enabled, the StackRox Scanner also performs image scanning. Beginning with version 4.4, Scanner V4 can be enabled to provide image scanning. See the next table for Scanner V4 parameters.
Parameter | Description |
---|---|
|
Use |
|
Specify |
|
The number of replicas to create for the StackRox Scanner deployment. When you use it with the |
|
Configure the log level for the StackRox Scanner. Red Hat recommends that you not change the default log level value ( |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. |
|
Use |
| The minimum number of replicas for autoscaling. |
| The maximum number of replicas for autoscaling. |
| The memory request for the StackRox Scanner. |
| The CPU request for the StackRox Scanner. |
| The memory limit for the StackRox Scanner. |
| The CPU limit for the StackRox Scanner. |
| The memory request for the StackRox Scanner database deployment. |
| The CPU request for the StackRox Scanner database deployment. |
| The memory limit for the StackRox Scanner database deployment. |
| The CPU limit for the StackRox Scanner database deployment. |
| A custom registry for the StackRox Scanner image. |
|
The custom image name that overrides the default StackRox Scanner image name ( |
| A custom registry for the StackRox Scanner DB image. |
|
The custom image name that overrides the default StackRox Scanner DB image name ( |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. |
4.1.2.2.2.8. Scanner V4
The following table lists the configurable parameters for Scanner V4.
Parameter | Description |
---|---|
|
The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is |
|
Use |
|
Specify |
|
The number of replicas to create for the Scanner V4 Indexer deployment. When you use it with the |
|
Configure the log level for the Scanner V4 Indexer. Red Hat recommends that you not change the default log level value ( |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. |
|
Use |
| The minimum number of replicas for autoscaling. |
| The maximum number of replicas for autoscaling. |
| The memory request for the Scanner V4 Indexer. |
| The CPU request for the Scanner V4 Indexer. |
| The memory limit for the Scanner V4 Indexer. |
| The CPU limit for the Scanner V4 Indexer. |
|
The number of replicas to create for the Scanner V4 Matcher deployment. When you use it with the |
|
Red Hat recommends that you not change the default log level value ( |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes. |
|
Use |
| The minimum number of replicas for autoscaling. |
| The maximum number of replicas for autoscaling. |
| The memory request for the Scanner V4 Matcher. |
| The CPU request for the Scanner V4 Matcher. |
| The memory request for the Scanner V4 database deployment. |
| The CPU request for the Scanner V4 database deployment. |
| The memory limit for the Scanner V4 database deployment. |
| The CPU limit for the Scanner V4 database deployment. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 DB. This parameter is mainly used for infrastructure nodes. |
| A custom registry for the Scanner V4 DB image. |
|
The custom image name that overrides the default Scanner V4 DB image name ( |
| A custom registry for the Scanner V4 image. |
|
The custom image name that overrides the default Scanner V4 image name ( |
4.1.2.2.2.9. Customization
Use these parameters to specify additional attributes for all objects that RHACS creates.
Parameter | Description |
---|---|
| A custom label to attach to all objects. |
| A custom annotation to attach to all objects. |
| A custom label to attach to all deployments. |
| A custom annotation to attach to all deployments. |
| A custom environment variable for all containers in all objects. |
| A custom label to attach to all objects that Central creates. |
| A custom annotation to attach to all objects that Central creates. |
| A custom label to attach to all Central deployments. |
| A custom annotation to attach to all Central deployments. |
| A custom environment variable for all Central containers. |
| A custom label to attach to all objects that Scanner creates. |
| A custom annotation to attach to all objects that Scanner creates. |
| A custom label to attach to all Scanner deployments. |
| A custom annotation to attach to all Scanner deployments. |
| A custom environment variable for all Scanner containers. |
| A custom label to attach to all objects that Scanner DB creates. |
| A custom annotation to attach to all objects that Scanner DB creates. |
| A custom label to attach to all Scanner DB deployments. |
| A custom annotation to attach to all Scanner DB deployments. |
| A custom environment variable for all Scanner DB containers. |
| A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
| A custom label to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 Indexer creates and into the pods belonging to them. |
| A custom environment variable for all Scanner V4 Indexer containers and the pods belonging to them. |
| A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
| A custom label to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 Matcher creates and into the pods belonging to them. |
| A custom environment variable for all Scanner V4 Matcher containers and the pods belonging to them. |
| A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
| A custom label to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
| A custom annotation to attach to all objects that Scanner V4 DB creates and into the pods belonging to them. |
| A custom environment variable for all Scanner V4 DB containers and the pods belonging to them. |
You can also use:
-
the customize.other.service/*.labels and customize.other.service/*.annotations parameters to specify labels and annotations for all service objects, or
-
a specific service name, for example, customize.other.service/central-loadbalancer.labels and customize.other.service/central-loadbalancer.annotations, to set labels and annotations for that service only.
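For illustration, a values-public.yaml fragment combining several of the customization parameters above might look like the following. The label, annotation, and environment variable values are placeholders, and the key layout is an assumption to confirm against the chart's default values:

```yaml
customize:
  # Attached to all objects that the chart creates.
  labels:
    owner: security-team
  annotations:
    example.com/purpose: rhacs
  # Injected into all containers in all objects.
  envVars:
    ENV_NAME: env-value
  # Applied only to the central-loadbalancer service.
  other:
    service/central-loadbalancer:
      labels:
        exposed: "true"
```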
4.1.2.2.2.10. Advanced customization
The parameters specified in this section are for information only. Red Hat does not support RHACS instances with modified namespace and release names.
Parameter | Description |
---|---|
|
Use |
|
Use |
4.1.2.2.3. Declarative configuration values
To use declarative configuration, you must create a YAML file (in this example, named "declarative-config-values.yaml") that adds the declarative configuration mounts to Central. This file is used in a Helm installation.
Procedure
Create the YAML file (in this example, named
declarative-config-values.yaml
) using the following example as a guideline:central: declarativeConfiguration: mounts: configMaps: - declarative-configs secrets: - sensitive-declarative-configs
-
Install the central-services Helm chart as described in "Installing the central-services Helm chart", referencing the
declarative-config-values.yaml
file.
4.1.2.2.4. Installing the central-services Helm chart
After you configure the values-public.yaml
and values-private.yaml
files, install the central-services
Helm chart to deploy the centralized components (Central and Scanner).
Procedure
Run the following command:
$ helm install -n stackrox --create-namespace \ stackrox-central-services rhacs/central-services \ -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> 1
- 1
- Use the
-f
option to specify the paths for your YAML configuration files.
Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configurations file in Central.
4.1.2.3. Changing configuration options after deploying the central-services Helm chart
You can make changes to any configuration options after you have deployed the central-services
Helm chart.
When using the helm upgrade
command to make changes, the following guidelines and requirements apply:
-
You can also specify configuration values by using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
-
Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.
-
If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the
helm upgrade
command. The post-installation notes of thecentral-services
Helm chart include a command for retrieving the automatically generated values. -
If the CA was generated outside of the Helm chart and provided during the installation of the
central-services
chart, then you must perform that action again when using thehelm upgrade
command, for example, by using the--reuse-values
flag with thehelm upgrade
command.
-
If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the
Procedure
-
Update the
values-public.yaml
andvalues-private.yaml
configuration files with new values. Run the
helm upgrade
command and specify the configuration files using the-f
option:$ helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ --reuse-values \1 -f <path_to_init_bundle_file \ -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml>
- 1
- If you have modified values that are not included in the
values_public.yaml
andvalues_private.yaml
files, include the--reuse-values
parameter.
4.1.3. Install Central using the roxctl CLI
For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl
install method unless you have a specific installation need that requires using this method.
4.1.3.1. Installing the roxctl CLI
To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl
CLI by downloading the binary. You can install roxctl
on Linux, Windows, or macOS.
4.1.3.1.1. Installing the roxctl CLI on Linux
You can install the roxctl
CLI binary on Linux by using the following procedure.
The roxctl
CLI for Linux is available for amd64
, arm64
, ppc64le
, and s390x
architectures.
Procedure
Determine the
roxctl
architecture for the target operating system:$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
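The one-liner above strips x86_64 from the machine name and prefixes anything that remains with a hyphen, so x86_64 hosts download roxctl with no architecture suffix while other architectures download roxctl-<arch>. A self-contained sketch of that logic, with assumed example inputs:

```shell
# Recreates the download-suffix computation for a few architecture strings.
suffix() {
  a="$(printf '%s' "$1" | sed "s/x86_64//")"  # drop x86_64 entirely
  printf '%s' "${a:+-$a}"                     # prefix "-" if anything remains
}
printf '[%s]\n' "$(suffix x86_64)"   # prints [] - no suffix for x86_64
printf '[%s]\n' "$(suffix arm64)"    # prints [-arm64]
printf '[%s]\n' "$(suffix s390x)"    # prints [-s390x]
```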
Download the
roxctl
CLI:$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Linux/roxctl${arch}"
Make the
roxctl
binary executable:$ chmod +x roxctl
Place the
roxctl
binary in a directory that is on yourPATH
:To check your
PATH
, execute the following command:$ echo $PATH
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.1.3.1.2. Installing the roxctl CLI on macOS
You can install the roxctl
CLI binary on macOS by using the following procedure.
The roxctl
CLI for macOS is available for amd64
and arm64
architectures.
Procedure
Determine the
roxctl
architecture for the target operating system:$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the
roxctl
CLI:$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Darwin/roxctl${arch}"
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the
roxctl
binary executable:$ chmod +x roxctl
Place the
roxctl
binary in a directory that is on yourPATH
:To check your
PATH
, execute the following command:$ echo $PATH
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.1.3.1.3. Installing the roxctl CLI on Windows
You can install the roxctl
CLI binary on Windows by using the following procedure.
The roxctl
CLI for Windows is available for the amd64
architecture.
Procedure
Download the
roxctl
CLI:$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Windows/roxctl.exe
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.1.3.2. Using the interactive installer
Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment.
Procedure
Run the interactive install command:
$ roxctl central generate interactive
ImportantInstalling RHACS using the
roxctl
CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes version 1.25 or later, or on OpenShift Container Platform version 4.12 or later, you must disable the PSP object creation. To do this, specify the --enable-pod-security-policies option as false for the roxctl central generate and roxctl sensor generate commands.
Press Enter to accept the default value for a prompt or enter custom values as required. The following example shows the interactive installer prompts:
Enter path to the backup bundle from which to restore keys and certificates (optional):
Enter read templates from local filesystem (default: "false"):
Enter path to helm templates on your local filesystem (default: "/path"):
Enter PEM cert bundle file (optional): 1
Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "true"): 2
Enter administrator password (default: autogenerated):
Enter orchestrator (k8s, openshift):
Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "development_build"):
Enter the directory to output the deployment bundle to (default: "central-bundle"):
Enter the OpenShift major version (3 or 4) to deploy on (default: "0"):
Enter whether to enable telemetry (default: "false"):
Enter central-db image to use (if unset, a default will be used according to --image-defaults):
Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
Enter the method of exposing Central (route, lb, np, none) (default: "none"): 3
Enter main image to use (if unset, a default will be used according to --image-defaults):
Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
Enter list of secrets to add as declarative configuration mounts in central (default: "[]"): 4
Enter list of config maps to add as declarative configuration mounts in central (default: "[]"): 5
Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
Enter scanner-db image to use (if unset, a default will be used according to --image-defaults):
Enter scanner image to use (if unset, a default will be used according to --image-defaults):
Enter Central volume type (hostpath, pvc): 6
Enter external volume name for Central (default: "stackrox-db"):
Enter external volume size in Gi for Central (default: "100"):
Enter storage class name for Central (optional if you have a default StorageClass configured):
Enter external volume name for Central DB (default: "central-db"):
Enter external volume size in Gi for Central DB (default: "100"):
Enter storage class name for Central DB (optional if you have a default StorageClass configured):
- 1
- If you want to add a custom TLS certificate, provide the file path for the PEM-encoded certificate. When you specify a custom certificate, the interactive installer also prompts you to provide a PEM private key for the custom certificate you are using.
- 2
- If you are running Kubernetes version 1.25 or later, set this value to
false
. - 3
- To use the RHACS portal, you must expose Central by using a route, a load balancer, or a node port.
- 4
- For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
- 5
- For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
- 6
- If you plan to install Red Hat Advanced Cluster Security for Kubernetes on OpenShift Container Platform with a hostPath volume, you must modify the SELinux policy.
WarningOn OpenShift Container Platform, to use a hostPath volume, you must modify the SELinux policy to allow access to the directory that the host and the container share, because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command:
$ sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>
However, Red Hat does not recommend modifying the SELinux policy. Instead, use a PVC when installing on OpenShift Container Platform.
On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts that you need to run to deploy additional trusted certificate authorities, Central, and Scanner, and the authentication instructions for logging in to the RHACS portal, along with the autogenerated password if you did not provide one when answering the prompts.
4.1.3.3. Running the Central installation scripts
After you run the interactive installer, you can run the setup.sh
script to install Central.
Procedure
Run the
setup.sh
script to configure image registry access:$ ./central-bundle/central/scripts/setup.sh
Create the necessary resources:
$ oc create -R -f central-bundle/central
Check the deployment progress:
$ oc get pod -n stackrox -w
After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address.
Exposure method | Command | Address | Example
---|---|---|---|
Route | oc -n stackrox get route central | The address under the HOST/PORT column in the output | https://central-stackrox.example.route
Node Port | oc get node -owide && oc -n stackrox get svc central-loadbalancer | IP or hostname of any node, on the port shown for the service | https://198.51.100.0:31489
Load Balancer | oc -n stackrox get svc central-loadbalancer | EXTERNAL-IP or hostname shown for the service, on port 443 | https://192.0.2.0
None | central-bundle/central/scripts/port-forward.sh 8443 | https://localhost:8443 | https://localhost:8443
If you selected an autogenerated password during the interactive install, you can run the following command to see it so that you can log in to Central:
$ cat central-bundle/password
4.2. Configuring Central configuration options for RHACS using the Operator
When installing the Central instance by using the Operator, you can configure optional settings.
4.2.1. Central configuration options using the Operator
When you create a Central instance, the Operator lists the following configuration options for the Central
custom resource.
The following table includes settings for an external PostgreSQL database.
4.2.1.1. Central settings
Parameter | Description |
---|---|
|
Specify a secret that contains the administrator password in the |
| By default, Central only serves an internal TLS certificate, which means that you need to handle TLS termination at the ingress or load balancer level. If you want to terminate TLS in Central and serve a custom server certificate, you can specify a secret containing the certificate and private key. |
|
Set this parameter to |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
|
Set this to |
| Use this parameter to specify a custom port for your load balancer. |
| Use this parameter to specify a static IP address reserved for your load balancer. |
|
Set this to |
| Specify a custom hostname to use for Central’s route. Leave this unset to accept the default value that OpenShift Container Platform provides. |
|
Set this to |
| Use this to specify an explicit node port. |
|
Use |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| Specify a host path to store persistent data in a directory on the host. Red Hat does not recommend using this. If you need to use host path, you must use it with a node selector. |
|
The name of the PVC to manage persistent data. If no PVC with the given name exists, it is created. The default value is |
| The size of the persistent volume when created through the claim. This is automatically generated by default. |
| The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. |
| Use this parameter to override the default resource limits for the Central. |
| Use this parameter to override the default resource requests for the Central. |
| Use this parameter to specify the image pull secrets for the Central image. |
|
Specify a secret that has the database password in the |
|
If you set this parameter, Central DB is not deployed, and Central connects to the database by using the specified connection string. If you specify a value for this parameter, you must also specify a value for
|
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central DB. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Specify a host path to store persistent data in a directory on the host. Red Hat does not recommend using this. If you need to use host path, you must use it with a node selector. |
|
The name of the PVC to manage persistent data. If no PVC with the given name exists, it is created. The default value is |
| The size of the persistent volume when created through the claim. This is automatically generated by default. |
| The name of the storage class to use for the PVC. If your cluster is not configured with a default storage class, you must provide a value for this parameter. |
| Use this parameter to override the default resource limits for the Central DB. |
| Use this parameter to override the default resource requests for the Central DB. |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
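Putting several of the settings above together, a minimal Central custom resource might look like the following sketch. The field paths mirror the parameters in the table, but treat them as assumptions and confirm them against the CRD installed on your cluster:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    # Expose Central through an OpenShift route.
    exposure:
      route:
        enabled: true
    # Persist data in a named PVC instead of a host path.
    persistence:
      persistentVolumeClaim:
        claimName: stackrox-db
```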
4.2.1.2. StackRox Scanner settings
Parameter | Description |
---|---|
| If you want this scanner to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for the StackRox Scanner. |
| Use this parameter to override the default resource requests for the StackRox Scanner. |
| When enabled, the number of analyzer replicas is managed dynamically based on the load, within the limits specified. |
| Specifies the maximum replicas to be used in the analyzer autoscaling configuration. |
| Specifies the minimum replicas to be used in the analyzer autoscaling configuration. |
| When autoscaling is disabled, the number of replicas is always configured to match this value. |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for the StackRox Scanner DB. |
| Use this parameter to override the default resource requests for the StackRox Scanner DB. |
|
Use |
| If you do not want to deploy the StackRox Scanner, you can disable it by using this parameter. If you disable the StackRox Scanner, all other settings in this section have no effect. Red Hat does not recommend disabling the StackRox Scanner. Do not disable the StackRox Scanner if you have enabled Scanner V4. Scanner V4 requires that the StackRox Scanner is also enabled to provide the necessary scanning capabilities. |
4.2.1.3. Scanner V4 settings
Parameter | Description |
---|---|
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner V4 DB. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for Scanner V4 DB. |
| Use this parameter to override the default resource requests for Scanner V4 DB. |
|
The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for the Scanner V4 Indexer. |
| Use this parameter to override the default resource requests for the Scanner V4 Indexer. |
| When enabled, the number of Scanner V4 Indexer replicas is managed dynamically based on the load, within the limits specified. |
| Specifies the maximum replicas to be used in the Scanner V4 Indexer autoscaling configuration. |
| Specifies the minimum replicas to be used in the Scanner V4 Indexer autoscaling configuration. |
| When autoscaling is disabled for the Scanner V4 Indexer, the number of replicas is always configured to match this value. |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for the Scanner V4 Matcher. |
| Use this parameter to override the default resource requests for the Scanner V4 Matcher. |
| When enabled, the number of Scanner V4 Matcher replicas is managed dynamically based on the load, within the limits specified. |
| Specifies the maximum replicas to be used in the Scanner V4 Matcher autoscaling configuration. |
| Specifies the minimum replicas to be used in the Scanner V4 Matcher autoscaling configuration. |
| When autoscaling is disabled for the Scanner V4 Matcher, the number of replicas is always configured to match this value. |
|
Configures a monitoring endpoint for Scanner V4. The monitoring endpoint allows other services to collect metrics from Scanner V4, provided in a Prometheus-compatible format. Use |
|
Enables Scanner V4. The default value is |
4.2.1.4. General and miscellaneous settings
Parameter | Description |
---|---|
| Allows specifying custom annotations for the Central deployment. |
| Advanced settings to configure environment variables. |
| Configures whether RHACS should run in online or offline mode. In offline mode, automatic updates of vulnerability definitions and kernel modules are disabled. |
|
Specify |
|
If you set this option to |
|
To provide security at the network level, RHACS creates default network policies. Warning: Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
| See "Customizing the installation using the Operator with overlays". |
| Additional Trusted CA certificates for the secured cluster to trust. These certificates are typically used when integrating with services using a private certificate authority. |
4.2.2. Customizing the installation using the Operator with overlays
Learn how to tailor the installation of RHACS using the Operator method with overlays.
4.2.2.1. Overlays
When Central
or SecuredCluster
custom resources do not expose certain low-level configuration options as parameters, you can use the .spec.overlays
field for adjustments. Use this field to amend the Kubernetes resources generated by these custom resources.
The .spec.overlays
field comprises a sequence of patches, applied in their listed order. These patches are processed by the Operator on the Kubernetes resources before deployment to the cluster.
The .spec.overlays
field in both Central
and SecuredCluster
allows users to modify low-level Kubernetes resources in arbitrary ways. Use this feature only when the desired customization is not available through the SecuredCluster
or Central
custom resources.
Support for the .spec.overlays
feature is limited, primarily because it grants the ability to make intricate and highly specific modifications to Kubernetes resources, which can vary significantly from one implementation to another. This level of customization introduces complexity beyond standard usage scenarios, making broad support challenging. Each modification can be unique and can interact with the Kubernetes system in unpredictable ways across different versions and configurations of the product. Troubleshooting and guaranteeing the stability of these customizations therefore require expertise specific to each setup. Consequently, while this feature makes it possible to tailor Kubernetes resources to precise needs, you must also assume greater responsibility for the compatibility and stability of your configurations, especially during upgrades or changes to the underlying product.
The following example shows the structure of an overlay:
overlays:
  - apiVersion: v1 1
    kind: ConfigMap 2
    name: my-configmap 3
    patches:
      - path: .data 4
        value: | 5
          key1: data2
          key2: data2
- 1
- Targeted Kubernetes resource ApiVersion, for example
apps/v1
,v1
,networking.k8s.io/v1
- 2
- Resource type, for example Deployment, ConfigMap, or NetworkPolicy
- 3
- Name of the resource, for example
my-configmap
- 4
- JSONPath expression to the field, for example
spec.template.spec.containers[name:central].env[-1]
- 5
- YAML string for the new field value
4.2.2.1.1. Adding an overlay
For customizations, you can add overlays to Central
or SecuredCluster
custom resources. Use the OpenShift CLI (oc
) or the OpenShift Container Platform web console for modifications.
If overlays do not take effect as expected, check the RHACS Operator logs for any syntax errors or issues logged.
4.2.2.2. Overlay examples
4.2.2.2.1. Specifying an EKS pod role ARN for the Central ServiceAccount
Add an Amazon Elastic Kubernetes Service (EKS) pod role Amazon Resource Name (ARN) annotation to the central
ServiceAccount as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
    - apiVersion: v1
      kind: ServiceAccount
      name: central
      patches:
        - path: metadata.annotations.eks\.amazonaws\.com/role-arn
          value: "\"arn:aws:iam:1234:role\""
4.2.2.2.2. Injecting an environment variable into the Central deployment
Inject an environment variable into the central
deployment as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
    - apiVersion: apps/v1
      kind: Deployment
      name: central
      patches:
        - path: spec.template.spec.containers[name:central].env[-1]
          value: |
            name: MY_ENV_VAR
            value: value
4.2.2.2.3. Extending network policy with an ingress rule
Add an ingress rule to the allow-ext-to-central
network policy for port 999 traffic as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: allow-ext-to-central
      patches:
        - path: spec.ingress[-1]
          value: |
            ports:
              - port: 999
                protocol: TCP
4.2.2.2.4. Modifying ConfigMap data
Modify the central-endpoints
ConfigMap data as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
    - apiVersion: v1
      kind: ConfigMap
      name: central-endpoints
      patches:
        - path: data
          value: |
            endpoints.yaml: |
              disableDefault: false
4.2.2.2.5. Adding a container to the Central
deployment
Add a new container to the central
deployment as shown in the following example:.
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
    - apiVersion: apps/v1
      kind: Deployment
      name: central
      patches:
        - path: spec.template.spec.containers[-1]
          value: |
            name: nginx
            image: nginx
            ports:
              - containerPort: 8000
                name: http
                protocol: TCP
4.3. Generating and applying an init bundle for RHACS on Red Hat OpenShift
Before you install the SecuredCluster
resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster
installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl
CLI. You then apply the init bundle by using it to create resources.
To configure an init bundle for RHACS Cloud Service, see the following resources:
You must have the Admin
user role to create an init bundle.
4.3.1. Generating an init bundle
4.3.1.1. Generating an init bundle by using the RHACS portal
You can create an init bundle containing secrets by using the RHACS portal.
You must have the Admin
user role to create an init bundle.
Procedure
- Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
- Log in to the RHACS portal.
-
If you do not have secured clusters, the Platform Configuration → Clusters page appears.
- Click Create init bundle.
- Enter a name for the cluster init bundle.
- Select your platform.
- Select the installation method you will use for your secured clusters: Operator or Helm chart.
Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.
ImportantStore this bundle securely because it contains secrets.
- Apply the init bundle by using it to create resources on the secured cluster.
- Install secured cluster services on each cluster.
4.3.1.2. Generating an init bundle by using the roxctl CLI
You can create an init bundle with secrets by using the roxctl
CLI.
You must have the Admin
user role to create init bundles.
Prerequisites
You have configured the
ROX_API_TOKEN
and theROX_CENTRAL_ADDRESS
environment variables:Set the
ROX_API_TOKEN
by running the following command:$ export ROX_API_TOKEN=<api_token>
Set the
ROX_CENTRAL_ADDRESS
environment variable by running the following command:$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
Procedure
To generate a cluster init bundle containing secrets for Helm installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output cluster_init_bundle.yaml
To generate a cluster init bundle containing secrets for Operator installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output-secrets cluster_init_bundle.yaml
ImportantEnsure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.
4.3.1.3. Applying the init bundle on the secured cluster
Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the cluster. Applying the init bundle allows the services on the secured cluster to communicate with Central.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.
Prerequisites
- You must have generated an init bundle containing secrets.
-
You must have created the
stackrox
project, or namespace, on the cluster where secured cluster services will be installed. Usingstackrox
for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.
Procedure
To create resources, perform only one of the following steps:
-
Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the
stackrox
namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that thecollector-tls
,sensor-tls
, and admission-control-tls resources were created.
-
Create resources using the Red Hat OpenShift CLI by running the following command:
$ oc create -f <init_bundle>.yaml \
  -n <stackrox>
4.3.2. Next steps
- Install RHACS secured cluster services in all clusters that you want to monitor.
4.3.3. Additional resources
4.4. Installing Secured Cluster services for RHACS on Red Hat OpenShift
You can install RHACS on your secured clusters by using one of the following methods:
- Install by using the Operator
- Install by using Helm charts
-
Install by using the
roxctl
CLI (do not use this method unless you have a specific installation need that requires using it)
4.4.1. Installing RHACS on secured clusters by using the Operator
4.4.1.1. Installing secured cluster services
You can install Secured Cluster services on your clusters by using the Operator, which creates the SecuredCluster
custom resource. You must install the Secured Cluster services on every cluster in your environment that you want to monitor.
When you install Red Hat Advanced Cluster Security for Kubernetes:
-
If you are installing RHACS for the first time, you must first install the
Central
custom resource because theSecuredCluster
custom resource installation is dependent on certificates that Central generates. -
Do not install
SecuredCluster
in projects whose names start withkube
,openshift
, orredhat
, or in theistio-system
project. -
If you are installing RHACS
SecuredCluster
custom resource on a cluster that also hosts Central, ensure that you install it in the same namespace as Central. -
If you are installing Red Hat Advanced Cluster Security for Kubernetes
SecuredCluster
custom resource on a cluster that does not host Central, Red Hat recommends that you install the Red Hat Advanced Cluster Security for KubernetesSecuredCluster
custom resource in its own project and not in the project in which you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator.
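Taken together, the placement rules above correspond to a SecuredCluster resource similar to the following sketch. The cluster name and endpoint are placeholder values, and the API version shown is an assumption based on the Operator's CRD; verify it against your installed Operator:

```yaml
apiVersion: platform.stackrox.io/v1alpha1   # assumed API version; check your CRD
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox        # its own project, not the Operator's project
spec:
  clusterName: production-cluster            # placeholder cluster name
  centralEndpoint: central.example.com:443   # use the hostname, not the default, for remote clusters
```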
Prerequisites
- If you are using OpenShift Container Platform, you must install version 4.11 or later.
- You have installed the RHACS Operator on the cluster that you want to secure, called the secured cluster.
- You have generated an init bundle and applied it to the cluster.
Procedure
-
On the OpenShift Container Platform web console for the secured cluster, go to the Operators → Installed Operators page.
- Click the RHACS Operator.
If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as
rhacs-operator
. Select Project: rhacs-operator → Create project.
Note
-
If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of
rhacs-operator
.
-
Enter the new project name (for example,
stackrox
), and click Create. Red Hat recommends that you usestackrox
as the project name. - Click Secured Cluster from the central navigation menu in the Operator details page.
- Click Create SecuredCluster.
Select one of the following options in the Configure via field:
- Form view: Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
- YAML view: Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create.
- If you are using Form view, enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services.
- Optional: Add any labels for the cluster.
-
Enter a unique name for your
SecuredCluster
custom resource. For Central Endpoint, enter the address of your Central instance. For example, if Central is available at
https://central.example.com
, then specify the central endpoint ascentral.example.com
.-
Use the default value of
central.stackrox.svc:443
only if you are installing secured cluster services in the same cluster where Central is installed. - Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster.
-
Use the default value of
- For the remaining fields, accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs. See "Configuring Secured Cluster services options for RHACS using the Operator" for more information.
- Click Create.
After a brief pause, the SecuredClusters page displays the status of
stackrox-secured-cluster-services
. You might see the following conditions:- Conditions: Deployed, Initialized: The secured cluster services have been installed and the secured cluster is communicating with Central.
- Conditions: Initialized, Irreconcilable: The secured cluster is not communicating with Central. Make sure that you applied the init bundle you created in the RHACS web portal to the secured cluster.
Next steps
- Configure additional secured cluster settings (optional).
- Verify installation.
4.4.2. Installing RHACS on secured clusters by using Helm charts
You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters.
4.4.2.1. Installing RHACS on secured clusters by using Helm charts without customizations
4.4.2.1.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
Central services Helm chart (
central-services
) for installing the centralized components (Central and Scanner).NoteYou deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.
Secured Cluster Services Helm chart (
secured-cluster-services
) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).NoteDeploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
4.4.2.1.2. Installing the secured-cluster-services Helm chart without customization
Use the following instructions to install the secured-cluster-services
Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
-
You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from
registry.redhat.io
, see Red Hat Container Registry Authentication. - You must have the address that you are exposing the Central service on.
Procedure
Run the following command on OpenShift Container Platform clusters:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <path_to_cluster_init_bundle.yaml> \ 1
  -f <path_to_pull_secret.yaml> \ 2
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<endpoint_of_central_service> \ 3
  --set scanner.disable=false 4
- 1
- Use the
-f
option to specify the path for the init bundle. - 2
- Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3
- Specify the address and port number for Central. For example,
acs.domain.com:443
. - 4
- Set the value of the
scanner.disable
parameter tofalse
, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim as an optional component.
Additional resources
4.4.2.2. Configuring the secured-cluster-services Helm chart with customizations
This section describes Helm chart configuration parameters that you can use with the helm install
and helm upgrade
commands. You can specify these parameters by using the --set
option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
-
Public configuration file
values-public.yaml
: Use this file to save all non-sensitive configuration options. -
Private configuration file
values-private.yaml
: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
While using the secured-cluster-services
Helm chart, do not modify the values.yaml
file that is part of the chart.
4.4.2.2.1. Configuration parameters
Parameter | Description |
---|---|
| Name of your cluster. |
|
Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
| Address of the Sensor endpoint including port number. |
| Image pull policy for the Sensor container. |
| The internal service-to-service TLS certificate that Sensor uses. |
| The internal service-to-service TLS certificate key that Sensor uses. |
| The memory request for the Sensor container. Use this parameter to override the default value. |
| The CPU request for the Sensor container. Use this parameter to override the default value. |
| The memory limit for the Sensor container. Use this parameter to override the default value. |
| The CPU limit for the Sensor container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
|
The name of the |
| The name of the Collector image. |
| The address of the registry you are using for the main image. |
| The address of the registry you are using for the Collector image. |
| The address of the registry you are using for the Scanner image. |
| The address of the registry you are using for the Scanner DB image. |
| The address of the registry you are using for the Scanner V4 image. |
| The address of the registry you are using for the Scanner V4 DB image. |
|
Image pull policy for |
| Image pull policy for the Collector images. |
|
Tag of |
|
Tag of |
|
Either |
| Image pull policy for the Collector container. |
| Image pull policy for the Compliance container. |
|
If you specify |
| The memory request for the Collector container. Use this parameter to override the default value. |
| The CPU request for the Collector container. Use this parameter to override the default value. |
| The memory limit for the Collector container. Use this parameter to override the default value. |
| The CPU limit for the Collector container. Use this parameter to override the default value. |
| The memory request for the Compliance container. Use this parameter to override the default value. |
| The CPU request for the Compliance container. Use this parameter to override the default value. |
| The memory limit for the Compliance container. Use this parameter to override the default value. |
| The CPU limit for the Compliance container. Use this parameter to override the default value. |
| The internal service-to-service TLS certificate that Collector uses. |
| The internal service-to-service TLS certificate key that Collector uses. |
|
This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
When you set this parameter as |
|
This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
| This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
|
This setting controls the behavior of the admission control service. You must specify |
|
If you set this option to |
|
Set it to |
|
Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the |
| The memory request for the Admission Control container. Use this parameter to override the default value. |
| The CPU request for the Admission Control container. Use this parameter to override the default value. |
| The memory limit for the Admission Control container. Use this parameter to override the default value. |
| The CPU limit for the Admission Control container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
| The internal service-to-service TLS certificate that Admission Control uses. |
| The internal service-to-service TLS certificate key that Admission Control uses. |
|
Use this parameter to override the default |
|
If you specify |
|
Specify |
|
Specify |
|
Deprecated. Specify |
| Resource specification for Sensor. |
| Resource specification for Admission controller. |
| Resource specification for Collector. |
| Resource specification for Collector’s Compliance container. |
|
If you set this option to |
|
If you set this option to |
|
If you set this option to |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| Resource specification for Collector’s Compliance container. |
| Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
|
If you set this option to |
| The minimum number of replicas for autoscaling. Defaults to 2. |
| The maximum number of replicas for autoscaling. Defaults to 5. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| The memory request for the Scanner container. Use this parameter to override the default value. |
| The CPU request for the Scanner container. Use this parameter to override the default value. |
| The memory limit for the Scanner container. Use this parameter to override the default value. |
| The CPU limit for the Scanner container. Use this parameter to override the default value. |
| The memory request for the Scanner DB container. Use this parameter to override the default value. |
| The CPU request for the Scanner DB container. Use this parameter to override the default value. |
| The memory limit for the Scanner DB container. Use this parameter to override the default value. |
| The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
|
If you set this option to |
|
To provide security at the network level, RHACS creates default network policies. Warning: Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
4.4.2.2.1.1. Environment variables
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"
The customize
setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
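For example, the hierarchy allows a chart-wide label to be overridden for a single workload. The per-workload customize.sensor scope shown here is an assumption based on the hierarchical description above; verify the exact key names against your chart version:

```yaml
customize:
  # Generic scope: applied to all objects created by the chart
  labels:
    owner: security-team
  envVars:
    LOG_LEVEL: "info"
  # Narrower scope (assumed key name): overrides for the Sensor deployment only
  sensor:
    labels:
      owner: sensor-team
```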
4.4.2.2.2. Installing the secured-cluster-services Helm chart with customizations
After you configure the values-public.yaml
and values-private.yaml
files, install the secured-cluster-services
Helm chart to deploy the following per-cluster and per-node components:
- Sensor
- Admission controller
- Collector
- Scanner: optional for secured clusters when the StackRox Scanner is installed
- Scanner DB: optional for secured clusters when the StackRox Scanner is installed
- Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
-
You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from
registry.redhat.io
, see Red Hat Container Registry Authentication. - You must have the address and the port number that you are exposing the Central service on.
Procedure
Run the following command:
$ helm install -n stackrox \
  --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <name_of_cluster_init_bundle.yaml> \
  -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \
  --set imagePullSecrets.username=<username> \
  --set imagePullSecrets.password=<password>
To deploy secured-cluster-services
Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install
command:
$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET") 1
- 1
- If you are using base64 encoded variables, use the
helm install … -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)
command instead.
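The process-substitution pattern can be checked locally before wiring it into a CI system. This standalone sketch round-trips a dummy value through base64 the same way the decoding variant above consumes the secret; the variable content is illustrative only:

```shell
# Simulate a CI secret: base64-encode a dummy init bundle fragment,
# then decode it the way the helm install command would consume it.
INIT_BUNDLE_YAML_SECRET="$(printf 'clusterName: demo' | base64)"
decoded="$(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)"
echo "$decoded"
```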
Additional resources
4.4.2.3. Changing configuration options after deploying the secured-cluster-services Helm chart
You can make changes to any configuration options after you have deployed the secured-cluster-services
Helm chart.
When using the helm upgrade
command to make changes, the following guidelines and requirements apply:
-
You can also specify configuration values using the
--set
or--set-file
parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.
-
If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the
helm upgrade
command. The post-installation notes of thecentral-services
Helm chart include a command for retrieving the automatically generated values. -
If the CA was generated outside of the Helm chart and provided during the installation of the
central-services
chart, then you must perform that action again when using thehelm upgrade
command, for example, by using the--reuse-values
flag with thehelm upgrade
command.
-
If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the
Procedure
-
Update the
values-public.yaml
andvalues-private.yaml
configuration files with new values. Run the
helm upgrade
command and specify the configuration files using the-f
option:$ helm upgrade -n stackrox \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --reuse-values \1 -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml>
1 - If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter.
4.4.3. Installing RHACS on secured clusters by using the roxctl CLI
This method is also referred to as the manifest installation method.
Prerequisites
- If you plan to use the roxctl CLI command to generate the files used by the sensor installation script, you have installed the roxctl CLI.
- You have generated the files that will be used by the sensor installation script.
Procedure
- On the OpenShift Container Platform secured cluster, deploy the Sensor component by running the sensor installation script.
4.4.3.1. Installing the roxctl CLI
You must first download the binary. You can install roxctl
on Linux, Windows, or macOS.
4.4.3.1.1. Installing the roxctl CLI on Linux
You can install the roxctl
CLI binary on Linux by using the following procedure.
The roxctl CLI for Linux is available for the amd64, arm64, ppc64le, and s390x architectures.
Procedure
Determine the
roxctl
architecture for the target operating system:$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the
roxctl
CLI:$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Linux/roxctl${arch}"
Make the
roxctl
binary executable:$ chmod +x roxctl
Place the
roxctl
binary in a directory that is on yourPATH
:To check your
PATH
, execute the following command:$ echo $PATH
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.4.3.1.2. Installing the roxctl CLI on macOS
You can install the roxctl
CLI binary on macOS by using the following procedure.
The roxctl CLI for macOS is available for the amd64 and arm64 architectures.
Procedure
Determine the
roxctl
architecture for the target operating system:$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
Download the
roxctl
CLI:$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Darwin/roxctl${arch}"
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the
roxctl
binary executable:$ chmod +x roxctl
Place the
roxctl
binary in a directory that is on yourPATH
:To check your
PATH
, execute the following command:$ echo $PATH
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.4.3.1.3. Installing the roxctl CLI on Windows
You can install the roxctl
CLI binary on Windows by using the following procedure.
The roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the
roxctl
CLI:$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.4/bin/Windows/roxctl.exe
Verification
Verify the
roxctl
version you have installed:$ roxctl version
4.4.3.2. Installing Sensor
To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.
To perform an installation by using the manifest installation method, follow only one of the following procedures:
- Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
-
Use the
roxctl
CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.
Prerequisites
- You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
4.4.3.2.1. Manifest installation method by using the web portal
Procedure
- On your secured cluster, in the RHACS portal, go to Platform Configuration → Clusters.
- Select Secure a cluster → Legacy installation method.
- Specify a name for the cluster.
Provide appropriate values for the fields based on where you are deploying the Sensor.
- If you are deploying Sensor in the same cluster, accept the default values for all the fields.
- If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster.
- If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure (wss) protocol. To use wss:
  - Prefix the address with wss://.
  - Add the port number after the address, for example, wss://stackrox-central.example.com:443.
- Click Next to continue with the Sensor setup.
Click Download YAML File and Keys to download the cluster bundle (zip archive).
ImportantThe cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.
From a system that has access to the monitored cluster, extract and run the
sensor
script from the cluster bundle:$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
4.4.3.2.2. Manifest installation by using the roxctl CLI
Procedure
Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:
$ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT" 1
1 - For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.
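If you have the full cluster version on hand, the major number that the option expects can be derived with plain parameter expansion. The version string below is a made-up example rather than output from a real cluster:

```shell
# Full version as reported by, for example, `oc version`; value is illustrative
full_version="4.14.8"

# Strip everything from the first dot onward to get the major version
major="${full_version%%.*}"
echo "$major"   # prints: 4
```

You would then pass that value as `--openshift-version 4`.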
From a system that has access to the monitored cluster, extract and run the
sensor
script from the cluster bundle:$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
Verification
Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following commands to check for problems.
On OpenShift Container Platform, enter the following command:
$ oc get pod -n stackrox -w
On Kubernetes, enter the following command:
$ kubectl get pod -n stackrox -w
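If the watch output is long, a small filter like the one below can surface pods that are not healthy. This is a hedged sketch that parses kubectl/oc table output, demonstrated here on fabricated sample data rather than a live cluster:

```shell
# Print the names of pods whose STATUS column is not Running or Completed
unhealthy_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# Sample `get pod` output (fabricated for illustration)
sample='NAME                     READY   STATUS             RESTARTS   AGE
sensor-5d9f8b6c7-abcde   1/1     Running            0          2m
collector-xyz12          1/2     CrashLoopBackOff   4          2m'

printf '%s\n' "$sample" | unhealthy_pods   # prints: collector-xyz12
```

On a real cluster you would pipe `oc get pod -n stackrox` (without `-w`) into the same filter.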
- Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
4.5. Configuring Secured Cluster services options for RHACS using the Operator
When installing Secured Cluster services by using the Operator, you can configure optional settings.
4.5.1. Secured Cluster services configuration options
When you create a SecuredCluster instance, the Operator lists the following configuration options for the SecuredCluster custom resource.
4.5.1.1. Required Configuration Settings
Parameter | Description |
---|---|
|
The endpoint of the Central instance to connect to, including the port number. If using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss://. |
| The unique name of this cluster, which shows up in the RHACS portal. After you set the name by using this parameter, you cannot change it again. To change the name, you must delete and re-create the object. |
4.5.1.2. Admission controller settings
Parameter | Description |
---|---|
|
Specify |
|
Specify |
|
Specify |
| If you want this component to only run on specific nodes, you can configure a node selector using this parameter. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for the admission controller. |
| Use this parameter to override the default resource requests for the admission controller. |
| Use one of the following values to configure the bypassing of admission controller enforcement:
The default value is |
| Use one of the following values to specify if the admission controller must connect to the image scanner:
The default value is |
|
Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the |
4.5.1.3. Scanner configuration
Use Scanner configuration settings to modify the local cluster scanner for the integrated OpenShift image registry.
Parameter | Description |
---|---|
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| The memory request for the Scanner container. Use this parameter to override the default value. |
| The CPU request for the Scanner container. Use this parameter to override the default value. |
| The memory limit for the Scanner container. Use this parameter to override the default value. |
| The CPU limit for the Scanner container. Use this parameter to override the default value. |
|
If you set this option to |
|
The minimum number of replicas for autoscaling. The default value is |
|
The maximum number of replicas for autoscaling. The default value is |
|
The default number of replicas. The default value is |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
Specify a node selector label as |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| The memory request for the Scanner DB container. Use this parameter to override the default value. |
| The CPU request for the Scanner DB container. Use this parameter to override the default value. |
| The memory limit for the Scanner DB container. Use this parameter to override the default value. |
| The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
|
If you set this option to |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner V4 DB. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to override the default resource limits for Scanner V4 DB. |
| Use this parameter to override the default resource requests for Scanner V4 DB. |
|
The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is |
| If you want this component to only run on specific nodes, you can use this parameter to configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to override the default resource limits for the Scanner V4 Indexer. |
| Use this parameter to override the default resource requests for the Scanner V4 Indexer. |
| When enabled, the number of Scanner V4 Indexer replicas is managed dynamically based on the load, within the limits specified. |
| Specifies the maximum replicas to be used in the Scanner V4 Indexer autoscaling configuration. |
| Specifies the minimum replicas to be used in the Scanner V4 Indexer autoscaling configuration. |
| When autoscaling is disabled for the Scanner V4 Indexer, the number of replicas is always configured to match this value. |
|
Configures a monitoring endpoint for Scanner V4. The monitoring endpoint allows other services to collect metrics from Scanner V4, provided in a Prometheus-compatible format. Use |
|
Enables Scanner V4. The default value is |
4.5.1.4. Image configuration
Use image configuration settings when you are using a custom registry.
Parameter | Description |
---|---|
| Additional image pull secrets to be taken into account for pulling images. |
4.5.1.5. Per node settings
Per node settings define the configuration settings for components that run on each node in a cluster to secure the cluster. These components are Collector and Compliance.
Parameter | Description |
---|---|
|
The method for system-level data collection. The default value is |
|
The image type to use for Collector. You can specify it as |
| Use this parameter to override the default resource limits for Collector. |
| Use this parameter to override the default resource requests for Collector. |
| Use this parameter to override the default resource requests for Compliance. |
| Use this parameter to override the default resource limits for Compliance. |
|
To ensure comprehensive monitoring of your cluster activity, Red Hat Advanced Cluster Security for Kubernetes runs services on every node in the cluster, including tainted nodes by default. If you do not want this behavior, specify |
4.5.1.6. Sensor configuration
This configuration defines the settings of the Sensor component, which runs on one node in a cluster.
Parameter | Description |
---|---|
| If you want Sensor to only run on specific nodes, you can configure a node selector. |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
| Use this parameter to inject hosts and IP addresses into the pod’s hosts file. |
| Use this parameter to override the default resource limits for Sensor. |
| Use this parameter to override the default resource requests for Sensor. |
4.5.1.7. General and miscellaneous settings
Parameter | Description |
---|---|
| Allows specifying custom annotations for the Central deployment. |
| Advanced settings to configure environment variables. |
| Configures whether Red Hat Advanced Cluster Security for Kubernetes should run in online or offline mode. In offline mode, automatic updates of vulnerability definitions and kernel modules are disabled. |
|
Set this to |
|
To provide security at the network level, RHACS creates default Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
| See "Customizing the installation using the Operator with overlays". |
| Additional trusted CA certificates for the secured cluster. These certificates are used when integrating with services using a private certificate authority. |
4.5.2. Customizing the installation using the Operator with overlays
Learn how to tailor the installation of RHACS using the Operator method with overlays.
4.5.2.1. Overlays
When Central
or SecuredCluster
custom resources don’t expose certain low-level configuration options as parameters, you can use the .spec.overlays
field for adjustments. Use this field to amend the Kubernetes resources generated by these custom resources.
The .spec.overlays
field comprises a sequence of patches, applied in their listed order. These patches are processed by the Operator on the Kubernetes resources before deployment to the cluster.
The .spec.overlays
field in both Central
and SecuredCluster
allows users to modify low-level Kubernetes resources in arbitrary ways. Use this feature only when the desired customization is not available through the SecuredCluster
or Central
custom resources.
Support for the .spec.overlays
feature is limited primarily because it grants the ability to make intricate and highly specific modifications to Kubernetes resources, which can vary significantly from one implementation to another. This level of customization introduces a complexity that goes beyond standard usage scenarios, making it challenging to provide broad support. Each modification can be unique, potentially interacting with the Kubernetes system in unpredictable ways across different versions and configurations of the product. This variability means that troubleshooting and guaranteeing the stability of these customizations require a level of expertise and understanding specific to each individual’s setup. Consequently, while this feature empowers tailoring Kubernetes resources to meet precise needs, greater responsibility must also assumed to ensure the compatibility and stability of configurations, especially during upgrades or changes to the underlying product.
The following example shows the structure of an overlay:
overlays:
- apiVersion: v1 1
  kind: ConfigMap 2
  name: my-configmap 3
  patches:
    - path: .data 4
      value: | 5
        key1: data2
        key2: data2

1 - Targeted Kubernetes resource apiVersion, for example apps/v1, v1, or networking.k8s.io/v1.
2 - Resource type, for example Deployment, ConfigMap, or NetworkPolicy.
3 - Name of the resource, for example my-configmap.
4 - JSONPath expression to the field, for example spec.template.spec.containers[name:central].env[-1].
5 - YAML string for the new field value.
4.5.2.1.1. Adding an overlay
For customizations, you can add overlays to Central
or SecuredCluster
custom resources. Use the OpenShift CLI (oc
) or the OpenShift Container Platform web console for modifications.
If overlays do not take effect as expected, check the RHACS Operator logs for any syntax errors or issues logged.
4.5.2.2. Overlay examples
4.5.2.2.1. Specifying an EKS pod role ARN for the Central ServiceAccount
Add an Amazon Elastic Kubernetes Service (EKS) pod role Amazon Resource Name (ARN) annotation to the central
ServiceAccount as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
  - apiVersion: v1
    kind: ServiceAccount
    name: central
    patches:
      - path: metadata.annotations.eks\.amazonaws\.com/role-arn
        value: "\"arn:aws:iam:1234:role\""
4.5.2.2.2. Injecting an environment variable into the Central deployment
Inject an environment variable into the central
deployment as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
  - apiVersion: apps/v1
    kind: Deployment
    name: central
    patches:
      - path: spec.template.spec.containers[name:central].env[-1]
        value: |
          name: MY_ENV_VAR
          value: value
4.5.2.2.3. Extending network policy with an ingress rule
Add an ingress rule to the allow-ext-to-central
network policy for port 999 traffic as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    name: allow-ext-to-central
    patches:
      - path: spec.ingress[-1]
        value: |
          ports:
          - port: 999
            protocol: TCP
4.5.2.2.4. Modifying ConfigMap data
Modify the central-endpoints
ConfigMap data as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
  - apiVersion: v1
    kind: ConfigMap
    name: central-endpoints
    patches:
      - path: data
        value: |
          endpoints.yaml: |
            disableDefault: false
4.5.2.2.5. Adding a container to the Central deployment
Add a new container to the central deployment as shown in the following example:
apiVersion: platform.stackrox.io
kind: Central
metadata:
  name: central
spec:
  # ...
  overlays:
  - apiVersion: apps/v1
    kind: Deployment
    name: central
    patches:
      - path: spec.template.spec.containers[-1]
        value: |
          name: nginx
          image: nginx
          ports:
            - containerPort: 8000
              name: http
              protocol: TCP
4.6. Verifying installation of RHACS on Red Hat OpenShift
Provides steps to verify that RHACS is properly installed.
4.6.1. Verifying installation
After you complete the installation, run a few vulnerable applications and go to the RHACS portal to evaluate the results of security assessments and policy violations.
The sample applications listed in the following section contain critical vulnerabilities and they are specifically designed to verify the build and deploy-time assessment features of Red Hat Advanced Cluster Security for Kubernetes.
To verify installation:
Find the address of the RHACS portal based on your exposure method:
For a route:
$ oc get route central -n stackrox
For a load balancer:
$ oc get service central-loadbalancer -n stackrox
For port forward:
Run the following command:
$ oc port-forward svc/central 18443:443 -n stackrox
- Go to https://localhost:18443/.
Using the Red Hat OpenShift CLI, create a new project:
$ oc new-project test
Start some applications with critical vulnerabilities:
$ oc run shell --labels=app=shellshock,team=test-team \
  --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2014-6271 -n test
$ oc run samba --labels=app=rce \
  --image=quay.io/stackrox-io/docs:example-vulnerables-cve-2017-7494 -n test
Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risks and policy violations as soon as they are submitted to the cluster. Go to the RHACS portal to view the violations. You can log in to the RHACS portal by using the default username admin and the generated password.
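If you did not note the generated admin password, it is typically stored base64-encoded in a secret in the stackrox namespace. The secret and key names shown in the comment (central-htpasswd and password) follow RHACS convention but are stated here as an assumption; the sketch only demonstrates the decoding step on a fabricated value:

```shell
# On a live cluster you would fetch the encoded value with something like:
#   oc -n stackrox get secret central-htpasswd \
#     -o go-template='{{index .data "password" | base64decode}}'
# Here a made-up encoded value stands in for the secret data.
encoded="c2VjcmV0cGFzcw=="                       # base64 of "secretpass"
password="$(printf '%s' "$encoded" | base64 --decode)"
echo "$password"   # prints: secretpass
```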