Chapter 5. Setting up RHACS Cloud Service with Red Hat OpenShift secured clusters
5.1. Creating a RHACS Cloud instance on Red Hat Cloud
Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters.
5.1.1. Creating an instance in the console
In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters.
Procedure
To create an ACS instance:
- Log in to the Red Hat Hybrid Cloud Console.
- From the navigation menu, select Advanced Cluster Security → ACS Instances.
- Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list:
- Name: Enter the name of your ACS instance. An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance.
- Cloud provider: The cloud provider where Central is located. Select AWS.
- Cloud region: The region for your cloud provider where Central is located. Select one of the following regions:
- US-East, N. Virginia
- Europe, Ireland
- Availability zones: Use the default value (Multi).
- Click Create instance.
5.1.2. Next steps
- On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
5.2. Creating a project on your Red Hat OpenShift secured cluster
Create a project on each Red Hat OpenShift cluster that you want to secure. You then use this project to install RHACS Cloud Service resources by using the Operator or Helm charts.
5.2.1. Creating a project on your cluster
Procedure
- In your OpenShift Container Platform cluster, navigate to Home → Projects and create a project for RHACS Cloud Service. Use stackrox as the project Name. Alternatively, you can create the project from the CLI, as shown in the following example.
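A minimal CLI equivalent, assuming you are logged in to the cluster with permission to create projects:
$ oc new-project stackrox   # creates the stackrox project and switches your context to it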
5.2.2. Next steps
- In the ACS Console, create an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and the ACS Console.
5.3. Generating an init bundle for secured clusters
Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources.
You must have the Admin user role to create an init bundle.
5.3.1. Generating an init bundle
5.3.1.1. Generating an init bundle by using the RHACS portal
You can create an init bundle containing secrets by using the RHACS portal.
You must have the Admin user role to create an init bundle.
Procedure
Find the address of the RHACS portal based on your exposure method:
For a route:
$ oc get route central -n stackrox
For a load balancer:
$ oc get service central-loadbalancer -n stackrox
For port forward:
Run the following command:
$ oc port-forward svc/central 18443:443 -n stackrox
Navigate to https://localhost:18443/.
- On the RHACS portal, navigate to Platform Configuration → Integrations.
- Navigate to the Authentication Tokens section and click Cluster Init Bundle.
- Click Generate bundle.
- Enter a name for the cluster init bundle and click Generate.
- If you are installing using Helm charts, click Download Helm Values File to download the generated bundle.
- If you are installing using the Operator, click Download Kubernetes Secret File to download the generated bundle.
Store this bundle securely because it contains secrets. You can use the same bundle to create multiple secured clusters.
Next steps
- Apply the init bundle by creating a resource on the secured cluster.
- Install secured cluster services on each cluster.
5.3.1.2. Generating an init bundle by using the roxctl CLI
You can create an init bundle with secrets by using the roxctl CLI.
You must have the Admin user role to create init bundles.
Prerequisites
- You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables.
Set the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:
$ export ROX_API_TOKEN=<api_token>
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
Procedure
Run the following command to generate a cluster init bundle containing secrets:
For Helm installations:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output cluster_init_bundle.yaml
For Operator installations:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output-secrets cluster_init_bundle.yaml
Important: Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.
5.3.2. Next steps
- On each Red Hat OpenShift cluster, apply the init bundle by using it to create resources.
5.4. Applying an init bundle for secured clusters
Apply the init bundle by using it to create resources.
You must have the Admin user role to apply an init bundle.
5.4.1. Creating resources by using the init bundle
Before you install secured clusters, you must use the init bundle to create the required resources on the cluster that will allow the services on the secured clusters to communicate with RHACS Cloud Service.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS Cloud Service on secured clusters by using Helm charts" in this chapter.
Prerequisites
- You must have generated an init bundle containing secrets.
Procedure
To create resources, perform one of the following steps:
- In the OpenShift Container Platform web console, in the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create.
- Using the Red Hat OpenShift CLI, run the following command to create the resources (a verification example follows this procedure):
$ oc create -f <init_bundle>.yaml \
  -n <stackrox>
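To confirm that the init bundle was applied, you can list the secrets that it creates. The secret names below (admission-control-tls, collector-tls, sensor-tls) are the ones an init bundle typically contains; verify them against your downloaded bundle:
$ oc get secrets -n stackrox   # expect admission-control-tls, collector-tls, and sensor-tls in the output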
Next Step
- Install RHACS secured cluster services in all clusters that you want to monitor.
5.4.2. Next steps
- On each Red Hat OpenShift cluster, install the RHACS Operator.
5.5. Installing the Operator
Install the RHACS Operator on your secured clusters.
5.5.1. Installing the RHACS Operator for RHACS Cloud Service
Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install the RHACS Operator.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You must be using OpenShift Container Platform 4.10 or later. For more information, see Red Hat Advanced Cluster Security for Kubernetes Support Policy.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page.
- Read the information about the Operator, and then click Install.
On the Install Operator page:
- Keep the default value for Installation mode as All namespaces on the cluster.
- For the Installed namespace field, select a specific namespace in which to install the Operator. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace.
- Select automatic or manual updates for Update approval.
If you select automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator.
If you select manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version.
- Click Install.
Verification
- After the installation completes, navigate to Operators → Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded.
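If you prefer to install the Operator from the command line instead of OperatorHub, you can create the namespace, an OperatorGroup, and a Subscription similar to the following sketch. The channel and catalog source names are assumptions; confirm them with oc get packagemanifests rhacs-operator -n openshift-marketplace before applying:
apiVersion: v1
kind: Namespace
metadata:
  name: rhacs-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhacs-operator-group
  namespace: rhacs-operator
# no targetNamespaces, so the Operator watches all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: rhacs-operator
spec:
  channel: stable                    # assumption: verify the current channel name
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic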
5.5.2. Next steps
- On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project.
5.6. Installing secured cluster resources from RHACS Cloud Service
You can install RHACS Cloud Service on your secured clusters by using the Operator or Helm charts. You can also use the roxctl CLI to install it, but do not use this method unless you have a specific installation need that requires using it.
Prerequisites
- You have created your Red Hat OpenShift cluster and installed the Operator on it.
- In the ACS Console in RHACS Cloud Service, you have created and downloaded the init bundle.
- You applied the init bundle by using the oc create command.
- During installation, you noted the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created.
5.6.1. Installing RHACS on secured clusters by using the Operator
5.6.1.1. Installing secured cluster services
You can install secured cluster services on your clusters by using the SecuredCluster custom resource. You must install the secured cluster services on every cluster in your environment that you want to monitor.
When you install secured cluster services, Collector is also installed. To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
Prerequisites
- If you are using OpenShift Container Platform, you must install version 4.10 or later.
- You have installed the RHACS Operator.
- You have generated an init bundle and applied it to the cluster.
Procedure
- On the OpenShift Container Platform web console, navigate to the Operators → Installed Operators page.
- Click the RHACS Operator.
- Click Secured Cluster from the central navigation menu in the Operator details page.
- Click Create SecuredCluster.
Select one of the following options in the Configure via field:
- Form view: Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
- YAML view: Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create. A sketch of such a file appears after this procedure.
- If you are using Form view, enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services.
- Optional: Add any labels for the cluster.
- Enter a unique name for your SecuredCluster custom resource.
- For Central Endpoint, enter the address and port number of your Central instance. For example, if Central is available at https://central.example.com, then specify the central endpoint as central.example.com:443. The default value of central.stackrox.svc:443 only works when you install secured cluster services and Central in the same cluster. Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster.
  - For RHACS Cloud Service, use the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, then clicking the ACS instance you created.
  - Only if you are installing secured cluster services and Central in the same cluster, use central.stackrox.svc:443.
- Accept the default values or configure custom values if needed. For example, you may need to configure TLS if you are using custom certificates or untrusted CAs.
- Click Create.
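For reference, a minimal SecuredCluster manifest corresponding to the YAML view looks similar to the following sketch; the clusterName and centralEndpoint values shown are placeholders that you must replace with your own:
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: my-secured-cluster                 # display name for this cluster in RHACS
  centralEndpoint: acs-instance.example.com:443   # Central API Endpoint from your ACS instance details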
Next step
- Optional: Configure additional secured cluster settings.
- Verify installation.
5.6.2. Installing RHACS Cloud Service on secured clusters by using Helm charts
You can install RHACS on secured clusters by using Helm charts with no customizations (using the default values) or with customizations of configuration parameters.
First, ensure that you add the Helm chart repository.
5.6.2.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
- Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
Note: Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
5.6.2.2. Installing RHACS Cloud Service on secured clusters by using Helm charts without customizations
5.6.2.2.1. Installing the secured-cluster-services Helm chart without customization
Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, then clicking the ACS instance you created.
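With these prerequisites in place, the installation is typically a single helm install command. The following is a representative sketch that uses the release name stackrox-secured-cluster-services and the stackrox namespace used elsewhere in this chapter; replace the placeholders with your init bundle file, cluster name, and Central API Endpoint:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <name_of_cluster_init_bundle.yaml> \
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<central_api_endpoint>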
5.6.2.3. Configuring the secured-cluster-services Helm chart with customizations
You can use Helm chart configuration parameters with the helm install and helm upgrade commands. Specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
- Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
- Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
When using the secured-cluster-services Helm chart, do not change the values.yaml file that is part of the chart.
5.6.2.3.1. Configuration parameters
| Parameter | Description |
|---|---|
| | Name of your cluster. |
| | Address, including port number, of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
| | Address of the Sensor endpoint including port number. |
| | Image pull policy for the Sensor container. |
| | The internal service-to-service TLS certificate that Sensor uses. |
| | The internal service-to-service TLS certificate key that Sensor uses. |
| | The memory request for the Sensor container. Use this parameter to override the default value. |
| | The CPU request for the Sensor container. Use this parameter to override the default value. |
| | The memory limit for the Sensor container. Use this parameter to override the default value. |
| | The CPU limit for the Sensor container. Use this parameter to override the default value. |
| | Specify a node selector label as |
| | If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
| | The name of the |
| | The name of the Collector image. |
| | Address of the registry you are using for the main image. |
| | Address of the registry you are using for the Collector image. |
| | Image pull policy for |
| | Image pull policy for the Collector images. |
| | Tag of |
| | Tag of |
| | Either |
| | Image pull policy for the Collector container. |
| | Image pull policy for the Compliance container. |
| | If you specify |
| | The memory request for the Collector container. Use this parameter to override the default value. |
| | The CPU request for the Collector container. Use this parameter to override the default value. |
| | The memory limit for the Collector container. Use this parameter to override the default value. |
| | The CPU limit for the Collector container. Use this parameter to override the default value. |
| | The memory request for the Compliance container. Use this parameter to override the default value. |
| | The CPU request for the Compliance container. Use this parameter to override the default value. |
| | The memory limit for the Compliance container. Use this parameter to override the default value. |
| | The CPU limit for the Compliance container. Use this parameter to override the default value. |
| | The internal service-to-service TLS certificate that Collector uses. |
| | The internal service-to-service TLS certificate key that Collector uses. |
| | This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
| | When you set this parameter as |
| | This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
| | This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
| | This setting controls the behavior of the admission control service. You must specify |
| | If you set this option to |
| | Set it to |
| | The maximum time, in seconds, Red Hat Advanced Cluster Security for Kubernetes should wait while evaluating admission review requests. Use this to set request timeouts when you enable image scanning. If the image scan runs longer than the specified time, Red Hat Advanced Cluster Security for Kubernetes accepts the request. |
| | The memory request for the Admission Control container. Use this parameter to override the default value. |
| | The CPU request for the Admission Control container. Use this parameter to override the default value. |
| | The memory limit for the Admission Control container. Use this parameter to override the default value. |
| | The CPU limit for the Admission Control container. Use this parameter to override the default value. |
| | Specify a node selector label as |
| | If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
| | The internal service-to-service TLS certificate that Admission Control uses. |
| | The internal service-to-service TLS certificate key that Admission Control uses. |
| | Use this parameter to override the default |
| | If you specify |
| | Specify |
| | Specify |
| | Specify |
| | Resource specification for Sensor. |
| | Resource specification for Admission controller. |
| | Resource specification for Collector. |
| | Resource specification for Collector’s Compliance container. |
| | If you set this option to |
| | If you set this option to |
| | If you set this option to |
| | If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| | Resource specification for Collector’s Compliance container. |
| | Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
| | If you set this option to |
| | The minimum number of replicas for autoscaling. Defaults to 2. |
| | The maximum number of replicas for autoscaling. Defaults to 5. |
| | Specify a node selector label as |
| | If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
| | Specify a node selector label as |
| | If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| | The memory request for the Scanner container. Use this parameter to override the default value. |
| | The CPU request for the Scanner container. Use this parameter to override the default value. |
| | The memory limit for the Scanner container. Use this parameter to override the default value. |
| | The CPU limit for the Scanner container. Use this parameter to override the default value. |
| | The memory request for the Scanner DB container. Use this parameter to override the default value. |
| | The CPU request for the Scanner DB container. Use this parameter to override the default value. |
| | The memory limit for the Scanner DB container. Use this parameter to override the default value. |
| | The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
| | If you set this option to |
5.6.2.3.1.1. Environment variables
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"
The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
5.6.2.3.2. Installing the secured-cluster-services Helm chart
After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, then clicking the ACS instance you created.
Procedure
Run the helm install command, specifying your init bundle and the values-public.yaml and values-private.yaml configuration files, as shown in the example that follows.
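A representative invocation, assuming the release name stackrox-secured-cluster-services and the stackrox namespace used in the helm upgrade example later in this section:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <name_of_cluster_init_bundle.yaml> \
  -f <path_to_values_public.yaml> \
  -f <path_to_values_private.yaml>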
To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:
$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET")
If you are using base64 encoded variables, use the helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.
5.6.2.4. Changing configuration options after deploying the secured-cluster-services Helm chart
You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.
Procedure
- Update the values-public.yaml and values-private.yaml configuration files with new values.
- Run the helm upgrade command and specify the configuration files using the -f option:
$ helm upgrade -n stackrox \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  --reuse-values \
  -f <path_to_values_public.yaml> \
  -f <path_to_values_private.yaml>
You must specify the --reuse-values parameter, otherwise the Helm upgrade command resets all previously configured settings.
Note: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
5.6.3. Installing RHACS on secured clusters by using the roxctl CLI
To install RHACS on secured clusters by using the CLI, perform the following steps:
- Install the roxctl CLI.
- Install Sensor.
5.6.3.1. Installing the roxctl CLI
You must first download the binary. You can install roxctl on Linux, Windows, or macOS.
5.6.3.1.1. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
Procedure
- Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/4.2.5/bin/Linux/roxctl
- Make the roxctl binary executable:
$ chmod +x roxctl
- Place the roxctl binary in a directory that is on your PATH (see the example after this procedure). To check your PATH, execute the following command:
$ echo $PATH
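One way to place the binary on your PATH, assuming /usr/local/bin is listed in it and that you have sudo access:
$ sudo install -m 0755 roxctl /usr/local/bin/roxctl   # copies the binary with executable permissions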
Verification
Verify the roxctl version you have installed:
$ roxctl version
5.6.3.1.2. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
Procedure
- Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/4.2.5/bin/Darwin/roxctl
- Remove all extended attributes from the binary:
$ xattr -c roxctl
- Make the roxctl binary executable:
$ chmod +x roxctl
- Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
Verification
Verify the roxctl version you have installed:
$ roxctl version
5.6.3.1.3. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
Procedure
- Download the latest version of the roxctl CLI:
$ curl -O https://mirror.openshift.com/pub/rhacs/assets/4.2.5/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:
$ roxctl version
5.6.3.2. Installing Sensor
To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. The following steps describe adding Sensor by using the RHACS portal.
Prerequisites
- You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
Procedure
- On your secured cluster, in the RHACS portal, navigate to Platform Configuration → Clusters.
- Select + New Cluster.
- Specify a name for the cluster.
- Provide appropriate values for the fields based on where you are deploying the Sensor.
- Enter the Central API Endpoint, including the address and the port number. You can view this information again in the Red Hat Hybrid Cloud Console by choosing Advanced Cluster Security → ACS Instances, and then clicking the ACS instance you created.
- Click Next to continue with the Sensor setup.
- Click Download YAML File and Keys to download the cluster bundle (zip archive).
Important: The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.
- From a system that has access to the monitored cluster, unzip and run the sensor script from the cluster bundle:
$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for assistance.
After Sensor is deployed, it contacts Central and provides cluster information.
Verification
Return to the RHACS portal and check whether the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following commands to check for problems:
On OpenShift Container Platform, enter the following command:
$ oc get pod -n stackrox -w
On Kubernetes, enter the following command:
$ kubectl get pod -n stackrox -w
- Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
5.6.4. Next steps
- Verify installation by ensuring that your secured clusters can communicate with the ACS instance.
5.7. Configuring the proxy for secured cluster services in RHACS Cloud Service
You must configure the proxy settings for secured cluster services within the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) environment to establish a connection between the Secured Cluster and the specified proxy server. This ensures reliable data collection and transmission.
5.7.1. Specifying the environment variables in the SecuredCluster CR
To configure an egress proxy, you can either use the cluster-wide Red Hat OpenShift proxy or specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables within the SecuredCluster Custom Resource (CR) configuration file to ensure proper use of the proxy and bypass for internal requests within the specified domain.
The proxy configuration applies to all running services: Sensor, Collector, Admission Controller, and Scanner.
Procedure
Specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables under the customize specification in the SecuredCluster CR configuration file. An example configuration follows this procedure. In that example:
- The variable HTTP_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx. This is the proxy server used for HTTP connections.
- The variable HTTPS_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx. This is the proxy server used for HTTPS connections.
- The variable NO_PROXY is set to .stackrox.svc. This variable is used to define the hostname or IP address that should not be accessed through the proxy server.
5.8. Verifying installation of secured clusters
After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful.
To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations.
If no data appears in the ACS Console:
- Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see Installing secured cluster resources from RHACS Cloud Service.
- Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful.
- In the Red Hat OpenShift cluster, navigate to Platform Configuration → Clusters to verify that the components are healthy and view additional operational information.
- Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console. Example commands for these checks follow this list.
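The following commands can help with these checks. They assume the default stackrox namespace, a Sensor deployment named sensor, and that the SecuredCluster resource is registered as securedclusters.platform.stackrox.io:
$ oc logs -n stackrox deployment/sensor
# review the log output for messages indicating a successful connection to your RHACS Cloud Service instance
$ oc get securedclusters.platform.stackrox.io -n stackrox -o yaml | grep centralEndpoint
# confirm that centralEndpoint matches the Central API Endpoint shown for your ACS instance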