RHACS Cloud Service


Red Hat Advanced Cluster Security for Kubernetes 4.5

About the RHACS Cloud Service

Red Hat OpenShift Documentation Team

Abstract

Guidance on understanding the RHACS Cloud Service.

1.1. Introduction to RHACS

Red Hat Advanced Cluster Security for Kubernetes (RHACS) is an enterprise-ready, Kubernetes-native container security solution that helps you build, deploy, and run cloud-native applications more securely.

Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides Kubernetes-native security as a service. With RHACS Cloud Service, Red Hat maintains, upgrades, and manages your Central services.

Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS.

RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure.

1.2. Architecture

RHACS Cloud Service is hosted on Amazon Web Services (AWS) in two regions, eu-west-1 and us-east-1, and uses the network access points provided by the cloud provider. Each RHACS Cloud Service tenant uses highly available egress proxies and is spread across three availability zones. For more information about RHACS Cloud Service system architecture and components, see Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture.

1.3. Billing

Customers can purchase a RHACS Cloud Service subscription on the Amazon Web Services (AWS) Marketplace. The service cost is charged hourly per secured core, that is, per vCPU of a node belonging to a secured cluster.

Example 1.1. Subscription cost example

If you have established a connection to two secured clusters, each with 5 identical nodes with 8 vCPUs (such as Amazon EC2 m7g.2xlarge), the total number of secured cores is 80 (2 x 5 x 8 = 80).
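
To estimate the number of secured cores for a cluster, you can sum the vCPU capacity that each node reports. The following is a minimal sketch, assuming oc (or kubectl) access to the secured cluster; it is an illustration, not a billing tool:

$ oc get nodes -o jsonpath='{range .items[*]}{.status.capacity.cpu}{"\n"}{end}' \
    | awk '{cores += $1} END {print cores, "secured cores on this cluster"}'

Repeat the command for each secured cluster and add the totals.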

1.4. Security and compliance

All RHACS Cloud Service data in the Central instance is encrypted in transit and at rest. The data is stored in secure storage with full replication and high availability, together with regularly scheduled backups. RHACS Cloud Service is available through cloud data centers that ensure optimal performance and the ability to meet data residency requirements.

Red Hat’s information security guidelines, aligned with the NIST Cybersecurity Framework, are approved by executive management. Red Hat maintains a dedicated team of globally distributed certified information security professionals.

Red Hat has strict internal policies and practices to protect our customers and their businesses. These policies and practices are confidential. In addition, we comply with all applicable laws and regulations, including those related to data privacy.

Red Hat’s information security roles and responsibilities are not managed by third parties.

Red Hat maintains an ISO 27001 certification for our corporate information security management system (ISMS), which governs how all of our people work, our corporate endpoint devices, and our authentication and authorization practices. We have taken a standardized approach to this through the application of the Red Hat Enterprise Security Standard (ESS) to all infrastructure, products, services, and technology that Red Hat employs. A copy of the ESS is available upon request.

RHACS Cloud Service runs on an instance of OpenShift Dedicated hosted on Amazon Web Services (AWS). OpenShift Dedicated is compliant with ISO 27001, ISO 27017, ISO 27018, PCI DSS, SOC 2 Type 2, and HIPAA. Strong processes and security controls are aligned with industry standards to manage information security.

RHACS Cloud Service follows the same security principles, guidelines, processes, and controls defined for OpenShift Dedicated. These certifications demonstrate how our services platform, associated operations, and management practices align with core security requirements. We meet many of these requirements by following solid Secure Software Development Framework (SSDF) practices as defined by NIST, including build pipeline security. SSDF controls are implemented through our Secure Software Management Lifecycle (SSML) for all products and services.

Red Hat’s proven and experienced global site reliability engineering (SRE) team is available 24x7 and proactively manages the cluster life cycle, infrastructure configuration, scaling, maintenance, security patching, and incident response as they relate to the hosted components of RHACS Cloud Service. The Red Hat SRE team is responsible for managing high availability (HA), uptime, backups, restores, and security for the RHACS Cloud Service control plane. RHACS Cloud Service comes with a 99.95% availability SLA and 24x7 Red Hat SRE support by phone or chat.

You are responsible for use of the product, including implementation of policies, vulnerability management, and deployment of secured cluster components within your OpenShift Container Platform environments. The Red Hat SRE team manages the control plane that contains tenant data in line with the compliance frameworks noted previously, including:

  • All Red Hat SREs access the data plane clusters through the backplane, which enables audited access to the cluster.
  • Red Hat SREs only deploy images from the Red Hat registry. All content posted to the Red Hat registry goes through rigorous checks. These images are the same images available to self-managed customers.
  • Each tenant has its own individual mTLS CA, which encrypts data in transit and enables multi-tenant isolation. Additional isolation is provided by SELinux controls, namespaces, and network policies.
  • Each tenant has its own instance of the RDS database.

All Red Hat SREs and developers go through rigorous Secure Development Lifecycle training.


1.4.2. Vulnerability management program

Red Hat scans for vulnerabilities in our products during the build process and our dedicated Product Security team tracks and assesses newly-discovered vulnerabilities. Red Hat Information Security regularly scans running environments for vulnerabilities.

Qualified critical and important Security Advisories (RHSAs) and urgent and selected high-priority Bug Fix Advisories (RHBAs) are released as they become available. All other available fixes and qualified patches are released through periodic updates. All RHACS Cloud Service software impacted by critical or important severity flaws is updated as soon as the fix is available. For more information about remediation of critical or high-priority issues, see Understanding Red Hat’s Product Security Incident Response Plan.

1.4.3. Security exams and audits

RHACS Cloud Service does not currently hold any external security certifications or attestations.

The Red Hat Information Risk and Security Team has achieved ISO 27001:2013 certification for our Information Security Management System (ISMS).

1.4.4. Systems interoperability security

RHACS Cloud Service supports integrations with registries, CI systems, notification systems, workflow systems such as ServiceNow and Jira, and security information and event management (SIEM) platforms. For more information about supported integrations, see the Integrating documentation. Custom integrations can be implemented by using the API or generic webhooks.
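
As an illustration of a custom integration, the following sketch queries the RHACS alerts API with curl. It assumes you have created an API token and exported the ROX_API_TOKEN and ROX_CENTRAL_ADDRESS environment variables for your instance:

$ curl -sS -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_CENTRAL_ADDRESS/v1/alerts?pagination.limit=10"

The same pattern applies to other v1 API endpoints; generic webhook integrations instead push notifications from RHACS to an HTTP endpoint that you host.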

RHACS Cloud Service uses a certificate-based architecture (mTLS) for both authentication and end-to-end encryption of all in-flight traffic between the customer’s site and Red Hat. It does not require a VPN, and IP allowlists are not supported. Data transfer is encrypted by using mTLS. File transfer, including Secure FTP, is not supported.

1.4.5. Malicious code prevention

RHACS Cloud Service is deployed on Red Hat Enterprise Linux CoreOS (RHCOS). The user space in RHCOS is read-only. In addition, all RHACS Cloud Service instances are monitored at runtime by RHACS. Red Hat uses a commercially available, enterprise-grade anti-virus solution for Windows and Mac platforms, which is centrally managed and logged. Anti-virus solutions on Linux-based platforms are not part of Red Hat’s strategy, because they can introduce additional vulnerabilities. Instead, we harden the platform and rely on built-in tooling (for example, SELinux) to protect it.

Red Hat uses SentinelOne and osquery for individual endpoint security, with updates made as they are available from the vendor.

All third-party JavaScript libraries are downloaded and included in build images which are scanned for vulnerabilities before being published.

1.4.6. Systems development lifecycle security

Red Hat follows secure development lifecycle practices. Red Hat Product Security practices are aligned with the Open Web Application Security Project (OWASP) and ISO/IEC 12207:2017 wherever feasible. Red Hat covers OWASP project recommendations along with other secure software development practices to increase the general security posture of our products. OWASP project analysis is included in Red Hat’s automated scanning, security testing, and threat models, because the OWASP project is built on selected CWE weaknesses. Red Hat monitors weaknesses in our products to address issues before they are exploited and become vulnerabilities.


Applications are scanned regularly and the container scan results of the product are available publicly. For example, on the Red Hat Ecosystem Catalog site, you can select a component image such as rhacs-main and click the Security tab to see the health index and the status of security updates.

As part of Red Hat’s policy, a support policy and maintenance plan is issued for any third-party components we depend on that reach end of life.

1.4.7. Software Bill of Materials

Red Hat has published software bill of materials (SBOM) files for core Red Hat offerings. An SBOM is a machine-readable, comprehensive inventory (manifest) of software components and dependencies with license and provenance information. SBOM files help establish reviews for procurement and audits of what is in a set of software applications and libraries. Combined with Vulnerability Exploitability eXchange (VEX) data, SBOMs help an organization address its vulnerability risk assessment process. Together, they provide information about where a potential risk might exist (where the vulnerable artifact is included, and how that artifact correlates with components or the product) and its current status with respect to known vulnerabilities or exploits.

Red Hat, together with other vendors, is working to define the specific requirements for publishing useful SBOMs that can be correlated with Common Security Advisory Framework (CSAF)-VEX files, and inform consumers and partners about how to use this data. For now, SBOM files published by Red Hat, including SBOMs for RHACS Cloud Service, are considered to be beta versions for customer testing and are available at https://access.redhat.com/security/data/sbom/beta/spdx/.
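
For example, you can download one of the beta SPDX files and list the packages it manifests. The file name below is hypothetical; browse the index at the URL above for the actual artifacts:

$ curl -sO https://access.redhat.com/security/data/sbom/beta/spdx/<product>.json.bz2   # hypothetical file name
$ bunzip2 -c <product>.json.bz2 | jq -r '.packages[].name' | head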

For more detail on Red Hat’s Security data, see The future of Red Hat security data.

1.4.8. Data centers and providers

The following third-party providers are used by Red Hat in providing subscription support services:

  • Flexential hosts the Raleigh Data Center, which is the primary data center used to support the Red Hat Customer Portal databases.
  • Digital Realty hosts the Phoenix Data Center, which is the secondary backup data center supporting the Red Hat Customer Portal databases.
  • Salesforce provides the engine behind the customer ticketing system.
  • AWS is used to augment data center infrastructure capacity, some of which is used to support the Red Hat Customer Portal application.
  • Akamai is used to host the Web Application Firewall and provide DDoS protection.
  • Iron Mountain is used to handle the destruction of sensitive material.

1.5. Access control

User accounts are managed with role-based access control (RBAC). See Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes for more information. Red Hat site reliability engineers (SREs) have access to Central instances. Access is controlled with OpenShift RBAC. Credentials are instantly revoked upon termination.

1.5.1. Authentication provider

When you create a Central instance using Red Hat Hybrid Cloud Console, authentication for the cluster administrator is configured as part of the process. Customers must manage all access to the Central instance as part of their integrated solution. For more information about the available authentication methods, see Understanding authentication providers.

The default identity provider in RHACS Cloud Service is Red Hat Single Sign-On (SSO). Authorization rules are set up to provide administrator access to the user who created the RHACS Cloud Service and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be enabled temporarily by SREs. For more information about authentication using Red Hat SSO, see Default access to the ACS Console.

1.5.2. Password management

Red Hat’s password policy requires the use of a complex password. Passwords must contain at least 14 characters and at least three of the following character classes:

  • Base 10 digits (0 to 9)
  • Upper case characters (A to Z)
  • Lower case characters (a to z)
  • Punctuation, spaces, and other characters

Most systems require two-factor authentication.

Red Hat follows best password practices according to NIST guidelines.
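
As an informal illustration of the rule, the following shell sketch counts the character classes present in a candidate password. It is not a Red Hat tool, and the password value is a placeholder:

$ password='<candidate_password>'                             # placeholder value
$ classes=0
$ [[ $password =~ [0-9] ]] && classes=$((classes+1))          # base 10 digits
$ [[ $password =~ [A-Z] ]] && classes=$((classes+1))          # upper case characters
$ [[ $password =~ [a-z] ]] && classes=$((classes+1))          # lower case characters
$ [[ $password =~ [^0-9A-Za-z] ]] && classes=$((classes+1))   # punctuation, spaces, other
$ [[ ${#password} -ge 14 && $classes -ge 3 ]] && echo compliant || echo non-compliant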

1.5.3. Remote access

Access for remote support and troubleshooting is strictly controlled through implementation of the following guidelines:

  • Strong two-factor authentication for VPN access
  • A segregated network with management and administrative networks requiring additional authentication through a bastion host
  • All access and management is performed over encrypted sessions

Our customer support team offers Bomgar as a remote access solution for troubleshooting. Bomgar sessions are optional, must be initiated by the customer, and can be monitored and controlled.

To prevent information leakage, logs are shipped to SRE through our security information and event management (SIEM) application, Splunk.

1.5.4. Regulatory compliance

For the latest regulatory compliance information, see Understanding process and security for OpenShift Dedicated.

1.6. Data protection

Red Hat provides data protection by using various methods, such as logging, access control, and encryption.

1.6.1. Data storage media protection

To protect our data and client data from risk of theft or destruction, Red Hat employs the following methods:

  • Access logging
  • Automated account termination procedures
  • Application of the principle of least privilege

Data is encrypted in transit and at rest using strong data encryption following NIST guidelines and Federal Information Processing Standards (FIPS) where possible and practical. This includes backup systems.

RHACS Cloud Service encrypts data at rest within the Amazon Relational Database Service (RDS) database by using AWS-managed Key Management Service (KMS) keys. All data between the application and the database, together with data exchanged between the systems, is encrypted in transit.

1.6.1.1. Data retention and destruction

Records, including those containing personal data, are retained as required by law. Records not required by law or by a reasonable business need are securely removed. Secure data destruction requirements, including the use of military-grade tools, are included in operating procedures. In addition, staff have access to secure document destruction facilities.

1.6.1.2. Encryption

Red Hat uses AWS-managed keys, which are rotated by AWS each year. For information about the use of keys, see AWS KMS key management. For more information about RDS, see Amazon RDS Security.

1.6.1.3. Multi-tenancy

RHACS Cloud Service isolates tenants by namespace on OpenShift Container Platform. SELinux provides additional isolation. Each customer has a unique RDS instance.

1.6.1.4. Data ownership

Customer data is stored in an encrypted RDS database not available on the public internet. Only Site Reliability Engineers (SREs) have access to it, and the access is audited.

Every RHACS Cloud Service system comes integrated with Red Hat external SSO. Authorization rules are set up to provide administrator access to the user who created the RHACS Cloud Service instance and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be temporarily enabled by SREs.

Red Hat collects information about the number of secured clusters connected to RHACS Cloud Service and the usage of features. Metadata generated by the application and stored in the RDS database is owned by the customer. Red Hat only accesses data for troubleshooting purposes and with customer permission. Red Hat access requires audited privilege escalation.

Upon contract termination, Red Hat can perform a secure disk wipe upon request. However, we are unable to physically destroy media (cloud providers such as AWS do not provide this option).

To secure data in case of a breach, you can perform the following actions:

  • Disconnect all secured clusters from RHACS Cloud Service immediately using the cluster management page.
  • Immediately disable access to the RHACS Cloud Service by using the Access Control page.
  • Immediately delete your RHACS instance, which also deletes the RDS instance.

Any AWS RDS (data store) specific access modifications are implemented by the RHACS Cloud Service SREs.

1.7. Metrics and Logging

1.7.1. Service metrics

Service metrics are internal only. Red Hat provides and maintains the service at the agreed-upon level. Service metrics are accessible only to authorized Red Hat personnel. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.

1.7.2. Customer metrics

Core usage capacity metrics are available either through Subscription Watch or the Subscriptions page.

1.7.3. Service logging

System logs for all components of the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are internal and available only to Red Hat personnel. Red Hat does not provide user access to component logs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.

1.8. Updates and Upgrades

Red Hat makes a commercially reasonable effort to notify customers prior to updates and upgrades that impact service. The decision regarding the need for a Service update to the Central instance and its timing is the sole responsibility of Red Hat.

Customers have no control over when a Central service update occurs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES. Upgrades to the version of Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are considered part of the service update. Upgrades are transparent to the customer and no connection to any update site is required.

Customers are responsible for timely RHACS Secured Cluster services upgrades that are required to maintain compatibility with RHACS Cloud Service.

Red Hat recommends enabling automatic upgrades for Secured Clusters that are connected to RHACS Cloud Service.

See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information about upgrade versions.

1.9. Availability

Availability and disaster avoidance are extremely important aspects of any security platform. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides numerous protections against failures at multiple levels. To account for possible cloud provider failures, Red Hat established multiple availability zones.

1.9.1. Backup and disaster recovery

The RHACS Cloud Service disaster recovery strategy includes backups of the database and any customizations. This also applies to customer data stored in the Central database. Recovery time varies based on the number of appliances and database sizes; however, because the appliances can be clustered and distributed, the recovery time objective (RTO) can be reduced up front with proper architecture planning.

All snapshots are created using the appropriate cloud provider snapshot APIs, encrypted and then uploaded to secure object storage, which for Amazon Web Services (AWS) is an S3 bucket.

  • Red Hat does not commit to a recovery point objective (RPO) or an RTO. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
  • Site Reliability Engineering performs backups only as a precautionary measure. The backups are stored in the same region as the cluster.
  • Customers should deploy multiple availability zone Secured Clusters with workloads that follow Kubernetes best practices to ensure high availability within a region.

Disaster recovery plans are exercised at least annually. A business continuity management standard and guideline is in place so that the business continuity lifecycle is consistently followed throughout the organization. This policy includes a requirement for testing at least annually, or upon major change of functional plans. Review sessions are required after any plan exercise or activation, and plan updates are made as needed.

Red Hat has generator backup systems. Our IT production systems are hosted in a Tier 3 data center facility that undergoes recurring testing to ensure redundancy is operational. The facility is audited yearly to validate compliance.

1.10. Getting support for RHACS Cloud Service

If you experience difficulty with a procedure described in this documentation, or with RHACS Cloud Service in general, visit the Red Hat Customer Portal.

From the Customer Portal, you can perform the following actions:

  • Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
  • Submit a support case to Red Hat Support.
  • Access other product documentation.

To identify issues with your cluster, you can use Insights in RHACS Cloud Service. Insights provides details about issues and, if available, information on how to solve a problem.

1.11. Service removal

You can delete RHACS Cloud Service by using the default delete operations from the Red Hat Hybrid Cloud Console. Deleting the RHACS Cloud Service Central instance automatically removes all RHACS components. Deletion is not reversible.

1.12. Pricing

For information about subscription fees, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.

1.13. Service Level Agreement

For more information about the Service Level Agreements (SLAs) offered for Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.

This documentation outlines Red Hat and customer responsibilities for the RHACS Cloud Service managed service.

While Red Hat manages the RHACS Cloud Service services, also referred to as Central services, the customer has certain responsibilities.

Resource or action: Hosted components, also called Central components

  Red Hat responsibility:

  • Platform monitoring
  • Software updates
  • High availability
  • Backup and restore
  • Security
  • Infrastructure configuration
  • Scaling
  • Maintenance
  • Vulnerability management

  Customer responsibility:

  • Access and identity authorization

Resource or action: Secured clusters (on-premise or cloud)

  Red Hat responsibility: None

  Customer responsibility:

  • Software updates
  • Backup and restore
  • Security
  • Infrastructure configuration
  • Scaling
  • Maintenance
  • Access and identity authorization
  • Vulnerability management

Discover Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture and concepts.

Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) is a Red Hat managed Software-as-a-Service (SaaS) platform that lets you protect your Kubernetes and OpenShift Container Platform clusters and applications throughout the build, deploy, and runtime lifecycles.

RHACS Cloud Service includes many built-in DevOps enforcement controls and security-focused best practices based on industry standards such as the Center for Internet Security (CIS) benchmarks and the National Institute of Standards Technology (NIST) guidelines. You can also integrate it with your existing DevOps tools and workflows to improve security and compliance.

RHACS Cloud Service architecture

The following graphic shows the architecture with the StackRox Scanner and Scanner V4. Installation of Scanner V4 is optional, but provides additional benefits.


3.2. Central

Red Hat manages Central, the control plane for RHACS Cloud Service. These services include the following components:

  • Central: Central is the RHACS application management interface and services. It handles API interactions and user interface (RHACS Portal) access.
  • Central DB: Central DB is the database for RHACS and handles all data persistence. It is currently based on PostgreSQL 13.
  • Scanner V4: Beginning with version 4.4, RHACS contains the Scanner V4 vulnerability scanner for scanning container images. Scanner V4 is built on ClairCore, which also powers the Clair scanner. Scanner V4 includes the Indexer, Matcher, and Scanner V4 DB components, which are used in scanning.
  • StackRox Scanner: The StackRox Scanner is the default scanner in RHACS. The StackRox Scanner originates from a fork of the Clair v2 open source scanner.
  • Scanner-DB: This database contains data for the StackRox Scanner.

RHACS scanners analyze each image layer to determine the base operating system and identify programming language packages and packages that were installed by the operating system package manager. They match the findings against known vulnerabilities from various vulnerability sources. In addition, the StackRox Scanner identifies vulnerabilities in the node’s operating system and platform. These capabilities are planned for Scanner V4 in a future release.
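
For example, you can request a scan of a single image and review the findings by using the roxctl CLI. This is a sketch; the image reference is a placeholder, and the ROX_API_TOKEN and ROX_CENTRAL_ADDRESS environment variables must be set as described later in this document:

$ roxctl -e "$ROX_CENTRAL_ADDRESS" image scan --image=<registry>/<image>:<tag>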

3.2.1. Vulnerability sources

RHACS uses the following vulnerability sources:

The Scanner V4 Indexer uses the following files to index Red Hat containers:

3.3. Secured cluster services

You install the secured cluster services on each cluster that you want to secure by using the Red Hat Advanced Cluster Security Cloud Service. Secured cluster services include the following components:

  • Sensor: Sensor is the service responsible for analyzing and monitoring the cluster. Sensor listens to the OpenShift Container Platform or Kubernetes API and Collector events to report the current state of the cluster. Sensor also triggers deploy-time and runtime violations based on RHACS Cloud Service policies. In addition, Sensor is responsible for all cluster interactions, such as applying network policies, initiating reprocessing of RHACS Cloud Service policies, and interacting with the Admission controller.
  • Admission controller: The Admission controller prevents users from creating workloads that violate security policies in RHACS Cloud Service.
  • Collector: Collector analyzes and monitors container activity on cluster nodes. It collects container runtime and network activity information and sends the collected data to Sensor.
  • StackRox Scanner: In Kubernetes, the secured cluster services include Scanner-slim as an optional component. However, on OpenShift Container Platform, RHACS Cloud Service installs a Scanner-slim version on each secured cluster to scan images in the OpenShift Container Platform integrated registry and optionally other registries.
  • Scanner-DB: This database contains data for the StackRox Scanner.
  • Scanner V4: Scanner V4 components are installed on the secured cluster if enabled.

    • Scanner V4 Indexer: The Scanner V4 Indexer performs image indexing, previously known as image analysis. Given an image and registry credentials, the Indexer pulls the image from the registry. It finds the base operating system, if it exists, and looks for packages. It stores and outputs an index report, which contains the findings for the given image.
    • Scanner V4 DB: This component is installed if Scanner V4 is enabled. This database stores information for Scanner V4, including index reports. For best performance, configure a persistent volume claim (PVC) for Scanner V4 DB.

      Note

      When secured cluster services are installed on the same cluster as Central services and installed in the same namespace, secured cluster services do not deploy Scanner V4 components. Instead, it is assumed that Central services already include a deployment of Scanner V4.

3.4. Data access and permissions

Red Hat does not have access to the clusters on which you install the secured cluster services. Also, RHACS Cloud Service does not need permission to access the secured clusters. For example, you do not need to create new IAM policies, access roles, or API tokens.

However, RHACS Cloud Service stores the data that secured cluster services send. All data is encrypted within RHACS Cloud Service. Encrypting the data within the RHACS Cloud Service platform helps to ensure the confidentiality and integrity of the data.

When you install secured cluster services on a cluster, they generate data and transmit it to RHACS Cloud Service. This data is kept secure within the RHACS Cloud Service platform, and only authorized SRE team members and systems can access it. RHACS Cloud Service uses this data to monitor the security and compliance of your cluster and applications, and to provide valuable insights and analytics that can help you optimize your deployments.

Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides security services for your Red Hat OpenShift and Kubernetes clusters. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information on supported platforms for secured clusters.

Prerequisites

  • Ensure that you can access the Advanced Cluster Security menu option from the Red Hat Hybrid Cloud Console.

    Note

    To access the RHACS Cloud Service console, you need your Red Hat Single Sign-On (SSO) credentials, or credentials for another identity provider if that has been configured. See Default access to the ACS console.

4.1. High-level overview of installation steps

The following sections provide an overview of installation steps and links to the relevant documentation.

4.1.1. Securing Red Hat OpenShift clusters

To secure Red Hat OpenShift clusters by using the Operator, perform the following steps:

  1. Verify that the clusters you want to secure meet the requirements.
  2. In the Red Hat Hybrid Cloud Console, create an ACS Instance.
  3. On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
  4. In the ACS Console, create an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and the ACS Console.
  5. On each Red Hat OpenShift cluster, apply the init bundle by using it to create resources.
  6. On each Red Hat OpenShift cluster, install the RHACS Operator.
  7. On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project by using the Operator.
  8. Verify installation by ensuring that your secured clusters can communicate with the ACS instance.

To secure Red Hat OpenShift clusters by using Helm charts or the roxctl CLI, perform the following steps:

  1. Verify that the clusters you want to secure meet the requirements.
  2. In the Red Hat Hybrid Cloud Console, create an ACS Instance.
  3. On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
  4. In the ACS Console, create an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and the ACS Console.
  5. On each Red Hat OpenShift cluster, apply the init bundle by using it to create resources.
  6. On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project by using Helm charts or by using the roxctl CLI.
  7. Verify installation by ensuring that your secured clusters can communicate with the ACS instance.

4.1.2. Securing Kubernetes clusters

To secure Kubernetes clusters, perform the following steps:

  1. Verify that the clusters you want to secure meet the requirements.
  2. In the Red Hat Hybrid Cloud Console, create an ACS Instance.
  3. In the ACS Console, create an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and the ACS Console.
  4. On each Kubernetes cluster, apply the init bundle by using it to create resources.
  5. On each Kubernetes cluster, install secured cluster resources by using Helm charts or the roxctl CLI.
  6. Verify installation by ensuring that your secured clusters can communicate with the ACS instance.

4.2. Default access to the ACS Console

By default, the authentication mechanism available to users is authentication by using Red Hat Single Sign-On (SSO). You cannot delete or change the Red Hat SSO authentication provider. However, you can change the minimum access role and add additional rules, or add another identity provider.

Note

To learn how authentication providers work in ACS, see Understanding authentication providers.

A dedicated OIDC client of sso.redhat.com is created for each ACS Console. All OIDC clients share the same sso.redhat.com realm. Claims from the token issued by sso.redhat.com are mapped to an ACS-issued token as follows:

  • realm_access.roles to groups
  • org_id to rh_org_id
  • is_org_admin to rh_is_org_admin
  • sub to userid

The built-in Red Hat SSO authentication provider has the required attribute rh_org_id set to the organization ID assigned to the account of the user who created the RHACS Cloud Service instance. This is the ID of the organizational account that the user belongs to, and it can be thought of as the "tenant" that the user is under and owned by. Only users with the same organizational account can access the ACS Console by using the Red Hat SSO authentication provider.

Note

To gain more control over access to your ACS Console, configure another identity provider instead of relying on the Red Hat SSO authentication provider. For more information, see Understanding authentication providers. To make the other authentication provider the first authentication option on the login page, give it a name that sorts lexicographically before "Red Hat SSO".

The minimum access role is set to None. Assigning a different value to this field gives all users with the same organizational account access to the RHACS Cloud Service instance.

Other rules that are set up in the built-in Red Hat SSO authentication provider include the following:

  • Rule mapping your userid to Admin
  • Rules mapping administrators of the organization to Admin

You can add more rules to grant access to the ACS Console to someone else with the same organizational account. For example, you can use email as a key.

5.1. General requirements for RHACS Cloud Service

Before you can install Red Hat Advanced Cluster Security Cloud Service, your system must meet several requirements.

Warning

You must not install RHACS Cloud Service on:

  • Amazon Elastic File System (Amazon EFS). Use the Amazon Elastic Block Store (Amazon EBS) with the default gp2 volume type instead.
  • Older CPUs that do not have the Streaming SIMD Extensions (SSE) 4.2 instruction set. For example, Intel processors older than Sandy Bridge and AMD processors older than Bulldozer. These processors were released in 2011.

To install RHACS Cloud Service, you must have one of the following systems:

  • OpenShift Container Platform version 4.11 or later, and cluster nodes with a supported operating system of Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL)
  • A supported managed Kubernetes platform, and cluster nodes with a supported operating system of Amazon Linux, CentOS, Container-Optimized OS from Google, Red Hat Enterprise Linux CoreOS (RHCOS), Debian, Red Hat Enterprise Linux (RHEL), or Ubuntu

    For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix.

The following minimum requirements and suggestions apply to cluster nodes.

Architecture

Supported architectures are amd64, ppc64le, and s390x.

Note

Secured cluster services are supported on IBM Power (ppc64le), IBM Z (s390x), and IBM® LinuxONE (s390x) clusters.

Processor
3 CPU cores are required.
Memory

6 GiB of RAM is required.

Note

See the default memory and CPU requirements for each component and ensure that the node size can support them.

Storage

For RHACS Cloud Service, a persistent volume claim (PVC) is not required. However, a PVC is strongly recommended if you have secured clusters with Scanner V4 enabled. Use Solid-State Drives (SSDs) for best performance. However, you can use another storage type if you do not have SSDs available.

Important

You must not use Ceph FS storage with RHACS Cloud Service. Red Hat recommends using RBD block mode PVCs for RHACS Cloud Service.

If you plan to install RHACS Cloud Service by using Helm charts, you must meet the following requirements:

  • You must have Helm command-line interface (CLI) v3.2 or newer if you are installing or configuring RHACS Cloud Service by using Helm charts. Use the helm version command to verify the version of Helm you have installed.
  • You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.

5.2. Secured cluster services

Secured cluster services contain the following components:

  • Sensor
  • Admission controller
  • Collector
  • Scanner (optional)
  • Scanner V4 (optional)

If you use a web proxy or firewall, you must ensure that secured clusters and Central can communicate on HTTPS port 443.

5.2.1. Sensor

Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with the other Red Hat Advanced Cluster Security for Kubernetes components.

CPU and memory requirements

The following table lists the minimum CPU and memory values required to install and run Sensor on secured clusters.

Sensor     CPU       Memory
Request    2 cores   4 GiB
Limit      4 cores   8 GiB

5.2.2. Admission controller

The Admission controller prevents users from creating workloads that violate policies you configure.

CPU and memory requirements

By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica.

Admission controller   CPU          Memory
Request                0.05 cores   100 MiB
Limit                  0.5 cores    500 MiB

5.2.3. Collector

Collector monitors runtime activity on each node in your secured clusters and runs as a DaemonSet. It connects to Sensor to report this information. The collector pod has three containers. The first container is collector, which monitors and reports the runtime activity on the node. The other two are compliance and node-inventory.

Collection requirements

To use the CORE_BPF collection method, the base kernel must support BTF, and the BTF file must be available to collector. In general, the kernel version must be later than 5.8 (4.18 for RHEL nodes) and the CONFIG_DEBUG_INFO_BTF configuration option must be set.

Collector looks for the BTF file in the standard locations shown in the following list:

Example 5.1. BTF file locations

/sys/kernel/btf/vmlinux
/boot/vmlinux-<kernel-version>
/lib/modules/<kernel-version>/vmlinux-<kernel-version>
/lib/modules/<kernel-version>/build/vmlinux
/usr/lib/modules/<kernel-version>/kernel/vmlinux
/usr/lib/debug/boot/vmlinux-<kernel-version>
/usr/lib/debug/boot/vmlinux-<kernel-version>.debug
/usr/lib/debug/lib/modules/<kernel-version>/vmlinux

If any of these files exists, it is likely that the kernel has BTF support and CORE_BPF is configurable.
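
To check a node directly, you can look for a BTF file and the kernel configuration option. This is a minimal sketch, assuming shell access to the node and a kernel configuration exposed at /boot/config-<kernel-version>:

$ ls /sys/kernel/btf/vmlinux 2>/dev/null && echo "BTF file present"
$ grep CONFIG_DEBUG_INFO_BTF=y /boot/config-$(uname -r) && echo "kernel built with BTF support"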

CPU and memory requirements

By default, the collector pod runs 3 containers. The following tables list the request and limits for each container and the total for each collector pod.

Collector container
Type       CPU          Memory
Request    0.06 cores   320 MiB
Limit      0.9 cores    1000 MiB

Compliance container
Type       CPU          Memory
Request    0.01 cores   10 MiB
Limit      1 core       2000 MiB

Node-inventory container
Type       CPU          Memory
Request    0.01 cores   10 MiB
Limit      1 core       500 MiB

Total collector pod requirements
Type       CPU          Memory
Request    0.07 cores   340 MiB
Limit      2.75 cores   3500 MiB

5.2.4. Scanner

CPU and memory requirements

The requirements in this table are based on the default of 3 replicas.

StackRox Scanner   CPU       Memory
Request            3 cores   4500 MiB
Limit              6 cores   12 GiB

The StackRox Scanner requires Scanner DB (PostgreSQL 15) to store data. The following table lists the minimum CPU and memory values required to install and run Scanner DB.

Scanner DB   CPU         Memory
Request      0.2 cores   512 MiB
Limit        2 cores     4 GiB

5.2.5. Scanner V4

Scanner V4 is optional. If Scanner V4 is installed on secured clusters, the following requirements apply.

CPU, memory, and storage requirements
Scanner V4 Indexer

The requirements in this table are based on the default of 2 replicas.

Scanner V4 Indexer   CPU       Memory
Request              2 cores   3000 MiB
Limit                4 cores   6 GiB

Scanner V4 DB

Scanner V4 requires Scanner V4 DB (PostgreSQL 15) to store data. The following table lists the minimum CPU, memory, and storage values required to install and run Scanner V4 DB. For Scanner V4 DB, a PVC is not required, but it is strongly recommended because it ensures optimal performance.

Scanner V4 DB   CPU         Memory   Storage
Request         0.2 cores   2 GiB    10 GiB
Limit           2 cores     4 GiB    10 GiB

Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters.

7.1.1. Creating an instance in the console

In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters.

Procedure

To create an ACS instance:

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. From the navigation menu, select Advanced Cluster Security → ACS Instances.
  3. Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list:

    • Name: Enter the name of your ACS instance. An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance.
    • Cloud provider: The cloud provider where Central is located. Select AWS.
    • Cloud region: The region for your cloud provider where Central is located. Select one of the following regions:

      • US-East, N. Virginia
      • Europe, Ireland
    • Availability zones: Use the default value (Multi).
  4. Click Create instance.

7.1.2. Next steps

  • On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.

Create a project on each Red Hat OpenShift cluster that you want to secure. You then use this project to install RHACS Cloud Service resources by using the Operator or Helm charts.

7.2.1. Creating a project on your cluster

Procedure

  • In your OpenShift Container Platform cluster, go to Home → Projects and create a project for RHACS Cloud Service. Use stackrox as the project Name.
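
If you prefer the command line, you can create the same project by using the OpenShift CLI:

$ oc new-project stackrox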

7.2.2. Next steps

  • In the ACS Console, create an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and the ACS Console.

Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources.

Note

You must have the Admin user role to create an init bundle.

7.3.1. Generating an init bundle

You can create an init bundle containing secrets by using the RHACS portal.


Procedure

  1. Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
  2. Log in to the RHACS portal.
  3. If you do not have secured clusters, the Platform Configuration → Clusters page appears.
  4. Click Create init bundle.
  5. Enter a name for the cluster init bundle.
  6. Select your platform.
  7. Select the installation method you will use for your secured clusters: Operator or Helm chart.
  8. Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.

    Important

    Store this bundle securely because it contains secrets.

  9. Apply the init bundle by using it to create resources on the secured cluster.
  10. Install secured cluster services on each cluster.

You can create an init bundle with secrets by using the roxctl CLI.

Note

You must have the Admin user role to create init bundles.

Prerequisites

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

    1. Set the ROX_API_TOKEN by running the following command:

      $ export ROX_API_TOKEN=<api_token>
    2. Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:

      $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
Important

In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.

Procedure

  • To generate a cluster init bundle containing secrets for Helm installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate <cluster_init_bundle_name> \
      --output cluster_init_bundle.yaml
  • To generate a cluster init bundle containing secrets for Operator installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate <cluster_init_bundle_name> \
      --output-secrets cluster_init_bundle.yaml
    Important

    Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.

7.3.2. Next steps

7.4. Applying an init bundle for secured clusters

Apply the init bundle by using it to create resources.

Note

You must have the Admin user role to apply an init bundle.

Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service.

Note

If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.

Prerequisites

  • You must have generated an init bundle containing secrets.
  • You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.

Procedure

To create resources, perform only one of the following steps:

  • Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.
  • Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources:

    $ oc create -f <init_bundle>.yaml -n <stackrox>

    In this command, replace <init_bundle>.yaml with the file name of the init bundle that contains the secrets, and replace <stackrox> with the name of the project where the secured cluster services will be installed.

Verification

  • Restart Sensor to pick up the new certificates.

    For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section.
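
One common way to restart Sensor is to delete the Sensor pod so that it is recreated with the new certificates. The following is a sketch; the label selector is an assumption, so confirm it against your deployment:

$ oc -n stackrox delete pod -l app=sensor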

7.4.2. Next steps

  • On each Red Hat OpenShift cluster, install the RHACS Operator.
  • Install RHACS secured cluster services in all clusters that you want to monitor.

7.5. Installing the Operator

Install the RHACS Operator on your secured clusters.

Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install the RHACS Operator.

Prerequisites

Procedure

  1. In the web console, go to the Operators → OperatorHub page.
  2. If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator.
  3. Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page.
  4. Read the information about the Operator, and then click Install.
  5. On the Install Operator page:

    • Keep the default value for Installation mode as All namespaces on the cluster.
    • For the Installed namespace field, select a specific namespace in which to install the Operator. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace.
    • Select automatic or manual updates for Update approval.

      If you select automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator.

      If you select manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version.

      Red Hat recommends enabling automatic upgrades for the Operator in RHACS Cloud Service. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information.

  6. Click Install.

Verification

  • After the installation completes, go to Operators → Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded.

7.5.2. Next steps

You can install RHACS Cloud Service on your secured clusters by using the Operator or Helm charts. You can also use the roxctl CLI to install it, but do not use this method unless you have a specific installation need that requires using it.

Prerequisites

  • You have created your Red Hat OpenShift cluster and installed the Operator on it.
  • In the ACS Console in RHACS Cloud Service, you have created and downloaded the init bundle.
  • You applied the init bundle by using the oc create command.
  • During installation, you noted the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created.

7.6.1.1. Installing secured cluster services

You can install Secured Cluster services on your clusters by using the Operator, which creates the SecuredCluster custom resource. You must install the Secured Cluster services on every cluster in your environment that you want to monitor.

Important

When you install Red Hat Advanced Cluster Security for Kubernetes:

  • If you are installing RHACS for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates.
  • Do not install SecuredCluster in projects whose names start with kube, openshift, or redhat, or in the istio-system project.
  • If you are installing RHACS SecuredCluster custom resource on a cluster that also hosts Central, ensure that you install it in the same namespace as Central.
  • If you are installing Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource on a cluster that does not host Central, Red Hat recommends that you install the Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource in its own project and not in the project in which you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator.

Prerequisites

  • If you are using OpenShift Container Platform, you must install version 4.11 or later.
  • You have installed the RHACS Operator on the cluster that you want to secure, called the secured cluster.
  • You have generated an init bundle and applied it to the cluster.

Procedure

  1. On the OpenShift Container Platform web console for the secured cluster, go to the Operators → Installed Operators page.
  2. Click the RHACS Operator.
  3. If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator. Select Project: rhacs-operator → Create project.

    Note
    • If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator.
  4. Enter the new project name (for example, stackrox), and click Create. Red Hat recommends that you use stackrox as the project name.
  5. Click Secured Cluster from the central navigation menu in the Operator details page.
  6. Click Create SecuredCluster.
  7. Select one of the following options in the Configure via field:

    • Form view: Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
    • YAML view: Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create. A minimal manifest sketch follows this procedure.
  8. If you are using Form view, enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services.
  9. Optional: Add any labels for the cluster.
  10. Enter a unique name for your SecuredCluster custom resource.
  11. For Central Endpoint, enter the address of your Central instance. For example, if Central is available at https://central.example.com, then specify the central endpoint as central.example.com.

    • For RHACS Cloud Service, use the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
    • Use the default value of central.stackrox.svc:443 only if you are installing secured cluster services in the same cluster where Central is installed.
    • Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster.
  12. For the remaining fields, accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs. See "Configuring Secured Cluster services options for RHACS using the Operator" for more information.
  13. Click Create.
  14. After a brief pause, the SecuredClusters page displays the status of stackrox-secured-cluster-services. You might see the following conditions:

    • Conditions: Deployed, Initialized: The secured cluster services have been installed and the secured cluster is communicating with Central.
    • Conditions: Initialized, Irreconcilable: The secured cluster is not communicating with Central. Make sure that you applied the init bundle you created in the RHACS web portal to the secured cluster.

Next steps

  1. Configure additional secured cluster settings (optional).
  2. Verify installation.

You can install RHACS on secured clusters by using Helm charts, either with no customization (using the default values) or with customized configuration parameters.

First, ensure that you add the Helm chart repository.

7.6.2.1. Adding the Helm chart repository

Procedure

  • Add the RHACS charts repository.

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

  • Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).

    Note

    You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.

  • Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).

    Note

    Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).

Prerequisites

  • You must have generated an RHACS init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the address that you are exposing the Central service on.

Procedure

  • Run the following command on your Kubernetes-based clusters:

    $ helm install -n stackrox --create-namespace \
        stackrox-secured-cluster-services rhacs/secured-cluster-services \
        -f <path_to_cluster_init_bundle.yaml> \
        -f <path_to_pull_secret.yaml> \
        --set clusterName=<name_of_the_secured_cluster> \
        --set centralEndpoint=<endpoint_of_central_service> \
        --set imagePullSecrets.username=<your redhat.com username> \
        --set imagePullSecrets.password=<your redhat.com password>

    In this command:

    • -f <path_to_cluster_init_bundle.yaml> specifies the path for the init bundle.
    • -f <path_to_pull_secret.yaml> specifies the path for the pull secret for Red Hat Container Registry authentication.
    • centralEndpoint specifies the address and port number for Central, for example, acs.domain.com:443.
    • imagePullSecrets.username is the user name for your pull secret for Red Hat Container Registry authentication.
    • imagePullSecrets.password is the password for your pull secret for Red Hat Container Registry authentication.

  • Run the following command on OpenShift Container Platform clusters:

    $ helm install -n stackrox --create-namespace \
        stackrox-secured-cluster-services rhacs/secured-cluster-services \
        -f <path_to_cluster_init_bundle.yaml> \
        -f <path_to_pull_secret.yaml> \
        --set clusterName=<name_of_the_secured_cluster> \
        --set centralEndpoint=<endpoint_of_central_service> \
        --set scanner.disable=false

    In this command:

    • -f <path_to_cluster_init_bundle.yaml> specifies the path for the init bundle.
    • -f <path_to_pull_secret.yaml> specifies the path for the pull secret for Red Hat Container Registry authentication.
    • centralEndpoint specifies the address and port number for Central, for example, acs.domain.com:443.
    • scanner.disable=false enables Scanner-slim during the installation. The secured cluster services include Scanner-slim as an optional component.

You can use Helm chart configuration parameters with the helm install and helm upgrade commands. Specify these parameters by using the --set option or by creating YAML configuration files.

Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:

  • Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
  • Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
Important

When using the secured-cluster-services Helm chart, do not change the values.yaml file that is part of the chart.
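
For example, a minimal values-public.yaml might look like the following sketch. The parameter names come from the configuration table in the next section; the specific values shown are illustrative assumptions for your environment, not defaults:

# values-public.yaml: non-sensitive configuration options (illustrative values)
clusterName: production-cluster-1          # name of the secured cluster
centralEndpoint: central.example.com:443   # address of your Central instance
sensor:
  resources:
    requests:
      memory: "2Gi"                        # override the default Sensor memory request

Sensitive options, such as serviceTLS certificates and keys, belong in values-private.yaml instead.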

7.6.2.3.1. Configuration parameters
The following parameters are listed with their descriptions.

clusterName

Name of your cluster.

centralEndpoint

Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss://. When configuring multiple clusters, use the hostname for the address. For example, central.example.com.

sensor.endpoint

Address of the Sensor endpoint including port number.

sensor.imagePullPolicy

Image pull policy for the Sensor container.

sensor.serviceTLS.cert

The internal service-to-service TLS certificate that Sensor uses.

sensor.serviceTLS.key

The internal service-to-service TLS certificate key that Sensor uses.

sensor.resources.requests.memory

The memory request for the Sensor container. Use this parameter to override the default value.

sensor.resources.requests.cpu

The CPU request for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.memory

The memory limit for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.cpu

The CPU limit for the Sensor container. Use this parameter to override the default value.

sensor.nodeSelector

Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label.

sensor.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes.

image.main.name

The name of the main image.

image.collector.name

The name of the Collector image.

image.main.registry

The address of the registry you are using for the main image.

image.collector.registry

The address of the registry you are using for the Collector image.

image.scanner.registry

The address of the registry you are using for the Scanner image.

image.scannerDb.registry

The address of the registry you are using for the Scanner DB image.

image.scannerV4.registry

The address of the registry you are using for the Scanner V4 image.

image.scannerV4DB.registry

The address of the registry you are using for the Scanner V4 DB image.

image.main.pullPolicy

Image pull policy for main images.

image.collector.pullPolicy

Image pull policy for the Collector images.

image.main.tag

Tag of main image to use.

image.collector.tag

Tag of collector image to use.

collector.collectionMethod

Either CORE_BPF or NO_COLLECTION.

collector.imagePullPolicy

Image pull policy for the Collector container.

collector.complianceImagePullPolicy

Image pull policy for the Compliance container.

collector.disableTaintTolerations

If you specify false, tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true, no tolerations are applied, and the collector pods are not scheduled onto nodes with taints.

collector.resources.requests.memory

The memory request for the Collector container. Use this parameter to override the default value.

collector.resources.requests.cpu

The CPU request for the Collector container. Use this parameter to override the default value.

collector.resources.limits.memory

The memory limit for the Collector container. Use this parameter to override the default value.

collector.resources.limits.cpu

The CPU limit for the Collector container. Use this parameter to override the default value.

collector.complianceResources.requests.memory

The memory request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.requests.cpu

The CPU request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.memory

The memory limit for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.cpu

The CPU limit for the Compliance container. Use this parameter to override the default value.

collector.serviceTLS.cert

The internal service-to-service TLS certificate that Collector uses.

collector.serviceTLS.key

The internal service-to-service TLS certificate key that Collector uses.

admissionControl.listenOnCreates

This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events.

admissionControl.listenOnUpdates

When you set this parameter as false, Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service.

admissionControl.listenOnEvents

This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11.

admissionControl.dynamic.enforceOnCreates

This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted.

admissionControl.dynamic.enforceOnUpdates

This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work.

admissionControl.dynamic.scanInline

If you set this option to true, the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal.

admissionControl.dynamic.disableBypass

Set it to true to disable bypassing the Admission controller.

admissionControl.dynamic.timeout

Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration. This change does not negatively affect OpenShift Container Platform users because OpenShift Container Platform caps the timeout at 13 seconds.

admissionControl.resources.requests.memory

The memory request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.requests.cpu

The CPU request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.memory

The memory limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.cpu

The CPU limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.nodeSelector

Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label.

admissionControl.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes.

admissionControl.serviceTLS.cert

The internal service-to-service TLS certificate that Admission Control uses.

admissionControl.serviceTLS.key

The internal service-to-service TLS certificate key that Admission Control uses.

registryOverride

Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry.

createUpgraderServiceAccount

Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions.

createSecrets

Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller.

collector.slimMode

Deprecated. Specify true if you want to use a slim Collector image for deploying Collector.

sensor.resources

Resource specification for Sensor.

admissionControl.resources

Resource specification for Admission controller.

collector.resources

Resource specification for Collector.

collector.complianceResources

Resource specification for Collector’s Compliance container.

exposeMonitoring

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller.

auditLogs.disableCollection

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets.

scanner.disable

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true.

scanner.replicas

The number of replicas for the Scanner deployment.

scanner.logLevel

Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes.

scanner.autoscaling.disable

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment.

scanner.autoscaling.minReplicas

The minimum number of replicas for autoscaling. Defaults to 2.

scanner.autoscaling.maxReplicas

The maximum number of replicas for autoscaling. Defaults to 5.

scanner.nodeSelector

Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label.

scanner.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner.

scanner.dbNodeSelector

Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label.

scanner.dbTolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB.

scanner.resources.requests.memory

The memory request for the Scanner container. Use this parameter to override the default value.

scanner.resources.requests.cpu

The CPU request for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.memory

The memory limit for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.cpu

The CPU limit for the Scanner container. Use this parameter to override the default value.

scanner.dbResources.requests.memory

The memory request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.requests.cpu

The CPU request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.memory

The memory limit for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.cpu

The CPU limit for the Scanner DB container. Use this parameter to override the default value.

monitoring.openshift.enabled

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4.

network.enableNetworkPolicies

To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this Boolean parameter to false. The default value is true, which means the default policies are created automatically.

Warning

Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.

7.6.2.3.1.1. Environment variables

You can specify environment variables for Sensor and Admission controller in the following format:

customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"

The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.

The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
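
For example, the following sketch sets a label for all objects created by the chart and overrides it for the Sensor deployment only. The label keys and values are illustrative, and the per-component scope (customize.sensor) is assumed to follow the chart's customize conventions:

customize:
  # generic scope: applied to all objects created by this Helm chart
  labels:
    owner: platform-team
  envVars:
    ENV_VAR1: "value1"
  # narrower scope: overrides the generic label for the Sensor deployment only
  sensor:
    labels:
      owner: security-team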

After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:

  • Sensor
  • Admission controller
  • Collector
  • Scanner: optional for secured clusters when the StackRox Scanner is installed
  • Scanner DB: optional for secured clusters when the StackRox Scanner is installed
  • Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed

Prerequisites

  • You must have generated an RHACS init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the address and the port number that you are exposing the Central service on.

Procedure

  • Run the following command:

    $ helm install -n stackrox \
      --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
      -f <name_of_cluster_init_bundle.yaml> \
      -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \
      --set imagePullSecrets.username=<username> \
      --set imagePullSecrets.password=<password>

    In this command:

    • -f specifies the paths for your YAML configuration files.
    • imagePullSecrets.username is the user name for your pull secret for Red Hat Container Registry authentication.
    • imagePullSecrets.password is the password for your pull secret for Red Hat Container Registry authentication.

Note

To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:

$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET")

If you are using base64 encoded variables, use the helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.

You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.

When using the helm upgrade command to make changes, the following guidelines and requirements apply:

  • You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
  • Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.

    • If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
    • If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.

Procedure

  1. Update the values-public.yaml and values-private.yaml configuration files with new values.
  2. Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-secured-cluster-services rhacs/secured-cluster-services \
      --reuse-values \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

    Include the --reuse-values parameter if you have modified values that are not included in the values-public.yaml and values-private.yaml files.

To install RHACS on secured clusters by using the CLI, perform the following steps:

  1. Install the roxctl CLI.
  2. Install Sensor.
7.6.3.1. Installing the roxctl CLI

You must first download the binary. You can install roxctl on Linux, Windows, or macOS.

7.6.3.1.1. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
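
    For example, one common approach, assuming /usr/local/bin is on your PATH and that you have sudo access:

    $ sudo install -m 0755 roxctl /usr/local/bin/roxctl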

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
7.6.3.1.2. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
7.6.3.1.3. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
7.6.3.2. Installing Sensor

To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.

To perform an installation by using the manifest installation method, follow only one of the following procedures:

  • Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
  • Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.

Prerequisites

  • You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).

Procedure

  1. In the RHACS portal, go to Platform Configuration → Clusters.
  2. Select Secure a cluster → Legacy installation method.
  3. Specify a name for the cluster.
  4. Provide appropriate values for the fields based on where you are deploying the Sensor.

    • If you are deploying Sensor in the same cluster, accept the default values for all the fields.
    • If you are deploying into a different cluster, replace central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster.
    • If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure (wss) protocol. To use wss:

      • Prefix the address with wss://.
      • Add the port number after the address, for example, wss://stackrox-central.example.com:443.
  5. Click Next to continue with the Sensor setup.
  6. Click Download YAML File and Keys to download the cluster bundle (zip archive).

    Important

    The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.

  7. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

After Sensor is deployed, it contacts Central and provides cluster information.

Procedure

  1. Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:

    $ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT"

    For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.
  2. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

After Sensor is deployed, it contacts Central and provides cluster information.

Verification

  1. Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:

    • On OpenShift Container Platform, enter the following command:

      $ oc get pod -n stackrox -w
    • On Kubernetes, enter the following command:

      $ kubectl get pod -n stackrox -w
  2. Click Finish to close the window.

After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.

7.6.4. Next steps

  • Verify installation by ensuring that your secured clusters can communicate with the ACS instance.

You must configure the proxy settings for secured cluster services within the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) environment to establish a connection between the Secured Cluster and the specified proxy server. This ensures reliable data collection and transmission.

To configure an egress proxy, you can either use the cluster-wide Red Hat OpenShift proxy or specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables within the SecuredCluster Custom Resource (CR) configuration file to ensure proper use of the proxy and bypass for internal requests within the specified domain.

The proxy configuration applies to all running services: Sensor, Collector, Admission Controller, and Scanner.

Procedure

  • Specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables under the customize specification in the SecuredCluster CR configuration file:

    For example:

    # proxy collector
    customize:
      envVars:
        - name: HTTP_PROXY
          value: http://egress-proxy.stackrox.svc:xxxx
        - name: HTTPS_PROXY
          value: http://egress-proxy.stackrox.svc:xxxx
        - name: NO_PROXY
          value: .stackrox.svc

    In this example:

    • HTTP_PROXY is set to http://egress-proxy.stackrox.svc:xxxx, the proxy server used for HTTP connections.
    • HTTPS_PROXY is set to http://egress-proxy.stackrox.svc:xxxx, the proxy server used for HTTPS connections.
    • NO_PROXY is set to .stackrox.svc, which defines the hostnames or IP addresses that must not be accessed through the proxy server.

7.8. Verifying installation of secured clusters

After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful.

To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations.

If no data appears in the ACS Console:

  • Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see Installing secured cluster resources from RHACS Cloud Service.
  • Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful. See the example command after this list.
  • In the ACS Console, go to Platform Configuration → Clusters to verify that the components are healthy and view additional operational information.
  • Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
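
For example, one way to examine the Sensor logs, assuming the default stackrox namespace and the standard sensor deployment name:

$ oc logs -n stackrox deployment/sensor

On Kubernetes clusters, substitute kubectl for oc.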

Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters.

8.1.1. Creating an instance in the console

In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters.

Procedure

To create an ACS instance:

  1. Log in to the Red Hat Hybrid Cloud Console.
  2. From the navigation menu, select Advanced Cluster Security → ACS Instances.
  3. Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list:

    • Name: Enter the name of your ACS instance. An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance.
    • Cloud provider: The cloud provider where Central is located. Select AWS.
    • Cloud region: The region for your cloud provider where Central is located. Select one of the following regions:

      • US-East, N. Virginia
      • Europe, Ireland
    • Availability zones: Use the default value (Multi).
  4. Click Create instance.

8.1.2. Next steps

Before you install the SecuredCluster resource on a cluster, you must create an init bundle. The cluster that has SecuredCluster installed and configured then uses this bundle to authenticate with the ACS Console. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources.

You can create an init bundle containing secrets by using the RHACS portal.

Note

You must have the Admin user role to create an init bundle.

Procedure

  1. Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
  2. Log in to the RHACS portal.
  3. If you do not have secured clusters, the Platform Configuration → Clusters page appears.
  4. Click Create init bundle.
  5. Enter a name for the cluster init bundle.
  6. Select your platform.
  7. Select the installation method you will use for your secured clusters: Operator or Helm chart.
  8. Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.

    Important

    Store this bundle securely because it contains secrets.

  9. Apply the init bundle by using it to create resources on the secured cluster.
  10. Install secured cluster services on each cluster.

You can create an init bundle with secrets by using the roxctl CLI.

Note

You must have the Admin user role to create init bundles.

Prerequisites

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

    1. Set the ROX_API_TOKEN by running the following command:

      $ export ROX_API_TOKEN=<api_token>
    2. Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:

      $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
Important

In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.
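
For example, assuming the instance address shown above and the default port 443:

$ export ROX_CENTRAL_ADDRESS=acs-ABCD12345.acs.rhcloud.com:443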

Procedure

  • To generate a cluster init bundle containing secrets for Helm installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate --output \
      <cluster_init_bundle_name> cluster_init_bundle.yaml
  • To generate a cluster init bundle containing secrets for Operator installations, run the following command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" \
      central init-bundles generate --output-secrets \
      <cluster_init_bundle_name> cluster_init_bundle.yaml
    Important

    Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.

8.2.3. Next steps

Apply the init bundle by using it to create resources.

Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service.

Note

If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm. See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.

Prerequisites

  • You must have generated an init bundle containing secrets.
  • You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.

Procedure

To create resources, perform only one of the following steps:

  • Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.
  • Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources:

    $ oc create -f <init_bundle>.yaml \
      -n <stackrox>

    Specify the file name of the init bundle containing the secrets, and the name of the project where secured cluster services will be installed.
  • Using the kubectl CLI, run the following commands to create the resources:

    $ kubectl create namespace stackrox
    $ kubectl create -f <init_bundle>.yaml \
      -n <stackrox>

    The first command creates the project where secured cluster resources will be installed; this example uses stackrox. In the second command, specify the file name of the init bundle containing the secrets and the name of the project that you created.

Verification

  • Restart Sensor to pick up the new certificates.

    For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section.

8.3.2. Next steps

  • Install RHACS secured cluster services in all clusters that you want to monitor.

You can install RHACS Cloud Service on your secured clusters by using one of the following methods:

  • By using Helm charts
  • By using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it)

You can install RHACS on secured clusters by using Helm charts, either with no customization (using the default values) or with customized configuration parameters.

First, ensure that you add the Helm chart repository.

8.4.1.1. Adding the Helm chart repository

Procedure

  • Add the RHACS charts repository.

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

  • Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).

    Note

    Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).

Prerequisites

  • You must have generated an RHACS init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the ACS instance you created.

Procedure

  • Install the secured-cluster-services Helm chart as described earlier in this document, specifying the Central API Endpoint address as the value of the centralEndpoint parameter.

This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.

Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:

  • Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
  • Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
Important

While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart.
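
For example, a minimal values-private.yaml might look like the following sketch. The parameter names come from the configuration table in the next section; the certificate and key contents are placeholders:

# values-private.yaml: sensitive configuration options (illustrative)
sensor:
  serviceTLS:
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----

Store this file securely, because it contains secrets.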

8.4.1.3.1. Configuration parameters
The following parameters are listed with their descriptions.

clusterName

Name of your cluster.

centralEndpoint

Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss://. When configuring multiple clusters, use the hostname for the address. For example, central.example.com.

sensor.endpoint

Address of the Sensor endpoint including port number.

sensor.imagePullPolicy

Image pull policy for the Sensor container.

sensor.serviceTLS.cert

The internal service-to-service TLS certificate that Sensor uses.

sensor.serviceTLS.key

The internal service-to-service TLS certificate key that Sensor uses.

sensor.resources.requests.memory

The memory request for the Sensor container. Use this parameter to override the default value.

sensor.resources.requests.cpu

The CPU request for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.memory

The memory limit for the Sensor container. Use this parameter to override the default value.

sensor.resources.limits.cpu

The CPU limit for the Sensor container. Use this parameter to override the default value.

sensor.nodeSelector

Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label.

sensor.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes.

image.main.name

The name of the main image.

image.collector.name

The name of the Collector image.

image.main.registry

The address of the registry you are using for the main image.

image.collector.registry

The address of the registry you are using for the Collector image.

image.scanner.registry

The address of the registry you are using for the Scanner image.

image.scannerDb.registry

The address of the registry you are using for the Scanner DB image.

image.scannerV4.registry

The address of the registry you are using for the Scanner V4 image.

image.scannerV4DB.registry

The address of the registry you are using for the Scanner V4 DB image.

image.main.pullPolicy

Image pull policy for main images.

image.collector.pullPolicy

Image pull policy for the Collector images.

image.main.tag

Tag of main image to use.

image.collector.tag

Tag of collector image to use.

collector.collectionMethod

Either CORE_BPF or NO_COLLECTION.

collector.imagePullPolicy

Image pull policy for the Collector container.

collector.complianceImagePullPolicy

Image pull policy for the Compliance container.

collector.disableTaintTolerations

If you specify false, tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true, no tolerations are applied, and the collector pods are not scheduled onto nodes with taints.

collector.resources.requests.memory

The memory request for the Collector container. Use this parameter to override the default value.

collector.resources.requests.cpu

The CPU request for the Collector container. Use this parameter to override the default value.

collector.resources.limits.memory

The memory limit for the Collector container. Use this parameter to override the default value.

collector.resources.limits.cpu

The CPU limit for the Collector container. Use this parameter to override the default value.

collector.complianceResources.requests.memory

The memory request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.requests.cpu

The CPU request for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.memory

The memory limit for the Compliance container. Use this parameter to override the default value.

collector.complianceResources.limits.cpu

The CPU limit for the Compliance container. Use this parameter to override the default value.

collector.serviceTLS.cert

The internal service-to-service TLS certificate that Collector uses.

collector.serviceTLS.key

The internal service-to-service TLS certificate key that Collector uses.

admissionControl.listenOnCreates

This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events.

admissionControl.listenOnUpdates

When you set this parameter as false, Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service.

admissionControl.listenOnEvents

This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11.

admissionControl.dynamic.enforceOnCreates

This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted.

admissionControl.dynamic.enforceOnUpdates

This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work.

admissionControl.dynamic.scanInline

If you set this option to true, the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal.

admissionControl.dynamic.disableBypass

Set it to true to disable bypassing the Admission controller.

admissionControl.dynamic.timeout

Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration.

admissionControl.resources.requests.memory

The memory request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.requests.cpu

The CPU request for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.memory

The memory limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.resources.limits.cpu

The CPU limit for the Admission Control container. Use this parameter to override the default value.

admissionControl.nodeSelector

Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label.

admissionControl.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes.

admissionControl.serviceTLS.cert

The internal service-to-service TLS certificate that Admission Control uses.

admissionControl.serviceTLS.key

The internal service-to-service TLS certificate key that Admission Control uses.

registryOverride

Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry.

createUpgraderServiceAccount

Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions.

createSecrets

Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller.

collector.slimMode

Deprecated. Specify true if you want to use a slim Collector image for deploying Collector.

sensor.resources

Resource specification for Sensor.

admissionControl.resources

Resource specification for Admission controller.

collector.resources

Resource specification for Collector.

collector.complianceResources

Resource specification for Collector’s Compliance container.

exposeMonitoring

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller.

auditLogs.disableCollection

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets.

scanner.disable

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true.

scanner.replicas

The number of replicas for the Scanner deployment.

scanner.logLevel

Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes.

scanner.autoscaling.disable

If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment.

scanner.autoscaling.minReplicas

The minimum number of replicas for autoscaling. Defaults to 2.

scanner.autoscaling.maxReplicas

The maximum number of replicas for autoscaling. Defaults to 5.

scanner.nodeSelector

Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label.

scanner.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner.

scanner.dbNodeSelector

Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label.

scanner.dbTolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB.

scanner.resources.requests.memory

The memory request for the Scanner container. Use this parameter to override the default value.

scanner.resources.requests.cpu

The CPU request for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.memory

The memory limit for the Scanner container. Use this parameter to override the default value.

scanner.resources.limits.cpu

The CPU limit for the Scanner container. Use this parameter to override the default value.

scanner.dbResources.requests.memory

The memory request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.requests.cpu

The CPU request for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.memory

The memory limit for the Scanner DB container. Use this parameter to override the default value.

scanner.dbResources.limits.cpu

The CPU limit for the Scanner DB container. Use this parameter to override the default value.

monitoring.openshift.enabled

If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes does not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4.

network.enableNetworkPolicies

To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to false. This is a Boolean value. The default value is true, which means the default policies are created automatically.

Warning

Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication.
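
For reference, the following values-public.yaml fragment is a minimal sketch that combines several of the parameters described above. The node label, taint, and values shown are placeholders, not recommendations; consult the chart's values reference for the full schema:

exposeMonitoring: true
createUpgraderServiceAccount: true
collector:
  disableTaintTolerations: false
scanner:
  disable: false                          # deploy Scanner-slim and Scanner DB
  nodeSelector:
    node-role.kubernetes.io/infra: ""     # placeholder label
  tolerations:
  - key: node-role.kubernetes.io/infra    # placeholder taint
    operator: Exists
    effect: NoSchedule
  autoscaling:
    minReplicas: 2
    maxReplicas: 5
network:
  enableNetworkPolicies: true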

8.4.1.3.1.1. Environment variables

You can specify environment variables for Sensor and Admission controller in the following format:

customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"

The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.

The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
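
As an illustration, the following hedged sketch sets chart-wide labels and annotations plus pod labels and container environment variables; the keys and values are placeholders. The exact keys available for narrower scopes are described in the chart's values reference:

customize:
  # Custom metadata applied to all objects created by this chart
  labels:
    owner: platform-team              # placeholder label
  annotations:
    contact: secops@example.com       # placeholder annotation
  # Additional labels applied to workload pods
  podLabels:
    tier: security
  # Environment variables injected into workload containers
  envVars:
    ENV_VAR1: "value1"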

After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:

  • Sensor
  • Admission controller
  • Collector
  • Scanner: optional for secured clusters when the StackRox Scanner is installed
  • Scanner DB: optional for secured clusters when the StackRox Scanner is installed
  • Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed

Prerequisites

  • You must have generated an RHACS init bundle for your cluster.
  • You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
  • You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.

Procedure

  • Run the following command:

    $ helm install -n stackrox \
      --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
      -f <name_of_cluster_init_bundle.yaml> \
      -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \ (1)
      --set imagePullSecrets.username=<username> \ (2)
      --set imagePullSecrets.password=<password> (3)

    (1) Use the -f option to specify the paths for your YAML configuration files.
    (2) Include the user name for your pull secret for Red Hat Container Registry authentication.
    (3) Include the password for your pull secret for Red Hat Container Registry authentication.
Note

To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:

$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET") (1)

(1) If you are using base64 encoded variables, use the helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.

You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.

When using the helm upgrade command to make changes, the following guidelines and requirements apply:

  • You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
  • Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.

    • If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
    • If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.

Procedure

  1. Update the values-public.yaml and values-private.yaml configuration files with new values.
  2. Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-secured-cluster-services rhacs/secured-cluster-services \
      --reuse-values \ (1)
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>

    (1) If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter.

8.4.2. Installing RHACS on secured clusters by using the CLI

To install RHACS on secured clusters by using the CLI, perform the following steps:

  1. Install the roxctl CLI.
  2. Install Sensor.
8.4.2.1. Installing the roxctl CLI

You must first download the binary. You can install roxctl on Linux, Windows, or macOS.

8.4.2.1.1. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
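
    For example, assuming /usr/local/bin is on your PATH, one way to place the binary there is:

    $ sudo install -m 0755 roxctl /usr/local/bin/roxctl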

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
8.4.2.1.2. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
8.4.2.1.3. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
8.4.2.2. Installing Sensor

To monitor a cluster, you must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.

To perform an installation by using the manifest installation method, follow only one of the following procedures:

  • Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
  • Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.

Prerequisites

  • You must have already installed Central services, or you can access Central services by selecting your ACS instance in Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).

Procedure

  1. In the RHACS portal, go to Platform Configuration → Clusters.
  2. Select Secure a cluster → Legacy installation method.
  3. Specify a name for the cluster.
  4. Provide appropriate values for the fields based on where you are deploying the Sensor.

    • Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
  5. Click Next to continue with the Sensor setup.
  6. Click Download YAML File and Keys to download the cluster bundle (zip archive).

    Important

    The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.

  7. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

After Sensor is deployed, it contacts Central and provides cluster information.

Procedure

  1. Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:

    $ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT" (1)

    (1) For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x. A hypothetical invocation is shown after this procedure.
  2. From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

    $ unzip -d sensor sensor-<cluster_name>.zip
    $ ./sensor/sensor.sh

    If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.

After Sensor is deployed, it contacts Central and provides cluster information.
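
For reference, a hypothetical invocation of the generate command in step 1, for an OpenShift Container Platform 4 cluster named production, assuming the ROX_ENDPOINT variable holds your Central API Endpoint address:

    $ export ROX_ENDPOINT=<central_api_endpoint>:443
    $ roxctl sensor generate openshift --openshift-version 4 --name production --central "$ROX_ENDPOINT"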

Verification

  1. Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:

    • On Kubernetes, enter the following command:

      $ kubectl get pod -n stackrox -w
  2. Click Finish to close the window.

After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.

8.5. Verifying installation of secured clusters

After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful.

To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations.

If no data appears in the ACS Console:

  • Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see instructions for installing by using Helm charts or by using the roxctl CLI.
  • Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful, as shown in the example after this list.
  • Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
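
For example, the following sketch shows recent Sensor log output, where successful communication with Central appears in the log stream. If you use Kubernetes, enter kubectl instead of oc:

    $ oc -n stackrox logs deploy/sensor | tail -n 50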

Chapter 9. Upgrading RHACS Cloud Service

Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service.

You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service.

9.1. Upgrading secured clusters by using the Operator

9.1.1. Preparing to upgrade

Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps:

  • If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF. For more information, see "Changing the collection method".
9.1.1.1. Changing the collection method

If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade.

Procedure

  1. In the OpenShift Container Platform web console, go to the RHACS Operator page.
  2. In the top navigation menu, select Secured Cluster.
  3. Click the instance name, for example, stackrox-secured-cluster-services.
  4. Use one of the following methods to change the setting:

    • In the Form view, under Per Node Settings → Collector Settings → Collection, select CORE_BPF.
    • Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF, change it to CORE_BPF. A sketch of the resulting CR follows this procedure.
  5. Click Save.
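
For reference, the relevant portion of the SecuredCluster CR looks like the following sketch after the change; the instance name is an example:

apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services   # example instance name
  namespace: stackrox
spec:
  perNode:
    collector:
      collection: CORE_BPF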

9.1.2. Rolling back an Operator upgrade

To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console.

Note

On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster.

You can roll back the Operator version by using CLI commands.

Procedure

  1. Delete the OLM subscription by running the following command:

    • For OpenShift Container Platform, run the following command:

      $ oc -n rhacs-operator delete subscription rhacs-operator
    • For Kubernetes, run the following command:

      $ kubectl -n rhacs-operator delete subscription rhacs-operator
  2. Delete the cluster service version (CSV) by running the following command:

    • For OpenShift Container Platform, run the following command:

      $ oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
    • For Kubernetes, run the following command:

      $ kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
  3. Install the latest version of the Operator on the rolled back channel.

You can roll back the Operator version by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Go to the OperatorsInstalled Operators page.
  2. Click the RHACS Operator.
  3. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates.
  4. Install the latest version of the Operator on the rolled back channel.

9.1.3. Troubleshooting Operator upgrade issues

Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator.

When the RHACS Operator encounters the following conditions, you must check the custom resource conditions to find the issue:

  • The Operator fails to deploy the secured cluster
  • The Operator fails to apply CR changes to actual resources

For secured clusters, run the following command to check the conditions:

    $ oc -n rhacs-operator describe securedclusters.platform.stackrox.io (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.

You can identify configuration errors from the conditions output:

Example output

 Conditions:
    Last Transition Time:  2023-04-19T10:49:57Z
    Status:                False
    Type:                  Deployed
    Last Transition Time:  2023-04-19T10:49:57Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2023-04-19T10:59:10Z
    Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
    Reason:                ReconcileError
    Status:                True
    Type:                  Irreconcilable
    Last Transition Time:  2023-04-19T10:49:57Z
    Message:               No proxy configuration is desired
    Reason:                NoProxyConfig
    Status:                False
    Type:                  ProxyConfigFailed
    Last Transition Time:  2023-04-19T10:49:57Z
    Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
    Reason:                InstallError
    Status:                True
    Type:                  ReleaseFailed

Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs:

$ oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager (1)

(1) If you use Kubernetes, enter kubectl instead of oc.

9.2. Upgrading secured clusters by using Helm charts

You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts.

If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command.

9.2.1. Updating the Helm chart repository

You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes.

Prerequisites

  • You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. An example command for adding the repository follows this list.
  • You must be using Helm version 3.8.3 or newer.
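
If you have not yet added the repository, you can add it with the following command, assuming the standard RHACS chart repository location:

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/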

Procedure

  • Update Red Hat Advanced Cluster Security for Kubernetes charts repository.

    $ helm repo update

Verification

  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

9.2.2. Running the Helm upgrade command

You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Prerequisites

  • You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands.

Procedure

  • Run the helm upgrade command and specify the configuration files by using the -f option:

    $ helm upgrade -n stackrox stackrox-secured-cluster-services \
      rhacs/secured-cluster-services --version <current-rhacs-version> \
      -f values-private.yaml (1)

    (1) Use the -f option to specify the paths for your YAML configuration files.

9.3. Upgrading secured clusters by using the roxctl CLI

You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI.

Important

You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters.

9.3.1. Upgrading the roxctl CLI

To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI.

9.3.1.1. Uninstalling the roxctl CLI

You can uninstall the roxctl CLI binary on Linux by using the following procedure.

Procedure

  • Find and delete the roxctl binary:

    $ ROXPATH=$(which roxctl) && rm -f $ROXPATH (1)

    (1) Depending on your environment, you might need administrator rights to delete the roxctl binary.
9.3.1.2. Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Note

roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
9.3.1.3. Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Note

roxctl CLI for macOS is available for amd64 and arm64 architectures.

Procedure

  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Darwin/roxctl${arch}"
  3. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  4. Make the roxctl binary executable:

    $ chmod +x roxctl
  5. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version
9.3.1.4. Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Note

roxctl CLI for Windows is available for the amd64 architecture.

Procedure

  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.9/bin/Windows/roxctl.exe

Verification

  • Verify the roxctl version you have installed:

    $ roxctl version

9.3.2. Upgrading all secured clusters manually

Important

To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters.

To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions.

9.3.2.1. Updating other images

You must update the Sensor, Collector, and Compliance images on each secured cluster when you are not using automatic upgrades.

Note

If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.

Procedure

  1. Update the Sensor image:

    $ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.9 (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
  2. Update the Compliance image:

    $ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.9 (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
  3. Update the Collector image:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.5.9 (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
    Note

    If you are using the collector slim image, run the following command instead:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:4.5.9
  4. Update the admission control image:

    $ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.9
Important

If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs).

For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section.

9.3.2.2. Migrating SCCs during the manual upgrade

By migrating the security context constraints (SCCs) during the manual upgrade by using the roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.

Procedure

  1. List all of the RHACS services that are deployed on all secured clusters:

    $ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

    Example output

    Name:      admission-control-6f4dcc6b4c-2phwd
               openshift.io/scc: stackrox-admission-control
    #...
    Name:      central-575487bfcb-sjdx8
               openshift.io/scc: stackrox-central
    Name:      central-db-7c7885bb-6bgbd
               openshift.io/scc: stackrox-central-db
    Name:      collector-56nkr
               openshift.io/scc: stackrox-collector
    #...
    Name:      scanner-68fc55b599-f2wm6
               openshift.io/scc: stackrox-scanner
    Name:      scanner-68fc55b599-fztlh
    #...
    Name:      sensor-84545f86b7-xgdwf
               openshift.io/scc: stackrox-sensor
    #...

    In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.

  2. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings for all secured clusters, complete the following steps:

    1. Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:

      Example 9.1. Example YAML file

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role # (1)
      metadata:
        annotations:
          email: support@stackrox.com
          owner: stackrox
        labels:
          app.kubernetes.io/component: collector
          app.kubernetes.io/instance: stackrox-secured-cluster-services
          app.kubernetes.io/name: stackrox
          app.kubernetes.io/part-of: stackrox-secured-cluster-services
          app.kubernetes.io/version: 4.4.0
          auto-upgrade.stackrox.io/component: sensor
        name: use-privileged-scc # (2)
        namespace: stackrox # (3)
      rules: # (4)
      - apiGroups:
        - security.openshift.io
        resourceNames:
        - privileged
        resources:
        - securitycontextconstraints
        verbs:
        - use
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding # (5)
      metadata:
        annotations:
          email: support@stackrox.com
          owner: stackrox
        labels:
          app.kubernetes.io/component: collector
          app.kubernetes.io/instance: stackrox-secured-cluster-services
          app.kubernetes.io/name: stackrox
          app.kubernetes.io/part-of: stackrox-secured-cluster-services
          app.kubernetes.io/version: 4.4.0
          auto-upgrade.stackrox.io/component: sensor
        name: collector-use-scc # (6)
        namespace: stackrox
      roleRef: # (7)
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: use-privileged-scc
      subjects: # (8)
      - kind: ServiceAccount
        name: collector
        namespace: stackrox
      ---

      (1) The type of Kubernetes resource, in this example, Role.
      (2) The name of the role resource.
      (3) The namespace in which the role is created.
      (4) Describes the permissions granted by the role resource.
      (5) The type of Kubernetes resource, in this example, RoleBinding.
      (6) The name of the role binding resource.
      (7) Specifies the role to bind in the same namespace.
      (8) Specifies the subjects that are bound to the role.
    2. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:

      $ oc -n stackrox create -f ./upgrade-scs.yaml
      Important

      You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file.

  3. Delete the SCCs that are specific to RHACS:

    1. To delete the SCCs that are specific to all secured clusters, run the following command:

      $ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor
      Important

      You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.

Verification

  • Ensure that all the pods are using the correct SCCs by running the following command:

    $ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

    Compare the output with the following table:

    Component               Previous custom SCC          New Red Hat OpenShift 4 SCC
    Central                 stackrox-central             nonroot-v2
    Central DB              stackrox-central-db          nonroot-v2
    Scanner                 stackrox-scanner             nonroot-v2
    Scanner DB              stackrox-scanner             nonroot-v2
    Admission Controller    stackrox-admission-control   restricted-v2
    Collector               stackrox-collector           privileged
    Sensor                  stackrox-sensor              restricted-v2

9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Sensor deployment:

    $ oc -n stackrox edit deploy/sensor (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.
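
For illustration, the env entry in the deployment spec changes as in the following sketch; the byte value shown is a placeholder, not a recommendation:

# Before
env:
- name: GOMEMLIMIT
  value: "4294967296"

# After
env:
- name: ROX_MEMLIMIT
  value: "4294967296"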

9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Collector deployment:

    $ oc -n stackrox edit deploy/collector (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.

9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment

Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.

Procedure

  1. Run the following command to edit the variable for the Admission Controller deployment:

    $ oc -n stackrox edit deploy/admission-control (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.
  2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
  3. Save the file.
9.3.2.2.4. Verifying secured cluster upgrade

After you have upgraded secured clusters, verify that the updated pods are working.

Procedure

  • Check that the new pods have deployed:

    $ oc get deploy,ds -n stackrox -o wide (1)

    $ oc get pod -n stackrox --watch (1)

    (1) If you use Kubernetes, enter kubectl instead of oc.

9.3.3. Enabling RHCOS node scanning

If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Prerequisites

Procedure

  1. Run one of the following commands to update the compliance container.

    • For a default compliance container with metrics disabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
    • For a compliance container with Prometheus metrics enabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
  2. Update the Collector DaemonSet (DS) by taking the following steps:

    1. Add new volume mounts to Collector DS by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
    2. Add the new NodeScanner container by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.9","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
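
To confirm that the patch was applied, you can list the container names in the Collector DaemonSet; the output should now include the node-inventory container. This is a sketch; if you use Kubernetes, enter kubectl instead of oc:

    $ oc -n stackrox get ds/collector -o jsonpath='{.spec.template.spec.containers[*].name}'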

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.