RHACS Cloud Service
About the RHACS Cloud Service
Abstract
Chapter 1. RHACS Cloud Service service description
1.1. Introduction to RHACS
Red Hat Advanced Cluster Security for Kubernetes (RHACS) is an enterprise-ready, Kubernetes-native container security solution that helps you build, deploy, and run cloud-native applications more securely.
Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides Kubernetes-native security as a service. With RHACS Cloud Service, Red Hat maintains, upgrades, and manages your Central services.
Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS.
RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure.
1.2. Architecture
RHACS Cloud Service is hosted on Amazon Web Services (AWS) in two regions, eu-west-1 and us-east-1, and uses the network access points provided by the cloud provider. Each RHACS Cloud Service tenant uses highly available egress proxies and is spread across three availability zones. For more information about RHACS Cloud Service system architecture and components, see Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture.
1.3. Billing
Customers can purchase a RHACS Cloud Service subscription on the Amazon Web Services (AWS) Marketplace. The service cost is charged hourly per secured core, that is, per vCPU of a node belonging to a secured cluster.
Example 1.1. Subscription cost example
If you have established a connection to two secured clusters, each with 5 identical nodes with 8 vCPUs (such as Amazon EC2 m7g.2xlarge), the total number of secured cores is 80 (2 x 5 x 8 = 80).
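The secured-core arithmetic above can be sketched as a small helper; the rate multiplication shown in the usage comment is illustrative only, not an actual RHACS Cloud Service price.

```python
def secured_cores(clusters):
    """Total secured cores for a set of secured clusters.

    clusters: list of (node_count, vcpus_per_node) tuples,
    one tuple per secured cluster (nodes assumed identical).
    """
    return sum(nodes * vcpus for nodes, vcpus in clusters)

# Two secured clusters, each with 5 nodes of 8 vCPUs (e.g., m7g.2xlarge):
cores = secured_cores([(5, 8), (5, 8)])
print(cores)  # 80
# Hourly cost would then be: cores * hourly_rate_per_core (rate is hypothetical)
```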
1.4. Security and compliance
All RHACS Cloud Service data in the Central instance is encrypted in transit and at rest. The data is stored in secure storage with full replication and high availability together with regularly-scheduled backups. RHACS Cloud Service is available through cloud data centers that ensure optimal performance and the ability to meet data residency requirements.
1.4.1. Information security guidelines, roles, and responsibilities
Red Hat’s information security guidelines, aligned with the NIST Cybersecurity Framework, are approved by executive management. Red Hat maintains a dedicated team of globally-distributed certified information security professionals. See the following resources:
Red Hat has strict internal policies and practices to protect our customers and their businesses. These policies and practices are confidential. In addition, we comply with all applicable laws and regulations, including those related to data privacy.
Red Hat’s information security roles and responsibilities are not managed by third parties.
Red Hat maintains an ISO 27001 certification for our corporate information security management system (ISMS), which governs how all of our people work, corporate endpoint devices, and authentication and authorization practices. We have taken a standardized approach to this through the implementation of the Red Hat Enterprise Security Standard (ESS) to all infrastructure, products, services and technology that Red Hat employs. A copy of the ESS is available upon request.
RHACS Cloud Service runs on an instance of OpenShift Dedicated hosted on Amazon Web Services (AWS). OpenShift Dedicated is compliant with ISO 27001, ISO 27017, ISO 27018, PCI DSS, SOC 2 Type 2, and HIPAA. Strong processes and security controls are aligned with industry standards to manage information security.
RHACS Cloud Service follows the same security principles, guidelines, processes, and controls defined for OpenShift Dedicated. These certifications demonstrate how our services platform, associated operations, and management practices align with core security requirements. We meet many of these requirements by following solid Secure Software Development Framework (SSDF) practices as defined by NIST, including build pipeline security. SSDF controls are implemented via our Secure Software Management Lifecycle (SSML) for all products and services.
Red Hat’s proven and experienced global site reliability engineering (SRE) team is available 24x7 and proactively manages the cluster life cycle, infrastructure configuration, scaling, maintenance, security patching, and incident response as it relates to the hosted components of RHACS Cloud Service. The Red Hat SRE team is responsible for managing high availability, uptime, backups, restores, and security for the RHACS Cloud Service control plane. RHACS Cloud Service comes with a 99.95% availability SLA and 24x7 Red Hat SRE support by phone or chat.
You are responsible for use of the product, including implementation of policies, vulnerability management, and deployment of secured cluster components within your OpenShift Container Platform environments. The Red Hat SRE team manages the control plane that contains tenant data in line with the compliance frameworks noted previously, including:
- All Red Hat SREs access the data plane clusters through the backplane, which enables audited access to the cluster.
- Red Hat SREs deploy only images from the Red Hat registry. All content posted to the Red Hat registry goes through rigorous checks. These images are the same images available to self-managed customers.
- Each tenant has its own individual mTLS certificate authority (CA), which encrypts data in transit and enables multi-tenant isolation. Additional isolation is provided via SELinux controls, namespaces, and network policies.
- Each tenant has their own instance of the RDS database.
All Red Hat SREs and developers go through rigorous Secure Development Lifecycle training.
For more information, see the following resources:
1.4.2. Vulnerability management program
Red Hat scans for vulnerabilities in our products during the build process and our dedicated Product Security team tracks and assesses newly-discovered vulnerabilities. Red Hat Information Security regularly scans running environments for vulnerabilities.
Qualified critical and important Security Advisories (RHSAs) and urgent and selected high-priority Bug Fix Advisories (RHBAs) are released as they become available. All other available fixes and qualified patches are released via periodic updates. All RHACS Cloud Service software impacted by critical or important severity flaws is updated as soon as the fix is available. For more information about remediation of critical or high-priority issues, see Understanding Red Hat’s Product Security Incident Response Plan.
1.4.3. Security exams and audits
RHACS Cloud Service does not currently hold any external security certifications or attestations.
The Red Hat Information Risk and Security Team has achieved ISO 27001:2013 certification for our Information Security Management System (ISMS).
1.4.4. Systems interoperability security
RHACS Cloud Service supports integrations with registries, CI systems, notification systems, workflow systems like ServiceNow and Jira, and Security information and event management (SIEM) platforms. For more information about supported integrations, see the Integrating documentation. Custom integrations can be implemented using the API or generic webhooks.
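A generic webhook integration can be consumed with a small HTTP receiver. The sketch below is a minimal illustration using only the Python standard library; the payload field names (`policy`, `deployment`) are assumptions for the example, not the actual RHACS notification schema, which you should confirm in the Integrating documentation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_alert(payload: dict) -> str:
    """Extract a short summary from a notification payload.
    The field names here are illustrative assumptions, not the real schema."""
    policy = payload.get("policy", {}).get("name", "unknown policy")
    deployment = payload.get("deployment", {}).get("name", "unknown deployment")
    return f"{policy} violated by {deployment}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body sent by the generic webhook integration.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(summarize_alert(payload))
        self.send_response(200)
        self.end_headers()

# To run the receiver locally (blocking):
# HTTPServer(("127.0.0.1", 8080), WebhookHandler).serve_forever()
```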
RHACS Cloud Service uses a certificate-based architecture (mTLS) for both authentication and end-to-end encryption of all in-flight traffic between the customer’s site and Red Hat. It does not require a VPN. IP allowlists are not supported. Data transfer is encrypted using mTLS. File transfer, including Secure FTP, is not supported.
1.4.5. Malicious code prevention
RHACS Cloud Service is deployed on Red Hat Enterprise Linux CoreOS (RHCOS). The user space in RHCOS is read-only. In addition, all RHACS Cloud Service instances are monitored at runtime by RHACS. Red Hat uses a commercially available, enterprise-grade anti-virus solution for Windows and Mac platforms, which is centrally managed and logged. Anti-virus solutions on Linux-based platforms are not part of Red Hat’s strategy, as they can introduce additional vulnerabilities. Instead, we harden the platform and rely on built-in tooling (for example, SELinux) to protect it.
Red Hat uses SentinelOne and osquery for individual endpoint security, with updates made as they are available from the vendor.
All third-party JavaScript libraries are downloaded and included in build images which are scanned for vulnerabilities before being published.
1.4.6. Systems development lifecycle security
Red Hat follows secure development lifecycle practices. Red Hat Product Security practices are aligned with the Open Web Application Security Project (OWASP) and ISO/IEC 12207:2017 wherever feasible. Red Hat covers OWASP project recommendations along with other secure software development practices to increase the general security posture of our products. OWASP project analysis is included in Red Hat’s automated scanning, security testing, and threat models, as the OWASP project is based on selected CWE weaknesses. Red Hat monitors weaknesses in our products to address issues before they are exploited and become vulnerabilities.
For more information, see the following resources:
Applications are scanned regularly, and the container scan results of the product are publicly available. For example, on the Red Hat Ecosystem Catalog site, you can select a component image such as rhacs-main and click the Security tab to see the health index and the status of security updates.
As part of Red Hat’s policy, a support policy and maintenance plan is issued for any third-party components we depend on that reach end of life.
1.4.7. Software Bill of Materials
Red Hat has published software bill of materials (SBOM) files for core Red Hat offerings. An SBOM is a machine-readable, comprehensive inventory (manifest) of software components and dependencies with license and provenance information. SBOM files support procurement reviews and audits of what is in a set of software applications and libraries. Combined with Vulnerability Exploitability eXchange (VEX) data, SBOMs help an organization address its vulnerability risk assessment process. Together they provide information on where a potential risk might exist (where the vulnerable artifact is included, and the correlation between this artifact and components or the product), and its current status with respect to known vulnerabilities or exploits.
Red Hat, together with other vendors, is working to define the specific requirements for publishing useful SBOMs that can be correlated with Common Security Advisory Framework (CSAF)-VEX files, and inform consumers and partners about how to use this data. For now, SBOM files published by Red Hat, including SBOMs for RHACS Cloud Service, are considered to be beta versions for customer testing and are available at https://access.redhat.com/security/data/sbom/beta/spdx/.
For more detail on Red Hat’s Security data, see The future of Red Hat security data.
1.4.8. Data centers and providers
The following third-party providers are used by Red Hat in providing subscription support services:
- Flexential hosts the Raleigh Data Center, which is the primary data center used to support the Red Hat Customer Portal databases.
- Digital Realty hosts the Phoenix Data Center, which is the secondary backup data center supporting the Red Hat Customer Portal databases.
- Salesforce provides the engine behind the customer ticketing system.
- AWS is used to augment data center infrastructure capacity, some of which is used to support the Red Hat Customer Portal application.
- Akamai is used to host the Web Application Firewall and provide DDoS protection.
- Iron Mountain is used to handle the destruction of sensitive material.
1.5. Access control
User accounts are managed with role-based access control (RBAC). See Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes for more information. Red Hat site reliability engineers (SREs) have access to Central instances. Access is controlled with OpenShift RBAC. Credentials are instantly revoked upon termination.
1.5.1. Authentication provider
When you create a Central instance using Red Hat Hybrid Cloud Console, authentication for the cluster administrator is configured as part of the process. Customers must manage all access to the Central instance as part of their integrated solution. For more information about the available authentication methods, see Understanding authentication providers.
The default identity provider in RHACS Cloud Service is Red Hat Single Sign-On (SSO). Authorization rules are set up to provide administrator access to the user who created the RHACS Cloud Service instance and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be enabled temporarily by SREs. For more information about authentication using Red Hat SSO, see Default access to the ACS Console.
1.5.2. Password management
Red Hat’s password policy requires the use of a complex password. Passwords must contain at least 14 characters and at least three of the following character classes:
- Base 10 digits (0 to 9)
- Upper case characters (A to Z)
- Lower case characters (a to z)
- Punctuation, spaces, and other characters
Most systems require two-factor authentication.
Red Hat follows best password practices according to NIST guidelines.
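The policy above (at least 14 characters and at least three of the four character classes) can be sketched as a small check; this is an illustration of the stated rule, not Red Hat's actual validation code.

```python
def meets_policy(password: str) -> bool:
    """Return True if the password satisfies the policy described above:
    at least 14 characters and at least three of four character classes."""
    if len(password) < 14:
        return False
    classes = [
        any(c.isdigit() for c in password),   # base 10 digits
        any(c.isupper() for c in password),   # upper case characters
        any(c.islower() for c in password),   # lower case characters
        any(not c.isalnum() for c in password),  # punctuation, spaces, other
    ]
    return sum(classes) >= 3

print(meets_policy("CorrectHorse9!"))    # True: 14 chars, 4 classes
print(meets_policy("alllowercaseonly"))  # False: long enough, but 1 class
```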
1.5.3. Remote access
Access for remote support and troubleshooting is strictly controlled through implementation of the following guidelines:
- Strong two-factor authentication for VPN access
- A segregated network with management and administrative networks requiring additional authentication through a bastion host
- All access and management is performed over encrypted sessions
Our customer support team offers Bomgar as a remote access solution for troubleshooting. Bomgar sessions are optional, must be initiated by the customer, and can be monitored and controlled.
To prevent information leakage, logs are shipped to SRE through our security information and event management (SIEM) application, Splunk.
1.5.4. Central access restriction by IP addresses and CIDR ranges
Restricting access to Central ensures that only traffic from trusted IP addresses can reach the security policy enforcement point, thereby strengthening the overall security posture.
To request restricted access to Central, go to the Red Hat Customer Portal page. From the Customer Portal, submit a support case requesting the restriction of access to Central. Be sure to provide the list of trusted IP addresses and CIDR ranges. The maximum number of IP addresses and CIDR ranges is 55.
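The effect of such an allowlist can be illustrated with the standard library's `ipaddress` module; the trusted ranges below are documentation-only example addresses, and the 55-entry cap check mirrors the limit stated above (the actual enforcement is performed by Red Hat, not by customer code).

```python
import ipaddress

# Example trusted ranges (RFC 5737 documentation addresses, not real customer IPs).
TRUSTED = ["203.0.113.0/24", "198.51.100.7/32"]

def is_allowed(client_ip: str, trusted: list) -> bool:
    """Check whether a client IP falls inside any trusted CIDR range."""
    if len(trusted) > 55:
        raise ValueError("at most 55 IP addresses and CIDR ranges are accepted")
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in trusted)

print(is_allowed("203.0.113.10", TRUSTED))  # True
print(is_allowed("192.0.2.1", TRUSTED))     # False
```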
1.6. Compliance
RHACS Cloud Service is certified against key global standards, providing security, compliance, and data protection assurances for your business.
The following table outlines certifications for RHACS Cloud Service.
| Compliance | RHACS Cloud Service on Kubernetes |
|---|---|
| ISO/IEC 27001:2022 | Yes |
| ISO/IEC 27017:2015 | Yes |
| ISO/IEC 27018:2019 | Yes |
| PCI DSS 4.0 | Yes |
| SOC 2 Type 2 | Yes |
| SOC 3 | Yes |
1.7. Data protection
Red Hat provides data protection by using various methods, such as logging, access control, and encryption.
1.7.1. Data storage media protection
To protect our data and client data from risk of theft or destruction, Red Hat employs the following methods:
- Access logging
- Automated account termination procedures
- Application of the principle of least privilege
Data is encrypted in transit and at rest using strong data encryption following NIST guidelines and Federal Information Processing Standards (FIPS) where possible and practical. This includes backup systems.
RHACS Cloud Service encrypts data at rest within the Amazon Relational Database Service (RDS) database by using AWS-managed Key Management Services (KMS) keys. All data between the application and the database, together with data exchange between the systems, are encrypted in transit.
1.7.1.1. Data retention and destruction
Records, including those containing personal data, are retained as required by law. Records not required by law or a reasonable business need are securely removed. Secure data destruction requirements, using military-grade tools, are included in operating procedures. In addition, staff have access to secure document destruction facilities.
1.7.1.2. Encryption
Red Hat uses AWS managed keys which are rotated by AWS each year. For information on the use of keys, see AWS KMS key management. For more information about RDS, see Amazon RDS Security.
1.7.1.3. Multi-tenancy
RHACS Cloud Service isolates tenants by namespace on OpenShift Container Platform. SELinux provides additional isolation. Each customer has a unique RDS instance.
1.7.1.4. Data ownership
Customer data is stored in an encrypted RDS database not available on the public internet. Only Site Reliability Engineers (SREs) have access to it, and the access is audited.
Every RHACS Cloud Service system comes integrated with Red Hat external SSO. Authorization rules are set up to provide administrator access to the user who created the Cloud Service instance and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be temporarily enabled by SREs.
Red Hat collects information about the number of secured clusters connected to RHACS Cloud Service and the usage of features. Metadata generated by the application and stored in the RDS database is owned by the customer. Red Hat only accesses data for troubleshooting purposes and with customer permission. Red Hat access requires audited privilege escalation.
Upon contract termination, Red Hat can perform a secure disk wipe upon request. However, we are unable to physically destroy media (cloud providers such as AWS do not provide this option).
To secure data in case of a breach, you can perform the following actions:
- Disconnect all secured clusters from RHACS Cloud Service immediately using the cluster management page.
- Immediately disable access to the RHACS Cloud Service by using the Access Control page.
- Immediately delete your RHACS instance, which also deletes the RDS instance.
Any AWS RDS (data store) specific access modifications would be implemented by the RHACS Cloud Service SRE engineers.
1.8. Metrics and Logging
1.8.1. Service metrics
Service metrics are internal only. Red Hat provides and maintains the service at the agreed-upon level. Service metrics are accessible only to authorized Red Hat personnel. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
1.8.2. Customer metrics
Core usage capacity metrics are available either through Subscription Watch or the Subscriptions page.
1.8.3. Service logging
System logs for all components of the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are internal and available only to Red Hat personnel. Red Hat does not provide user access to component logs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
1.9. Updates and Upgrades
Red Hat makes a commercially reasonable effort to notify customers prior to updates and upgrades that impact service. The decision regarding the need for a Service update to the Central instance and its timing is the sole responsibility of Red Hat.
Customers have no control over when a Central service update occurs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES. Upgrades to the version of Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are considered part of the service update. Upgrades are transparent to the customer and no connection to any update site is required.
Customers are responsible for timely RHACS Secured Cluster services upgrades that are required to maintain compatibility with RHACS Cloud Service.
Red Hat recommends enabling automatic upgrades for Secured Clusters that are connected to RHACS Cloud Service.
See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information about upgrade versions.
1.10. Availability
Availability and disaster avoidance are extremely important aspects of any security platform. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides numerous protections against failures at multiple levels. To account for possible cloud provider failures, Red Hat established multiple availability zones.
1.10.1. Backup and disaster recovery
The RHACS Cloud Service disaster recovery strategy includes backups of the database and any customizations. This also applies to customer data stored in the Central database. Recovery time varies based on the number of appliances and database sizes; however, because the appliances can be clustered and distributed, the recovery time objective (RTO) can be reduced upfront with proper architecture planning.
All snapshots are created using the appropriate cloud provider snapshot APIs, encrypted and then uploaded to secure object storage, which for Amazon Web Services (AWS) is an S3 bucket.
- Red Hat does not commit to a Recovery Point Objective (RPO) or Recovery Time Objective (RTO). For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
- Site reliability engineering (SRE) performs backups only as a precautionary measure. Backups are stored in the same region as the cluster.
- Customers should deploy multiple availability zone Secured Clusters with workloads that follow Kubernetes best practices to ensure high availability within a region.
Disaster recovery plans are exercised at least annually. A Business Continuity Management standard and guideline are in place so that the business continuity (BC) lifecycle is consistently followed throughout the organization. This policy includes a requirement for testing at least annually, or with any major change to functional plans. Review sessions are required after any plan exercise or activation, and plan updates are made as needed.
Red Hat has generator backup systems. Our IT production systems are hosted in a Tier 3 data center facility that undergoes recurring testing to ensure redundancy is operational. The facility is audited yearly to validate compliance.
1.11. Getting support for RHACS Cloud Service
If you experience difficulty with a procedure described in this documentation, or with RHACS Cloud Service in general, visit the Red Hat Customer Portal.
From the Customer Portal, you can perform the following actions:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in RHACS Cloud Service. Insights provides details about issues and, if available, information on how to solve a problem.
1.12. Service removal
You can delete RHACS Cloud Service using the default delete operations from the Red Hat Hybrid Cloud Console. Deleting the RHACS Cloud Service Central instance automatically removes all RHACS components. Deletion is not reversible.
1.13. Pricing
For information about subscription fees, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
1.14. Service Level Agreement
For more information about the Service Level Agreements (SLAs) offered for Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES.
Chapter 2. Overview of responsibilities for Red Hat Advanced Cluster Security Cloud Service
This documentation outlines Red Hat and customer responsibilities for the RHACS Cloud Service managed service.
2.1. Shared responsibilities for RHACS Cloud Service
While Red Hat manages the RHACS Cloud Service hosted services, also referred to as Central services, the customer has certain responsibilities.
| Resource or action | Red Hat responsibility | Customer responsibility |
|---|---|---|
| Hosted components, also called Central components | | |
| Secured clusters (on-premise or cloud) | | |
Chapter 3. Red Hat Advanced Cluster Security Cloud Service architecture
Discover Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture and concepts.
3.1. Red Hat Advanced Cluster Security Cloud Service architecture overview
Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) is a Red Hat managed Software-as-a-Service (SaaS) platform that lets you protect your Kubernetes and OpenShift Container Platform clusters and applications throughout the build, deploy, and runtime lifecycles.
RHACS Cloud Service includes many built-in DevOps enforcement controls and security-focused best practices based on industry standards such as the Center for Internet Security (CIS) benchmarks and the National Institute of Standards and Technology (NIST) guidelines. You can also integrate it with your existing DevOps tools and workflows to improve security and compliance.
RHACS Cloud Service architecture
The following graphic shows the architecture with the StackRox Scanner and Scanner V4. Installation of Scanner V4 is optional, but provides additional benefits.

Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS.
RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure.
3.2. Central
Red Hat manages Central, the control plane for RHACS Cloud Service. These services include the following components:
- Central: Central is the RHACS application management interface and services. It handles API interactions and user interface (RHACS Portal) access.
- Central DB: Central DB is the database for RHACS and handles all data persistence. It is currently based on PostgreSQL 13.
- Scanner V4: Beginning with version 4.4, RHACS contains the Scanner V4 vulnerability scanner for scanning container images. Scanner V4 is built on ClairCore, which also powers the Clair scanner. Scanner V4 includes the Indexer, Matcher, and Scanner V4 DB components, which are used in scanning.
- StackRox Scanner: The StackRox Scanner is the default scanner in RHACS. The StackRox Scanner originates from a fork of the Clair v2 open source scanner.
- Scanner-DB: This database contains data for the StackRox Scanner.
RHACS scanners analyze each image layer to determine the base operating system and identify programming language packages and packages that were installed by the operating system package manager. They match the findings against known vulnerabilities from various vulnerability sources. In addition, the StackRox Scanner identifies vulnerabilities in the node’s operating system and platform. These capabilities are planned for Scanner V4 in a future release.
3.2.1. Vulnerability data sources
Sources for vulnerabilities depend on the scanner that is used in your system. RHACS contains two scanners: StackRox Scanner and Scanner V4. StackRox Scanner is the default scanner and is deprecated beginning with release 4.6. Scanner V4 was introduced in release 4.4 and is the recommended image scanner.
3.2.1.1. StackRox Scanner sources
StackRox Scanner uses the following vulnerability sources:
- Red Hat OVAL v2
- Alpine Security Database
- Data tracked in Amazon Linux Security Center
- Debian Security Tracker
- Ubuntu CVE Tracker
- NVD: This is used for various purposes, such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date.
Note: This product uses the NVD API but is not endorsed or certified by the NVD.
- Linux manual entries and NVD manual entries: The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data.
- repository-to-cpe.json: Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images.
3.2.1.2. Scanner V4 sources
Scanner V4 uses the following vulnerability sources:
- Red Hat VEX
Used with release 4.6 and later. This source provides vulnerability data in Vulnerability Exploitability eXchange (VEX) format. RHACS takes advantage of VEX benefits to significantly decrease the time needed for the initial loading of vulnerability data, and the space needed to store vulnerability data.
RHACS might list a different number of vulnerabilities when you scan with a RHACS version that uses OVAL, such as version 4.5, than with a version that uses VEX, such as version 4.6. For example, RHACS no longer displays vulnerabilities with a status of "under investigation," while these vulnerabilities were included in previous versions that used OVAL data.
For more information about Red Hat security data, including information about the use of OVAL, Common Security Advisory Framework Version 2.0 (CSAF), and VEX, see The future of Red Hat security data.
- Red Hat CVE Map
- This is used in addition to VEX data for images that appear in the Red Hat Container Catalog.
- OSV
This is used for language-related vulnerabilities, such as Go, Java, JavaScript, Python, and Ruby. This source might provide vulnerability IDs other than CVE IDs for vulnerabilities, such as a GitHub Security Advisory (GHSA) ID.
Note: RHACS uses the OSV database available at OSV.dev under Apache License 2.0.
- NVD
This is used for various purposes such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date.
Note: This product uses the NVD API but is not endorsed or certified by the NVD.
- Additional vulnerability sources
- Alpine Security Database
- Data tracked in Amazon Linux Security Center
- Debian Security Tracker
- Oracle OVAL
- Photon OVAL
- SUSE OVAL
- Ubuntu OVAL
- StackRox: The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data.
- Scanner V4 Indexer sources
Scanner V4 indexer uses the following files to index Red Hat containers:
- repository-to-cpe.json: Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images.
- container-name-repos-map.json: This matches container names to their respective repositories.
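A lookup against repository-to-cpe.json can be sketched as below. The nested `"data"` / `"cpes"` layout and the sample repository name are assumptions for illustration; consult the published file for the authoritative schema.

```python
import json  # the real file would be loaded with json.load(open(path))

def cpes_for_repo(mapping: dict, repo: str) -> list:
    """Return the CPEs mapped to an RPM repository.
    The "data" -> repo -> "cpes" nesting is an assumed layout for this sketch."""
    return mapping.get("data", {}).get(repo, {}).get("cpes", [])

# A hand-built sample in the assumed shape of repository-to-cpe.json:
sample = {
    "data": {
        "rhel-8-for-x86_64-baseos-rpms": {
            "cpes": ["cpe:/o:redhat:enterprise_linux:8::baseos"]
        }
    }
}

print(cpes_for_repo(sample, "rhel-8-for-x86_64-baseos-rpms"))
```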
3.3. Secured cluster services
You install the secured cluster services on each cluster that you want to secure by using the Red Hat Advanced Cluster Security Cloud Service. Secured cluster services include the following components:
- Sensor: Sensor is the service responsible for analyzing and monitoring the cluster. Sensor listens to the OpenShift Container Platform or Kubernetes API and Collector events to report the current state of the cluster. Sensor also triggers deploy-time and runtime violations based on RHACS Cloud Service policies. In addition, Sensor is responsible for all cluster interactions, such as applying network policies, initiating reprocessing of RHACS Cloud Service policies, and interacting with the Admission controller.
- Admission controller: The Admission controller prevents users from creating workloads that violate security policies in RHACS Cloud Service.
- Collector: Collector analyzes and monitors container activity on cluster nodes. It collects container runtime and network activity information and sends the collected data to Sensor.
- StackRox Scanner: In Kubernetes, the secured cluster services include Scanner-slim as an optional component. However, on OpenShift Container Platform, RHACS Cloud Service installs a Scanner-slim version on each secured cluster to scan images in the OpenShift Container Platform integrated registry and optionally other registries.
- Scanner-DB: This database contains data for the StackRox Scanner.
- Scanner V4: Scanner V4 components are installed on the secured cluster if enabled.
- Scanner V4 Indexer: The Scanner V4 Indexer performs image indexing, previously known as image analysis. Given an image and registry credentials, the Indexer pulls the image from the registry. It finds the base operating system, if it exists, and looks for packages. It stores and outputs an index report, which contains the findings for the given image.
- Scanner V4 DB: This component is installed if Scanner V4 is enabled. This database stores information for Scanner V4, including index reports. For best performance, configure a persistent volume claim (PVC) for Scanner V4 DB.
Note: When secured cluster services are installed on the same cluster and in the same namespace as Central services, secured cluster services do not deploy Scanner V4 components. Instead, it is assumed that Central services already include a deployment of Scanner V4.
3.4. Data access and permissions
Red Hat does not have access to the clusters on which you install the secured cluster services. Also, RHACS Cloud Service does not need permission to access the secured clusters. For example, you do not need to create new IAM policies, access roles, or API tokens.
However, RHACS Cloud Service stores the data that secured cluster services send. All data is encrypted within RHACS Cloud Service. Encrypting the data within the RHACS Cloud Service platform helps to ensure the confidentiality and integrity of the data.
When you install secured cluster services on a cluster, it generates data and transmits it to the RHACS Cloud Service. This data is kept secure within the RHACS Cloud Service platform, and only authorized SRE team members and systems can access this data. RHACS Cloud Service uses this data to monitor the security and compliance of your cluster and applications, and to provide valuable insights and analytics that can help you optimize your deployments.
Chapter 4. Getting started with RHACS Cloud Service
Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides security services for your Red Hat OpenShift and Kubernetes clusters. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for information about supported platforms for secured clusters.
Prerequisites
Ensure that you can access the Advanced Cluster Security menu option from the Red Hat Hybrid Cloud Console.
Note: To access the RHACS Cloud Service console, you need your Red Hat Single Sign-On (SSO) credentials, or credentials for another identity provider if one has been configured. See Default access to the ACS console.
4.1. High-level overview of installation steps
The following sections provide an overview of installation steps and links to the relevant documentation.
4.1.1. Securing Red Hat OpenShift clusters
You can secure Red Hat OpenShift clusters by using the RHACS Operator, Helm charts, or the roxctl CLI.
4.1.1.1. Securing Red Hat OpenShift clusters by using the Operator
Procedure
- Verify that the clusters you want to secure meet the default requirements.
- In the Red Hat Hybrid Cloud Console, create an ACS instance.
- On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
- Create the mechanism that the Central instance, also called Central, uses to set up communication with the secured clusters. You can either create an init bundle or a cluster registration secret (CRS). Perform only one of these actions:
  - In the ACS Console, generate an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and Central.
  - Log in to Central and use the roxctl CLI to generate an init bundle.
  - Log in to Central and use the roxctl CLI to generate a CRS.
- On each Red Hat OpenShift cluster, apply the init bundle or CRS.
- On each Red Hat OpenShift cluster, install the RHACS Operator.
- On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project by using the Operator.
- Verify the installation by ensuring that your secured clusters can communicate with the ACS instance.
4.1.1.2. Securing Red Hat OpenShift clusters by using Helm charts
Procedure
- Verify that the clusters you want to secure meet the default requirements.
- In the Red Hat Hybrid Cloud Console, create an ACS instance.
- On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
- Create the mechanism that the Central instance, also called Central, uses to set up communication with the secured clusters. You can either generate an init bundle or a cluster registration secret (CRS). Perform only one of these actions:
  - In the ACS Console, generate an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and Central.
  - Log in to Central and use the roxctl CLI to generate an init bundle.
  - Log in to Central and use the roxctl CLI to generate a CRS.
- On each Red Hat OpenShift cluster, run the helm install command to install RHACS by using Helm charts, specifying the path of the init bundle or CRS.
- Verify the installation by ensuring that your secured clusters can communicate with the ACS instance.
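The Helm-based installation step typically looks like the following sketch. The chart repository URL and chart name follow the RHACS Helm charts; the cluster name, Central endpoint, and init bundle path are placeholders that you must replace with values for your environment, so verify the details against the Helm installation documentation for your RHACS release.

```
# Add the RHACS chart repository and install secured cluster services.
# <init_bundle.yaml>, <cluster_name>, and the endpoint are placeholders.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
$ helm install -n stackrox --create-namespace \
    stackrox-secured-cluster-services rhacs/secured-cluster-services \
    -f <init_bundle.yaml> \
    --set clusterName=<cluster_name> \
    --set centralEndpoint=acs-ABCD12345.acs.rhcloud.com:443
```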
4.1.1.3. Securing Red Hat OpenShift clusters by using the roxctl CLI, also called the manifest method
Procedure
- Verify that the clusters you want to secure meet the default requirements.
- In the Red Hat Hybrid Cloud Console, create an ACS Instance.
- On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
- Perform one of the following actions:
  - In the ACS console, use the legacy installation method to create a cluster bundle.
  - From a system that has access to the monitored cluster, generate the configuration, then extract and run the sensor script from the cluster bundle.
- Verify installation by ensuring that your secured clusters can communicate with the ACS instance.
4.1.2. Securing Kubernetes clusters
You can secure Kubernetes clusters by using Helm charts or the roxctl CLI.
4.1.2.1. Securing Kubernetes clusters by using Helm charts
- Verify that the clusters you want to secure meet the default requirements.
- In the Red Hat Hybrid Cloud Console, create an ACS Instance.
- Create the mechanism that the Central instance, also called Central, uses to set up communication with the secured clusters. You can either create an init bundle or a cluster registration secret (CRS). Perform only one of these actions:
  - In the ACS Console, generate an init bundle. The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and Central.
  - Log in to Central and use the roxctl CLI to generate an init bundle.
  - Log in to Central and use the roxctl CLI to generate a CRS.
- On each Kubernetes cluster, run the helm install command to install RHACS by using Helm charts, specifying the path of the init bundle or CRS.
- Verify the installation by ensuring that your secured clusters can communicate with the ACS instance.
4.1.2.2. Securing Kubernetes clusters by using the roxctl CLI, also called the manifest method
- Verify that the clusters you want to secure meet the default requirements.
- In the Red Hat Hybrid Cloud Console, create an ACS Instance.
- On each cluster you want to secure, create a namespace named stackrox. This namespace will contain the resources for RHACS Cloud Service secured clusters.
- Perform one of the following steps:
  - In the ACS console, use the legacy installation method to create a cluster bundle.
  - From a system that has access to the monitored cluster, generate the configuration, then extract and run the sensor script from the cluster bundle.
- Verify installation by ensuring that your secured clusters can communicate with the ACS instance.
4.2. Default access to the ACS Console
By default, the authentication mechanism available to users is authentication by using Red Hat Single Sign-On (SSO). You cannot delete or change the Red Hat SSO authentication provider. However, you can change the minimum access role and add additional rules, or add another identity provider.
To learn how authentication providers work in ACS, see Understanding authentication providers.
A dedicated OIDC client of sso.redhat.com is created for each ACS Console. All OIDC clients share the same sso.redhat.com realm. Claims from the token issued by sso.redhat.com are mapped to an ACS-issued token as follows:
- realm_access.roles to groups
- org_id to rh_org_id
- is_org_admin to rh_is_org_admin
- sub to userid
The built-in Red Hat SSO authentication provider has the required attribute rh_org_id set to the organization ID assigned to the account of the user who created the RHACS Cloud Service instance. This is the ID of the organizational account that the user belongs to; you can think of it as the "tenant" that owns the user. Only users with the same organizational account can access the ACS console by using the Red Hat SSO authentication provider.
To gain more control over access to your ACS Console, configure another identity provider instead of relying on the Red Hat SSO authentication provider. For more information, see Understanding authentication providers. To make the other authentication provider the first authentication option on the login page, give it a name that is lexicographically smaller than Red Hat SSO.
The minimum access role is set to None. Assigning a different value to this field gives all users with the same organizational account access to the RHACS Cloud Service instance.
Other rules that are set up in the built-in Red Hat SSO authentication provider include the following:
- A rule mapping your userid to Admin
- Rules mapping administrators of the organization to Admin

You can add more rules to grant access to the ACS Console to other users with the same organizational account. For example, you can use email as a key.
Chapter 5. Default resource requirements for Red Hat Advanced Cluster Security Cloud Service
5.1. General requirements for RHACS Cloud Service
Before you can install Red Hat Advanced Cluster Security Cloud Service, your system must meet several requirements.
You must not install RHACS Cloud Service on:
- Amazon Elastic File System (Amazon EFS). Use the Amazon Elastic Block Store (Amazon EBS) with the default gp2 volume type instead.
- Older CPUs that do not have the Streaming SIMD Extensions (SSE) 4.2 instruction set. For example, Intel processors older than Sandy Bridge and AMD processors older than Bulldozer. These processors were released in 2011.
To install RHACS Cloud Service, you must have one of the following systems:
- OpenShift Container Platform version 4.12 or later, and cluster nodes with a supported operating system of Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL)
- A supported managed Kubernetes platform, and cluster nodes with a supported operating system of Amazon Linux, CentOS, Container-Optimized OS from Google, Red Hat Enterprise Linux CoreOS (RHCOS), Debian, Red Hat Enterprise Linux (RHEL), or Ubuntu
For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix.
The following minimum requirements and suggestions apply to cluster nodes.
- Architecture
  Supported architectures are amd64, ppc64le, or s390x.
  Note: Secured cluster services are supported on IBM Power (ppc64le), IBM Z (s390x), and IBM® LinuxONE (s390x) clusters.
- Processor
  3 CPU cores are required.
- Memory
  6 GiB of RAM is required.
  Note: See the default memory and CPU requirements for each component and ensure that the node size can support them.
- Storage
  For RHACS Cloud Service, a persistent volume claim (PVC) is not required. However, a PVC is strongly recommended if you have secured clusters with Scanner V4 enabled. Use solid-state drives (SSDs) for best performance, although you can use another storage type if SSDs are not available.
  Important: You must not use Ceph FS storage with RHACS Cloud Service. Red Hat recommends using RBD block mode PVCs for RHACS Cloud Service.
If you plan to install RHACS Cloud Service by using Helm charts, you must meet the following requirements:
- You must have Helm command-line interface (CLI) v3.2 or newer. Use the helm version command to verify the version of Helm you have installed.
- You must have access to the Red Hat Container Registry. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
5.2. Secured cluster services
Secured cluster services contain the following components:
- Sensor
- Admission controller
- Collector
- Scanner (optional)
- Scanner V4 (optional)
If you use a web proxy or firewall, you must ensure that secured clusters and Central can communicate on HTTPS port 443.
5.2.1. Sensor
Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with the other Red Hat Advanced Cluster Security for Kubernetes components.
CPU and memory requirements
The following table lists the minimum CPU and memory values required to install and run Sensor on secured clusters.
Sensor | CPU | Memory |
---|---|---|
Request | 2 cores | 4 GiB |
Limit | 4 cores | 8 GiB |
5.2.2. Admission controller
The Admission controller prevents users from creating workloads that violate policies you configure.
CPU and memory requirements
By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica.
Admission controller | CPU | Memory |
---|---|---|
Request | 0.05 cores | 100 MiB |
Limit | 0.5 cores | 500 MiB |
5.2.3. Collector
Collector monitors runtime activity on each node in your secured clusters as a DaemonSet. It connects to Sensor to report this information. The collector pod has three containers. The first container is collector, which monitors and reports the runtime activity on the node. The other two are compliance and node-inventory.
Collection requirements
To use the CORE_BPF collection method, the base kernel must support BTF, and the BTF file must be available to Collector. In general, the kernel version must be later than 5.8 (4.18 for RHEL nodes) and the CONFIG_DEBUG_INFO_BTF configuration option must be set.
Collector looks for the BTF file in the standard locations shown in the following list:
Example 5.1. BTF file locations
/sys/kernel/btf/vmlinux
/boot/vmlinux-<kernel-version>
/lib/modules/<kernel-version>/vmlinux-<kernel-version>
/lib/modules/<kernel-version>/build/vmlinux
/usr/lib/modules/<kernel-version>/kernel/vmlinux
/usr/lib/debug/boot/vmlinux-<kernel-version>
/usr/lib/debug/boot/vmlinux-<kernel-version>.debug
/usr/lib/debug/lib/modules/<kernel-version>/vmlinux
If any of these files exist, it is likely that the kernel has BTF support and CORE_BPF is configurable.
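A quick way to apply this check on a node is a small shell helper that walks the candidate paths and prints the first BTF file it finds. This is a sketch, not part of RHACS: the helper name find_btf is illustrative, and the paths mirror the standard locations listed above.

```shell
# find_btf: print the first existing file from a list of candidate paths.
# Returns 1 if none of the candidates exist.
find_btf() {
  for f in "$@"; do
    if [ -e "$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}

# Check the standard locations for the running kernel.
kver="$(uname -r)"
if find_btf \
    /sys/kernel/btf/vmlinux \
    "/boot/vmlinux-${kver}" \
    "/lib/modules/${kver}/vmlinux-${kver}" \
    "/lib/modules/${kver}/build/vmlinux" \
    "/usr/lib/modules/${kver}/kernel/vmlinux" \
    "/usr/lib/debug/boot/vmlinux-${kver}" \
    "/usr/lib/debug/boot/vmlinux-${kver}.debug" \
    "/usr/lib/debug/lib/modules/${kver}/vmlinux"; then
  echo "BTF support looks available; CORE_BPF should be configurable"
else
  echo "no BTF file found; CORE_BPF may not be usable on this node"
fi
```

You can also confirm that the kernel was built with BTF by checking for CONFIG_DEBUG_INFO_BTF=y in the kernel configuration, if your distribution ships it under /boot.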
CPU and memory requirements
By default, the collector pod runs 3 containers. The following tables list the request and limits for each container and the total for each collector pod.
Collector container
Type | CPU | Memory |
---|---|---|
Request | 0.06 cores | 320 MiB |
Limit | 0.9 cores | 1000 MiB |
Compliance container
Type | CPU | Memory |
---|---|---|
Request | 0.01 cores | 10 MiB |
Limit | 1 core | 2000 MiB |
Node-inventory container
Type | CPU | Memory |
---|---|---|
Request | 0.01 cores | 10 MiB |
Limit | 1 core | 500 MiB |
Total collector pod requirements
Type | CPU | Memory |
---|---|---|
Request | 0.07 cores | 340 MiB |
Limit | 2.75 cores | 3500 MiB |
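If the default Collector values do not fit your nodes, resource requests and limits can be overridden in the SecuredCluster custom resource when you install with the Operator. The following fragment is a sketch: the field path spec.perNode.collector.resources follows the SecuredCluster CRD, and the values shown are illustrative, not recommendations; verify the field names against the CRD of your installed Operator version before applying.

```
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  perNode:
    collector:
      # Illustrative override of the Collector container resources.
      resources:
        requests:
          cpu: 100m
          memory: 400Mi
        limits:
          cpu: "1"
          memory: 1500Mi
```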
5.2.4. Scanner
CPU and memory requirements
The requirements in this table are based on the default of 3 replicas.
StackRox Scanner | CPU | Memory |
---|---|---|
Request | 3 cores | 4500 MiB |
Limit | 6 cores | 12 GiB |
The StackRox Scanner requires Scanner DB (PostgreSQL 15) to store data. The following table lists the minimum memory and storage values required to install and run Scanner DB.
Scanner DB | CPU | Memory |
---|---|---|
Request | 0.2 cores | 512 MiB |
Limit | 2 cores | 4 GiB |
5.2.5. Scanner V4
Scanner V4 is optional. If Scanner V4 is installed on secured clusters, the following requirements apply.
CPU, memory, and storage requirements
Scanner V4 Indexer
The requirements in this table are based on the default of 2 replicas.
Scanner V4 Indexer | CPU | Memory |
---|---|---|
Request | 2 cores | 3000 MiB |
Limit | 4 cores | 6 GiB |
Scanner V4 DB
Scanner V4 requires Scanner V4 DB (PostgreSQL 15) to store data. The following table lists the minimum CPU, memory, and storage values required to install and run Scanner V4 DB. For Scanner V4 DB, a PVC is not required, but it is strongly recommended because it ensures optimal performance.
Scanner V4 DB | CPU | Memory | Storage |
---|---|---|---|
Request | 0.2 cores | 2 GiB | 10 GiB |
Limit | 2 cores | 4 GiB | 10 GiB |
Chapter 6. Recommended resource requirements for Red Hat Advanced Cluster Security Cloud Service
The recommended resource guidelines were developed by performing a focused test that created the following objects across a given number of namespaces:
- 10 deployments, with 3 pod replicas in a sleep state, mounting 4 secrets and 4 config maps
- 10 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the previous deployments
- 1 route pointing to the first of the previous services
- 10 secrets containing 2048 random string characters
- 10 config maps containing 2048 random string characters
During the analysis of results, the number of deployments was identified as a primary factor in increased resource usage. The number of deployments was therefore used to estimate the required resources.
6.1. Secured cluster services
Secured cluster services contain the following components:
- Sensor
- Admission controller
- Collector

Note: The Collector component is not included on this page. Its resource requirements are listed on the default resource requirements page.
6.1.1. Sensor
Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector.
Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster.
Deployments | CPU | Memory |
---|---|---|
< 25,000 | 2 cores | 10 GiB |
< 50,000 | 2 cores | 20 GiB |
6.1.2. Admission controller
The admission controller prevents users from creating workloads that violate policies that you configure.
Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster.
Deployments | CPU | Memory |
---|---|---|
< 25,000 | 0.5 cores | 300 MiB |
< 50,000 | 0.5 cores | 600 MiB |
Chapter 7. Setting up RHACS Cloud Service with Red Hat OpenShift secured clusters
7.1. Creating a RHACS Cloud instance on Red Hat Cloud
Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters.
7.1.1. Creating an instance in the console
In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters.
Procedure
To create an ACS instance:
- Log in to the Red Hat Hybrid Cloud Console.
- From the navigation menu, select Advanced Cluster Security → ACS Instances.
- Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list:
- Name: Enter the name of your ACS instance. An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance.
- Cloud provider: The cloud provider where Central is located. Select AWS.
- Cloud region: The region for your cloud provider where Central is located. Select one of the following regions:
- US-East, N. Virginia
- Europe, Ireland
- Availability zones: Use the default value (Multi).
- Click Create instance.
7.1.2. Next steps
- On each Red Hat OpenShift cluster you want to secure, create a project named stackrox. This project will contain the resources for RHACS Cloud Service secured clusters.
7.2. Creating a project on your Red Hat OpenShift secured cluster
Create a project on each Red Hat OpenShift cluster that you want to secure. You then use this project to install RHACS Cloud Service resources by using the Operator or Helm charts.
7.2.1. Creating a project on your cluster
Procedure
- In your OpenShift Container Platform cluster, go to Home → Projects and create a project for RHACS Cloud Service. Use stackrox as the project name.
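If you prefer the command line, you can create the same project with the OpenShift CLI. This is an equivalent sketch of the step above; it assumes you are logged in to the cluster with oc.

```
# Create the stackrox project for RHACS Cloud Service resources.
$ oc new-project stackrox
```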
7.2.2. Next steps
- In the ACS Console, create an init bundle or cluster registration secret (CRS). The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and Central. The CRS can also be used to set up this initial communication and is more flexible and secure.
7.3. Generating an init bundle or cluster registration secret for secured clusters
Before you set up a secured cluster, you must create an init bundle or cluster registration secret (CRS). The secured cluster then uses this bundle or CRS to authenticate with the Central instance, also called Central. You can create an init bundle or CRS by using either the RHACS portal or the roxctl CLI. You then apply the init bundle or CRS by using it to create resources.
You must have the Admin user role to create an init bundle.
RHACS uses a special artifact during installation that allows the RHACS Central component to communicate securely with secured clusters that you are adding. Before the 4.7 release, RHACS used init bundles exclusively for initiating the secure communication channel. Beginning with 4.7, RHACS provides an alternative to init bundles called cluster registration secrets (CRSes).
Cluster registration secrets are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Cluster registration secrets (CRSes) offer improved security and are easier to use. CRSes contain a single token that can be used when installing RHACS by using both Operator and Helm installation methods.
CRSes provide better security because they are only used for registering a new secured cluster. If leaked, the certificates and keys in an init bundle can be used to impersonate services running on a secured cluster. By contrast, the certificate and key in a CRS can only be used for registering a new cluster.
After the cluster is set up by using the CRS, service-specific certificates are issued by Central and sent to the new secured cluster. These service certificates are used for communication between Central and secured clusters. Therefore, a CRS can be revoked after the cluster is registered without disconnecting secured clusters.
You can use either an init bundle or a cluster registration secret (CRS) during installation of a secured cluster. However, RHACS does not yet provide a way to create a CRS by using the portal. Therefore, you must create the CRS by using the roxctl CLI.
You can then apply the init bundle or the CRS by using the OpenShift Container Platform web console or by using the oc or kubectl CLI. If you install RHACS by using Helm, you provide the init bundle or CRS when you run the helm install command.
7.3.1. Generating an init bundle
7.3.1.1. Generating an init bundle by using the RHACS portal
You can create an init bundle containing secrets by using the RHACS portal.
You must have the Admin user role to create an init bundle.
Procedure
- Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method".
- Log in to the RHACS portal.
- If you do not have secured clusters, the Platform Configuration → Clusters page appears.
- Click Create init bundle.
- Enter a name for the cluster init bundle.
- Select your platform.
- Select the installation method you will use for your secured clusters: Operator or Helm chart.
- Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.
Important: Store this bundle securely because it contains secrets.
- Apply the init bundle by using it to create resources on the secured cluster.
- Install secured cluster services on each cluster.
7.3.1.2. Generating an init bundle by using the roxctl CLI
You can create an init bundle with secrets by using the roxctl CLI.
You must have the Admin user role to create init bundles.
Prerequisites
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:
- Set the ROX_API_TOKEN by running the following command:
$ export ROX_API_TOKEN=<api_token>
- Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
In RHACS Cloud Service, when you use roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.
Procedure
To generate a cluster init bundle containing secrets for Helm installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
    central init-bundles generate <cluster_init_bundle_name> \
    --output cluster_init_bundle.yaml
To generate a cluster init bundle containing secrets for Operator installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
    central init-bundles generate <cluster_init_bundle_name> \
    --output-secrets cluster_init_bundle.yaml
Important: Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.
7.3.2. Generating a CRS
7.3.2.1. Generating a CRS by using the roxctl CLI
You can create a cluster registration secret (CRS) by using the roxctl CLI.
You must have the Admin user role to create a CRS.
Prerequisites
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:
- Set the ROX_API_TOKEN by running the following command:
$ export ROX_API_TOKEN=<api_token>
- Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
In RHACS Cloud Service, when you use roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.
Procedure
To generate a CRS, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
    central crs generate <crs_name> \
    --output <file_name>
Important: Ensure that you store this file securely because it contains secrets. You can use the same file to set up multiple secured clusters. You cannot retrieve a previously generated CRS.
Depending on the output that you select, the command might return some INFO messages about the CRS and the YAML file.
Sample output
INFO:  Successfully generated new CRS
INFO:
INFO:  Name:        test-crs
INFO:  Created at:  2025-02-26T19:07:21Z
INFO:  Expires at:  2026-02-26T19:07:00Z
INFO:  Created By:  sample-token
INFO:  ID:          9214a63f-7e0e-485a-baae-0757b0860ac9
# This is a StackRox Cluster Registration Secret (CRS).
# It is used for setting up StackRox secured clusters.
# NOTE: This file contains secret data that allows connecting new secured clusters to central,
# and needs to be handled and stored accordingly.
apiVersion: v1
data:
  crs: EXAMPLEZXlKMlpYSnphVzl1SWpveExDSkRRWE1pT2xzaUxTMHRMUzFDUlVkSlRpQkRSVkpVU1VaSlEwREXAMPLE=
kind: Secret
metadata:
  annotations:
    crs.platform.stackrox.io/created-at: "2025-02-26T19:07:21.800414339Z"
    crs.platform.stackrox.io/expires-at: "2026-02-26T19:07:00Z"
    crs.platform.stackrox.io/id: 9214a63f-7e0e-485a-baae-0757b0860ac9
    crs.platform.stackrox.io/name: test-crs
  creationTimestamp: null
  name: cluster-registration-secret
INFO:  The CRS needs to be stored securely, since it contains secrets.
INFO:  It is not possible to retrieve previously generated CRSs.
7.3.3. Next steps
7.4. Applying an init bundle or cluster registration secret for secured clusters
Apply the init bundle or cluster registration secret (CRS) by using it to create resources.
You must have the Admin user role to apply an init bundle or CRS.
7.4.1. Applying the init bundle on the secured cluster
Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm. See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.
Prerequisites
- You must have generated an init bundle containing secrets.
- You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.
Procedure
To create resources, perform only one of the following steps:
- Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.
- Create resources using the Red Hat OpenShift CLI: Run the following command to create the resources:
$ oc create -f <init_bundle.yaml> \
    -n <stackrox>
Verification
Restart Sensor to pick up the new certificates.
For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section.
7.4.2. Applying the cluster registration secret (CRS) on the secured cluster
Before you configure a secured cluster, you must apply the CRS to the secured cluster. After you have applied the CRS, the services on the secured cluster can communicate securely with RHACS Cloud Service.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.
Prerequisites
- You must have generated a CRS.
Procedure
To create resources, perform only one of the following steps:
- Create resources by using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, go to the stackrox project or the project where you want to install the secured cluster services. In the top menu, click + to open the Import YAML page. You can drag the CRS file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the secret named cluster-registration-secret was created.
- Create resources by using the Red Hat OpenShift CLI: Run the following command to create the resources:
$ oc create -f <file_name.yaml> \
    -n <stackrox>
Verification
Restart Sensor to pick up the new certificates.
For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section.
7.4.3. Next steps
- On each Red Hat OpenShift cluster, install the RHACS Operator.
- Install RHACS secured cluster services in all clusters that you want to monitor.
7.4.4. Additional resources
7.5. Installing the Operator
Install the RHACS Operator on your secured clusters.
7.5.1. Installing the RHACS Operator for RHACS Cloud Service
Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install the RHACS Operator.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You must be using OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix.
Procedure
- In the web console, go to the Operators → OperatorHub page.
- If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator.
- Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page.
- Read the information about the Operator, and then click Install.
On the Install Operator page:
- Keep the default value for Installation mode as All namespaces on the cluster.
- Select a specific namespace in which to install the Operator for the Installed namespace field. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace.
Select automatic or manual updates for Update approval.
If you select automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator.
If you select manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version.
Red Hat recommends enabling automatic upgrades for the Operator in RHACS Cloud Service. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information.
- Click Install.
Verification
- After the installation completes, go to Operators → Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded.
7.5.2. Next steps
- On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project.
7.6. Installing secured cluster resources from RHACS Cloud Service
You can install RHACS Cloud Service on your secured clusters by using the Operator or Helm charts. You can also use the roxctl CLI to install it, but do not use this method unless you have a specific installation need that requires using it.
Prerequisites
- During RHACS installation, you noted the Central instance address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created.
- If you are installing by using the Operator, you created your Red Hat OpenShift cluster that you want to secure and installed the Operator on it.
- You created and downloaded the init bundle or cluster registration secret (CRS) by using the ACS Console or by using the roxctl CLI.
- You applied the init bundle or CRS on the cluster that you want to secure, unless you are installing by using a Helm chart.
7.6.1. Installing RHACS on secured clusters by using the Operator
7.6.1.1. Installing secured cluster services
You can install Secured Cluster services on your clusters by using the Operator, which creates the SecuredCluster custom resource. You must install the Secured Cluster services on every cluster in your environment that you want to monitor.
When you install Red Hat Advanced Cluster Security for Kubernetes:
- If you are installing RHACS for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates.
- Do not install SecuredCluster in projects whose names start with kube, openshift, or redhat, or in the istio-system project.
- If you are installing the RHACS SecuredCluster custom resource on a cluster that also hosts Central, ensure that you install it in the same namespace as Central.
- If you are installing the Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource on a cluster that does not host Central, Red Hat recommends that you install it in its own project and not in the project in which you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator.
Prerequisites
- If you are using OpenShift Container Platform, you must install version 4.12 or later.
- You have installed the RHACS Operator on the cluster that you want to secure, called the secured cluster.
- You have generated an init bundle or cluster registration secret (CRS) and applied it to the cluster in the recommended stackrox namespace.
Procedure
- On the OpenShift Container Platform web console for the secured cluster, go to the Operators → Installed Operators page.
- Click the RHACS Operator.
If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator. Select Project: rhacs-operator → Create project.
Note: If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator.
- Click Installed Operators.
- You should have created the stackrox namespace when you applied the init bundle or the CRS. Make sure that you are in this namespace by verifying that Project: stackrox is selected in the menu.
- In Provided APIs, click Secured Cluster.
- Click Create SecuredCluster.
Select one of the following options in the Configure via field:
- Form view: Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
- YAML view: Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create.
- If you are using Form view, enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services.
- Optional: Add any labels for the cluster.
- Enter a unique name for your SecuredCluster custom resource.
- For Central Endpoint, enter the address of your Central instance. For example, if Central is available at https://central.example.com, then specify the central endpoint as central.example.com.
  - For RHACS Cloud Service, use the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
  - Use the default value of central.stackrox.svc:443 only if you are installing secured cluster services in the same cluster where Central is installed.
  - Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster.
- For the remaining fields, accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs. See "Configuring Secured Cluster services options for RHACS using the Operator" for more information.
- Click Create.
After a brief pause, the SecuredClusters page displays the status of stackrox-secured-cluster-services. You might see the following conditions:
- Conditions: Deployed, Initialized: The secured cluster services have been installed and the secured cluster is communicating with Central.
- Conditions: Initialized, Irreconcilable: The secured cluster is not communicating with Central. Make sure that you applied the init bundle you created in the RHACS web portal to the secured cluster.
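If you choose YAML view instead of Form view, the resource you create might look like the following sketch. This example is illustrative only: the apiVersion and all field values are assumptions, and you must replace the placeholders with your own cluster name and the Central API Endpoint shown for your ACS instance.

```yaml
# Illustrative SecuredCluster sketch for the YAML view (not from a real instance).
# The apiVersion and all values are assumptions; replace the placeholders with your
# own cluster name and the Central API Endpoint of your ACS instance.
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: <name_of_the_secured_cluster>
  centralEndpoint: <central_api_endpoint>:443
```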
Next steps
- Configure additional secured cluster settings (optional).
- Verify installation.
7.6.2. Installing RHACS Cloud Service on secured clusters by using Helm charts
You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters.
First, ensure that you add the Helm chart repository.
7.6.2.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
- Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).
  Note: You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.
- Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
  Note: Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
7.6.2.2. Installing RHACS Cloud Service on secured clusters by using Helm charts without customizations
7.6.2.2.1. Installing the secured-cluster-services Helm chart without customization
Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
Prerequisites
- You must have generated an RHACS init bundle or CRS for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the ACS instance you created.
Procedure
Run one of the following commands on your Kubernetes-based clusters:
If you are using an init bundle, run the following command:
$ helm install -n stackrox --create-namespace \
    stackrox-secured-cluster-services rhacs/secured-cluster-services \
    -f <path_to_cluster_init_bundle.yaml> \ 1
    -f <path_to_pull_secret.yaml> \ 2
    --set clusterName=<name_of_the_secured_cluster> \
    --set centralEndpoint=<endpoint_of_central_service> \ 3
    --set imagePullSecrets.username=<your redhat.com username> \ 4
    --set imagePullSecrets.password=<your redhat.com password> 5
- 1: Use the -f option to specify the path for the init bundle.
- 2: Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3: Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4: Include the user name for your pull secret for Red Hat Container Registry authentication.
- 5: Include the password for your pull secret for Red Hat Container Registry authentication.
Procedure
Run one of the following commands on an OpenShift Container Platform cluster:
If you are using an init bundle, run the following command:
$ helm install -n stackrox --create-namespace \
    stackrox-secured-cluster-services rhacs/secured-cluster-services \
    -f <path_to_cluster_init_bundle.yaml> \ 1
    -f <path_to_pull_secret.yaml> \ 2
    --set clusterName=<name_of_the_secured_cluster> \
    --set centralEndpoint=<endpoint_of_central_service> \ 3
    --set scanner.disable=false 4
- 1: Use the -f option to specify the path for the init bundle.
- 2: Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3: Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4: Set the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.
If you are using a CRS, run the following command:
$ helm install -n stackrox --create-namespace \
    stackrox-secured-cluster-services rhacs/secured-cluster-services \
    --set-file crs.file=<crs_file_name.yaml> \ 1
    -f <path_to_pull_secret.yaml> \ 2
    --set clusterName=<name_of_the_secured_cluster> \
    --set centralEndpoint=<endpoint_of_central_service> \ 3
    --set scanner.disable=false 4
- 1: Use the name of the file in which the generated CRS has been stored.
- 2: Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3: Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4: Set the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.
7.6.2.3. Configuring the secured-cluster-services Helm chart with customizations
You can use Helm chart configuration parameters with the helm install and helm upgrade commands. Specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
- Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
- Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
When using the secured-cluster-services Helm chart, do not change the values.yaml file that is part of the chart.
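For example, the split between the two files might look like the following sketch. The parameter names shown (clusterName, centralEndpoint, ca.cert) are illustrative assumptions; verify them against the configuration parameters documented for the chart before use.

```yaml
# values-public.yaml -- non-sensitive options (illustrative sketch; verify the
# parameter names against the chart's documented configuration parameters)
clusterName: <name_of_the_secured_cluster>
centralEndpoint: <central_api_endpoint>:443

# values-private.yaml -- sensitive options; store this file securely
# ca:
#   cert: |
#     -----BEGIN CERTIFICATE-----
#     ...
#     -----END CERTIFICATE-----
```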
7.6.2.3.1. Configuration parameters
Parameter | Description |
---|---|
| Name of your cluster. |
|
Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
| Address of the Sensor endpoint including port number. |
| Image pull policy for the Sensor container. |
| The internal service-to-service TLS certificate that Sensor uses. |
| The internal service-to-service TLS certificate key that Sensor uses. |
| The memory request for the Sensor container. Use this parameter to override the default value. |
| The CPU request for the Sensor container. Use this parameter to override the default value. |
| The memory limit for the Sensor container. Use this parameter to override the default value. |
| The CPU limit for the Sensor container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
|
The name of the |
| The name of the Collector image. |
| The address of the registry you are using for the main image. |
| The address of the registry you are using for the Collector image. |
| The address of the registry you are using for the Scanner image. |
| The address of the registry you are using for the Scanner DB image. |
| The address of the registry you are using for the Scanner V4 image. |
| The address of the registry you are using for the Scanner V4 DB image. |
|
Image pull policy for |
| Image pull policy for the Collector images. |
|
Tag of |
|
Tag of |
|
Either |
| Image pull policy for the Collector container. |
| Image pull policy for the Compliance container. |
|
If you specify |
| The memory request for the Collector container. Use this parameter to override the default value. |
| The CPU request for the Collector container. Use this parameter to override the default value. |
| The memory limit for the Collector container. Use this parameter to override the default value. |
| The CPU limit for the Collector container. Use this parameter to override the default value. |
| The memory request for the Compliance container. Use this parameter to override the default value. |
| The CPU request for the Compliance container. Use this parameter to override the default value. |
| The memory limit for the Compliance container. Use this parameter to override the default value. |
| The CPU limit for the Compliance container. Use this parameter to override the default value. |
| The internal service-to-service TLS certificate that Collector uses. |
| The internal service-to-service TLS certificate key that Collector uses. |
|
This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
When you set this parameter as |
|
This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
| This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
|
This setting controls the behavior of the admission control service. You must specify |
|
If you set this option to |
|
Set it to |
|
Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the |
| The memory request for the Admission Control container. Use this parameter to override the default value. |
| The CPU request for the Admission Control container. Use this parameter to override the default value. |
| The memory limit for the Admission Control container. Use this parameter to override the default value. |
| The CPU limit for the Admission Control container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
|
If the admission controller webhook needs a specific |
| The internal service-to-service TLS certificate that Admission Control uses. |
| The internal service-to-service TLS certificate key that Admission Control uses. |
|
Use this parameter to override the default |
|
If you specify |
|
Specify |
|
Specify |
|
Deprecated. Specify |
| Resource specification for Sensor. |
| Resource specification for Admission controller. |
| Resource specification for Collector. |
| Resource specification for Collector’s Compliance container. |
|
If you set this option to |
|
If you set this option to |
|
If you set this option to |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| Resource specification for Collector’s Compliance container. |
| Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
|
If you set this option to |
| The minimum number of replicas for autoscaling. Defaults to 2. |
| The maximum number of replicas for autoscaling. Defaults to 5. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| The memory request for the Scanner container. Use this parameter to override the default value. |
| The CPU request for the Scanner container. Use this parameter to override the default value. |
| The memory limit for the Scanner container. Use this parameter to override the default value. |
| The CPU limit for the Scanner container. Use this parameter to override the default value. |
| The memory request for the Scanner DB container. Use this parameter to override the default value. |
| The CPU request for the Scanner DB container. Use this parameter to override the default value. |
| The memory limit for the Scanner DB container. Use this parameter to override the default value. |
| The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
|
If you set this option to |
|
To provide security at the network level, RHACS creates default Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
7.6.2.3.1.1. Environment variables
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"
The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
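As a hedged illustration of that hierarchy, the sketch below sets a generic environment variable for all workloads and overrides it for Sensor only. The nesting under customize is an assumption based on the description above; confirm the exact structure in the chart's values documentation.

```yaml
customize:
  # Generic scope: applied to all workloads created by the chart
  envVars:
    LOG_LEVEL: "info"
  labels:
    owner: "platform-team"
  # Narrower scope (assumed nesting): overrides the generic value for Sensor only
  sensor:
    envVars:
      LOG_LEVEL: "debug"
```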
7.6.2.3.2. Installing the secured-cluster-services Helm chart with customizations
After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:
- Sensor
- Admission controller
- Collector
- Scanner: optional for secured clusters when the StackRox Scanner is installed
- Scanner DB: optional for secured clusters when the StackRox Scanner is installed
- Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
Procedure
Run the following command:
$ helm install -n stackrox \
    --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
    -f <name_of_cluster_init_bundle.yaml> \
    -f <path_to_values_public.yaml> \ 1
    -f <path_to_values_private.yaml> \ 2
    --set imagePullSecrets.username=<username> \ 3
    --set imagePullSecrets.password=<password> 4
- 1: Use the -f option to specify the path for your public YAML configuration file.
- 2: Use the -f option to specify the path for your private YAML configuration file.
- 3: Include the user name for your pull secret for Red Hat Container Registry authentication.
- 4: Include the password for your pull secret for Red Hat Container Registry authentication.
To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:
$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET") 1
- 1: If you are using base64 encoded variables, use the helm install … -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.
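You can rehearse the base64 variant locally before wiring it into CI. The snippet below is a sketch that round-trips a stand-in value, the way a CI secret store might deliver it; the content is fake and only demonstrates the decode step.

```shell
# Simulate a CI secret: a base64-encoded stand-in for the init bundle YAML.
INIT_BUNDLE_YAML_SECRET="$(printf 'apiVersion: v1\nkind: Secret\n' | base64)"

# Decode it the same way the process substitution in the helm install command does.
decoded="$(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)"
echo "$decoded"
```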
7.6.2.4. Changing configuration options after deploying the secured-cluster-services Helm chart
You can make changes to any configuration options after you have deployed the secured-cluster-services
Helm chart.
When using the helm upgrade
command to make changes, the following guidelines and requirements apply:
- You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
- Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.
  - If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
  - If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.
Procedure
- Update the values-public.yaml and values-private.yaml configuration files with new values.
- Run the helm upgrade command and specify the configuration files using the -f option:

$ helm upgrade -n stackrox \
    stackrox-secured-cluster-services rhacs/secured-cluster-services \
    --reuse-values \ 1
    -f <path_to_values_public.yaml> \
    -f <path_to_values_private.yaml>
- 1: If you have modified values that are not included in the values-public.yaml and values-private.yaml files, include the --reuse-values parameter.
7.6.3. Installing RHACS on secured clusters by using the roxctl CLI
To install RHACS on secured clusters by using the CLI, perform the following steps:
- Install the roxctl CLI.
- Install Sensor.
7.6.3.1. Installing the roxctl CLI
You must first download the binary. You can install roxctl
on Linux, Windows, or macOS.
7.6.3.1.1. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
roxctl CLI for Linux is available for amd64, arm64, ppc64le, and s390x architectures.
Procedure
- Determine the roxctl architecture for the target operating system:

$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

- Download the roxctl CLI:

$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Linux/roxctl${arch}"

- Make the roxctl binary executable:

$ chmod +x roxctl

- Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
7.6.3.1.2. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
roxctl CLI for macOS is available for amd64 and arm64 architectures.
Procedure
- Determine the roxctl architecture for the target operating system:

$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

- Download the roxctl CLI:

$ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Darwin/roxctl${arch}"

- Remove all extended attributes from the binary:

$ xattr -c roxctl

- Make the roxctl binary executable:

$ chmod +x roxctl

- Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
7.6.3.1.3. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the roxctl CLI:

$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:

$ roxctl version
7.6.3.2. Installing Sensor
To monitor a cluster, you must deploy Sensor into that cluster. Deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.
To perform an installation by using the manifest installation method, follow only one of the following procedures:
- Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
- Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.
Prerequisites
- You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
7.6.3.2.1. Manifest installation method by using the web portal
Procedure
- On your secured cluster, in the RHACS portal, go to Platform Configuration → Clusters.
- Select Secure a cluster → Legacy installation method.
- Specify a name for the cluster.
Provide appropriate values for the fields based on where you are deploying the Sensor.
- Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- Click Next to continue with the Sensor setup.
Click Download YAML File and Keys to download the cluster bundle (zip archive).
Important
The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.
From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:
$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
7.6.3.2.2. Manifest installation by using the roxctl CLI
Procedure
Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:
$ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT" 1
- 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.
From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:
$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
Verification
Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:
On OpenShift Container Platform, enter the following command:
$ oc get pod -n stackrox -w
On Kubernetes, enter the following command:
$ kubectl get pod -n stackrox -w
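As a sketch of what to look for in that output, the following filters a sample pod listing for anything not in a Running or Completed state. The sample lines and pod names are illustrative, not real output:

```shell
# Hypothetical "get pod" listing: NAME READY STATUS RESTARTS AGE
sample="sensor-6f7d8-abc12     1/1   Running            0   2m
collector-xk9p2        0/1   CrashLoopBackOff   4   2m"

# Print any pod whose STATUS column is neither Running nor Completed.
echo "$sample" | awk '$3 != "Running" && $3 != "Completed" {print $1 " needs attention"}'
```

A pod stuck in a state such as CrashLoopBackOff or ImagePullBackOff is the usual reason a cluster does not reach the Healthy status.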
- Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
7.6.4. Next steps
- Verify installation by ensuring that your secured clusters can communicate with the ACS instance.
7.7. Configuring the proxy for secured cluster services in RHACS Cloud Service
You must configure the proxy settings for secured cluster services within the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) environment to establish a connection between the Secured Cluster and the specified proxy server. This ensures reliable data collection and transmission.
7.7.1. Specifying the environment variables in the SecuredCluster CR
To configure an egress proxy, you can either use the cluster-wide Red Hat OpenShift proxy or specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the SecuredCluster Custom Resource (CR) configuration file so that the services use the proxy and bypass it for internal requests within the specified domain.
The proxy configuration applies to all running services: Sensor, Collector, Admission Controller, and Scanner.
Procedure
Specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables under the customize specification in the SecuredCluster CR configuration file. For example:
# proxy collector
customize:
  envVars:
    - name: HTTP_PROXY
      value: http://egress-proxy.stackrox.svc:xxxx 1
    - name: HTTPS_PROXY
      value: http://egress-proxy.stackrox.svc:xxxx 2
    - name: NO_PROXY
      value: .stackrox.svc 3
- 1 The variable HTTP_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx. This is the proxy server used for HTTP connections.
- 2 The variable HTTPS_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx. This is the proxy server used for HTTPS connections.
- 3 The variable NO_PROXY is set to .stackrox.svc. This variable defines the hostnames or IP addresses that must not be accessed through the proxy server.
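The NO_PROXY value typically works by suffix matching: hosts whose names end in the listed domain are contacted directly, and everything else goes through the proxy. The following is an illustrative shell sketch of that matching rule, not RHACS code:

```shell
# Hosts ending in the NO_PROXY suffix are contacted directly; others go via the proxy.
no_proxy=".stackrox.svc"
bypasses_proxy() {
  case "$1" in
    *"$no_proxy") return 0 ;;   # suffix match: bypass the proxy
    *)            return 1 ;;   # no match: use the proxy
  esac
}

bypasses_proxy "sensor.stackrox.svc" && echo "sensor.stackrox.svc: direct"
bypasses_proxy "example.com"         || echo "example.com: via proxy"
```

This is why the leading dot matters: `.stackrox.svc` matches every service host inside the namespace without matching unrelated domains.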
7.8. Verifying installation of secured clusters
After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful.
To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations.
If no data appears in the ACS Console:
- Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see Installing secured cluster resources from RHACS Cloud Service.
- Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful.
- In the Red Hat OpenShift cluster, go to Platform Configuration → Clusters to verify that the components are healthy and view additional operational information.
- Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
Chapter 8. Setting up RHACS Cloud Service with Kubernetes secured clusters
8.1. Creating an RHACS Cloud Service instance for Kubernetes clusters
Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters.
8.1.1. Creating an instance in the console
In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters.
Procedure
To create an ACS instance:
- Log in to the Red Hat Hybrid Cloud Console.
- From the navigation menu, select Advanced Cluster Security → ACS Instances.
Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list:
- Name: Enter the name of your ACS instance. An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance.
- Cloud provider: The cloud provider where Central is located. Select AWS.
Cloud region: The region for your cloud provider where Central is located. Select one of the following regions:
- US-East, N. Virginia
- Europe, Ireland
- Availability zones: Use the default value (Multi).
- Click Create instance.
8.1.2. Next steps
- On each Kubernetes cluster you want to secure, install secured cluster resources by using Helm charts or the roxctl CLI.
8.2. Generating an init bundle or cluster registration secret for Kubernetes secured clusters
Before you set up a secured cluster, you must create an init bundle or cluster registration secret (CRS). The secured cluster then uses this bundle or CRS to authenticate with the Central instance, also called Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. You then apply the init bundle by using it to create resources.
Cluster registration secrets are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use either an init bundle or a CRS during installation of a secured cluster. However, RHACS does not yet provide a way to create a CRS by using the portal, so you must create the CRS by using the roxctl CLI.
You can then apply the init bundle or the CRS by using the kubectl CLI. If you install RHACS by using Helm, you provide the init bundle or CRS when you run the helm install command.
8.2.1. Generating an init bundle
8.2.1.1. Generating an init bundle by using the RHACS portal
You can create an init bundle containing secrets by using the RHACS portal, also called the ACS Console.
You must have the Admin user role to create an init bundle.
Procedure
- Log in to the RHACS portal.
- If you do not have secured clusters, the Platform Configuration → Clusters page appears.
- Click Create init bundle.
- Enter a name for the cluster init bundle.
- Select your platform.
- Select the installation method you will use for your secured clusters: Operator or Helm chart.
Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method.
Important
Store this bundle securely because it contains secrets.
- Apply the init bundle by using it to create resources on the secured cluster.
- Install secured cluster services on each cluster.
8.2.1.2. Generating an init bundle by using the roxctl CLI
You can create an init bundle with secrets by using the roxctl CLI.
You must have the Admin user role to create init bundles.
Prerequisites
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:
Set the ROX_API_TOKEN by running the following command:
$ export ROX_API_TOKEN=<api_token>
Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.
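Before running the roxctl commands that follow, it can help to confirm that both variables are non-empty. This pre-flight check is a sketch; the values shown are placeholders, not real credentials:

```shell
# Placeholder values for illustration only; use your real API token and
# the Central instance address from the Red Hat Hybrid Cloud Console.
ROX_API_TOKEN="example-token"
ROX_CENTRAL_ADDRESS="acs-ABCD12345.acs.rhcloud.com:443"

# Report any variable that is empty or unset before invoking roxctl.
for v in ROX_API_TOKEN ROX_CENTRAL_ADDRESS; do
  eval "val=\${$v}"
  if [ -n "$val" ]; then
    echo "$v is set"
  else
    echo "$v is missing"
  fi
done
```

Running roxctl with either variable empty fails with an authentication or connection error, so checking both up front saves a round trip.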
Procedure
To generate a cluster init bundle containing secrets for Helm installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output cluster_init_bundle.yaml
To generate a cluster init bundle containing secrets for Operator installations, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central init-bundles generate <cluster_init_bundle_name> \
  --output-secrets cluster_init_bundle.yaml
Important
Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters.
8.2.2. Generating a CRS
8.2.2.1. Generating a CRS by using the roxctl CLI
You can create a cluster registration secret by using the roxctl CLI.
You must have the Admin user role to create a CRS.
Prerequisites
You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:
Set the ROX_API_TOKEN by running the following command:
$ export ROX_API_TOKEN=<api_token>
Set the ROX_CENTRAL_ADDRESS environment variable by running the following command:
$ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com.
Procedure
To generate a CRS, run the following command:
$ roxctl -e "$ROX_CENTRAL_ADDRESS" \
  central crs generate <crs_name> \ 1
  --output <file_name> 2
Important
Ensure that you store this file securely because it contains secrets. You can use the same file to set up multiple secured clusters. You cannot retrieve a previously generated CRS.
Depending on the output that you select, the command might return some INFO messages about the CRS and the YAML file.
Sample output
INFO:   Successfully generated new CRS
INFO:
INFO:   Name:        test-crs
INFO:   Created at:  2025-02-26T19:07:21Z
INFO:   Expires at:  2026-02-26T19:07:00Z
INFO:   Created By:  sample-token
INFO:   ID:          9214a63f-7e0e-485a-baae-0757b0860ac9
# This is a StackRox Cluster Registration Secret (CRS).
# It is used for setting up StackRox secured clusters.
# NOTE: This file contains secret data that allows connecting new secured clusters to central,
# and needs to be handled and stored accordingly.
apiVersion: v1
data:
  crs: EXAMPLEZXlKMlpYSnphVzl1SWpveExDSkRRWE1pT2xzaUxTMHRMUzFDUlVkSlRpQkRSVkpVU1VaSlEwREXAMPLE=
kind: Secret
metadata:
  annotations:
    crs.platform.stackrox.io/created-at: "2025-02-26T19:07:21.800414339Z"
    crs.platform.stackrox.io/expires-at: "2026-02-26T19:07:00Z"
    crs.platform.stackrox.io/id: 9214a63f-7e0e-485a-baae-0757b0860ac9
    crs.platform.stackrox.io/name: test-crs
  creationTimestamp: null
  name: cluster-registration-secret
INFO:   The CRS needs to be stored securely, since it contains secrets.
INFO:   It is not possible to retrieve previously generated CRSs.
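Because the informational messages can appear alongside the Secret manifest, depending on how you capture the output, it can be useful to strip them before applying the file. The following is a sketch; the file path and sample content are illustrative:

```shell
# Build a small illustrative file mixing INFO lines with Secret YAML,
# mimicking captured CRS output.
cat > /tmp/crs-demo.yaml <<'EOF'
INFO: Successfully generated new CRS
apiVersion: v1
kind: Secret
EOF

# Keep only the manifest lines; drop the informational prefix lines.
grep -v '^INFO:' /tmp/crs-demo.yaml
```

After filtering, only valid YAML remains, which a tool such as kubectl can consume.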
8.2.3. Next steps
8.3. Applying an init bundle for Kubernetes secured clusters
Apply the init bundle by using it to create resources.
8.3.1. Applying the init bundle on the secured cluster
Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service.
If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; see "Installing RHACS on secured clusters by using Helm charts" in the additional resources section.
Prerequisites
- You must have generated an init bundle containing secrets.
- You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters.
Procedure
To create resources, perform only one of the following steps:
- Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create. When the command is complete, the display shows that the collector-tls, sensor-tls, and admission-control-tls resources were created.
- Create resources using the Red Hat OpenShift CLI: Run the following command to create the resources:
$ oc create -f <init_bundle.yaml> \ 1
    -n <stackrox> 2
Using the kubectl CLI, run the following commands to create the resources:
$ kubectl create namespace stackrox 1
$ kubectl create -f <init_bundle.yaml> \ 2
    -n <stackrox> 3
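After applying the bundle, the three TLS secrets named earlier should exist in the namespace. As a sketch, the following checks a sample `get secrets -o name` listing for them; the listing itself is illustrative, not live cluster output:

```shell
# Hypothetical output of "kubectl get secrets -n stackrox -o name".
secrets="secret/collector-tls
secret/sensor-tls
secret/admission-control-tls"

# Confirm each expected secret appears in the listing.
for s in collector-tls sensor-tls admission-control-tls; do
  if echo "$secrets" | grep -q "^secret/$s$"; then
    echo "$s present"
  else
    echo "$s MISSING"
  fi
done
```

If any of the three secrets is missing, the bundle was not applied in the correct namespace and the secured cluster services cannot authenticate with Central.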
Verification
Restart Sensor to pick up the new certificates.
For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section.
8.3.2. Next steps
- Install RHACS secured cluster services in all clusters that you want to monitor.
8.3.3. Additional resources
8.4. Installing secured cluster services from RHACS Cloud Service on Kubernetes clusters
You can install RHACS Cloud Service on your secured clusters by using one of the following methods:
- By using Helm charts
- By using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it)
8.4.1. Installing RHACS Cloud Service on secured clusters by using Helm charts
You can install RHACS on secured clusters by using Helm charts without customization (using the default values) or by using Helm charts with customized configuration parameters.
First, ensure that you add the Helm chart repository.
8.4.1.1. Adding the Helm chart repository
Procedure
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
Note
Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
8.4.1.2. Installing RHACS Cloud Service on secured clusters by using Helm charts without customizations
8.4.1.2.1. Installing the secured-cluster-services Helm chart without customization
Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
Prerequisites
- You must have generated an RHACS init bundle or CRS for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the ACS instance you created.
Procedure
Run one of the following commands on your Kubernetes-based clusters:
If you are using an init bundle, run the following command:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <path_to_cluster_init_bundle.yaml> \ 1
  -f <path_to_pull_secret.yaml> \ 2
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<endpoint_of_central_service> \ 3
  --set imagePullSecrets.username=<your redhat.com username> \ 4
  --set imagePullSecrets.password=<your redhat.com password> 5
- 1 Use the -f option to specify the path for the init bundle.
- 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4 Include the user name for your pull secret for Red Hat Container Registry authentication.
- 5 Include the password for your pull secret for Red Hat Container Registry authentication.
Procedure
Run one of the following commands on an OpenShift Container Platform cluster:
If you are using an init bundle, run the following command:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <path_to_cluster_init_bundle.yaml> \ 1
  -f <path_to_pull_secret.yaml> \ 2
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<endpoint_of_central_service> \ 3
  --set scanner.disable=false 4
- 1 Use the -f option to specify the path for the init bundle.
- 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4 Set the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.
If you are using a CRS, run the following command:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  --set-file crs.file=<crs_file_name.yaml> \ 1
  -f <path_to_pull_secret.yaml> \ 2
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<endpoint_of_central_service> \ 3
  --set scanner.disable=false 4
- 1 Use the name of the file in which the generated CRS has been stored.
- 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
- 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- 4 Set the value of the scanner.disable parameter to false, which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim.
8.4.1.3. Configuring the secured-cluster-services Helm chart with customizations
This section describes Helm chart configuration parameters that you can use with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
- Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
- Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
While using the secured-cluster-services Helm chart, do not modify the values.yaml file that is part of the chart.
8.4.1.3.1. Configuration parameters
Parameter | Description |
---|---|
| Name of your cluster. |
|
Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
| Address of the Sensor endpoint including port number. |
| Image pull policy for the Sensor container. |
| The internal service-to-service TLS certificate that Sensor uses. |
| The internal service-to-service TLS certificate key that Sensor uses. |
| The memory request for the Sensor container. Use this parameter to override the default value. |
| The CPU request for the Sensor container. Use this parameter to override the default value. |
| The memory limit for the Sensor container. Use this parameter to override the default value. |
| The CPU limit for the Sensor container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
|
The name of the |
| The name of the Collector image. |
| The address of the registry you are using for the main image. |
| The address of the registry you are using for the Collector image. |
| The address of the registry you are using for the Scanner image. |
| The address of the registry you are using for the Scanner DB image. |
| The address of the registry you are using for the Scanner V4 image. |
| The address of the registry you are using for the Scanner V4 DB image. |
|
Image pull policy for |
| Image pull policy for the Collector images. |
|
Tag of |
|
Tag of |
|
Either |
| Image pull policy for the Collector container. |
| Image pull policy for the Compliance container. |
|
If you specify |
| The memory request for the Collector container. Use this parameter to override the default value. |
| The CPU request for the Collector container. Use this parameter to override the default value. |
| The memory limit for the Collector container. Use this parameter to override the default value. |
| The CPU limit for the Collector container. Use this parameter to override the default value. |
| The memory request for the Compliance container. Use this parameter to override the default value. |
| The CPU request for the Compliance container. Use this parameter to override the default value. |
| The memory limit for the Compliance container. Use this parameter to override the default value. |
| The CPU limit for the Compliance container. Use this parameter to override the default value. |
| The internal service-to-service TLS certificate that Collector uses. |
| The internal service-to-service TLS certificate key that Collector uses. |
|
This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
When you set this parameter as |
|
This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
| This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
|
This setting controls the behavior of the admission control service. You must specify |
|
If you set this option to |
|
Set it to |
|
Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the |
| The memory request for the Admission Control container. Use this parameter to override the default value. |
| The CPU request for the Admission Control container. Use this parameter to override the default value. |
| The memory limit for the Admission Control container. Use this parameter to override the default value. |
| The CPU limit for the Admission Control container. Use this parameter to override the default value. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
|
If the admission controller webhook needs a specific |
| The internal service-to-service TLS certificate that Admission Control uses. |
| The internal service-to-service TLS certificate key that Admission Control uses. |
|
Use this parameter to override the default |
|
If you specify |
|
Specify |
|
Specify |
|
Deprecated. Specify |
| Resource specification for Sensor. |
| Resource specification for Admission controller. |
| Resource specification for Collector. |
| Resource specification for Collector’s Compliance container. |
|
If you set this option to |
|
If you set this option to |
|
If you set this option to |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| Resource specification for Collector’s Compliance container. |
| Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
|
If you set this option to |
| The minimum number of replicas for autoscaling. Defaults to 2. |
| The maximum number of replicas for autoscaling. Defaults to 5. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
Specify a node selector label as |
| If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
| The memory request for the Scanner container. Use this parameter to override the default value. |
| The CPU request for the Scanner container. Use this parameter to override the default value. |
| The memory limit for the Scanner container. Use this parameter to override the default value. |
| The CPU limit for the Scanner container. Use this parameter to override the default value. |
| The memory request for the Scanner DB container. Use this parameter to override the default value. |
| The CPU request for the Scanner DB container. Use this parameter to override the default value. |
| The memory limit for the Scanner DB container. Use this parameter to override the default value. |
| The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
|
If you set this option to |
|
To provide security at the network level, RHACS creates default Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. |
8.4.1.3.1.1. Environment variables
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
  envVars:
    ENV_VAR1: "value1"
    ENV_VAR2: "value2"
The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
8.4.1.3.2. Installing the secured-cluster-services Helm chart with customizations
After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components:
- Sensor
- Admission controller
- Collector
- Scanner: optional for secured clusters when the StackRox Scanner is installed
- Scanner DB: optional for secured clusters when the StackRox Scanner is installed
- Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed
Prerequisites
- You must have generated an RHACS init bundle for your cluster.
- You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
- You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
Procedure
Run the following command:
$ helm install -n stackrox \
  --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <name_of_cluster_init_bundle.yaml> \
  -f <path_to_values_public.yaml> \ 1
  -f <path_to_values_private.yaml> \ 2
  --set imagePullSecrets.username=<username> \ 3
  --set imagePullSecrets.password=<password> 4
1. Use the -f option to specify the path for your public YAML configuration file.
2. Use the -f option to specify the path for your private YAML configuration file.
3. Include the user name for your pull secret for Red Hat Container Registry authentication.
4. Include the password for your pull secret for Red Hat Container Registry authentication.
To deploy the secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command:
$ helm install ... -f <(echo "$INIT_BUNDLE_YAML_SECRET") 1
1. If you are using base64 encoded variables, use the helm install … -f <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead.
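As a sketch of the base64 variant, the decode step can be exercised in isolation (the bundle content below is a stand-in, not a real init bundle):

```shell
# Stand-in for a base64-encoded init bundle stored in a CI secret:
INIT_BUNDLE_YAML_SECRET="$(printf 'clusterName: production' | base64)"

# This is what <(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)
# presents to `helm install` as a file:
decoded="$(echo "$INIT_BUNDLE_YAML_SECRET" | base64 --decode)"
echo "$decoded"
```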
8.4.1.4. Changing configuration options after deploying the secured-cluster-services Helm chart
You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart.
When using the helm upgrade command to make changes, the following guidelines and requirements apply:
- You can also specify configuration values by using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.
- Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.
  - If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.
  - If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.
Procedure
1. Update the values-public.yaml and values-private.yaml configuration files with new values.
2. Run the helm upgrade command and specify the configuration files by using the -f option:

$ helm upgrade -n stackrox \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  --reuse-values \ 1
  -f <path_to_values_public.yaml> \
  -f <path_to_values_private.yaml>
1. If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter.
8.4.2. Installing RHACS on secured clusters by using the roxctl CLI
To install RHACS on secured clusters by using the CLI, perform the following steps:
1. Install the roxctl CLI.
2. Install Sensor.
8.4.2.1. Installing the roxctl CLI
You must first download the binary. You can install roxctl on Linux, Windows, or macOS.
8.4.2.1.1. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
The roxctl CLI for Linux is available for the amd64, arm64, ppc64le, and s390x architectures.
Procedure
1. Determine the roxctl architecture for the target operating system:

   $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2. Download the roxctl CLI:

   $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Linux/roxctl${arch}"

3. Make the roxctl binary executable:

   $ chmod +x roxctl

4. Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

   $ echo $PATH
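The architecture-suffix expression in the first step can be exercised with fixed sample values to see what it appends to the download URL (a sketch; the helper function is not part of the procedure):

```shell
# Reproduce the suffix logic with a simulated `uname -m` value:
suffix_for() {
  arch="$(echo "$1" | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  echo "$arch"
}

amd64_suffix="$(suffix_for x86_64)"   # empty: the amd64 binary has no suffix
s390x_suffix="$(suffix_for s390x)"    # "-s390x" is appended to "roxctl"
echo "amd64:'${amd64_suffix}' s390x:'${s390x_suffix}'"
```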
Verification
Verify the roxctl version you have installed:

$ roxctl version
8.4.2.1.2. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
The roxctl CLI for macOS is available for the amd64 and arm64 architectures.
Procedure
1. Determine the roxctl architecture for the target operating system:

   $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2. Download the roxctl CLI:

   $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Darwin/roxctl${arch}"

3. Remove all extended attributes from the binary:

   $ xattr -c roxctl

4. Make the roxctl binary executable:

   $ chmod +x roxctl

5. Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

   $ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
8.4.2.1.3. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
The roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the roxctl CLI:

$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:

$ roxctl version
8.4.2.2. Installing Sensor
To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method.
To perform an installation by using the manifest installation method, follow only one of the following procedures:
- Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script.
- Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance.
Prerequisites
- You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
8.4.2.2.1. Manifest installation method by using the web portal
Procedure
- On your secured cluster, in the RHACS portal, go to Platform Configuration → Clusters.
- Select Secure a cluster → Legacy installation method.
- Specify a name for the cluster.
Provide appropriate values for the fields based on where you are deploying the Sensor.
- Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security → ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created.
- Click Next to continue with the Sensor setup.
Click Download YAML File and Keys to download the cluster bundle (zip archive).
Important: The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.
From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
8.4.2.2.2. Manifest installation by using the roxctl CLI
Procedure
Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command:
$ roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "$ROX_ENDPOINT" 1
1. For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x.
From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle:

$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help.
After Sensor is deployed, it contacts Central and provides cluster information.
Verification
Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:
On Kubernetes, enter the following command:
$ kubectl get pod -n stackrox -w
- Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
8.5. Verifying installation of secured clusters
After installing RHACS Cloud Service, you can perform some steps to verify that the installation was successful.
To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations.
If no data appears in the ACS Console:
- Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see the instructions for installing by using Helm charts or by using the roxctl CLI.
- Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful.
- Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value should be the same value as shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
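As a minimal sketch, the field to check in the SecuredCluster CR looks like this (the resource name, cluster name, and endpoint value are placeholders):

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
spec:
  clusterName: my-cluster
  # must match the Central API Endpoint shown in the ACS instance details
  centralEndpoint: acs-instance.example.com:443
```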
Chapter 9. Upgrading RHACS Cloud Service
9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator
Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service.
You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service.
9.1.1. Preparing to upgrade
Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps:
- If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF. For more information, see "Changing the collection method".
9.1.1.1. Changing the collection method
If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per-node collection setting is set to CORE_BPF before you upgrade.
Procedure
- In the OpenShift Container Platform web console, go to the RHACS Operator page.
- In the top navigation menu, select Secured Cluster.
- Click the instance name, for example, stackrox-secured-cluster-services.
Use one of the following methods to change the setting:
- In the Form view, under Per Node Settings → Collector Settings → Collection, select CORE_BPF.
- Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF, change it to CORE_BPF.
- Click Save.
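After you save, the relevant fragment of the SecuredCluster CR should look like this (a minimal sketch; the metadata values are illustrative):

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
spec:
  perNode:
    collector:
      collection: CORE_BPF
```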
Additional resources
9.1.2. Rolling back an Operator upgrade for secured clusters
To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console.
On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster.
9.1.2.1. Rolling back an Operator upgrade by using the CLI
You can roll back the Operator version by using CLI commands.
Procedure
Delete the OLM subscription:
For OpenShift Container Platform, run the following command:
$ oc -n rhacs-operator delete subscription rhacs-operator
For Kubernetes, run the following command:
$ kubectl -n rhacs-operator delete subscription rhacs-operator
Delete the cluster service version (CSV):
For OpenShift Container Platform, run the following command:
$ oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
For Kubernetes, run the following command:
$ kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator
- Install the latest version of the Operator on the rolled back channel.
9.1.2.2. Rolling back an Operator upgrade by using the web console
You can roll back the Operator version by using the OpenShift Container Platform web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Go to the Operators → Installed Operators page.
- Click the RHACS Operator.
- On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates.
- Install the latest version of the Operator on the rolled back channel.
Additional resources
9.1.3. Troubleshooting Operator upgrade issues
Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator.
9.1.3.1. Central or Secured cluster fails to deploy
If the RHACS Operator encounters either of the following conditions, check the custom resource conditions to find the issue:
- If the Operator fails to deploy Secured Cluster
- If the Operator fails to apply CR changes to actual resources
For Secured clusters, run the following command to check the conditions:
$ oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1
1. If you use Kubernetes, enter kubectl instead of oc.
You can identify configuration errors from the conditions output:
Example output
Conditions:
  Last Transition Time:  2023-04-19T10:49:57Z
  Status:                False
  Type:                  Deployed
  Last Transition Time:  2023-04-19T10:49:57Z
  Status:                True
  Type:                  Initialized
  Last Transition Time:  2023-04-19T10:59:10Z
  Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
  Reason:                ReconcileError
  Status:                True
  Type:                  Irreconcilable
  Last Transition Time:  2023-04-19T10:49:57Z
  Message:               No proxy configuration is desired
  Reason:                NoProxyConfig
  Status:                False
  Type:                  ProxyConfigFailed
  Last Transition Time:  2023-04-19T10:49:57Z
  Message:               Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit
  Reason:                InstallError
  Status:                True
  Type:                  ReleaseFailed
Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs:
$ oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1

1. If you use Kubernetes, enter kubectl instead of oc.
9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts
You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts.
If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command.
9.2.1. Updating the Helm chart repository
You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes.
Prerequisites
- You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository.
- You must be using Helm version 3.8.3 or newer.
Procedure
Update the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository:
$ helm repo update
Verification
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
9.2.2. Running the Helm upgrade command
You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Prerequisites
- You must have access to the values-private.yaml configuration file that you used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands.
Procedure
Run the helm upgrade command and specify the configuration files by using the -f option:

$ helm upgrade -n stackrox stackrox-secured-cluster-services \
  rhacs/secured-cluster-services --version <current-rhacs-version> \ 1
  -f values-private.yaml

1. Use the -f option to specify the path for your YAML configuration file.
9.2.3. Additional resources
9.3. Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI
You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI.
You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters.
9.3.1. Upgrading the roxctl CLI
To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version.
9.3.1.1. Uninstalling the roxctl CLI
You can uninstall the roxctl CLI binary on Linux by using the following procedure.
Procedure
Find and delete the roxctl binary:

$ ROXPATH=$(which roxctl) && rm -f $ROXPATH 1

1. Depending on your environment, you might need administrator rights to delete the roxctl binary.
9.3.1.2. Installing the roxctl CLI on Linux
You can install the roxctl CLI binary on Linux by using the following procedure.
The roxctl CLI for Linux is available for the amd64, arm64, ppc64le, and s390x architectures.
Procedure
1. Determine the roxctl architecture for the target operating system:

   $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2. Download the roxctl CLI:

   $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Linux/roxctl${arch}"

3. Make the roxctl binary executable:

   $ chmod +x roxctl

4. Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

   $ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
9.3.1.3. Installing the roxctl CLI on macOS
You can install the roxctl CLI binary on macOS by using the following procedure.
The roxctl CLI for macOS is available for the amd64 and arm64 architectures.
Procedure
1. Determine the roxctl architecture for the target operating system:

   $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"

2. Download the roxctl CLI:

   $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Darwin/roxctl${arch}"

3. Remove all extended attributes from the binary:

   $ xattr -c roxctl

4. Make the roxctl binary executable:

   $ chmod +x roxctl

5. Place the roxctl binary in a directory that is on your PATH. To check your PATH, execute the following command:

   $ echo $PATH
Verification
Verify the roxctl version you have installed:

$ roxctl version
9.3.1.4. Installing the roxctl CLI on Windows
You can install the roxctl CLI binary on Windows by using the following procedure.
The roxctl CLI for Windows is available for the amd64 architecture.
Procedure
Download the roxctl CLI:

$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.3/bin/Windows/roxctl.exe
Verification
Verify the roxctl version you have installed:

$ roxctl version
9.3.2. Upgrading all secured clusters manually
To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters.
To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions.
9.3.2.1. Updating other images
You must update the Sensor, Collector, Compliance, and Admission controller images on each secured cluster when you are not using automatic upgrades.
If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.
Procedure
1. Update the Sensor image:

   $ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3 1

2. Update the Compliance image:

   $ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3 1

3. Update the Collector image:

   $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.3 1

4. Update the admission control image:

   $ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.3

1. If you use Kubernetes, enter kubectl instead of oc.
If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs).
For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section.
Additional resources
9.3.2.2. Migrating SCCs during the manual upgrade
By migrating the security context constraints (SCCs) during the manual upgrade by using the roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters.
Procedure
1. List all of the RHACS services that are deployed on all secured clusters:

   $ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'

   Example output

   Name: admission-control-6f4dcc6b4c-2phwd
   openshift.io/scc: stackrox-admission-control
   #...
   Name: central-575487bfcb-sjdx8
   openshift.io/scc: stackrox-central
   Name: central-db-7c7885bb-6bgbd
   openshift.io/scc: stackrox-central-db
   Name: collector-56nkr
   openshift.io/scc: stackrox-collector
   #...
   Name: scanner-68fc55b599-f2wm6
   openshift.io/scc: stackrox-scanner
   Name: scanner-68fc55b599-fztlh
   #...
   Name: sensor-84545f86b7-xgdwf
   openshift.io/scc: stackrox-sensor
   #...

   In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field.
2. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs.
To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps:
Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content:

Example 9.1. Example YAML file

apiVersion: rbac.authorization.k8s.io/v1
kind: Role 1
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: use-privileged-scc 2
  namespace: stackrox 3
rules: 4
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding 5
metadata:
  annotations:
    email: support@stackrox.com
    owner: stackrox
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: stackrox-secured-cluster-services
    app.kubernetes.io/name: stackrox
    app.kubernetes.io/part-of: stackrox-secured-cluster-services
    app.kubernetes.io/version: 4.4.0
    auto-upgrade.stackrox.io/component: sensor
  name: collector-use-scc 6
  namespace: stackrox
roleRef: 7
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-privileged-scc
subjects: 8
- kind: ServiceAccount
  name: collector
  namespace: stackrox
---
1. The type of Kubernetes resource, in this example, Role.
2. The name of the role resource.
3. The namespace in which the role is created.
4. Describes the permissions granted by the role resource.
5. The type of Kubernetes resource, in this example, RoleBinding.
6. The name of the role binding resource.
7. Specifies the role to bind in the same namespace.
8. Specifies the subjects that are bound to the role.
Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command:

$ oc -n stackrox create -f ./upgrade-scs.yaml

Important: You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file.
Delete the SCCs that are specific to RHACS by running the following command:

$ oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor

Important: You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster.
Verification
Ensure that all the pods are using the correct SCCs by running the following command:
$ oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:'
Compare the output with the following table:
Component             Previous custom SCC          New Red Hat OpenShift 4 SCC
Central               stackrox-central             nonroot-v2
Central-db            stackrox-central-db          nonroot-v2
Scanner               stackrox-scanner             nonroot-v2
Scanner-db            stackrox-scanner             nonroot-v2
Admission Controller  stackrox-admission-control   restricted-v2
Collector             stackrox-collector           privileged
Sensor                stackrox-sensor              restricted-v2
9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Procedure
1. Run the following command to edit the variable for the Sensor deployment:

   $ oc -n stackrox edit deploy/sensor 1

   1. If you use Kubernetes, enter kubectl instead of oc.

2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
3. Save the file.
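The in-editor change amounts to renaming the variable, which can be sketched as follows (the manifest fragment and the value shown are stand-ins, not values from a real deployment):

```shell
# Stand-in for the env entry inside the Sensor deployment manifest:
fragment='- name: GOMEMLIMIT
  value: "3500MiB"'

# Rename the variable; the value shown here is illustrative:
renamed="$(echo "$fragment" | sed 's/GOMEMLIMIT/ROX_MEMLIMIT/')"
echo "$renamed"
```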
9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Procedure
1. Run the following command to edit the variable for the Collector deployment:

   $ oc -n stackrox edit deploy/collector 1

   1. If you use Kubernetes, enter kubectl instead of oc.

2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
3. Save the file.
9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment
Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment.
Procedure
1. Run the following command to edit the variable for the Admission Controller deployment:

   $ oc -n stackrox edit deploy/admission-control 1

   1. If you use Kubernetes, enter kubectl instead of oc.

2. Replace the GOMEMLIMIT variable with ROX_MEMLIMIT.
3. Save the file.
9.3.2.2.4. Verifying secured cluster upgrade
After you have upgraded secured clusters, verify that the updated pods are working.
9.3.3. Enabling RHCOS node scanning with the StackRox Scanner
If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).
Prerequisites
- For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix. For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy.
- This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner".
Procedure
Run one of the following commands to update the compliance container.
For a default compliance container with metrics disabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
For a compliance container with Prometheus metrics enabled, run the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
Update the Collector DaemonSet (DS) by taking the following steps:
Add new volume mounts to Collector DS by running the following command:
$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
Add the new NodeScanner container by running the following command:

$ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.3","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
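Because these patch payloads are long single-line JSON strings, it can help to validate them before applying (a sketch; the payload below is an abbreviated stand-in for the volume patch above):

```shell
# Abbreviated stand-in for a collector DaemonSet patch payload:
patch='{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'

# json.tool exits non-zero on malformed JSON, so this doubles as a check
# before handing the payload to `oc patch`:
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
```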
Additional resources