Installing and configuring


Red Hat OpenShift Pipelines 1.20

Installing and configuring OpenShift Pipelines

Red Hat OpenShift Documentation Team

Abstract

This document provides information about installing and configuring OpenShift Pipelines.

Chapter 1. Installing OpenShift Pipelines

This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the oc CLI.
  • You have installed the OpenShift Pipelines (tkn) CLI on your local system.
  • Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually.
Note

In a cluster with both Windows and Linux nodes, Red Hat OpenShift Pipelines can run on only Linux nodes.

You can install the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to automatically configure the necessary custom resources (CRs) for your pipelines. This method provides a graphical interface to manage the installation and seamless upgrades of the Operator and its components.

The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev. In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev, tektontriggers.operator.tekton.dev, and tektonaddons.operator.tekton.dev.

If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary.

Warning

If you manually changed your existing installation, for example by changing the target namespace in the config.operator.tekton.dev CRD instance (the resource named cluster), the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator.

The Red Hat OpenShift Pipelines Operator now provides the option to select the components that you want to install by specifying profiles as part of the TektonConfig custom resource (CR). The Operator automatically installs the TektonConfig CR when you install the Operator. The supported profiles are:

  • Lite: This profile installs only Tekton Pipelines.
  • Basic: This profile installs Tekton Pipelines, Tekton Triggers, Tekton Chains, and Tekton Results.
  • All: This is the default profile used when you install the TektonConfig CR. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, Tekton Results, Pipelines as Code, and Tekton add-ons. Tekton add-ons include the ClusterTriggerBindings, ConsoleCLIDownload, ConsoleQuickStart, and ConsoleYAMLSample resources, and the tasks and step action definitions available by using the cluster resolver from the openshift-pipelines namespace.
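To apply a profile, set the profile field in the TektonConfig CR. The following fragment is a minimal sketch; the config instance name and the openshift-pipelines target namespace are the Operator defaults:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  # Select which components the Operator installs: lite, basic, or all
  profile: lite
  targetNamespace: openshift-pipelines
```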

Procedure

  1. In the Administrator perspective of the web console, navigate to Operators → OperatorHub.
  2. Use the Filter by keyword box to search for Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile.
  3. Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install.
  4. On the Install Operator page:

    1. Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be available to all namespaces in the cluster.
    2. Select Automatic for the Approval Strategy. This ensures that the Operator Lifecycle Manager (OLM) automatically handles future upgrades to the Operator. If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
    3. Select an Update Channel.

      • The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Currently, it is the default channel for installing the Red Hat OpenShift Pipelines Operator.
      • To install a specific version of the Red Hat OpenShift Pipelines Operator, cluster administrators can use the corresponding pipelines-<version> channel. For example, to install the Red Hat OpenShift Pipelines Operator version 1.8.x, you can use the pipelines-1.8 channel.

        Note

        Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are not available. However, in OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator.

  5. Click Install. You will see the Operator listed on the Installed Operators page.

    Note

    The Operator installs automatically into the openshift-operators namespace.

  6. Verify that the Status displays Succeeded Up to date to confirm successful installation of Red Hat OpenShift Pipelines Operator.

    Warning

    The success status might show as Succeeded Up to date even if the installation of other components is still in progress. Therefore, it is important to verify the installation manually in the terminal.

  7. Verify that the Red Hat OpenShift Pipelines Operator installed all components successfully. Log in to the cluster in the terminal, and run the following command:

    $ oc get tektonconfig config

    Example output

    NAME     VERSION   READY   REASON
    config   1.20.0    True

    If the READY condition is True, the Operator and its components installed successfully.

    Additionally, check the components' versions by running the following command:

    $ oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac

    Example output

    NAME                                                             VERSION   READY   REASON
    tektonpipeline.operator.tekton.dev/pipeline                      v0.47.0   True
    
    NAME                                                             VERSION   READY   REASON
    tektontrigger.operator.tekton.dev/trigger                        v0.23.1   True
    
    NAME                                                             VERSION   READY   REASON
    tektonchain.operator.tekton.dev/chain                            v0.16.0   True
    
    NAME                                                             VERSION   READY   REASON
    tektonaddon.operator.tekton.dev/addon                            1.11.0    True
    
    NAME                                                             VERSION   READY   REASON
    openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code   v0.19.0   True

You can install the Red Hat OpenShift Pipelines Operator from the OperatorHub by using the command-line interface (CLI) to manage your installation programmatically. After you install the Operator, you can create a Subscription object to subscribe a namespace to the Operator and automate the deployment process.

Procedure

  1. Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-pipelines-operator
      namespace: openshift-operators
    spec:
      channel: <channel_name>
      name: openshift-pipelines-operator-rh
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    spec.channel
    Name of the channel that you want to subscribe to. The pipelines-<version> channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7. The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator.
    spec.name
    Name of the Operator to subscribe to.
    spec.source
    Name of the CatalogSource object that provides the Operator.
    spec.sourceNamespace
    Namespace of the CatalogSource object. Use openshift-marketplace for the default OperatorHub catalog sources.
  2. Create the Subscription object by running the following command:

    $ oc apply -f sub.yaml

    The subscription installs the Red Hat OpenShift Pipelines Operator into the openshift-operators namespace. The Operator automatically installs OpenShift Pipelines into the default openshift-pipelines target namespace.

You can use the Red Hat OpenShift Pipelines Operator to support the installation of pipelines in a restricted network environment. The Operator automatically configures proxy settings for your pipeline containers and resources, ensuring they can operate securely within your network constraints.

The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines, TektonTriggers, Controllers, Webhooks, and Operator Proxy Webhook resources.

By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object.
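For example, to disable the proxy webhook for a namespace, you can label the namespace object as follows; my-namespace is a placeholder name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace  # placeholder namespace name
  labels:
    # Disables the OpenShift Pipelines proxy webhook for this namespace
    operator.tekton.dev/disable-proxy: "true"
```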

Chapter 2. Uninstalling OpenShift Pipelines

Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps:

  1. Delete the Custom Resources (CRs) for the optional components, TektonHub and TektonResult, if these CRs exist, and then delete the TektonConfig CR.

    Important

    If you uninstall the Operator without removing the CRs of optional components, you cannot remove the components later.

  2. Uninstall the Red Hat OpenShift Pipelines Operator.
  3. Delete the Custom Resource Definitions (CRDs) of the operator.tekton.dev group.

Uninstalling only the Operator does not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed.

You can remove the OpenShift Pipelines custom resources (CRs) to clean up the configuration before uninstalling the Operator. This involves deleting optional components such as TektonHub and TektonResult, followed by the main TektonConfig CR.

Procedure

  1. In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
  2. Type TektonHub in the Filter by name field to search for the TektonHub Custom Resource Definition (CRD).
  3. Click the name of the TektonHub CRD to display the details page for the CRD.
  4. Click the Instances tab.
  5. If an instance is displayed, click the Options menu for the displayed instance.
  6. Select Delete TektonHub.
  7. Click Delete to confirm the deletion of the CR.
  8. Repeat these steps, searching for TektonResult and then TektonConfig in the Filter by name box. If any instances are found for these CRDs, delete these instances.
Note

Deleting the CRs also deletes the Red Hat OpenShift Pipelines components and all the tasks and pipelines on the cluster.

Important

If you uninstall the Operator without removing the TektonHub and TektonResult CRs, you cannot remove the Tekton Hub and Tekton Results components later.

You can uninstall the Red Hat OpenShift Pipelines Operator by using the OpenShift Container Platform web console to remove the OpenShift Pipelines service from your cluster. This process involves deleting the Operator subscription and its associated operand instances.

Procedure

  1. From the Operators → OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator.
  2. Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed.
  3. In the Red Hat OpenShift Pipelines Operator description page, click Uninstall.
  4. In the Uninstall Operator? window, select Delete all operand instances for this operator, and then click Uninstall.
Warning

When you uninstall the OpenShift Pipelines Operator, the uninstallation process deletes all resources within the openshift-pipelines target namespace where OpenShift Pipelines is installed, including the secrets you configured.

You can delete the operator.tekton.dev custom resource definitions (CRDs) to fully remove all OpenShift Pipelines traces from your cluster. This step ensures that no residual definitions remain after the Operator uninstallation.

Delete the CustomResourceDefinitions of the operator.tekton.dev group. The Red Hat OpenShift Pipelines Operator creates these CRDs by default during installation.

Procedure

  1. In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
  2. Type operator.tekton.dev in the Filter by name box to search for the CRDs in the operator.tekton.dev group.
  3. To delete each of the displayed CRDs, complete the following steps:

    1. Click the Options menu.
    2. Select Delete CustomResourceDefinition.
    3. Click Delete to confirm the deletion of the CRD.

In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR):

  • Optimizing OpenShift Pipelines performance, including high-availability mode for the OpenShift Pipelines controller
  • Configuring the Red Hat OpenShift Pipelines control plane
  • Changing the default service account
  • Disabling the service monitor
  • Configuring pipeline resolvers
  • Disabling pipeline templates
  • Disabling the integration of Tekton Hub
  • Disabling the automatic creation of RBAC resources
  • Pruning of task runs and pipeline runs
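For example, pruning of task runs and pipeline runs is configured through the pruner specification in the TektonConfig CR. The following is a hedged sketch; the keep count and cron schedule are illustrative values:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
      - pipelinerun
      - taskrun
    keep: 100             # illustrative: retain the 100 most recent runs
    schedule: "0 8 * * *" # illustrative: run the pruning job daily at 08:00
```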

3.1. Prerequisites

  • You have installed the Red Hat OpenShift Pipelines Operator.

You can tune the performance and high availability (HA) of the OpenShift Pipelines controller by editing the TektonConfig custom resource (CR). You can adjust parameters such as replica counts, buckets, and API query limits to optimize the controller for your specific workload requirements.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    performance:
      disable-ha: false
      buckets: 7
      replicas: 5
      threads-per-controller: 2
      kube-api-qps: 5.0
      kube-api-burst: 10

All fields are optional. If you set them, the Red Hat OpenShift Pipelines Operator includes most of the fields as arguments in the openshift-pipelines-controller deployment under the openshift-pipelines-controller container. The OpenShift Pipelines Operator also updates the buckets field in the config-leader-election config map under the openshift-pipelines namespace.

If you do not specify the values, the OpenShift Pipelines Operator does not update those fields and applies the default values for the OpenShift Pipelines controller.

Note

If you change or remove any of the performance fields, the OpenShift Pipelines Operator updates the openshift-pipelines-controller deployment and the config-leader-election configuration map (if the buckets field changed) and re-creates openshift-pipelines-controller pods.

High-availability (HA) mode applies to the OpenShift Pipelines controller, which creates and starts pods based on pipeline run and task run definitions. Without HA mode, a single pod executes these operations, potentially creating significant delays under a high load.

In HA mode, OpenShift Pipelines uses several pods (replicas) to run these operations. Initially, OpenShift Pipelines assigns every controller operation into a bucket. Each replica picks operations from one or more buckets. If two replicas could pick the same operation at the same time, the controller internally determines a leader that executes this operation.

HA mode does not affect execution of task runs after the pods are created.

Table 3.1. Modifiable fields for tuning OpenShift Pipelines performance

  • disable-ha: Enables or disables the high availability (HA) mode. By default, the HA mode is enabled. Default value: false
  • buckets: In HA mode, the number of buckets used to process controller operations. The maximum value is 10. Default value: 1
  • replicas: In HA mode, the number of pods created to process controller operations. Set this value to a number equal to or lower than the buckets value. Default value: 1
  • threads-per-controller: The number of threads (workers) to use when the work queue of the OpenShift Pipelines controller is processed. Default value: 2
  • kube-api-qps: The maximum queries per second (QPS) to the cluster control plane from the REST client. Default value: 5.0
  • kube-api-burst: The maximum burst for a throttle. Default value: 10

Note

The OpenShift Pipelines Operator does not control the number of replicas of the OpenShift Pipelines controller. The replicas setting of the deployment determines the number of replicas. For example, to change the number of replicas to 3, enter the following command:

$ oc --namespace openshift-pipelines scale deployment openshift-pipelines-controller --replicas=3
Important

The kube-api-qps and kube-api-burst fields are multiplied by 2 in the OpenShift Pipelines controller. For example, if the kube-api-qps and kube-api-burst values are 10, the actual QPS and burst values become 20.

You can configure the OpenShift Pipelines control plane to suit your operational needs by editing the TektonConfig custom resource (CR). Customize settings such as metrics collection, sidecar injection, and service account defaults directly through the OpenShift Container Platform web console as needed.

Procedure

  1. In the Administrator perspective of the web console, navigate to Administration → CustomResourceDefinitions.
  2. Use the Search by name box to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD). Click TektonConfig to see the CRD details page.
  3. Click the Instances tab.
  4. Click the config instance to see the TektonConfig CR details.
  5. Click the YAML tab.
  6. Edit the TektonConfig YAML file based on your requirements.

    Example TektonConfig CR

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        running-in-environment-with-injected-sidecars: true
        metrics.taskrun.duration-type: histogram
        metrics.pipelinerun.duration-type: histogram
        await-sidecar-readiness: true
        params:
          - name: enableMetrics
            value: 'true'
        default-service-account: pipeline
        require-git-ssh-secret-known-hosts: false
        enable-tekton-oci-bundles: false
        metrics.taskrun.level: task
        metrics.pipelinerun.level: pipeline
        enable-api-fields: stable
        enable-provenance-in-status: false
        enable-custom-tasks: true
        disable-creds-init: false
        disable-affinity-assistant: true

3.3.1. Modifiable fields with default values

You can change various default configuration fields in the TektonConfig custom resource (CR) to tailor the behavior of your pipelines. This reference lists the available fields, such as sidecar injection and metric levels, along with their default values and descriptions.

The following list includes all modifiable fields with their default values in the TektonConfig CR:

  • running-in-environment-with-injected-sidecars (default: true): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start.

    Note

    For clusters that use injected sidecars, setting this field to false can lead to an unexpected behavior.

  • await-sidecar-readiness (default: true): Set this field to false to stop OpenShift Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. Setting it to false also allows tasks to run in environments that do not support the downwardAPI volume type.
  • default-service-account (default: pipeline): This field has the default service account name to use for the TaskRun and PipelineRun resources, if none is specified.
  • require-git-ssh-secret-known-hosts (default: false): Setting this field to true requires that any Git SSH secret must include the known_hosts field.

    • For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section.
  • enable-tekton-oci-bundles (default: false): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle.
  • enable-api-fields (default: stable): You can enable or disable API fields. Acceptable values are stable, beta, or alpha.

    Note

    Red Hat OpenShift Pipelines does not support the alpha value.

  • enable-provenance-in-status (default: false): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field has metadata about resources used in the task run and pipeline run, such as the source for fetching a remote task or pipeline definition.
  • enable-custom-tasks (default: true): Set this field to false to disable the use of custom tasks in pipelines.
  • disable-creds-init (default: false): Set this field to true to prevent OpenShift Pipelines from scanning attached service accounts and injecting any credentials into your steps.
  • disable-affinity-assistant (default: true): Set this field to false to enable affinity assistant for each TaskRun resource sharing a persistent volume claim workspace.
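As an illustration of the require-git-ssh-secret-known-hosts setting described above, a Git SSH secret that satisfies the requirement includes a known_hosts key. This is a sketch; the secret name, annotation host, and placeholder values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-ssh-credentials  # illustrative secret name
  annotations:
    tekton.dev/git-0: github.com  # host the credentials apply to
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <base64-encoded-private-key>
  known_hosts: <base64-encoded-known-hosts-entries>
```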

You can modify the default values of the following metrics fields in the TektonConfig CR:

  • metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram): Setting these fields determines the duration type for a task or pipeline run. Acceptable values are gauge and histogram.
  • metrics.taskrun.level (default: task): This field determines the level of the task run metrics. Acceptable values are taskrun, task, and namespace.
  • metrics.pipelinerun.level (default: pipeline): This field determines the level of the pipeline run metrics. Acceptable values are pipelinerun, pipeline, and namespace.

3.3.2. Optional configuration fields

You can configure optional fields in the TektonConfig custom resource (CR) to enable advanced features or override specific defaults. These fields, such as default timeouts and pod templates, are not set by default and allow for fine-grained control over your pipeline execution environment.

The following fields do not have a default value, and are considered only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig custom resource (CR).

  • default-timeout-minutes: This field sets the default timeout for the TaskRun and PipelineRun resources, if none is specified when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the task run or pipeline run is timed out and canceled. For example, default-timeout-minutes: 60 sets 60 minutes as default.
  • default-managed-by-label-value: This field contains the default value given to the app.kubernetes.io/managed-by label that is applied to all TaskRun pods, if none is specified. For example, default-managed-by-label-value: tekton-pipelines.
  • default-pod-template: This field sets the default TaskRun and PipelineRun pod templates, if none is specified.
  • default-cloud-events-sink: This field sets the default CloudEvents sink that is used for the TaskRun and PipelineRun resources, if none is specified.
  • default-task-run-workspace-binding: This field contains the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare.
  • default-affinity-assistant-pod-template: This field sets the default PipelineRun pod template that is used for affinity assistant pods, if none is specified.
  • default-max-matrix-combinations-count: This field contains the default maximum number of combinations generated from a matrix, if none is specified.
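A hedged sketch of setting several of these optional fields together in the TektonConfig CR; the values shown are illustrative, not recommendations:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-timeout-minutes: 60                       # time out runs after 60 minutes
    default-managed-by-label-value: tekton-pipelines  # app.kubernetes.io/managed-by value
    default-max-matrix-combinations-count: 256        # cap on generated matrix combinations
```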

You can change the default service account used by OpenShift Pipelines for task and pipeline runs to meet your security or operational requirements. By editing the TektonConfig custom resource (CR), you can specify a different service account for pipelines and triggers.

Example TektonConfig CR

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    default-service-account: pipeline
  trigger:
    default-service-account: pipeline
    enable-api-fields: stable

You can apply custom labels and annotations to the openshift-pipelines namespace to integrate with your organization’s metadata standards or tools. You can configure these metadata fields in the TektonConfig custom resource (CR) and apply them.

Note

Changing the name of the openshift-pipelines namespace is not supported.

Specify the labels and annotations by adding them to the spec.targetNamespaceMetadata specification in the TektonConfig custom resource (CR).

Example of setting labels and annotations for the openshift-pipelines namespace

apiVersion: operator.tekton.dev/v1
kind: TektonConfig
metadata:
  name: config
spec:
  targetNamespaceMetadata:
    labels: {"example-label":"example-value"}
    annotations: {"example-annotation":"example-value"}

You can configure the resync period for the pipelines controller to optimize resource usage in clusters with a large number of pipeline and task runs. By adjusting this interval in the TektonConfig custom resource, you control how often the controller reconciles all resources regardless of events.

The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.

Prerequisites

  • You are logged in to your OpenShift Container Platform cluster with cluster-admin privileges.

Procedure

  • In the TektonConfig custom resource, configure the resync period for the pipelines controller, as shown in the following example.

    Example

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        options:
          deployments:
            tekton-pipelines-controller:
              spec:
                template:
                  spec:
                    containers:
                    - name: tekton-pipelines-controller
                      args:
                        - "-resync-period=24h"

    args
    This example sets the resync period to 24 hours by passing the -resync-period argument to the tekton-pipelines-controller container.

3.7. Disabling the service monitor

You can disable the service monitor in OpenShift Pipelines if you do not need to expose telemetry data or want to reduce resource consumption. This configuration is managed by setting the enableMetrics parameter to false in the TektonConfig custom resource (CR).

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    params:
       - name: enableMetrics
         value: 'false'

3.8. Configuring pipeline resolvers

You can enable or disable specific pipeline resolvers, such as git, cluster, bundle, and hub resolvers, to control how your pipelines fetch resources. These settings are managed within the TektonConfig custom resource (CR), where you can also provide resolver-specific configurations.

  • enable-bundles-resolver
  • enable-cluster-resolver
  • enable-git-resolver
  • enable-hub-resolver

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true

You can also provide resolver-specific configurations in the TektonConfig CR. For example, define the following fields in the map[string]string format to set configurations for each pipeline resolver:

Example

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    bundles-resolver-config:
      default-service-account: pipelines
    cluster-resolver-config:
      default-namespace: test
    git-resolver-config:
      server-url: localhost.com
    hub-resolver-config:
      default-tekton-hub-catalog: tekton

You can disable the automatic installation of resolver tasks and pipeline templates to customize your cluster’s initial state. By modifying the TektonConfig custom resource (CR), you can prevent these default resources from being deployed if they are not required for your environment.

By default, the TektonAddon custom resource (CR) installs resolverTasks and pipelineTemplates resources along with OpenShift Pipelines on the cluster.

Procedure

  1. Edit the TektonConfig CR by running the following command:

    $ oc edit TektonConfig config
  2. In the TektonConfig CR, set the resolverTasks and pipelineTemplates parameter value in .addon.params spec to false:

    Example of disabling resolver task and pipeline template resources

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
    # ...
      addon:
        params:
          - name: resolverTasks
            value: 'false'
          - name: pipelineTemplates
            value: 'false'
    # ...

    Important

    You can set the value of the pipelineTemplates parameter to true only when the value of the resolverTasks parameter is true.

You can disable the automatic installation of Tekton Triggers during the OpenShift Pipelines deployment to manage triggers separately or exclude them from your environment. This is achieved by setting the disabled parameter to true in the TektonConfig custom resource (CR).

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  trigger:
    disabled: true
#...

The default setting is false.

3.11. Disabling the integration of Tekton Hub

You can disable the Tekton Hub integration in the OpenShift Container Platform web console Developer perspective to customize the user experience. This setting is controlled by the enable-devconsole-integration parameter in the TektonConfig custom resource (CR).

Example of disabling Tekton Hub

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  hub:
    params:
      - name: enable-devconsole-integration
        value: 'false'

You can disable the automatic creation of cluster-wide RBAC resources by using the Red Hat OpenShift Pipelines Operator to improve security and control over permissions. This is done by setting the createRbacResource parameter to false in the TektonConfig custom resource (CR), preventing the creation of potentially privileged role bindings.

The default installation of the Red Hat OpenShift Pipelines Operator creates multiple role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You installed the OpenShift CLI (oc).

Procedure

  1. Edit the TektonConfig CR by running the following command:

    $ oc edit TektonConfig config
  2. In the TektonConfig CR, set the createRbacResource parameter value to false:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      params:
      - name: createRbacResource
        value: "false"
    # ...

3.13. Disabling the inline specification of pipelines and tasks

You can disable the inline specification of tasks and pipelines to enforce the use of referenced resources and improve security. By configuring the disable-inline-spec field in the TektonConfig custom resource (CR), you can restrict the use of embedded specifications in Pipeline, PipelineRun, and TaskRun resources.

By default, OpenShift Pipelines supports inline specification of pipelines and tasks in the following cases:

  • You can create a Pipeline CR that includes one or more task specifications, as in the following example:

    Example of an inline specification in a Pipeline CR

    apiVersion: tekton.dev/v1
    kind: Pipeline
    metadata:
      name: pipelineInline
    spec:
      tasks:
        - taskSpec:
    # ...

  • You can create a PipelineRun custom resource (CR) that includes a pipeline specification, as in the following example:

    Example of an inline specification in a PipelineRun CR

    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      name: pipelineRunInline
    spec:
      pipelineSpec:
        tasks:
    # ...

  • You can create a TaskRun custom resource (CR) that includes a task specification, as in the following example:

    Example of an inline specification in a TaskRun CR

    apiVersion: tekton.dev/v1
    kind: TaskRun
    metadata:
      name: taskRunInline
    spec:
      taskSpec:
        steps:
    # ...

You can disable inline specification in some or all of these cases. To disable the inline specification, set the disable-inline-spec field of the .spec.pipeline specification of the TektonConfig CR, as in the following example:

Example configuration that disables inline specification

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    disable-inline-spec: "pipeline,pipelinerun,taskrun"
# ...

You can set the disable-inline-spec parameter to any single value or to a comma-separated list of multiple values. The following values for the parameter are valid:

Table 3.2. Supported values for the disable-inline-spec parameter

pipeline

You cannot use a taskSpec: spec to define a task inside a Pipeline CR. Instead, you must use a taskRef: spec to incorporate a task from a Task CR or to specify a task using a resolver.

pipelinerun

You cannot use a pipelineSpec: spec to define a pipeline inside a PipelineRun CR. Instead, you must use a pipelineRef: spec to incorporate a pipeline from a Pipeline CR or to specify a pipeline using a resolver.

taskrun

You cannot use a taskSpec: spec to define a task inside a TaskRun CR. Instead, you must use a taskRef: spec to incorporate a task from a Task CR or to specify a task using a resolver.
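For example, with the pipelinerun value set, a PipelineRun CR must reference an existing pipeline instead of embedding one. The following is a minimal sketch, assuming a Pipeline named example-pipeline exists:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipeline-run-by-ref
spec:
  pipelineRef:
    name: example-pipeline   # hypothetical Pipeline CR name
# ...
```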

3.14. Configuration of RBAC and Trusted CA flags

You can independently control the creation of RBAC resources and Trusted CA bundle config maps to customize your OpenShift Pipelines installation. The TektonConfig custom resource (CR) provides specific flags, createRbacResource and createCABundleConfigMaps, to manage these components separately.

createRbacResource

Controls the creation of RBAC resources only. This flag does not affect the Trusted CA bundle config maps. Default value: true.

createCABundleConfigMaps

Controls the creation of the Trusted CA bundle and Service CA bundle config maps. Set this flag to false to disable config map creation. Default value: true.

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  params:
    - name: createRbacResource
      value: "true"
    - name: createCABundleConfigMaps
      value: "true"
    - name: legacyPipelineRbac
      value: "true"
params[0].name
Specifies RBAC resource creation.
params[1].name
Specifies Trusted CA bundle config map creation.
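For example, to disable both resource types, a sketch of the same params section with both flags set to "false":

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  params:
    - name: createRbacResource
      value: "false"
    - name: createCABundleConfigMaps
      value: "false"
# ...
```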

3.15. Automatic pruning of task runs and pipeline runs

You can automatically prune stale TaskRun and PipelineRun resources to free up cluster resources and maintain optimal performance. Red Hat OpenShift Pipelines provides a configurable pruner component that removes unused objects based on your defined policies.

Note

You can configure the pruner for your entire installation by using the TektonConfig custom resource, and modify the configuration for individual namespaces by using namespace annotations. However, you cannot selectively auto-prune an individual task run or pipeline run in a namespace.

3.15.1. Configuring the pruner

You can configure the default pruner to automatically remove old TaskRun and PipelineRun resources based on a schedule or resource count. By modifying the TektonConfig custom resource (CR), you can set retention limits and pruning intervals to manage resource usage.

The following example corresponds to the default configuration:

Example of the pruner configuration

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
# ...
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep: 100
    prune-per-resource: false
    schedule: "0 8 * * *"
    startingDeadlineSeconds: 60
# ...

Table 3.3. Supported parameters for pruner configuration

schedule

The Cron schedule for running the pruner process. The default schedule runs the process at 08:00 every day. For more information about the Cron schedule syntax, see Cron schedule syntax in the Kubernetes documentation.

resources

The resource types to which the pruner applies. The available resource types are taskrun and pipelinerun.

keep

The number of most recent resources of every type to keep.

prune-per-resource

If set to false, the value for the keep parameter denotes the total number of task runs or pipeline runs. For example, if keep is set to 100, then the pruner keeps 100 most recent task runs and 100 most recent pipeline runs and removes all other resources.

If set to true, the value for the keep parameter is calculated separately for pipeline runs referencing each pipeline and for task runs referencing each task. For example, if keep is set to 100, then the pruner keeps 100 most recent pipeline runs for Pipeline1, 100 most recent pipeline runs for Pipeline2, 100 most recent task runs for Task1, and so on, and removes all other resources.

keep-since

The maximum time for which to keep resources, in minutes. For example, to retain resources which were created not more than five days ago, set keep-since to 7200.

startingDeadlineSeconds

This parameter is optional. If the pruner job is not started at the scheduled time for any reason, this setting configures the maximum time, in seconds, in which the job can still be started. If the job is not started within the specified time, OpenShift Pipelines considers this job failed and starts the pruner at the next scheduled time. If you do not specify this parameter and the pruner job does not start at the scheduled time, OpenShift Pipelines attempts to start the job at any later time possible.

Note

The keep and keep-since parameters are mutually exclusive. Use only one of them in your configuration.
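For example, the following sketch configures time-based retention instead of count-based retention, keeping only the task runs and pipeline runs created within the last five days (7200 minutes):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pruner:
    resources:
      - taskrun
      - pipelinerun
    keep-since: 7200        # minutes; do not combine with the keep parameter
    schedule: "0 8 * * *"
# ...
```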

3.15.2. Annotations for automatically pruning task runs and pipeline runs

You can customize the pruning behavior for specific namespaces by applying annotations to the Namespace resource. These annotations allow you to override global pruning settings, such as retention limits and schedules, for individual projects.

The following namespace annotations have the same meanings as the corresponding keys in the TektonConfig custom resource:

  • operator.tekton.dev/prune.schedule
  • operator.tekton.dev/prune.resources
  • operator.tekton.dev/prune.keep
  • operator.tekton.dev/prune.prune-per-resource
  • operator.tekton.dev/prune.keep-since
Note

The operator.tekton.dev/prune.resources annotation accepts a comma-separated list. To prune both task runs and pipeline runs, set this annotation to "taskrun, pipelinerun".

The following additional namespace annotations are available:

  • operator.tekton.dev/prune.skip: When set to true, the namespace for which the annotation is configured is not pruned.
  • operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since.

For example, the following annotations retain all task runs and pipeline runs created in the last five days and delete the older resources:

Example of auto-pruning annotations

kind: Namespace
apiVersion: v1
# ...
metadata:
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.keep-since: "7200"
# ...
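Similarly, the following sketch excludes a namespace from pruning entirely by using the prune.skip annotation:

```yaml
kind: Namespace
apiVersion: v1
# ...
metadata:
  annotations:
    operator.tekton.dev/prune.skip: "true"
# ...
```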

3.16. Enabling the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can enable the event-based pruner to delete completed PipelineRun and TaskRun resources in near real-time. By configuring the tektonpruner controller in the TektonConfig custom resource (CR), you can replace the default scheduled pruner with an event-driven approach for more immediate resource cleanup.

Important

You must disable the default pruner in the TektonConfig custom resource (CR) before you enable the event-based pruner. If both pruner types are enabled, the deployment readiness status changes to False and the following error message is displayed:

Components not in ready state: Invalid Pruner Configuration!! Both pruners, tektonpruner(event based) and pruner(job based) cannot be enabled simultaneously. Please disable one of them.

Procedure

  1. In your TektonConfig CR, disable the default pruner by setting the spec.pruner.disabled field to true and enable the event-based pruner by setting the spec.tektonpruner.disabled field to false. For example:

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      # ...
      pruner:
        disabled: true
      # ...
      tektonpruner:
        disabled: false
        options: {}
      # ...

    After you apply the updated CR, the Operator deploys the tekton-pruner-controller pod in the openshift-pipelines namespace.

  2. Ensure that the following config maps are present in the openshift-pipelines namespace:

    tekton-pruner-default-spec

    Defines the default pruning behavior.

    pruner-info

    Stores internal runtime data used by the controller.

    config-logging-tekton-pruner

    Configures logging settings for the pruner.

    config-observability-tekton-pruner

    Enables observability features such as metrics and tracing.

Verification

  1. To verify that the tekton-pruner-controller pod is running, run the following command:

    $ oc get pods -n openshift-pipelines
  2. Verify that the output includes a tekton-pruner-controller pod in the Running state. Example output:

    tekton-pruner-controller-<id>       Running

3.16.1. Configuration of the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can fine-tune the event-based pruner by adjusting settings in the TektonConfig custom resource (CR). This reference details the available configuration options, including history limits, time-to-live (TTL) values, and namespace-specific policies.

The following is an example of the TektonConfig CR with the default configuration that uses global pruning rules:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  # ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      failedHistoryLimit: null
      historyLimit: 10
      namespaces: null
      successfulHistoryLimit: null
      ttlSecondsAfterFinished: null
    options: {}
  # ...
failedHistoryLimit
The number of failed runs to retain.
historyLimit
The number of runs to retain. The pruner uses this setting if status-specific limits are not defined.
namespaces
Definition of per-namespace pruning policies, used when you set enforcedConfigLevel to namespace.
successfulHistoryLimit
The number of successful runs to retain.
ttlSecondsAfterFinished
The time, in seconds after completion, after which the pruner deletes resources.
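For example, the following sketch keeps a short history of successful runs but a longer history of failed runs for debugging; the limit values are illustrative:

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: global
      successfulHistoryLimit: 5   # illustrative value
      failedHistoryLimit: 20      # illustrative value
# ...
```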

You can define pruning rules for individual namespaces by setting enforcedConfigLevel to namespace and configuring policies under the namespaces section. In the following example, a 60-second time to live (TTL) is applied to resources in the dev-project namespace:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  # ...
  tektonpruner:
    disabled: false
    global-config:
      enforcedConfigLevel: namespace
      ttlSecondsAfterFinished: 300
      namespaces:
        dev-project:
          ttlSecondsAfterFinished: 60
  # ...

You can use the following parameters in the tektonpruner section of your TektonConfig CR:


ttlSecondsAfterFinished

Delete resources a fixed number of seconds after they complete.

successfulHistoryLimit

Retain the specified number of the most recent successful runs. Delete older successful runs.

failedHistoryLimit

Retain the specified number of the most recent failed runs. Delete older failed runs.

historyLimit

Apply a generic history limit when failedHistoryLimit and successfulHistoryLimit are not defined.

enforcedConfigLevel

Specify the level at which pruner applies the configuration. Accepted values: global or namespace.

namespaces

Define per-namespace pruning policies.

Note

Use TTL-based pruning to delete resources that exceed the set expiration time, and history-based pruning to delete resources that exceed the configured historyLimit.

3.16.2. Metrics of the event-based pruner

Important

The event-based pruner is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can monitor the performance and health of the event-based pruner by using the metrics that the tekton-pruner-controller exposes. The controller Service exposes these metrics on port 9090 in OpenTelemetry format, providing insights into resource processing, error rates, and reconciliation times for troubleshooting and capacity planning.

The exposed metrics fall into the following categories:

  • Resource processing
  • Performance timing
  • State tracking
  • Error monitoring

Most pruner metrics use labels to provide additional context. You can use these labels in PromQL queries or dashboards to filter and group the metrics.

namespace

The Kubernetes namespace of the PipelineRun or TaskRun.

resource_type

The Tekton resource type.

status

The outcome of processing a resource.

operation

The pruning method that deleted a resource.

reason

The specific cause for skipped or error outcomes.
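For example, assuming the metrics are scraped by a Prometheus instance, a PromQL query such as the following could chart the per-namespace deletion rate by grouping on the namespace label:

```
sum by (namespace) (rate(tekton_pruner_controller_resources_deleted_total[5m]))
```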

Resource processing metrics

The following resource processing metrics are exposed by the event-based pruner:

tekton_pruner_controller_resources_processed_total

Type: Counter. Total resources processed. Labels: namespace, resource_type, status.

tekton_pruner_controller_resources_deleted_total

Type: Counter. Total resources deleted. Labels: namespace, resource_type, operation.

Performance timing metrics

The following performance timing metrics are exposed by the event-based pruner:

tekton_pruner_controller_reconciliation_duration_seconds

Type: Histogram. Time spent in reconciliation. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.

tekton_pruner_controller_ttl_processing_duration_seconds

Type: Histogram. Time spent processing TTL. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.

tekton_pruner_controller_history_processing_duration_seconds

Type: Histogram. Time spent processing history limits. Labels: namespace, resource_type. Buckets: 0.1 to 30 seconds.

State tracking metrics

The following state tracking metrics are exposed by the event-based pruner:

kn_workqueue_adds_total

Type: Counter. Total resources queued.

kn_workqueue_depth

Type: Gauge. Number of items currently in the queue.

Error monitoring metrics

The following error monitoring metrics are exposed by the event-based pruner:

tekton_pruner_controller_resources_errors_total

Type: Counter. Total processing errors. Labels: namespace, resource_type, reason.

3.17. Setting additional options for webhooks

You can configure advanced webhook options, such as failure policies and timeouts, for OpenShift Pipelines controllers to improve stability and error handling. These settings are applied by using the TektonConfig custom resource (CR) and allow you to customize how admission controllers interact with the Kubernetes API server.

Prerequisites

  • You installed the oc command-line utility.
  • You have logged in to your OpenShift Container Platform cluster with administrator rights for the namespace in which OpenShift Pipelines is installed, typically the openshift-pipelines namespace.

Procedure

  1. View the list of webhooks that the OpenShift Pipelines controllers created. There are two types of webhooks: mutating webhooks and validating webhooks.

    1. To view the list of mutating webhooks, enter the following command:

      $ oc get MutatingWebhookConfiguration

      Example output

      NAME                             WEBHOOKS   AGE
      annotation.operator.tekton.dev   1          4m20s
      proxy.operator.tekton.dev        1          4m20s
      webhook.operator.tekton.dev      1          4m22s
      webhook.pipeline.tekton.dev      1          4m20s
      webhook.triggers.tekton.dev      1          3m50s

    2. To view the list of validating webhooks, enter the following command:

      $ oc get ValidatingWebhookConfiguration

      Example output

      NAME                                                 WEBHOOKS   AGE
      config.webhook.operator.tekton.dev                   1          4m24s
      config.webhook.pipeline.tekton.dev                   1          4m22s
      config.webhook.triggers.tekton.dev                   1          3m52s
      namespace.operator.tekton.dev                        1          4m22s
      validation.pipelinesascode.tekton.dev                1          2m49s
      validation.webhook.operator.tekton.dev               1          4m24s
      validation.webhook.pipeline.tekton.dev               1          4m22s
      validation.webhook.triggers.tekton.dev               1          3m52s

  2. In the TektonConfig custom resource (CR), add configuration for mutating and validating webhooks under the section for each of the controllers as necessary, as shown in the following examples. Use the validation.webhook.pipeline.tekton.dev spec for the validating webhooks and the webhook.pipeline.tekton.dev spec for the mutating webhooks.

    Important
    • You cannot set configuration for operator webhooks.
    • All settings are optional. For example, you can set the timeoutSeconds parameter and omit the failurePolicy and sideEffects parameters.

    Example settings for the Pipelines controller

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipeline:
        options:
          webhookConfigurationOptions:
            validation.webhook.pipeline.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.pipeline.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Triggers controller

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      triggers:
        options:
          webhookConfigurationOptions:
            validation.webhook.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Pipelines as Code controller

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pipelinesAsCode:
        options:
          webhookConfigurationOptions:
            validation.pipelinesascode.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            pipelines.triggers.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

    Example settings for the Tekton Hub controller

    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      hub:
        options:
          webhookConfigurationOptions:
            validation.webhook.hub.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None
            webhook.hub.tekton.dev:
              failurePolicy: Fail
              timeoutSeconds: 20
              sideEffects: None

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.