Chapter 1. Managing OpenShift Pipelines performance


If your OpenShift Pipelines installation runs a large number of tasks at the same time, its performance might degrade. You might experience slowdowns and failed pipeline runs.

For reference, in Red Hat tests, on a three-node OpenShift Container Platform cluster running on Amazon Web Services (AWS) m6a.2xlarge nodes, up to 60 simple test pipelines ran concurrently without significant failures or delays. When more than 60 pipelines ran concurrently, the number of failed pipeline runs, the average duration of a pipeline run, the pod creation latency, the work queue depth, and the number of pending pods all increased. This testing was performed on Red Hat OpenShift Pipelines version 1.13; no statistically significant difference was observed from version 1.12.

Note

These results depend on the test configuration. Performance results with your configuration can be different.

1.1. Improving OpenShift Pipelines performance

If you experience slowness or recurrent failures of pipeline runs, you can take any of the following steps to improve the performance of OpenShift Pipelines.

  • Monitor the resource usage of the nodes in the OpenShift Container Platform cluster on which OpenShift Pipelines runs. If the resource usage is high, increase the number of nodes.
  • Enable high-availability mode. This mode affects the controller that creates and starts pods for task runs and pipeline runs. In Red Hat testing, high-availability mode significantly reduced both pipeline execution times and the delay between the creation of a TaskRun custom resource (CR) and the start of the pod that executes the task run. To enable high-availability mode, make the following changes in the TektonConfig CR, as shown in the example after this list:

    • Set the pipeline.performance.disable-ha spec to false.
    • Set the pipeline.performance.buckets spec to a number between 5 and 10.
    • Set the pipeline.performance.replicas spec to a number greater than 2 and less than or equal to the pipeline.performance.buckets setting.

      Note

      You can try different numbers for buckets and replicas and observe the effect on performance. In general, higher numbers are beneficial. However, monitor the nodes to ensure that their resources, including CPU and memory, are not exhausted.
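
The following example is a minimal sketch of these settings in the TektonConfig CR. The spec paths match the settings described in the preceding list; the apiVersion and the instance name config are assumptions based on a typical OpenShift Pipelines installation, and the values for buckets and replicas are illustrative only.

  apiVersion: operator.tekton.dev/v1alpha1  # assumed API version; verify it in your cluster
  kind: TektonConfig
  metadata:
    name: config                            # assumed default instance name
  spec:
    pipeline:
      performance:
        disable-ha: false                   # enables high-availability mode
        buckets: 7                          # a value between 5 and 10
        replicas: 5                         # greater than 2 and not more than buckets

You can apply these settings by editing the CR, for example with the oc edit tektonconfig config command (assuming the default instance name), and then monitor pipeline run duration and pod creation latency to judge the effect.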

1.2. Additional resources
