Performance tuning for operator environments
A pod in Kubernetes is the smallest deployable compute unit, consisting of one or more containers sharing networking and storage on a single host. Red Hat Ansible Automation Platform uses a default pod specification, which can be customized with a user-defined YAML or JSON document.
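As a rough sketch of what such a user-defined document looks like, the following is a minimal pod specification of the kind automation controller accepts for container groups. The namespace, image, and service account shown here are illustrative, not requirements:

```yaml
# Sketch of a custom pod specification for running jobs.
# Namespace, image, and serviceAccountName are illustrative values.
apiVersion: v1
kind: Pod
metadata:
  namespace: ansible-automation-platform
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
```

Any field omitted from the custom document falls back to the defaults the platform ships with, so you typically only override the parts you need to change.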
- Introduction
  The Kubernetes concept of a pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, or managed.
- Customize pod specifications to improve performance
  You can use the following procedure to customize the pod.
- Manage resources for pods and containers
  When you specify a pod, you can specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).
- Adjust the control plane to tune performance
  The control plane refers to the automation controller pods, which contain the web and task containers that, among other things, provide the user interface and handle the scheduling and launching of jobs.
- Specify dedicated nodes for pods and job execution
  A Kubernetes cluster runs on multiple nodes. Use the `topologySpreadConstraints` setting to control how pods are distributed across your nodes during scheduling. This ensures high availability and balanced workloads across your infrastructure.
- How job capacity is determined and impacts job runs
  The automation controller capacity system determines how many jobs can run on an instance, given the amount of resources available to the instance and the size of the jobs that are running (referred to as impact).
- Job type impact on capacity
  When configuring automation controller capacity, it is important to understand how different job types impact system capacity.
- Fine-tune Receptor worker backoff strategies for API reliability
  Configure the Receptor worker within the Ansible Automation Platform Operator through the `RECEPTOR_KUBE_RETRY_COUNT` environment variable. This variable controls how the worker handles Kubernetes API connection failures.
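Resource needs for a container are expressed with the standard Kubernetes `resources` stanza. A minimal sketch (pod name and image are illustrative):

```yaml
# Standard Kubernetes resource requests and limits on a container.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                       # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      resources:
        requests:
          cpu: 250m      # scheduler reserves a quarter of a CPU core
          memory: 100Mi
        limits:
          cpu: "1"       # container is throttled above one core
          memory: 2Gi    # container is OOM-killed above 2 GiB
```

Requests drive scheduling decisions; limits are enforced at runtime.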
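Tuning the control plane typically means adjusting the resources of the web and task containers through the operator's custom resource. A sketch, assuming the `web_resource_requirements` and `task_resource_requirements` fields exposed by the AWX/Automation Platform operator (verify against your operator's CRD; all values are illustrative):

```yaml
# Assumed AutomationController CR fields for control-plane sizing.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller
spec:
  web_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 4Gi
```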
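The `topologySpreadConstraints` setting mentioned above is a standard Kubernetes pod-spec field. A minimal sketch that spreads matching pods across nodes while tolerating a skew of one (labels and image are illustrative):

```yaml
# Spread pods labeled app=controller evenly across nodes.
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo                         # illustrative name
  labels:
    app: controller
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname   # spread per node
      whenUnsatisfiable: ScheduleAnyway     # soft constraint
      labelSelector:
        matchLabels:
          app: controller
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
```

Using `topologyKey: topology.kubernetes.io/zone` instead spreads pods across availability zones rather than individual nodes.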
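As a rough sketch of the capacity arithmetic, the upstream AWX documentation derives a memory-based and a CPU-based fork count, using default values of roughly 100 MB of memory per fork, a 2048 MB reserve, and 4 forks per CPU (treat these as assumptions and check the values documented for your release):

```latex
\text{mem\_capacity} = \left\lfloor \frac{\text{total\_memory\_MB} - 2048}{\text{mem\_per\_fork}} \right\rfloor
\qquad
\text{cpu\_capacity} = \text{cpu\_count} \times \text{forks\_per\_cpu}
```

The instance's effective capacity is then chosen between these two values, and each running job consumes capacity according to its impact.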
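A sketch of where such an environment variable might be set, assuming the operator's `task_extra_env` field for injecting variables into the task container (the exact field name and the variable's valid values should be verified against your operator's CRD and release documentation):

```yaml
# Assumed placement: task_extra_env injects environment variables
# into the task container, where the Receptor worker runs.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller
spec:
  task_extra_env: |
    - name: RECEPTOR_KUBE_RETRY_COUNT
      value: "3"   # illustrative retry count for Kubernetes API failures
```

A higher retry count makes the worker more tolerant of transient API outages at the cost of slower failure detection.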