Chapter 13. Red Hat Operators
13.1. Cloud Credential Operator
Purpose
The Cloud Credential Operator manages cloud provider credentials as Kubernetes custom resource definitions (CRDs).
Project
openshift-cloud-credential-operator
CRDs
credentialsrequests.cloudcredential.openshift.io
- Scope: Namespaced
- CR: credentialsrequest
- Validation: Yes
Configuration objects
No configuration required.
Notes
- The Cloud Credential Operator uses credentials from kube-system/aws-creds.
- The Cloud Credential Operator creates secrets based on credentialsrequest.
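As an illustration, a CredentialsRequest names a target secret and the provider-specific permissions a component needs. The following is a minimal sketch for AWS; the component name, target namespace, and permission set are hypothetical:

```yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component          # hypothetical component name
  namespace: openshift-cloud-credential-operator
spec:
  # The Operator creates this secret, scoped to the permissions in providerSpec.
  secretRef:
    name: example-component-creds
    namespace: example-namespace   # hypothetical target namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:GetObject               # hypothetical permission set
      resource: '*'
```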
13.2. Cluster Authentication Operator
Purpose
The Cluster Authentication Operator installs and maintains the Authentication Custom Resource in a cluster, which can be viewed with:
$ oc get clusteroperator authentication -o yaml
Project
13.3. Cluster Autoscaler Operator
Purpose
The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider.
Project
CRDs
- ClusterAutoscaler: This is a singleton resource, which controls the configuration of the cluster’s autoscaler instance. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable.
- MachineAutoscaler: This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted.
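The two resources above can be sketched as follows; the names, node cap, and replica limits are illustrative:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                  # the only name the Operator responds to
spec:
  resourceLimits:
    maxNodesTotal: 10            # illustrative cluster-wide node cap
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a        # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1                 # min size of the node group
  maxReplicas: 6                 # max size of the node group
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet             # currently the only supported target
    name: worker-us-east-1a      # hypothetical MachineSet name
```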
13.4. Cluster Image Registry Operator
Purpose
The Cluster Image Registry Operator manages a singleton instance of the OpenShift Container Platform registry. It manages all configuration of the registry, including creating storage.
On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. The resource indicates the cloud storage type to use based on the cloud provider.
If insufficient information is available to define a complete image-registry
resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing.
The Cluster Image Registry Operator runs in the openshift-image-registry
namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace.
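As an illustration, a completed registry resource on AWS might resemble the following sketch, assuming the configs.imageregistry.operator.openshift.io API; the bucket and region are hypothetical and field names can vary between releases:

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  replicas: 1
  storage:
    s3:                                 # storage type detected from the cloud provider
      bucket: example-registry-bucket   # hypothetical
      region: us-east-1                 # hypothetical
```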
Project
13.5. Cluster Monitoring Operator
Purpose
The Cluster Monitoring Operator manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform.
Project
CRDs
alertmanagers.monitoring.coreos.com
- Scope: Namespaced
- CR: alertmanager
- Validation: Yes
prometheuses.monitoring.coreos.com
- Scope: Namespaced
- CR: prometheus
- Validation: Yes
prometheusrules.monitoring.coreos.com
- Scope: Namespaced
- CR: prometheusrule
- Validation: Yes
servicemonitors.monitoring.coreos.com
- Scope: Namespaced
- CR: servicemonitor
- Validation: Yes
Configuration objects
$ oc -n openshift-monitoring edit cm cluster-monitoring-config
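The config map edited by the command above carries per-component settings under data.config.yaml. A minimal sketch; the retention value is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h        # illustrative retention period
```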
13.6. Cluster Network Operator
Purpose
The Cluster Network Operator installs and upgrades the networking components on an OpenShift Kubernetes cluster.
13.7. OpenShift Controller Manager Operator
Purpose
The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager Custom Resource in a cluster, which can be viewed with:
$ oc get clusteroperator openshift-controller-manager -o yaml
The Custom Resource Definition openshiftcontrollermanagers.operator.openshift.io
can be viewed in a cluster with:
$ oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml
Project
13.8. Cluster Samples Operator
Purpose
The Cluster Samples Operator manages the sample imagestreams and templates stored in the openshift
namespace, and any container credentials, stored as a secret, needed for the imagestreams to import the images they reference.
On initial start up, the Operator creates the default samples resource to initiate the creation of the imagestreams and templates. The imagestreams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform imagestreams pointing to images on registry.redhat.io
. Similarly, the templates are those categorized as OpenShift Container Platform templates.
The Cluster Samples Operator, along with its configuration resources, are contained within the openshift-cluster-samples-operator
namespace. On start up, it will copy the pull secret captured by the installation into the openshift
namespace with the name samples-registry-credentials
to facilitate imagestream imports. An administrator can create any additional secrets in the openshift
namespace as needed. Those secrets contain the content of a container config.json
needed to facilitate image import.
The image for the Cluster Samples Operator contains imagestream and template definitions for the associated OpenShift Container Platform release. Each sample includes an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches its release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are reverted automatically. The jenkins images are part of the image payload from the installation and are tagged into the imagestreams directly.
The samples resource includes a finalizer, which cleans up the following upon its deletion:
- Operator-managed imagestreams
- Operator-managed templates
- Operator-generated configuration resources
- Cluster status resources
- The samples-registry-credentials secret
Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.
Project
13.9. Cluster Storage Operator
Purpose
The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storage class exists for OpenShift Container Platform clusters.
Project
Configuration
No configuration is required.
Notes
- The Cluster Storage Operator supports Amazon Web Services (AWS) and Red Hat OpenStack Platform (RHOSP).
- The created storage class can be made non-default by editing its annotation, but the storage class cannot be deleted as long as the Operator runs.
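Making the created storage class non-default uses the standard Kubernetes annotation. A sketch, assuming a storage class named gp2 (the actual name depends on the platform):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2                   # platform-dependent; gp2 is the AWS default name
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/aws-ebs
```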
13.10. Cluster SVCAT API Server Operator
Purpose
The Cluster SVCAT API Server Operator installs and maintains a singleton instance of the OpenShift Service Catalog on a cluster. Service Catalog consists of an aggregated API server and a controller manager; this Operator only deals with the API server portion of Service Catalog. See cluster-svcat-controller-manager-operator for the Operator responsible for the controller manager component of Service Catalog.
Project
13.11. Cluster SVCAT Controller Manager Operator
Purpose
The Cluster SVCAT Controller Manager Operator installs and maintains a singleton instance of the OpenShift Service Catalog on a cluster. Service Catalog consists of an aggregated API server and a controller manager; this Operator only deals with the Controller Manager portion of Service Catalog. See the cluster-svcat-apiserver-operator for the Operator responsible for the API Server component of Service Catalog.
Project
13.12. Cluster Version Operator
Purpose
Project
13.13. Console Operator
Purpose
The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster.
Project
13.14. DNS Operator
Purpose
The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform.
The Operator creates a working default deployment based on the cluster’s configuration.
- The default cluster domain is cluster.local.
- Configuration of the CoreDNS Corefile or Kubernetes plug-in is not yet supported.
The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster.
Project
13.15. etcd cluster Operator
Purpose
The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures.
Project
CRDs
etcds.operator.openshift.io
- Scope: Cluster
- CR: etcd
- Validation: Yes
Configuration objects
$ oc edit etcd cluster
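The cluster-scoped resource opened by the command above looks, in sketch form, like this (spec values are illustrative):

```yaml
apiVersion: operator.openshift.io/v1
kind: Etcd
metadata:
  name: cluster
spec:
  managementState: Managed   # illustrative; controls whether the Operator acts
  logLevel: Normal
```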
13.16. Ingress Operator
Purpose
The Ingress Operator configures and manages the OpenShift Container Platform router.
Project
CRDs
clusteringresses.ingress.openshift.io
- Scope: Namespaced
- CR: clusteringresses
- Validation: No
Configuration objects
Cluster config
- Type Name: clusteringresses.ingress.openshift.io
- Instance Name: default
View Command:
$ oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml
Notes
The Ingress Operator sets up the router in the openshift-ingress
project and creates the deployment for the router:
$ oc get deployment -n openshift-ingress
The Ingress Operator uses the clusterNetwork[].cidr
from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed ingress controller (router) should operate in. For example, if clusterNetwork
contains only a v6 cidr
, then the ingress controller will operate in v6-only mode. In the following example, ingress controllers managed by the Ingress Operator will run in v4-only mode because only one cluster network exists and the network is a v4 cidr
:
$ oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'
map[cidr:10.128.0.0/14 hostPrefix:23]
13.17. Kubernetes API Server Operator
Purpose
The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift library-go framework and it is installed using the Cluster Version Operator (CVO).
Project
openshift-kube-apiserver-operator
CRDs
kubeapiservers.operator.openshift.io
- Scope: Cluster
- CR: kubeapiserver
- Validation: Yes
Configuration objects
$ oc edit kubeapiserver
13.18. Kubernetes Controller Manager Operator
Purpose
The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift library-go framework and it is installed via the Cluster Version Operator (CVO).
It contains the following components:
- Operator
- Bootstrap manifest renderer
- Installer based on static pods
- Configuration observer
By default, the Operator exposes Prometheus metrics through the metrics service.
Project
13.19. Kubernetes Scheduler Operator
Purpose
The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO).
The Kubernetes Scheduler Operator contains the following components:
- Operator
- Bootstrap manifest renderer
- Installer based on static pods
- Configuration observer
By default, the Operator exposes Prometheus metrics through the metrics service.
Project
cluster-kube-scheduler-operator
Configuration
The configuration for the Kubernetes Scheduler is the result of merging:
- a default configuration.
- an observed configuration from the spec schedulers.config.openshift.io.
All of these are sparse configurations, unvalidated JSON snippets that are merged in order to form a valid configuration at the end.
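The observed configuration comes from the cluster-scoped Scheduler resource. A minimal sketch; the field values are illustrative:

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: type=user-node   # illustrative selector
  mastersSchedulable: false
```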
13.20. Machine API Operator
Purpose
The Machine API Operator manages the lifecycle of specific purpose CRDs, controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster.
Project
CRDs
- MachineSet
- Machine
- MachineHealthCheck
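As a sketch, a MachineHealthCheck that remediates worker machines stuck in a NotReady state might look like the following; the name, selector labels, timeout, and threshold are illustrative:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-health-check       # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s                  # illustrative
  maxUnhealthy: 40%                # stop remediating past this threshold
```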
13.21. Machine Config Operator
Purpose
The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet.
There are four components:
- machine-config-server: Provides Ignition configuration to new machines joining the cluster.
- machine-config-controller: Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually.
- machine-config-daemon: Applies new machine configuration during update. Validates and verifies the machine’s state to the requested machine configuration.
- machine-config: Provides a complete source of machine configuration at installation, first start up, and updates for a machine.
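A MachineConfig object carries an Ignition snippet plus a role label that the machine-config-controller uses to select a machine pool. A sketch; the name, file path, and contents are hypothetical, and the Ignition version depends on the release:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example           # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker pool
spec:
  config:
    ignition:
      version: 2.2.0                # release-dependent
    storage:
      files:
      - path: /etc/example.conf     # hypothetical file
        filesystem: root
        mode: 0644
        contents:
          source: data:,example%20setting
```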
Project
13.22. Marketplace Operator
Purpose
The Marketplace Operator is a conduit to bring off-cluster Operators to your cluster.
Project
13.23. Node Tuning Operator
Purpose
The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized Tuned daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal.
The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.
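Custom tuning is expressed as a Tuned CR containing Tuned profiles and recommendation rules. A sketch with a hypothetical sysctl profile and node label:

```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-tuned               # hypothetical name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: example-profile
    data: |
      [main]
      summary=Hypothetical custom sysctl settings
      [sysctl]
      vm.dirty_ratio=10
  recommend:
  - match:
    - label: example-node-label     # hypothetical node label
    priority: 20
    profile: example-profile
```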
Project
13.24. OpenShift API Server Operator
Purpose
The OpenShift API Server Operator installs and maintains the openshift-apiserver
on a cluster.
Project
CRDs
openshiftapiservers.operator.openshift.io
- Scope: Cluster
- CR: openshiftapiserver
- Validation: Yes
13.25. Prometheus Operator
Purpose
The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.
Once installed, the Prometheus Operator provides the following features:
- Create and Destroy: Easily launch a Prometheus instance for your Kubernetes namespace, a specific application, or a team using the Operator.
- Simple Configuration: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.
- Target Services via Labels: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus specific configuration language.
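The label-based targeting can be sketched with a ServiceMonitor that selects services by label and scrapes a named port; the names and interval are hypothetical:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app                # hypothetical name
  namespace: openshift-monitoring
spec:
  selector:
    matchLabels:
      app: example-app             # a Kubernetes label query, not Prometheus syntax
  endpoints:
  - port: web                      # named port on the selected Service
    interval: 30s
```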