Chapter 6. Networking Operators


6.1. About the Kubernetes NMState Operator

The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster’s nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node’s network interfaces to the API server.

Important

Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power®, IBM Z®, IBM® LinuxONE, VMware vSphere, and Red Hat OpenStack Platform (RHOSP) installations.

Red Hat support exists for using the Kubernetes NMState Operator on Microsoft Azure but in a limited capacity. Support is limited to configuring DNS servers on your system as a postinstallation task.

Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator.

Note

The Kubernetes NMState Operator updates the network configuration of a secondary NIC. The Operator cannot update the network configuration of the primary NIC, or update the br-ex bridge on most on-premise networks.

On a bare-metal platform, using the Kubernetes NMState Operator to update the br-ex bridge network configuration is only supported if you set the br-ex bridge as the interface in a machine config manifest file. To update the br-ex bridge as a postinstallation task, you must set the br-ex bridge as the interface in the NMState configuration of the NodeNetworkConfigurationPolicy custom resource (CR) for your cluster. For more information, see Creating a manifest object that includes a customized br-ex bridge in Postinstallation configuration.

OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.

Node networking is monitored and updated by the following objects:

NodeNetworkState
Reports the state of the network on that node.
NodeNetworkConfigurationPolicy
Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy CR to the cluster.
NodeNetworkConfigurationEnactment
Reports the network policies enacted upon each node.
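
For example, the following NodeNetworkConfigurationPolicy manifest is a minimal sketch that creates a Linux bridge on all worker nodes. The br1 bridge and eth1 port names are illustrative:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-worker-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1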

6.1.1. Installing the Kubernetes NMState Operator

You can install the Kubernetes NMState Operator by using the web console or the CLI.

6.1.1.1. Installing the Kubernetes NMState Operator by using the web console

You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Select Operators → OperatorHub.
  2. In the search field below All Items, enter nmstate and press Enter to search for the Kubernetes NMState Operator.
  3. Click the Kubernetes NMState Operator search result.
  4. Click Install to open the Install Operator window.
  5. Click Install to install the Operator.
  6. After the Operator finishes installing, click View Operator.
  7. Under Provided APIs, click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate.
  8. In the Name field of the dialog box, ensure the name of the instance is nmstate.

    Note

    The name restriction is a known issue. The instance is a singleton for the entire cluster.

  9. Accept the default settings and click Create to create the instance.

Summary

After you install the Kubernetes NMState Operator, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.
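
As an optional check, you can confirm the daemon set from the CLI:

$ oc get daemonset -n openshift-nmstate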

6.1.1.2. Installing the Kubernetes NMState Operator using the CLI

You can install the Kubernetes NMState Operator by using the OpenShift CLI (oc). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You are logged in as a user with cluster-admin privileges.

Procedure

  1. Create the nmstate Operator namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-nmstate
    spec:
      finalizers:
      - kubernetes
    EOF
  2. Create the OperatorGroup:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-nmstate
      namespace: openshift-nmstate
    spec:
      targetNamespaces:
      - openshift-nmstate
    EOF
  3. Subscribe to the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: kubernetes-nmstate-operator
      namespace: openshift-nmstate
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: kubernetes-nmstate-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
  4. Confirm the ClusterServiceVersion (CSV) status for the nmstate Operator deployment equals Succeeded:

    $ oc get clusterserviceversion -n openshift-nmstate \
     -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                                              Phase
    kubernetes-nmstate-operator.4.17.0-202210210157   Succeeded

  5. Create an instance of the nmstate Operator:

    $ cat << EOF | oc apply -f -
    apiVersion: nmstate.io/v1
    kind: NMState
    metadata:
      name: nmstate
    EOF
  6. Verify the pods for NMState Operator are running:

    $ oc get pod -n openshift-nmstate

    Example output

    NAME                                       READY   STATUS    RESTARTS   AGE
    pod/nmstate-cert-manager-5b47d8dddf-5wnb5    1/1     Running   0          77s
    pod/nmstate-console-plugin-d6b76c6b9-4dcwm   1/1     Running   0          77s
    pod/nmstate-handler-6v7rm                    1/1     Running   0          77s
    pod/nmstate-handler-bjcxw                    1/1     Running   0          77s
    pod/nmstate-handler-fv6m2                    1/1     Running   0          77s
    pod/nmstate-handler-kb8j6                    1/1     Running   0          77s
    pod/nmstate-handler-wn55p                    1/1     Running   0          77s
    pod/nmstate-operator-f6bb869b6-v5m92         1/1     Running   0          4m51s
    pod/nmstate-webhook-66d6bbd84b-6n674         1/1     Running   0          77s
    pod/nmstate-webhook-66d6bbd84b-vlzrd         1/1     Running   0          77s

6.1.1.3. Viewing metrics collected by the Kubernetes NMState Operator

The Kubernetes NMState Operator, kubernetes-nmstate-operator, can collect metrics from the kubernetes_nmstate_features_applied component and expose them as ready-to-use metrics. As a use case for viewing metrics, consider a situation where you created a NodeNetworkConfigurationPolicy custom resource and you want to confirm that the policy is active.

Note

The kubernetes_nmstate_features_applied metrics are not an API and might change between OpenShift Container Platform versions.

In the Developer and Administrator perspectives, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet, and application metrics for the project.

The following example shows part of a NodeNetworkConfigurationPolicy manifest that is applied to an OpenShift Container Platform cluster:

# ...
interfaces:
  - name: br1
    type: linux-bridge
    state: up
    ipv4:
      enabled: true
      dhcp: true
      dhcp-custom-hostname: foo
    bridge:
      options:
        stp:
          enabled: false
      port: []
# ...

The NodeNetworkConfigurationPolicy manifest exposes metrics and makes them available to the Cluster Monitoring Operator (CMO). The following example shows some exposed metrics:

controller_runtime_reconcile_time_seconds_bucket{controller="nodenetworkconfigurationenactment",le="0.005"} 16
controller_runtime_reconcile_time_seconds_bucket{controller="nodenetworkconfigurationenactment",le="0.01"} 16
controller_runtime_reconcile_time_seconds_bucket{controller="nodenetworkconfigurationenactment",le="0.025"} 16
...
# HELP kubernetes_nmstate_features_applied Number of nmstate features applied labeled by its name
# TYPE kubernetes_nmstate_features_applied gauge
kubernetes_nmstate_features_applied{name="dhcpv4-custom-hostname"} 1
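
For example, to confirm from the Metrics UI that the example policy is active, you can run a PromQL query such as the following, which returns a value of at least 1 when the feature is applied:

kubernetes_nmstate_features_applied{name="dhcpv4-custom-hostname"} >= 1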

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the web console as the administrator and installed the Kubernetes NMState Operator.
  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
  • You have enabled monitoring for user-defined projects.
  • You have deployed a service in a user-defined project.
  • You have created a NodeNetworkConfigurationPolicy manifest and applied it to your cluster.

Procedure

  1. If you want to view the metrics from the Developer perspective in the OpenShift Container Platform web console, complete the following tasks:

    1. Click Observe.
    2. To view the metrics of a specific project, select the project in the Project: list. For example, openshift-nmstate.
    3. Click the Metrics tab.
    4. To visualize the metrics on the plot, select a query from the Select query list or create a custom PromQL query based on the selected query by selecting Show PromQL.

      Note

      In the Developer perspective, you can only run one query at a time.

  2. If you want to view the metrics from the Administrator perspective in the OpenShift Container Platform web console, complete the following tasks:

    1. Click Observe → Metrics.
    2. Enter kubernetes_nmstate_features_applied in the Expression field.
    3. Click Add query and then Run queries.
  3. To explore the visualized metrics, do any of the following tasks:

    1. To zoom into the plot and change the time range, do any of the following tasks:

      • To visually select the time range, click and drag on the plot horizontally.
      • To select the time range, use the menu in the upper left of the console.
    2. To reset the time range, select Reset zoom.
    3. To display the output for all the queries at a specific point in time, hold the mouse cursor on the plot at that point. The query output displays in a pop-up box.

6.1.2. Additional resources

6.2. AWS Load Balancer Operator

6.2.1. AWS Load Balancer Operator release notes

The AWS Load Balancer (ALB) Operator deploys and manages an instance of the AWSLoadBalancerController resource.

Important

The AWS Load Balancer (ALB) Operator is only supported on the x86_64 architecture.

These release notes track the development of the AWS Load Balancer Operator in OpenShift Container Platform.

For an overview of the AWS Load Balancer Operator, see AWS Load Balancer Operator in OpenShift Container Platform.

Note

The AWS Load Balancer Operator currently does not support AWS GovCloud.

6.2.1.1. AWS Load Balancer Operator 1.1.1

The following advisory is available for the AWS Load Balancer Operator version 1.1.1:

6.2.1.2. AWS Load Balancer Operator 1.1.0

The AWS Load Balancer Operator version 1.1.0 supports the AWS Load Balancer Controller version 2.4.4.

The following advisory is available for the AWS Load Balancer Operator version 1.1.0:

6.2.1.2.1. Notable changes
  • This release uses the Kubernetes API version 0.27.2.
6.2.1.2.2. New features
  • The AWS Load Balancer Operator now supports a standardized Security Token Service (STS) flow by using the Cloud Credential Operator.
6.2.1.2.3. Bug fixes
  • A FIPS-compliant cluster must use TLS version 1.2. Previously, webhooks for the AWS Load Balancer Controller only accepted TLS 1.3 as the minimum version, resulting in an error such as the following on a FIPS-compliant cluster:

    remote error: tls: protocol version not supported

    Now, the AWS Load Balancer Controller accepts TLS 1.2 as the minimum TLS version, resolving this issue. (OCPBUGS-14846)

6.2.1.3. AWS Load Balancer Operator 1.0.1

The following advisory is available for the AWS Load Balancer Operator version 1.0.1:

6.2.1.4. AWS Load Balancer Operator 1.0.0

The AWS Load Balancer Operator is now generally available with this release. The AWS Load Balancer Operator version 1.0.0 supports the AWS Load Balancer Controller version 2.4.4.

The following advisory is available for the AWS Load Balancer Operator version 1.0.0:

Important

The AWS Load Balancer (ALB) Operator version 1.x.x cannot upgrade automatically from the Technology Preview version 0.x.x. To upgrade from an earlier version, you must uninstall the ALB operands and delete the aws-load-balancer-operator namespace.

6.2.1.4.1. Notable changes
  • This release uses the new v1 API version.
6.2.1.4.2. Bug fixes
  • Previously, the controller provisioned by the AWS Load Balancer Operator did not properly use the configuration for the cluster-wide proxy. These settings are now applied appropriately to the controller. (OCPBUGS-4052, OCPBUGS-5295)

6.2.1.5. Earlier versions

The two earliest versions of the AWS Load Balancer Operator are available as a Technology Preview. These versions should not be used in a production cluster. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following advisory is available for the AWS Load Balancer Operator version 0.2.0:

The following advisory is available for the AWS Load Balancer Operator version 0.0.1:

6.2.2. AWS Load Balancer Operator in OpenShift Container Platform

The AWS Load Balancer Operator deploys and manages the AWS Load Balancer Controller. You can install the AWS Load Balancer Operator from OperatorHub by using the OpenShift Container Platform web console or the CLI.

6.2.2.1. AWS Load Balancer Operator considerations

Review the following limitations before installing and using the AWS Load Balancer Operator:

  • The IP traffic mode only works on AWS Elastic Kubernetes Service (EKS). The AWS Load Balancer Operator disables the IP traffic mode for the AWS Load Balancer Controller. As a result of disabling the IP traffic mode, the AWS Load Balancer Controller cannot use the pod readiness gate.
  • The AWS Load Balancer Operator adds command-line flags such as --disable-ingress-class-annotation and --disable-ingress-group-name-annotation to the AWS Load Balancer Controller. Therefore, the AWS Load Balancer Operator does not allow using the kubernetes.io/ingress.class and alb.ingress.kubernetes.io/group.name annotations in the Ingress resource.
  • You must configure the AWS Load Balancer Operator so that the service (SVC) type is NodePort, not LoadBalancer or ClusterIP.

6.2.2.2. AWS Load Balancer Operator

The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/role/elb tag is missing. Also, the AWS Load Balancer Operator detects the following information from the underlying AWS cloud:

  • The ID of the virtual private cloud (VPC) in which the cluster that hosts the Operator is deployed.
  • Public and private subnets of the discovered VPC.

The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only.
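
For example, the following Service manifest is a minimal sketch of such an NLB-backed service. The annotation names follow the upstream AWS Load Balancer Controller documentation and are assumptions here, not taken from this guide:

apiVersion: v1
kind: Service
metadata:
  name: example-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP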

Procedure

  1. After you deploy the AWS Load Balancer Operator from OperatorHub by creating a Subscription object, get the name of the install plan from the subscription by running the following command:

    $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}'

    Example output

    install-zlfbt

  2. Check if the status of an install plan is Complete by running the following command:

    $ oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{"\n"}}'

    Example output

    Complete

  3. View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command:

    $ oc get -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager

    Example output

    NAME                                           READY     UP-TO-DATE   AVAILABLE   AGE
    aws-load-balancer-operator-controller-manager  1/1       1            1           23h

6.2.2.3. Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost

You can configure the AWS Load Balancer Operator to provision an AWS Application Load Balancer in an AWS VPC cluster extended into an Outpost. AWS Outposts does not support AWS Network Load Balancers. As a result, the AWS Load Balancer Operator cannot provision Network Load Balancers in an Outpost.

You can create an AWS Application Load Balancer either in the cloud subnet or in the Outpost subnet. An Application Load Balancer in the cloud can attach to cloud-based compute nodes and an Application Load Balancer in the Outpost can attach to edge compute nodes. You must annotate Ingress resources with the Outpost subnet or the VPC subnet, but not both.

Prerequisites

  • You have extended an AWS VPC cluster into an Outpost.
  • You have installed the OpenShift CLI (oc).
  • You have installed the AWS Load Balancer Operator and created the AWS Load Balancer Controller.

Procedure

  • Configure the Ingress resource to use a specified subnet:

    Example Ingress resource configuration

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: <application_name>
      annotations:
        alb.ingress.kubernetes.io/subnets: <subnet_id> 1
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: <application_name>
                    port:
                      number: 80

    1
    Specifies the subnet to use.
    • To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
    • To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.

6.2.2.4. AWS Load Balancer Operator logs

You can view the AWS Load Balancer Operator logs by using the oc logs command.

Procedure

  • View the logs of the AWS Load Balancer Operator by running the following command:

    $ oc logs -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager -c manager

6.2.3. Installing the AWS Load Balancer Operator

The AWS Load Balancer Operator deploys and manages the AWS Load Balancer Controller. You can install the AWS Load Balancer Operator from OperatorHub by using the OpenShift Container Platform web console or the CLI.

6.2.3.1. Installing the AWS Load Balancer Operator by using the web console

You can install the AWS Load Balancer Operator by using the web console.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
  • Your cluster is configured with AWS as the platform type and cloud provider.
  • If you are using a security token service (STS) or user-provisioned infrastructure, follow the related preparation steps. For example, if you are using AWS Security Token Service, see "Preparing for the AWS Load Balancer Operator on a cluster using the AWS Security Token Service (STS)".

Procedure

  1. Navigate to Operators → OperatorHub in the OpenShift Container Platform web console.
  2. Select the AWS Load Balancer Operator. You can use the Filter by keyword text box or use the filter list to search for the AWS Load Balancer Operator from the list of Operators.
  3. Select the aws-load-balancer-operator namespace.
  4. On the Install Operator page, select the following options:

    1. Select the Update channel as stable-v1.
    2. Select the Installation mode as All namespaces on the cluster (default).
    3. Select the Installed Namespace as aws-load-balancer-operator. If the aws-load-balancer-operator namespace does not exist, it is created during the Operator installation.
    4. Select Update approval as Automatic or Manual. By default, Update approval is set to Automatic. If you select automatic updates, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select manual updates, OLM creates an update request. As a cluster administrator, you must then manually approve the update request to update the Operator to the new version.
  5. Click Install.

Verification

  • Verify that the AWS Load Balancer Operator shows the Status as Succeeded on the Installed Operators dashboard.

6.2.3.2. Installing the AWS Load Balancer Operator by using the CLI

You can install the AWS Load Balancer Operator by using the CLI.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
  • Your cluster is configured with AWS as the platform type and cloud provider.
  • You are logged in to the OpenShift CLI (oc).

Procedure

  1. Create a Namespace object:

    1. Create a YAML file that defines the Namespace object:

      Example namespace.yaml file

      apiVersion: v1
      kind: Namespace
      metadata:
        name: aws-load-balancer-operator

    2. Create the Namespace object by running the following command:

      $ oc apply -f namespace.yaml
  2. Create an OperatorGroup object:

    1. Create a YAML file that defines the OperatorGroup object:

      Example operatorgroup.yaml file

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: aws-lb-operatorgroup
        namespace: aws-load-balancer-operator
      spec:
        upgradeStrategy: Default

    2. Create the OperatorGroup object by running the following command:

      $ oc apply -f operatorgroup.yaml
  3. Create a Subscription object:

    1. Create a YAML file that defines the Subscription object:

      Example subscription.yaml file

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: aws-load-balancer-operator
        namespace: aws-load-balancer-operator
      spec:
        channel: stable-v1
        installPlanApproval: Automatic
        name: aws-load-balancer-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

    2. Create the Subscription object by running the following command:

      $ oc apply -f subscription.yaml

Verification

  1. Get the name of the install plan from the subscription:

    $ oc -n aws-load-balancer-operator \
        get subscription aws-load-balancer-operator \
        --template='{{.status.installplan.name}}{{"\n"}}'
  2. Check the status of the install plan:

    $ oc -n aws-load-balancer-operator \
        get ip <install_plan_name> \
        --template='{{.status.phase}}{{"\n"}}'

    The output must be Complete.
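
  3. Optional: Verify that the ClusterServiceVersion (CSV) for the Operator reports the Succeeded phase by running the following command. This check mirrors the verification used elsewhere in this chapter and is a suggestion rather than a required step:

    $ oc get clusterserviceversion -n aws-load-balancer-operator \
        -o custom-columns=Name:.metadata.name,Phase:.status.phase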

6.2.4. Installing the AWS Load Balancer Operator on a cluster that uses AWS STS

You can install the Amazon Web Services (AWS) Load Balancer Operator on a cluster that uses the Security Token Service (STS). Follow these steps to prepare your cluster before installing the Operator.

The AWS Load Balancer Operator relies on the CredentialsRequest object to bootstrap the Operator and the AWS Load Balancer Controller. The AWS Load Balancer Operator waits until the required secrets are created and available.

6.2.4.1. Prerequisites

  • You installed the OpenShift CLI (oc).
  • You know the infrastructure ID of your cluster. To show this ID, run the following command in your CLI:

    $ oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"
  • You know the OpenID Connect (OIDC) DNS information for your cluster. To show this information, enter the following command in your CLI:

    $ oc get authentication.config cluster -o=jsonpath="{.spec.serviceAccountIssuer}"
  • You logged in to the AWS Web Console, navigated to IAM → Access management → Identity providers, and located the OIDC Amazon Resource Name (ARN) information. An OIDC ARN example is arn:aws:iam::777777777777:oidc-provider/<oidc_dns_url>.
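
If you prefer the CLI to the AWS Web Console, you can list the OIDC identity providers and their ARNs by using the AWS CLI, assuming the aws command is installed and configured:

$ aws iam list-open-id-connect-providers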

6.2.4.2. Creating an IAM role for the AWS Load Balancer Operator

An additional Amazon Web Services (AWS) Identity and Access Management (IAM) role is required to successfully install the AWS Load Balancer Operator on a cluster that uses STS. The IAM role is required to interact with subnets and Virtual Private Clouds (VPCs). The AWS Load Balancer Operator generates the CredentialsRequest object with the IAM role to bootstrap itself.

You can create the IAM role by using one of the following options:

  • Use the Cloud Credential Operator utility (ccoctl) and the Operator CredentialsRequest object.
  • Use the AWS CLI if your environment does not support the ccoctl command.

6.2.4.2.1. Creating an AWS IAM role by using the Cloud Credential Operator utility

You can use the Cloud Credential Operator utility (ccoctl) to create an AWS IAM role for the AWS Load Balancer Operator. An AWS IAM role interacts with subnets and Virtual Private Clouds (VPCs).

Prerequisites

  • You must extract and prepare the ccoctl binary.

Procedure

  1. Download the CredentialsRequest custom resource (CR) and store it in a directory by running the following command:

    $ curl --create-dirs -o <credentials_requests_dir>/operator.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-credentials-request.yaml
  2. Use the ccoctl utility to create an AWS IAM role by running the following command:

    $ ccoctl aws create-iam-roles \
        --name <name> \
        --region=<aws_region> \
        --credentials-requests-dir=<credentials_requests_dir> \
        --identity-provider-arn <oidc_arn>

    Example output

    2023/09/12 11:38:57 Role arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-operator created 1
    2023/09/12 11:38:57 Saved credentials configuration to: /home/user/<credentials_requests_dir>/manifests/aws-load-balancer-operator-aws-load-balancer-operator-credentials.yaml
    2023/09/12 11:38:58 Updated Role policy for Role <name>-aws-load-balancer-operator-aws-load-balancer-operator created

    1
    Note the Amazon Resource Name (ARN) of an AWS IAM role that was created for the AWS Load Balancer Operator, such as arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-operator.
    Note

    The length of an AWS IAM role name must be less than or equal to 12 characters.

6.2.4.2.2. Creating an AWS IAM role by using the AWS CLI

You can use the AWS Command Line Interface to create an IAM role for the AWS Load Balancer Operator. The IAM role is used to interact with subnets and Virtual Private Clouds (VPCs).

Prerequisites

  • You must have access to the AWS Command Line Interface (aws).

Procedure

  1. Generate a trust policy file by using your identity provider by running the following command:

    $ cat <<EOF > albo-operator-trust-policy.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "<oidc_arn>" 1
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {
                        "<cluster_oidc_endpoint>:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster" 2
                    }
                }
            }
        ]
    }
    EOF
    1
    Specifies the Amazon Resource Name (ARN) of the OIDC identity provider, such as arn:aws:iam::777777777777:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/28292va7ad7mr9r4he1fb09b14t59t4f.
    2
    Specifies the service account for the AWS Load Balancer Operator, aws-load-balancer-operator-controller-manager. An example of <cluster_oidc_endpoint> is rh-oidc.s3.us-east-1.amazonaws.com/28292va7ad7mr9r4he1fb09b14t59t4f.
  2. Create the IAM role with the generated trust policy by running the following command:

    $ aws iam create-role --role-name albo-operator --assume-role-policy-document file://albo-operator-trust-policy.json

    Example output

    ROLE	arn:aws:iam::<aws_account_number>:role/albo-operator	2023-08-02T12:13:22Z 1
    ASSUMEROLEPOLICYDOCUMENT	2012-10-17
    STATEMENT	sts:AssumeRoleWithWebIdentity	Allow
    STRINGEQUALS	system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager
    PRINCIPAL	arn:aws:iam::<aws_account_number>:oidc-provider/<cluster_oidc_endpoint>

    1
    Note the ARN of the AWS IAM role that was created for the AWS Load Balancer Operator, such as arn:aws:iam::777777777777:role/albo-operator.
  3. Download the permission policy for the AWS Load Balancer Operator by running the following command:

    $ curl -o albo-operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-permission-policy.json
  4. Attach the permission policy for the AWS Load Balancer Controller to the IAM role by running the following command:

    $ aws iam put-role-policy --role-name albo-operator --policy-name perms-policy-albo-operator --policy-document file://albo-operator-permission-policy.json
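
  5. Optional: Confirm that the role exists and that the permission policy is attached by running the following commands:

    $ aws iam get-role --role-name albo-operator
    $ aws iam list-role-policies --role-name albo-operator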

6.2.4.3. Configuring the ARN role for the AWS Load Balancer Operator

You can configure the Amazon Resource Name (ARN) role for the AWS Load Balancer Operator as an environment variable. You can configure the ARN role by using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the aws-load-balancer-operator project by running the following command:

    $ oc new-project aws-load-balancer-operator
  2. Create the OperatorGroup object by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      targetNamespaces: []
    EOF
  3. Create the Subscription object by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: aws-load-balancer-operator
      namespace: aws-load-balancer-operator
    spec:
      channel: stable-v1
      name: aws-load-balancer-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      config:
        env:
        - name: ROLEARN
          value: "<albo_role_arn>" 1
    EOF
    1
    Specifies the ARN role to be used in the CredentialsRequest to provision the AWS credentials for the AWS Load Balancer Operator. An example for <albo_role_arn> is arn:aws:iam::<aws_account_number>:role/albo-operator.
    Note

    The AWS Load Balancer Operator waits until the secret is created before moving to the Available status.
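
  4. Optional: Confirm that OLM propagated the ROLEARN value to the Operator deployment by inspecting the container environment, for example:

    $ oc -n aws-load-balancer-operator get deployment aws-load-balancer-operator-controller-manager \
        -o jsonpath='{.spec.template.spec.containers[*].env}'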

6.2.4.4. Creating an IAM role for the AWS Load Balancer Controller

The CredentialsRequest object for the AWS Load Balancer Controller must be set with a manually provisioned IAM role.

You can create the IAM role by using one of the following options:

  • Use the Cloud Credential Operator utility (ccoctl) and the controller CredentialsRequest object.
  • Use the AWS CLI if your environment does not support the ccoctl command.

6.2.4.4.1. Creating an AWS IAM role for the controller by using the Cloud Credential Operator utility

You can use the Cloud Credential Operator utility (ccoctl) to create an AWS IAM role for the AWS Load Balancer Controller. An AWS IAM role is used to interact with subnets and Virtual Private Clouds (VPCs).

Prerequisites

  • You must extract and prepare the ccoctl binary.

Procedure

  1. Download the CredentialsRequest custom resource (CR) and store it in a directory by running the following command:

    $ curl --create-dirs -o <credentials_requests_dir>/controller.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/controller/controller-credentials-request.yaml
  2. Use the ccoctl utility to create an AWS IAM role by running the following command:

    $ ccoctl aws create-iam-roles \
        --name <name> \
        --region=<aws_region> \
        --credentials-requests-dir=<credentials_requests_dir> \
        --identity-provider-arn <oidc_arn>

    Example output

    2023/09/12 11:38:57 Role arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-controller created 1
    2023/09/12 11:38:57 Saved credentials configuration to: /home/user/<credentials_requests_dir>/manifests/aws-load-balancer-operator-aws-load-balancer-controller-credentials.yaml
    2023/09/12 11:38:58 Updated Role policy for Role <name>-aws-load-balancer-operator-aws-load-balancer-controller created

    1
    Note the Amazon Resource Name (ARN) of an AWS IAM role that was created for the AWS Load Balancer Controller, such as arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-controller.
    Note

    The length of an AWS IAM role name must be less than or equal to 12 characters.

6.2.4.4.2. Creating an AWS IAM role for the controller by using the AWS CLI

You can use the AWS command line interface to create an AWS IAM role for the AWS Load Balancer Controller. An AWS IAM role is used to interact with subnets and Virtual Private Clouds (VPCs).

Prerequisites

  • You must have access to the AWS command line interface (aws).

Procedure

  1. Generate a trust policy file by using your identity provider by running the following command:

    $ cat <<EOF > albo-controller-trust-policy.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "<oidc_arn>" 1
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {
                        "<cluster_oidc_endpoint>:sub": "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster" 2
                    }
                }
            }
        ]
    }
    EOF
    1
    Specifies the Amazon Resource Name (ARN) of the OIDC identity provider, such as arn:aws:iam::777777777777:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/28292va7ad7mr9r4he1fb09b14t59t4f.
    2
    Specifies the service account for the AWS Load Balancer Controller. An example of <cluster_oidc_endpoint> is rh-oidc.s3.us-east-1.amazonaws.com/28292va7ad7mr9r4he1fb09b14t59t4f.
  2. Create an AWS IAM role with the generated trust policy by running the following command:

    $ aws iam create-role --role-name albo-controller --assume-role-policy-document file://albo-controller-trust-policy.json

    Example output

    ROLE	arn:aws:iam::<aws_account_number>:role/albo-controller	2023-08-02T12:13:22Z 1
    ASSUMEROLEPOLICYDOCUMENT	2012-10-17
    STATEMENT	sts:AssumeRoleWithWebIdentity	Allow
    STRINGEQUALS	system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster
    PRINCIPAL	arn:aws:iam::<aws_account_number>:oidc-provider/<cluster_oidc_endpoint>

    1
    Note the ARN of an AWS IAM role for the AWS Load Balancer Controller, such as arn:aws:iam::777777777777:role/albo-controller.
  3. Download the permission policy for the AWS Load Balancer Controller by running the following command:

    $ curl -o albo-controller-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/assets/iam-policy.json
  4. Attach the permission policy for the AWS Load Balancer Controller to an AWS IAM role by running the following command:

    $ aws iam put-role-policy --role-name albo-controller --policy-name perms-policy-albo-controller --policy-document file://albo-controller-permission-policy.json
  5. Create a YAML file that defines the AWSLoadBalancerController object:

    Example sample-aws-lb-manual-creds.yaml file

    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController 1
    metadata:
      name: cluster 2
    spec:
      credentialsRequestConfig:
        stsIAMRoleARN: <albc_role_arn> 3

    1
    Defines the AWSLoadBalancerController object.
    2
    Defines the AWS Load Balancer Controller name. All related resources use this instance name as a suffix.
    3
    Specifies the ARN role for the AWS Load Balancer Controller. The CredentialsRequest object uses this ARN role to provision the AWS credentials. An example of <albc_role_arn> is arn:aws:iam::777777777777:role/albo-controller.
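
  6. Create the AWSLoadBalancerController object by running the following command:

    $ oc apply -f sample-aws-lb-manual-creds.yaml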

6.2.4.5. Additional resources

6.2.5. Creating an instance of the AWS Load Balancer Controller

After installing the AWS Load Balancer Operator, you can create the AWS Load Balancer Controller.

6.2.5.1. Creating the AWS Load Balancer Controller

You can install only a single instance of the AWSLoadBalancerController object in a cluster. You can create the AWS Load Balancer Controller by using the CLI. The AWS Load Balancer Operator reconciles only the resource named cluster.

Prerequisites

  • You have created the echoserver namespace.
  • You have access to the OpenShift CLI (oc).

Procedure

  1. Create a YAML file that defines the AWSLoadBalancerController object:

    Example sample-aws-lb.yaml file

    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController 1
    metadata:
      name: cluster 2
    spec:
      subnetTagging: Auto 3
      additionalResourceTags: 4
      - key: example.org/security-scope
        value: staging
      ingressClass: alb 5
      config:
        replicas: 2 6
      enabledAddons: 7
        - AWSWAFv2 8

    1
    Defines the AWSLoadBalancerController object.
    2
    Defines the AWS Load Balancer Controller name. This instance name gets added as a suffix to all related resources.
    3
    Configures the subnet tagging method for the AWS Load Balancer Controller. The following values are valid:
    • Auto: The AWS Load Balancer Operator determines the subnets that belong to the cluster and tags them appropriately. The Operator cannot determine the role correctly if the internal subnet tags are not present on the internal subnets.
    • Manual: You manually tag the subnets that belong to the cluster with the appropriate role tags. Use this option if you installed your cluster on user-provided infrastructure.
    4
    Defines the tags used by the AWS Load Balancer Controller when it provisions AWS resources.
    5
    Defines the ingress class name. The default value is alb.
    6
    Specifies the number of replicas of the AWS Load Balancer Controller.
    7
    Specifies add-ons for the AWS Load Balancer Controller, which are enabled through annotations.
    8
    Enables the alb.ingress.kubernetes.io/wafv2-acl-arn annotation.
  2. Create the AWSLoadBalancerController object by running the following command:

    $ oc create -f sample-aws-lb.yaml
  3. Create a YAML file that defines the Deployment resource:

    Example deployment-albo.yaml file

    apiVersion: apps/v1
    kind: Deployment 1
    metadata:
      name: <echoserver> 2
      namespace: echoserver
    spec:
      selector:
        matchLabels:
          app: echoserver
      replicas: 3 3
      template:
        metadata:
          labels:
            app: echoserver
        spec:
          containers:
            - image: openshift/origin-node
              command:
               - "/bin/socat"
              args:
                - TCP4-LISTEN:8080,reuseaddr,fork
                - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"'
              imagePullPolicy: Always
              name: echoserver
              ports:
                - containerPort: 8080

    1
    Defines the deployment resource.
    2
    Specifies the deployment name.
    3
    Specifies the number of replicas of the deployment.
  4. Create a YAML file that defines the Service resource:

    Example service-albo.yaml file

    apiVersion: v1
    kind: Service 1
    metadata:
      name: <echoserver> 2
      namespace: echoserver
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      type: NodePort
      selector:
        app: echoserver

    1
    Defines the service resource.
    2
    Specifies the service name.
  5. Create a YAML file that defines the Ingress resource:

    Example ingress-albo.yaml file

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: <name> 1
      namespace: echoserver
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: instance
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: <echoserver> 2
                    port:
                      number: 80

    1
    Specify a name for the Ingress resource.
    2
    Specifies the service name.

Verification

  • Save the status of the Ingress resource in the HOST variable by running the following command:

    $ HOST=$(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}')
  • Verify the status of the Ingress resource by running the following command:

    $ curl $HOST

6.2.6. Serving multiple ingress resources through a single AWS Load Balancer

You can route the traffic to different services that are part of a single domain through a single AWS Load Balancer. Each Ingress resource provides different endpoints of the domain.

6.2.6.1. Creating multiple ingress resources through a single AWS Load Balancer

You can route the traffic to multiple ingress resources through a single AWS Load Balancer by using the CLI.

Prerequisites

  • You have access to the OpenShift CLI (oc).

Procedure

  1. Create an IngressClassParams resource YAML file, for example, sample-single-lb-params.yaml, as follows:

    apiVersion: elbv2.k8s.aws/v1beta1 1
    kind: IngressClassParams
    metadata:
      name: single-lb-params 2
    spec:
      group:
        name: single-lb 3
    1
    Defines the API group and version of the IngressClassParams resource.
    2
    Specifies the IngressClassParams resource name.
    3
    Specifies the IngressGroup resource name. All of the Ingress resources of this class belong to this IngressGroup.
  2. Create the IngressClassParams resource by running the following command:

    $ oc create -f sample-single-lb-params.yaml
  3. Create the IngressClass resource YAML file, for example, sample-single-lb-class.yaml, as follows:

    apiVersion: networking.k8s.io/v1 1
    kind: IngressClass
    metadata:
      name: single-lb 2
    spec:
      controller: ingress.k8s.aws/alb 3
      parameters:
        apiGroup: elbv2.k8s.aws 4
        kind: IngressClassParams 5
        name: single-lb-params 6
    1
    Defines the API group and version of the IngressClass resource.
    2
    Specifies the ingress class name.
    3
    Defines the controller name. The ingress.k8s.aws/alb value denotes that all ingress resources of this class should be managed by the AWS Load Balancer Controller.
    4
    Defines the API group of the IngressClassParams resource.
    5
    Defines the resource type of the IngressClassParams resource.
    6
    Defines the IngressClassParams resource name.
  4. Create the IngressClass resource by running the following command:

    $ oc create -f sample-single-lb-class.yaml
  5. Create the AWSLoadBalancerController resource YAML file, for example, sample-single-lb.yaml, as follows:

    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
      name: cluster
    spec:
      subnetTagging: Auto
      ingressClass: single-lb 1
    1
    Defines the name of the IngressClass resource.
  6. Create the AWSLoadBalancerController resource by running the following command:

    $ oc create -f sample-single-lb.yaml
  7. Create the Ingress resource YAML file, for example, sample-multiple-ingress.yaml, as follows:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-1 1
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing 2
        alb.ingress.kubernetes.io/group.order: "1" 3
        alb.ingress.kubernetes.io/target-type: instance 4
    spec:
      ingressClassName: single-lb 5
      rules:
      - host: example.com 6
        http:
            paths:
            - path: /blog 7
              pathType: Prefix
              backend:
                service:
                  name: example-1 8
                  port:
                    number: 80 9
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-2
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/group.order: "2"
        alb.ingress.kubernetes.io/target-type: instance
    spec:
      ingressClassName: single-lb
      rules:
      - host: example.com
        http:
            paths:
            - path: /store
              pathType: Prefix
              backend:
                service:
                  name: example-2
                  port:
                    number: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-3
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/group.order: "3"
        alb.ingress.kubernetes.io/target-type: instance
    spec:
      ingressClassName: single-lb
      rules:
      - host: example.com
        http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: example-3
                  port:
                    number: 80
    1
    Specifies the ingress name.
    2
    Indicates that the load balancer provisions in a public subnet so that it is accessible over the internet.
    3
    Specifies the order in which the rules from the multiple ingress resources are matched when the request is received at the load balancer.
    4
    Indicates that the load balancer will target OpenShift Container Platform nodes to reach the service.
    5
    Specifies the ingress class that belongs to this ingress.
    6
    Defines a domain name used for request routing.
    7
    Defines the path that must route to the service.
    8
    Defines the service name that serves the endpoint configured in the Ingress resource.
    9
    Defines the port on the service that serves the endpoint.
  8. Create the Ingress resource by running the following command:

    $ oc create -f sample-multiple-ingress.yaml
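
Verification

  • All three Ingress resources specify the same ingress class and therefore join the single-lb IngressGroup, so a single Application Load Balancer serves them. As a quick check, verify that the ADDRESS column reports the same load balancer DNS name for all three resources by running the following command:

    $ oc get ingress example-1 example-2 example-3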

6.2.7. Adding TLS termination

You can add TLS termination on the AWS Load Balancer.

6.2.7.1. Adding TLS termination on the AWS Load Balancer

You can route the traffic for the domain to pods of a service and add TLS termination on the AWS Load Balancer.

Prerequisites

  • You have access to the OpenShift CLI (oc).

Procedure

  1. Create a YAML file that defines the AWSLoadBalancerController resource:

    Example add-tls-termination-albc.yaml file

    apiVersion: networking.olm.openshift.io/v1
    kind: AWSLoadBalancerController
    metadata:
      name: cluster
    spec:
      subnetTagging: Auto
      ingressClass: tls-termination 1

    1
    Defines the ingress class name. If the ingress class is not present in your cluster, the AWS Load Balancer Controller creates one. The AWS Load Balancer Controller reconciles the additional ingress class values if spec.controller is set to ingress.k8s.aws/alb.
  2. Create a YAML file that defines the Ingress resource:

    Example add-tls-termination-ingress.yaml file

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: <example> 1
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing 2
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx 3
    spec:
      ingressClassName: tls-termination 4
      rules:
      - host: <example.com> 5
        http:
            paths:
              - path: /
                pathType: Exact
                backend:
                  service:
                    name: <example-service> 6
                    port:
                      number: 80

    1
    Specifies the ingress name.
    2
    The controller provisions the load balancer for ingress in a public subnet to access the load balancer over the internet.
    3
    The Amazon Resource Name (ARN) of the certificate that you attach to the load balancer.
    4
    Defines the ingress class name.
    5
    Defines the domain for traffic routing.
    6
    Defines the service for traffic routing.
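
  3. Create the AWSLoadBalancerController and Ingress objects by running the following commands. If the AWSLoadBalancerController resource named cluster already exists, you can use oc apply to update it in place:

    $ oc apply -f add-tls-termination-albc.yaml
    $ oc apply -f add-tls-termination-ingress.yaml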

6.2.8. Configuring cluster-wide proxy

You can configure the cluster-wide proxy in the AWS Load Balancer Operator. After you configure the cluster-wide proxy, Operator Lifecycle Manager (OLM) automatically updates all the deployments of the Operators with environment variables such as HTTP_PROXY, HTTPS_PROXY, and NO_PROXY. The AWS Load Balancer Operator then propagates these variables to the managed controller.

6.2.8.1. Trusting the certificate authority of the cluster-wide proxy

Procedure

  1. Create the config map to contain the certificate authority (CA) bundle in the aws-load-balancer-operator namespace by running the following command:

    $ oc -n aws-load-balancer-operator create configmap trusted-ca
  2. To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command:

    $ oc -n aws-load-balancer-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true
  3. Update the AWS Load Balancer Operator subscription to access the config map in the AWS Load Balancer Operator deployment by running the following command:

    $ oc -n aws-load-balancer-operator patch subscription aws-load-balancer-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}],"volumes":[{"name":"trusted-ca","configMap":{"name":"trusted-ca"}}],"volumeMounts":[{"name":"trusted-ca","mountPath":"/etc/pki/tls/certs/albo-tls-ca-bundle.crt","subPath":"ca-bundle.crt"}]}}}'
  4. After the AWS Load Balancer Operator is deployed, verify that the CA bundle is added to the aws-load-balancer-operator-controller-manager deployment by running the following command:

    $ oc -n aws-load-balancer-operator exec deploy/aws-load-balancer-operator-controller-manager -c manager -- bash -c "ls -l /etc/pki/tls/certs/albo-tls-ca-bundle.crt; printenv TRUSTED_CA_CONFIGMAP_NAME"

    Example output

    -rw-r--r--. 1 root 1000690000 5875 Jan 11 12:25 /etc/pki/tls/certs/albo-tls-ca-bundle.crt
    trusted-ca

  5. Optional: Restart the deployment of the AWS Load Balancer Operator every time the config map changes by running the following command:

    $ oc -n aws-load-balancer-operator rollout restart deployment/aws-load-balancer-operator-controller-manager

6.2.8.2. Additional resources

6.3. eBPF Manager Operator

6.3.1. About the eBPF Manager Operator

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.3.1.1. About Extended Berkeley Packet Filter (eBPF)

eBPF extends the original Berkeley Packet Filter for advanced network traffic filtering. It acts as a virtual machine inside the Linux kernel, allowing you to run sandboxed programs in response to events such as network packets, system calls, or kernel functions.

6.3.1.2. About the eBPF Manager Operator

eBPF Manager simplifies the management and deployment of eBPF programs within Kubernetes, as well as enhancing the security around using eBPF programs. It utilizes Kubernetes custom resource definitions (CRDs) to manage eBPF programs packaged as OCI container images. This approach helps to delineate deployment permissions and enhance security by restricting program types deployable by specific users.

eBPF Manager is a software stack designed to manage eBPF programs within Kubernetes. It facilitates the loading, unloading, modifying, and monitoring of eBPF programs in Kubernetes clusters. It includes a daemon, CRDs, an agent, and an operator:

bpfman
A system daemon that manages eBPF programs via a gRPC API.
eBPF CRDs
A set of CRDs like XdpProgram and TcProgram for loading eBPF programs, and a bpfman-generated CRD (BpfProgram) for representing the state of loaded programs.
bpfman-agent
Runs within a daemonset container, ensuring eBPF programs on each node are in the desired state.
bpfman-operator
Manages the lifecycle of the bpfman-agent and CRDs in the cluster using the Operator SDK.
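
For example, the following XdpProgram manifest is a sketch modeled on the upstream bpfman examples. The bytecode image URL and field names come from the upstream project and might change between versions:

apiVersion: bpfman.io/v1alpha1
kind: XdpProgram
metadata:
  name: xdp-pass-all-nodes
spec:
  bpffunctionname: pass
  nodeselector: {}
  interfaceselector:
    primarynodeinterface: true
  priority: 0
  bytecode:
    image:
      url: quay.io/bpfman-bytecode/xdp_pass:latest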

The eBPF Manager Operator offers the following features:

  • Enhances security by centralizing eBPF program loading through a controlled daemon. eBPF Manager holds the elevated privileges so that applications do not need them. eBPF program control is regulated by standard Kubernetes Role-based access control (RBAC), which can allow or deny an application’s access to the different eBPF Manager CRDs that manage eBPF program loading and unloading.
  • Provides detailed visibility into active eBPF programs, improving your ability to debug issues across the system.
  • Facilitates the coexistence of multiple eBPF programs from different sources by using libraries such as libxdp for XDP and TC programs, enhancing interoperability.
  • Streamlines the deployment and lifecycle management of eBPF programs in Kubernetes. Developers can focus on program interaction rather than lifecycle management, with support for existing eBPF libraries like Cilium, libbpf, and Aya.

6.3.1.3. Additional resources

6.3.1.4. Next steps

6.3.2. Installing the eBPF Manager Operator

As a cluster administrator, you can install the eBPF Manager Operator by using the OpenShift Container Platform CLI or the web console.

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.3.2.1. Installing the eBPF Manager Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. To create the bpfman namespace, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: bpfman
    EOF
  2. To create an OperatorGroup CR, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: bpfman-operators
      namespace: bpfman
    EOF
  3. Subscribe to the eBPF Manager Operator.

    1. To create a Subscription CR for the eBPF Manager Operator, enter the following command:

      $ cat << EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: bpfman-operator
        namespace: bpfman
      spec:
        name: bpfman-operator
        channel: alpha
        source: community-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get ip -n bpfman

    Example output

    NAME            CSV                      APPROVAL    APPROVED
    install-ppjxl   bpfman-operator.v0.5.0   Automatic   true

  5. To verify the version of the Operator, enter the following command:

    $ oc get csv -n bpfman

    Example output

    NAME                     DISPLAY                 VERSION   REPLACES                 PHASE
    bpfman-operator.v0.5.0   eBPF Manager Operator   0.5.0     bpfman-operator.v0.4.2   Succeeded

6.3.2.2. Installing the eBPF Manager Operator using the web console

As a cluster administrator, you can install the eBPF Manager Operator using the web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.

Procedure

  1. Install the eBPF Manager Operator:

    1. In the OpenShift Container Platform web console, click Operators OperatorHub.
    2. Select eBPF Manager Operator from the list of available Operators, and if the Show community Operator dialog is displayed, click Continue.
    3. Click Install.
    4. On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
    5. Click Install.
  2. Verify that the eBPF Manager Operator is installed successfully:

    1. Navigate to the Operators Installed Operators page.
    2. Ensure that eBPF Manager Operator is listed in the bpfman project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not have a Status of InstallSucceeded, troubleshoot using the following steps:

      • Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
      • Navigate to the Workloads Pods page and check the logs for pods in the bpfman project, for example by using the commands shown after this list.
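
      For example, a quick check of the agent pods might look like the following commands; the pod name is a placeholder:

      $ oc get pods -n bpfman
      $ oc logs <pod_name> -n bpfman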

6.3.2.3. Next steps

6.3.3. Deploying an eBPF program

As a cluster administrator, you can deploy containerized eBPF applications with the eBPF Manager Operator.

For the example eBPF program deployed in this procedure, the sample manifest does the following:

First, it creates basic Kubernetes objects like Namespace, ServiceAccount, and ClusterRoleBinding. It also creates an XdpProgram object, an instance of a custom resource definition (CRD) that eBPF Manager provides, which loads the eBPF XDP program. Each program type has its own CRD, but the CRDs are similar in what they do. For more information, see Loading eBPF Programs On Kubernetes.

Second, it creates a daemon set that runs a user space program that reads the eBPF maps that the eBPF program is populating. The eBPF map is volume mounted by using a Container Storage Interface (CSI) driver. By volume mounting the eBPF map in the container instead of accessing it on the host, the application pod can access the eBPF maps without being privileged. For more information on how the CSI is configured, see Deploying an eBPF enabled application On Kubernetes.
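
For illustration only, such a volume might be declared in the pod template of the daemon set along the following lines. The driver name, volume attribute keys, and mount path in this sketch are assumptions based on the upstream bpfman project rather than values taken from the sample manifest; check the linked documentation for the exact keys.

spec:
  containers:
  - name: go-xdp-counter
    volumeMounts:
    - name: go-xdp-counter-maps
      mountPath: /run/xdp/maps # assumed path where the user space program reads the map
  volumes:
  - name: go-xdp-counter-maps
    csi:
      driver: csi.bpfman.io # assumed bpfman CSI driver name
      volumeAttributes:
        csi.bpf.dev/program: go-xdp-counter-example # assumed key: XdpProgram that owns the map
        csi.bpf.dev/maps: xdp_stats_map # assumed key: eBPF map to expose to the pod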

Important

eBPF Manager Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

6.3.3.1. Deploying a containerized eBPF program

As a cluster administrator, you can deploy an eBPF program to nodes on your cluster. In this procedure, a sample containerized eBPF program is installed in the go-xdp-counter namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have an account with administrator privileges.
  • You have installed the eBPF Manager Operator.

Procedure

  1. To download the manifest, enter the following command:

    $ curl -L https://github.com/bpfman/bpfman/releases/download/v0.5.1/go-xdp-counter-install-selinux.yaml -o go-xdp-counter-install-selinux.yaml
  2. To deploy the sample eBPF application, enter the following command:

    $ oc create -f go-xdp-counter-install-selinux.yaml

    Example output

    namespace/go-xdp-counter created
    serviceaccount/bpfman-app-go-xdp-counter created
    clusterrolebinding.rbac.authorization.k8s.io/xdp-binding created
    daemonset.apps/go-xdp-counter-ds created
    xdpprogram.bpfman.io/go-xdp-counter-example created
    selinuxprofile.security-profiles-operator.x-k8s.io/bpfman-secure created

  3. To confirm that the eBPF sample application deployed successfully, enter the following command:

    $ oc get all -o wide -n go-xdp-counter

    Example output

    NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE                                 NOMINATED NODE   READINESS GATES
    pod/go-xdp-counter-ds-4m9cw   1/1     Running   0          44s   10.129.0.92    ci-ln-dcbq7d2-72292-ztrkp-master-1   <none>           <none>
    pod/go-xdp-counter-ds-7hzww   1/1     Running   0          44s   10.130.0.86    ci-ln-dcbq7d2-72292-ztrkp-master-2   <none>           <none>
    pod/go-xdp-counter-ds-qm9zx   1/1     Running   0          44s   10.128.0.101   ci-ln-dcbq7d2-72292-ztrkp-master-0   <none>           <none>
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS       IMAGES                                           SELECTOR
    daemonset.apps/go-xdp-counter-ds   3         3         3       3            3           <none>          44s   go-xdp-counter   quay.io/bpfman-userspace/go-xdp-counter:v0.5.0   name=go-xdp-counter

  4. To confirm that the example XDP program is running, enter the following command:

    $ oc get xdpprogram go-xdp-counter-example

    Example output

    NAME                     BPFFUNCTIONNAME   NODESELECTOR   STATUS
    go-xdp-counter-example   xdp_stats         {}             ReconcileSuccess

  5. To confirm that the XDP program is collecting data, enter the following command:

    $ oc logs <pod_name> -n go-xdp-counter

    Replace <pod_name> with the name of an XDP program pod, such as go-xdp-counter-ds-4m9cw.

    Example output

    2024/08/13 15:20:06 15016 packets received
    2024/08/13 15:20:06 93581579 bytes received
    
    2024/08/13 15:20:09 19284 packets received
    2024/08/13 15:20:09 99638680 bytes received
    
    2024/08/13 15:20:12 23522 packets received
    2024/08/13 15:20:12 105666062 bytes received
    
    2024/08/13 15:20:15 27276 packets received
    2024/08/13 15:20:15 112028608 bytes received
    
    2024/08/13 15:20:18 29470 packets received
    2024/08/13 15:20:18 112732299 bytes received
    
    2024/08/13 15:20:21 32588 packets received
    2024/08/13 15:20:21 113813781 bytes received

6.4. External DNS Operator

6.4.1. External DNS Operator release notes

The External DNS Operator deploys and manages ExternalDNS to provide name resolution for services and routes from the external DNS provider to OpenShift Container Platform.

Important

The External DNS Operator is only supported on the x86_64 architecture.

These release notes track the development of the External DNS Operator in OpenShift Container Platform.

6.4.1.1. External DNS Operator 1.3.0

The following advisory is available for the External DNS Operator version 1.3.0:

RHEA-2024:8550 Product Enhancement Advisory

This update includes a rebase to the 0.14.2 version of the upstream project.

6.4.1.1.1. Bug fixes

  • Previously, the ExternalDNS Operator could not deploy operands on hosted control plane (HCP) clusters. With this release, the Operator deploys operands in a running and ready state. (OCPBUGS-37059)
  • Previously, the ExternalDNS Operator was not using RHEL 9 as its building or base images. With this release, RHEL 9 is the base image. (OCPBUGS-41683)
  • Previously, the godoc had a broken link for the Infoblox provider. With this release, the godoc is revised for accuracy: some links are removed and others are replaced with GitHub permalinks. (OCPBUGS-36797)

6.4.1.2. External DNS Operator 1.2.0

The following advisory is available for the External DNS Operator version 1.2.0:

6.4.1.2.1. New features
6.4.1.2.2. Bug fixes
  • The update strategy for the operand changed from Rolling to Recreate. (OCPBUGS-3630)

6.4.1.3. External DNS Operator 1.1.1

The following advisory is available for the External DNS Operator version 1.1.1:

6.4.1.4. External DNS Operator 1.1.0

This release included a rebase of the operand from the upstream project version 0.13.1. The following advisory is available for the External DNS Operator version 1.1.0:

6.4.1.4.1. Bug fixes
  • Previously, the ExternalDNS Operator enforced an empty defaultMode value for volumes, which caused constant updates due to a conflict with the OpenShift API. Now, the defaultMode value is not enforced and operand deployment does not update constantly. (OCPBUGS-2793)

6.4.1.5. External DNS Operator 1.0.1

The following advisory is available for the External DNS Operator version 1.0.1:

6.4.1.6. External DNS Operator 1.0.0

The following advisory is available for the External DNS Operator version 1.0.0:

6.4.1.6.1. Bug fixes
  • Previously, the External DNS Operator issued a warning about the violation of the restricted SCC policy during ExternalDNS operand pod deployments. This issue has been resolved. (BZ#2086408)

6.4.2. External DNS Operator in OpenShift Container Platform

The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform.

6.4.2.1. External DNS Operator

The External DNS Operator implements the External DNS API from the olm.openshift.io API group. The External DNS Operator updates services, routes, and external DNS providers.

Prerequisites

  • You have installed the yq CLI tool.

Procedure

You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object.

  1. Check the name of an install plan by running the following command:

    $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'

    Example output

    install-zcvlr

  2. Check if the status of an install plan is Complete by running the following command:

    $ oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'

    Example output

    Complete

  3. View the status of the external-dns-operator deployment by running the following command:

    $ oc get -n external-dns-operator deployment/external-dns-operator

    Example output

    NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
    external-dns-operator   1/1       1            1           23h

6.4.2.2. Viewing External DNS Operator logs

You can view External DNS Operator logs by using the oc logs command.

Procedure

  1. View the logs of the External DNS Operator by running the following command:

    $ oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator
6.4.2.2.1. External DNS Operator domain name limitations

The External DNS Operator uses the TXT registry, which adds a prefix to TXT records. This reduces the maximum length of the domain name for TXT records. Because a DNS record cannot be present without a corresponding TXT record, the domain name of the DNS record must follow the same limit as the TXT records. For example, a DNS record of <domain_name_from_source> results in a TXT record of external-dns-<record_type>-<domain_name_from_source>. The prefix counts against the 63-character limit on a DNS label: the 19-character external-dns-cname- prefix leaves 44 characters for a CNAME record, and the 15-character external-dns-a- prefix leaves 48 characters for an A record.

The domain name of the DNS records generated by the External DNS Operator has the following limitations:

Record type                          Number of characters

CNAME                                44

Wildcard CNAME records on AzureDNS   42

A                                    48

Wildcard A records on AzureDNS       46

The following error appears in the External DNS Operator logs if the generated domain name exceeds any of the domain name limitations:

time="2022-09-02T08:53:57Z" level=error msg="Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]"
time="2022-09-02T08:53:57Z" level=error msg="InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\n\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6"

6.4.3. Installing External DNS Operator on cloud providers

You can install the External DNS Operator on cloud providers such as AWS, Azure, and GCP.

6.4.3.1. Installing the External DNS Operator with OperatorHub

You can install the External DNS Operator by using the OpenShift Container Platform OperatorHub.

Procedure

  1. Click Operators OperatorHub in the OpenShift Container Platform web console.
  2. Click External DNS Operator. You can use the Filter by keyword text box or the filter list to search for External DNS Operator from the list of Operators.
  3. Select the external-dns-operator namespace.
  4. On the External DNS Operator page, click Install.
  5. On the Install Operator page, ensure that you selected the following options:

    1. Update the channel as stable-v1.
    2. Installation mode as A specific namespace on the cluster.
    3. Installed namespace as external-dns-operator. If namespace external-dns-operator does not exist, it gets created during the Operator installation.
    4. Select Approval Strategy as Automatic or Manual. Approval Strategy is set to Automatic by default.
    5. Click Install.

If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
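
With Manual updates, one way to approve a pending install plan is from the CLI. This is a generic sketch; replace <install_plan_name> with the name of the pending install plan:

$ oc -n external-dns-operator patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'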

Verification

Verify that the External DNS Operator shows the Status as Succeeded on the Installed Operators dashboard.

6.4.3.2. Installing the External DNS Operator by using the CLI

You can install the External DNS Operator by using the CLI.

Prerequisites

  • You are logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
  • You are logged into the OpenShift CLI (oc).

Procedure

  1. Create a Namespace object:

    1. Create a YAML file that defines the Namespace object:

      Example namespace.yaml file

      apiVersion: v1
      kind: Namespace
      metadata:
        name: external-dns-operator

    2. Create the Namespace object by running the following command:

      $ oc apply -f namespace.yaml
  2. Create an OperatorGroup object:

    1. Create a YAML file that defines the OperatorGroup object:

      Example operatorgroup.yaml file

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: external-dns-operator
        namespace: external-dns-operator
      spec:
        upgradeStrategy: Default
        targetNamespaces:
        - external-dns-operator

    2. Create the OperatorGroup object by running the following command:

      $ oc apply -f operatorgroup.yaml
  3. Create a Subscription object:

    1. Create a YAML file that defines the Subscription object:

      Example subscription.yaml file

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: external-dns-operator
        namespace: external-dns-operator
      spec:
        channel: stable-v1
        installPlanApproval: Automatic
        name: external-dns-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

    2. Create the Subscription object by running the following command:

      $ oc apply -f subscription.yaml

Verification

  1. Get the name of the install plan from the subscription by running the following command:

    $ oc -n external-dns-operator \
        get subscription external-dns-operator \
        --template='{{.status.installplan.name}}{{"\n"}}'
  2. Verify that the status of the install plan is Complete by running the following command:

    $ oc -n external-dns-operator \
        get ip <install_plan_name> \
        --template='{{.status.phase}}{{"\n"}}'
  3. Verify that the status of the external-dns-operator pod is Running by running the following command:

    $ oc -n external-dns-operator get pod

    Example output

    NAME                                     READY   STATUS    RESTARTS   AGE
    external-dns-operator-5584585fd7-5lwqm   2/2     Running   0          11m

  4. Verify that the catalog source of the subscription is redhat-operators by running the following command:

    $ oc -n external-dns-operator get subscription

    Example output

    NAME                    PACKAGE                 SOURCE             CHANNEL
    external-dns-operator   external-dns-operator   redhat-operators   stable-v1

  5. Check the external-dns-operator version by running the following command:

    $ oc -n external-dns-operator get csv

    Example output

    NAME                           DISPLAY                VERSION   REPLACES   PHASE
    external-dns-operator.v<1.y.z>   ExternalDNS Operator   <1.y.z>                Succeeded

6.4.4. External DNS Operator configuration parameters

The External DNS Operator includes the following configuration parameters.

6.4.4.1. External DNS Operator configuration parameters

The External DNS Operator includes the following configuration parameters:


spec

Enables the type of a cloud provider.

spec:
  provider:
    type: AWS 1
    aws:
      credentials:
        name: aws-access-key 2
1
Defines available options such as AWS, GCP, Azure, and Infoblox.
2
Defines a secret name for your cloud provider.

zones

Enables you to specify DNS zones by their IDs. If you do not specify zones, the ExternalDNS resource discovers all of the zones present in your cloud provider account.

zones:
- "myzoneid" 1
1
Specifies the IDs of the DNS zones.

domains

Enables you to specify DNS zones by their domains. If you do not specify domains, the ExternalDNS resource discovers all of the zones present in your cloud provider account.

domains:
- filterType: Include 1
  matchType: Exact 2
  name: "myzonedomain1.com" 3
- filterType: Include
  matchType: Pattern 4
  pattern: ".*\\.otherzonedomain\\.com" 5
1
Ensures that the ExternalDNS resource includes the domain name.
2
Instructs ExternalDNS that the domain matching must be exact, as opposed to a regular expression match.
3
Defines the name of the domain.
4
Sets the regex-domain-filter flag in the ExternalDNS resource. You can limit possible domains by using a Regex filter.
5
Defines the regex pattern to be used by the ExternalDNS resource to filter the domains of the target zones.

source

Enables you to specify the source for the DNS records, Service or Route.

source: 1
  type: Service 2
  service:
    serviceType: 3
      - LoadBalancer
      - ClusterIP
  labelFilter: 4
    matchLabels:
      external-dns.mydomain.org/publish: "yes"
  hostnameAnnotation: "Allow" 5
  fqdnTemplate:
  - "{{.Name}}.myzonedomain.com" 6
1
Defines the settings for the source of DNS records.
2
The ExternalDNS resource uses the Service type as the source for creating DNS records.
3
Sets the service-type-filter flag in the ExternalDNS resource. The serviceType field defaults to LoadBalancer and accepts the following values:
  • ClusterIP
  • NodePort
  • LoadBalancer
  • ExternalName
4
Ensures that the controller considers only the resources that match the label filter.
5
The default value for hostnameAnnotation is Ignore, which instructs ExternalDNS to generate DNS records by using the templates specified in the fqdnTemplate field. When the value is Allow, the DNS records are generated based on the value specified in the external-dns.alpha.kubernetes.io/hostname annotation.
6
The External DNS Operator uses a string to generate DNS names from sources that do not define a hostname, or to add a hostname suffix when paired with the fake source.
source:
  type: OpenShiftRoute 1
  openshiftRouteOptions:
    routerName: default 2
    labelFilter:
      matchLabels:
        external-dns.mydomain.org/publish: "yes"
1
Uses OpenShift route resources as the source for creating DNS records.
2
If the source type is OpenShiftRoute, then you can pass the Ingress Controller name. The ExternalDNS resource uses the canonical name of the Ingress Controller as the target for CNAME records.

6.4.5. Creating DNS records on AWS

You can create DNS records on AWS and AWS GovCloud by using External DNS Operator.

6.4.5.1. Creating DNS records on a public hosted zone for AWS by using Red Hat External DNS Operator

You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud.

Procedure

  1. Check the user. The user must have access to the kube-system namespace. If you do not have the credentials, you can fetch the credentials from the kube-system namespace to use the cloud provider client:

    $ oc whoami

    Example output

    system:admin

  2. Fetch the values from the aws-creds secret in the kube-system namespace.

    $ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system  --template={{.data.aws_access_key_id}} | base64 -d)
    $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system  --template={{.data.aws_secret_access_key}} | base64 -d)
  3. Get the routes to check the domain:

    $ oc get routes --all-namespaces | grep console

    Example output

    openshift-console          console             console-openshift-console.apps.testextdnsoperator.apacshift.support                       console             https   reencrypt/Redirect     None
    openshift-console          downloads           downloads-openshift-console.apps.testextdnsoperator.apacshift.support                     downloads           http    edge/Redirect          None

  4. Get the list of DNS zones to find the one that corresponds to the previously found route’s domain:

    $ aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support

    Example output

    HOSTEDZONES	terraform	/hostedzone/Z02355203TNN1XXXX1J6O	testextdnsoperator.apacshift.support.	5

  5. Create ExternalDNS resource for route source:

    $ cat <<EOF | oc create -f -
    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: sample-aws 1
    spec:
      domains:
      - filterType: Include   2
        matchType: Exact   3
        name: testextdnsoperator.apacshift.support 4
      provider:
        type: AWS 5
      source:  6
        type: OpenShiftRoute 7
        openshiftRouteOptions:
          routerName: default 8
    EOF
    1
    Defines the name of the ExternalDNS resource.
    2
    By default all hosted zones are selected as potential targets. You can include a hosted zone that you need.
    3
    The matching of the target zone’s domain has to be exact (as opposed to regular expression match).
    4
    Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain.
    5
    Defines the AWS Route53 DNS provider.
    6
    Defines options for the source of DNS records.
    7
    Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider.
    8
    If the source is OpenShiftRoute, then you can pass the OpenShift Ingress Controller name. External DNS Operator selects the canonical hostname of that router as the target while creating CNAME record.
  6. Check the records created for OCP routes using the following command:

    $ aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console

6.4.5.2. Creating DNS records in a different AWS Account using a shared VPC

You can use the ExternalDNS Operator to create DNS records in a different AWS account using a shared Virtual Private Cloud (VPC). By using a shared VPC, an organization can connect resources from multiple projects to a common VPC network. Organizations can then use VPC sharing to use a single Route 53 instance across multiple AWS accounts.

Prerequisites

  • You have created two Amazon AWS accounts: one with a VPC and a Route 53 private hosted zone configured (Account A), and another for installing a cluster (Account B).
  • You have created an IAM Policy and IAM Role with the appropriate permissions in Account A for Account B to create DNS records in the Route 53 hosted zone of Account A. A sketch of such a policy follows this list.
  • You have installed a cluster in Account B into the existing VPC for Account A.
  • You have installed the ExternalDNS Operator in the cluster in Account B.
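
The exact permissions depend on your environment. The following is a minimal sketch of an IAM policy that Account A could attach to the role assumed by Account B; the action list is illustrative, and the wildcard Resource is an assumption that you would normally narrow to the ARN of the hosted zone:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:ListHostedZones"
      ],
      "Resource": "*"
    }
  ]
}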

Procedure

  1. Get the Role ARN of the IAM Role that you created to allow Account B to access Account A’s Route 53 hosted zone by running the following command:

    $ aws --profile account-a iam get-role --role-name user-rol1 | head -1

    Example output

    ROLE	arn:aws:iam::1234567890123:role/user-rol1	2023-09-14T17:21:54+00:00	3600	/	AROA3SGB2ZRKRT5NISNJN	user-rol1

  2. Locate the private hosted zone to use with Account A’s credentials by running the following command:

    $ aws --profile account-a route53 list-hosted-zones | grep testextdnsoperator.apacshift.support

    Example output

    HOSTEDZONES	terraform	/hostedzone/Z02355203TNN1XXXX1J6O	testextdnsoperator.apacshift.support. 5

  3. Create the ExternalDNS object by running the following command:

    $ cat <<EOF | oc create -f -
    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: sample-aws
    spec:
      domains:
      - filterType: Include
        matchType: Exact
        name: testextdnsoperator.apacshift.support
      provider:
        type: AWS
        aws:
          assumeRole:
            arn: arn:aws:iam::12345678901234:role/user-rol1 1
      source:
        type: OpenShiftRoute
        openshiftRouteOptions:
          routerName: default
    EOF
    1
    Specify the Role ARN to have DNS records created in Account A.
  4. Check the records created for OpenShift Container Platform (OCP) routes by using the following command:

    $ aws --profile account-a route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console-openshift-console

6.4.6. Creating DNS records on Azure

You can create DNS records on Azure by using the External DNS Operator.

Important

Using the External DNS Operator on a Microsoft Entra Workload ID-enabled cluster or a cluster that runs in Microsoft Azure Government (MAG) regions is not supported.

6.4.6.1. Creating DNS records on an Azure public DNS zone

You can create DNS records on a public DNS zone for Azure by using the External DNS Operator.

Prerequisites

  • You must have administrator privileges.
  • The admin user must have access to the kube-system namespace.

Procedure

  1. Fetch the credentials from the kube-system namespace to use the cloud provider client by running the following command:

    $ CLIENT_ID=$(oc get secrets azure-credentials  -n kube-system  --template={{.data.azure_client_id}} | base64 -d)
    $ CLIENT_SECRET=$(oc get secrets azure-credentials  -n kube-system  --template={{.data.azure_client_secret}} | base64 -d)
    $ RESOURCE_GROUP=$(oc get secrets azure-credentials  -n kube-system  --template={{.data.azure_resourcegroup}} | base64 -d)
    $ SUBSCRIPTION_ID=$(oc get secrets azure-credentials  -n kube-system  --template={{.data.azure_subscription_id}} | base64 -d)
    $ TENANT_ID=$(oc get secrets azure-credentials  -n kube-system  --template={{.data.azure_tenant_id}} | base64 -d)
  2. Log in to Azure by running the following command:

    $ az login --service-principal -u "${CLIENT_ID}" -p "${CLIENT_SECRET}" --tenant "${TENANT_ID}"
  3. Get a list of routes by running the following command:

    $ oc get routes --all-namespaces | grep console

    Example output

    openshift-console          console             console-openshift-console.apps.test.azure.example.com                       console             https   reencrypt/Redirect     None
    openshift-console          downloads           downloads-openshift-console.apps.test.azure.example.com                     downloads           http    edge/Redirect          None

  4. Get a list of DNS zones by running the following command:

    $ az network dns zone list --resource-group "${RESOURCE_GROUP}"
  5. Create a YAML file, for example, external-dns-sample-azure.yaml, that defines the ExternalDNS object:

    Example external-dns-sample-azure.yaml file

    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: sample-azure 1
    spec:
      zones:
      - "/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com" 2
      provider:
        type: Azure 3
      source:
        openshiftRouteOptions: 4
          routerName: default 5
        type: OpenShiftRoute 6

    1
    Specifies the External DNS name.
    2
    Defines the zone ID.
    3
    Defines the provider type.
    4
    You can define options for the source of DNS records.
    5
    If the source type is OpenShiftRoute, you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record.
    6
    Defines the route resource as the source for the Azure DNS records.
  6. Check the DNS records created for OpenShift Container Platform routes by running the following command:

    $ az network dns record-set list -g "${RESOURCE_GROUP}"  -z test.azure.example.com | grep console
    Note

    To create records in private hosted zones on private Azure DNS, you must specify the private zone in the zones field, which sets the provider type to azure-private-dns in the ExternalDNS container arguments.
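
    For example, a private zone is referenced by its full Azure resource ID. The subscription ID, resource group, and zone name in this sketch are placeholders:

    zones:
    - "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Network/privateDnsZones/test.azure.example.com"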

6.4.7. Creating DNS records on GCP

You can create DNS records on Google Cloud Platform (GCP) by using the External DNS Operator.

Important

Using the External DNS Operator on a cluster with GCP Workload Identity enabled is not supported. For more information about the GCP Workload Identity, see GCP Workload Identity.

6.4.7.1. Creating DNS records on a public managed zone for GCP

You can create DNS records on a public managed zone for GCP by using the External DNS Operator.

Prerequisites

  • You must have administrator privileges.

Procedure

  1. Copy the gcp-credentials secret into the decoded-gcloud.json file by running the following command:

    $ oc get secret gcp-credentials -n kube-system --template='{{$v := index .data "service_account.json"}}{{$v}}' | base64 -d - > decoded-gcloud.json
  2. Export your Google credentials by running the following command:

    $ export GOOGLE_CREDENTIALS=decoded-gcloud.json
  3. Activate your account by using the following command:

    $ gcloud auth activate-service-account  <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json
  4. Set your project by running the following command:

    $ gcloud config set project <project_id as per decoded-gcloud.json>
  5. Get a list of routes by running the following command:

    $ oc get routes --all-namespaces | grep console

    Example output

    openshift-console          console             console-openshift-console.apps.test.gcp.example.com                       console             https   reencrypt/Redirect     None
    openshift-console          downloads           downloads-openshift-console.apps.test.gcp.example.com                     downloads           http    edge/Redirect          None

  6. Get a list of managed zones by running the following command:

    $ gcloud dns managed-zones list | grep test.gcp.example.com

    Example output

    qe-cvs4g-private-zone test.gcp.example.com

  7. Create a YAML file, for example, external-dns-sample-gcp.yaml, that defines the ExternalDNS object:

    Example external-dns-sample-gcp.yaml file

    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: sample-gcp 1
    spec:
      domains:
        - filterType: Include 2
          matchType: Exact 3
          name: test.gcp.example.com 4
      provider:
        type: GCP 5
      source:
        openshiftRouteOptions: 6
          routerName: default 7
        type: OpenShiftRoute 8

    1
    Specifies the External DNS name.
    2
    By default, all hosted zones are selected as potential targets. You can include your hosted zone.
    3
    The domain of the target must match the string defined by the name key.
    4
    Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain.
    5
    Defines the provider type.
    6
    You can define options for the source of DNS records.
    7
    If the source type is OpenShiftRoute, you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record.
    8
    Defines the route resource as the source for GCP DNS records.
  8. Check the DNS records created for OpenShift Container Platform routes by running the following command:

    $ gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console

6.4.8. Creating DNS records on Infoblox

You can create DNS records on Infoblox by using the External DNS Operator.

6.4.8.1. Creating DNS records on a public DNS zone on Infoblox

You can create DNS records on a public DNS zone on Infoblox by using the External DNS Operator.

Prerequisites

  • You have access to the OpenShift CLI (oc).
  • You have access to the Infoblox UI.

Procedure

  1. Create a secret object with Infoblox credentials by running the following command:

    $ oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>
  2. Get a list of routes by running the following command:

    $ oc get routes --all-namespaces | grep console

    Example Output

    openshift-console          console             console-openshift-console.apps.test.example.com                       console             https   reencrypt/Redirect     None
    openshift-console          downloads           downloads-openshift-console.apps.test.example.com                     downloads           http    edge/Redirect          None

  3. Create a YAML file, for example, external-dns-sample-infoblox.yaml, that defines the ExternalDNS object:

    Example external-dns-sample-infoblox.yaml file

    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: sample-infoblox 1
    spec:
      provider:
        type: Infoblox 2
        infoblox:
          credentials:
            name: infoblox-credentials
          gridHost: ${INFOBLOX_GRID_PUBLIC_IP}
          wapiPort: 443
          wapiVersion: "2.3.1"
      domains:
      - filterType: Include
        matchType: Exact
        name: test.example.com
      source:
        type: OpenShiftRoute 3
        openshiftRouteOptions:
          routerName: default 4

    1
    Specifies the External DNS name.
    2
    Defines the provider type.
    3
    You can define options for the source of DNS records.
    4
    If the source type is OpenShiftRoute, you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record.
  4. Create the ExternalDNS resource on Infoblox by running the following command:

    $ oc create -f external-dns-sample-infoblox.yaml
  5. From the Infoblox UI, check the DNS records created for console routes:

    1. Click Data Management DNS Zones.
    2. Select the zone name.

6.4.9. Configuring the cluster-wide proxy on the External DNS Operator

After configuring the cluster-wide proxy, the Operator Lifecycle Manager (OLM) triggers automatic updates to all of the deployed Operators with the new contents of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.
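
To confirm that OLM propagated these variables to the External DNS Operator, you can list the environment of its deployment. This is a generic verification sketch, not a required step:

$ oc -n external-dns-operator set env deploy/external-dns-operator --list | grep -i proxy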

6.4.9.1. Trusting the certificate authority of the cluster-wide proxy

You can configure the External DNS Operator to trust the certificate authority of the cluster-wide proxy.

Procedure

  1. Create the config map to contain the CA bundle in the external-dns-operator namespace by running the following command:

    $ oc -n external-dns-operator create configmap trusted-ca
  2. To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command:

    $ oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true
  3. Update the subscription of the External DNS Operator by running the following command:

    $ oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}]'

Verification

  • After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command:

    $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME

    Example output

    trusted-ca
