Chapter 1. Integrating Service Mesh


1.1. Integrating Service Mesh 2.x with OpenShift Serverless

The OpenShift Serverless Operator uses Kourier as the default ingress for Knative. You can use Service Mesh with OpenShift Serverless whether you enable Kourier or disable it. If you disable Kourier, you can configure additional networking and routing options, such as mTLS that Kourier does not support.

1.1.1. Assumptions and limitations for Service Mesh integration

Note the following key assumptions and limitations when integrating OpenShift Serverless with Service Mesh:

  • All Knative internal components and Knative Services run within Service Mesh with sidecar injection enabled. As a result, the mesh enforces strict mTLS across all components. Clients must use mTLS when sending requests to Knative Services and must present a valid certificate. OpenShift Routing is the only exception.
  • OpenShift Serverless integrates with only one service mesh. You can run multiple meshes in the cluster, but OpenShift Serverless operates in a single mesh.
  • You cannot change the target ServiceMeshMemberRoll for OpenShift Serverless. To use a different service mesh, uninstall and reinstall OpenShift Serverless.

1.1.2. Prerequisites for Service Mesh 2.x integration

Learn about the requirements that you must meet before integrating Service Mesh 2.x with OpenShift Serverless.

  • You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Serverless Operator.
  • You have installed the Red Hat OpenShift Service Mesh 2.x Operator.
  • The examples in the following procedures use the domain example.com. The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate.

    To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA.

  • You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com, you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com.
  • If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains.
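For example, the wildcard domain can be derived from the console address by stripping the console host prefix. This is a sketch; the console_url value here is the documentation example, not a real cluster:

```shell
# Example console URL from this document (replace with your cluster's console address)
console_url="https://console-openshift-console.apps.openshift.example.com"

# Strip the scheme and the console host prefix, then prepend "*." to get the wildcard domain
wildcard_domain="*.${console_url#https://console-openshift-console.}"
echo "$wildcard_domain"
# → *.apps.openshift.example.com
```

On a live cluster, the apps domain is also available from the cluster ingress configuration, for example with oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'.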
Important

OpenShift Serverless supports only the Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide. Undocumented features are not supported.

Using Serverless 1.31 with Service Mesh is only supported with Service Mesh version 2.2 or later. For details and information about versions other than 1.31, see the "Red Hat OpenShift Serverless Supported Configurations" page.

1.1.3. Creating a certificate to encrypt incoming external traffic

By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have installed the OpenShift CLI (oc).
  • You have access to the knative-serving-ingress namespace, which the OpenShift Serverless Operator creates automatically during installation.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.

Procedure

  1. Create a root certificate and private key that signs the certificates for your Knative services:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
        -subj '/O=Example Inc./CN=example.com' \
        -keyout root.key \
        -out root.crt
  2. Create a wildcard certificate:

    $ openssl req -nodes -newkey rsa:2048 \
        -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
        -keyout wildcard.key \
        -out wildcard.csr
  3. Sign the wildcard certificate:

    $ openssl x509 -req -days 365 -set_serial 0 \
        -CA root.crt \
        -CAkey root.key \
        -in wildcard.csr \
        -out wildcard.crt
  4. Create a secret containing the wildcard certificate by entering one of the following commands, depending on your Service Mesh version:

    • Option A: For Service Mesh 2.x, create the secret in the istio-system namespace by entering the following command:

      $ oc create -n istio-system secret tls wildcard-certs \
          --key=wildcard.key \
          --cert=wildcard.crt
    • Option B: For Service Mesh 3.x, create the secret in the knative-serving-ingress namespace by entering the following command:

      $ oc create -n knative-serving-ingress secret tls wildcard-certs \
          --key=wildcard.key \
          --cert=wildcard.crt

      The namespace for the secret depends on the Service Mesh version: Service Mesh 2.x expects the certificate in the istio-system namespace, while Service Mesh 3.x uses the dedicated knative-serving-ingress namespace where the OpenShift Serverless ingress gateway runs.
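Before enabling the integration, you can optionally confirm that the signing step succeeded. This is a sketch that assumes root.crt and wildcard.crt from the previous steps are in the current directory:

```shell
# Run in the directory that contains root.crt and wildcard.crt from the previous steps.
# Prints "wildcard.crt: OK" when the wildcard certificate chains back to the root CA.
openssl verify -CAfile root.crt wildcard.crt || echo "chain verification failed"
```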

1.1.4. Integrating Service Mesh with OpenShift Serverless

You can integrate Service Mesh 2.x with OpenShift Serverless to enable advanced traffic management, security, and observability for serverless applications. Follow the steps to verify prerequisites, install and configure both components, and verify the integration.

1.1.4.1. Verifying installation prerequisites

Before you install and configure the Service Mesh integration with Serverless, ensure that you meet all prerequisites.

Procedure

  • Check for conflicting gateways by running the following command:

    $ oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t

    You get an output similar to the following example:

    knative-serving/knative-ingress-gateway  [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}]
    knative-serving/knative-local-gateway    [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}]

    This command should not return a Gateway that binds port: 443 and hosts: ["*"], except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance.

    Note

    The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads. That is because additional configuration, such as Gateways, might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway. Red Hat OpenShift Service Mesh only allows one Gateway to claim a wildcard host binding (hosts: ["*"]) on the same port (port: 443). If another Gateway is already binding this configuration, a separate mesh has to be created for Serverless workloads.
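As a quick check on the listing above, you can filter it for conflicting wildcard bindings. The gateways.txt file name is illustrative; redirect the previous command's output into it first:

```shell
# Assuming the Gateway listing from the previous command was saved to gateways.txt,
# print any Gateway outside knative-serving that binds hosts ["*"] on port 443.
# An empty result means there is no conflicting Gateway.
grep '"number":443' gateways.txt | grep '"hosts":\["\*"\]' | grep -v '^knative-serving/' || true
```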

1.1.4.2. Installing and configuring Service Mesh

To integrate Serverless with Service Mesh, you need to install Service Mesh with a specific configuration.

Procedure

  1. Create a ServiceMeshControlPlane resource in the istio-system namespace with the following configuration:

    Important

    If you have an existing ServiceMeshControlPlane object, make sure that you have the same configuration applied.

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      profiles:
      - default
      security:
        dataPlane:
          mtls: true
      techPreview:
        meshConfig:
          defaultConfig:
            terminationDrainDuration: 35s
      gateways:
        ingress:
          service:
            metadata:
              labels:
                knative: ingressgateway
      proxy:
        networking:
          trafficControl:
            inbound:
              excludedPorts:
              - 8444 # metrics
              - 8022 # serving: wait-for-drain k8s pre-stop hook
    mtls: true
    Enforce strict mTLS in the mesh. Only calls using a valid client certificate are allowed.
    terminationDrainDuration
    Serverless uses a graceful termination period of 30 seconds for Knative Services. The istio-proxy container needs a longer termination drain duration to make sure that no requests are dropped.
    knative: ingressgateway
    Define a specific selector for the ingress gateway to target only the Knative gateway.
    excludedPorts
    These ports are called by Kubernetes and cluster monitoring, which are not part of the mesh and cannot be called using mTLS. Therefore, these ports are excluded from the mesh.
  2. Add the namespaces that you want to integrate with Service Mesh to the ServiceMeshMemberRoll object as members:

    Example ServiceMeshMemberRoll object:

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
        - knative-serving
        - knative-eventing
        - your-OpenShift-projects
    spec.members

    A list of namespaces to be integrated with Service Mesh.

    Important

    This list of namespaces must include the knative-serving and knative-eventing namespaces.

  3. Apply the ServiceMeshMemberRoll resource by running the following command:

    $ oc apply -f servicemesh-member-roll.yaml
  4. Create the necessary gateways so that Service Mesh can accept traffic. The following example uses the knative-local-gateway object with the ISTIO_MUTUAL mode (mTLS):

    Example gateway configuration:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: knative-ingress-gateway
      namespace: knative-serving
    spec:
      selector:
        knative: ingressgateway
      servers:
        - port:
            number: 443
            name: https
            protocol: HTTPS
          hosts:
            - "*"
          tls:
            mode: SIMPLE
            credentialName: <wildcard_certs>
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
     name: knative-local-gateway
     namespace: knative-serving
    spec:
     selector:
       knative: ingressgateway
     servers:
       - port:
           number: 8081
           name: https
           protocol: HTTPS
         tls:
           mode: ISTIO_MUTUAL
         hosts:
           - "*"
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: knative-local-gateway
     namespace: istio-system
     labels:
       experimental.istio.io/disable-gateway-port-translation: "true"
    spec:
     type: ClusterIP
     selector:
       istio: ingressgateway
     ports:
       - name: http2
         port: 80
         targetPort: 8081
    credentialName: <wildcard_certs>
    Name of the secret containing the wildcard certificate.
    protocol: HTTPS and mode: ISTIO_MUTUAL
    The knative-local-gateway object serves HTTPS traffic and expects all clients to send requests by using mTLS. This means that only traffic coming from within Service Mesh is possible. Workloads from outside the Service Mesh must use the external domain through OpenShift Routing.
  5. Apply the Gateway resources by running the following command:

    $ oc apply -f istio-knative-gateways.yaml
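After applying the resources, you can optionally watch the control plane become ready. The following is a sketch that assumes a live cluster; smcp is the short name for the ServiceMeshControlPlane resource, and basic matches the example name above:

```shell
# Wait for the ServiceMeshControlPlane named "basic" to report the Ready condition
oc wait --for=condition=Ready smcp/basic -n istio-system --timeout=300s

# Show the overall control plane status
oc get smcp basic -n istio-system
```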

1.1.4.3. Installing and configuring Serverless

After installing Service Mesh, you need to install Serverless with a specific configuration.

Procedure

  1. Install Knative Serving with the following KnativeServing custom resource, which enables the Istio integration:

    Example KnativeServing custom resource:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
      annotations:
        serverless.openshift.io/disable-istio-net-policies-generation: "true"
    spec:
      ingress:
        istio:
          enabled: true
      deployments:
      - name: activator
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: autoscaler
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      config:
        istio:
          gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your_istio_namespace>.svc.cluster.local
          local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your_istio_namespace>.svc.cluster.local
    enabled: true
    Enable Istio integration.
    deployments
    Enable sidecar injection for Knative Serving data plane pods.
    config.istio
    If your Istio instance is not running in the istio-system namespace, set these two values to the correct namespace.
  2. Apply the KnativeServing resource by running the following command:

    $ oc apply -f knative-serving-config.yaml
  3. Install Knative Eventing with the following KnativeEventing object, which enables the Istio integration:

    Example KnativeEventing custom resource:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
      annotations:
        serverless.openshift.io/disable-istio-net-policies-generation: "true"
    spec:
      config:
        features:
          istio: enabled
      workloads:
        - name: pingsource-mt-adapter
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: imc-dispatcher
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: mt-broker-ingress
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: mt-broker-filter
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: job-sink
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec.config.features.istio
    Enables the Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service.
    spec.workloads
    Enables sidecar injection for Knative Eventing pods.
  4. Apply the KnativeEventing resource by running the following command:

    $ oc apply -f knative-eventing-config.yaml
  5. Install Knative Kafka with the following KnativeKafka custom resource, which enables the Istio integration:

    Example KnativeKafka custom resource:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      name: knative-kafka
      namespace: knative-eventing
    spec:
      channel:
        enabled: true
        bootstrapServers: <bootstrap_servers>
      source:
        enabled: true
      broker:
        enabled: true
        defaultConfig:
          bootstrapServers: <bootstrap_servers>
          numPartitions: <num_partitions>
          replicationFactor: <replication_factor>
        sink:
          enabled: true
      workloads:
      - name: kafka-controller
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-source-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-sink-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
    bootstrapServers: <bootstrap_servers>
    The Apache Kafka cluster URL, for example my-cluster-kafka-bootstrap.kafka:9092.
    spec.workloads
    Enable sidecar injection for Knative Kafka pods.
  6. Apply the KnativeKafka resource by running the following command:

    $ oc apply -f knative-kafka-config.yaml
  7. Install ServiceEntry to inform Service Mesh of the communication between KnativeKafka components and an Apache Kafka cluster:

    Example ServiceEntry object:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kafka-cluster
      namespace: knative-eventing
    spec:
      hosts:
        - <bootstrap_servers_without_port>
      exportTo:
        - "."
      ports:
        - number: 9092
          name: tcp-plain
          protocol: TCP
        - number: 9093
          name: tcp-tls
          protocol: TCP
        - number: 9094
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9095
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9096
          name: tcp-tls
          protocol: TCP
      location: MESH_EXTERNAL
      resolution: NONE
    spec.hosts
    The list of Apache Kafka cluster hosts, for example my-cluster-kafka-bootstrap.kafka.
    spec.ports

    Apache Kafka cluster listener ports.

    Note

    The ports listed in spec.ports are example TCP (Transmission Control Protocol) ports. The actual values depend on the Apache Kafka cluster configuration.

  8. Apply the ServiceEntry resource by running the following command:

    $ oc apply -f kafka-cluster-serviceentry.yaml

1.1.4.4. Verifying the integration

After installing Service Mesh and Serverless with Istio enabled, you can verify that the integration works.

Procedure

  1. Create a Knative Service that has sidecar injection enabled and uses a pass-through route:

    Example Knative Service:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: <service_name>
      namespace: <namespace>
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true"
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
          - image: <image_url>
    metadata.namespace
    A namespace that is part of the service mesh member roll.
    serving.knative.openshift.io/enablePassthrough: "true"
    Instruct Knative Serving to generate a pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly.
    sidecar.istio.io/inject: "true"

    Inject Service Mesh sidecars into the Knative service pods.

    Important

    Always add the annotations from this example to all of your Knative Services to make them work with Service Mesh.

  2. Apply the Service resource by running the following command:

    $ oc apply -f knative-service.yaml
  3. Access your serverless application by using a secure connection that is now trusted by the CA:

    $ curl --cacert root.crt <service_url>

    Example command:

    $ curl --cacert root.crt https://hello-default.apps.openshift.example.com

    You get an output similar to the following example:

    Hello Openshift!

1.1.5. Enabling Knative Serving and Knative Eventing metrics when using Service Mesh with mTLS

If you enable Service Mesh with Mutual Transport Layer Security (mTLS), Service Mesh prevents Prometheus from scraping metrics. As a result, Knative Serving and Knative Eventing metrics are unavailable by default. You can enable these metrics when you use Service Mesh with mTLS.

Prerequisites

  • You have one of the following permissions to access the cluster:

    • Cluster administrator permissions on OpenShift Container Platform
    • Cluster administrator permissions on Red Hat OpenShift Service on AWS
    • Dedicated administrator permissions on OpenShift Dedicated
  • You have installed the OpenShift CLI (oc).
  • You have access to a project with the appropriate roles and permissions to create applications and other workloads.
  • You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
  • You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.

Procedure

  1. Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR):

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        observability:
          metrics.backend-destination: "prometheus"
    ...

    This step prevents metrics from being disabled by default.

    Note

    When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must use the serverless.openshift.io/disable-istio-net-policies-generation annotation on the KnativeEventing CR to ensure proper event delivery.

    The same mechanism is used for Knative Eventing. To enable metrics for Knative Eventing, you need to specify prometheus as the metrics.backend-destination in the observability spec of the Knative Eventing custom resource (CR) as follows:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config:
        observability:
          metrics.backend-destination: "prometheus"
    ...
  2. Change and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec:

    ...
    spec:
      proxy:
        networking:
          trafficControl:
            inbound:
              excludedPorts:
              - 8444
    ...

1.1.6. Disabling the default network policies

The OpenShift Serverless Operator generates the network policies by default. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs).

Important

The OpenShift Serverless Operator generates the required network policies by default. However, because support for Service Mesh 3.x is currently in Technology Preview, these default network policies do not yet account for the networking requirements of Service Mesh 3.x. As a result, newly created Knative Services (ksvc) might fail to reach the Ready state when these policies are applied.

To avoid this issue, you must disable the automatic generation of Istio-related network policies by setting the serverless.openshift.io/disable-istio-net-policies-generation annotation to true in both the KnativeServing and KnativeEventing custom resources.

Prerequisites

  • You have one of the following permissions to access the cluster:

    • Cluster administrator permissions on OpenShift Container Platform
    • Cluster administrator permissions on Red Hat OpenShift Service on AWS
    • Dedicated administrator permissions on OpenShift Dedicated
  • You have installed the OpenShift CLI (oc).
  • You have access to a project with the appropriate roles and permissions to create applications and other workloads.
  • You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
  • You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.

Procedure

  • Add the serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources.

    Note

    When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must disable the default network policy generation to ensure proper event delivery. To do this, add the serverless.openshift.io/disable-istio-net-policies-generation annotation to the KnativeEventing and KnativeServing custom resources (CRs).

    1. Annotate the KnativeEventing CR by running the following command:

      $ oc edit KnativeEventing -n knative-eventing

      Add the annotation as shown in the following example:

      apiVersion: operator.knative.dev/v1beta1
      kind: KnativeEventing
      metadata:
        name: knative-eventing
        namespace: knative-eventing
        annotations:
          serverless.openshift.io/disable-istio-net-policies-generation: "true"
    2. Annotate the KnativeServing CR by running the following command:

      $ oc edit KnativeServing -n knative-serving

      Add the annotation as shown in the following example:

      apiVersion: operator.knative.dev/v1beta1
      kind: KnativeServing
      metadata:
        name: knative-serving
        namespace: knative-serving
        annotations:
          serverless.openshift.io/disable-istio-net-policies-generation: "true"
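As an alternative to editing each CR interactively, you can apply the same annotation non-interactively with oc annotate. This is a sketch that assumes a live cluster; the resource and namespace names match the examples above:

```shell
# Annotate the KnativeServing CR (names as in the examples above)
oc annotate knativeserving knative-serving -n knative-serving \
    serverless.openshift.io/disable-istio-net-policies-generation="true"

# Annotate the KnativeEventing CR
oc annotate knativeeventing knative-eventing -n knative-eventing \
    serverless.openshift.io/disable-istio-net-policies-generation="true"
```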

1.1.7. Improving net-istio memory usage by using secret filtering

By default, the informers implementation in the Kubernetes client-go library fetches all resources of a given type. When many resources exist, this behavior increases memory usage and can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaks. The Knative net-istio ingress controller provides a filtering mechanism that lets controllers fetch only Knative-related secrets.

The secret filtering is enabled by default on the OpenShift Serverless Operator side. An environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true, is added by default to the net-istio controller pods.

Important

If you enable secret filtering, you must label all of your secrets with networking.internal.knative.dev/certificate-uid: "<id>". Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets.
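For example, an existing secret can be labeled as follows. This is a sketch that assumes a live cluster; my-tls-secret and my-namespace are illustrative names, and <id> is the certificate UID placeholder from the text:

```shell
# Label an existing secret so the filtered net-istio informer detects it
# (my-tls-secret and my-namespace are illustrative; <id> is the certificate UID)
oc label secret my-tls-secret -n my-namespace \
    networking.internal.knative.dev/certificate-uid="<id>"
```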

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.
  • Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh is only supported for use with Red Hat OpenShift Service Mesh version 2.0.5 or later.
  • Install the OpenShift Serverless Operator and Knative Serving.
  • Install the OpenShift CLI (oc).

Procedure

  • Disable secret filtering by setting the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to false in the workloads field of the KnativeServing custom resource (CR).

    Example KnativeServing custom resource:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
    ...
      workloads:
        - env:
            - container: controller
              envVars:
                - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID
                  value: 'false'
          name: net-istio-controller

1.2. Using Service Mesh to isolate network traffic with OpenShift Serverless

You can use Service Mesh 2.x with OpenShift Serverless to control and isolate network traffic between services. This integration helps you define fine-grained communication policies, enhance security through mutual TLS, and manage traffic flow within your serverless environment.

Learn how to use Service Mesh to isolate network traffic between tenants in OpenShift Serverless.

Important

Using Service Mesh to isolate network traffic with OpenShift Serverless is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Service Mesh isolates network traffic between tenants on a shared Red Hat OpenShift Serverless cluster by using Service Mesh AuthorizationPolicy resources. Serverless supports this approach by using many Service Mesh resources. A tenant is a group of one or more projects that can communicate with each other over the network on a shared cluster.

1.2.2. High-level architecture

The high-level architecture of Serverless traffic isolation provided by Service Mesh consists of AuthorizationPolicy objects in the knative-serving, knative-eventing, and the tenants' namespaces, with all the components being part of the Service Mesh. The injected Service Mesh sidecars enforce those rules to isolate network traffic between tenants.

1.2.3. Securing the Service Mesh

You can use authorization policies and mTLS to secure Service Mesh.

Prerequisites

  • You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
  • You have set up the Service Mesh 2.x and Serverless integration.
  • You have created one or more OpenShift projects for each tenant.

Procedure

  1. Make sure that all Red Hat OpenShift Serverless projects of your tenant are part of the same ServiceMeshMemberRoll object as members:

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
     name: default
     namespace: istio-system
    spec:
     members:
       - knative-serving    # static value, needs to be here, see setup page
       - knative-eventing   # static value, needs to be here, see setup page
       - team-alpha-1       # example OpenShift project that belongs to the team-alpha tenant
       - team-alpha-2       # example OpenShift project that belongs to the team-alpha tenant
       - team-bravo-1       # example OpenShift project that belongs to the team-bravo tenant
       - team-bravo-2       # example OpenShift project that belongs to the team-bravo tenant

    All projects that are part of the mesh must enforce mTLS in strict mode. This forces Istio to only accept connections with a client-certificate present and allows the Service Mesh sidecar to validate the origin by using an AuthorizationPolicy object.
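    Strict mode can be made explicit per tenant namespace with a PeerAuthentication resource. This is a sketch; the namespace is one of the example tenant projects, and note that the ServiceMeshControlPlane example in this guide already enables strict data-plane mTLS mesh-wide:

    ```yaml
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: team-alpha-1   # example tenant project from the member roll above
    spec:
      mtls:
        mode: STRICT            # reject plain-text connections; require client certificates
    ```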

  2. Create AuthorizationPolicy objects in the knative-serving and knative-eventing namespaces similar to the following example:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-all-by-default
      namespace: knative-eventing
    spec: { }
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-all-by-default
      namespace: knative-serving
    spec: { }
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-mt-channel-based-broker-ingress-to-imc-dispatcher
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "imc-dispatcher"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/mt-broker-ingress" ]
          to:
            - operation:
                methods: [ "POST" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-mt-channel-based-broker-ingress-to-kafka-channel
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "kafka-channel-receiver"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/mt-broker-ingress" ]
          to:
            - operation:
                methods: [ "POST" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-kafka-channel-to-mt-channel-based-broker-filter
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "broker-filter"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/knative-kafka-channel-data-plane" ]
          to:
            - operation:
                methods: [ "POST" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-imc-to-mt-channel-based-broker-filter
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "broker-filter"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/imc-dispatcher" ]
          to:
            - operation:
                methods: [ "POST" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-probe-kafka-broker-receiver
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "kafka-broker-receiver"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
          to:
            - operation:
                methods: [ "GET" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-probe-kafka-sink-receiver
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "kafka-sink-receiver"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
          to:
            - operation:
                methods: [ "GET" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-probe-kafka-channel-receiver
      namespace: knative-eventing
    spec:
      action: ALLOW
      selector:
        matchLabels:
          app.kubernetes.io/component: "kafka-channel-receiver"
      rules:
        - from:
            - source:
                namespaces: [ "knative-eventing" ]
                principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
          to:
            - operation:
                methods: [ "GET" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-traffic-to-activator
      namespace: knative-serving
    spec:
      selector:
        matchLabels:
          app: activator
      action: ALLOW
      rules:
        - from:
            - source:
                namespaces: [ "knative-serving", "istio-system" ]
    ---
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: allow-traffic-to-autoscaler
      namespace: knative-serving
    spec:
      selector:
        matchLabels:
          app: autoscaler
      action: ALLOW
      rules:
        - from:
            - source:
                namespaces: [ "knative-serving" ]

    These policies restrict the access rules for the network communication between Serverless system components. Specifically, they enforce the following rules:

    • Deny all traffic that is not explicitly allowed in the knative-serving and knative-eventing namespaces
    • Allow traffic from the istio-system and knative-serving namespaces to activator
    • Allow traffic from the knative-serving namespace to autoscaler
    • Allow health probes for Apache Kafka components in the knative-eventing namespace
    • Allow internal traffic for channel-based brokers in the knative-eventing namespace
  3. Apply the authorization policy configuration by running the following command:

    $ oc apply -f knative-default-authz-policies.yaml
  4. Define which OpenShift projects can communicate with each other. For this communication, every OpenShift project of a tenant requires the following:

    • One AuthorizationPolicy object limiting directly incoming traffic to the tenant’s project
    • One AuthorizationPolicy object limiting incoming traffic using the activator component of Serverless that runs in the knative-serving project
    • One AuthorizationPolicy object allowing Kubernetes to call PreStopHooks on Knative Services

      Instead of creating these policies manually, install the helm utility and create the necessary resources for each tenant:

      $ helm repo add openshift-helm-charts https://charts.openshift.io/
      $ helm template openshift-helm-charts/redhat-knative-istio-authz --version 1.37.0 --set "name=team-alpha" --set "namespaces={team-alpha-1,team-alpha-2}" > team-alpha.yaml
      $ helm template openshift-helm-charts/redhat-knative-istio-authz --version 1.37.0 --set "name=team-bravo" --set "namespaces={team-bravo-1,team-bravo-2}" > team-bravo.yaml
  5. Apply the authorization policy configuration by running the following command:

    $ oc apply -f team-alpha.yaml -f team-bravo.yaml
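
    Step 1 requires all mesh member projects to enforce mTLS in strict mode. In Service Mesh 2.x, one way to enable this mesh-wide is on the control plane; the following sketch assumes a ServiceMeshControlPlane named default in the istio-system namespace and might need adjusting to your installation:

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: default
      namespace: istio-system
    spec:
      security:
        dataPlane:
          mtls: true   # enforce strict mTLS between data plane pods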

1.2.4. Verifying the configuration

You can use the curl command to verify the configuration for network traffic isolation.

Prerequisites

  • You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
  • You have set up the Service Mesh 2.x and Serverless integration.
  • You have created one or more OpenShift projects for each tenant.
Note

The following examples assume two tenants, team-alpha and team-bravo. Their namespaces are all part of the ServiceMeshMemberRoll object and are configured with the resources in the team-alpha.yaml and team-bravo.yaml files.

Procedure

  1. Deploy a Knative Service in the namespace of each tenant.

    Create the service in the team-alpha-1 namespace by running the following command:

    $ kn service create test-webapp -n team-alpha-1 \
        --annotation-service serving.knative.openshift.io/enablePassthrough=true \
        --annotation-revision sidecar.istio.io/inject=true \
        --env RESPONSE="Hello Serverless" \
        --image docker.io/openshift/hello-openshift

    Create the service in the team-bravo-1 namespace by running the following command:

    $ kn service create test-webapp -n team-bravo-1 \
        --annotation-service serving.knative.openshift.io/enablePassthrough=true \
        --annotation-revision sidecar.istio.io/inject=true \
        --env RESPONSE="Hello Serverless" \
        --image docker.io/openshift/hello-openshift

    You can also use the following YAML configuration:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: test-webapp
      namespace: team-alpha-1
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true"
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: 'true'
        spec:
          containers:
            - image: docker.io/openshift/hello-openshift
              env:
                - name: RESPONSE
                  value: "Hello Serverless!"
    ---
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: test-webapp
      namespace: team-bravo-1
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true"
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: 'true'
        spec:
          containers:
            - image: docker.io/openshift/hello-openshift
              env:
                - name: RESPONSE
                  value: "Hello Serverless!"
  2. Deploy a curl pod for testing the connections:

    $ cat <<EOF | oc apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: curl
      namespace: team-alpha-1
      labels:
        app: curl
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: curl
      template:
        metadata:
          labels:
            app: curl
          annotations:
            sidecar.istio.io/inject: 'true'
        spec:
          containers:
          - name: curl
            image: curlimages/curl
            command:
            - sleep
            - "3600"
    EOF
  3. Verify the configuration by using curl commands.

    Test the team-alpha-1 to team-alpha-1 connection through the cluster-local domain, which is allowed, by running the following command:

    $ oc exec deployment/curl -n team-alpha-1 -it -- curl -v http://test-webapp.team-alpha-1:80

    You get an output similar to the following example:

    HTTP/1.1 200 OK
    content-length: 18
    content-type: text/plain; charset=utf-8
    date: Wed, 26 Jul 2023 12:49:59 GMT
    server: envoy
    x-envoy-upstream-service-time: 9
    
    Hello Serverless!

    Test the team-alpha-1 to team-alpha-1 connection through an external domain, which is allowed, by running the following command:

    $ EXTERNAL_URL=$(oc get ksvc -n team-alpha-1 test-webapp -o custom-columns=:.status.url --no-headers) && \
    oc exec deployment/curl -n team-alpha-1 -it -- curl -ik $EXTERNAL_URL

    You get an output similar to the following example:

    HTTP/2 200
    content-length: 18
    content-type: text/plain; charset=utf-8
    date: Wed, 26 Jul 2023 12:55:30 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 3629
    
    Hello Serverless!

    Test the team-alpha-1 to team-bravo-1 connection through the cluster-local domain, which is not allowed, by running the following command:

    $ oc exec deployment/curl -n team-alpha-1 -it -- curl -v http://test-webapp.team-bravo-1:80

    You get an output similar to the following example:

    * processing: http://test-webapp.team-bravo-1:80
    *   Trying 172.30.73.216:80...
    * Connected to test-webapp.team-bravo-1 (172.30.73.216) port 80
    > GET / HTTP/1.1
    > Host: test-webapp.team-bravo-1
    > User-Agent: curl/8.2.0
    > Accept: */*
    >
    < HTTP/1.1 403 Forbidden
    < content-length: 19
    < content-type: text/plain
    < date: Wed, 26 Jul 2023 12:55:49 GMT
    < server: envoy
    < x-envoy-upstream-service-time: 6
    <
    * Connection #0 to host test-webapp.team-bravo-1 left intact
    RBAC: access denied

    Test the team-alpha-1 to team-bravo-1 connection through an external domain, which is allowed, by running the following command:

    $ EXTERNAL_URL=$(oc get ksvc -n team-bravo-1 test-webapp -o custom-columns=:.status.url --no-headers) && \
    oc exec deployment/curl -n team-alpha-1 -it -- curl -ik $EXTERNAL_URL

    You get an output similar to the following example:

    HTTP/2 200
    content-length: 18
    content-type: text/plain; charset=utf-8
    date: Wed, 26 Jul 2023 12:56:22 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 2856
    
    Hello Serverless!
  4. Delete the resources that were created for verification:

    $ oc delete deployment/curl -n team-alpha-1 && \
    oc delete ksvc/test-webapp -n team-alpha-1 && \
    oc delete ksvc/test-webapp -n team-bravo-1
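
When you repeat these checks, you can classify saved curl transcripts mechanically instead of reading them by eye. The following shell sketch is illustrative (the classify_curl helper is not part of any product CLI); it keys off the 403 status line and the "RBAC: access denied" message shown in the outputs above:

```shell
# classify_curl reads a saved `curl -v` transcript on stdin and reports
# whether the mesh allowed or denied the request. It keys off the 403
# status line and the "RBAC: access denied" body that Istio returns.
classify_curl() {
    if grep -Eq 'HTTP/[0-9.]+ 403|RBAC: access denied'; then
        echo "DENIED"
    else
        echo "ALLOWED"
    fi
}

# Example: pipe a captured transcript through the helper
printf '< HTTP/1.1 403 Forbidden\n' | classify_curl   # prints: DENIED
```

For live checks, pipe the curl output directly, for example `oc exec deployment/curl -n team-alpha-1 -- curl -sv http://test-webapp.team-bravo-1:80 2>&1 | classify_curl`.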

1.3. Integrating Service Mesh 3.x with OpenShift Serverless

The OpenShift Serverless Operator uses Kourier as the default ingress for Knative. You can use Service Mesh with OpenShift Serverless whether you enable Kourier or disable it. If you disable Kourier, you can configure additional networking and routing options that Kourier does not support, such as mTLS.

1.3.1. Assumptions and limitations for Service Mesh 3.x integration

Learn about the key assumptions and limitations for integrating OpenShift Serverless with Service Mesh 3.x.

Note the following assumptions and limitations:

  • All Knative internal components and Knative Services run within Service Mesh with sidecar injection enabled. As a result, the mesh enforces strict mutual Transport Layer Security (mTLS). Clients must use mTLS when sending requests to Knative Services and must present a valid certificate. OpenShift Routing is the only exception.
  • OpenShift Serverless integrates with only one service mesh. You can run multiple meshes in the cluster, but OpenShift Serverless operates in a single mesh.

1.3.2. Prerequisites for Service Mesh 3.x integration

Learn about the requirements that you must meet before integrating Service Mesh 3.x with OpenShift Serverless.

  • You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Serverless Operator.
  • You have installed the Red Hat OpenShift Service Mesh 3.x Operator.
  • The examples in the following procedures use the domain example.com. The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate.

    To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA.

  • You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com, you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com.
  • If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up a domain mapping for those domains.

1.3.3. Creating a certificate to encrypt incoming external traffic

By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration.

Prerequisites

  • You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
  • You have installed the OpenShift Serverless Operator and Knative Serving.
  • You have installed the OpenShift CLI (oc).
  • You have access to the knative-serving-ingress namespace, which the OpenShift Serverless Operator creates automatically during installation.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.

Procedure

  1. Create a root certificate and private key that signs the certificates for your Knative services:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
        -subj '/O=Example Inc./CN=example.com' \
        -keyout root.key \
        -out root.crt
  2. Create a wildcard certificate:

    $ openssl req -nodes -newkey rsa:2048 \
        -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
        -keyout wildcard.key \
        -out wildcard.csr
  3. Sign the wildcard certificate:

    $ openssl x509 -req -days 365 -set_serial 0 \
        -CA root.crt \
        -CAkey root.key \
        -in wildcard.csr \
        -out wildcard.crt
  4. Create a secret containing the wildcard certificate by entering one of the following commands, depending on your Service Mesh version:

    • Option A: For Service Mesh 2.x, create the secret in the istio-system namespace by entering the following command:

      $ oc create -n istio-system secret tls wildcard-certs \
          --key=wildcard.key \
          --cert=wildcard.crt
    • Option B: For Service Mesh 3.x, create the secret in the knative-serving-ingress namespace by entering the following command:

      $ oc create -n knative-serving-ingress secret tls wildcard-certs \
          --key=wildcard.key \
          --cert=wildcard.crt

      The namespace used for the secret depends on the version of Service Mesh. Service Mesh 2.x expects the certificate in the istio-system namespace. Service Mesh 3.x uses the dedicated knative-serving-ingress namespace, where the OpenShift Serverless ingress gateway runs.
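
Before enabling the integration, you can confirm locally that the wildcard certificate chains back to the root CA. The following sketch repeats the example openssl steps from the procedure above in a temporary directory and then runs openssl verify; with your own root.crt and wildcard.crt files, only the final command is needed:

```shell
# Recreate the example chain in a scratch directory, then verify it.
workdir=$(mktemp -d)
cd "$workdir"

# Root CA (same command as step 1 of the procedure)
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -subj '/O=Example Inc./CN=example.com' \
    -keyout root.key -out root.crt

# Wildcard CSR and signed certificate (steps 2 and 3)
openssl req -nodes -newkey rsa:2048 \
    -subj '/CN=*.apps.openshift.example.com/O=Example Inc.' \
    -keyout wildcard.key -out wildcard.csr
openssl x509 -req -days 365 -set_serial 0 \
    -CA root.crt -CAkey root.key \
    -in wildcard.csr -out wildcard.crt

# The wildcard certificate must verify against the root CA
openssl verify -CAfile root.crt wildcard.crt
```

A successful run ends with the line `wildcard.crt: OK`; any other result means the secret would contain a broken chain.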

1.3.4. Integrating Service Mesh 3.x with OpenShift Serverless

Integrate Service Mesh 3.x with OpenShift Serverless to enable advanced traffic management, security, and observability for serverless applications. Verify the prerequisites, install and configure both components, and then verify the integration.

1.3.4.1. Verifying installation prerequisites

Before you install and configure the Service Mesh integration with Serverless, ensure that you meet all prerequisites.

Procedure

  • Check for conflicting gateways by running the following command:

    $ oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t

    You get an output similar to the following example:

    knative-serving/knative-ingress-gateway  [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}]
    knative-serving/knative-local-gateway    [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}]

    This command should not return a Gateway that binds port: 443 and hosts: ["*"], except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance.

    Note

    The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads. That is because additional configuration, such as Gateways, might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway. Red Hat OpenShift Service Mesh only allows one Gateway to claim a wildcard host binding (hosts: ["*"]) on the same port (port: 443). If another Gateway is already binding this configuration, a separate mesh has to be created for Serverless workloads.
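
You can also scan the Gateway listing for conflicts mechanically with standard shell tools. The following sketch is illustrative (the find_conflicts helper is not a product command); it prints any Gateway outside knative-serving that claims hosts ["*"] on port 443:

```shell
# find_conflicts reads the `oc get gateway` listing on stdin and prints any
# Gateway outside knative-serving that claims hosts ["*"] on port 443.
# An empty result means no conflicting wildcard Gateway exists.
find_conflicts() {
    grep '"number":443' | grep '"hosts":\["\*"\]' | grep -v '^knative-serving/'
}

# Usage (against a live cluster): pipe the output of the
# `oc get gateway -A -o jsonpath=...` command above into find_conflicts.
```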

1.3.4.2. Installing and configuring Service Mesh 3.x

You can integrate Service Mesh 3.x with Serverless by installing and configuring the required Istio components, gateways, and Knative Serving resources. After these resources are configured, you can deploy the Knative Serving instance with Istio to ensure that your serverless workloads run within the Service Mesh environment.

Procedure

  1. Create a file named istio.yaml containing an Istio resource for the istio-system namespace with the following configuration:

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      values:
        meshConfig:
          defaultConfig:
            terminationDrainDuration: 35s
      updateStrategy:
        inactiveRevisionDeletionGracePeriodSeconds: 30
        type: InPlace
      namespace: istio-system
      version: v1.26-latest
  2. Create a project called istio-system by running the following command:

    $ oc new-project istio-system
  3. Apply the Istio custom resource (CR) by running the following command:

    $ oc apply -f istio.yaml
  4. Create a file named istio-cni.yaml containing an IstioCNI resource for the istio-cni namespace with the following configuration:

    apiVersion: sailoperator.io/v1
    kind: IstioCNI
    metadata:
      name: default
    spec:
      namespace: istio-cni
      version: v1.26-latest
  5. Create a project called istio-cni by running the following command:

    $ oc new-project istio-cni
  6. Apply the IstioCNI CR by running the following command:

    $ oc apply -f istio-cni.yaml
  7. Create a file named gateway-deploy.yaml with the following configuration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: knative-istio-ingressgateway
      namespace: knative-serving-ingress
    spec:
      selector:
        matchLabels:
          knative: ingressgateway
      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            knative: ingressgateway
            sidecar.istio.io/inject: "true"
        spec:
          containers:
            - name: istio-proxy
              image: auto
    
    ---
    # Set up roles to allow reading credentials for TLS
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: istio-ingressgateway-sds
      namespace: knative-serving-ingress
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: istio-ingressgateway-sds
      namespace: knative-serving-ingress
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: istio-ingressgateway-sds
    subjects:
      - kind: ServiceAccount
        name: default
    • spec.template.metadata.annotations.inject.istio.io/templates specifies the gateway injection template rather than the default sidecar template.
    • spec.template.metadata.labels.knative defines a unique label for the gateway. This is required to ensure Gateways can select this workload.
    • spec.template.metadata.labels.sidecar.istio.io/inject enables gateway injection.
    • spec.template.spec.containers.image ensures that the image automatically updates each time the pod starts.
  8. Apply the resource by running the following command:

    $ oc apply -f gateway-deploy.yaml
  9. Create gateway resources for the Knative Serving component by creating a file named serving-gateways.yaml with the following configuration:

    ###########################################################
    # cluster external
    ###########################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: knative-istio-ingressgateway
      namespace: knative-serving-ingress
    spec:
      type: ClusterIP
      selector:
        knative: ingressgateway
      ports:
        - name: http2
          port: 80
          targetPort: 8080
        - name: https
          port: 443
          targetPort: 8443
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: knative-ingress-gateway
      namespace: knative-serving
    spec:
      selector:
        knative: ingressgateway
      servers:
        - hosts:
            - '*'
          port:
            name: https
            number: 443
            protocol: HTTPS
          tls:
            credentialName: wildcard-certs
            mode: SIMPLE
    ---
    ###########################################################
    # cluster local
    ###########################################################
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        experimental.istio.io/disable-gateway-port-translation: "true"
      name: knative-local-gateway
      namespace: knative-serving-ingress
    spec:
      ports:
        - name: http2
          port: 80
          protocol: TCP
          targetPort: 8081
      selector:
        knative: ingressgateway
      type: ClusterIP
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: knative-local-gateway
      namespace: knative-serving
    spec:
      selector:
        knative: ingressgateway
      servers:
        - hosts:
            - '*'
          port:
            name: http
            number: 8081
            protocol: HTTP
  10. Apply the resource by running the following command:

    $ oc apply -f serving-gateways.yaml
  11. Create a file named peerauth.yaml containing a PeerAuthentication resource in the istio-system namespace to enforce mTLS across the mesh:

    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: mesh-mtls
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT
  12. Apply the resource by running the following command:

    $ oc apply -f peerauth.yaml

1.3.4.3. Installing and configuring Serverless

After installing Service Mesh, you need to install Serverless with a specific configuration.

Procedure

  1. Create a file named knative-serving-config.yaml containing the following KnativeServing custom resource, which enables the Istio integration:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
      annotations:
        serverless.openshift.io/disable-istio-net-policies-generation: "true"
    spec:
      ingress:
        istio:
          enabled: true
      deployments:
      - name: activator
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: autoscaler
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      config:
        istio:
          gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your_istio_namespace>.svc.cluster.local
          local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your_istio_namespace>.svc.cluster.local
    • spec.ingress.istio.enabled enables the Istio integration.
    • spec.deployments enables sidecar injection for the Knative Serving data plane pods.
    • spec.config.istio configures the gateway addresses. If Istio is not running in the istio-system namespace, set these two values to the correct namespace.
  2. Apply the KnativeServing resource by running the following command:

    $ oc apply -f knative-serving-config.yaml
  3. Create a file named knative-eventing-config.yaml containing the following KnativeEventing object, which enables the Istio integration:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
      annotations:
        serverless.openshift.io/disable-istio-net-policies-generation: "true"
    spec:
      config:
        features:
          istio: enabled
      workloads:
        - name: pingsource-mt-adapter
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: imc-dispatcher
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: mt-broker-ingress
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: mt-broker-filter
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    
        - name: job-sink
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
    • spec.config.features.istio enables the Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service.
    • spec.workloads enables sidecar injection for Knative Eventing pods.
  4. Apply the KnativeEventing resource by running the following command:

    $ oc apply -f knative-eventing-config.yaml
  5. Create a file named knative-kafka-config.yaml containing the following KnativeKafka custom resource, which enables the Istio integration:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      name: knative-kafka
      namespace: knative-eventing
    spec:
      channel:
        enabled: true
        bootstrapServers: <bootstrap_servers>
      source:
        enabled: true
      broker:
        enabled: true
        defaultConfig:
          bootstrapServers: <bootstrap_servers>
          numPartitions: <num_partitions>
          replicationFactor: <replication_factor>
        sink:
          enabled: true
      workloads:
      - name: kafka-controller
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-broker-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-channel-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-source-dispatcher
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
      - name: kafka-sink-receiver
        labels:
          "sidecar.istio.io/inject": "true"
        annotations:
          "sidecar.istio.io/rewriteAppHTTPProbers": "true"
    • bootstrapServers is the Apache Kafka cluster URL, for example my-cluster-kafka-bootstrap.kafka:9092.
    • spec.workloads enables sidecar injection for Knative Kafka pods.
  6. Apply the KnativeKafka resource by running the following command:

    $ oc apply -f knative-kafka-config.yaml
  7. Create a file named kafka-cluster-serviceentry.yaml containing the following ServiceEntry to inform Service Mesh of the communication between Knative Kafka components and the Apache Kafka cluster:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: kafka-cluster
      namespace: knative-eventing
    spec:
      hosts:
        - <bootstrap_servers_without_port>
      exportTo:
        - "."
      ports:
        - number: 9092
          name: tcp-plain
          protocol: TCP
        - number: 9093
          name: tcp-tls
          protocol: TCP
        - number: 9094
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9095
          name: tcp-sasl-tls
          protocol: TCP
        - number: 9096
          name: tcp-tls
          protocol: TCP
      location: MESH_EXTERNAL
      resolution: NONE
    spec.hosts
    The list of Apache Kafka cluster hosts, for example my-cluster-kafka-bootstrap.kafka.
    spec.ports

    The Apache Kafka cluster listener ports.

    Note

    The ports listed in spec.ports are example TCP (Transmission Control Protocol) ports. The actual values depend on the Apache Kafka cluster configuration.

  8. Apply the ServiceEntry resource by running the following command:

    $ oc apply -f kafka-cluster-serviceentry.yaml
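The `<bootstrap_servers_without_port>` placeholder in `spec.hosts` is the bootstrap server address with its port removed. Given the example value used earlier in this procedure, the host portion can be split off in a shell, as in this local sketch (no cluster required; substitute your own value):

```shell
# Example bootstrap server value from this procedure; substitute your own.
bootstrap_servers="my-cluster-kafka-bootstrap.kafka:9092"

# Remove the trailing :<port> to obtain the value for spec.hosts.
bootstrap_servers_without_port="${bootstrap_servers%:*}"
echo "$bootstrap_servers_without_port"
```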

1.3.4.4. Verifying the integration setup for Service Mesh 3.x

After installing and configuring Service Mesh 3.x with Serverless, you can verify that the integration is working correctly. This verification ensures that the Service Mesh components, gateways, and Knative Serving configuration are properly set up and that serverless workloads can communicate securely within the mesh.

The following test deploys a simple Knative service and verifies sidecar injection, mutual Transport Layer Security (mTLS) compatibility, and TLS passthrough at the ingress gateway.

Procedure

  1. Verify that the Istio component is running by running the following command:

    $ oc get pods -n istio-system
  2. Verify that the Istio Container Network Interface (CNI) component is running by running the following command:

    $ oc get pods -n istio-cni
  3. Verify that the Knative component is running by running the following command:

    $ oc get pods -n knative-serving
  4. Verify gateway services exist by running the following command:

    $ oc get svc -n knative-serving-ingress
  5. Create a test namespace by running the following command:

    $ oc new-project demo
  6. Create the sample Knative service manifest and save it as hello-service.yaml with the following configuration:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      annotations:
        serving.knative.openshift.io/enablePassthrough: "true"
      name: hello-service
      namespace: demo
    spec:
      template:
        metadata:
          labels:
            sidecar.istio.io/inject: "true"
          annotations:
            sidecar.istio.io/rewriteAppHTTPProbers: "true"
        spec:
          containers:
            - image: quay.io/openshift-knative/showcase
    • serving.knative.openshift.io/enablePassthrough: "true" configures the ingress to allow TLS passthrough via the Istio gateway.
    • sidecar.istio.io/inject: "true" ensures the Istio proxy is injected.
    • sidecar.istio.io/rewriteAppHTTPProbers: "true" makes Knative health probes work with mTLS.
  7. Apply the Knative service by running the following command:

    $ oc apply -f hello-service.yaml
  8. Confirm sidecar injection and pod readiness by running the following commands:

    $ oc get pods -n demo
    $ oc get pod -n demo -l serving.knative.dev/service=hello-service -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'
  9. Retrieve the service URL by running the following command:

    $ oc get ksvc hello-service -n demo -o jsonpath='{.status.url}{"\n"}'
  10. Call the service by entering one of the following commands:

    • Option A: If you have a trusted certificate set on the ingress domain, enter the following command:

      $ curl https://$(oc get ksvc hello-service -n demo -o jsonpath='{.status.url}' | sed 's#https://##')
    • Option B: If you are using a custom or self-signed certificate, provide your CA file with --cacert <path>, or use -k to skip certificate verification, by entering the following command:

      $ curl --cacert <path_to_your_CA_file> https://$(oc get ksvc hello-service -n demo -o jsonpath='{.status.url}' | sed 's#https://##')

      You should see an output similar to the following example:

      {"artifact":"knative-showcase","greeting":"Welcome"}

      The exact JSON values might vary, but the response should indicate that the knative-showcase application is running successfully.
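The checks in steps 8 through 10 can be sketched locally with sample values standing in for the live oc output (all values here are hypothetical; your cluster's output will differ):

```shell
# Step 8: with injection working, the container list printed by the jsonpath
# query includes the istio-proxy sidecar next to the user and queue-proxy
# containers. Sample value standing in for the live output:
containers="user-container queue-proxy istio-proxy"
echo "$containers" | grep -qw "istio-proxy" && echo "sidecar injected"

# Steps 9-10: the sed expression strips the https:// scheme from the service
# URL before the host is passed to curl. Hypothetical URL:
url="https://hello-service-demo.apps.example.com"
host="$(echo "$url" | sed 's#https://##')"
echo "$host"
```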

1.3.5. Disabling the default network policies

The OpenShift Serverless Operator generates the network policies by default. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs).

Important

The OpenShift Serverless Operator generates the required network policies by default. However, because support for Service Mesh 3.x is currently in Technology Preview, these default network policies do not yet account for the networking requirements of Service Mesh 3.x. As a result, newly created Knative Services (ksvc) might fail to reach the Ready state when these policies are applied.

To avoid this issue, you must disable the automatic generation of Istio-related network policies by setting the serverless.openshift.io/disable-istio-net-policies-generation annotation to true in both the KnativeServing and KnativeEventing custom resources.

Prerequisites

  • You have one of the following permissions to access the cluster:

    • Cluster administrator permissions on OpenShift Container Platform
    • Cluster administrator permissions on Red Hat OpenShift Service on AWS
    • Dedicated administrator permissions on OpenShift Dedicated
  • You have installed the OpenShift CLI (oc).
  • You have access to a project with the appropriate roles and permissions to create applications and other workloads.
  • You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
  • You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.

Procedure

  • Add the serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources.

    Note

    The OpenShift Serverless Operator generates the required network policies by default. When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must disable the default network policy generation to ensure proper event delivery. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs).

    1. Annotate the KnativeEventing CR by running the following command:

      $ oc edit KnativeEventing -n knative-eventing

      Add the annotation to the metadata section, as shown in the following example:

      apiVersion: operator.knative.dev/v1beta1
      kind: KnativeEventing
      metadata:
        name: knative-eventing
        namespace: knative-eventing
        annotations:
          serverless.openshift.io/disable-istio-net-policies-generation: "true"
    2. Annotate the KnativeServing CR by running the following command:

      $ oc edit KnativeServing -n knative-serving

      Add the annotation to the metadata section, as shown in the following example:

      apiVersion: operator.knative.dev/v1beta1
      kind: KnativeServing
      metadata:
        name: knative-serving
        namespace: knative-serving
        annotations:
          serverless.openshift.io/disable-istio-net-policies-generation: "true"
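If you manage the Knative custom resources as YAML manifests rather than editing them in place, you can add the annotation to the manifest and apply it with oc apply -f. The following local sketch (no cluster required; the file path is hypothetical) checks that a saved manifest carries the annotation before you apply it:

```shell
# Write an annotated KnativeServing manifest to a hypothetical path.
manifest=/tmp/knative-serving-annotated.yaml
cat > "$manifest" <<'EOF'
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"
EOF

# Confirm the annotation is present before running: oc apply -f "$manifest"
grep -q 'disable-istio-net-policies-generation: "true"' "$manifest" \
  && echo "annotation present"
```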