Chapter 1. Integrating Service Mesh
1.1. Integrating Service Mesh 2.x with OpenShift Serverless
The OpenShift Serverless Operator uses Kourier as the default ingress for Knative. You can use Service Mesh with OpenShift Serverless whether Kourier is enabled or disabled. If you disable Kourier, you can configure additional networking and routing options that Kourier does not support, such as mTLS.
1.1.1. Assumptions and limitations for Service Mesh integration
Review the following key assumptions and limitations before integrating OpenShift Serverless with Service Mesh:
- All Knative internal components and Knative Services run within Service Mesh with sidecar injection enabled. As a result, the mesh enforces strict mTLS across all components. Clients must use mTLS when sending requests to Knative Services and must present a valid certificate. OpenShift Routing is the only exception.
- OpenShift Serverless integrates with only one service mesh. You can run multiple meshes in the cluster, but OpenShift Serverless operates in a single mesh.
- You cannot change the target ServiceMeshMemberRoll for OpenShift Serverless. To use a different service mesh, uninstall and reinstall OpenShift Serverless.
1.1.2. Prerequisites for Service Mesh 2.x integration
Learn about the requirements that you must meet before integrating Service Mesh 2.x with OpenShift Serverless.
- You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
- You have installed the OpenShift CLI (oc).
- You have installed the OpenShift Serverless Operator.
- You have installed the Red Hat OpenShift Service Mesh 2.x Operator.
The examples in the following procedures use the domain example.com. The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate. To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Adjust the example commands according to your domain, subdomain, and CA.
- You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com, you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com.
- If you want to use any domain name, including domains that are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains.
OpenShift Serverless supports only the Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide. Undocumented features are not supported.
Using Serverless 1.31 with Service Mesh is only supported with Service Mesh version 2.2 or later. For details and information about versions other than 1.31, see the "Red Hat OpenShift Serverless Supported Configurations" page.
1.1.3. Creating a certificate to encrypt incoming external traffic
By default, the Service Mesh mTLS feature only secures traffic inside the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration.
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have installed the OpenShift Serverless Operator and Knative Serving.
- You have installed the OpenShift CLI (oc).
- You have access to the knative-serving-ingress namespace, which the OpenShift Serverless Operator creates automatically during installation.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.
Procedure
Create a root certificate and private key that signs the certificates for your Knative services:
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -subj '/O=Example Inc./CN=example.com' \
    -keyout root.key \
    -out root.crt

Create a wildcard certificate:

$ openssl req -nodes -newkey rsa:2048 \
    -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
    -keyout wildcard.key \
    -out wildcard.csr

Sign the wildcard certificate:

$ openssl x509 -req -days 365 -set_serial 0 \
    -CA root.crt \
    -CAkey root.key \
    -in wildcard.csr \
    -out wildcard.crt

Create a secret containing the wildcard certificate by entering one of the following commands, depending on your Service Mesh version:
Option A: For Service Mesh 2.x, create the secret in the istio-system namespace by entering the following command:

$ oc create -n istio-system secret tls wildcard-certs \
    --key=wildcard.key \
    --cert=wildcard.crt

Option B: For Service Mesh 3.x, create the secret in the knative-serving-ingress namespace by entering the following command:

$ oc create -n knative-serving-ingress secret tls wildcard-certs \
    --key=wildcard.key \
    --cert=wildcard.crt

The namespace used for the secret depends on the version of Service Mesh. Service Mesh 2.x expects the certificate in the istio-system namespace. Service Mesh 3.x uses the dedicated knative-serving-ingress namespace, where the OpenShift Serverless ingress gateway runs.
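As a local sanity check before you create the secret, you can run the certificate steps end to end and confirm that the wildcard certificate chains to the root CA. The final openssl verify call is an added check, not part of the official procedure:

```shell
# Generate the root CA and the signed wildcard certificate as in the steps above.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -subj '/O=Example Inc./CN=example.com' \
    -keyout root.key -out root.crt
openssl req -nodes -newkey rsa:2048 \
    -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
    -keyout wildcard.key -out wildcard.csr
openssl x509 -req -days 365 -set_serial 0 \
    -CA root.crt -CAkey root.key \
    -in wildcard.csr -out wildcard.crt

# Added sanity check: confirm the wildcard certificate was signed by the root CA.
openssl verify -CAfile root.crt wildcard.crt
```

If the chain is valid, the last command prints wildcard.crt: OK.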
1.1.4. Integrating Service Mesh with OpenShift Serverless
You can integrate Service Mesh 2.x with OpenShift Serverless to enable advanced traffic management, security, and observability for serverless applications. Follow the steps to verify prerequisites, install and configure both components, and verify the integration.
1.1.4.1. Verifying installation prerequisites
Before you install and configure the Service Mesh integration with Serverless, ensure that you meet all prerequisites.
Procedure
Check for conflicting gateways by running the following command:
$ oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t

You get an output similar to the following example:

knative-serving/knative-ingress-gateway  [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}]
knative-serving/knative-local-gateway    [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}]

This command must not return a Gateway that binds port: 443 and hosts: ["*"], except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance.

Note: The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads, because additional configuration, such as Gateways, might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway. Red Hat OpenShift Service Mesh allows only one Gateway to claim a wildcard host binding (hosts: ["*"]) on the same port (port: 443). If another Gateway already binds this configuration, you must create a separate mesh for Serverless workloads.
1.1.4.2. Installing and configuring Service Mesh
To integrate Serverless with Service Mesh, you must install Service Mesh with a specific configuration.
Procedure
Create a ServiceMeshControlPlane resource in the istio-system namespace with the following configuration:

Important: If you have an existing ServiceMeshControlPlane object, make sure that you have the same configuration applied.

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  profiles:
  - default
  security:
    dataPlane:
      mtls: true
  techPreview:
    meshConfig:
      defaultConfig:
        terminationDrainDuration: 35s
  gateways:
    ingress:
      service:
        metadata:
          labels:
            knative: ingressgateway
  proxy:
    networking:
      trafficControl:
        inbound:
          excludedPorts:
          - 8444 # metrics
          - 8022 # serving: wait-for-drain k8s pre-stop hook

mtls: true - Enforces strict mTLS in the mesh. Only calls using a valid client certificate are allowed.
techPreview.terminationDrainDuration - Serverless allows a graceful termination period of 30 seconds for Knative Services. The istio-proxy container needs a longer termination duration to make sure that no requests are dropped.
knative: ingressgateway - Defines a specific selector for the ingress gateway to target only the Knative gateway.
excludedPorts - Kubernetes and cluster monitoring call these ports. They are not part of the mesh and cannot be called by using mTLS, so they are excluded from the mesh.
Add the namespaces that you want to integrate with Service Mesh to the ServiceMeshMemberRoll object as members:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - knative-serving
  - knative-eventing
  - your-OpenShift-projects

spec.members - A list of namespaces to be integrated with Service Mesh.

Important: This list of namespaces must include the knative-serving and knative-eventing namespaces.
Apply the ServiceMeshMemberRoll resource by running the following command:

$ oc apply -f servicemesh-member-roll.yaml

Create the necessary gateways so that Service Mesh can accept traffic. The following example uses the knative-local-gateway object with the ISTIO_MUTUAL mode (mTLS):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-ingress-gateway
  namespace: knative-serving
spec:
  selector:
    knative: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: <wildcard_certs>
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-local-gateway
  namespace: knative-serving
spec:
  selector:
    knative: ingressgateway
  servers:
  - port:
      number: 8081
      name: https
      protocol: HTTPS
    tls:
      mode: ISTIO_MUTUAL
    hosts:
    - "*"
---
apiVersion: v1
kind: Service
metadata:
  name: knative-local-gateway
  namespace: istio-system
  labels:
    experimental.istio.io/disable-gateway-port-translation: "true"
spec:
  type: ClusterIP
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8081

credentialName: <wildcard_certs> - Name of the secret containing the wildcard certificate.
protocol: HTTPS and mode: ISTIO_MUTUAL - The knative-local-gateway object serves HTTPS traffic and expects all clients to send requests by using mTLS. This means that only traffic coming from within Service Mesh is possible. Workloads from outside the Service Mesh must use the external domain through OpenShift Routing.
Apply the Gateway resources by running the following command:

$ oc apply -f istio-knative-gateways.yaml
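As a quick sanity check, which is an assumed step rather than part of the official procedure, you can confirm that both gateways and the local gateway service were created:

```shell
# List the Knative gateways created in the previous step.
oc get gateway -n knative-serving

# Confirm that the knative-local-gateway service exists in istio-system.
oc get service knative-local-gateway -n istio-system
```

Both commands require access to the cluster and should list the knative-ingress-gateway, knative-local-gateway, and the ClusterIP service.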
1.1.4.3. Installing and configuring Serverless
After installing Service Mesh, you need to install Serverless with a specific configuration.
Procedure
Install Knative Serving with the following KnativeServing custom resource, which enables the Istio integration:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"
spec:
  ingress:
    istio:
      enabled: true
  deployments:
  - name: activator
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: autoscaler
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  config:
    istio:
      gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your_istio_namespace>.svc.cluster.local
      local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your_istio_namespace>.svc.cluster.local

enabled: true - Enables the Istio integration.
deployments - Enables sidecar injection for Knative Serving data plane pods.
config.istio - If Istio is not running in the istio-system namespace, set these two flags with the correct namespace.
Apply the KnativeServing resource by running the following command:

$ oc apply -f knative-serving-config.yaml

Install Knative Eventing with the following KnativeEventing object, which enables the Istio integration:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"
spec:
  config:
    features:
      istio: enabled
  workloads:
  - name: pingsource-mt-adapter
    labels:
      sidecar.istio.io/inject: "true"
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
  - name: imc-dispatcher
    labels:
      sidecar.istio.io/inject: "true"
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
  - name: mt-broker-ingress
    labels:
      sidecar.istio.io/inject: "true"
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
  - name: mt-broker-filter
    labels:
      sidecar.istio.io/inject: "true"
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
  - name: job-sink
    labels:
      sidecar.istio.io/inject: "true"
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "true"

spec.config.features.istio - Enables the Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service.
spec.workloads - Enables sidecar injection for Knative Eventing pods.
Apply the KnativeEventing resource by running the following command:

$ oc apply -f knative-eventing-config.yaml

Install Knative Kafka with the following KnativeKafka custom resource, which enables the Istio integration:

apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true
    bootstrapServers: <bootstrap_servers>
  source:
    enabled: true
  broker:
    enabled: true
    defaultConfig:
      bootstrapServers: <bootstrap_servers>
      numPartitions: <num_partitions>
      replicationFactor: <replication_factor>
  sink:
    enabled: true
  workloads:
  - name: kafka-controller
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-broker-receiver
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-broker-dispatcher
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-channel-receiver
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-channel-dispatcher
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-source-dispatcher
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"
  - name: kafka-sink-receiver
    labels:
      "sidecar.istio.io/inject": "true"
    annotations:
      "sidecar.istio.io/rewriteAppHTTPProbers": "true"

bootstrapServers: <bootstrap_servers> - The Apache Kafka cluster URL, for example my-cluster-kafka-bootstrap.kafka:9092.
spec.workloads - Enables sidecar injection for Knative Kafka pods.
Apply the KnativeKafka resource by running the following command:

$ oc apply -f knative-kafka-config.yaml

Install a ServiceEntry to inform Service Mesh of the communication between KnativeKafka components and an Apache Kafka cluster:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kafka-cluster
  namespace: knative-eventing
spec:
  hosts:
  - <bootstrap_servers_without_port>
  exportTo:
  - "."
  ports:
  - number: 9092
    name: tcp-plain
    protocol: TCP
  - number: 9093
    name: tcp-tls
    protocol: TCP
  - number: 9094
    name: tcp-sasl-tls
    protocol: TCP
  - number: 9095
    name: tcp-sasl-tls
    protocol: TCP
  - number: 9096
    name: tcp-tls
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE

spec.hosts - The list of Apache Kafka cluster hosts, for example my-cluster-kafka-bootstrap.kafka.
spec.ports - Apache Kafka cluster listener ports.

Note: The ports listed in spec.ports are example TCP (Transmission Control Protocol) ports. The actual values depend on the Apache Kafka cluster configuration.
Apply the ServiceEntry resource by running the following command:

$ oc apply -f kafka-cluster-serviceentry.yaml
1.1.4.4. Verifying the integration
After installing Service Mesh and Serverless with Istio enabled, you can verify that the integration works.
Procedure
Create a Knative Service that has sidecar injection enabled and uses a pass-through route:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: <service_name>
  namespace: <namespace>
  annotations:
    serving.knative.openshift.io/enablePassthrough: "true"
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
      - image: <image_url>

metadata.namespace - A namespace that is part of the service mesh member roll.
serving.knative.openshift.io/enablePassthrough: "true" - Instructs Knative Serving to generate a pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly.
sidecar.istio.io/inject: "true" - Injects Service Mesh sidecars into the Knative Service pods.

Important: Always add these annotations to all of your Knative Services to make them work with Service Mesh.
Apply the Service resource by running the following command:

$ oc apply -f knative-service.yaml

Access your serverless application by using a secure connection that is now trusted by the CA:

$ curl --cacert root.crt <service_url>

For example, run the following command:

$ curl --cacert root.crt https://hello-default.apps.openshift.example.com

You get an output similar to the following example:
Hello Openshift!
1.1.5. Enabling Knative Serving and Knative Eventing metrics when using Service Mesh with mTLS
If you enable Service Mesh with Mutual Transport Layer Security (mTLS), Service Mesh prevents Prometheus from scraping metrics. As a result, Knative Serving and Knative Eventing metrics are unavailable by default. You can enable these metrics when you use Service Mesh with mTLS.
Prerequisites
You have one of the following permissions to access the cluster:
- Cluster administrator permissions on OpenShift Container Platform
- Cluster administrator permissions on Red Hat OpenShift Service on AWS
- Dedicated administrator permissions on OpenShift Dedicated
- You have installed the OpenShift CLI (oc).
- You have access to a project with the appropriate roles and permissions to create applications and other workloads.
- You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
- You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.
Procedure
Specify prometheus as the metrics.backend-destination in the observability spec of the KnativeServing custom resource (CR):

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    observability:
      metrics.backend-destination: "prometheus"
...

This step prevents metrics from being disabled by default.

Note: When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must use the annotation on KnativeEventing to ensure proper event delivery.

The same mechanism is used for Knative Eventing. To enable metrics for Knative Eventing, specify prometheus as the metrics.backend-destination in the observability spec of the KnativeEventing custom resource (CR) as follows:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  config:
    observability:
      metrics.backend-destination: "prometheus"
...

Change and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec:

...
spec:
  proxy:
    networking:
      trafficControl:
        inbound:
          excludedPorts:
          - 8444
...
1.1.6. Disabling the default network policies
The OpenShift Serverless Operator generates the required network policies by default. However, because support for Service Mesh 3.x is currently in Technology Preview, these default network policies do not yet account for the networking requirements of Service Mesh 3.x. As a result, newly created Knative Services (ksvc) might fail to reach the Ready state when these policies are applied.

To avoid this issue, you must disable the automatic generation of Istio-related network policies by setting the serverless.openshift.io/disable-istio-net-policies-generation annotation to "true" in both the KnativeServing and KnativeEventing custom resources (CRs).
Prerequisites
You have one of the following permissions to access the cluster:
- Cluster administrator permissions on OpenShift Container Platform
- Cluster administrator permissions on Red Hat OpenShift Service on AWS
- Dedicated administrator permissions on OpenShift Dedicated
- You have installed the OpenShift CLI (oc).
- You have access to a project with the appropriate roles and permissions to create applications and other workloads.
- You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
- You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.
Procedure
Add the serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources.

Note: When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must disable the default network policy generation to ensure proper event delivery.

Annotate the KnativeEventing CR by running the following command:

$ oc edit KnativeEventing -n knative-eventing

You get an output similar to the following example:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"

Annotate the KnativeServing CR by running the following command:

$ oc edit KnativeServing -n knative-serving

You get an output similar to the following example:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"
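After annotating both CRs, you can check, as an assumed verification step rather than a documented one, that the Operator no longer generates Istio-related network policies in the Knative namespaces:

```shell
# List network policies in the Knative namespaces; Istio-related policies
# generated by the Operator should no longer appear after the annotation applies.
oc get networkpolicy -n knative-serving
oc get networkpolicy -n knative-eventing
```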
1.1.7. Improving net-istio memory usage by using secret filtering for Service Mesh
By default, the informers implementation in the Kubernetes client-go library fetches all resources of a given type. When many resources exist, this behavior increases memory usage and can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaks. The Knative net-istio ingress controller provides a filtering mechanism that lets controllers fetch only Knative-related secrets.
The secret filtering is enabled by default on the OpenShift Serverless Operator side. An environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true, is added by default to the net-istio controller pods.
If you enable secret filtering, you must label all of your secrets with networking.internal.knative.dev/certificate-uid: "<id>". Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets.
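For example, you can apply the label to an existing secret with a command like the following. The secret name my-tls-secret is a placeholder, and the <id> value stays as in the label described above:

```shell
# Hypothetical example: label an existing secret so that Knative Serving's
# filtered informer detects it. The secret name and UID value are placeholders.
oc label secret my-tls-secret networking.internal.knative.dev/certificate-uid="<id>"
```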
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.
- Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh is supported only for use with Red Hat OpenShift Service Mesh version 2.0.5 or later.
- Install the OpenShift Serverless Operator and Knative Serving.
- Install the OpenShift CLI (oc).
Procedure
Disable the secret filtering by setting the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to false by using the workloads field in the KnativeServing custom resource (CR):

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
...
  workloads:
  - env:
    - container: controller
      envVars:
      - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID
        value: 'false'
    name: net-istio-controller
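To confirm which value is in effect, you can inspect the environment of the net-istio-controller deployment. This verification step is an assumption, not part of the documented procedure, and the jsonpath assumes the variable is set on the first container:

```shell
# Print the container environment of net-istio-controller to confirm the
# ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID setting.
oc -n knative-serving get deployment net-istio-controller \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```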
1.2. Using Service Mesh 2.x to isolate network traffic with OpenShift Serverless
You can use Service Mesh 2.x with OpenShift Serverless to control and isolate network traffic between services. This integration helps you define fine-grained communication policies, enhance security through mutual TLS, and manage traffic flow within your serverless environment.
1.2.1. Network traffic isolation with Service Mesh and OpenShift Serverless
Learn how to use Service Mesh to isolate network traffic between tenants in OpenShift Serverless.
Using Service Mesh to isolate network traffic with OpenShift Serverless is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Service Mesh isolates network traffic between tenants on a shared Red Hat OpenShift Serverless cluster by using Service Mesh AuthorizationPolicy resources. Serverless supports this approach by using many Service Mesh resources. A tenant is a group of one or more projects that can communicate with each other over the network on a shared cluster.
1.2.2. High-level architecture
The high-level architecture of Serverless traffic isolation provided by Service Mesh consists of AuthorizationPolicy objects in the knative-serving, knative-eventing, and the tenants' namespaces, with all the components being part of the Service Mesh. The injected Service Mesh sidecars enforce those rules to isolate network traffic between tenants.
1.2.3. Securing the Service Mesh
You can use authorization policies and mTLS to secure Service Mesh.
Prerequisites
- You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
- You have set up the Service Mesh 2.x and Serverless integration.
- You have created one or more OpenShift projects for each tenant.
Procedure
Make sure that all Red Hat OpenShift Serverless projects of your tenant are part of the same ServiceMeshMemberRoll object as members:

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - knative-serving   # static value, needs to be here, see setup page
  - knative-eventing  # static value, needs to be here, see setup page
  - team-alpha-1      # example OpenShift project that belongs to the team-alpha tenant
  - team-alpha-2      # example OpenShift project that belongs to the team-alpha tenant
  - team-bravo-1      # example OpenShift project that belongs to the team-bravo tenant
  - team-bravo-2      # example OpenShift project that belongs to the team-bravo tenant

All projects that are part of the mesh must enforce mTLS in strict mode. This forces Istio to only accept connections with a client certificate present and allows the Service Mesh sidecar to validate the origin by using an AuthorizationPolicy object.

Create the configuration with AuthorizationPolicy objects in the knative-serving and knative-eventing namespaces:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-by-default
  namespace: knative-eventing
spec: { }
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-by-default
  namespace: knative-serving
spec: { }
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-mt-channel-based-broker-ingress-to-imc-dispatcher
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "imc-dispatcher"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/mt-broker-ingress" ]
    to:
    - operation:
        methods: [ "POST" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-mt-channel-based-broker-ingress-to-kafka-channel
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "kafka-channel-receiver"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/mt-broker-ingress" ]
    to:
    - operation:
        methods: [ "POST" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-kafka-channel-to-mt-channel-based-broker-filter
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "broker-filter"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/knative-kafka-channel-data-plane" ]
    to:
    - operation:
        methods: [ "POST" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-imc-to-mt-channel-based-broker-filter
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "broker-filter"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/imc-dispatcher" ]
    to:
    - operation:
        methods: [ "POST" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-probe-kafka-broker-receiver
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "kafka-broker-receiver"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
    to:
    - operation:
        methods: [ "GET" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-probe-kafka-sink-receiver
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "kafka-sink-receiver"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
    to:
    - operation:
        methods: [ "GET" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-probe-kafka-channel-receiver
  namespace: knative-eventing
spec:
  action: ALLOW
  selector:
    matchLabels:
      app.kubernetes.io/component: "kafka-channel-receiver"
  rules:
  - from:
    - source:
        namespaces: [ "knative-eventing" ]
        principals: [ "cluster.local/ns/knative-eventing/sa/kafka-controller" ]
    to:
    - operation:
        methods: [ "GET" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-traffic-to-activator
  namespace: knative-serving
spec:
  selector:
    matchLabels:
      app: activator
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: [ "knative-serving", "istio-system" ]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-traffic-to-autoscaler
  namespace: knative-serving
spec:
  selector:
    matchLabels:
      app: autoscaler
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: [ "knative-serving" ]

These policies restrict the access rules for the network communication between Serverless system components.
Specifically, they enforce the following rules:
- Deny all traffic that is not explicitly allowed in the knative-serving and knative-eventing namespaces
- Allow traffic from the istio-system and knative-serving namespaces to the activator
- Allow traffic from the knative-serving namespace to the autoscaler
- Allow health probes for Apache Kafka components in the knative-eventing namespace
- Allow internal traffic for channel-based brokers in the knative-eventing namespace
Apply the authorization policy configuration by running the following command:
$ oc apply -f knative-default-authz-policies.yaml
Define which OpenShift projects can communicate with each other. For this communication, every OpenShift project of a tenant requires the following:
- One AuthorizationPolicy object limiting directly incoming traffic to the tenant’s project
- One AuthorizationPolicy object limiting incoming traffic through the activator component of Serverless that runs in the knative-serving project
- One AuthorizationPolicy object allowing Kubernetes to call PreStopHooks on Knative Services
Instead of creating these policies manually, install the helm utility and create the necessary resources for each tenant:
$ helm repo add openshift-helm-charts https://charts.openshift.io/
$ helm template openshift-helm-charts/redhat-knative-istio-authz --version 1.37.0 --set "name=team-alpha" --set "namespaces={team-alpha-1,team-alpha-2}" > team-alpha.yaml
$ helm template openshift-helm-charts/redhat-knative-istio-authz --version 1.37.0 --set "name=team-bravo" --set "namespaces={team-bravo-1,team-bravo-2}" > team-bravo.yaml
Apply the authorization policy configuration by running the following command:
$ oc apply -f team-alpha.yaml -f team-bravo.yaml
1.2.4. Verifying the configuration
You can use the curl command to verify the configuration for network traffic isolation.
Prerequisites
- You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
- You have set up the Service Mesh 2.x and Serverless integration.
- You have created one or more OpenShift projects for each tenant.
The following examples assume two tenants, each with one namespace, all of which are part of the ServiceMeshMemberRoll object and configured with the resources in the team-alpha.yaml and team-bravo.yaml files.
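For reference, a ServiceMeshMemberRoll that matches this setup might look like the following sketch. The member list is an assumption based on the tenant namespaces used in these examples; adjust it to your own namespaces:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default              # Service Mesh 2.x expects this exact name
  namespace: istio-system
spec:
  members:                   # namespaces participating in the mesh (placeholders)
    - knative-serving
    - knative-eventing
    - team-alpha-1
    - team-bravo-1
```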
Procedure
Deploy Knative Services in the namespaces of both of the tenants:
Create the service for team-alpha-1 by running the following command:
$ kn service create test-webapp -n team-alpha-1 \ --annotation-service serving.knative.openshift.io/enablePassthrough=true \ --annotation-revision sidecar.istio.io/inject=true \ --env RESPONSE="Hello Serverless" \ --image docker.io/openshift/hello-openshift
Create the service for team-bravo-1 by running the following command:
$ kn service create test-webapp -n team-bravo-1 \ --annotation-service serving.knative.openshift.io/enablePassthrough=true \ --annotation-revision sidecar.istio.io/inject=true \ --env RESPONSE="Hello Serverless" \ --image docker.io/openshift/hello-openshift
You can also use the following YAML configuration:
apiVersion: serving.knative.dev/v1 kind: Service metadata: name: test-webapp namespace: team-alpha-1 annotations: serving.knative.openshift.io/enablePassthrough: "true" spec: template: metadata: annotations: sidecar.istio.io/inject: 'true' spec: containers: - image: docker.io/openshift/hello-openshift env: - name: RESPONSE value: "Hello Serverless!" --- apiVersion: serving.knative.dev/v1 kind: Service metadata: name: test-webapp namespace: team-bravo-1 annotations: serving.knative.openshift.io/enablePassthrough: "true" spec: template: metadata: annotations: sidecar.istio.io/inject: 'true' spec: containers: - image: docker.io/openshift/hello-openshift env: - name: RESPONSE value: "Hello Serverless!"Deploy a
curlpod for testing the connections:$ cat <<EOF | oc apply -f - apiVersion: apps/v1 kind: Deployment metadata: name: curl namespace: team-alpha-1 labels: app: curl spec: replicas: 1 selector: matchLabels: app: curl template: metadata: labels: app: curl annotations: sidecar.istio.io/inject: 'true' spec: containers: - name: curl image: curlimages/curl command: - sleep - "3600" EOFVerify the configuration by using the following
curl commands:
Test the team-alpha-1 to team-alpha-1 connection through the cluster-local domain, which is allowed, by running the following command:
$ oc exec deployment/curl -n team-alpha-1 -it -- curl -v http://test-webapp.team-alpha-1:80
You get an output similar to the following example:
HTTP/1.1 200 OK content-length: 18 content-type: text/plain; charset=utf-8 date: Wed, 26 Jul 2023 12:49:59 GMT server: envoy x-envoy-upstream-service-time: 9 Hello Serverless!Test the
team-alpha-1 to team-alpha-1 connection through an external domain, which is allowed, by running the following command:
$ EXTERNAL_URL=$(oc get ksvc -n team-alpha-1 test-webapp -o custom-columns=:.status.url --no-headers) && \ oc exec deployment/curl -n team-alpha-1 -it -- curl -ik $EXTERNAL_URLYou get an output similar to the following example:
HTTP/2 200 content-length: 18 content-type: text/plain; charset=utf-8 date: Wed, 26 Jul 2023 12:55:30 GMT server: istio-envoy x-envoy-upstream-service-time: 3629 Hello Serverless!Test the
team-alpha-1 to team-bravo-1 connection through the cluster-local domain, which is not allowed, by running the following command:
$ oc exec deployment/curl -n team-alpha-1 -it -- curl -v http://test-webapp.team-bravo-1:80You get an output similar to the following example:
* processing: http://test-webapp.team-bravo-1:80 * Trying 172.30.73.216:80... * Connected to test-webapp.team-bravo-1 (172.30.73.216) port 80 > GET / HTTP/1.1 > Host: test-webapp.team-bravo-1 > User-Agent: curl/8.2.0 > Accept: */* > < HTTP/1.1 403 Forbidden < content-length: 19 < content-type: text/plain < date: Wed, 26 Jul 2023 12:55:49 GMT < server: envoy < x-envoy-upstream-service-time: 6 < * Connection #0 to host test-webapp.team-bravo-1 left intact RBAC: access deniedTest the
team-alpha-1 to team-bravo-1 connection through an external domain, which is allowed, by running the following command:
$ EXTERNAL_URL=$(oc get ksvc -n team-bravo-1 test-webapp -o custom-columns=:.status.url --no-headers) && \ oc exec deployment/curl -n team-alpha-1 -it -- curl -ik $EXTERNAL_URLYou get an output similar to the following example:
HTTP/2 200 content-length: 18 content-type: text/plain; charset=utf-8 date: Wed, 26 Jul 2023 12:56:22 GMT server: istio-envoy x-envoy-upstream-service-time: 2856 Hello Serverless!Delete the resources that were created for verification:
$ oc delete deployment/curl -n team-alpha-1 && \ oc delete ksvc/test-webapp -n team-alpha-1 && \ oc delete ksvc/test-webapp -n team-bravo-1
1.3. Integrating Service Mesh 3.x with OpenShift Serverless
The OpenShift Serverless Operator uses Kourier as the default ingress for Knative. You can use Service Mesh with OpenShift Serverless whether you enable Kourier or disable it. If you disable Kourier, you can configure additional networking and routing options that Kourier does not support, such as mTLS.
1.3.1. Assumptions and limitations for Service Mesh 3.x integration
Learn about the key assumptions and limitations for integrating OpenShift Serverless with Service Mesh 3.x.
Note the following assumptions and limitations:
- All Knative internal components and Knative Services run within Service Mesh with sidecar injection enabled. As a result, the mesh enforces strict mutual Transport Layer Security (mTLS). Clients must use mTLS when sending requests to Knative Services and must present a valid certificate. OpenShift Routing is the only exception.
- OpenShift Serverless integrates with only one service mesh. You can run many meshes in the cluster, but OpenShift Serverless operates in a single mesh.
1.3.2. Prerequisites for Service Mesh 3.x integration
Learn about the requirements that you must meet before integrating Service Mesh 3.x with OpenShift Serverless.
- You have access to a Red Hat OpenShift Serverless account with cluster administrator access.
-
You have installed the OpenShift CLI (
oc). - You have installed the Serverless Operator.
- You have installed the Red Hat OpenShift Service Mesh 3.x Operator.
The examples in the following procedures use the domain
example.com. The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate.
To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA.
-
You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is
https://console-openshift-console.apps.openshift.example.com, you must configure the wildcard certificate so that the domain is*.apps.openshift.example.com. - If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up a domain mapping for those domains.
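The domain mapping mentioned above is expressed with a DomainMapping resource. The following is a minimal sketch; custom-api.example.org, demo, and hello-service are placeholder names, not values from this procedure:

```yaml
apiVersion: serving.knative.dev/v1beta1
kind: DomainMapping
metadata:
  name: custom-api.example.org   # the custom domain to map (placeholder)
  namespace: demo                # namespace of the target Knative Service (placeholder)
spec:
  ref:
    name: hello-service          # target Knative Service (placeholder)
    kind: Service
    apiVersion: serving.knative.dev/v1
```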
1.3.3. Creating a certificate to encrypt incoming external traffic
By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration.
Prerequisites
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You have installed the OpenShift Serverless Operator and Knative Serving.
-
You have installed the OpenShift CLI (
oc). -
You have access to the
knative-serving-ingressnamespace, which the OpenShift Serverless Operator creates automatically during installation. - You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads.
Procedure
Create a root certificate and private key that signs the certificates for your Knative services:
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example Inc./CN=example.com' \ -keyout root.key \ -out root.crtCreate a wildcard certificate:
$ openssl req -nodes -newkey rsa:2048 \ -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \ -keyout wildcard.key \ -out wildcard.csrSign the wildcard certificate:
$ openssl x509 -req -days 365 -set_serial 0 \ -CA root.crt \ -CAkey root.key \ -in wildcard.csr \ -out wildcard.crtCreate a secret containing the wildcard certificate by entering one of the following commands, depending on your Service Mesh version:
Option A: For Service Mesh 2.x, create the secret in the
istio-system namespace by entering the following command:
$ oc create -n istio-system secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt
Option B: For Service Mesh 3.x, create the secret in the knative-serving-ingress namespace by entering the following command:
$ oc create -n knative-serving-ingress secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt
The namespace for the secret depends on the Service Mesh version: Service Mesh 2.x expects the certificate in the istio-system namespace, while Service Mesh 3.x uses the dedicated knative-serving-ingress namespace where the OpenShift Serverless ingress gateway runs.
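Before creating the secret, you can check locally that the wildcard certificate actually chains to the root CA. The following sketch regenerates the files from the previous steps so that it is self-contained; against your existing files you would run only the openssl verify line:

```shell
# Recreate the root CA and signed wildcard certificate (same commands as in the procedure):
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -subj '/O=Example Inc./CN=example.com' \
  -keyout root.key -out root.crt
openssl req -nodes -newkey rsa:2048 \
  -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \
  -keyout wildcard.key -out wildcard.csr
openssl x509 -req -days 365 -set_serial 0 \
  -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt

# Verify the signature chain; a valid chain prints "wildcard.crt: OK":
openssl verify -CAfile root.crt wildcard.crt

# Confirm the wildcard subject that the gateway will present:
openssl x509 -in wildcard.crt -noout -subject
```

If the verify step fails, re-check that the same root.crt and root.key were used to sign the CSR before loading the files into the secret.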
1.3.4. Configure and verify Service Mesh 3.x integration with OpenShift Serverless
Integrate Service Mesh 3.x with OpenShift Serverless to enable advanced traffic management, security, and observability for serverless applications. Verify prerequisites, install and configure both components, and then verify the integration.
1.3.4.1. Verifying installation prerequisites
Before you install and configure the Service Mesh integration with Serverless, ensure that you meet all prerequisites.
Procedure
Check for conflicting gateways by running the following command:
$ oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -tYou get an output similar to the following example:
knative-serving/knative-ingress-gateway [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}] knative-serving/knative-local-gateway [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}]This command should not return a
Gateway that binds port: 443 and hosts: ["*"], except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance.
Note
The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads, because additional configuration, such as Gateways, might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway. Red Hat OpenShift Service Mesh allows only one Gateway to claim a wildcard host binding (hosts: ["*"]) on the same port (port: 443). If another Gateway already binds this configuration, you must create a separate mesh for Serverless workloads.
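The check above can be narrowed with jq so that only wildcard-443 claimants are printed. This is a sketch that runs against a saved sample of the gateway list (the sample content is an assumption mirroring the example output); on a live cluster you would pipe oc get gateway -A -o json into the same filter. Any result outside knative-serving or another mesh indicates a conflict:

```shell
# Sample of 'oc get gateway -A -o json' output, saved locally for illustration:
cat > gateways.json <<'EOF'
{"items":[
 {"metadata":{"namespace":"knative-serving","name":"knative-ingress-gateway"},
  "spec":{"servers":[{"hosts":["*"],"port":{"number":443,"protocol":"HTTPS"}}]}},
 {"metadata":{"namespace":"knative-serving","name":"knative-local-gateway"},
  "spec":{"servers":[{"hosts":["*"],"port":{"number":8081,"protocol":"HTTP"}}]}}
]}
EOF

# Print only Gateways that claim hosts ["*"] on port 443:
jq -r '.items[]
  | select(any(.spec.servers[]?; (.port.number == 443) and ((.hosts // []) | index("*") != null)))
  | "\(.metadata.namespace)/\(.metadata.name)"' gateways.json
```

With this sample, only knative-serving/knative-ingress-gateway is printed, which is the expected, non-conflicting result.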
1.3.4.2. Installing and configuring Service Mesh 3.x
You can integrate Service Mesh 3.x with Serverless by installing and configuring the required Istio components, gateways, and Knative Serving resources. Once these resources are configured, you can deploy the Knative Serving instance with Istio to ensure that your serverless workloads run within the Service Mesh environment.
Procedure
Create a file named istio.yaml that defines an Istio resource in the istio-system namespace with the following configuration:
apiVersion: sailoperator.io/v1 kind: Istio metadata: name: default spec: values: meshConfig: defaultConfig: terminationDrainDuration: 35s updateStrategy: inactiveRevisionDeletionGracePeriodSeconds: 30 type: InPlace namespace: istio-system version: v1.26-latest
Create a project called
istio-system by running the following command:
$ oc new-project istio-system
Apply the Istio custom resource (CR) by running the following command:
$ oc apply -f istio.yaml
Create an
IstioCNI resource by creating a file named istio-cni.yaml with the following configuration:
apiVersion: sailoperator.io/v1 kind: IstioCNI metadata: name: default spec: namespace: istio-cni version: v1.26-latest
Create a project called
istio-cni by running the following command:
$ oc new-project istio-cni
Apply the IstioCNI CR by running the following command:
$ oc apply -f istio-cni.yaml
Create a file named
gateway-deploy.yamlwith the following configuration:apiVersion: apps/v1 kind: Deployment metadata: name: knative-istio-ingressgateway namespace: knative-serving-ingress spec: selector: matchLabels: knative: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: knative: ingressgateway sidecar.istio.io/inject: "true" spec: containers: - name: istio-proxy image: auto --- # Set up roles to allow reading credentials for TLS apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: knative-serving-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: knative-serving-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default-
spec.template.metadata.annotations.inject.istio.io/templatesspecifies the gateway injection template rather than the default sidecar template. -
spec.template.metadata.labels.knativedefines a unique label for the gateway. This is required to ensure Gateways can select this workload. -
spec.template.metadata.labels.sidecar.istio.io/injectenables gateway injection. -
spec.template.spec.containers.imageensures that the image automatically updates each time the pod starts.
-
Apply the resource by running the following command:
$ oc apply -f gateway-deploy.yamlCreate gateway resources for the Knative Serving component by creating a file named
serving-gateways.yamlwith the following configuration:########################################################### # cluster external ########################################################### apiVersion: v1 kind: Service metadata: name: knative-istio-ingressgateway namespace: knative-serving-ingress spec: type: ClusterIP selector: knative: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - hosts: - '*' port: name: https number: 443 protocol: HTTPS tls: credentialName: wildcard-certs mode: SIMPLE --- ########################################################### # cluster local ########################################################### apiVersion: v1 kind: Service metadata: labels: experimental.istio.io/disable-gateway-port-translation: "true" name: knative-local-gateway namespace: knative-serving-ingress spec: ports: - name: http2 port: 80 protocol: TCP targetPort: 8081 selector: knative: ingressgateway type: ClusterIP --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - hosts: - '*' port: name: http number: 8081 protocol: HTTPApply the resource by running the following command:
$ oc apply -f serving-gateways.yamlCreate a
PeerAuthenticationresource in theistio-systemnamespace to enforce mTLS across the mesh with the following configuration:apiVersion: security.istio.io/v1 kind: PeerAuthentication metadata: name: mesh-mtls namespace: istio-system spec: mtls: mode: STRICTApply the resource by running the following command:
$ oc apply -f peerauth.yaml
1.3.4.3. Installing and configuring Serverless
After installing Service Mesh, you need to install Serverless with a specific configuration.
Procedure
Install Knative Serving with the following
KnativeServing custom resource, which enables the Istio integration:
apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" spec: ingress: istio: enabled: true deployments: - name: activator labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: autoscaler labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" config: istio: gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your_istio_namespace>.svc.cluster.local local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your_istio_namespace>.svc.cluster.localenabled: true- Enable Istio integration.
deployments- Enable sidecar injection for Knative Serving data plane pods.
config.istio- If Istio is not running in the istio-system namespace, replace <your_istio_namespace> in these two settings with the namespace where Istio runs.
Apply the
KnativeServing resource by running the following command:
$ oc apply -f knative-serving-config.yaml
Install Knative Eventing with the following
KnativeEventing object, which enables the Istio integration:
apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" spec: config: features: istio: enabled workloads: - name: pingsource-mt-adapter labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" - name: imc-dispatcher labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" - name: mt-broker-ingress labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" - name: mt-broker-filter labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" - name: job-sink labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true"spec.config.features.istio-
Enables the Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service.
spec.workloads- Enables sidecar injection for Knative Eventing pods.
Apply the
KnativeEventing resource by running the following command:
$ oc apply -f knative-eventing-config.yaml
Install Knative Kafka with the following
KnativeKafka custom resource, which enables the Istio integration:
apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true bootstrapServers: <bootstrap_servers> source: enabled: true broker: enabled: true defaultConfig: bootstrapServers: <bootstrap_servers> numPartitions: <num_partitions> replicationFactor: <replication_factor> sink: enabled: true workloads: - name: kafka-controller labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-source-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-sink-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true"bootstrapServers: <bootstrap_servers>-
The Apache Kafka cluster URL, for example
my-cluster-kafka-bootstrap.kafka:9092. spec.workloads- Enable sidecar injection for Knative Kafka pods.
Apply the
KnativeKafka object by running the following command:
$ oc apply -f knative-kafka-config.yaml
Create a ServiceEntry object to inform Service Mesh of the communication between KnativeKafka components and an Apache Kafka cluster:
apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: kafka-cluster namespace: knative-eventing spec: hosts: - <bootstrap_servers_without_port> exportTo: - "." ports: - number: 9092 name: tcp-plain protocol: TCP - number: 9093 name: tcp-tls protocol: TCP - number: 9094 name: tcp-sasl-tls protocol: TCP - number: 9095 name: tcp-sasl-tls protocol: TCP - number: 9096 name: tcp-tls protocol: TCP location: MESH_EXTERNAL resolution: NONEspec.hosts-
The list of Apache Kafka cluster hosts, for example
my-cluster-kafka-bootstrap.kafka.
spec.ports- The Apache Kafka cluster listener ports.
Note
The ports listed in spec.ports are example TCP (Transmission Control Protocol) ports. The actual values depend on the Apache Kafka cluster configuration.
Apply the
ServiceEntryresource by running the following command:$ oc apply -f kafka-cluster-serviceentry.yaml
1.3.4.4. Verifying the integration setup for Service Mesh 3.x
After installing and configuring Service Mesh 3.x with Serverless, you can verify that the integration is working correctly. This verification ensures that the Service Mesh components, gateways, and Knative Serving configuration are properly set up and that serverless workloads can communicate securely within the mesh.
The following test deploys a simple Knative service and verifies sidecar injection, mutual Transport Layer Security (mTLS) compatibility, and passthrough by the ingress gateway.
Procedure
Verify that the Istio component is running by running the following command:
$ oc get pods -n istio-systemVerify that the Istio Container Network Interface (CNI) component is running by running the following command:
$ oc get pods -n istio-cniVerify that the Knative component is running by running the following command:
$ oc get pods -n knative-servingVerify gateway services exist by running the following command:
$ oc get svc -n knative-serving-ingressCreate a test namespace by running the following command:
$ oc new-project demoCreate the sample Knative service manifest and save as
hello-service.yamlwith the following configuration:apiVersion: serving.knative.dev/v1 kind: Service metadata: annotations: serving.knative.openshift.io/enablePassthrough: "true" name: hello-service namespace: demo spec: template: metadata: labels: sidecar.istio.io/inject: "true" annotations: sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/showcase-
serving.knative.openshift.io/enablePassthrough: "true"configures the ingress to allow TLS passthrough via the Istio gateway. -
sidecar.istio.io/inject: "true"ensures the Istio proxy is injected. -
sidecar.istio.io/rewriteAppHTTPProbers: "true"makes Knative health probes work with mTLS.
-
Apply the Knative service by running the following command:
$ oc apply -f hello-service.yamlConfirm sidecar injection and pod readiness by running the following commands:
$ oc get pods -n demo$ oc get pod -n demo -l serving.knative.dev/service=hello-service -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'Retrieve the service URL by running the following command:
$ oc get ksvc hello-service -n demo -o jsonpath='{.status.url}{"\n"}'Call the service by entering any one of the following commands:
Option A: If you have a trusted certificate set on the ingress domain, enter the following command:
$ curl https://$(oc get ksvc hello-service -n demo -o jsonpath='{.status.url}' | sed 's#https://##')Option B: If you are using a custom or self-signed certificate, use -k or provide your CA file with --cacert <path> by entering the following command:
$ curl --cacert <path_to_your_CA_file> https://$(oc get ksvc hello-service -n demo -o jsonpath='{.status.url}' | sed 's#https://##')You should see an output similar to the following example:
{"artifact":"knative-showcase","greeting":"Welcome"}The exact JSON values might vary, but the response should indicate that the
knative-showcaseapplication is running successfully.
1.3.5. Disabling the default network policies
The OpenShift Serverless Operator generates the required network policies by default. However, because support for Service Mesh 3.x is currently in Technology Preview, these default network policies do not yet account for the networking requirements of Service Mesh 3.x. As a result, newly created Knative Services (ksvc) might fail to reach the Ready state when these policies are applied.
To avoid this issue, you must disable the automatic generation of Istio-related network policies by setting the serverless.openshift.io/disable-istio-net-policies-generation annotation to true in both the KnativeServing and KnativeEventing custom resources.
Prerequisites
You have one of the following permissions to access the cluster:
- Cluster administrator permissions on OpenShift Container Platform
- Cluster administrator permissions on Red Hat OpenShift Service on AWS
- Dedicated administrator permissions on OpenShift Dedicated
-
You have installed the OpenShift CLI (
oc). - You have access to a project with the appropriate roles and permissions to create applications and other workloads.
- You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster.
- You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled.
Procedure
Add the
serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources.
Note
The OpenShift Serverless Operator generates the required network policies by default. When you configure ServiceMeshControlPlane with manageNetworkPolicy: false, you must disable the default network policy generation to ensure proper event delivery.
Annotate the
KnativeEventing CR by running the following command:
$ oc edit KnativeEventing -n knative-eventing
You get an output similar to the following example:
apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true"Annotate the
KnativeServing CR by running the following command:
$ oc edit KnativeServing -n knative-serving
You get an output similar to the following example:
apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true"
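If you prefer to prepare the annotated manifest offline instead of using oc edit, a quick local check confirms the annotation is present before you apply it. This is a sketch; the filename knative-serving-annotated.yaml is illustrative:

```shell
# Save a minimal annotated KnativeServing manifest (content mirrors the example above):
cat > knative-serving-annotated.yaml <<'EOF'
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    serverless.openshift.io/disable-istio-net-policies-generation: "true"
EOF

# Confirm the annotation is set to "true" before applying the manifest:
grep 'serverless.openshift.io/disable-istio-net-policies-generation: "true"' knative-serving-annotated.yaml
```

The grep command exits with a nonzero status if the annotation is missing, which makes it easy to gate an automated apply step on this check.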