Chapter 5. Kourier and Istio ingresses

OpenShift Serverless supports the following two ingress solutions:

  • Kourier
  • Istio using Red Hat OpenShift Service Mesh

The default is Kourier.
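The ingress solution is selected in the KnativeServing custom resource in the knative-serving namespace. The following is a minimal sketch, assuming the operator.knative.dev/v1beta1 API version; check the configuration reference for your OpenShift Serverless release for the exact fields it supports.

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        kourier:
          enabled: true     # Kourier is the default ingress
        istio:
          enabled: false    # set to true to use Red Hat OpenShift Service Mesh instead

Enabling Istio here is only one part of the Service Mesh integration described later in this chapter.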

5.1. Kourier and Istio ingress solutions

5.1.1. Kourier

Kourier is the default ingress solution for OpenShift Serverless. It has the following properties:

  • It is based on the Envoy proxy.
  • It is simple and lightweight.
  • It provides the basic routing functionality that OpenShift Serverless requires for its feature set.
  • It supports basic observability and metrics.
  • It supports basic TLS termination of Knative Service routing.
  • It provides only limited configuration and extension options.

5.1.2. Istio using OpenShift Service Mesh

Using Istio as the ingress solution for OpenShift Serverless enables an additional feature set that is based on what Red Hat OpenShift Service Mesh offers:

  • Native mTLS between all connections
  • Serverless components are part of a service mesh
  • Additional observability and metrics
  • Authorization and authentication support
  • Custom rules and configuration, as supported by Red Hat OpenShift Service Mesh

However, the additional features come with a higher overhead and resource consumption. For details, see the Red Hat OpenShift Service Mesh documentation.

See the "Integrating Service Mesh with OpenShift Serverless" section of Serverless documentation for Istio requirements and installation instructions.

5.1.3. Traffic configuration and routing

Regardless of whether you use Kourier or Istio, the traffic for a Knative Service is configured in the knative-serving namespace by the net-kourier-controller or the net-istio-controller, respectively.

The controller reads the KnativeService object and its child custom resources to configure the ingress solution. Both ingress solutions are based on Envoy and provide an ingress gateway pod that becomes part of the traffic path. By default, OpenShift Serverless creates two routes for each KnativeService object:

  • A cluster-external route that is forwarded by the OpenShift router, for example myapp-namespace.example.com.
  • A cluster-local route containing the cluster domain, for example myapp.namespace.svc.cluster.local. This domain can and should be used to call Knative services from Knative or other user workloads.
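If a service must only be reachable through its cluster-local route, the Knative Service can be labeled with networking.knative.dev/visibility: cluster-local, which is standard Knative Serving behavior. The following is a minimal sketch that reuses the myapp and namespace names from the example above and a placeholder image reference:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: myapp
      namespace: namespace
      labels:
        networking.knative.dev/visibility: cluster-local   # expose only myapp.namespace.svc.cluster.local
    spec:
      template:
        spec:
          containers:
            - image: <image-reference>                     # placeholder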

The ingress gateway can forward requests either in the serve mode or the proxy mode:

  • In the serve mode, requests go directly to the Queue-Proxy sidecar container of the Knative service.
  • In the proxy mode, requests first go through the Activator component in the knative-serving namespace.

The choice of mode depends on the configuration of Knative, the Knative service, and the current traffic. For example, if a Knative Service is scaled to zero, requests are sent to the Activator component, which acts as a buffer until a new Knative service pod is started.
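How long the Activator stays in the traffic path can be influenced with the standard Knative autoscaling.knative.dev/target-burst-capacity annotation on the revision template. The following is a hedged sketch rather than a recommended setting; the global default comes from the autoscaler configuration.

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: myapp
      namespace: namespace
    spec:
      template:
        metadata:
          annotations:
            # "-1" keeps the Activator (proxy mode) in the path at all times;
            # "0" removes it from the path as soon as the service is scaled above zero (serve mode).
            autoscaling.knative.dev/target-burst-capacity: "-1"
        spec:
          containers:
            - image: <image-reference>   # placeholder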
