OpenShift Service Mesh 3.0 is a Technology Preview feature only
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. This documentation is a work in progress and might not be complete or fully tested.
Chapter 1. About gateways
A gateway is a standalone Envoy proxy deployment and an associated Kubernetes service operating at the edge of a service mesh. You can configure a gateway to provide fine-grained control over the traffic that enters or leaves the mesh. In Red Hat OpenShift Service Mesh, you install gateways using gateway injection.
1.1. About gateway injection
Gateway injection relies on the same mechanism as sidecar injection to inject the Envoy proxy into gateway pods. To install a gateway by using gateway injection, you create a Kubernetes Deployment object and an associated Kubernetes Service object in a namespace that is visible to the Istio control plane. When you create the Deployment object, you label and annotate it so that the Istio control plane injects a proxy and configures that proxy as a gateway. After installing the gateway, you configure it to control ingress and egress traffic by using the Istio Gateway and VirtualService resources.
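For example, a Gateway and VirtualService configuration similar to the following sketch routes inbound HTTP traffic that arrives at the gateway to a backend service. This is a minimal sketch for illustration only: the external hostname, the port numbers, and the backend service name are placeholder assumptions and are not created by the procedure in this chapter.
# Minimal sketch, assuming the gateway pods carry the label istio: <gateway_name>
# and that a backend service named <backend_service> exists in the mesh.
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: <gateway_name>
  namespace: <gateway_namespace>
spec:
  selector:
    istio: <gateway_name>        # matches the unique label on the injected gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "<external_hostname>"      # placeholder hostname
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: <backend_service>
  namespace: <gateway_namespace>
spec:
  hosts:
  - "<external_hostname>"        # placeholder hostname
  gateways:
  - <gateway_name>
  http:
  - route:
    - destination:
        host: <backend_service>  # placeholder backend service
        port:
          number: 8080           # placeholder service port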
1.1.1. Installing a gateway by using gateway injection
This procedure explains how to install a gateway by using gateway injection.
You can use this procedure to create ingress or egress gateways.
Prerequisites
- You have installed the OpenShift Service Mesh Operator version 3.0 or later.
- You have created an Istio control plane.
- You have created an IstioCNI resource.
Procedure
Create a namespace that you will use to install the gateway.
$ oc create namespace <gateway_namespace>
Note: Install the gateway and the Istio control plane in different namespaces.
You can install the gateway in a dedicated gateway namespace. This approach allows the gateway to be shared by many applications operating in different namespaces. Alternatively, you can install the gateway in an application namespace. In this approach, the gateway acts as a dedicated gateway for the application in that namespace.
Create a YAML file named secret-reader.yml that defines the service account, role, and role binding for the gateway deployment. These settings enable the gateway to read secrets, which is required for obtaining TLS credentials.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader
  namespace: <gateway_namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: <gateway_namespace>
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader
  namespace: <gateway_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: secret-reader
Apply the YAML file by running the following command:
$ oc apply -f secret-reader.yml
Create a YAML file named gateway-deployment.yml that defines the Kubernetes Deployment object for the gateway.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <gateway_name>
  namespace: <gateway_namespace>
spec:
  selector:
    matchLabels:
      istio: <gateway_name>
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway 1
      labels:
        istio: <gateway_name> 2
        sidecar.istio.io/inject: "true" 3
    spec:
      containers:
      - name: istio-proxy
        image: auto 4
        securityContext:
          capabilities:
            drop:
            - ALL
          allowPrivilegeEscalation: false
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
        ports:
        - containerPort: 15090
          protocol: TCP
          name: http-envoy-prom
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 128Mi
      securityContext:
        sysctls:
        - name: net.ipv4.ip_unprivileged_port_start
          value: "0"
      serviceAccountName: secret-reader 5
1. Indicates that the Istio control plane uses the gateway injection template instead of the default sidecar template.
2. Ensure that a unique label is set for the gateway deployment. A unique label is required so that Istio Gateway resources can select gateway workloads.
3. Enables gateway injection by setting the sidecar.istio.io/inject label to "true". If the name of the Istio resource is not default, you must use the istio.io/rev: <istio_revision> label instead, where the revision represents the active revision of the Istio resource. See the sketch after this list.
4. Sets the image field to auto so that the image automatically updates each time the pod starts.
5. Sets the serviceAccountName to the name of the ServiceAccount created previously.
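For example, when the Istio resource is not named default, the pod template metadata of the gateway deployment might look similar to the following sketch. The revision value is a placeholder; use the active revision of your Istio resource.
# Sketch of the pod template metadata for a non-default Istio resource.
# <istio_revision> is a placeholder for the active revision of the Istio resource.
spec:
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        istio: <gateway_name>
        istio.io/rev: <istio_revision>  # used instead of the sidecar.istio.io/inject: "true" label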
Apply the YAML file by running the following command:
$ oc apply -f gateway-deployment.yml
Verify that the gateway Deployment rollout was successful by running the following command:
$ oc rollout status deployment/<gateway_name> -n <gateway_namespace>
You should see output similar to the following:
Example output
Waiting for deployment "<gateway_name>" rollout to finish: 0 of 1 updated replicas are available...
deployment "<gateway_name>" successfully rolled out
Create a YAML file named gateway-service.yml that contains the Kubernetes Service object for the gateway.
apiVersion: v1
kind: Service
metadata:
  name: <gateway_name>
  namespace: <gateway_namespace>
spec:
  type: ClusterIP 1
  selector:
    istio: <gateway_name> 2
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
1. When you set spec.type to ClusterIP, the gateway Service object can be accessed only from within the cluster. If the gateway must handle ingress traffic from outside the cluster, set spec.type to LoadBalancer. Alternatively, you can expose the gateway by using an OpenShift route, as shown in the sketch after this verification step.
2. Set the selector to the unique label or set of labels specified in the pod template of the gateway deployment that you previously created.
Apply the YAML file by running the following command:
$ oc apply -f gateway-service.yml
Verify that the gateway service is targeting the endpoint of the gateway pods by running the following command:
$ oc get endpoints <gateway_name> -n <gateway_namespace>
You should see output similar to the following example:
Example output
NAME             ENDPOINTS                              AGE
<gateway_name>   10.131.0.181:8080,10.131.0.181:8443   1m
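For example, if you keep spec.type set to ClusterIP, you can expose the gateway outside the cluster with an OpenShift route similar to the following sketch. This is an illustrative sketch only: the route name, the choice of the https target port, and passthrough TLS termination are assumptions, and you must adapt them to how your gateway terminates TLS.
# Minimal sketch of a route that sends external traffic to the gateway Service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: <gateway_name>          # illustrative name; any valid route name works
  namespace: <gateway_namespace>
spec:
  to:
    kind: Service
    name: <gateway_name>        # the gateway Service created in this procedure
  port:
    targetPort: https           # assumption: forward traffic to the https port of the Service
  tls:
    termination: passthrough    # assumption: TLS is terminated by the gateway itself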
Optional: Create a YAML file named gateway-hpa.yml that defines a horizontal pod autoscaler for the gateway. The following example sets the minimum replicas to 2 and the maximum replicas to 5, and scales the replicas up when average CPU utilization exceeds 80% of the CPU resource request that is specified in the pod template of the gateway deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <gateway_name>
  namespace: <gateway_namespace>
spec:
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <gateway_name> 1
1. Set spec.scaleTargetRef.name to the name of the gateway deployment that you previously created.
Optional: Apply the YAML file by running the following command:
$ oc apply -f gateway-hpa.yml
Optional: Create a YAML file named gateway-pdb.yml that defines a pod disruption budget for the gateway. The following example allows gateway pods to be evicted only when at least one healthy gateway pod remains in the cluster after the eviction.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: <gateway_name>
  namespace: <gateway_namespace>
spec:
  minAvailable: 1
  selector:
    matchLabels:
      istio: <gateway_name> 1
1. Set spec.selector.matchLabels to the unique label or set of labels specified in the pod template of the gateway deployment that you previously created.
Optional: Apply the YAML file by running the following command:
$ oc apply -f gateway-pdb.yml