Chapter 2. Service Mesh 1.x


2.1. Service Mesh Release Notes

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

2.1.1. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

2.1.2. Introduction to Red Hat OpenShift Service Mesh

Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code.

Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services.

Service Mesh, which is based on the open source Istio project, provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication.

2.1.3. Getting support

If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:

  • Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
  • Submit a support case to Red Hat Support.
  • Access other product documentation.

To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.

If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh.

For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh.

2.1.3.1. About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

  • Resource definitions
  • Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
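
For example, running the command with no arguments collects the default data set:

$ oc adm must-gather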

Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

  • To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

    For example:

    $ oc adm must-gather  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.0
  • To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

    For example:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs
    Note

    Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

For example:

NAMESPACE                      NAME                 READY   STATUS      RESTARTS      AGE
...
openshift-must-gather-5drcj    must-gather-bklx4    2/2     Running     0             72s
openshift-must-gather-5drcj    must-gather-s8sdh    2/2     Running     0             72s
...

2.1.3.2. Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift Container Platform CLI (oc) installed.

2.1.3.3. About collecting service mesh data

You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift Container Platform CLI (oc) installed.

Procedure

  1. To collect Red Hat OpenShift Service Mesh data with must-gather, you must specify the Red Hat OpenShift Service Mesh image.

    $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.4
  2. To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather, you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system.

    $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.4 gather <namespace>

2.1.4. Red Hat OpenShift Service Mesh supported configurations

The following are the only supported configurations for Red Hat OpenShift Service Mesh:

  • OpenShift Container Platform version 4.6 or later.
Note

OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh.

  • The deployment must be contained within a single OpenShift Container Platform cluster that is not federated.
  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
  • This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.
  • This release only supports configurations that do not integrate external services such as virtual machines.

For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy.

2.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh

  • The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.

2.1.4.2. Supported Mixer adapters

  • This release only supports the following Mixer adapter:

    • 3scale Istio Adapter

2.1.5. New Features

Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services:

  • Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions.
  • Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness.
  • Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
  • Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.

2.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs).

2.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2
Component                Version
Istio                    1.4.10
Jaeger                   1.30.2
Kiali                    1.12.21.1
3scale Istio Adapter     1.0.0

2.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs).

2.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1
Component                Version
Istio                    1.4.10
Jaeger                   1.30.2
Kiali                    1.12.20.1
3scale Istio Adapter     1.0.0

2.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs).

2.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18
Component                Version
Istio                    1.4.10
Jaeger                   1.24.0
Kiali                    1.12.18
3scale Istio Adapter     1.0.0

2.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs).

2.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments

Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156, where an HTTP request with a fragment (a section at the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, consider an Istio authorization policy that denies requests sent to the URI path /user/profile. In the vulnerable versions, a request with the URI path /user/profile#section1 bypasses the deny policy and is routed to the backend (with the normalized URI path /user/profile%23section1), possibly leading to a security incident.

You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths, or ALLOW actions and operation.notPaths.

With the mitigation, the fragment part of the request’s URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part.
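
For reference, the following is a minimal sketch of the kind of path-based DENY policy described above; the policy name and namespace are illustrative:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-user-profile
  namespace: foo
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/user/profile"]

With the fragment-stripping mitigation in place, a request for /user/profile#section1 is evaluated against this policy as /user/profile and is denied.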

2.1.5.4.2. Required update for authorization policies

Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a configuration matching "httpbin.foo" and "httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields.

Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts.

You must update your authorization policy rules to use prefix match instead of exact match. For example, replace hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example.

First example AuthorizationPolicy using prefix match

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        hosts: ["httpbin.com","httpbin.com:*"]

Second example AuthorizationPolicy using prefix match

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: default
spec:
  action: DENY
  rules:
  - to:
    - operation:
        hosts: ["httpbin.example.com:*"]

2.1.5.5. New features Red Hat OpenShift Service Mesh 1.1.17

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.6. New features Red Hat OpenShift Service Mesh 1.1.16

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.7. New features Red Hat OpenShift Service Mesh 1.1.15

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.8. New features Red Hat OpenShift Service Mesh 1.1.14

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

Important

There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920.

2.1.5.8.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920

Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (%2F or %5C) could potentially bypass an Istio authorization policy when path-based authorization rules are used.

For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin. A request sent to the URL path //admin will NOT be rejected by the authorization policy.

According to RFC 3986, the path //admin with multiple slashes should technically be treated as a different path from /admin. However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy (//admin does not match /admin), and a user can then access the resource at path /admin in the backend; this would represent a security incident.

Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses.
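
For reference, a minimal sketch of an impacted policy pattern (a DENY action with the paths field; the name and namespace are illustrative):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-admin
  namespace: foo
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/admin"]

Without the appropriate path normalization setting, a request to //admin does not match this rule but may still reach /admin on a backend that merges slashes.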

Your cluster is NOT impacted by this vulnerability if:

  • You don’t have authorization policies.
  • Your authorization policies don’t define paths or notPaths fields.
  • Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases.
Note

The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration.

2.1.5.8.2. Updating the path normalization configuration

Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization, also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization.

Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests:

Table 2.1. Normalization schemes
NONE
  Description: No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service.
  Example: ../%2Fa../b is evaluated by the authorization policies and sent to your service.
  Notes: This setting is vulnerable to CVE-2021-31920.

BASE
  Description: This is currently the option used in the default installation of Istio. It applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes.
  Example: /a/../b is normalized to /b. \da is normalized to /da.
  Notes: This setting is vulnerable to CVE-2021-31920.

MERGE_SLASHES
  Description: Slashes are merged after the BASE normalization.
  Example: /a//b is normalized to /a/b.
  Notes: Update to this setting to mitigate CVE-2021-31920.

DECODE_AND_MERGE_SLASHES
  Description: The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policy routes. Percent-encoded slash and backslash characters (%2F, %2f, %5C, and %5c) are decoded to / or \ before the MERGE_SLASHES normalization.
  Example: /a%2fb is normalized to /a/b.
  Notes: Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production.

The normalization algorithms are conducted in the following order:

  1. Percent-decode %2F, %2f, %5C and %5c.
  2. The RFC 3986 and other normalization implemented by the normalize_path option in Envoy.
  3. Merge slashes.
Warning

While these normalization options represent recommendations from HTTP standards and common industry practices, an application may interpret a URL in any way it chooses. When using denial policies, ensure that you understand how your application behaves.

2.1.5.8.3. Path normalization configuration examples

Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be:

  1. Used to check against the authorization policies.
  2. Forwarded to the backend application.
Table 2.2. Configuration examples
  • If your application relies on the proxy to do normalization, choose BASE, MERGE_SLASHES, or DECODE_AND_MERGE_SLASHES.
  • If your application normalizes request paths based on RFC 3986 and does not merge slashes, choose BASE.
  • If your application normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes, choose MERGE_SLASHES.
  • If your application normalizes request paths based on RFC 3986, decodes percent-encoded slashes, and merges slashes, choose DECODE_AND_MERGE_SLASHES.
  • If your application processes request paths in a way that is incompatible with RFC 3986, choose NONE.

2.1.5.8.4. Configuring your SMCP for path normalization

To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane. Use the configuration examples to help determine the settings for your system.

SMCP v1 pathNormalization

spec:
  global:
    pathNormalization: <option>
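
For example, to set MERGE_SLASHES on an existing control plane from the command line, a patch such as the following could be used. This is a sketch that assumes the ServiceMeshControlPlane is named basic-install and is deployed in the istio-system project:

$ oc -n istio-system patch smcp/basic-install --type=merge -p '{"spec":{"global":{"pathNormalization":"MERGE_SLASHES"}}}'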

2.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

This release also added support for configuring cipher suites.

2.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

Note

There are manual steps that must be completed to address CVE-2020-8663.

2.1.5.18.1. Manual updates required by CVE-2020-8663

The fix for CVE-2020-8663: envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. You must configure this limit to mitigate this vulnerability.

Important

These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh.

This new configuration option is called overload.global_downstream_max_connections, and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway.

Procedure

  1. Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk:

      {
        "runtime": {
          "symlink_root": "/var/lib/istio/envoy/runtime"
        }
      }
  2. Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP):

    $ oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json
  3. Update the SMCP configuration to activate the override.

    Updated SMCP configuration example #1

    apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    spec:
      istio:
        gateways:
          istio-ingressgateway:
            env:
              ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
            secretVolumes:
            - mountPath: /var/lib/istio/envoy/custom-bootstrap
              name: custom-bootstrap
              secretName: gateway-bootstrap

  4. To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000:

    $ oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000
  5. Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration:

Updated SMCP configuration example #2

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  template: default
#Change the version to "v1.0" if you are on the 1.0 stream.
  version: v1.1
  istio:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json
        secretVolumes:
        - mountPath: /var/lib/istio/envoy/custom-bootstrap
          name: custom-bootstrap
          secretName: gateway-bootstrap
        # below is the new secret mount
        - mountPath: /var/lib/istio/envoy/runtime
          name: gateway-settings
          secretName: gateway-settings

2.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6

When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance and then recreate it, because of an issue with certificates. Recreating the Jaeger instance triggers the creation of a new set of certificates. If you are using persistent storage, the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new instance are the same as those of the deleted instance.

Procedure if Jaeger is installed as part of Red Hat OpenShift Service Mesh

  1. Determine the name of your Jaeger custom resource:

    $ oc get jaeger -n istio-system

    You should see something like the following:

    NAME     AGE
    jaeger   3d21h
  2. Copy the generated custom resource file into a temporary directory:

    $ oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml
  3. Delete the Jaeger instance:

    $ oc delete jaeger jaeger -n istio-system
  4. Recreate the Jaeger instance from your copy of the custom resource file:

    $ oc create -f /tmp/jaeger-cr.yaml -n istio-system
  5. Delete the copy of the generated custom resource file:

    $ rm /tmp/jaeger-cr.yaml

Procedure if Jaeger is not installed as part of Red Hat OpenShift Service Mesh

Before you begin, create a copy of your Jaeger custom resource file.

  1. Delete the Jaeger instance by deleting the custom resource file:

    $ oc delete -f <jaeger-cr-file>

    For example:

    $ oc delete -f jaeger-prod-elasticsearch.yaml
  2. Recreate your Jaeger instance from the backup copy of your custom resource file:

    $ oc create -f <jaeger-cr-file>
  3. Validate that your Pods have restarted:

    $ oc get pods -n jaeger-system -w

2.1.5.19. New features Red Hat OpenShift Service Mesh 1.1.3

This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

2.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2

This release of Red Hat OpenShift Service Mesh addresses a security vulnerability.

2.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1

This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation.

2.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0

This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1.

2.1.5.22.1. Manual updates from 1.0 to 1.1

If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version.

  1. In the web console, click the Red Hat OpenShift Service Mesh Operator.
  2. Click the Project menu and, from the list, choose the project where your ServiceMeshControlPlane is deployed, for example istio-system.
  3. Click the name of your control plane, for example basic-install.
  4. Click YAML and add a version field to the spec section of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1.
spec:
  version: v1.1
  ...

The version field specifies the version of Service Mesh to install and defaults to the latest available version.
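
The same change can also be applied from the command line; the following is a sketch that assumes the ServiceMeshControlPlane is named basic-install and is deployed in the istio-system project:

$ oc -n istio-system patch smcp/basic-install --type=merge -p '{"spec":{"version":"v1.1"}}'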

Note

Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October 2020. You must upgrade to either v1.1 or v2.0.

2.1.6. Deprecated features

Some features available in previous releases have been deprecated or removed.

Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

2.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5

The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12.

  • Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release.
  • MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release.
  • v1alpha1 RBAC API - The v1alpha1 RBAC policy is deprecated in favor of the v1beta1 AuthorizationPolicy. RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects.

    • ServiceRole
    • ServiceRoleBinding
  • RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior.

    • ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0)
    • ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later)
  • In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers.

The following components are also deprecated in this release and will be replaced by the Istiod component in a future release.

  • Mixer - access control and usage policies
  • Pilot - service discovery and proxy configuration
  • Citadel - certificate generation
  • Galley - configuration validation and distribution

2.1.7. Known issues

These limitations exist in Red Hat OpenShift Service Mesh:

  • Red Hat OpenShift Service Mesh does not support IPv6, as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform.
  • Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult, if not impossible, to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To change the layout, select a different Layout Schema from the Graph Settings menu.
  • The first time you access related services such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console.

2.1.7.1. Service Mesh known issues

These are the known issues in Red Hat OpenShift Service Mesh:

  • Jaeger/Kiali Operator upgrade blocked with operator pending: When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending.

    Workaround: See the linked Knowledge Base article for more information.

  • Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details.
  • MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected:

    • [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon.
    • [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon.
  • MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy.

    Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod. See the example commands after this list.

  • MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues.
  • MAISTRA-465 The Maistra Operator fails to create a service for operator metrics.
  • MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of label before the pods are created; therefore, the pods must be deleted and recreated for sidecar injection to occur.
  • MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning.
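
For the MAISTRA-806 workaround above, the following is a minimal sketch of the commands involved. It assumes the istio-operator pod runs in the openshift-operators project; adjust the namespace for your installation and substitute the actual name of the evicted pod:

$ oc get pods -n openshift-operators
$ oc delete pod <evicted-istio-operator-pod> -n openshift-operators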

2.1.7.2. Kiali known issues

Note

New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali.

These are the known issues in Kiali:

  • KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the “View in Grafana” link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time.
  • KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser.

2.1.7.3. Red Hat OpenShift distributed tracing known issues

These limitations exist in Red Hat OpenShift distributed tracing:

  • Apache Spark is not supported.
  • The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems.

These are the known issues for Red Hat OpenShift distributed tracing:

  • OBSDA-220 In some cases, if you try to pull an image using distributed tracing data collection, the image pull fails and a Failed to pull image error message appears. There is no workaround for this issue.
  • TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services:

    • Jaeger Operator channel: 1.17.x stable or 1.20.x stable
    • AMQ Streams Operator channel: amq-streams-1.6.x

      To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable.
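
      A sketch of switching the channel from the command line follows; the Subscription name amq-streams and the openshift-operators namespace are assumptions, so adjust them for your installation:

      $ oc -n openshift-operators patch subscription.operators.coreos.com amq-streams --type=merge -p '{"spec":{"channel":"stable"}}'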

2.1.8. Fixed issues

The following issues have been resolved in the current release:

2.1.8.1. Service Mesh fixed issues

  • MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the goroutine.
  • OSSM-542 Galley is not using the new certificate after rotation.
  • OSSM-99 Workloads generated from a direct pod without labels may crash Kiali.
  • OSSM-93 IstioConfigList can’t filter by two or more names.
  • OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes.
  • OSSM-90 Traces not available on the service details page.
  • MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces, the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars.
  • MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code.
  • MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used.
  • MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy.
  • MAISTRA-932 Added the requires metadata to add a dependency relationship between the Jaeger Operator and the OpenShift Elasticsearch Operator. This ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available.
  • MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations.
  • MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations.
  • MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview.
  • MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive.
  • MAISTRA-572 Jaeger cannot be used with Kiali. In this release Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591.
  • MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails.
  • MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround.
  • MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel.
  • Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls.

2.1.8.2. Kiali fixed issues

  • KIALI-3239 If a Kiali Operator pod has failed with a status of “Evicted”, it blocks the Kiali Operator from deploying. The workaround is to delete the evicted pod and redeploy the Kiali Operator.
  • KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting.
  • KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors.
  • KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts.
  • KIALI-2686 When the control plane has many namespaces, it can lead to performance issues.

2.1.8.3. Red Hat OpenShift distributed tracing fixed issues

  • OSSM-1910 Because of an issue introduced in version 2.6, TLS connections could not be established with OpenShift Container Platform Service Mesh. This update resolves the issue by changing the service port names to match conventions used by OpenShift Container Platform Service Mesh and Istio.
  • OBSDA-208 Before this update, the default 200m CPU and 256Mi memory resource limits could cause distributed tracing data collection to restart continuously on large clusters. This update resolves the issue by removing these resource limits.
  • OBSDA-222 Before this update, spans could be dropped in the OpenShift Container Platform distributed tracing platform. To help prevent this issue from occurring, this release updates version dependencies.
  • TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following:

    {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true}

    This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port.

  • TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0.
  • TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting, and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed.
  • TRACING-1725 Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances using the same name but within different namespaces. See also BZ-1918920.
  • TRACING-1631 Multiple Jaeger production instances using the same name but within different namespaces caused an Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters.
  • TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector.
  • TRACING-1208 Authentication "500 Internal Error" when accessing the Jaeger UI. When trying to authenticate to the UI using OAuth, a 500 error is returned because the oauth-proxy sidecar does not trust the custom CA bundle defined at installation time with the additionalTrustBundle.
  • TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in an error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076.
  • TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic, it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819.
  • BZ-1918920/LOG-1619 The Elasticsearch pods do not get restarted automatically after an update.

    Workaround: Restart the pods manually.

2.2. Understanding Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment.

2.2.1. Understanding service mesh

A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage.

Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features.
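
For example, with automatic sidecar injection you opt a workload into the mesh by annotating the pod template of its Deployment. The following is a minimal sketch; the Deployment name and image are illustrative, and the project must also be listed in the ServiceMeshMemberRoll:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest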

Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide:

  • Discovery
  • Load balancing
  • Service-to-service authentication
  • Failure recovery
  • Metrics
  • Monitoring

Red Hat OpenShift Service Mesh also provides more complex operational functions including:

  • A/B testing
  • Canary releases
  • Access control
  • End-to-end authentication

2.2.2. Red Hat OpenShift Service Mesh Architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane:

The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub.

  • Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry.

  • Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services.
  • Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers).
  • Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel.
  • Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform.

Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster.

2.2.3. Understanding Kiali

Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected.

2.2.3.1. Kiali overview

Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh.

Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console.

Kiali is installed by default as part of the Red Hat OpenShift Service Mesh.

2.2.3.2. Kiali architecture

Kiali is based on the open source Kiali project. Kiali is composed of two components: the Kiali application and the Kiali console.

  • Kiali application (back end) – This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets.
  • Kiali console (front end) – The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user.

In addition, Kiali depends on external services and components provided by the container application platform and Istio.

  • Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API.
  • Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali’s features will not work without Prometheus.
  • Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on.
  • Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio’s distributed tracing feature. Also note that the user must have access to the namespace where the Service Mesh control plane is installed to view tracing data.
  • Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that the user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data.

2.2.3.3. Kiali features

The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities:

  • Health – Quickly identify issues with applications, services, or workloads.
  • Topology – Visualize how your applications, services, or workloads communicate via the Kiali graph.
  • Metrics – Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js, Quarkus, Spring Boot, Thorntail, and Vert.x. You can also create your own custom dashboards.
  • Tracing – Integration with Jaeger lets you follow the path of a request through various microservices that make up an application.
  • Validations – Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on).
  • Configuration – Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console.

2.2.4. Understanding Jaeger

Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application.

Distributed tracing is a technique that is used to tie the information about different units of work together—usually executed in different processes or hosts—to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency.

Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace consists of one or more spans.

A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships.

2.2.4.1. Distributed tracing overview

As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.

With distributed tracing you can perform the following functions:

  • Monitor distributed transactions
  • Optimize performance and latency
  • Perform root cause analysis

Red Hat OpenShift distributed tracing consists of two main components:

  • Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project.
  • Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project.
Important

Jaeger does not use FIPS validated cryptographic modules.

2.2.4.2. Distributed tracing architecture

The distributed tracing platform is based on the open source Jaeger project. The distributed tracing platform is made up of several components that work together to collect, store, and display tracing data.

  • Jaeger Client (Tracer, Reporter, instrumented application, client libraries) - Jaeger clients are language specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing.
  • Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes.
  • Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage.
  • Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch.
  • Query (Query Service) - Query is a service that retrieves traces from storage.
  • Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch).
  • Jaeger Console – Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.

2.2.4.3. Red Hat OpenShift distributed tracing features

Red Hat OpenShift distributed tracing provides the following capabilities:

  • Integration with Kiali – When properly configured, you can view distributed tracing data from the Kiali console.
  • High scalability – The distributed tracing back end is designed to have no single points of failure and to scale with the business needs.
  • Distributed Context Propagation – Enables you to connect data from different components together to create a complete end-to-end trace.
  • Backwards compatibility with Zipkin – Red Hat OpenShift distributed tracing has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release.

2.2.5. Next steps

2.3. Service Mesh and Istio differences

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.

The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways:

2.3.1. Multitenant installations

Whereas upstream Istio takes a single-tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle.

Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances.

2.3.1.1. Multitenancy versus cluster-wide installations

The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by the control plane components. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding.

Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation.
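
For illustration, a member project namespace carries a label like the one in the following sketch. This is a minimal example, not a resource you create yourself; the bookinfo project name is hypothetical and istio-system stands in for your control plane project.

apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    # added automatically when the project joins the member roll
    maistra.io/member-of: istio-system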

Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details.

If the OpenShift Container Platform cluster is configured to use the SDN plugin:

  • NetworkPolicy: Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project.

    Note

    This also restricts ingress to only member projects. If you require ingress from non-member projects, you must create a NetworkPolicy to allow that traffic through, as shown in the sketch after this list.

  • Multitenant: Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project).
  • Subnet: No additional configuration is performed.
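
The following is a minimal sketch of a NetworkPolicy that allows ingress from a non-member project. It assumes a member project named bookinfo and a non-member namespace labeled name: my-external-project; both names are hypothetical, and the selector must match how your namespaces are actually labeled.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-my-external-project
  namespace: bookinfo
spec:
  # select all pods in the member project
  podSelector: {}
  ingress:
  - from:
    # allow traffic from any pod in the labeled non-member namespace
    - namespaceSelector:
        matchLabels:
          name: my-external-project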

2.3.1.2. Cluster scoped resources

Upstream Istio relies on two cluster-scoped resources: MeshPolicy and ClusterRbacConfig. These are not compatible with a multitenant cluster and have been replaced as described below.

  • ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies, as shown in the sketch after this list. This must be created in the same project as the control plane.
  • ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane.
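
The following is a minimal sketch of a ServiceMeshPolicy that enables mTLS for the whole control plane. It assumes the control plane is installed in istio-system and that the Operator serves the authentication.maistra.io/v1 API; verify the API version against your installed Operator before using it.

apiVersion: authentication.maistra.io/v1
kind: ServiceMeshPolicy
metadata:
  name: default
  # must be created in the control plane project
  namespace: istio-system
spec:
  peers:
  - mtls: {}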

2.3.2. Differences between Istio and Red Hat OpenShift Service Mesh

An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.

2.3.2.1. Command line tool

The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl.

2.3.2.2. Automatic injection

The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled.

Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection, you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section.
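
As an example, the following sketch shows a Deployment that opts in to automatic sidecar injection. Only the sidecar.istio.io/inject annotation on the pod template is the relevant detail; the sleep application name, image, and command are hypothetical placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations:
        # opt this pod in to automatic sidecar injection
        sidecar.istio.io/inject: "true"
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: registry.access.redhat.com/ubi8/ubi-minimal
        command: ["/bin/sh", "-c", "sleep 3650d"]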

2.3.2.3. Istio Role Based Access Control features

Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly.

The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix.

Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.

Upstream Istio community matching request headers example

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.headers[<header>]: "value"

Red Hat OpenShift Service Mesh matching request headers by using regular expressions

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[<header>]: "<regular expression>"

2.3.2.4. OpenSSL

Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system.

2.3.2.5. Component modifications

  • A maistra-version label has been added to all resources.
  • All Ingress resources have been converted to OpenShift Route resources.
  • Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes.
  • Godebug has been removed from all templates.
  • The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole.

2.3.2.6. Envoy, Secret Discovery Service, and certificates

  • Red Hat OpenShift Service Mesh does not support QUIC-based services.
  • Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts.

2.3.2.7. Istio Container Network Interface (CNI) plugin

Red Hat OpenShift Service Mesh includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges.

2.3.2.8. Routes for Istio Gateways

OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted.

A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation.

2.3.2.8.1. Catch-all domains

Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh creates the route, but relies on OpenShift to create a default hostname. This means that the newly created route will not be a catch-all ("*") route; instead, it will have a hostname in the form <route-name>[-<project>].<suffix>. See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize them.

2.3.2.8.2. Subdomains

Subdomains (for example, "*.domain.com") are supported. However, this capability is not enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it.

2.3.2.8.3. Transport layer security

Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS.

Additional resources

2.3.3. Kiali and service mesh

Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.

  • Kiali has been enabled by default.
  • Ingress has been enabled by default.
  • Updates have been made to the Kiali ConfigMap.
  • Updates have been made to the ClusterRole settings for Kiali.
  • Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges.

2.3.4. Distributed tracing and service mesh

Installing the distributed tracing platform with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.

  • Distributed tracing has been enabled by default for Service Mesh.
  • Ingress has been enabled by default for Service Mesh.
  • The name of the Zipkin port has changed to jaeger-collector-zipkin (from http).
  • Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option.
  • The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform Operator and is already protected by OAuth.
  • Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod’s ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector.

2.4. Preparing to install Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

Before you can install Red Hat OpenShift Service Mesh, review the installation activities and ensure that you meet the prerequisites:

2.4.1. Prerequisites

2.4.2. Red Hat OpenShift Service Mesh supported configurations

The following are the only supported configurations for Red Hat OpenShift Service Mesh:

  • OpenShift Container Platform version 4.6 or later.
Note

OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh.

  • The deployment must be contained within a single OpenShift Container Platform cluster that is not federated.
  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
  • This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.
  • This release only supports configurations that do not integrate external services such as virtual machines.

For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy.

2.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh

  • The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.

2.4.2.2. Supported Mixer adapters

  • This release only supports the following Mixer adapter:

    • 3scale Istio Adapter

2.4.3. Operator overview

Red Hat OpenShift Service Mesh requires the following four Operators:

  • OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project.
  • Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project.
  • Kiali Operator provided by Red Hat - Provides observability for your service mesh. You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project.
  • Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project.
Warning

See Configuring the log store for details on configuring the default Jaeger parameters for Elasticsearch in a production environment.

2.4.4. Next steps

2.5. Installing Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh.

Note

Mixer’s policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement.

Note

Multi-tenant control plane installations are the default configuration.

Note

The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project.

2.5.1. Prerequisites

The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources related to the deployment, update, and deletion of the control plane.

Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane.

2.5.2. Installing the OpenShift Elasticsearch Operator

The default Red Hat OpenShift distributed tracing platform deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing, giving demonstrations, or using Red Hat OpenShift distributed tracing platform in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform in production, you must install and configure a persistent storage option, in this case, Elasticsearch.

Prerequisites

  • You have access to the OpenShift Container Platform web console.
  • You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Warning

Do not install Community versions of the Operators. Community Operators are not supported.

Note

If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
  2. Navigate to Operators → OperatorHub.
  3. Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator.
  4. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator.
  5. Click Install.
  6. On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released.
  7. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster.

    Note

    The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing Operators are installed in the openshift-operators namespace.

    • Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

      Note

      The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.

  8. Click Install.
  9. On the Installed Operators page, select the openshift-operators-redhat project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing.

2.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator

To install Red Hat OpenShift distributed tracing platform, you use the OperatorHub to install the Red Hat OpenShift distributed tracing platform Operator.

By default, the Operator is installed in the openshift-operators project.

Prerequisites

  • You have access to the OpenShift Container Platform web console.
  • You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
  • If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator.
Warning

Do not install Community versions of the Operators. Community Operators are not supported.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
  2. Navigate to Operators → OperatorHub.
  3. Type distributed tracing platform into the filter to locate the Red Hat OpenShift distributed tracing platform Operator.
  4. Click the Red Hat OpenShift distributed tracing platform Operator provided by Red Hat to display information about the Operator.
  5. Click Install.
  6. On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released.
  7. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.

    • Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

      Note

      The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.

  8. Click Install.
  9. Navigate to Operators → Installed Operators.
  10. On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing platform Operator shows a status of "Succeeded" before continuing.

2.5.4. Installing the Kiali Operator

You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane.

Warning

Do not install Community versions of the Operators. Community Operators are not supported.

Prerequisites

  • Access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators → OperatorHub.
  3. Type Kiali into the filter box to find the Kiali Operator.
  4. Click the Kiali Operator provided by Red Hat to display information about the Operator.
  5. Click Install.
  6. On the Operator Installation page, select the stable Update Channel.
  7. Select All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.
  8. Select the Automatic Approval Strategy.

    Note

    The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.

  9. Click Install.
  10. The Installed Operators page displays the Kiali Operator’s installation progress.

2.5.5. Installing the Operators

To install Red Hat OpenShift Service Mesh, install the following Operators in this order. Repeat the procedure for each Operator.

  • OpenShift Elasticsearch
  • Red Hat OpenShift distributed tracing platform
  • Kiali Operator provided by Red Hat
  • Red Hat OpenShift Service Mesh
Note

If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform Operator will create the Elasticsearch instance using the installed OpenShift Elasticsearch Operator.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
  2. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  3. Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported.
  4. Click Install.
  5. On the Install Operator page for each Operator, accept the default settings.
  6. Click Install. Wait until the Operator has installed before repeating the steps for the next Operator in the list.

    • The OpenShift Elasticsearch Operator is installed in the openshift-operators-redhat namespace and is available for all namespaces in the cluster.
    • The Red Hat OpenShift distributed tracing platform is installed in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster.
    • The Kiali Operator provided by Red Hat and the Red Hat OpenShift Service Mesh Operator are installed in the openshift-operators namespace and are available for all namespaces in the cluster.
  7. After you have installed all four Operators, click Operators → Installed Operators to verify that your Operators are installed.

2.5.6. Deploying the Red Hat OpenShift Service Mesh control plane

The ServiceMeshControlPlane resource defines the configuration to be used during installation. You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs.

You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool.
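
For orientation, the following is a minimal sketch of what a ServiceMeshControlPlane resource can look like. It is not the complete default template; it only illustrates the maistra.io/v1 layout used in the other examples in this document. The basic-install name matches the control plane name shown in the status output later in this section.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    global:
      mtls:
        # permissive mode; see "Customizing security in a Service Mesh"
        enabled: false
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false
    tracing:
      enabled: true
    kiali:
      enabled: true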

2.5.6.1. Deploying the control plane from the web console

Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project.

Prerequisites

  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation.
  • An account with the cluster-admin role.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  2. Create a project named istio-system.

    1. Navigate to Home → Projects.
    2. Click Create Project.
    3. Enter istio-system in the Name field.
    4. Click Create.
  3. Navigate to Operators → Installed Operators.
  4. If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project.
  5. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs, the Operator provides links to create two resource types:

    • A ServiceMeshControlPlane resource
    • A ServiceMeshMemberRoll resource
  6. Under Istio Service Mesh Control Plane click Create ServiceMeshControlPlane.
  7. On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed.

    Note

    For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template.

  8. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters.
  9. Click the Istio Service Mesh Control Plane tab.
  10. Click the name of the new control plane.
  11. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured.

2.5.6.2. Deploying the control plane from the CLI

Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line.

Prerequisites

  • The Red Hat OpenShift Service Mesh Operator must be installed.
  • Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation.
  • An account with the cluster-admin role.
  • Access to the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.

    $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
  2. Create a project named istio-system.

    $ oc new-project istio-system
  3. Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments you must change the default Jaeger template.
  4. Run the following command to deploy the control plane:

    $ oc create -n istio-system -f istio-installation.yaml
  5. Execute the following command to see the status of the control plane installation.

    $ oc get smcp -n istio-system

    The installation has finished successfully when the STATUS column is ComponentsReady.

    NAME            READY   STATUS            PROFILES      VERSION   AGE
    basic-install   11/11   ComponentsReady   ["default"]   v1.1.18   4m25s
  6. Run the following command to watch the progress of the Pods during the installation process:

    $ oc get pods -n istio-system -w

    You should see output similar to the following:

    Example output

    NAME                                     READY   STATUS             RESTARTS   AGE
    grafana-7bf5764d9d-2b2f6                 2/2     Running            0          28h
    istio-citadel-576b9c5bbd-z84z4           1/1     Running            0          28h
    istio-egressgateway-5476bc4656-r4zdv     1/1     Running            0          28h
    istio-galley-7d57b47bb7-lqdxv            1/1     Running            0          28h
    istio-ingressgateway-dbb8f7f46-ct6n5     1/1     Running            0          28h
    istio-pilot-546bf69578-ccg5x             2/2     Running            0          28h
    istio-policy-77fd498655-7pvjw            2/2     Running            0          28h
    istio-sidecar-injector-df45bd899-ctxdt   1/1     Running            0          28h
    istio-telemetry-66f697d6d5-cj28l         2/2     Running            0          28h
    jaeger-896945cbc-7lqrr                   2/2     Running            0          11h
    kiali-78d9c5b87c-snjzh                   1/1     Running            0          22h
    prometheus-6dff867c97-gr2n5              2/2     Running            0          28h

For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates.

2.5.7. Creating the Red Hat OpenShift Service Mesh member roll

The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment.

You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane, for example istio-system.

2.5.7.1. Creating the member roll from the web console

You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project.

Prerequisites

  • An installed, verified Red Hat OpenShift Service Mesh Operator.
  • List of existing projects to add to the service mesh.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.

    1. Navigate to Home → Projects.
    2. Click Create Project.
    3. Enter a name in the Name field.
    4. Click Create.
  3. Navigate to Operators → Installed Operators.
  4. Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system.
  5. Click the Red Hat OpenShift Service Mesh Operator.
  6. Click the Istio Service Mesh Member Roll tab.
  7. Click Create ServiceMeshMemberRoll.
  8. Click Members, then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.
  9. Click Create.

2.5.7.2. Creating the member roll from the CLI

You can add a project to the ServiceMeshMemberRoll from the command line.

Prerequisites

  • An installed, verified Red Hat OpenShift Service Mesh Operator.
  • List of projects to add to the service mesh.
  • Access to the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift Container Platform CLI.

    $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
  2. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.

    $ oc new-project <your-project>
  3. To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project.

    Example servicemeshmemberroll-default.yaml

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
        # a list of projects joined into the service mesh
        - your-project-name
        - another-project-name

  4. Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace.

    $ oc create -n istio-system -f servicemeshmemberroll-default.yaml
  5. Run the following command to verify the ServiceMeshMemberRoll was created successfully.

    $ oc get smmr -n istio-system default

    The installation has finished successfully when the STATUS column is Configured.

2.5.8. Adding or removing projects from the service mesh

You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console.

  • You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.
  • The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted.

2.5.8.1. Adding or removing projects from the member roll using the web console

Prerequisites

  • An installed, verified Red Hat OpenShift Service Mesh Operator.
  • An existing ServiceMeshMemberRoll resource.
  • Name of the project with the ServiceMeshMemberRoll resource.
  • Names of the projects you want to add or remove from the mesh.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Navigate to Operators → Installed Operators.
  3. Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system.
  4. Click the Red Hat OpenShift Service Mesh Operator.
  5. Click the Istio Service Mesh Member Roll tab.
  6. Click the default link.
  7. Click the YAML tab.
  8. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.
  9. Click Save.
  10. Click Reload.

2.5.8.2. Adding or removing projects from the member roll using the CLI

You can modify an existing Service Mesh member roll using the command line.

Prerequisites

  • An installed, verified Red Hat OpenShift Service Mesh Operator.
  • An existing ServiceMeshMemberRoll resource.
  • Name of the project with the ServiceMeshMemberRoll resource.
  • Names of the projects you want to add or remove from the mesh.
  • Access to the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift Container Platform CLI.
  2. Edit the ServiceMeshMemberRoll resource.

    $ oc edit smmr -n <controlplane-namespace>
  3. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.

    Example servicemeshmemberroll-default.yaml

    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system #control plane project
    spec:
      members:
        # a list of projects joined into the service mesh
        - your-project-name
        - another-project-name

2.5.9. Manual updates

If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.

  • For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation.

2.5.9.1. Updating sidecar proxies

In order to update the configuration for sidecar proxies, the application administrator must restart the application pods.

If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods:

$ oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}'

If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods.

2.5.10. Next steps

2.6. Customizing security in a Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices.

2.6.1. Enabling mutual Transport Layer Security (mTLS)

Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS).

mTLS can be used without changes to the application or service code. The TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies.

By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh.
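
As a hedged illustration, the following policy sketch explicitly keeps a namespace in permissive mode while you migrate workloads. It mirrors the strict policy shown later in this section, with the mode set to PERMISSIVE. Replace <NAMESPACE> with your project.

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: default
  namespace: <NAMESPACE>
spec:
  peers:
  - mtls:
      # accept both plain-text and mTLS traffic
      mode: PERMISSIVE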

2.6.1.1. Enabling strict mTLS across the mesh

If your workloads do not communicate with services outside your mesh, and communication will not be interrupted by accepting only encrypted connections, you can quickly enable mTLS across your mesh. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The Operator creates the required resources.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      mtls:
        enabled: true
2.6.1.1.1. Configuring sidecars for incoming connections for specific services

You can also configure mTLS for individual services or namespaces by creating a policy.

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: default
  namespace: <NAMESPACE>
spec:
  peers:
    - mtls: {}

2.6.1.2. Configuring sidecars for outgoing connections

Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh.

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: <CONTROL_PLANE_NAMESPACE>
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

2.6.1.3. Setting the minimum and maximum protocol versions

If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.istio.global.tls.minProtocolVersion or spec.istio.global.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      tls:
        minProtocolVersion: TLSv1_2
        maxProtocolVersion: TLSv1_3

The default is TLS_AUTO and does not specify a version of TLS.

Table 2.3. Valid values

Value      Description
TLS_AUTO   default
TLSv1_0    TLS version 1.0
TLSv1_1    TLS version 1.1
TLSv1_2    TLS version 1.2
TLSv1_3    TLS version 1.3

2.6.2. Configuring cipher suites and ECDH curves

Cipher suites and Elliptic-curve Diffie–Hellman (ECDH) curves can help you secure your service mesh. You can define a comma-separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes is empty, then the default values are used.

The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3.

List the values of each comma-separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 at a higher priority than CurveP384.
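
The following sketch shows where these settings can sit in a maistra.io/v1 ServiceMeshControlPlane resource. The specific suites and curves are examples only, chosen to satisfy the HTTP/2 requirement in the note below, and should be adjusted to your own security requirements.

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    global:
      tls:
        # comma-separated, in order of priority
        cipherSuites: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        ecdhCurves: CurveP256, CurveP384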

Note

You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites.

The supported cipher suites are:

  • TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
  • TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
  • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
  • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
  • TLS_RSA_WITH_AES_128_GCM_SHA256
  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_128_CBC_SHA256
  • TLS_RSA_WITH_AES_128_CBC_SHA
  • TLS_RSA_WITH_AES_256_CBC_SHA
  • TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

The supported ECDH Curves are:

  • CurveP256
  • CurveP384
  • CurveP521
  • X25519

2.6.3. Adding an external certificate authority key and certificate

By default, Red Hat OpenShift Service Mesh generates a self-signed root certificate and key and uses them to sign the workload certificates. You can also use a user-defined certificate and key to sign workload certificates with a user-defined root certificate. This task demonstrates an example of plugging certificates and a key into Service Mesh.

Prerequisites

  • You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates.
  • This example uses the certificates from the Maistra repository. For production, use your own certificates from your certificate authority.
  • You must deploy the Bookinfo sample application to verify the results with these instructions.

2.6.3.1. Adding an existing certificate and key

To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem, the key is ca-key.pem, and the root certificate, which signs ca-cert.pem, is called root-cert.pem. If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file.

Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates.

  1. Create a secret named cacerts that includes the input files ca-cert.pem, ca-key.pem, root-cert.pem and cert-chain.pem.

    $ oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \
        --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \
        --from-file=<path>/cert-chain.pem
  2. In the ServiceMeshControlPlane resource, set global.mtls.enabled to true and security.selfSigned to false. Service Mesh reads the certificates and key from the secret-mount files.

    apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    spec:
      istio:
        global:
          mtls:
            enabled: true
        security:
          selfSigned: false
  3. To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.*. In this example, istio.default. Service Mesh issues new certificates for the workloads.

    $ oc delete secret istio.default

2.6.3.2. Verifying your certificates

Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod.

  1. Store the pod name in the variable RATINGSPOD.

    $ RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`
  2. Run the following commands to retrieve the certificates mounted on the proxy.

    $ oc exec -it $RATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem

    The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod.

    $ oc exec -it $RATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem

    The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod.

  3. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates.

    $ openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt
    $ openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt
    $ diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt

    Expect the output to be empty.

  4. Verify the CA certificate is the same as the one specified by Operator. Replace <path> with the path to your certificates.

    $ sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem
    $ openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
    $ openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
    $ diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt

    Expect the output to be empty.

  5. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates.

    $ head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem
    $ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem

    Example output

    /tmp/pod-cert-chain-workload.pem: OK

2.6.3.3. Removing the certificates

To remove the certificates you added, follow these steps.

  1. Remove the secret cacerts.

    $ oc delete secret cacerts -n istio-system
  2. Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource.

    apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    spec:
      istio:
        global:
          mtls:
            enabled: true
        security:
          selfSigned: true

2.7. Traffic management

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services.

2.7.1. Using gateways

You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads.

Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing.

The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh.

Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy.

Gateway example

A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on.

The following example shows a sample gateway configuration for external HTTPS ingress traffic:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      serverCertificate: /tmp/tls.crt
      privateKey: /tmp/tls.key

This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn’t specify any routing for the traffic.

To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service’s gateways field, as shown in the following example:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
    - ext-host-gwy

You can then configure the virtual service with routing rules for the external traffic.
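
For example, the virtual service might route the external traffic to a backing service with an http rule similar to the following sketch; the reviews destination host, the /reviews prefix, and port 9080 are hypothetical values for illustration.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
    - ext-host-gwy
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        # route matched requests to the reviews service inside the mesh
        host: reviews
        port:
          number: 9080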

2.7.2. Configuring an ingress gateway

An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests.

The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login.

Procedure

  1. Create a gateway to accept traffic.

    1. Create a YAML file, and copy the following YAML into it.

      Gateway example gateway.yaml

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: bookinfo-gateway
      spec:
        selector:
          istio: ingressgateway
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - "*"

    2. Apply the YAML file.

      $ oc apply -f gateway.yaml
  2. Create a VirtualService object to rewrite the host header.

    1. Create a YAML file, and copy the following YAML into it.

      Virtual service example

      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: bookinfo
      spec:
        hosts:
        - "*"
        gateways:
        - bookinfo-gateway
        http:
        - match:
          - uri:
              exact: /productpage
          - uri:
              prefix: /static
          - uri:
              exact: /login
          - uri:
              exact: /logout
          - uri:
              prefix: /api/v1/products
          route:
          - destination:
              host: productpage
              port:
                number: 9080

    2. Apply the YAML file.

      $ oc apply -f vs.yaml
  3. Test that the gateway and VirtualService have been set correctly.

    1. Set the Gateway URL.

      $ export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')
    2. Set the port number. In this example, istio-system is the name of the Service Mesh control plane project.

      $ export TARGET_PORT=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')
    3. Test a page that has been explicitly exposed.

      $ curl -s -I "$GATEWAY_URL/productpage"

      The expected result is 200.

2.7.3. Managing ingress traffic

In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh.

2.7.3.1. Determining the ingress IP and ports

Ingress configuration differs depending on whether your environment supports an external load balancer. If an external load balancer is present, it provides the ingress IP and ports for the cluster. To determine whether your cluster’s IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project.

$ oc get svc istio-ingressgateway -n istio-system

That command returns the NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, PORT(S), and AGE of each item in your namespace.

If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway.

If the EXTERNAL-IP value is <none>, or perpetually <pending>, your environment does not provide an external load balancer for the ingress gateway. You can access the gateway using the service’s node port.

2.7.3.1.1. Determining ingress ports with a load balancer

Follow these instructions if your environment has an external load balancer.

Procedure

  1. Run the following command to set the ingress IP and ports. This command sets a variable in your terminal.

    $ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  2. Run the following command to set the ingress port.

    $ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
  3. Run the following command to set the secure ingress port.

    $ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
  4. Run the following command to set the TCP ingress port.

    $ export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
Note

In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway’s EXTERNAL-IP value is not an IP address. Instead, it’s a hostname, and the previous command fails to set the INGRESS_HOST environment variable.

In that case, use the following command to correct the INGRESS_HOST value:

$ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
2.7.3.1.2. Determining ingress ports without a load balancer

If your environment does not have an external load balancer, determine the ingress ports and use a node port instead.

Procedure

  1. Set the ingress ports.

    $ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
  2. Run the following command to set the secure ingress port.

    $ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
  3. Run the following command to set the TCP ingress port.

    $ export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
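
When you use node ports, you also need the IP address of a cluster node to reach the gateway. The following commands are a minimal sketch that assumes the node addresses are reachable from the machine where you run them; the InternalIP address type and the /productpage path are illustrative assumptions.

$ export INGRESS_HOST=$(oc get node -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

$ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/productpage"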

2.7.4. Automatic route creation

OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted.

2.7.4.1. Enabling Automatic Route Creation

A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment.

If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS.

  1. In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true. For example, see the following resource snippet:
spec:
  istio:
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        autoscaleMin: 1
        autoscaleMax: 5
      istio-ingressgateway:
        autoscaleEnabled: false
        autoscaleMin: 1
        autoscaleMax: 5
        ior_enabled: true

2.7.4.2. Subdomains

Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com, are supported, but not enabled by default. Configure an OpenShift Container Platform wildcard policy before you configure a wildcard host Gateway. For more information, see the "Links" section.
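
For example, one possible way to allow wildcard routes on the default Ingress Controller is to patch its routeAdmission policy. The following command is a sketch; the "Using wildcard routes" documentation remains the authoritative procedure:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"routeAdmission":{"wildcardPolicy":"WildcardsAllowed"}}}'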

If the following gateway is created:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - www.bookinfo.com
    - bookinfo.example.com

Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command.

$ oc -n <control_plane_namespace> get routes

Expected output

NAME           HOST/PORT             PATH  SERVICES               PORT  TERMINATION   WILDCARD
gateway1-lvlfn bookinfo.example.com        istio-ingressgateway   <all>               None
gateway1-scqhv www.bookinfo.com            istio-ingressgateway   <all>               None

If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh.

2.7.5. Understanding service entries

A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following:

  • Manage traffic for services that run outside of the service mesh.
  • Redirect and forward traffic for external destinations, such as APIs consumed from the web, or traffic to services in legacy infrastructure.
  • Define retry, timeout, and fault injection policies for external destinations.
  • Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh.
Note

Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh deployment on Kubernetes.

Service entry examples

The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS

Specify the external resource using the hosts field. You can use a fully qualified domain name or a wildcard-prefixed domain name.
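
For example, the following service entry is a minimal sketch that uses a wildcard-prefixed host. The *.example.com domain is illustrative, and resolution is set to NONE, which is typically required for wildcard hosts:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wildcard-svc-entry
spec:
  hosts:
  - "*.example.com"
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: NONE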

You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ext-res-dr
spec:
  host: ext-svc.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem

2.7.6. Using VirtualServices

You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can:

  • Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience.
  • Configure traffic rules in combination with gateways to control ingress and egress traffic.

2.7.6.1. Configuring VirtualServices

Virtual services route requests to services within a service mesh. Each virtual service consists of a set of routing rules that are evaluated in order. For each request that matches a virtual service, Red Hat OpenShift Service Mesh routes the request to a specific real destination within the mesh.

Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services.

Procedure

  1. Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application.

    Example VirtualService.yaml

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - match:
        - headers:
            end-user:
              exact: jason
        route:
        - destination:
            host: reviews
            subset: v2
      - route:
        - destination:
            host: reviews
            subset: v3

  2. Run the following command to apply VirtualService.yaml, where VirtualService.yaml is the path to the file.

    $ oc apply -f <VirtualService.yaml>

2.7.6.2. VirtualService configuration reference

Parameter | Description
spec:
  hosts:

The hosts field lists the virtual service’s destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name.

spec:
  http:
  - match:

The http section contains the virtual service’s routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason. Add the headers, end-user, and exact fields to select the appropriate requests.

spec:
  http:
  - route:
    - destination:

The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service’s host, the destination’s host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name.

2.7.7. Understanding destination rules

Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination.

By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset.

  • Random: Requests are forwarded at random to instances in the pool.
  • Weighted: Requests are forwarded to instances in the pool according to a specific percentage.
  • Least requests: Requests are forwarded to instances with the least number of requests.

Destination rule example

The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
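
The Weighted model is typically expressed in a virtual service’s routing rules rather than in the destination rule itself. The following sketch assumes the my-svc host and the v1 and v2 subsets defined in the previous example, and splits traffic 80/20 between the two subsets:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-svc-weighted
spec:
  hosts:
  - my-svc
  http:
  - route:
    - destination:
        host: my-svc
        subset: v1
      weight: 80
    - destination:
        host: my-svc
        subset: v2
      weight: 20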

This guide uses the Bookinfo sample application to provide routing examples. Install the Bookinfo application to learn how these routing examples work.

2.7.8. Bookinfo routing tutorial

The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently.

When you access the Bookinfo application /productpage in a browser and refresh several times, the book review output sometimes contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other.

This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header.

Prerequisites:

  • Deploy the Bookinfo sample application to work with the following examples.

2.7.8.1. Applying a virtual service

In the following procedure, you apply virtual services that set the default version for each microservice, so that all traffic is routed to v1 of the microservices.

Procedure

  1. Apply the virtual services.

    $ oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/networking/virtual-service-all-v1.yaml
  2. To verify that you applied the virtual services, display the defined routes with the following command:

    $ oc get virtualservices -o yaml

    That command returns a resource of kind: VirtualService in YAML format.

You have configured Service Mesh to route traffic to the v1 version of the Bookinfo microservices, including version 1 of the reviews service.

2.7.8.2. Testing the new route configuration

Test the new configuration by refreshing the /productpage of the Bookinfo application.

Procedure

  1. Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project.

    export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')
  2. Run the following command to retrieve the URL for the product page.

    echo "http://$GATEWAY_URL/productpage"
  3. Open the Bookinfo site in your browser.

The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service.

Your service mesh now routes traffic to one version of a service.

2.7.8.3. Route based on user identity

Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2.

Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service.

Procedure

  1. Run the following command to enable user-based routing in the Bookinfo sample application.

    $ oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
  2. Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format.

    $ oc get virtualservice reviews -o yaml
  3. On the /productpage of the Bookinfo app, log in as user jason with no password.
  4. Refresh the browser. The star ratings appear next to each review.
  5. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except jason.

You have successfully configured the Bookinfo sample application to route traffic based on user identity.

2.7.9. Additional resources

For more information about configuring an OpenShift Container Platform wildcard policy, see Using wildcard routes.

2.8. Deploying applications on Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation.

2.8.1. Prerequisites

2.8.2. Creating control plane templates

You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team specific customization.

When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane, users inherit settings in a hierarchical fashion. The Operator is delivered with a default template that contains default settings for Red Hat OpenShift Service Mesh. To add custom templates, you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates.
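
For example, a template file placed in that directory might look like the following. The file name production and the parameter values are illustrative assumptions; the contents follow the same syntax as a ServiceMeshControlPlane resource.

Example production template

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: production
spec:
  istio:
    pilot:
      traceSampling: 1
    tracing:
      enabled: true
      jaeger:
        template: production-elasticsearch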

2.8.2.1. Creating the ConfigMap

Follow this procedure to create the ConfigMap.

Prerequisites

  • An installed, verified Service Mesh Operator.
  • An account with the cluster-admin role.
  • Location of the Operator deployment.
  • Access to the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift Container Platform CLI as a cluster administrator.
  2. From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk:

    $ oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators
  3. Locate the Operator ClusterServiceVersion name.

    $ oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'

    Example output

    maistra.v1.0.0            Red Hat OpenShift Service Mesh   1.0.0                Succeeded

  4. Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap.

    $ oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0
  5. Add a volume mount and volume to the Operator deployment.

    deployments:
      - name: istio-operator
        spec:
          template:
            spec:
              containers:
                volumeMounts:
                  - name: discovery-cache
                    mountPath: /home/istio-operator/.kube/cache/discovery
                  - name: smcp-templates
                    mountPath: /usr/local/share/istio-operator/templates/
              volumes:
                - name: discovery-cache
                  emptyDir:
                    medium: Memory
                - name: smcp-templates
                  configMap:
                    name: smcp-templates
    ...
  6. Save your changes and exit the editor.
  7. You can now use the template parameter in the ServiceMeshControlPlane to specify a template.

    apiVersion: maistra.io/v1
    kind: ServiceMeshControlPlane
    metadata:
      name: minimal-install
    spec:
      template: default

2.8.3. Enabling automatic sidecar injection

When deploying an application, you must opt in to injection by setting the sidecar.istio.io/inject annotation to true in spec.template.metadata.annotations of the Deployment object. Opting in ensures that sidecar injection does not interfere with other OpenShift Container Platform features, such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem.

Prerequisites

  • Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection.

Procedure

  1. To find your deployments use the oc get command.

    $ oc get deployment -n <namespace>

    For example, to view the deployment file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format.

    $ oc get deployment -n bookinfo ratings-v1 -o yaml
  2. Open the application’s deployment configuration YAML file in an editor.
  3. Add the sidecar.istio.io/inject annotation to spec.template.metadata.annotations in your Deployment YAML and set its value to true, as shown in the following example.

    Example snippet from bookinfo deployment-ratings-v1.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ratings-v1
      namespace: bookinfo
      labels:
        app: ratings
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: 'true'

  4. Save the Deployment configuration file.
  5. Add the file back to the project that contains your app.

    $ oc apply -n <namespace> -f deployment.yaml

    In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited.

    $ oc apply -n bookinfo -f deployment-ratings-v1.yaml
  6. To verify that the resource uploaded successfully, run the following command.

    $ oc get deployment -n <namespace> <deploymentName> -o yaml

    For example,

    $ oc get deployment -n bookinfo ratings-v1 -o yaml

2.8.4. Setting proxy environment variables through annotations

Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane.

You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected to the sidecar.

Example injection-template.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource
spec:
  replicas: 7
  selector:
    matchLabels:
      app: resource
  template:
    metadata:
      annotations:
        sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }"
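
To check that the variables were injected after the pods are redeployed, you can inspect the environment of the sidecar container. The following command is a sketch that assumes the Deployment is named resource, as in the example above, and that the sidecar container is named istio-proxy:

$ oc exec deployment/resource -c istio-proxy -- env | grep maistra_test_env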

Warning

You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/. Resources that include these labels or annotations will be overwritten or deleted by the Operator during the next reconciliation.

2.8.5. Updating Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer’s policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.

Prerequisites

  • Access to the OpenShift CLI (oc).
Note

The examples use <istio-system> as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP).

Procedure

  1. Log in to the OpenShift Container Platform CLI.
  2. Run this command to check the current Mixer policy enforcement status:

    $ oc get cm -n <istio-system> istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks
  3. If disablePolicyChecks: true, edit the Service Mesh ConfigMap:

    $ oc edit cm -n <istio-system> istio
  4. Locate disablePolicyChecks: true within the ConfigMap and change the value to false.
  5. Save the configuration and exit the editor.
  6. Re-check the Mixer policy enforcement status to ensure it is set to false.

2.8.5.1. Setting the correct network policy

Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure that services in your service mesh that were previously exposed through an OpenShift Container Platform route continue to work.

  • Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly.
  • Deploy services external to the service mesh in separate namespaces that are not in any service mesh.
  • Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments with maistra.io/expose-route: "true", which ensures that OpenShift Container Platform routes to these services still work. A sketch of such a labeled Deployment follows this list.
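
The following Deployment fragment is a minimal sketch of that labeling. The my-app name is illustrative, and the label is shown on the pod template so that the pods created by the Deployment carry it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        maistra.io/expose-route: "true"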

2.8.6. Bookinfo example application

The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.4.2 installation on OpenShift Container Platform.

The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews.

The Bookinfo application consists of these microservices:

  • The productpage microservice calls the details and reviews microservices to populate the page.
  • The details microservice contains book information.
  • The reviews microservice contains book reviews. It also calls the ratings microservice.
  • The ratings microservice contains book ranking information that accompanies a book review.

There are three versions of the reviews microservice:

  • Version v1 does not call the ratings Service.
  • Version v2 calls the ratings Service and displays each rating as one to five black stars.
  • Version v3 calls the ratings Service and displays each rating as one to five red stars.

2.8.6.1. Installing the Bookinfo application

This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh.

Prerequisites:

  • OpenShift Container Platform 4.1 or higher installed.
  • Red Hat OpenShift Service Mesh 2.4.2 installed.
  • Access to the OpenShift CLI (oc).
  • An account with the cluster-admin role.
Note

The Bookinfo sample application cannot be installed on IBM Z and IBM Power Systems.

Note

The commands in this section assume the Service Mesh control plane project is istio-system. If you installed the control plane in another namespace, edit each command before you run it.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
  2. Click Home → Projects.
  3. Click Create Project.
  4. Enter bookinfo as the Project Name, enter a Display Name, and enter a Description, then click Create.

    • Alternatively, you can run this command from the CLI to create the bookinfo project.

      $ oc new-project bookinfo
  5. Click Operators → Installed Operators.
  6. Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system.
  7. Click the Red Hat OpenShift Service Mesh Operator.
  8. Click the Istio Service Mesh Member Roll tab.

    1. If you have already created an Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor.
    2. If you have not created a ServiceMeshMemberRoll, click Create ServiceMeshMemberRoll.
  9. Click Members, then enter the name of your project in the Value field.
  10. Click Create to save the updated Service Mesh Member Roll.

    1. Or, save the following example to a YAML file.

      Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml

      apiVersion: maistra.io/v1
      kind: ServiceMeshMemberRoll
      metadata:
        name: default
      spec:
        members:
        - bookinfo

    2. Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project.

      $ oc create -n istio-system -f servicemeshmemberroll-default.yaml
  11. Run the following command to verify the ServiceMeshMemberRoll was created successfully.

    $ oc get smmr -n istio-system -o wide

    The installation has finished successfully when the STATUS column is Configured.

    NAME      READY   STATUS       AGE   MEMBERS
    default   1/1     Configured   70s   ["bookinfo"]
  12. From the CLI, deploy the Bookinfo application in the bookinfo project by applying the bookinfo.yaml file:

    $ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/platform/kube/bookinfo.yaml

    You should see output similar to the following:

    service/details created
    serviceaccount/bookinfo-details created
    deployment.apps/details-v1 created
    service/ratings created
    serviceaccount/bookinfo-ratings created
    deployment.apps/ratings-v1 created
    service/reviews created
    serviceaccount/bookinfo-reviews created
    deployment.apps/reviews-v1 created
    deployment.apps/reviews-v2 created
    deployment.apps/reviews-v3 created
    service/productpage created
    serviceaccount/bookinfo-productpage created
    deployment.apps/productpage-v1 created
  13. Create the ingress gateway by applying the bookinfo-gateway.yaml file:

    $ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/networking/bookinfo-gateway.yaml

    You should see output similar to the following:

    gateway.networking.istio.io/bookinfo-gateway created
    virtualservice.networking.istio.io/bookinfo created
  14. Set the value for the GATEWAY_URL parameter:

    $ export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')

2.8.6.2. Adding default destination rules

Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication.

Procedure

  1. To add destination rules, run one of the following commands:

    • If you did not enable mutual TLS:

      $ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/networking/destination-rule-all.yaml
    • If you enabled mutual TLS:

      $ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.4/samples/bookinfo/networking/destination-rule-all-mtls.yaml

      You should see output similar to the following:

      destinationrule.networking.istio.io/productpage created
      destinationrule.networking.istio.io/reviews created
      destinationrule.networking.istio.io/ratings created
      destinationrule.networking.istio.io/details created
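
  2. Optionally, verify that the destination rules were created by listing them in YAML format. In this example, bookinfo is the project where the Bookinfo application is deployed.

    $ oc get destinationrules -n bookinfo -o yaml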

2.8.6.3. Verifying the Bookinfo installation

To confirm that the sample Bookinfo application was successfully deployed, perform the following steps.

Prerequisites

  • Red Hat OpenShift Service Mesh installed.
  • Complete the steps for installing the Bookinfo sample app.

Procedure from CLI

  1. Log in to the OpenShift Container Platform CLI.
  2. Verify that all pods are ready with this command:

    $ oc get pods -n bookinfo

    All pods should have a status of Running. You should see output similar to the following:

    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-55b869668-jh7hb        2/2     Running   0          12m
    productpage-v1-6fc77ff794-nsl8r   2/2     Running   0          12m
    ratings-v1-7d7d8d8b56-55scn       2/2     Running   0          12m
    reviews-v1-868597db96-bdxgq       2/2     Running   0          12m
    reviews-v2-5b64f47978-cvssp       2/2     Running   0          12m
    reviews-v3-6dfd49b55b-vcwpf       2/2     Running   0          12m
  3. Run the following command to retrieve the URL for the product page:

    echo "http://$GATEWAY_URL/productpage"
  4. Copy and paste the output in a web browser to verify the Bookinfo product page is deployed.
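
    • Alternatively, you can check the page from the CLI with the following command, which prints only the HTTP status code. A 200 response indicates that the product page is being served; the curl options shown are an illustrative sketch.

      $ curl -s -o /dev/null -w "%{http_code}\n" "http://$GATEWAY_URL/productpage"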

Procedure from Kiali web console

  1. Obtain the address for the Kiali web console.

    1. Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
    2. Navigate to Networking → Routes.
    3. On the Routes page, select the Service Mesh control plane project, for example istio-system, from the Namespace menu.

      The Location column displays the linked address for each route.

    4. Click the link in the Location column for Kiali.
    5. Click Log In With OpenShift. The Kiali Overview screen presents tiles for each project namespace.
  2. In Kiali, click Graph.
  3. Select bookinfo from the Namespace list, and App graph from the Graph Type list.
  4. Click Display idle nodes from the Display menu.

    This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported.

    Kiali displaying bookinfo application
    • Use the Duration menu to increase the time period to help ensure older traffic is captured.
    • Use the Refresh Rate menu to refresh traffic more or less often, or not at all.
  5. Click Services, Workloads or Istio Config to see list views of bookinfo components, and confirm that they are healthy.

2.8.6.4. Removing the Bookinfo application

Follow these steps to remove the Bookinfo application.

Prerequisites

  • OpenShift Container Platform 4.1 or higher installed.
  • Red Hat OpenShift Service Mesh 2.4.2 installed.
  • Access to the OpenShift CLI (oc).
2.8.6.4.1. Delete the Bookinfo project

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Home → Projects.
  3. Click the bookinfo menu kebab, and then click Delete Project.
  4. Type bookinfo in the confirmation dialog box, and then click Delete.

    • Alternatively, you can run this command using the CLI to delete the bookinfo project.

      $ oc delete project bookinfo
2.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click Operators → Installed Operators.
  3. Click the Project menu and choose istio-system from the list.
  4. Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator.
  5. Click the ServiceMeshMemberRoll menu kebab and select Edit Service Mesh Member Roll.
  6. Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list.

    • Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll. In this example, istio-system is the name of the Service Mesh control plane project.

      $ oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]'
  7. Click Save to update Service Mesh Member Roll.

2.8.7. Generating example traces and analyzing trace data

Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh.

This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing.

Prerequisites:

  • OpenShift Container Platform 4.1 or higher installed.
  • Red Hat OpenShift Service Mesh 2.4.2 installed.
  • Jaeger enabled during the installation.
  • Bookinfo example application installed.

Procedure

  1. After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times.

    $ curl "http://$GATEWAY_URL/productpage"

    This command simulates a user visiting the productpage microservice of the application.

  2. In the OpenShift Container Platform console, navigate to Networking → Routes and search for the Jaeger route, which is the URL listed under Location.

    • Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace:

      $ export JAEGER_URL=$(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')
      1. Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL.

        echo $JAEGER_URL
  3. Log in using the same user name and password as you use to access the OpenShift Container Platform console.
  4. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed.
  5. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage.

2.9. Data visualization and observability

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

You can view your application’s topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time.

Before you begin

You can observe the data flow through your application if you have an application installed. If you don’t have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application.

2.9.1. Viewing service mesh data

The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace.

To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh.

Procedure

  1. Use the perspective switcher to switch to the Administrator perspective.
  2. Click Home → Projects.
  3. Click the name of your project. For example, click bookinfo.
  4. In the Launcher section, click Kiali.
  5. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console.

When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view.

If you are validating the console installation, there might not be any data to display.

2.9.2. Viewing service mesh data in the Kiali console

The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology.

There are several graphs to choose from:

  • The App graph shows an aggregate workload for all applications that are labeled the same.
  • The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high level view and aggregates all traffic for defined services.
  • The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together.
  • The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph.

Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic, and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities.

Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph.

To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel.

2.9.2.1. Changing graph layouts in Kiali

The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts.

Prerequisites

  • If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times.

    $ curl "http://$GATEWAY_URL/productpage"

    This command simulates a user visiting the productpage microservice of the application.

Procedure

  1. Launch the Kiali console.
  2. Click Log In With OpenShift.
  3. In Kiali console, click Graph to view a namespace graph.
  4. From the Namespace menu, select your application namespace, for example, bookinfo.
  5. To choose a different graph layout, do either or both of the following:

    • Select different graph data groupings from the menu at the top of the graph.

      • App graph
      • Service graph
      • Versioned App graph (default)
      • Workload graph
    • Select a different graph layout from the Legend at the bottom of the graph.

      • Layout default dagre
      • Layout 1 cose-bilkent
      • Layout 2 cola

2.10. Custom resources

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource.

2.10.1. Prerequisites

2.10.2. Red Hat OpenShift Service Mesh custom resources

Note

The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary.

A custom resource allows you to extend the API in a Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh, it creates a default ServiceMeshControlPlane that you can modify to change the project parameters.

The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object.

This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL).

Important

The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account (SaaS or On-Premises).

Example istio-installation.yaml

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
spec:

  istio:
    global:
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi

    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
        ior_enabled: false

    mixer:
      policy:
        autoscaleEnabled: false

      telemetry:
        autoscaleEnabled: false
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G

    pilot:
      autoscaleEnabled: false
      traceSampling: 100

    kiali:
      enabled: true

    grafana:
      enabled: true

    tracing:
      enabled: true
      jaeger:
        template: all-in-one
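
As a sketch of how this file is used, you can create the control plane from it with a command like the following; this assumes that the Service Mesh control plane project, istio-system in this example, already exists:

$ oc create -n istio-system -f istio-installation.yaml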

2.10.3. ServiceMeshControlPlane parameters

The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters.

Important

The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration.

2.10.3.1. Istio global example

Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

Note

In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false.

Example global parameters

  istio:
    global:
      tag: 1.1.0
      hub: registry.redhat.io/openshift-service-mesh/
      proxy:
        resources:
          requests:
            cpu: 10m
            memory: 128Mi
          limits:
      mtls:
        enabled: false
      disablePolicyChecks: true
      policyCheckFailOpen: false
      imagePullSecrets:
        - MyPullSecret

Table 2.4. Global parameters
Parameter | Description | Values | Default value

disablePolicyChecks

This parameter enables/disables policy checks.

true/false

true

policyCheckFailOpen

This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached.

true/false

false

tag

The tag that the Operator uses to pull the Istio images.

A valid container image tag.

1.1.0

hub

The hub that the Operator uses to pull Istio images.

A valid image repository.

maistra/ or registry.redhat.io/openshift-service-mesh/

mtls

This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default.

true/false

false

imagePullSecrets

If access to the registry providing the Istio images is secure, list an imagePullSecret here.

redhat-registry-pullsecret OR quay-pullsecret

None

These parameters are specific to the proxy subset of global parameters.

Table 2.5. Proxy parameters
Type | Parameter | Description | Values | Default value

requests

cpu

The amount of CPU resources requested for Envoy proxy.

CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.

10m

 

memory

The amount of memory requested for Envoy proxy.

Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.

128Mi

limits

cpu

The maximum amount of CPU resources Envoy proxy is permitted to use.

CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.

2000m

 

memory

The maximum amount of memory Envoy proxy is permitted to use.

Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.

1024Mi

2.10.3.2. Istio gateway configuration

Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

Example gateway parameters

  gateways:
    egress:
      enabled: true
      runtime:
        deployment:
          autoScaling:
            enabled: true
            maxReplicas: 5
            minReplicas: 1
    enabled: true
    ingress:
      enabled: true
      runtime:
        deployment:
          autoScaling:
            enabled: true
            maxReplicas: 5
            minReplicas: 1

Table 2.6. Istio Gateway parameters
Parameter | Description | Values | Default value

gateways.egress.runtime.deployment.autoScaling.enabled

This parameter enables/disables autoscaling.

true/false

true

gateways.egress.runtime.deployment.autoScaling.minReplicas

The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

1

gateways.egress.runtime.deployment.autoScaling.maxReplicas

The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

5

gateways.ingress.runtime.deployment.autoScaling.enabled

This parameter enables/disables autoscaling.

true/false

true

gateways.ingress.runtime.deployment.autoScaling.minReplicas

The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

1

gateways.ingress.runtime.deployment.autoScaling.maxReplicas

The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

5

Cluster administrators can refer to Using wildcard routes for instructions on how to enable subdomains.

2.10.3.3. Istio Mixer configuration

Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

Example mixer parameters

mixer:
  enabled: true
  policy:
    autoscaleEnabled: false
  telemetry:
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 128Mi
      limits:

Table 2.7. Istio Mixer policy parameters
Parameter | Description | Values | Default value

enabled

This parameter enables/disables Mixer.

true/false

true

autoscaleEnabled

This parameter enables/disables autoscaling. Disable this for small environments.

true/false

true

autoscaleMin

The minimum number of pods to deploy based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

1

autoscaleMax

The maximum number of pods to deploy based on the autoscaleEnabled setting.

A valid number of allocatable pods based on your environment’s configuration.

5

Table 2.8. Istio Mixer telemetry parameters
Type | Parameter | Description | Values | Default

requests

cpu

The percentage of CPU resources requested for Mixer telemetry.

CPU resources in millicores based on your environment’s configuration.

10m

 

memory

The amount of memory requested for Mixer telemetry.

Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.

128Mi

limits

cpu

The maximum percentage of CPU resources Mixer telemetry is permitted to use.

CPU resources in millicores based on your environment’s configuration.

4800m

 

memory

The maximum amount of memory Mixer telemetry is permitted to use.

Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.

4G

2.10.3.4. Istio Pilot configuration

You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values.

Example pilot parameters

spec:
  runtime:
    components:
      pilot:
        deployment:
          autoScaling:
            enabled: true
            minReplicas: 1
            maxReplicas: 5
            targetCPUUtilizationPercentage: 85
        pod:
          tolerations:
          - key: node.kubernetes.io/unreachable
            operator: Exists
            effect: NoExecute
            tolerationSeconds: 60
          affinity:
            podAntiAffinity:
              requiredDuringScheduling:
              - key: istio
                topologyKey: kubernetes.io/hostname
                operator: In
                values:
                - pilot
        container:
          resources:
            limits:
              cpu: 100m
              memory: 128M

Table 2.9. Istio Pilot parameters
Parameter | Description | Values | Default value

cpu

The percentage of CPU resources requested for Pilot.

CPU resources in millicores based on your environment’s configuration.

10m

memory

The amount of memory requested for Pilot.

Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.

128Mi

autoscaleEnabled

This parameter enables/disables autoscaling. Disable this for small environments.

true/false

true

traceSampling

This value controls how often random sampling occurs. Note: Increase for development or testing.

A valid percentage.

1.0

2.10.4. Configuring Kiali

When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances.

The default Kiali parameters specified in the ServiceMeshControlPlane are as follows:

Example Kiali parameters

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
    kiali:
      enabled: true
      dashboard:
        viewOnlyMode: false
      ingress:
        enabled: true

Table 2.10. Kiali parameters
Parameter | Description | Values | Default value
enabled

This parameter enables/disables Kiali. Kiali is enabled by default.

true/false

true

dashboard
   viewOnlyMode

This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh.

true/false

false

ingress
   enabled

This parameter enables/disables ingress for Kiali.

true/false

true

2.10.4.1. Configuring Kiali for Grafana

When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default:

  • Grafana is enabled as an external service for Kiali
  • Grafana authorization for the Kiali console
  • Grafana URL for the Kiali console

Kiali can automatically detect the Grafana URL. However, if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Grafana parameters

spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      grafanaURL:  "https://grafana-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

2.10.4.2. Configuring Kiali for Jaeger

When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default:

  • Jaeger is enabled as an external service for Kiali
  • Jaeger authorization for the Kiali console
  • Jaeger URL for the Kiali console

Kiali can automatically detect the Jaeger URL. However, if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource.

Additional Jaeger parameters

spec:
  kiali:
    enabled: true
    dashboard:
      viewOnlyMode: false
      jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io"
    ingress:
      enabled: true

2.10.5. Configuring Jaeger

When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing.

You can specify your Jaeger configuration in either of two ways:

  • Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach.
  • Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration.

The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows:

Default all-in-one Jaeger parameters

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  version: v1.1
  istio:
    tracing:
      enabled: true
      jaeger:
        template: all-in-one

Table 2.11. Jaeger parameters
Parameter | Description | Values | Default value
tracing:
   enabled:

This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false.

true/false

true

jaeger:
   template:

This parameter specifies which Jaeger deployment strategy to use.

  • all-in-one - For development, testing, demonstrations, and proof of concept.
  • production-elasticsearch - For production use.

all-in-one

Note

The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy which uses in-memory storage. For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment.

2.10.5.1. Configuring Elasticsearch

The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments.

If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger’s storage needs.

Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation.

Default "production" Jaeger parameters with Elasticsearch

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
spec:
  istio:
    tracing:
      enabled: true
      ingress:
        enabled: true
      jaeger:
        template: production-elasticsearch
        elasticsearch:
          nodeCount: 3
          redundancyPolicy:
          resources:
            requests:
              cpu: "1"
              memory: "16Gi"
            limits:
              cpu: "1"
              memory: "16Gi"

Table 2.12. Elasticsearch parameters
Parameter | Description | Values | Default Value | Examples
tracing:
  enabled:

This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default.

true/false

true

 
ingress:
  enabled:

This parameter enables/disables ingress for Jaeger.

true/false

true

 
jaeger:
   template:

This parameter specifies which Jaeger deployment strategy to use.

all-in-one/production-elasticsearch

all-in-one

 
elasticsearch:
  nodeCount:

Number of Elasticsearch nodes to create.

Integer value.

1

Proof of concept = 1, Minimum deployment = 3

requests:
  cpu:

Number of central processing units for requests, based on your environment’s configuration.

Specified in cores or millicores (for example, 200m, 0.5, 1).

500m

Proof of concept = 500m, Minimum deployment = 1

requests:
  memory:

Available memory for requests, based on your environment’s configuration.

Specified in bytes (for example, 200Ki, 50Mi, 5Gi).

1Gi

Proof of concept = 1Gi, Minimum deployment = 16Gi*

limits:
  cpu:

Limit on number of central processing units, based on your environment’s configuration.

Specified in cores or millicores (for example, 200m, 0.5, 1).

 

Proof of concept = 500m, Minimum deployment = 1

limits:
  memory:

Available memory limit based on your environment’s configuration.

Specified in bytes (for example, 200Ki, 50Mi, 5Gi).

 

Proof of concept = 1Gi, Minimum deployment = 16Gi*

* Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod.
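
For example, the following is a minimal sketch of the elasticsearch block sized above the 16Gi floor. The 32Gi figure is only an illustration within the 16Gi to 64Gi guidance above; choose a value that fits the nodes available in your cluster.

jaeger:
  template: production-elasticsearch
  elasticsearch:
    nodeCount: 3
    resources:
      requests:
        cpu: "1"
        memory: "32Gi"  # illustrative value between the 16Gi minimum and the 64Gi per-pod maximum
      limits:
        cpu: "1"
        memory: "32Gi"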

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  2. Navigate to Operators → Installed Operators.
  3. Click the Red Hat OpenShift Service Mesh Operator.
  4. Click the Istio Service Mesh Control Plane tab.
  5. Click the name of your control plane file, for example, basic-install.
  6. Click the YAML tab.
  7. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct.
  8. Click Save.
  9. Click Reload. OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters.

2.10.5.2. Connecting to an existing Jaeger instance

In order for the SMCP to connect to an existing Jaeger instance, the following must be true:

  • The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace.
  • To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it.
  • To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance.
  • Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local.
  • Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local.
  • Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command:

    $ oc get route -n istio-system external-jaeger

    Example output

    NAME                   HOST/PORT                                     PATH   SERVICES               [...]
    external-jaeger        external-jaeger-istio-system.apps.test        external-jaeger-query  [...]

    The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard.

Example Jaeger resource

apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
  name: "external-jaeger"
  # Deploy to the Control Plane Namespace
  namespace: istio-system
spec:
  # Set Up Authentication
  ingress:
    enabled: true
    security: oauth-proxy
    openshift:
      # This limits user access to the Jaeger instance to users who have access
      # to the control plane namespace. Make sure to set the correct namespace here
      sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}'
      htpasswdFile: /etc/proxy/htpasswd/auth

  volumeMounts:
  - name: secret-htpasswd
    mountPath: /etc/proxy/htpasswd
  volumes:
  - name: secret-htpasswd
    secret:
      secretName: htpasswd

The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource.

Example ServiceMeshControlPlane with external Jaeger

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: external-jaeger
  namespace: istio-system
spec:
  version: v1.1
  istio:
    tracing:
      # Disable Jaeger deployment by service mesh operator
      enabled: false
    global:
      tracer:
        zipkin:
          # Set Endpoint for Trace Collection
          address: external-jaeger-collector.istio-system.svc.cluster.local:9411
    kiali:
      # Set Jaeger dashboard URL
      dashboard:
        jaegerURL: https://external-jaeger-istio-system.apps.test
      # Set Endpoint for Trace Querying
      jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local

2.10.5.4. Configuring the Elasticsearch index cleaner job

When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform Operator then uses this CR when creating Jaeger instances.

When using Elasticsearch storage, a job is created by default to clean old traces from it. To configure the options for this job, edit the Jaeger custom resource (CR) to customize it for your use case. The relevant options are listed below.

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  spec:
    strategy: production
    storage:
      type: elasticsearch
      esIndexCleaner:
        enabled: false
        numberOfDays: 7
        schedule: "55 23 * * *"
Table 2.14. Elasticsearch index cleaner parameters
Parameter | Values | Description

enabled:

true/ false

Enable or disable the index cleaner job.

numberOfDays:

integer value

Number of days to wait before deleting an index.

schedule:

"55 23 * * *"

Cron expression for the job to run.

For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the log store.
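
For example, the following is a minimal sketch of a Jaeger custom resource that turns the cleaner job on and keeps seven days of trace data. The instance name and retention period are illustrative only.

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger                  # hypothetical Jaeger instance name
spec:
  strategy: production
  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: true             # enable the cleanup job
      numberOfDays: 7           # delete indices older than seven days
      schedule: "55 23 * * *"   # run the job daily at 23:55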

2.10.6. 3scale configuration

The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource.

Example 3scale parameters

spec:
  addons:
    3Scale:
      enabled: false
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_LOG_GRPC: false
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
      PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60
      PARAM_USE_CACHED_BACKEND: false
      PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15
      PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true

Table 2.15. 3scale parameters
Parameter | Description | Values | Default value

enabled

Whether to use the 3scale adapter

true/false

false

PARAM_THREESCALE_LISTEN_ADDR

Sets the listen address for the gRPC server

Valid port number

3333

PARAM_THREESCALE_LOG_LEVEL

Sets the minimum log output level.

debug, info, warn, error, or none

info

PARAM_THREESCALE_LOG_JSON

Controls whether the log is formatted as JSON

true/false

true

PARAM_THREESCALE_LOG_GRPC

Controls whether the log contains gRPC info

true/false

true

PARAM_THREESCALE_REPORT_METRICS

Controls whether 3scale system and backend metrics are collected and reported to Prometheus

true/false

true

PARAM_THREESCALE_METRICS_PORT

Sets the port that the 3scale /metrics endpoint can be scraped from

Valid port number

8080

PARAM_THREESCALE_CACHE_TTL_SECONDS

Time period, in seconds, to wait before purging expired items from the cache

Time period in seconds

300

PARAM_THREESCALE_CACHE_REFRESH_SECONDS

Time period before expiry when cache elements are attempted to be refreshed

Time period in seconds

180

PARAM_THREESCALE_CACHE_ENTRIES_MAX

Max number of items that can be stored in the cache at any time. Set to 0 to disable caching

Valid number

1000

PARAM_THREESCALE_CACHE_REFRESH_RETRIES

The number of times unreachable hosts are retried during a cache update loop

Valid number

1

PARAM_THREESCALE_ALLOW_INSECURE_CONN

Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended.

true/false

false

PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS

Sets the number of seconds to wait before terminating requests to 3scale System and Backend

Time period in seconds

10

PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS

Sets the maximum number of seconds (+/- 10% jitter) that a connection can remain open before it is closed

Time period in seconds

60

PARAM_USE_CACHED_BACKEND

If true, attempt to create an in-memory apisonator cache for authorization requests

true/false

false

PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS

If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale

Time period in seconds

15

PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED

Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests

true/false

true

2.11. Using the 3scale Istio adapter

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh.

2.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh

You can use these examples to configure requests to your services using the 3scale Istio Adapter.

Prerequisites:

  • Red Hat OpenShift Service Mesh version 1.x
  • A working 3scale account (SaaS or 3scale 2.5 On-Premises)
  • Enabling backend cache requires 3scale 2.9 or greater
  • Red Hat OpenShift Service Mesh prerequisites
Note

To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file.

Note

Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids.

Use a single handler per 3scale account by following the steps below:

Procedure

  1. Create a handler for your 3scale account and specify your account credentials. Omit any service identifier.

      apiVersion: "config.istio.io/v1alpha2"
      kind: handler
      metadata:
       name: threescale
      spec:
       adapter: threescale
       params:
         system_url: "https://<organization>-admin.3scale.net/"
         access_token: "<ACCESS_TOKEN>"
       connection:
         address: "threescale-istio-adapter:3333"

    Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This can be useful if the adapter runs on the same cluster as the 3scale On-premises instance and you want to leverage the internal cluster DNS (see the sketch after this procedure).

  2. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows:

    1. Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id.
    2. Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1.
  3. Repeat step 2 whenever you intend to add more services, to link them to your 3scale account credentials and to their service identifiers.
  4. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler.

    Rule configuration example

      apiVersion: "config.istio.io/v1alpha2"
      kind: rule
      metadata:
        name: threescale
      spec:
        match: destination.labels["service-mesh.3scale.net"] == "true"
        actions:
          - handler: threescale.handler
            instances:
              - threescale-authorization.instance
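
As noted in step 1, a handler can also set backend_url. The following is a minimal sketch only; the internal service hostname shown is a hypothetical example and must be adjusted to match your own 3scale On-premises installation.

apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: threescale
spec:
  adapter: threescale
  params:
    system_url: "https://<organization>-admin.3scale.net/"
    access_token: "<ACCESS_TOKEN>"
    # Hypothetical internal hostname; point this at your 3scale backend listener service
    backend_url: "http://backend-listener.<3scale_project>.svc.cluster.local"
  connection:
    address: "threescale-istio-adapter:3333"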

2.11.1.1. Generating 3scale custom resources

The adapter includes a tool that allows you to generate the handler, instance, and rule custom resources.

Table 2.16. Usage
Option | Description | Required | Default value

-h, --help

Produces help output for available options

No

 

--name

Unique name for this URL, token pair

Yes

 

-n, --namespace

Namespace to generate templates

No

istio-system

-t, --token

3scale access token

Yes

 

-u, --url

3scale Admin Portal URL

Yes

 

--backend-url

3scale backend URL. If set, it overrides the value that is read from system configuration

No

 

-s, --service

3scale API/Service ID

No

 

--auth

3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC)

No

Hybrid

-o, --output

File to save produced manifests to

No

Standard output

--version

Outputs the CLI version and exits immediately

No

 
2.11.1.1.1. Generate templates from URL examples
Note
  • Run the following commands via oc exec from the 3scale adapter container image, as described in Generating manifests from a deployed adapter.
  • Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors.
  • You can omit the --service if you use the annotations.
  • This command must be invoked from within the container image via oc exec.

Procedure

  • Use the 3scale-config-gen command to autogenerate template files allowing the token, URL pair to be shared by multiple services as a single handler:

    $ 3scale-config-gen --name=admin-credentials --url="https://<organization>-admin.3scale.net:443" --token="[redacted]"
  • The following example generates the templates with the service ID embedded in the handler:

    $ 3scale-config-gen --url="https://<organization>-admin.3scale.net" --name="my-unique-id" --service="123456789" --token="[redacted]"

2.11.1.2. Generating manifests from a deployed adapter

Note
  • NAME is an identifier you use to identify the service you are managing with 3scale.
  • The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool.
  • Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information.
  1. Run this command to generate manifests from a deployed adapter in the istio-system namespace:

    $ export NS="istio-system" URL="https://replaceme-admin.3scale.net:443" NAME="name" TOKEN="token"
    oc exec -n ${NS} $(oc get po -n ${NS} -o jsonpath='{.items[?(@.metadata.labels.app=="3scale-istio-adapter")].metadata.name}') \
    -it -- ./3scale-config-gen \
    --url ${URL} --name ${NAME} --token ${TOKEN} -n ${NS}
  2. This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command.
  3. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways:

    1. Label the workload (recommended)
    2. Hard code the handler as service_id
  4. Update the workload with the required annotations:

    Note

    You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence.

    $ export CREDENTIALS_NAME="replace-me"
    export SERVICE_ID="replace-me"
    export DEPLOYMENT="replace-me"
    patch="$(oc get deployment "${DEPLOYMENT}"
    patch="$(oc get deployment "${DEPLOYMENT}" --template='{"spec":{"template":{"metadata":{"labels":{ {{ range $k,$v := .spec.template.metadata.labels }}"{{ $k }}":"{{ $v }}",{{ end }}"service-mesh.3scale.net/service-id":"'"${SERVICE_ID}"'","service-mesh.3scale.net/credentials":"'"${CREDENTIALS_NAME}"'"}}}}}' )"
    oc patch deployment "${DEPLOYMENT}" --patch ''"${patch}"''

2.11.1.3. Routing service traffic through the adapter

Follow these steps to drive traffic for your service through the 3scale adapter.

Prerequisites

  • Credentials and service ID from your 3scale administrator.

Procedure

  1. Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the kind: rule resource configuration.
  2. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service, as shown in the sketch after this list. The value, threescale, refers to the name of the generated handler. This handler stores the access token required to call 3scale.
  3. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time.
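
For example, the following is a minimal sketch of a Deployment fragment showing only the relevant labels on the PodTemplateSpec, assuming the generated handler is named threescale. The Deployment name and service ID are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1               # hypothetical target workload
spec:
  template:
    metadata:
      labels:
        service-mesh.3scale.net/credentials: threescale   # name of the generated handler
        service-mesh.3scale.net/service-id: replace-me    # service ID from your 3scale administrator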

2.11.2. Configure the integration settings in 3scale

Follow this procedure to configure the 3scale integration settings.

Note

For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program.

Procedure

  1. Navigate to [your_API_name] → Integration.
  2. Click Settings.
  3. Select the Istio option under Deployment.

    • The API Key (user_key) option under Authentication is selected by default.
  4. Click Update Product to save your selection.
  5. Click Configuration.
  6. Click Update Configuration.

2.11.3. Caching behavior

Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value.

Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value.

By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry.
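
For example, the following is a minimal sketch of the corresponding adapter parameters in the ServiceMeshControlPlane, assuming you want to disable caching entirely and switch off automatic refreshing; the values shown are illustrative.

spec:
  addons:
    3Scale:
      enabled: true
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 0        # a non-positive value disables caching
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 400  # higher than the TTL, so automatic refresh is disabled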

2.11.4. Authenticating requests

This release supports the following authentication methods:

  • Standard API Keys: single randomized strings or hashes acting as an identifier and a secret token.
  • Application identifier and key pairs: immutable identifier and mutable secret key strings.
  • OpenID authentication method: client ID string parsed from the JSON Web Token.

2.11.4.1. Applying authentication patterns

Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from:

  • Request headers
  • Request parameters
  • Both request headers and query parameters
Note

When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key, this must be referenced in the configuration as request.headers["user-key"].

2.11.4.1.1. API key authentication method

Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option.

In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header.

API key authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["user-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named “key”, change request.query_params["user_key"] to request.query_params["key"].
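
For example, the following is a minimal sketch of the modified subject block, assuming the API key is sent in a query parameter named key:

params:
  subject:
    user: request.query_params["key"] | request.headers["user-key"] | ""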

2.11.4.1.2. Application ID and application key pair authentication method

Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option.

In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed.

Application ID and application key pair authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
      app_id: request.query_params["app_id"] | request.headers["app-id"] | ""
      app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification, change request.query_params["app_id"] to request.query_params["identification"].
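
For example, the following is a minimal sketch of the modified subject block, assuming the application ID is sent in a query parameter named identification:

params:
  subject:
    app_id: request.query_params["identification"] | request.headers["app-id"] | ""
    app_key: request.query_params["app_key"] | request.headers["app-key"] | ""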

2.11.4.1.3. OpenID authentication method

To use the OpenID Connect (OIDC) authentication method, use the properties value on the subject field to set client_id, and optionally app_key.

You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp. You can modify this as needed.

OpenID authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
spec:
  template: threescale-authorization
  params:
    subject:
      properties:
        app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
        client_id: request.auth.claims["azp"] | ""
    action:
      path: request.url_path
      method: request.method | "get"
      service: destination.labels["service-mesh.3scale.net/service-id"] | ""

For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request.

In the sample RequestAuthentication defined below, replace issuer, jwksUri, and selector as appropriate.

OpenID Policy example

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: productpage
  jwtRules:
  - issuer: >-
      http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak
    jwksUri: >-
      http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs

2.11.4.1.4. Hybrid authentication method

You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key.

In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers.

Hybrid authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["user-key"] | ""
      properties:
        app_id: request.query_params["app_id"] | request.headers["app-id"] | ""
        app_key: request.query_params["app_key"] | request.headers["app-key"] | ""
        client_id: request.auth.claims["azp"] | ""
    action:
      path: request.url_path
      method: request.method | "get"
      service: destination.labels["service-mesh.3scale.net/service-id"] | ""

2.11.5. 3scale Adapter metrics

The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus.

2.11.6. 3scale Istio adapter verification

You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem.

Procedure

  1. Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace:

    $ oc get pods -n <istio-system>
  2. Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version:

    $ oc logs -n <istio-system> -l app=3scale-istio-adapter
  3. When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information.

2.11.7. 3scale Istio adapter troubleshooting checklist

As the administrator installing the 3scale Istio adapter, you might encounter a number of scenarios that cause your integration to not function properly. Use the following list to troubleshoot your installation:

  • Incorrect YAML indentation.
  • Missing YAML sections.
  • Forgot to apply the changes in the YAML to the cluster.
  • Forgot to label the service workloads with the service-mesh.3scale.net/credentials key.
  • Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account.
  • The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix.
  • The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist.
  • Wrong access token or URL for the 3scale Admin Portal in the handler.
  • The Instance custom resource’s params/subject/properties section fails to list the right parameters for app_id, app_key, or client_id, either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing.
  • Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it.

2.12. Removing Service Mesh

Warning

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the operators.

2.12.1. Removing the Red Hat OpenShift Service Mesh control plane

To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources.

2.12.1.1. Removing the Service Mesh control plane using the web console

You can remove the Red Hat OpenShift Service Mesh control plane by using the web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system.
  3. Navigate to Operators → Installed Operators.
  4. Click Service Mesh Control Plane under Provided APIs.
  5. Click the kebab menu for the ServiceMeshControlPlane.
  6. Click Delete Service Mesh Control Plane.
  7. Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane.

2.12.1.2. Removing the Service Mesh control plane using the CLI

You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project.

Procedure

  1. Log in to the OpenShift Container Platform CLI.
  2. Run the following command to delete the ServiceMeshMemberRoll resource.

    $ oc delete smmr -n istio-system default
  3. Run this command to retrieve the name of the installed ServiceMeshControlPlane:

    $ oc get smcp -n istio-system
  4. Replace <name_of_custom_resource> with the output from the previous command, and run this command to remove the custom resource:

    $ oc delete smcp -n istio-system <name_of_custom_resource>

2.12.2. Removing the installed Operators

You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform Operator, and the OpenShift Elasticsearch Operator.

2.12.2.1. Removing the Operators

Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators.

  • Red Hat OpenShift Service Mesh
  • Kiali
  • Red Hat OpenShift distributed tracing platform
  • OpenShift Elasticsearch

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. From the Operators → Installed Operators page, scroll or type a keyword into the Filter by name field to find each Operator. Then, click the Operator name.
  3. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator.

2.12.2.2. Clean up Operator resources

Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console.

Prerequisites

  • An account with cluster administration access.
  • Access to the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift Container Platform CLI as a cluster administrator.
  2. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a standalone service without Service Mesh, do not delete the Jaeger resources.

    Note

    The Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed.

    $ oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
    $ oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
    $ oc delete -n openshift-operators daemonset/istio-node
    $ oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni
    $ oc delete clusterrole istio-view istio-edit
    $ oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view
    $ oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete
    $ oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete
    $ oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete
    $ oc delete crds jaegers.jaegertracing.io
    $ oc delete svc admission-controller -n <operator-project>
    $ oc delete project <istio-system-project>