Chapter 2. Deploying 3scale for the Kafka Bridge


To use 3scale API Management with the Kafka Bridge, you first deploy 3scale and then configure it to discover the Kafka Bridge API.

In this scenario, Streams for Apache Kafka, Kafka, the Kafka Bridge, and the 3scale API Management components run in the same OpenShift cluster.

The following 3scale components help with the discovery of the Kafka Bridge:

  • 3scale APIcast provides an NGINX-based API gateway for HTTP clients to connect to the Kafka Bridge API service.
  • 3scale toolbox is used to import the OpenAPI specification for the Kafka Bridge service to 3scale.
Note

If 3scale is already deployed to the same cluster as the Kafka Bridge, you can skip the deployment steps and use your current deployment.

Prerequisites

For the 3scale deployment:

  • Check the Red Hat 3scale API Management supported configurations.
  • Installation requires a user with cluster-admin role, such as system:admin.
  • You need access to the JSON files describing the following:

    • Kafka Bridge OpenAPI specification (openapiv2.json)
    • Header modification and routing policies for the Kafka Bridge (policies_config.json)

      Find the JSON files on GitHub.

Procedure

  1. Set up 3scale API Management as described in the Red Hat 3scale documentation.

    1. Install 3scale API Manager and APIcast using the 3scale API Management operators.

      Before deploying API Manager, update the wildcardDomain property of the API Manager custom resource to the domain that hosts your OpenShift cluster.

      The domain is used in the URL to access the 3scale Admin Portal (http[s]://<authentication_token>@3scale-admin.<cluster_domain>).

    2. Verify that 3scale was successfully deployed by checking the status of the API Manager custom resource.
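
      For reference, a minimal APIManager resource with the wildcardDomain property set. This is a sketch only; the namespace (3scale), resource name (apimanager), and domain (apps.example.com) are assumptions, so use the values for your cluster:

      apiVersion: apps.3scale.net/v1alpha1
      kind: APIManager
      metadata:
        name: apimanager
        namespace: 3scale
      spec:
        wildcardDomain: apps.example.com

      After applying the resource, you can inspect its status section to confirm the deployment:

      oc get apimanager apimanager -n 3scale -o yaml
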
  2. Grant authorization for 3scale API Manager to discover the Kafka Bridge service:

    oc adm policy add-cluster-role-to-user view system:serviceaccount:<my_bridge_namespace>:amp

    The command grants the API Manager (amp) read access (view) to Kafka Bridge resources in the specified namespace (<my_bridge_namespace>).
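
    For example, a quick check that the binding took effect (the namespace my-project stands in for <my_bridge_namespace>):

    oc auth can-i list services -n my-project --as=system:serviceaccount:my-project:amp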

  3. Set up 3scale API Management toolbox.

    1. Install 3scale toolbox as described in the Red Hat 3scale documentation.
    2. Set environment variables to be able to interact with 3scale:

      Example configuration for Kafka Bridge

      export REMOTE_NAME=strimzi-kafka-bridge 1
      export SYSTEM_NAME=strimzi_http_bridge_for_apache_kafka 2
      export TENANT=strimzi-kafka-bridge-admin 3
      export PORTAL_ENDPOINT=$TENANT.3scale.net 4
      export TOKEN=<3scale_authentication_token> 5

      1 REMOTE_NAME is the name assigned to the remote address of the 3scale Admin Portal.
      2 SYSTEM_NAME is the name of the 3scale service/API created by importing the OpenAPI specification through the 3scale toolbox.
      3 TENANT is the tenant name of the 3scale Admin Portal (https://$TENANT.3scale.net).
      4 PORTAL_ENDPOINT is the endpoint running the 3scale Admin Portal.
      5 TOKEN is the authentication token provided by the 3scale Admin Portal for interaction through the 3scale toolbox or HTTP requests.
    3. Configure the remote web address of the 3scale toolbox:

      3scale remote add $REMOTE_NAME https://$TOKEN@$PORTAL_ENDPOINT/

      If you run the toolbox as a container, specify the container image for the 3scale toolbox, such as registry.redhat.io/3scale-amp2/toolbox-rhel8:3scale2.14. Container images for 3scale are available in the Red Hat Ecosystem Catalog.

      The command registers the endpoint of the 3scale Admin Portal (https://$TOKEN@$PORTAL_ENDPOINT/) under the name $REMOTE_NAME, so the endpoint address does not need to be specified every time you run the toolbox.
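
      To confirm the remote was added, you can list the configured remotes (assuming the toolbox is invoked the same way):

      3scale remote list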

  4. Check that your Cluster Operator deployment has the labels and annotations properties required for the Kafka Bridge service to be discovered by 3scale.

    #...
    env:
      - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS
        value: |
          discovery.3scale.net=true
      - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS
        value: |
          discovery.3scale.net/scheme=http
          discovery.3scale.net/port=8080
          discovery.3scale.net/path=/
          discovery.3scale.net/description-path=/openapi
    #...

    If not, add the properties through the OpenShift console or try redeploying the Cluster Operator and the Kafka Bridge.
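
    A quick way to check the labels and annotations on the service created for the bridge (the Kafka Bridge name my-bridge and namespace my-project are examples):

    oc get service my-bridge-bridge-service -n my-project -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'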

  5. From the 3scale Admin Portal, import the Kafka Bridge API service from OpenShift as described in the Red Hat 3scale documentation.
  6. Edit the Host field in the OpenAPI specification (JSON file) to use the base URL of the Kafka Bridge service:

    For example:

    "host": "my-bridge-bridge-service.my-project.svc.cluster.local:8080"

    Check that the host URL includes the following:

    • Kafka Bridge name (my-bridge)
    • Project name (my-project)
    • Port for the Kafka Bridge (8080)
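
    One way to make the edit from the command line is with jq (the service name, project, and output file name are examples); the updated file is then imported in the next step:

    jq '.host = "my-bridge-bridge-service.my-project.svc.cluster.local:8080"' openapiv2.json > openapiv2-updated.json
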
  7. Import the updated OpenAPI specification to 3scale toolbox from the local file:

    podman run -v /path/to/openapiv2.json:/tmp/oas/openapiv2.json registry.redhat.io/3scale-amp2/toolbox-rhel8:3scale2.14 3scale import openapi [opts] -d=<admin_portal_url> -t 3scale-kafka-bridge /tmp/oas/openapiv2.json

    Here, we specify the system name as 3scale-kafka-bridge instead of generating the name from the OpenAPI specification. Replace /path/to/openapiv2.json with the path to the OpenAPI specification file and <admin_portal_url> with the path to the endpoint of the 3scale Admin Portal.

  8. Import the header modification and routing policies for the service (JSON file).

    1. Locate the ID for the service you created in 3scale, which is required when importing the policies:

      export SERVICE_ID=$(curl -k -s -X GET "https://$PORTAL_ENDPOINT/admin/api/services.json?access_token=$TOKEN" | jq ".services[] | select(.service.system_name | contains(\"$SYSTEM_NAME\")) | .service.id")

      Here, the jq command-line JSON parser is used to extract the service ID from the response.

    2. Import the policies:

      3scale policies import -f /path/to/policies_config.json -d=<admin_portal_url> 3scale-kafka-bridge

      Replace /path/to/policies_config.json with the path to the policies configuration file and <admin_portal_url> with the path to the endpoint of the 3scale Admin Portal.

  9. From the 3scale Admin Portal, check that the endpoints and policies for the Kafka Bridge service have loaded.
  10. From the 3scale Toolbox, create an application plan and an application.

    The application is required in order to obtain a user key for authentication.
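
    A minimal sketch using the toolbox; the plan name (test_plan), application name (test_app), account identifier, and user key are placeholders, and the exact options may differ in your toolbox version (check 3scale application-plan apply --help and 3scale application apply --help):

    # Create an application plan for the imported service and make it the default plan
    3scale application-plan apply $REMOTE_NAME $SERVICE_ID test_plan --default

    # Create an application subscribed to the plan; the positional argument identifies the application and is used as its user key
    3scale application apply $REMOTE_NAME <user_key> --account=<account_id> --name=test_app --plan=test_plan --service=$SERVICE_ID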

  11. (Production environment step) To make the API available to the production gateway, promote the configuration:

    3scale proxy-config promote $REMOTE_NAME $SERVICE_ID
  12. Use an API testing tool to verify that you can access the Kafka Bridge through the APIcast gateway, by making a call to create a consumer with the user key created for the application.

    For example:

    https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=3dfc188650101010ecd7fdc56098ce95

    If a payload is returned from the Kafka Bridge, the consumer was created successfully.

    {
      "instance_id": "consumer1",
      "base_uri": "https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group/instances/consumer1"
    }

    The base URI is the address that the client will use in subsequent requests.
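
    For example, a sample request using curl (the gateway hostname and user key are placeholders; the Content-Type and request body follow the Kafka Bridge consumer-creation API):

    curl -X POST "https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=<user_key>" \
      -H 'Content-Type: application/vnd.kafka.v2+json' \
      -d '{"name": "consumer1", "format": "json", "auto.offset.reset": "earliest"}'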
