Chapter 1. 3scale operations and scaling guide


1.1. Introduction

This document describes operations and scaling tasks of a Red Hat 3scale AMP 2.6 On-Premises installation.

1.1.1. Prerequisites

An installed and initially configured AMP On-Premises instance on a supported OpenShift version.

This document is not intended for local installations on laptops or similar end user equipment.

1.1.2. Further reading

1.2. Re-deploying APIcast

After you have deployed AMP On-Premises and your chosen APIcast deployment, you can test and promote system changes through the 3scale Admin Portal. By default, APIcast deployments on OpenShift, both built-in and on other OpenShift clusters, are configured to allow you to publish changes to your staging and production gateways through the AMP UI.

Redeploy APIcast on OpenShift:

  1. Make system changes.
  2. In the UI, deploy to staging and test.
  3. In the UI, promote to production.

By default, APIcast retrieves and publishes the promoted update once every 5 minutes.

If you are using APIcast on the Docker containerized environment or a native installation, you must configure your staging and production gateways, and configure how often your gateway retrieves published changes. After you have configured your APIcast gateways, you can redeploy APIcast through the AMP UI.

To redeploy APIcast on the Docker containerized environment or a native installation:

  1. Configure your APIcast gateway and connect it to AMP On-Premises.
  2. Make system changes.
  3. In the UI, deploy to staging and test.
  4. In the UI, promote to production.

APIcast retrieves and publishes the promoted update at the configured frequency.
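For example, if you are running APIcast on the Docker containerized environment, the gateway environment (staging or production) and the configuration refresh interval are controlled through environment variables such as THREESCALE_PORTAL_ENDPOINT, THREESCALE_DEPLOYMENT_ENV, and APICAST_CONFIGURATION_CACHE. The following is a minimal sketch only; the image name, access token, and Admin Portal domain are placeholders that you must replace with values from your own installation:

    # run a production gateway that refreshes its configuration every 300 seconds (5 minutes)
    docker run -d --name apicast -p 8080:8080 \
      -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<admin_portal_domain> \
      -e THREESCALE_DEPLOYMENT_ENV=production \
      -e APICAST_CONFIGURATION_CACHE=300 \
      registry.redhat.io/3scale-amp26/apicast-gateway

Setting APICAST_CONFIGURATION_CACHE to 300 matches the 5-minute default used by the built-in OpenShift gateways; adjust the value to control how often promoted changes are retrieved.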

1.3. Scaling up 3scale on-premise

1.3.1. Scaling up storage

As your APIcast deployment grows, you may need to increase the amount of storage available. How you scale up storage depends on which type of file system you are using for your persistent storage.

If you are using a network file system (NFS), you can scale up your persistent volume using the oc edit pv command:

oc edit pv <pv_name>
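In the editor, increase the value of spec.capacity.storage to the new desired size. A minimal sketch of the relevant fields, with example names and values only:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: <pv_name>
    spec:
      capacity:
        storage: 100Gi        # increase this value to the new desired size
      accessModes:
        - ReadWriteMany
      nfs:                    # example backend; your PV may use different NFS details
        path: /exports/3scale
        server: nfs.example.com

Note that the capacity field only declares the size to OpenShift; the underlying NFS export must actually have the additional space available.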

If you are using any other storage method, you must scale up your persistent volume manually using one of the methods listed in the following sections.

1.3.1.1. Method 1: Back up and swap persistent volumes

  1. Back up the data on your existing persistent volume.
  2. Create and attach a target persistent volume, scaled for your new size requirements.
  3. Create a pre-bound persistent volume claim (see the example claim after this procedure). Specify:

    1. The size of your new PVC.
    2. The persistent volume name, using the volumeName field.
  4. Restore data from your backup onto your newly created PV.
  5. Modify your deployment configuration with the name of your new PV:

    oc edit dc/system-app
  6. Verify your new PV is configured and working correctly.
  7. Delete your previous PVC to release its claimed resources.
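The pre-bound PVC created in step 3 might look like the following minimal sketch; the claim name, size, and access mode are examples that you must adapt to your environment:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: system-storage-new        # example claim name
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi              # the new, larger size
      volumeName: <target_pv_name>    # binds the claim to your new PV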

1.3.1.2. Method 2: Back up and redeploy 3scale

  1. Back up the data on your existing persistent volume.
  2. Shut down your 3scale pods (see the scale-down sketch after this procedure).
  3. Create and attach a target persistent volume, scaled for your new size requirements.
  4. Restore data from your backup onto your newly created PV.
  5. Create a pre-bound persistent volume claim. Specify:

    1. The size of your new PVC.
    2. The persistent volume name, using the volumeName field.
  6. Deploy your AMP.yml.
  7. Verify your new PV is configured and working correctly.
  8. Delete your previous PVC to release its claimed resources.
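As referenced in step 2, one way to shut down the 3scale pods is to scale their deployment configurations to zero replicas and scale them back up after the restore. A minimal sketch, assuming the default deployment configuration names; adjust the list to match your installation:

    # stop the application pods before moving the storage
    oc scale dc/system-app dc/system-sidekiq dc/backend-listener dc/backend-worker --replicas=0

    # ... create the new PV, restore the data, create the pre-bound PVC ...

    # bring the pods back up once the new storage is in place
    oc scale dc/system-app dc/system-sidekiq dc/backend-listener dc/backend-worker --replicas=1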

1.3.2. Scaling up performance

1.3.2.1. Configuring 3scale on-premise deployments

By default, 3scale deployments run one process per pod. You can increase performance by running more processes per pod. Red Hat recommends running 1-2 processes per core on each node.

Perform the following steps to add more processes to a pod:

  1. Log in to your OpenShift cluster.

    oc login
  2. Switch to your 3scale project.

    oc project <project_name>
  3. Set the appropriate environment variable to the desired number of processes per pod.

    1. APICAST_WORKERS for APIcast pods. Red Hat recommends leaving this environment variable unset so that APIcast determines the number of workers from the number of CPUs available to the APIcast pod.
    2. PUMA_WORKERS for backend pods.
    3. UNICORN_WORKERS for system pods.

      oc set env dc/apicast-production --overwrite APICAST_WORKERS=<number_of_processes>
      oc set env dc/apicast-staging --overwrite APICAST_WORKERS=<number_of_processes>
      oc set env dc/backend-listener --overwrite PUMA_WORKERS=<number_of_processes>
      oc set env dc/system-app --overwrite UNICORN_WORKERS=<number_of_processes>
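After setting a variable, you can confirm the value now present on the deployment configuration, for example:

    oc set env dc/system-app --list | grep UNICORN_WORKERS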

1.3.2.2. Vertical and horizontal hardware scaling

You can increase the performance of your AMP deployment on OpenShift by adding resources. You can add more compute nodes to your OpenShift cluster (horizontal scaling), or you can allocate more resources to existing compute nodes (vertical scaling).

Horizontal Scaling

You can add more compute nodes to your OpenShift cluster. If the additional compute nodes match the existing nodes in your cluster, you do not have to reconfigure any environment variables.
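After the new nodes join the cluster, you can also spread the load by running more gateway and backend replicas across them. A minimal sketch, assuming the default deployment configuration names:

    oc scale dc/apicast-production --replicas=<number_of_replicas>
    oc scale dc/backend-listener --replicas=<number_of_replicas>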

Vertical Scaling

You can allocate more resources to existing compute nodes. To take advantage of the additional resources, you must also add more processes to your pods, as described in Section 1.3.2.1.

Note

Red Hat does not recommend mixing compute nodes of different specifications or configurations in your 3scale deployment.

1.3.2.3. Scaling up routers

As your traffic increases, you must ensure your OCP routers can adequately handle requests. If your routers are limiting the throughput of your requests, you must scale up your router nodes.
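On OpenShift 3.x, the default router usually runs as a deployment configuration named router in the default project. A minimal sketch for scaling it, assuming that layout; verify the name and project on your own cluster first:

    oc scale dc/router --replicas=<number_of_router_pods> -n default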

1.4. Operations troubleshooting

This section explains how to configure 3scale audit logging to display on OpenShift, and how to access 3scale logs and job queues on OpenShift.

1.4.1. Configuring 3scale audit logging on OpenShift

When 3scale is deployed on-premises, you can configure audit logging to stdout to forward all application logs to standard OpenShift pod logs. This enables all logs to be in one place for querying by Elasticsearch, Fluentd, and Kibana (EFK) logging tools. These tools provide increased visibility on changes made to your 3scale configuration, who made these changes, and when. For example, this includes changes to billing, application plans, API configuration, and so on.

Some considerations:

  • By default, audit logging to stdout is disabled when 3scale is deployed on-premises; you need to configure this feature to have it fully functional.
  • Audit logging to stdout is not available for 3scale hosted.

1.4.1.1. Enabling audit logging

3scale uses a features.yml configuration file to enable some global features. To enable audit logging to stdout, you must mount this file from a ConfigMap to replace the default file. The OpenShift pods that depend on features.yml are system-app and system-sidekiq.

Prerequisites

  • You must have cluster administrator access on OpenShift.

Procedure

  1. Enter the following command to enable audit logging to stdout:

    oc patch configmap system -p '{"data": {"features.yml": "features: &default\n  logging:\n    audits_to_stdout: true\n\nproduction:\n  <<: *default\n"}}'
  2. Export the following environment variable:

    export PATCH_SYSTEM_VOLUMES='{"spec":{"template":{"spec":{"volumes":[{"emptyDir":{"medium":"Memory"},"name":"system-tmp"},{"configMap":{"items":[{"key":"zync.yml","path":"zync.yml"},{"key":"rolling_updates.yml","path":"rolling_updates.yml"},{"key":"service_discovery.yml","path":"service_discovery.yml"},{"key":"features.yml","path":"features.yml"}],"name":"system"},"name":"system-config"}]}}}}'
  3. Enter the following command to apply the updated deployment configuration to the relevant OpenShift pods:

    oc patch dc system-app -p $PATCH_SYSTEM_VOLUMES
    oc patch dc system-sidekiq -p $PATCH_SYSTEM_VOLUMES
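After the system-app and system-sidekiq pods finish redeploying, you can check that the patched features.yml is present in the ConfigMap and watch the pod output for audit entries, for example:

    oc get configmap system -o yaml | grep -A 6 'features.yml'
    oc logs -f <system_app_pod> --container=system-provider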

1.4.1.2. Configuring EFK logging

When you have enabled audit logging to stdout to forward 3scale application logs to OpenShift, you can use EFK logging tools to monitor your 3scale applications.

For details on how to configure EFK logging on OpenShift, see the OpenShift logging documentation.

1.4.2. Accessing your logs

Each component’s deployment configuration contains logs for access and exceptions. If you encounter issues with your deployment, check these logs for details.

Follow these steps to access logs in 3scale:

  1. Find the ID of the pod you want logs for:

    oc get pods
  2. Enter oc logs and the ID of your chosen pod:

    oc logs <pod>

    The system pod has two containers, each with a separate log. To access a container’s log, specify the --container parameter with the system-provider or system-developer container name:

    oc logs <pod> --container=system-provider
    oc logs <pod> --container=system-developer
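    To follow a log in real time, or to limit output to the most recent entries, add the standard oc logs options, for example:

    oc logs -f --tail=100 <pod>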

1.4.3. Checking job queues

Job queues contain logs of information sent from the system-sidekiq pods. Use these logs to check if your cluster is processing data. You can query the logs using the OpenShift CLI:

oc get jobs
oc logs <job>
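Because the job queues are processed by Sidekiq, it is often also useful to inspect the system-sidekiq pod logs directly. A minimal sketch, assuming the default pod naming:

oc get pods | grep system-sidekiq
oc logs -f <system_sidekiq_pod>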

1.4.4. Preventing monotonic growth

To prevent monotonic database growth, 3scale by default schedules automatic purging of the following tables:

  • user_sessions - cleanup is triggered once a week and deletes records older than two weeks.
  • audits - cleanup is triggered once a day and deletes records older than three months.
  • log_entries - cleanup is triggered once a day and deletes records older than six months.
  • event_store_events - cleanup is triggered once a week and deletes records older than a week.

Unlike the tables listed above, the alerts table is not purged automatically. The database administrator must purge it manually by using the SQL command for the relevant database type:

  • MySQL:

    DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL 14 DAY;

  • PostgreSQL:

    DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL '14 day';

  • Oracle:

    DELETE FROM alerts WHERE timestamp <= TRUNC(SYSDATE) - 14;

For any other tables not listed in this section, the database administrator must manually purge the records that the system does not clean up automatically.
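One way to run the purge statement is to open a remote shell in the system database pod and use its command-line client. A minimal sketch for a MySQL-based installation, assuming the default system-mysql deployment configuration, a database named system, and a MYSQL_ROOT_PASSWORD environment variable inside the pod; adjust all of these to your own setup:

    # open a remote shell in the database pod
    oc rsh dc/system-mysql
    # inside the pod, run the purge statement against the system database
    mysql -u root -p"$MYSQL_ROOT_PASSWORD" system -e "DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL 14 DAY;"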
