Chapter 2. 3scale API Management operations and scaling
This section describes operations and scaling tasks of a Red Hat 3scale API Management 2.14 installation. It is not intended for local installations on laptops or similar end user equipment.
Prerequisites
- An installed and initially configured 3scale On-premises instance on a supported OpenShift version.
To carry out 3scale operations and scaling tasks, perform the steps outlined in the following sections:
2.1. Redeploying APIcast
You can test and promote system changes through the 3scale Admin Portal.
Prerequisites
- A deployed instance of 3scale On-premises.
- You have chosen your APIcast deployment method.
By default, APIcast deployments on OpenShift, both embedded and on other OpenShift clusters, are configured to allow you to publish changes to your staging and production gateways through the 3scale Admin Portal.
To redeploy APIcast on OpenShift:
Procedure
- Make system changes.
- In the Admin Portal, deploy to staging and test.
- In the Admin Portal, promote to production.
By default, APIcast retrieves and publishes the promoted update once every 5 minutes.
If you are using APIcast on the Docker containerized environment or a native installation, configure your staging and production gateways, and indicate how often the gateway retrieves published changes. After you have configured your APIcast gateways, you can redeploy APIcast through the 3scale Admin Portal.
To redeploy APIcast on the Docker containerized environment or a native installation:
Procedure
- Configure your APIcast gateway and connect it to 3scale On-premises.
- Make system changes.
- In the Admin Portal, deploy to staging and test.
- In the Admin Portal, promote to production.
APIcast retrieves and publishes the promoted update at the configured frequency.
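For illustration, starting a self-managed APIcast gateway on Docker might look like the following. This is a sketch, not the definitive invocation: the image reference is indicative of the 3scale 2.x gateway image, the portal endpoint placeholders are yours to fill in, and APICAST_CONFIGURATION_CACHE (in seconds) is the variable that controls how often the gateway retrieves published changes:

$ docker run -d -p 8080:8080 \
    -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<admin_portal_domain> \
    -e THREESCALE_DEPLOYMENT_ENV=production \
    -e APICAST_CONFIGURATION_LOADER=lazy \
    -e APICAST_CONFIGURATION_CACHE=300 \
    registry.redhat.io/3scale-amp2/apicast-gateway-rhel8

With these example settings, the gateway loads configuration on demand and caches it for 300 seconds, so a promoted change is picked up within five minutes.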
2.2. Scaling up 3scale API Management On-premises
As your APIcast deployment grows, you may need to increase the amount of storage available. How you scale up storage depends on which type of file system you are using for your persistent storage.
If you are using a network file system (NFS), you can scale up your persistent volume (PV) using this command:
$ oc edit pv <pv_name>
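In the editor, increase the PV's declared capacity by raising the spec.capacity.storage field; the size shown here is illustrative:

spec:
  capacity:
    storage: 100Gi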
If you are using any other storage method, you must scale up your persistent volume manually using one of the methods listed in the following sections.
2.2.1. Method 1: Backing up and swapping persistent volumes
Procedure
- Back up the data on your existing persistent volume.
- Create and attach a target persistent volume, scaled for your new size requirements.
- Create a pre-bound persistent volume claim (PVC). Specify the size of your new PVC and the persistent volume name using the volumeName field; a sketch of such a claim follows this procedure.
- Restore data from your backup onto your newly created PV.
- Modify your deployment configuration with the name of your new PV:
$ oc edit dc/system-app
- Verify your new PV is configured and working correctly.
- Delete your previous PVC to release its claimed resources.
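The following is a minimal sketch of such a pre-bound claim. The claim name, access mode, and size are illustrative; volumeName must match the target PV you created:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: system-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: <target_pv_name>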
2.2.2. Method 2: Backing up and redeploying 3scale API Management
Procedure
- Back up the data on your existing persistent volume.
- Shut down your 3scale pods.
- Create and attach a target persistent volume, scaled for your new size requirements.
- Restore data from your backup onto your newly created PV.
Create a pre-bound persistent volume claim. Specify:
- The size of your new PVC
- The persistent volume name, using the volumeName field (see the claim sketch in Method 1)
- Deploy your amp.yml (see the note after this procedure).
- Verify your new PV is configured and working correctly.
- Delete your previous PVC to release its claimed resources.
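If your 3scale instance was installed from the amp.yml template, redeploying it might look like the following; this is an assumption about a template-based installation, so adjust the file path and any template parameters to match how you originally deployed:

$ oc new-app --file amp.yml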
2.2.3. Configuring 3scale API Management on-premise deployments
The key deployment configurations to be scaled for 3scale are:
- APIcast production
- Backend listener
- Backend worker
2.2.3.1. Scaling via the OCP
Using an APIManager custom resource (CR) on OpenShift Container Platform (OCP), you can scale a deployment configuration either up or down.
To scale a particular deployment configuration, use the following:
Scale up an APIcast production deployment configuration with the following APIManager CR:
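A minimal sketch of such a CR, assuming the standard APIManager fields (apps.3scale.net/v1alpha1); the metadata name and replica count are illustrative:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      replicas: 2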
Scale up the backend listener, backend worker, and backend cron components of your deployment configuration with the following APIManager CR:
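A matching sketch for the backend components, again with an illustrative name and replica counts:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      replicas: 2
    workerSpec:
      replicas: 2
    cronSpec:
      replicas: 2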
Set the appropriate environment variable to the desired number of processes per pod.
- PUMA_WORKERS for backend-listener pods:
$ oc set env dc/backend-listener --overwrite PUMA_WORKERS=<number_of_processes>
- UNICORN_WORKERS for system-app pods:
$ oc set env dc/system-app --overwrite UNICORN_WORKERS=<number_of_processes>
2.2.3.2. Vertical and horizontal hardware scaling
You can increase the performance of your 3scale deployment on OpenShift by adding resources. You can add more compute nodes as pods to your OpenShift cluster (horizontal scaling), or allocate more resources to existing compute nodes (vertical scaling).
Horizontal scaling
You can add more compute nodes as pods to your OpenShift cluster. If the additional compute nodes match the existing nodes in your cluster, you do not have to reconfigure any environment variables.
Vertical scaling
You can allocate more resources to existing compute nodes. If you allocate more resources, you must add additional processes to your pods to increase performance.
Avoid the use of compute nodes with different specifications and configurations in your 3scale deployment.
2.2.3.3. Scaling up routers
As traffic increases, ensure your Red Hat OCP routers can adequately handle requests. If your routers are limiting the throughput of your requests, you must scale up your router nodes.
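On OpenShift Container Platform 4, for example, router capacity is governed by the default IngressController, so scaling up router nodes can be a matter of raising its replica count; the replica value here is illustrative:

$ oc patch -n openshift-ingress-operator ingresscontroller/default --type=merge --patch '{"spec":{"replicas": 3}}'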
2.3. Operations troubleshooting
This section explains how to configure 3scale audit logging to display on OpenShift, and how to access 3scale logs and job queues on OpenShift.
2.3.1. Configuring 3scale API Management audit logging on OpenShift
Configuring audit logging on OpenShift puts all 3scale logs in one place for querying by Elasticsearch, Fluentd, and Kibana (EFK) logging tools. These tools give you increased visibility into changes made to your 3scale configuration, who made them, and when. For example, this includes changes to billing, application plans, and application programming interface (API) configuration.
Prerequisites
- A 3scale 2.14 deployment.
Procedure
Configure audit logging to stdout to forward all application logs to standard OpenShift pod logs.
Some considerations:
- By default, audit logging to stdout is disabled when 3scale is deployed on-premises; you need to configure this feature to have it fully functional.
- Audit logging to stdout is not available for 3scale hosted.
2.3.2. Enabling audit logging
3scale uses a features.yml configuration file to enable some global features. To enable audit logging to stdout, you must mount this file from a ConfigMap to replace the default file. The OpenShift pods that depend on features.yml are system-app and system-sidekiq.
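For reference, the file content written by the patch in the following procedure is equivalent to this YAML (indentation shown normalized), which turns the audits_to_stdout setting on for the production environment:

features: &default
  logging:
    audits_to_stdout: true

production:
  <<: *default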
Prerequisites
- You must have administrator access for the 3scale project.
Procedure
- Enter the following command to enable audit logging to stdout:
$ oc patch configmap system -p '{"data": {"features.yml": "features: &default\n logging:\n audits_to_stdout: true\n\nproduction:\n <<: *default\n"}}'
- Export the following environment variable:
$ export PATCH_SYSTEM_VOLUMES='{"spec":{"template":{"spec":{"volumes":[{"emptyDir":{"medium":"Memory"},"name":"system-tmp"},{"configMap":{"items":[{"key":"zync.yml","path":"zync.yml"},{"key":"rolling_updates.yml","path":"rolling_updates.yml"},{"key":"service_discovery.yml","path":"service_discovery.yml"},{"key":"features.yml","path":"features.yml"}],"name":"system"},"name":"system-config"}]}}}}'
- Enter the following commands to apply the updated deployment configuration to the relevant OpenShift pods:
$ oc patch dc system-app -p $PATCH_SYSTEM_VOLUMES
$ oc patch dc system-sidekiq -p $PATCH_SYSTEM_VOLUMES
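To confirm that the ConfigMap now carries the updated file, one option is to print it back; this sketch assumes the default ConfigMap name of system:

$ oc get configmap system -o jsonpath='{.data.features\.yml}'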
2.3.3. Configuring logging for Red Hat OpenShift
When you have enabled audit logging to forward 3scale application logs to OpenShift, you can use logging tools to monitor your 3scale applications.
For details on configuring logging on Red Hat OpenShift, see the Red Hat OpenShift logging documentation.
2.3.4. Accessing your logs
Each component’s deployment configuration contains logs for access and exceptions. If you encounter issues with your deployment, check these logs for details.
Follow these steps to access logs in 3scale:
Procedure
- Find the ID of the pod you want logs for:
$ oc get pods
- Enter oc logs and the ID of your chosen pod:
$ oc logs <pod>
- The system pod has two containers, each with a separate log. To access a container's log, specify the --container parameter with the system-provider or system-developer container:
$ oc logs <pod> --container=system-provider
$ oc logs <pod> --container=system-developer
2.3.5. Checking job queues
Job queues contain logs of information sent from the system-sidekiq pods. Use these logs to check if your cluster is processing data. You can query the logs using the OpenShift CLI:
$ oc get jobs
$ oc logs <job>
2.3.6. Preventing monotonic growth
To prevent monotonic growth, 3scale by default schedules automatic purging of the following tables:

- user_sessions: clean up is triggered once a week and deletes records older than two weeks.
- audits: clean up is triggered once a day and deletes records older than three months.
- log_entries: clean up is triggered once a day and deletes records older than six months.
- event_store_events: clean up is triggered once a week and deletes records older than a week.
Unlike the tables listed above, the alerts table requires manual purging by the database administrator:
| Database type | SQL command |
|---|---|
| MySQL | DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL 14 DAY; |
| PostgreSQL | DELETE FROM alerts WHERE timestamp < NOW() - INTERVAL '14 day'; |
| Oracle | DELETE FROM alerts WHERE timestamp <= TRUNC(SYSDATE) - 14; |