Chapter 4. 3scale AMP 2.0 On-Premises Operations and Scaling Guide
4.1. Introduction
This document describes operations and scaling tasks of a Red Hat 3scale AMP 2.0 On-Premises installation.
4.1.1. Prerequisites
Before you can perform the steps in this guide, you must have installed and initially configured AMP On-Premises on OpenShift 3.3 or 3.4.
This document is not intended for local installations on laptops or similar end user equipment.
4.1.1.1. Further Reading
4.2. Re-deploying APIcast
Once you have deployed AMP On-Premises and your chosen APIcast deployment method, you can test and promote system changes through your AMP dashboard. By default, APIcast deployments on OpenShift, both built-in and on other OpenShift clusters, are configured to allow you to publish changes to your staging and production gateways through the AMP UI.
To redeploy APIcast on OpenShift:
- Make system changes
- In the UI, deploy to staging and test
- In the UI, promote to production
- By default, APIcast retrieves and publishes the promoted update once every 5 minutes
If you are using APIcast on the Docker containerized environment or a native installation, you must configure your staging and production gateways, as well as configure how often your gateway retrieves published changes. Once you have configured your APIcast gateways, you can redeploy APIcast through the AMP UI.
To redeploy APIcast on the Docker containerized environment or a native installation:
- Configure your APIcast gateway and connect it to AMP On-Premises
- Make system changes
- In the UI, deploy to staging and test
- In the UI, promote to production
- APIcast will retrieve and publish the promoted update at the configured frequency
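If you run APIcast in the Docker containerized environment, the gateway is typically pointed at the AMP admin portal and told how long to cache the published configuration before re-fetching it. The commands below are a minimal sketch, not the definitive install procedure: the image name, access token, admin portal domain, port mapping, and the 300-second cache interval are assumptions to replace with your own values, and THREESCALE_PORTAL_ENDPOINT / APICAST_CONFIGURATION_CACHE are assumed to be the configuration variables used by your APIcast version.
# Hypothetical example: run an APIcast gateway that pulls its configuration from the
# AMP admin portal and refreshes it roughly every 300 seconds
docker run -d --name apicast -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<admin-portal-domain> \
  -e APICAST_CONFIGURATION_CACHE=300 \
  registry.access.redhat.com/3scale-amp20/apicast-gateway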
4.3. Scaling up AMP On-Premises
4.3.1. Scaling up Storage
As your APIcast deployment grows, you may need to increase the amount of storage available. How you scale up storage depends on which type of file system you are using for your persistent storage.
If you are using a network file system (NFS), you can scale up your persistent volume using the oc edit pv command:
oc edit pv <pv_name>
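For an NFS-backed volume, the field to raise is typically spec.capacity.storage on the PV. As an illustration only (the PV name and the 20Gi target are placeholders), the same change can be applied non-interactively with oc patch instead of an interactive edit:
# Hypothetical example: grow the PV named <pv_name> to 20Gi without opening an editor
oc patch pv <pv_name> -p '{"spec":{"capacity":{"storage":"20Gi"}}}'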
If you are using any other storage method, you must scale up your persistent volume manually using either of the following methods:
4.3.1.1. Method 1: Back up and Swap Persistent Volumes
- Back up the data on your existing persistent volume
- Create and attach a target persistent volume, scaled for your new size requirements
- Create a pre-bound persistent volume claim (see the sketch after this list). Specify:
  - The size of your new PVC
  - The persistent volume name, using the volumeName field
- Restore data from your backup onto your newly created PV
- Modify your deployment configuration with the name of your new PV:
  oc edit dc/system-app
- Verify your new PV is configured and working correctly
- Delete your previous PVC to release its claimed resources
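A pre-bound claim is simply a claim that names its target volume through the volumeName field. The following is a minimal sketch rather than a template shipped with AMP; the claim name, requested size, access mode (ReadWriteMany is assumed for shared storage), and PV name are placeholders to adjust for your environment.
# Hypothetical example: create a PVC that binds to a specific, larger PV
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <new_pvc_name>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: <new_pv_name>
EOF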
4.3.1.2. Method 2: Back up and Redeploy AMP
- Back up the data on your existing persistent volume
- Shut down your 3scale pods (see the scale-down sketch after this list)
- Create and attach a target persistent volume, scaled for your new size requirements
- Restore data from your backup onto your newly created PV
- Create a pre-bound persistent volume claim. Specify:
  - The size of your new PVC
  - The persistent volume name, using the volumeName field
- Deploy your AMP.yml
- Verify your new PV is configured and working correctly.
- Delete your previous PVC to release its claimed resources.
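For the step that shuts down the 3scale pods, one way to do it on OpenShift is to scale the relevant deployment configurations down to zero replicas and scale them back up after the restore. This is a sketch under that assumption; check the deployment configuration names in your own project with oc get dc and repeat the scale command for each one you need to stop.
# Hypothetical example: stop the 3scale application pods before swapping storage
oc get dc                              # review the deployment configurations in your project
oc scale dc/system-app --replicas=0    # repeat for the other deployment configurations
# ...restore data onto the new PV and redeploy...
oc scale dc/system-app --replicas=1    # scale back up once the new PV is in place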
4.3.2. Scaling up Performance
4.3.2.1. Configuring 3scale On-Premises Deployments
By default, 3scale deployments run 1 process per pod. You can increase performance by running more processes per pod. Red Hat recommends running 1-2 processes per core on each node.
Perform the following steps to add more processes to a pod:
- Log in to your OpenShift cluster:
  oc login
- Switch to your 3scale project:
  oc project <project_name>
- Set the appropriate environment variable to the desired number of processes per pod:
  - APICAST_WORKERS for APIcast pods (Red Hat recommends no more than 2 per deployment)
  - PUMA_WORKERS for backend pods
  - UNICORN_WORKERS for system pods
  oc env dc/apicast --overwrite APICAST_WORKERS=<number_of_processes>
  oc env dc/backend --overwrite PUMA_WORKERS=<number_of_processes>
  oc env dc/system-app --overwrite UNICORN_WORKERS=<number_of_processes>
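As a quick check (not part of the procedure above), you can list the environment currently set on a deployment configuration to confirm the new value before the pods redeploy:
# List the environment variables set on the APIcast deployment configuration
oc env dc/apicast --list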
4.3.2.2. Vertical and Horizontal Hardware Scaling
You can increase the performance of your AMP deployment on OpenShift by adding resources. You can add more compute nodes to your OpenShift cluster (horizontal scaling), or you can allocate more resources to existing compute nodes (vertical scaling).
Horizontal Scaling
You can add more compute nodes to your OpenShift cluster. As long as the additional compute nodes match the existing nodes in your cluster, you do not have to reconfigure any environment variables.
Vertical Scaling
You can allocate more resources to existing compute nodes. If you allocate more resources, you must add additional processes to your pods to increase performance.
Note
Red Hat does not recommend mixing compute nodes of a different specification or configuration on your 3scale deployment.
4.3.2.3. Scaling Up Routers
As your traffic increases, you must ensure your OCP routers can adequately handle requests. If your routers are limiting the throughput of your requests, you must scale up your router nodes.
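On OpenShift 3.3 and 3.4 the default router usually runs as a deployment configuration named router in the default project, so one common way to add router capacity is to raise its replica count. A sketch under those assumptions; adjust the name, project, and replica count for your environment.
# Hypothetical example: run three router replicas in the default project
oc scale dc/router --replicas=3 -n default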
4.3.2.4. Further Reading
- Scaling tasks, adding hardware compute nodes to OpenShift
- Adding Compute Nodes
- Routers
4.4. Operations Troubleshooting
4.4.1. Access Your Logs
Each component’s deployment configuration contains logs for access and exceptions. If you encounter issues with your deployment, check these logs for details.
Follow these steps to access logs in 3scale:
- Find the ID of the pod you want logs for:
  oc get pods
- Enter oc logs and the ID of your chosen pod:
  oc logs <pod>
The system pod has two containers, each with a separate log. To access a container’s log, specify the --container parameter with either system-provider or system-developer:
oc logs <pod> --container=system-provider
oc logs <pod> --container=system-developer
4.4.2. Job Queues
Job Queues contain logs of information sent from the system-resque and system-sidekiq pods. Use these logs to check if your cluster is processing data. You can query the logs using the OpenShift CLI:
oc get jobs
oc logs <job>
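To watch the workers process jobs in real time, you can also stream a pod’s log with the -f flag; the pod name below is a placeholder taken from the oc get pods output.
# Stream the sidekiq worker pod's log; press Ctrl+C to stop
oc logs -f <system-sidekiq_pod>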