Installing Red Hat 3scale API Management
Abstract
Install and configure 3scale API Management.
Preface
DO NOT ATTEMPT TO INSTALL OR UPGRADE TO 3scale 2.16 IF YOUR DEPLOYMENT USES ORACLE DATABASE. 3scale 2.16 is currently not compatible with Oracle DB. Upgrading from 2.15 to 2.16 in such environments will lead to severe issues preventing the system from operating correctly. Deployments using Oracle DB must stay on version 2.15 until compatibility is added in a future maintenance release (planned for 2.16.1).
This guide helps you install and configure 3scale.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
- The URL of the page where you found the issue.
- A detailed description of the issue.
You can leave the information in any other fields at their default values.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. Installing 3scale API Management on OpenShift
This section walks you through steps to deploy Red Hat 3scale API Management 2.16 on OpenShift.
The 3scale solution for on-premises deployment is composed of:
- Two application programming interface (API) gateways: embedded APIcast.
- One 3scale Admin Portal and Developer Portal with persistent storage.
- The 3scale Istio Adapter is available as an optional adapter that allows labeling a service running within the Red Hat OpenShift Service Mesh and integrating that service with 3scale. See the 3scale adapter documentation for more information.
Prerequisites
- You must configure 3scale servers for UTC (Coordinated Universal Time).
To install 3scale on OpenShift, perform the steps outlined in the following sections:
- System requirements for installing 3scale API Management on OpenShift
- Deploying 3scale API Management using the operator
- External databases for 3scale API Management using the operator
- Deployment configuration options for 3scale API Management on OpenShift using the operator
- Installing 3scale API Management with the operator using Oracle as the system database
- Troubleshooting common 3scale API Management installation issues
1.1. System requirements for installing 3scale API Management on OpenShift
This section lists the system requirements for installing Red Hat 3scale API Management on OpenShift.
1.1.1. Environment requirements
3scale requires an environment specified in supported configurations.
The requirements for persistent volumes vary between different deployment types. When deploying with external databases, persistent volumes are not necessary. For some deployment types, an Amazon S3 bucket can serve as a substitute for persistent volumes. If you use local file system storage, consider the specific deployment type and its associated requirements for persistent volumes.
Persistent volumes
- 4 RWO (ReadWriteOnce) persistent volumes for Redis, MySQL, and System-searchd persistence.
- 1 RWX (ReadWriteMany) persistent volume for Developer Portal content and System-app Assets.
Configure the RWX persistent volume to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.
Network File System (NFS) is supported on 3scale for the RWX volume only.
For IBM Power (ppc64le) and IBM Z (s390x), provision local storage using the following:
Storage
- NFS
If you are using an Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage:
Persistent volumes
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence.
Storage
- 1 Amazon S3 bucket
- NFS
1.1.2. Hardware requirements
Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. The following are the recommendations when configuring your environment for 3scale on OpenShift:
- Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
- Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM. Note that this is true only when the deployment is with embedded Redis.
- Separate nodes between routing and compute tasks.
- Dedicated computing nodes for 3scale specific tasks.
Additional resources
1.2. Installing the 3scale API Management operator on OpenShift
3scale supports the last two general availability (GA) releases of OpenShift Container Platform (OCP). For more information, see the Red Hat 3scale API Management Supported Configurations page.
This documentation shows you how to:
- Create a new project.
- Deploy a Red Hat 3scale API Management instance.
- Deploy the custom resources once the operator has been deployed.
Prerequisites
- Access to a supported version of an OpenShift Container Platform 4 cluster using an account with administrator privileges.
- For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page.
Deploy the 3scale operator and custom resource definitions (CRDs) in a separate newly created, empty project. If you deploy them in an existing project containing infrastructure, it could alter or delete existing elements.
To install the 3scale operator on OpenShift, perform the steps outlined in the following sections:
1.2.1. Creating a new OpenShift project
This procedure explains how to create a new OpenShift project named 3scale-project. Replace this project name with your own.
Procedure
To create a new OpenShift project:
Indicate a valid name using alphanumeric characters and dashes. For example, run the following command to create 3scale-project:

```
$ oc new-project 3scale-project
```
This creates the new OpenShift project where the operator, the APIManager custom resource (CR), and the Capabilities custom resources will be installed. The operator manages the custom resources through OLM in that project.
1.2.2. Installing and configuring the 3scale API Management operator using the OLM
Use Operator Lifecycle Manager (OLM) to install the 3scale operator on an OpenShift Container Platform (OCP) 4.12 (or above) cluster through the OperatorHub in the OCP console. You can install the 3scale operator using the following installation modes:
- Cluster-wide, in which the operator is available in all namespaces on the cluster.
- A specific namespace on the cluster, in which the operator is available only in that namespace.
If you are using OpenShift Container Platform on a restricted network or a disconnected cluster, Operator Lifecycle Manager cannot use the OperatorHub. Follow the instructions for setting up and using OLM in the guide titled Using Operator Lifecycle Manager on restricted networks.
Prerequisites
- You must install and deploy the 3scale operator in the project that you defined in Creating a new OpenShift project.
Procedure
- In the OpenShift Container Platform console, log in using an account with administrator privileges.
- Click Operators > OperatorHub.
- In the Filter by keyword box, type 3scale operator to find Red Hat Integration - 3scale.
- Click Red Hat Integration - 3scale. Information about the operator is displayed.
- Read the information about the operator and click Install. The Install Operator page opens.
- On the Install Operator page, select the desired channel to update in the Update channel section.
In the Installation mode section, select where to install the operator.
- All namespaces on the cluster (default) - The operator will be available in all namespaces on the cluster.
- A specific namespace on the cluster - The operator will only be available in the specific single namespace on the cluster that you have selected.
- Click Install.
- After the installation is complete, the system displays a confirmation message indicating that the operator is ready for use.
Verify that the 3scale operator ClusterServiceVersion (CSV) is correctly installed and that it reports a successful installation:
- Click Operators > Installed Operators.
- Click on the Red Hat Integration - 3scale operator.
- In the Details tab, scroll down to the Conditions section, where the Succeeded condition should read InstallSucceeded under the Reason column.
In addition to the indicated procedure, create a list of the allowed domains you intend to use in the 3scale Developer Portal while using OCP on restricted networks. Consider the following examples:
- Any link you intend to add to the Developer Portal.
- Single sign-on (SSO) integrations through third party SSO providers such as GitHub.
- Billing.
- Webhooks that trigger an external URL.
1.2.2.1. Restrictions in disconnected environments
The following list outlines current restrictions in a disconnected environment for 3scale 2.16:
- The GitHub login to the Developer Portal is not available.
- Support links are not operational.
- Links to external documentation are not operational.
- The validator for the OpenAPI Specification (OAS) in the Developer Portal is not operational, affecting links to external services.
- In the product Overview page in ActiveDocs, links to OAS are not operational.
- It is also necessary to check the option Skip swagger validations when you create a new ActiveDocs specification.
1.2.3. Upgrading the 3scale API Management operator using the OLM
To upgrade the 3scale operator from a single namespace to a cluster-wide installation in all namespaces on an operator-based deployment, you must remove the 3scale operator from the namespace and then reinstall the operator on the cluster.
Cluster administrators can delete installed operators from a selected namespace by using the web console. Uninstalling the operator does not uninstall an existing 3scale instance.
After the 3scale operator is uninstalled from the namespace, you can use OLM to install the operator in the cluster-wide mode.
Prerequisites
- 3scale administrator permissions or an OpenShift role that has delete permissions for the namespace.
Procedure
- In the OpenShift Container Platform console, log in using an account with administrator privileges.
- Click Operators > Installed Operators. The Installed Operators page is displayed.
- Enter 3scale into the Filter by name box to find the operator, and then click it.
- On the Operator Details page, select Uninstall Operator from the Actions drop-down menu to remove it from a specific namespace.
An Uninstall Operator? dialog box is displayed, reminding you of the following:

Removing the operator will not remove any of its custom resource definitions or managed resources. If your operator has deployed applications on the cluster or configured off-cluster resources, these will continue to run and need to be cleaned up manually. This action removes the operator as well as the Operator deployments and pods, if any. Any operands and resources managed by the operator, including CRDs and CRs, are not removed. The web console enables dashboards and navigation items for some operators. To remove these after uninstalling the operator, you might need to manually delete the operator CRDs.

- Select Uninstall. The operator stops running and no longer receives updates.
- In the OpenShift Container Platform console, click Operators > OperatorHub.
- In the Filter by keyword box, type 3scale operator to find Red Hat Integration - 3scale.
- Click Red Hat Integration - 3scale. Information about the operator is displayed.
- Click Install. The Install Operator page opens.
- On the Install Operator page, select the desired channel to update in the Update channel section.
- In the Installation mode section, select All namespaces on the cluster (default). The operator will be available in all namespaces on the cluster.
- Click Subscribe. The 3scale operator details page is displayed and you can see the Subscription Overview.
- Confirm that the subscription Upgrade Status is displayed as Up to date.
- Verify that the 3scale operator ClusterServiceVersion (CSV) is displayed.
Additional Resources
1.2.3.1. Configuring automated application of micro releases
If you are using an external Oracle database, set the 3scale update strategy to Manual. With an external Oracle database, the database and the .spec.system.image are updated manually. The Automatic setting would not update the .spec.system.image. See the Migrating 3scale guide to update an operator-based installation with an external Oracle database.
To get automatic updates, the 3scale operator must have its approval strategy set to Automatic. This allows it to apply micro release updates automatically. The following describes the differences between Automatic and Manual settings, and outlines the steps in a procedure to change from one to the other.
Automatic and manual:
- During installation, Automatic is the selected option by default. Installation of new updates occurs as they become available. You can change this during the installation or at any time afterwards.
- If you select the Manual option during installation or at any time afterwards, you are notified when updates are available, but you must approve the Install Plan and apply it yourself.
Procedure
- Click Operators > Installed Operators.
- Click Red Hat Integration - 3scale from the list of Installed Operators.
- Click the Subscription tab. Under the Subscription Details heading, you will see the subheading Approval.
- Click the link below Approval. The link is set to Automatic by default. A dialog box with the heading Change Update Approval Strategy opens.
- Choose the option of your preference: Automatic (default) or Manual, and then click Save.
Additional resources
1.3. Installing the APIcast operator on OpenShift
This guide provides steps for installing the APIcast operator through the OpenShift Container Platform (OCP) console.
Prerequisites
- OCP 4.x or later with administrator privileges.
Procedure
- Create a new project operator-test in Projects > Create Project.
- Click Operators > OperatorHub.
- In the Filter by keyword box, type apicast operator to find Red Hat Integration - 3scale APIcast gateway.
- Click Red Hat Integration - 3scale APIcast gateway. Information about the APIcast operator is displayed.
- Click Install. The Create Operator Subscription page opens.
- Click Install to accept all of the default selections on the Create Operator Subscription page.

Note: You can select different operator versions and installation modes, such as cluster-wide or namespace-specific options. There can only be one cluster-wide installation per cluster.
- The subscription Upgrade Status is shown as Up to date.
- Click Operators > Installed Operators to verify that the APIcast operator ClusterServiceVersion (CSV) status displays InstallSucceeded in the operator-test project.
1.4. Deploying 3scale API Management using the operator
This section takes you through installing and deploying the 3scale solution by using the 3scale operator and the APIManager custom resource (CR).
Wildcard routes have been removed since 3scale 2.6.
- This functionality is handled by Zync in the background.
- When API providers are created, updated, or deleted, routes automatically reflect those changes.
Prerequisites
- To make sure you receive automatic updates of micro releases for 3scale, you must have enabled the automatic approval functionality in the 3scale operator. Automatic is the default approval setting. To change this at any time based on your specific needs, use the steps for Configuring automated application of micro releases.
- Deploying 3scale API Management using the operator first requires that you follow the steps in Installing the 3scale API Management operator on OpenShift.
- OpenShift Container Platform 4.
- A user account with administrator privileges in the OpenShift cluster.
- For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page.
Follow these procedures to deploy 3scale using the operator:
1.4.1. Deploying the APIManager custom resource
If you decide to use Amazon Simple Storage Service (Amazon S3), see Amazon Simple Storage Service 3scale API Management fileStorage installation.
The operator watches for APIManager CRs and deploys your required 3scale solution as specified in the APIManager CR.
Procedure
Click Operators > Installed Operators.
- From the list of Installed Operators, click Red Hat Integration - 3scale.
- Click the API Manager tab.
- Click Create APIManager.
Clear the sample content and add the following YAML definitions to the editor, then click Create.
Before 3scale 2.8, you could configure the automatic addition of replicas by setting the highAvailability field to true. From 3scale 2.8, the addition of replicas is controlled through the replicas field in the APIManager CR.

Note: The value of the wildcardDomain parameter must be a valid domain name that resolves to the address of your OpenShift Container Platform (OCP) router. For example, apps.mycluster.example.com.
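The following APIManager CR sketches assume the apps.3scale.net/v1alpha1 API version; the metadata name and the wildcard domain are placeholder values to replace with your own, and you should verify the field names against the APIManager CRD shipped with your operator version.

APIManager CR with minimum requirements:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: apps.mycluster.example.com
```

APIManager CR with replicas configured:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: apps.mycluster.example.com
  apicast:
    productionSpec:
      replicas: 1
    stagingSpec:
      replicas: 1
  backend:
    listenerSpec:
      replicas: 1
    workerSpec:
      replicas: 1
    cronSpec:
      replicas: 1
  system:
    appSpec:
      replicas: 1
    sidekiqSpec:
      replicas: 1
  zync:
    appSpec:
      replicas: 1
    queSpec:
      replicas: 1
```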
1.4.2. Getting the Admin Portal URL
When you deploy 3scale using the operator, a default tenant is created with a fixed URL: 3scale-admin.${wildcardDomain}.
The wildcardDomain is the <wildCardDomain> parameter you provided during the installation. Open this unique URL in a browser using the following command:

```
$ xdg-open https://3scale-admin.3scale-project.example.com
```
Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}.
1.4.3. Getting the APIManager Admin Portal and Master Admin Portal credentials
To log in to either the 3scale Admin Portal or Master Admin Portal after the operator-based deployment, you need the credentials for each separate portal. In the following commands, replace <namespace> with the name of the namespace where the APIManager resource was created. To get these credentials:
Run the following commands to get the Admin Portal credentials:
```
$ oc get secret system-seed -n <namespace> -o json | jq -r .data.ADMIN_USER | base64 -d
$ oc get secret system-seed -n <namespace> -o json | jq -r .data.ADMIN_PASSWORD | base64 -d
```

- Log in as the Admin Portal administrator to verify these credentials are working.
Run the following commands to get the Master Admin Portal credentials:
```
$ oc get secret system-seed -n <namespace> -o json | jq -r .data.MASTER_USER | base64 -d
$ oc get secret system-seed -n <namespace> -o json | jq -r .data.MASTER_PASSWORD | base64 -d
```

- Log in as the Master Admin Portal administrator to verify these credentials are working.
Additional resources
1.4.4. External databases for 3scale API Management using the operator
Externalizing databases from a Red Hat 3scale API Management deployment provides isolation from the application and resilience against service disruptions at the database level. The resilience to service disruptions depends on the service level agreements (SLAs) provided by the infrastructure or platform provider where you host the databases; it is not offered by 3scale. For more details on the database externalization offered by your chosen deployment, see the associated documentation.
When you use external databases for 3scale with the operator, the aim is to provide uninterrupted uptime if, for example, one or more databases were to fail.
If you use external databases in your 3scale operator-based deployment, note the following:
- Configure and deploy 3scale critical databases externally. Critical databases include the system database, system redis, and backend redis components. Ensure that you deploy and configure these components in a way that makes them highly available.
Specify the connection endpoints to those components for 3scale by creating their corresponding Kubernetes secrets before deploying 3scale.
- See External databases installation for more information.
- See Enabling Pod Disruption Budgets for more information about non-database deployment configurations.
In the APIManager CR, set the .spec.externalComponents attribute to specify that the system database, system Redis, and backend Redis are external.
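As a sketch, assuming the apps.3scale.net/v1alpha1 API version and a placeholder metadata name, the corresponding APIManager CR fragment looks like the following:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  externalComponents:
    system:
      database: true
      redis: true
    backend:
      redis: true
```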
Additionally, if you want the zync database to be highly available to avoid zync potentially losing queue jobs data on restart, note the following:
- Deploy and configure the zync database externally. Make sure you deploy and configure the database in a way that it is highly available.
Specify the connection endpoint to the zync database for 3scale by creating its corresponding Kubernetes secret before deploying 3scale.
- See Zync database secret.
- Configure 3scale by setting the .spec.externalComponents.zync.database attribute in the APIManager CR to true to specify that the zync database is an external database.
1.5. Deployment configuration options for 3scale API Management on OpenShift using the operator
Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
This section provides information about the deployment configuration options for Red Hat 3scale API Management on OpenShift using the operator.
Prerequisites
- Deploying 3scale API Management using the operator first requires that you follow the steps in Installing the 3scale API Management operator on OpenShift.
- OpenShift Container Platform 4.x.
- A user account with administrator privileges in the OpenShift cluster.
1.5.1. Configuring proxy parameters for embedded APIcast
As a 3scale administrator, you can configure proxy parameters for embedded APIcast staging and production. This section provides reference information for specifying proxy parameters in an APIManager custom resource (CR). That is, you use the 3scale operator and an APIManager CR to deploy 3scale on OpenShift.
You can specify these parameters when you deploy an APIManager CR for the first time or you can update a deployed APIManager CR and the operator will reconcile the update. See Deploying the APIManager custom resource.
There are four proxy-related configuration parameters for embedded APIcast:
- allProxy
- httpProxy
- httpsProxy
- noProxy
allProxy
The allProxy parameter specifies an HTTP or HTTPS proxy to be used for connecting to services when a request does not specify a protocol-specific proxy.
After you set up a proxy, configure APIcast by setting the allProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.
The value of the allProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.allProxy parameter or the spec.apicast.stagingSpec.allProxy parameter:
<scheme>://<host>:<port>
For example:
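The following is a minimal APIManager fragment, assuming a hypothetical proxy reachable at forward-proxy on port 80; replace the host and port with your own proxy's address:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      allProxy: http://forward-proxy:80
    stagingSpec:
      allProxy: http://forward-proxy:80
```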
httpProxy
The httpProxy parameter specifies an HTTP proxy to be used for connecting to HTTP services.
After you set up a proxy, configure APIcast by setting the httpProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.
The value of the httpProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.httpProxy parameter or the spec.apicast.stagingSpec.httpProxy parameter:
http://<host>:<port>
For example:
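The following is a minimal APIManager fragment, assuming a hypothetical HTTP proxy reachable at forward-proxy on port 80:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      httpProxy: http://forward-proxy:80
    stagingSpec:
      httpProxy: http://forward-proxy:80
```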
httpsProxy
The httpsProxy parameter specifies an HTTPS proxy to be used for connecting to services.
After you set up a proxy, configure APIcast by setting the httpsProxy parameter to the address of the proxy. Authentication is not supported for the proxy. In other words, APIcast does not send authenticated requests to the proxy.
The value of the httpsProxy parameter is a string, there is no default, and the parameter is not required. Use this format to set the spec.apicast.productionSpec.httpsProxy parameter or the spec.apicast.stagingSpec.httpsProxy parameter:
https://<host>:<port>
For example:
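The following is a minimal APIManager fragment, assuming a hypothetical HTTPS proxy reachable at forward-proxy on port 443:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      httpsProxy: https://forward-proxy:443
    stagingSpec:
      httpsProxy: https://forward-proxy:443
```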
noProxy
The noProxy parameter specifies a comma-separated list of hostnames and domain names. When a request contains one of these names, APIcast does not proxy the request.
If you need to stop access to the proxy, for example during maintenance operations, set the noProxy parameter to an asterisk (*). This matches all hosts specified in all requests and effectively disables any proxies.
The value of the noProxy parameter is a string, there is no default, and the parameter is not required. Specify a comma-separated string to set the spec.apicast.productionSpec.noProxy parameter or the spec.apicast.stagingSpec.noProxy parameter. For example:
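The following sketch uses hypothetical hostnames for destinations that APIcast should reach directly rather than through a proxy:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      noProxy: theStore,my.example.com,mydomain
    stagingSpec:
      noProxy: theStore,my.example.com,mydomain
```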
1.5.2. Injecting custom environments with the 3scale API Management operator
In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to inject custom environments. Embedded APIcast is also referred to as managed or hosted APIcast. A custom environment defines behavior that APIcast applies to all upstream APIs that the gateway serves. To create a custom environment, define a global configuration in Lua code.
You can inject a custom environment before or after 3scale installation. After injecting a custom environment and after 3scale installation, you can remove a custom environment. The 3scale operator reconciles the changes.
Prerequisites
- The 3scale operator is installed.
Procedure
Write Lua code that defines the custom environment that you want to inject. For example, an env1.lua file can define a custom logging policy that the 3scale operator loads for all services.

Create a secret from the Lua file that defines the custom environment. For example:

```
$ oc create secret generic custom-env-1 --from-file=./env1.lua
```
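The env1.lua file might look like the following sketch, which inserts the built-in logging policy at the start of every service's policy chain; the policy name and configuration keys are assumptions to verify against your APIcast version:

```lua
-- Hypothetical custom environment: load the built-in 'logging' policy
-- and insert it at position 1 of every service's policy chain.
local PolicyChain = require('apicast.policy_chain')
local policy_chain = context.policy_chain

policy_chain:insert(
  PolicyChain.load_policy('logging', 'builtin', {
    custom_logging = 'Got request for service {{ service.id }} ({{ service.name }})'
  }), 1)

return {
  policy_chain = policy_chain,
  port = { metrics = 9421 },
}
```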
–from-fileoption for each file that defines a custom environment. The operator loads each custom environment.Define an APIManager custom resource (CR) that references the secret you just created. The following example shows only content relative to referencing the secret that defines the custom environment.
An APIManager CR can reference multiple secrets that define custom environments. The operator loads each custom environment.
Create the APIManager CR that adds the custom environment. For example:
```
$ oc apply -f apimanager.yaml
```
Next steps
You cannot update the content of a secret that defines a custom environment. If you need to update the custom environment, you can do either of the following:

- The recommended option is to create a secret with a different name and update the APIManager CR field customEnvironments[].secretRef.name. The operator triggers a rolling update and loads the updated custom environment.
- Alternatively, you can update the existing secret, redeploy APIcast by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas to 0, and then redeploy APIcast again by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas back to its previous value.
1.5.3. Injecting custom policies with the 3scale API Management operator
In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to inject custom policies. Embedded APIcast is also referred to as managed or hosted APIcast. Injecting a custom policy adds the policy code to APIcast. You can then use either of the following to add the custom policy to an API product’s policy chain:
- 3scale API
- Product custom resource (CR)
To use the 3scale Admin Portal to add the custom policy to a product’s policy chain, you must also register the custom policy’s schema with a CustomPolicyDefinition CR. Custom policy registration is a requirement only when you want to use the Admin Portal to configure a product’s policy chain.
You can inject a custom policy as part of or after 3scale installation. After injecting a custom policy and after 3scale installation, you can remove a custom policy by removing its specification from the APIManager CR. The 3scale operator reconciles the changes.
Prerequisites
- You are installing or you previously installed the 3scale operator.
- You have defined a custom policy as described in Write your own policy. That is, you have already created, for example, the my-policy.lua, apicast-policy.json, and init.lua files that define a custom policy.
Procedure
Create a secret from the files that define one custom policy. For example:
```
$ oc create secret generic my-first-custom-policy-secret \
  --from-file=./apicast-policy.json \
  --from-file=./init.lua \
  --from-file=./my-first-custom-policy.lua
```

If you have more than one custom policy, create a secret for each custom policy. A secret can contain only one custom policy.
Use the 3scale operator to monitor secret changes. Add the apimanager.apps.3scale.net/watched-by=apimanager label so that the 3scale operator begins monitoring changes to the secret:

$ oc label secret my-first-custom-policy-secret apimanager.apps.3scale.net/watched-by=apimanager

Note: By default, changes to the secret are not tracked by the 3scale operator. With the label in place, the 3scale operator automatically updates the APIcast deployment whenever you change the secret, in both staging and production environments where the secret is in use. The 3scale operator does not take ownership of the secret in any way.
Define an APIManager CR that references each secret that contains a custom policy. You can specify the same secret for APIcast staging and APIcast production. The following example shows only content relative to referencing secrets that contain custom policies.
An APIManager CR can reference multiple secrets that define different custom policies. The operator loads each custom policy.
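A minimal sketch of the relevant APIManager CR fragment follows. The policy name, version, and secret name are illustrative and must match the custom policy you created in the previous step:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    stagingSpec:
      customPolicies:
        # Each entry references one secret that holds one custom policy
        - name: my-first-custom-policy
          version: "0.1"
          secretRef:
            name: my-first-custom-policy-secret
    productionSpec:
      customPolicies:
        - name: my-first-custom-policy
          version: "0.1"
          secretRef:
            name: my-first-custom-policy-secret
```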
Create the APIManager CR that references the secrets that contain the custom policies. For example:
$ oc apply -f apimanager.yaml
Next steps
When you apply the apimanager.apps.3scale.net/watched-by=apimanager label, the 3scale operator begins monitoring changes in the secret. Now, you can modify the custom policy within the secret, and the operator will initiate a rolling update, loading the updated custom policy.
- Alternatively, you can update the existing secret, redeploy APIcast by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas to 0, and then redeploy APIcast again by setting spec.apicast.productionSpec.replicas or spec.apicast.stagingSpec.replicas back to its previous value.
1.5.4. Configuring OpenTracing with the 3scale API Management operator Copy linkLink copied to clipboard!
In a 3scale installation that uses embedded APIcast, you can use the 3scale operator to configure OpenTracing. You can configure OpenTracing in the staging or production environments or both environments. By enabling OpenTracing, you get more insight and better observability on the APIcast instance.
Prerequisites
- The 3scale operator is installed or you are in the process of installing it.
Procedure
Define a secret that contains your OpenTracing configuration details in stringData.config. This is the only valid value for the attribute that contains your OpenTracing configuration details. Any other specification prevents APIcast from receiving your OpenTracing configuration details. The following example shows a valid secret definition:

Create the secret. For example, if you saved the previous secret definition in the myjaeger.yaml file, you would run the following command:

$ oc create -f myjaeger.yaml

Define an APIManager custom resource (CR) that specifies OpenTracing attributes. In the CR definition, set the openTracing.tracingConfigSecretRef.name attribute to the name of the secret that contains your OpenTracing configuration details. The following example shows only content relevant to configuring OpenTracing:

Create the APIManager CR that configures OpenTracing. For example, if you saved the APIManager CR in the apimanager1.yaml file, you would run the following command:

$ oc apply -f apimanager1.yaml
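As a sketch of the two objects described in this procedure, a Jaeger-style secret and the corresponding APIManager fragment might look like the following. The agent host, sampler values, and secret name are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myjaeger
stringData:
  # The config key is the only valid key for OpenTracing configuration details
  config: |-
    {
      "service_name": "apicast",
      "disabled": false,
      "sampler": { "type": "const", "param": 1 },
      "reporter": { "localAgentHostPort": "jaeger-agent.example.com:6831" }
    }
type: Opaque
---
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    stagingSpec:
      openTracing:
        enabled: true
        tracingLibrary: jaeger
        tracingConfigSecretRef:
          name: myjaeger
```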
Next steps
Depending on how OpenTracing is installed, you should see the traces in the Jaeger service user interface.
Additional resource
1.5.5. Enabling TLS at the pod level with the 3scale API Management operator Copy linkLink copied to clipboard!
3scale deploys two APIcast instances, one for production and the other for staging. TLS can be enabled for only production or only staging, or for both instances.
Prerequisites
- A valid certificate for enabling TLS.
Procedure
Create a secret from your valid certificate, for example:
$ oc create secret tls mycertsecret --cert=server.crt --key=server.key

The configuration exposes secret references in the APIManager custom resource (CR). You create the secret and then reference the name of the secret in the APIManager CR as follows:
- Production: The APIManager CR exposes the certificate in the .spec.apicast.productionSpec.httpsCertificateSecretRef field.
- Staging: The APIManager CR exposes the certificate in the .spec.apicast.stagingSpec.httpsCertificateSecretRef field.

Optionally, you can configure the following:

- httpsPort indicates the port on which APIcast listens for HTTPS connections. If this clashes with the HTTP port, APIcast uses this port for HTTPS only.
- httpsVerifyDepth defines the maximum length of the client certificate chain.

Note: Provide a valid certificate and reference it from the APIManager CR. If the configuration specifies httpsPort but not httpsCertificateSecretRef, APIcast uses an embedded self-signed certificate. This is not recommended.
- Click Operators > Installed Operators.
- From the list of Installed Operators, click 3scale Operator.
- Click the API Manager tab.
- Click Create APIManager.
Add the following YAML definitions to the editor.
If enabling for production, configure the following YAML definitions:

If enabling for staging, configure the following YAML definitions:
- Click Create.
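The YAML definitions referenced in the steps above might look like the following sketch. The secret name mycertsecret comes from the earlier oc create secret tls command; the port value is illustrative:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    productionSpec:
      httpsPort: 8443
      httpsCertificateSecretRef:
        name: mycertsecret
    stagingSpec:
      httpsPort: 8443
      httpsCertificateSecretRef:
        name: mycertsecret
```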
1.5.6. Proof of concept for evaluation deployment Copy linkLink copied to clipboard!
The following sections describe the configuration options applicable to the proof of concept for an evaluation deployment of 3scale. This deployment uses internal databases as default.
The configuration for external databases is the standard deployment option for production environments.
1.5.6.1. Default deployment configuration Copy linkLink copied to clipboard!
Containers will have Kubernetes resource limits and requests.
- This ensures a minimum performance level.
- It limits resource consumption to leave capacity for external services and additional solutions.
- Deployment of internal databases.
File storage will be based on Persistent Volumes (PV).
- One will require the ReadWriteMany (RWX) access mode.
- OpenShift configured to provide them upon request.
- Deploy MySQL as the internal relational database.
The default configuration option is suitable for proof of concept (PoC) or evaluation by a customer.
One, many, or all of the default configuration options can be overridden with specific field values in the APIManager custom resource (CR). The 3scale operator allows all available combinations. For example, the 3scale operator allows deployment of 3scale in evaluation mode and external databases mode.
1.5.6.2. Evaluation installation Copy linkLink copied to clipboard!
For an evaluation installation, containers do not have Kubernetes resource limits and requests specified. The benefits include:
- Small memory footprint
- Fast startup
- Runnable on laptop
- Suitable for presale/sales demos
Additional resources
1.5.7. External databases installation Copy linkLink copied to clipboard!
- Since 3scale 2.16, external databases are a prerequisite for installation.

Before creating an APIManager CR to deploy 3scale, you must provide the database connection settings for the external databases by using OpenShift secrets:

- Backend Redis (2 instances or logical DBs) – backend-redis secret.
- System Redis – system-redis secret.
- System database – system-database secret.
External Zync database can be provided optionally in the zync secret.
Refer to External Redis database configuration for information about configuring Redis databases for 3scale.
Additional resources
1.5.7.1. System database secret Copy linkLink copied to clipboard!
- The Secret name must be system-database.
When you are deploying 3scale, you have three alternatives for your system database. Configure different attributes and values for each alternative’s related secret.
- MySQL
- PostgreSQL
- Oracle Database
To deploy a MySQL, PostgreSQL, or an Oracle Database system database secret, fill in the connection settings as shown in the following examples:
For MySQL, PostgreSQL, and Oracle databases, grant all privileges on the database {DB_NAME} to the database user {DB_USER}. Follow the instructions specific to your database.
MySQL system database secret
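A MySQL variant of this secret might be sketched as follows. The {DB_USER}, {DB_PASSWORD}, {DB_HOST}, and {DB_NAME} values are placeholders for your own connection settings:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  # mysql2 scheme for a MySQL system database
  URL: "mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/{DB_NAME}"
type: Opaque
```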
PostgreSQL system database secret
Oracle system database secret
- {DB_USER} and {DB_PASSWORD} are the username and password of the regular non-system user.
- {DB_NAME} is the Oracle Database service name.
- ORACLE_SYSTEM_PASSWORD is optional, see Configure a database user.
- If you use a Pluggable Database (PDB) and the PDB name does not match the service name, use the PDB name (without the domain part) as {DB_NAME}.
- You can use the Oracle-specific connect descriptor format for the URL field in the secret: oracle-enhanced://{DB_USER}:{DB_PASSWORD}@connection-string/(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST={IP_OR_HOST})(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME={DB_NAME}))). For example:
oracle-enhanced://user:pass@connection-string/(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.206.233)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=THPDB2)))
1.5.7.2. Zync database secret Copy linkLink copied to clipboard!
In a zync database setup, when the spec.externalComponents.zync.database field is set to true, you must create a secret named zync before you deploy 3scale. In this secret, set the DATABASE_URL and DATABASE_PASSWORD fields to the values that point to your external zync database, for example:
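Such a secret might be sketched as follows, using the field names given above. The connection URL, host, and password values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: zync
stringData:
  # Point both fields at your external zync database
  DATABASE_URL: "postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:5432/zync_production"
  DATABASE_PASSWORD: "{DB_PASSWORD}"
type: Opaque
```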
The zync database must be in high-availability mode.
1.5.7.3. APIManager custom resources to deploy 3scale API Management Copy linkLink copied to clipboard!
- When you enable external components, you must create a secret for each external component (backend-redis, system-redis, system-database, zync) before you deploy 3scale.
- For an external system-database, choose only one type of database to externalize.
Configuration of the APIManager custom resource (CR) depends on whether or not your choice of database is external to your 3scale deployment.
If backend-redis, system-redis, or system-database is external to 3scale, populate the APIManager CR externalComponents object as shown in the following example:
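A sketch of the externalComponents object, with all four components marked external, might look like this (set only the entries that apply to your deployment):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  externalComponents:
    backend:
      redis: true
    system:
      database: true
      redis: true
    zync:
      database: true
```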
Additional resources
1.5.8. Enabling pod affinity in the 3scale API Management operator Copy linkLink copied to clipboard!
You can enable pod affinities in the 3scale operator for every component. This ensures distribution of pod replicas from each deployment across different nodes of the cluster, so they will be evenly balanced across different availability zones (AZ).
1.5.8.1. Customizing node affinity and tolerations at component level Copy linkLink copied to clipboard!
Customize Kubernetes affinity and tolerations in your 3scale solution through the APIManager CR attributes to control how different 3scale components are scheduled onto Kubernetes nodes.
For example, to set a custom node affinity for backend-listener and custom tolerations for system-memcached, do the following:
Custom affinity and tolerations
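The following sketch illustrates this. The node names and toleration key/value pairs are placeholders, and the field names (listenerSpec.affinity, memcachedTolerations) follow the operator's per-component pattern; verify them against the APIManager CRD reference for your version:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  backend:
    listenerSpec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - node-a
                      - node-b
  system:
    memcachedTolerations:
      - key: "example-key"
        operator: Equal
        value: "example-value"
        effect: NoSchedule
```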
Add the following affinity block to apicastProductionSpec or to any non-database deployment. This adds a soft podAntiAffinity configuration using preferredDuringSchedulingIgnoredDuringExecution. The scheduler tries to run this set of apicast-production pods on different hosts in different AZs. If that is not possible, they are allowed to run elsewhere:
Soft podAntiAffinity
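A sketch of such a soft podAntiAffinity block, assuming the pods carry a deployment: apicast-production label:

```yaml
spec:
  apicast:
    productionSpec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            # Prefer spreading replicas across hosts first, then across zones
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    deployment: apicast-production
                topologyKey: kubernetes.io/hostname
            - weight: 99
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    deployment: apicast-production
                topologyKey: topology.kubernetes.io/zone
```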
In the following example, a hard podAntiAffinity configuration is set using requiredDuringSchedulingIgnoredDuringExecution. Conditions must be met to schedule a pod onto a node. A risk exists, for example, that you will not be able to schedule new pods on a cluster with low free resources:
Hard podAntiAffinity
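A sketch of the hard variant, again assuming a deployment: apicast-production label; a pod that cannot satisfy the rule stays unscheduled:

```yaml
spec:
  apicast:
    productionSpec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # No two apicast-production replicas may share a host
            - labelSelector:
                matchLabels:
                  deployment: apicast-production
              topologyKey: kubernetes.io/hostname
```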
Additional resources
1.5.9. Multiple clusters in multiple availability zones Copy linkLink copied to clipboard!
In case of failure, bringing a passive cluster into active mode disrupts the provision of the service until the procedure finishes. Due to this disruption, be sure to have a maintenance window.
This documentation focuses on deployment using Amazon Web Services (AWS). The same configuration options apply to other public cloud vendors where the provider’s managed database services offer, for example, support for multiple availability zones and multiple regions.
When you want to install 3scale on several OpenShift clusters and high availability (HA) zones, the following options are available.
In multiple cluster installation options, clusters work in an active/passive configuration, with the failover procedure involving a few manual steps.
1.5.9.1. Prerequisites for multiple clusters installations Copy linkLink copied to clipboard!
Use the following in 3scale installations that involve using several OpenShift clusters:
- Use pod affinities with both kubernetes.io/hostname and topology.kubernetes.io/zone rules in the APIManager custom resource (CR).
- Use pod disruption budgets in the APIManager CR.
- A 3scale installation over multiple clusters must use the same shared wildcardDomain attribute specifications in the APIManager CR. The use of a different domain for each cluster is not allowed in this installation mode, as the information stored in the database would be conflicting.
You must manually deploy the secrets containing credentials, such as tokens and passwords, in all clusters with the same values. The 3scale operator creates them with secure random values on every cluster. In this case, you need to have the same credentials in both clusters. You will find the list of secrets and how to configure them in the 3scale operator documentation. The following is the list of secrets you must mirror in both clusters:
- backend-internal-api
- system-app
- system-events-hook
- system-master-apicast
- system-seed

You must manually deploy secrets with the database connection strings for backend-redis, system-redis, system-database and zync. See External databases installation.

- Databases shared among clusters must use the same values on all clusters.
- If each cluster has its own databases, they must use different values on each cluster.
1.5.9.5. Active-passive clusters on different regions with synced databases Copy linkLink copied to clipboard!
This setup consists of having two or more clusters in different regions and deploying 3scale in active-passive mode. One cluster is active and receives traffic. The others are in standby mode, not receiving traffic and therefore passive, but prepared to assume the active role if the active cluster fails.
To ensure good database access latency, each cluster has its own database instances. The databases from the active 3scale installation are replicated to the read-replica databases of the 3scale passive installations so the data is available and up to date in all regions for a possible failover.
1.5.9.6. Configuring and installing synced databases Copy linkLink copied to clipboard!
Procedure
- Create two or more OpenShift clusters in different regions using different availability zones. A minimum of three zones is recommended.
Create all required AWS ElastiCache instances with Multi-AZ enabled in every region:
- Two AWS EC for Backend Redis database: one per region.
- Two AWS EC for System Redis database: one per region.
- Use the cross-region replication with the Global Datastore feature enabled, so the databases from passive regions are read-replicas from the master databases at the active region.
Create all required AWS RDS instances with Amazon RDS Multi-AZ enabled on every region:
- Two AWS RDS for the System database.
- Two AWS RDS for Zync database.
- Use cross-region replication, so the databases from passive regions are read-replicas from the master databases at the active region.
- Configure an AWS S3 bucket for the system assets in every region using cross-region replication.
- Create a custom domain in AWS Route53 or your DNS provider and point it to the OpenShift Router of the active cluster. This must coincide with the wildcardDomain attribute from the APIManager CR.
Install 3scale in the passive cluster. The APIManager CR should be identical to the one used in the previous step. When all pods are running, change the APIManager to deploy 0 replicas for all the
backend,system,zync, andAPIcastpods.- Set replicas to 0 to avoid consuming jobs from active database. Deployment will fail due to pod dependencies if each replica is set to 0 at first. For example, pods checking that others are running. First deploy as normal, then set replicas to 0.
1.5.9.7. Manual failover synced databases Copy linkLink copied to clipboard!
Procedure
Do steps 1, 2 and 3 from Manual Failover shared databases.
- Every cluster has its own independent databases: read-replicas from the master at the active region.
- You must manually execute a failover on every database to select the new master on the passive region, which then becomes the active region.
Manual failovers of the databases to execute are:
- AWS RDS: System and Zync.
- AWS ElastiCaches: Backend and System.
- Do step 4 from Manual Failover shared databases.
1.5.10. Amazon Simple Storage Service 3scale API Management fileStorage installation Copy linkLink copied to clipboard!
Before creating APIManager custom resource (CR) to deploy 3scale, provide connection settings for the Amazon Simple Storage Service (Amazon S3) service by using an OpenShift secret.
- Skip this section if you are deploying 3scale with the local filesystem storage.
- The name you choose for a secret can be any name, as long as it is not an existing secret name; the name is referenced in the APIManager CR.
- If AWS_REGION is not provided for S3-compatible storage, use default, or the deployment will fail.
1.5.10.1. Amazon S3 bucket creation Copy linkLink copied to clipboard!
Prerequisites
- You must have an Amazon Web Services (AWS) account.
Procedure
- Create a bucket for storing the system assets.
- Disable the public access blocker of S3 when using the Logo feature of the Developer Portal.
Create an Identity and Access Management (IAM) policy with the following minimum permissions:
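One possible shape for such a policy follows; the bucket name is a placeholder, and you should confirm the action list against your own asset-management needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket_name>"
    }
  ]
}
```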
Create a CORS configuration with the following rules:
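A permissive sketch of an S3 CORS configuration in the JSON format accepted by the S3 console follows; in production, restrict AllowedOrigins to your Admin Portal and Developer Portal domains:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"]
  }
]
```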
1.5.10.2. Create an OpenShift secret Copy linkLink copied to clipboard!
The following examples show 3scale fileStorage using Amazon S3 instead of a persistent volume claim (PVC).

An AWS S3-compatible provider can be configured in the S3 secret with the optional AWS_HOSTNAME, AWS_PATH_STYLE, and AWS_PROTOCOL keys. See the fileStorage S3 credentials secret fields table for more details.

In the following example, the secret name can be anything, as it is referenced in the APIManager CR.
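A sketch of an IAM-style S3 credentials secret; the name aws-auth and all values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-auth
stringData:
  AWS_ACCESS_KEY_ID: <access_key_id>
  AWS_SECRET_ACCESS_KEY: <secret_access_key>
  AWS_BUCKET: <bucket_name>
  AWS_REGION: <region>
type: Opaque
```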
Lastly, create the APIManager CR to deploy 3scale.
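The fileStorage portion of that APIManager CR might be sketched as follows, referencing the hypothetical aws-auth secret name from the previous example:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  wildcardDomain: <wildcard_domain>
  system:
    fileStorage:
      simpleStorageService:
        configurationSecretRef:
          name: aws-auth
```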
Check APIManager SystemS3Spec.
The following table shows the fileStorage Amazon S3 credentials secret field requirements for Identity and Access Management (IAM) and Security Token Service (STS) settings:
- The S3 authentication method using Security Token Service (STS) is for short-term, limited-privilege security credentials.
- S3 Identity and Access Management (IAM) is for long-term privilege security credentials.
| Field | Description | Required for IAM | Required for STS |
|---|---|---|---|
| AWS_ACCESS_KEY_ID | AWS Access Key ID to use in S3 Storage for system’s fileStorage | Yes | No |
| AWS_SECRET_ACCESS_KEY | AWS Access Key Secret to use in S3 Storage for system’s fileStorage | Yes | No |
| AWS_BUCKET | The S3 bucket to be used as system’s fileStorage | Yes | Yes |
| AWS_REGION | The region of the S3 bucket to be used as system’s fileStorage | Yes | Yes |
| AWS_HOSTNAME | Default: Amazon endpoints - An AWS S3 compatible provider endpoint hostname | No | No |
| AWS_PROTOCOL | Default: HTTPS - An AWS S3 compatible provider endpoint protocol | No | No |
| AWS_PATH_STYLE | Default: false - When true, the bucket name is included in the request path rather than as a subdomain | No | No |
| AWS_ROLE_ARN | ARN of the Role which has a policy attached to authenticate using AWS STS | No | Yes |
| AWS_WEB_IDENTITY_TOKEN_FILE | Path to the mounted token file location | No | Yes |
1.5.10.3. Manual mode with STS Copy linkLink copied to clipboard!
STS authentication mode must be enabled from the APIManager CR. You can define your audience; the default value is openshift.
Prerequisites
- Configure OpenShift to use temporary credentials for different components with AWS Security Token Service (STS). For further detail see Using manual mode with Amazon Web Services Secure Token Service.
The secret generated by the cloud credential tooling looks different from the IAM secret. It contains two fields, AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE, instead of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
STS secret example
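A sketch of such a secret; the role ARN, bucket, and region are placeholders, and the token file path assumes the OpenShift projected service-account token location:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
stringData:
  AWS_ROLE_ARN: arn:aws:iam::<account_id>:role/<role_name>
  AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/openshift/serviceaccount/token
  AWS_BUCKET: <bucket_name>
  AWS_REGION: <region>
type: Opaque
```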
With STS, the 3scale operator adds the projected volume to request the token. The following pods have a projected volume:
- system-app
- system-app hook pre
- system-sidekiq
Pod example for STS
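The projected volume added by the operator might resemble the following fragment of a pod spec; the volume name, audience, and expiration are illustrative:

```yaml
volumes:
  - name: s3-credentials
    projected:
      sources:
        # Requests a short-lived service-account token for the STS audience
        - serviceAccountToken:
            audience: openshift
            expirationSeconds: 3600
            path: token
```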
1.5.11. PostgreSQL installation Copy linkLink copied to clipboard!
A MySQL internal relational database is the default deployment. This deployment configuration can be overridden to use PostgreSQL instead.
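A sketch of the override in the APIManager CR, selecting PostgreSQL for the internal system database:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  system:
    database:
      postgresql: {}
```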
Additional resources
1.5.12. Configuring SMTP variables (optional) Copy linkLink copied to clipboard!
3scale uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the system-smtp secret.
Procedure
If you are not already logged in, log in to OpenShift and select the project where your 3scale On-premises instance is installed using the oc command:

$ oc login -u <user> <url>
$ oc project <3scale-project>

Use the oc patch command to update the system-smtp secret. Use the option -p with the following JSON: {"stringData":{"<field>":"<value>"}}, where <field> is the field of the secret, and <value> is the desired value. The complete list of available fields is listed in the table below.

Table 1.2. system-smtp

| Field | Description | Default value in the secret | Default value in the application mailer |
|---|---|---|---|
| address | The address of a remote mail server. | "" | nil |
| port | The port of the remote mail server to use. | "" | 0 |
| username | Username if the mail server requires authentication and the authentication type requires it. | "" | nil |
| password | Password if the mail server requires authentication and the authentication type requires it. | "" | nil |
| authentication | Use if the mail server requires authentication. Set the authentication types: plain to send the password in the clear, login to send the password Base64 encoded, or cram_md5 to combine a challenge/response mechanism based on the HMAC-MD5 algorithm. | "" | nil |
| openssl.verify.mode | When using TLS, you can set how OpenSSL checks the certificate. This is useful if you need to validate a self-signed and/or a wildcard certificate. You can use the name of an OpenSSL verify constant: none or peer. | "" | nil |
| from_address | from address value for the no-reply mail. | "" | "no-reply@{wildcardDomain}" |

Note:
- The HELO domain used for the mail server configuration is the value of {wildcardDomain}.
- {wildcardDomain} is a placeholder for the value that is set in spec.wildcardDomain in the APIManager CR.
Examples
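For instance, to set the SMTP address, port, and credentials, you might run commands like the following; the host and credential values are placeholders:

```shell
$ oc patch secret system-smtp -p '{"stringData":{"address":"smtp.example.com"}}'
$ oc patch secret system-smtp -p '{"stringData":{"port":"587"}}'
$ oc patch secret system-smtp -p '{"stringData":{"username":"<smtp_username>"}}'
$ oc patch secret system-smtp -p '{"stringData":{"password":"<smtp_password>"}}'
```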
After you have set the secret variables, redeploy the system-app and system-sidekiq pods:

$ oc rollout restart deployment/system-app
$ oc rollout restart deployment/system-sidekiq

Check the status of the rollout to ensure it has finished:

$ oc rollout status deployment/system-app
$ oc rollout status deployment/system-sidekiq
1.5.13. Customizing compute resource requirements at component level Copy linkLink copied to clipboard!
Customize Kubernetes compute resource requirements in your 3scale solution through the APIManager custom resource (CR) attributes. Do this to customize the compute resource requirements, that is, CPU and memory, assigned to a specific APIManager component.
The following example outlines how to customize compute resource requirements for the system-master’s system-provider container, for the backend-listener and for the zync-database:
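A sketch of such a CR follows. The resource values are illustrative, and the field names (providerContainerResources, listenerSpec.resources, databaseResources) follow the operator's per-component pattern; verify them against the APIManager CRD reference for your version:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  system:
    appSpec:
      providerContainerResources:
        requests:
          cpu: 300m
          memory: 400Mi
        limits:
          cpu: 1000m
          memory: 500Mi
  backend:
    listenerSpec:
      resources:
        requests:
          cpu: 500m
          memory: 550Mi
        limits:
          cpu: 1000m
          memory: 700Mi
  zync:
    databaseResources:
      requests:
        cpu: 60m
        memory: 250Mi
      limits:
        cpu: 250m
        memory: 2Gi
```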
Additional resources
1.5.13.1. Default APIManager components compute resources Copy linkLink copied to clipboard!
When you configure the APIManager spec.resourceRequirementsEnabled attribute as true, the default compute resources are set for the APIManager components.
The specific compute resources default values that are set for the APIManager components are shown in the following table.
1.5.13.1.1. CPU and memory units Copy linkLink copied to clipboard!
The following list explains the units you will find mentioned in the compute resources default values table. For more information on CPU and memory units, see Managing Resources for Containers.
Resource units explanation
- m - milliCPU or millicore
- Mi - mebibytes
- Gi - gibibyte
- G - gigabyte
| Component | CPU requests | CPU limits | Memory requests | Memory limits |
|---|---|---|---|---|
| system-app’s system-master | 50m | 1000m | 600Mi | 800Mi |
| system-app’s system-provider | 50m | 1000m | 600Mi | 800Mi |
| system-app’s system-developer | 50m | 1000m | 600Mi | 800Mi |
| system-sidekiq | 100m | 1000m | 500Mi | 2Gi |
| system-searchd | 80m | 1000m | 250Mi | 512Mi |
| system-redis | 150m | 500m | 256Mi | 32Gi |
| system-mysql | 250m | No limit | 512Mi | 2Gi |
| system-postgresql | 250m | No limit | 512Mi | 2Gi |
| backend-listener | 500m | 1000m | 550Mi | 700Mi |
| backend-worker | 150m | 1000m | 50Mi | 300Mi |
| backend-cron | 100m | 500m | 100Mi | 500Mi |
| backend-redis | 1000m | 2000m | 1024Mi | 32Gi |
| apicast-production | 500m | 1000m | 64Mi | 128Mi |
| apicast-staging | 50m | 100m | 64Mi | 128Mi |
| zync | 150m | 1 | 250M | 512Mi |
| zync-que | 250m | 1 | 250M | 512Mi |
| zync-database | 50m | 250m | 250M | 2G |
1.5.14. Pod priority of 3scale API Management components Copy linkLink copied to clipboard!
As a 3scale administrator, you can set up the pod priority for various 3scale installed components by modifying the APIManager custom resource (CR). Use the optional priorityClassName available in the following components:
- apicast-production
- apicast-staging
- backend-cron
- backend-listener
- backend-worker
- backend-redis
- system-app
- system-sidekiq
- system-searchd
- system-memcache
- system-mysql
- system-postgresql
- system-redis
- zync
- zync-database
- zync-que
For example:
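A sketch of the CR fragment, assuming the built-in openshift-user-critical priority class; any priority class defined in your cluster can be used instead:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    productionSpec:
      priorityClassName: openshift-user-critical
    stagingSpec:
      priorityClassName: openshift-user-critical
```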
1.5.15. Setting custom labels Copy linkLink copied to clipboard!
You can customize labels for each Deployment through the APIManager CR labels attribute; the labels are applied to the Deployment's pods.
If you remove a label defined in a Custom Resource (CR), it is not automatically removed from the associated Deployment. You must manually remove the label from the Deployment.
Example for apicast-staging and backend-listener:
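The referenced example might be sketched as follows; the label keys and values are placeholders:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    stagingSpec:
      labels:
        label1: value1
  backend:
    listenerSpec:
      labels:
        label2: value2
```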
Additional resources
1.5.16. Setting backend client to skip certificate verification Copy linkLink copied to clipboard!
When a controller processes an object, it generates a new backend client for making API calls. By default, this client is set up to confirm the server’s certificate chain. However, during development and testing, you might need the client to skip certificate verification when processing an object. To achieve this, add the annotation "insecure_skip_verify": "true" to the following objects:
- ActiveDoc
- Application
- Backend
- CustomPolicyDefinition
- DeveloperAccount
- DeveloperUser
- OpenAPI - backend and product
- Product
- ProxyConfigPromote
- Tenant
OpenAPI CR example:
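A sketch of an OpenAPI CR carrying the annotation; the CR name and the OpenAPI document URL are placeholders:

```yaml
apiVersion: capabilities.3scale.net/v1beta1
kind: OpenAPI
metadata:
  name: openapi-sample
  annotations:
    # Tells the backend client to skip server certificate verification
    insecure_skip_verify: "true"
spec:
  openapiRef:
    url: "https://example.com/petstore.json"
```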
1.5.17. Setting custom annotations Copy linkLink copied to clipboard!
In 3scale, the components' pods have annotations. These are key/value pairs used for configurations. You can change these annotations for any 3scale component using the APIManager CR.
If you remove an annotation defined in a custom resource (CR), it is not automatically removed from the associated Deployment. You must manually remove the annotation from the Deployment.
APIManager annotations for apicast-staging and backend-listener
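The referenced example might be sketched as follows; the annotation keys and values are placeholders:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    stagingSpec:
      annotations:
        anno1: value1
  backend:
    listenerSpec:
      annotations:
        anno2: value2
```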
Additional resources
1.5.18. Reconciliation Copy linkLink copied to clipboard!
Once 3scale has been installed, the 3scale operator enables updating a given set of parameters from the custom resource (CR) to modify system configuration options. Modifications are made by hot swapping, that is, without stopping or shutting down the system.
When a reconciliation event happens in the 3scale operator and the APIcast operator, there are two possible scenarios:
- When there is no deployment and the CR has replicas, the deployment value will match those replicas. If the CR does not contain replicas, the deployment replica value will be set to 1.
- When there is a deployment and the CR has replicas, the deployment value will match those replicas, even if it is 0. If the CR does not contain replicas, the deployment value stays the same.
Not all the parameters of the APIManager custom resource definition (CRD) are reconcilable.
The following is a list of reconcilable parameters:
1.5.18.1. Resources Copy linkLink copied to clipboard!
Resource limits and requests for all 3scale components.
1.5.18.2. Backend replicas Copy linkLink copied to clipboard!
Backend components pod count.
When the replica field is not set, the operator does not reconcile replicas. This allows third-party controllers, such as HorizontalPodAutoscaler controllers, to manage replicas. It also allows updating them manually on the deployment object.
1.5.18.3. APIcast replicas Copy linkLink copied to clipboard!
APIcast staging and production components pod count.
When the replica field is not set, the operator does not reconcile replicas. This allows third-party controllers, such as HorizontalPodAutoscaler controllers, to manage replicas. It also allows updating them manually on the deployment object.
1.5.18.4. System replicas
System app and system sidekiq components pod count.
When the replica field is not set, the operator does not reconcile replicas. This allows third-party controllers, such as HorizontalPodAutoscaler controllers, to manage replicas. It also allows you to update them manually on the Deployment object.
1.5.18.5. Zync replicas
Zync app and que components pod count.
When the replica field is not set, the operator does not reconcile replicas. This allows third-party controllers, such as HorizontalPodAutoscaler controllers, to manage replicas. It also allows you to update them manually on the Deployment object.
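For illustration, a minimal APIManager fragment that pins the replica counts for these components might look like the following sketch. The field names follow the APIManager CRD, but verify them against your operator version; the name and wildcardDomain values are placeholders:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: example.com
  backend:
    listenerSpec:
      replicas: 2
    workerSpec:
      replicas: 2
  apicast:
    productionSpec:
      replicas: 2
  system:
    appSpec:
      replicas: 1
    sidekiqSpec:
      replicas: 1
  zync:
    appSpec:
      replicas: 1
    queSpec:
      replicas: 1
```

Removing any of the replicas fields hands control of that component's pod count back to whatever manages the Deployment, such as a HorizontalPodAutoscaler.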
1.5.19. Setting the APICAST_SERVICE_CACHE_SIZE environment variable
You can specify the number of services that APIcast stores in the internal cache by adding an optional field in the APIManager custom resource definition (CRD).
Prerequisites
- You have installed the APIcast operator, or you are in the process of installing it.
Procedure
- Add the serviceCacheSize optional field in both the production and staging sections of the spec:
apicast:
  productionSpec:
    serviceCacheSize: 20
  stagingSpec:
    serviceCacheSize: 10
Verification
Type the following commands to check the deployment:
$ oc get dc/apicast-staging -o yaml
$ oc get dc/apicast-production -o yaml

Verify inclusion of the environment variables:

# apicast-staging
- name: APICAST_SERVICE_CACHE_SIZE
  value: '10'

# apicast-production
- name: APICAST_SERVICE_CACHE_SIZE
  value: '20'
Additional resources
1.6. Configuring Horizontal Pod Autoscaling for 3scale API Management
- Default values, especially thresholds, might not be sufficient for your environment and could require adjustment.
- Requests and limits might also need modification.
- Performance testing is highly recommended.
Configuring Horizontal Pod Autoscaling (HPA) for Red Hat 3scale API Management components ensures optimal resource utilization and scalability. With this guide, you can enable and configure HPA for the apicast-production, backend-listener, and backend-worker components so that they maintain efficient performance and handle varying workloads dynamically. Check the prerequisites before updating the APIManager custom resource (CR) and modifying the HPA configuration to suit your specific needs.
Prerequisites
- Ensure Redis is running in async mode. This is enabled by default by the 3scale operator.
- ResourceRequirementsEnabled must be set to true for HPA to function.
Procedure
- Enable HPA for the apicast-production, backend-listener, and backend-worker components. Accept the default HPA configuration, which sets 85% resource utilization with a minimum of 1 pod and a maximum of 5 pods. Add the HPA configuration for the backend-worker, backend-listener, and apicast-production components to your APIManager CR.
- Review the default HPA configuration. The default HPA configuration creates HPA instances with the settings described above (85% utilization, a minimum of 1 pod, and a maximum of 5 pods) for backend-worker.
- Modify HPA instances as needed. After HPA instances are created, manually edit them to better optimize your configuration based on your workload requirements.
- Disable HPA if necessary. To remove HPA for a component, remove the hpa field or set it to false:

  backend:
    workerSpec:
      hpa: false
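The steps above can be sketched as a minimal APIManager fragment. This assumes the boolean hpa field shown in the disable step applies symmetrically to all three components; verify the exact field placement against your operator version:

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      hpa: true
  backend:
    listenerSpec:
      hpa: true
    workerSpec:
      hpa: true
```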
Additional notes
- Enabling HPA overrides and ignores any replicas values set in the APIManager CR for apicast-production, backend-listener, and backend-worker.
- For vertical scaling, set resource requests equal to limits. Since the HPA scales based on a default utilization of 85%, setting aside additional resources for limits is unnecessary.
1.7. Installing 3scale API Management with the operator using Oracle as the system database
As a Red Hat 3scale API Management administrator, you can install 3scale with the operator using an Oracle Database. By default, 3scale 2.16 has a component called system that stores configuration data in a MySQL database. You can override the default database and store your information in an external Oracle Database.
- The Oracle Database is not supported with OpenShift Container Platform (OCP) versions 4.2 and 4.3 when you are performing an operator-only installation of 3scale. For more information, refer to the Red Hat 3scale API Management Supported Configurations page.
- In this documentation, myregistry.example.com is used as an example of the registry URL. Replace it with your registry URL.
- Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
Prerequisites
- A container registry to push container images, accessible by the OCP cluster where 3scale is installed.
- An installation of the 3scale operator.
- Do not install the APIManager CR, as it will be created in the following procedure.
- A supported version of the Oracle Database accessible from your OpenShift cluster.
- Access to the Oracle Database SYSTEM user for installation procedures.
To install 3scale with the operator using Oracle as the system database, use the following steps:
1.7.1. Preparing the Oracle Database
As a 3scale administrator, you must fully prepare the Oracle Database for your 3scale installation when you decide to use it for the System component.
Procedure
- Create a new database.
Apply the following settings:
ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;
Configure a database user.

There are two options for setting up Oracle Database integration in 3scale: with or without providing the Oracle SYSTEM user password.

3scale uses the SYSTEM user only for the initial setup, which consists of creating a regular user and granting it the required privileges. The SQL commands that set up a regular user with proper permissions use {DB_USER} and {DB_PASSWORD} as placeholders that need to be replaced with actual values.

Using the SYSTEM user:

- Provide the SYSTEM user password in the ORACLE_SYSTEM_PASSWORD field of the system-database secret.
- The regular user does not need to exist before the installation. It will be created by the 3scale initialization script.
- Provide the desired username and password for the regular user in the connection string, for example, oracle-enhanced://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}, in the URL field of the system-database secret.
- The password for the regular Oracle Database non-system user must be unique and must not match the SYSTEM user password. If a user with the specified username already exists, the 3scale initialization script will attempt to update the password using the following command:

  ALTER USER {DB_USER} IDENTIFIED BY {DB_PASSWORD}

  Your database configuration might prevent this command from completing successfully if the parameters PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX are set in a way that restricts reusing the same password.

Manual setup of the regular database user:

- You do not need to provide the ORACLE_SYSTEM_PASSWORD in the system-database secret.
- The regular database user (not SYSTEM) specified in the connection string in the URL field of the system-database secret needs to exist prior to the 3scale installation.
- The regular user used for the installation must have all the privileges listed above.
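For manual setup, the regular user can be created with SQL along the following lines. This is a hedged sketch based on the kind of grants the upstream system-oracle setup performs; verify the exact privilege list against the release you deploy. {DB_USER} and {DB_PASSWORD} are placeholders:

```sql
-- Create the regular 3scale database user (replace the placeholders).
CREATE USER {DB_USER} IDENTIFIED BY {DB_PASSWORD};

-- Grants assumed from the upstream setup script; confirm for your release.
GRANT CREATE SESSION TO {DB_USER};
GRANT CREATE TABLE TO {DB_USER};
GRANT CREATE VIEW TO {DB_USER};
GRANT CREATE SEQUENCE TO {DB_USER};
GRANT CREATE TRIGGER TO {DB_USER};
GRANT CREATE PROCEDURE TO {DB_USER};
GRANT UNLIMITED TABLESPACE TO {DB_USER};
```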
Additional resources
- For information on creating a new database, see the Oracle Database 19c documentation.
1.7.2. Building a custom system container image
Procedure
Download 3scale OpenShift templates from the GitHub repository and extract the archive:
$ curl -L -o system-oracle-3scale-2.16.0-GA.tar.gz https://github.com/3scale/system-oracle/archive/refs/tags/3scale-2.16.0-GA.tar.gz
$ tar -xzf system-oracle-3scale-2.16.0-GA.tar.gz
- A client: It can be either basic-lite or basic.
- The ODBC driver.
- The SDK for Oracle Database 19c.
- For 3scale, use Instant Client Downloads for Linux x86-64 (64-bit)
- For ppc64le and 3scale, use Oracle Instant Client Downloads for Linux on Power Little Endian (64-bit)
Check the tables for the following Oracle software component versions:

- Oracle Instant Client Package: Basic or Basic Light
- Oracle Instant Client Package: SDK
- Oracle Instant Client Package: ODBC

Table 1.4. Oracle 19c example packages for 3scale

| Oracle 19c package name | Compressed file name |
|---|---|
| Basic | |
| Basic Light | |
| SDK | |
| ODBC | |

Table 1.5. Oracle 19c example packages for ppc64le and 3scale

| Oracle 19c package name | Compressed file name |
|---|---|
| Basic | |
| Basic Light | instantclient-basiclite-linux.leppc64.c64-19.3.0.0.0dbru.zip |
| SDK | |
| ODBC | |
NoteIf the client packages versions downloaded and stored locally do not match with the ones 3scale expects, 3scale will automatically download and use the appropriate ones in the following steps.
- Place your Oracle Database Instant Client Package files into the system-oracle-3scale-2.16.0-GA/oracle-client-files directory.
- Build the custom system Oracle-based image. The image tag must be a fixed image tag as in the following example:

  $ cd system-oracle-3scale-2.16.0-GA
  $ docker build . --tag myregistry.example.com/system-oracle:2.16.0-1

- Push the system Oracle-based image to a container registry accessible by the OCP cluster. This container registry is where your 3scale solution is going to be installed:

  $ docker push myregistry.example.com/system-oracle:2.16.0-1
1.7.3. Installing 3scale API Management with Oracle using the operator
Procedure
- Set up the Oracle Database URL connection string and Oracle Database system password by creating the system-database secret with the corresponding fields. See External databases installation for the Oracle Database.
- Install your 3scale solution by creating an APIManager CR. Follow the instructions in Deploying 3scale API Management using the operator. The APIManager CR must set the .spec.system.image field to the system Oracle-based image you previously built.
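For example, an APIManager CR pointing system at the custom image might look like the following sketch (the name and wildcardDomain values are placeholders):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: example.com
  system:
    # The custom Oracle-based system image built and pushed earlier.
    image: "myregistry.example.com/system-oracle:2.16.0-1"
```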
1.8. Troubleshooting common 3scale API Management installation issues
This section contains a list of common installation issues and provides guidance for their resolution.
- Previous deployment leaving dirty persistent volume claims
- Incorrectly pulling from the Docker registry
- Permission issues for MySQL when persistent volumes are mounted locally
- Unable to upload logo or images
- Test calls not working on OpenShift
- APIcast on a different project from 3scale failing to deploy
1.8.1. Previous deployment leaving dirty persistent volume claims
Problem
A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.
Cause
Deleting a project in OpenShift does not clean the PVCs associated with it.
Solution
Procedure
- Find the PVC containing the erroneous MySQL data with the oc get pvc command:

  $ oc get pvc

- Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift Container Platform (OCP) console.
- Delete everything under the MySQL path to clean the volume.
- Start a new system-mysql deployment.
1.8.2. Incorrectly pulling from the Docker registry
Problem
The following error occurs during installation:
svc/system-redis - 1EX.AMP.LE.IP:6379
deployment/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
deployment #1 failed 13 minutes ago: config change
Cause
OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.redhat.io Red Hat Ecosystem Catalog.
This occurs when the system contains an unexpected version of the Docker containerized environment.
Solution
Procedure
Use the appropriate version of the Docker containerized environment.
1.8.3. Permission issues for MySQL when persistent volumes are mounted locally
Problem
The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod log displays the following error:
[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting
Cause
The MySQL process is started with inappropriate user permissions.
Solution
Procedure
The directories used for the persistent volumes must have write permissions for the root group. Having read-write permissions for only the root user is not enough, because the MySQL service runs as a different user in the root group. Execute the following command as the root user:
$ chmod -R g+w /path/for/pvs

Execute the following command to prevent SELinux from blocking access:

$ chcon -Rt svirt_sandbox_file_t /path/for/pvs
1.8.4. Unable to upload logo or images
Problem
Unable to upload a logo - system-app logs display the following error:
Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2
Cause
Persistent volumes are not writable by OpenShift.
Solution
Procedure
Ensure your persistent volume is writable by OpenShift. It should be owned by the root group and be group-writable.
1.8.5. Test calls not working on OpenShift
Problem
Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.
Cause
3scale requires HTTPS routes by default, and OpenShift routes are not secured.
Solution
Procedure
Ensure the secure route checkbox is clicked in your OpenShift router settings.
1.8.6. APIcast on a different project from 3scale API Management failing to deploy
Problem
APIcast deploy fails (pod does not turn blue). You see the following error in the logs:
update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready
You see the following error in the pod:
Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"
Cause
The secret was not properly set up.
Solution
Procedure
When creating a secret with APIcast v3, specify apicast-configuration-url-secret:
$ oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
1.9. Additional resources
Chapter 2. Installing APIcast
APIcast is an NGINX-based API gateway used to integrate your internal and external API services with the Red Hat 3scale API Management Platform. APIcast performs load balancing by using a round-robin algorithm.
In this guide you will learn about deployment options, environments provided, and how to get started.
Prerequisites
APIcast is not a standalone API gateway; it needs a connection to the 3scale API Manager.
- A working 3scale On-Premises instance.
To install APIcast, perform the steps outlined in the following sections:
2.1. APIcast deployment options
You can use hosted or self-managed APIcast. In both cases, APIcast must be connected to the rest of the 3scale API Management platform:
- Embedded APIcast: A 3scale API Management installation includes two default APIcast gateways, staging and production. These gateways come preconfigured and ready for immediate use.
Self-managed APIcast: You can deploy APIcast wherever you want. Here is one of the recommended options to deploy APIcast:
- Running APIcast on Red Hat OpenShift: Run APIcast on a supported version of OpenShift. You can connect self-managed APIcasts to a 3scale On-premises installation or to a 3scale Hosted (SaaS) account. For this, deploy an APIcast gateway self-managed solution using the operator.
2.2. APIcast environments
By default, when you create a 3scale account, you get embedded APIcast in two different environments:
- Staging: Intended to be used only while configuring and testing your API integration. When you have confirmed that your setup is working as expected, then you can choose to deploy it to the production environment.
- Production: This environment is intended for production use. The following parameters are set for the production APIcast in the OpenShift template: APICAST_CONFIGURATION_LOADER: boot, APICAST_CONFIGURATION_CACHE: 300. The configuration is fully loaded when APIcast starts and is cached for 300 seconds (5 minutes), after which it is reloaded. Consequently, when you promote a configuration to production, it can take up to 5 minutes to be applied, unless you trigger a new deployment of APIcast.
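For reference, these parameters reach the gateway as ordinary container environment variables, roughly as follows (an illustrative fragment, not the full OpenShift template):

```yaml
# Illustrative excerpt of the production APIcast container environment
- name: APICAST_CONFIGURATION_LOADER
  value: boot
- name: APICAST_CONFIGURATION_CACHE
  value: "300"
```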
2.3. Configuring the integration settings
As a 3scale administrator, configure the integration settings for the environment you require 3scale to run in.
Prerequisites
- A 3scale account with administrator privileges.
Procedure
- Navigate to [Your_product_name] > Integration > Settings.
Under Deployment, the default options are as follows:
- Deployment Option: APIcast 3scale managed
- Authentication mode: API key.
- Change to your preferred option.
- To save your changes, click Update Product.
2.4. Configuring your product
You must declare your API back-end in the Private Base URL field, which is the endpoint host of your API back-end. APIcast will redirect all traffic to your API back-end after all authentication, authorization, rate limits and statistics have been processed.
This section will guide you through configuring your product:
2.4.1. Declaring the API backend
Typically, the Private Base URL of your API will be something like https://api-backend.yourdomain.com:443, on the domain that you manage (yourdomain.com). For instance, if you were integrating with the Twitter API the Private Base URL would be https://api.twitter.com/.
In this example, you will use the Echo API hosted by 3scale, a simple API that accepts any path and returns information about the request (path, request parameters, headers, etc.). Its Private Base URL is https://echo-api.3scale.net:443.
Procedure
- Test that your private (unmanaged) API is working. For example, for the Echo API you can make the following call with the curl command:

  $ curl "https://echo-api.3scale.net:443"

  You will get the following response:
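The Echo API returns a JSON document describing the request it received; it looks roughly like the following (fields abridged, header values illustrative):

```json
{
  "method": "GET",
  "path": "/",
  "args": "",
  "body": "",
  "headers": {
    "HTTP_VERSION": "HTTP/1.1",
    "HTTP_HOST": "echo-api.3scale.net",
    "HTTP_ACCEPT": "*/*",
    "HTTP_USER_AGENT": "curl/7.68.0"
  }
}
```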
2.4.2. Configuring the authentication settings
You can configure authentication settings for your API in the AUTHENTICATION section under [Your_product_name] > Integration > Settings.
| Field | Description |
|---|---|
| Auth user key | Set the user key associated with the credentials location. |
| Credentials location | Define whether credentials are passed as HTTP headers, query parameters or as HTTP basic authentication. |
| Host Header | Define a custom Host request header. This is required if your API backend only accepts traffic from a specific host. |
| Secret Token | Used to block direct developer requests to your API backend. Set the value of the header here, and ensure your backend only allows calls with this secret header. |
Furthermore, you can configure the GATEWAY RESPONSE error codes under [Your_product_name] > Integration > Settings. Define the Response Code, Content-type, and Response Body for the errors: Authentication failed, Authentication missing, and No match.
| Response code | Response body |
|---|---|
| 403 | Authentication failed |
| 403 | Authentication parameters missing |
| 404 | No Mapping Rule matched |
| 429 | Usage limit exceeded |
2.4.3. Configuring the API test call
Configuring the API involves testing the backends with a product and promoting the APIcast configuration to staging and production environments to make tests based on request calls.
For each product, requests get redirected to their corresponding backend according to the path. This path is configured when you add the backend to the product. For example, if you have two backends added to a product, each backend has its own path.
Prerequisites
- One or more backends added to a product.
- A mapping rule for each backend added to a product.
- An application plan to define the access policies.
- An application that subscribes to the application plan.
Procedure
- Promote an APIcast configuration to Staging, by navigating to [Your_product_name] > Integration > Configuration.
Under APIcast Configuration, you will see the mapping rules for each backend added to the product. Click Promote v.[n] to Staging APIcast.
- v.[n] indicates the version number to be promoted.
Once promoted to staging, you can promote to Production. Under Staging APIcast, click Promote v.[n] to Production APIcast.
- v.[n] indicates the version number to be promoted.
To test requests to your API in the command line, use the command provided in Example curl for testing.
- The curl command example is based on the first mapping rule in the product.
When testing requests to your API, you can modify the mapping rules by adding methods and metrics.
Every time you modify the configuration and before making calls to your API, make sure you promote to the Staging and Production environments. When there are pending changes to be promoted to the Staging environment, you will see an exclamation mark in the Admin Portal, next to the Integration menu item.
3scale Hosted APIcast gateway does the validation of the credentials and applies the rate limits that you defined for the application plan of your API. If you make a call without credentials, or with invalid credentials, you will see the error message, Authentication failed.
2.4.4. Deploying APIcast on Podman
This is a step-by-step guide for deploying APIcast on a Pod Manager (Podman) container environment to be used as a Red Hat 3scale API Management API gateway.
When deploying APIcast on a Podman container environment, the supported versions of Red Hat Enterprise Linux (RHEL) and Podman are as follows:
- RHEL 8.x/9.x
- Podman 4.6.1
Prerequisites
- You must configure APIcast in your 3scale Admin Portal as per Installing APIcast.
To deploy APIcast on the Podman container environment, perform the steps outlined in the following sections:
2.4.4.1. Installing the Podman container environment
This guide covers the steps to set up the Podman container environment on RHEL 8.x. Docker is not included in RHEL 8.x, therefore, use Podman for working with containers.
For more details about Podman with RHEL 8.x, see the Container command-line reference.
Procedure
Install the Podman container environment package:
$ sudo dnf install podman
Additional resources
For other operating systems, refer to the following Podman documentation:
2.4.4.2. Running the Podman environment
To run the Podman container environment, follow the procedure below.
Procedure
Download a ready to use Podman container image from the Red Hat registry:
$ podman pull registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.16

Run APIcast in a Podman container:

$ podman run --name apicast --rm -p 8080:8080 -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<domain>-admin.3scale.net registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.16

Here, <access_token> is the Access Token for the 3scale Account Management API. You can use the Provider Key instead of the access token. <domain>-admin.3scale.net is the URL of your 3scale Admin Portal.
This command runs a Podman container named "apicast" on port 8080 and fetches the JSON configuration file from your 3scale Admin Portal. For other configuration options, see Installing APIcast.
2.4.4.2.1. Testing APIcast with Podman
The preceding steps ensure that your Podman container engine is running with your own configuration file and the Podman container image from the 3scale registry. You can test calls through APIcast on port 8080 and provide the correct authentication credentials, which you can get from your 3scale account.
Test calls will not only verify that APIcast is running correctly but also that authentication and reporting is being handled successfully.
Ensure that the host you use for the calls is the same as the one configured in the Public Base URL field on the Integration page.
2.4.4.3. The podman command options
You can use the following option examples with the podman command:
- -d: Runs the container in detached mode and prints the container ID. When it is not specified, the container runs in the foreground and you can stop it using CTRL + c. When started in detached mode, you can reattach to the container with the podman attach command, for example, podman attach apicast.
- ps and -a: podman ps is used to list created and running containers. Adding -a to the ps command shows all containers, both running and stopped, for example, podman ps -a.
- inspect and -l: Inspect a running container. For example, use inspect to see the ID that was assigned to the container. Use -l to get the details for the latest container, for example, podman inspect -l | grep Id\":.
2.4.4.4. Additional resources
2.5. Deploying an APIcast gateway self-managed solution using the operator
This guide provides steps for deploying an APIcast gateway self-managed solution using the APIcast operator via the OpenShift Container Platform console.
When you deploy APIcast, the default settings are for a production environment. You can adjust these settings to deploy a staging environment. For example, use the following oc command:
$ oc patch apicast/{apicast_name} --type=merge -p '{"spec":{"deploymentEnvironment":"staging","configurationLoadMode":"lazy"}}'
For more information, see the APIcast Custom Resource reference.
Prerequisites
- OpenShift Container Platform (OCP) 4.x or later with administrator privileges.
- You followed the steps in Installing the APIcast operator on OpenShift.
Procedure
- Log in to the OCP console using an account with administrator privileges.
- Click Operators > Installed Operators.
- Click the APIcast Operator from the list of Installed Operators.
- Click APIcast > Create APIcast.
2.5.1. APIcast deployment and configuration options
You can deploy and configure an APIcast gateway self-managed solution using two approaches:
See also:
2.5.1.1. Providing a 3scale API Management system endpoint
Procedure
- Create an OpenShift secret that contains the 3scale System Admin Portal endpoint information:

  $ oc create secret generic ${SOME_SECRET_NAME} --from-literal=AdminPortalURL=${MY_3SCALE_URL}

  - ${SOME_SECRET_NAME} is the name of the secret and can be any name you want as long as it does not conflict with an existing secret.
  - ${MY_3SCALE_URL} is the URI that includes your 3scale access token and 3scale System portal endpoint. For more details, see THREESCALE_PORTAL_ENDPOINT.

  Example:

  $ oc create secret generic 3scaleportal --from-literal=AdminPortalURL=https://access-token@account-admin.3scale.net

  For more information about the contents of the secret, see the Admin portal configuration secret reference.
Create the OpenShift object for APIcast
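For illustration, a minimal APIcast object referencing the secret created above might look like the following sketch (the metadata name is an assumption for this example):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: example-apicast
spec:
  # Name of the secret that holds the AdminPortalURL value
  adminPortalCredentialsRef:
    name: 3scaleportal
```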
The spec.adminPortalCredentialsRef.name must be the name of the existing OpenShift secret that contains the 3scale System Admin Portal endpoint information.

Verify that the APIcast pod is running and ready by confirming that the readyReplicas field of the OpenShift Deployment associated with the APIcast object is 1. Alternatively, wait until the field is set with:

$ echo $(oc get deployment apicast-example-apicast -o jsonpath='{.status.readyReplicas}')
1
2.5.1.1.1. Verifying the APIcast gateway is running and available
Procedure
Ensure the APIcast OpenShift Service is exposed to your local machine, and perform a test request. Do this by port-forwarding the APIcast OpenShift Service to localhost:8080:

$ oc port-forward svc/apicast-example-apicast 8080

Make a request to a configured 3scale service to verify a successful HTTP response. Use the domain name configured in the Staging Public Base URL or Production Public Base URL settings of your service.
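For example, assuming a service whose Staging Public Base URL is configured with the placeholder hostname myhost.example.com, a test request through the port-forward might look like:

```shell
# Send the configured public base URL hostname via the Host header
curl 127.0.0.1:8080/test -H "Host: myhost.example.com"
```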
2.5.1.1.2. Exposing APIcast externally via a Kubernetes Ingress
To expose APIcast externally via a Kubernetes Ingress, set and configure the exposedHost section. When the host field in the exposedHost section is set, this creates a Kubernetes Ingress object. The Kubernetes Ingress object can then be used by a previously installed and existing Kubernetes Ingress Controller to make APIcast accessible externally.
To learn what Ingress Controllers are available to make APIcast externally accessible and how they are configured see the Kubernetes Ingress Controllers documentation.
The following example exposes APIcast with the hostname myhostname.com:
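A sketch of an APIcast object using exposedHost might look like this (the metadata name is an assumption):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: example-apicast
spec:
  # Setting exposedHost.host creates a Kubernetes Ingress object
  exposedHost:
    host: myhostname.com
```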
The example creates a Kubernetes Ingress object on the port 80 using HTTP. When the APIcast deployment is in an OpenShift environment, the OpenShift default Ingress Controller will create a Route object using the Ingress object APIcast creates which allows external access to the APIcast installation.
You may also configure TLS for the exposedHost section. The following table details the available fields:

| json/yaml field | Type | Required | Default value | Description |
|---|---|---|---|---|
| host | string | Yes | N/A | Domain name being routed to the gateway |
| tls | []networkv1.IngressTLS | No | N/A | Array of ingress TLS objects. See more on TLS. |
2.5.1.2. Providing a configuration secret
Procedure
Create a secret with the configuration file:

$ curl https://raw.githubusercontent.com/3scale/APIcast/master/examples/configuration/echo.json -o $PWD/config.json
$ oc create secret generic apicast-echo-api-conf-secret --from-file=$PWD/config.json

The configuration file must be called config.json. This is an APIcast CRD reference requirement.

For more information about the contents of the secret, see the Admin portal configuration secret reference.
Create an APIcast custom resource:
The following is an example of an embedded configuration secret:
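A sketch of such a secret, embedding a minimal config.json that routes all requests to an echo API (the upstream URL and policy configuration are illustrative), might look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: apicast-echo-api-conf-secret
stringData:
  config.json: |
    {
      "services": [
        {
          "proxy": {
            "policy_chain": [
              {
                "name": "apicast.policy.upstream",
                "configuration": {
                  "rules": [
                    { "regex": "/", "url": "http://echo-api.3scale.net" }
                  ]
                }
              }
            ]
          }
        }
      ]
    }
type: Opaque
```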
Set the following content when creating the APIcast object:
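For example, a minimal APIcast object referencing the configuration secret might look like this sketch (names follow the earlier example):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: example-apicast
spec:
  # Name of the secret that holds config.json
  embeddedConfigurationSecretRef:
    name: apicast-echo-api-conf-secret
```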
The spec.embeddedConfigurationSecretRef.name must be the name of the existing OpenShift secret that contains the configuration of the gateway.

Verify that the APIcast pod is running and ready by confirming that the readyReplicas field of the OpenShift Deployment associated with the APIcast object is 1. Alternatively, wait until the field is set with:

$ echo $(oc get deployment apicast-example-apicast -o jsonpath='{.status.readyReplicas}')
1
2.5.1.2.1. Verifying the APIcast gateway is running and available
Procedure
Ensure the APIcast OpenShift Service is exposed to your local machine, and perform a test request. Do this by port-forwarding the APIcast OpenShift Service to localhost:8080:

$ oc port-forward svc/apicast-example-apicast 8080
2.5.1.3. Injecting custom environments with the APIcast operator
In a 3scale installation that uses self-managed APIcast, you can use the APIcast operator to inject custom environments. A custom environment defines behavior that APIcast applies to all upstream APIs that the gateway serves. To create a custom environment, define a global configuration in Lua code.
You can inject a custom environment as part of or after APIcast installation. After injecting a custom environment, you can remove it and the APIcast operator reconciles the changes.
Prerequisites
- The APIcast operator is installed.
Procedure
Write Lua code that defines the custom environment that you want to inject. For example, an env1.lua file might define a custom logging policy that the APIcast operator loads for all services.

Create a secret from the Lua file that defines the custom environment. For example:

$ oc create secret generic custom-env-1 --from-file=./env1.lua

A secret can contain multiple custom environments. Specify the
--from-file option for each file that defines a custom environment. The operator loads each custom environment.

Define an APIcast custom resource that references the secret you just created. The following example shows only the content relevant to referencing the secret that defines the custom environment:
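A sketch of the relevant fields, assuming the custom-env-1 secret from the previous step (the metadata name is an assumption):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: apicast1
spec:
  # Each entry references one secret that defines a custom environment
  customEnvironments:
    - secretRef:
        name: custom-env-1
```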
APIcastcustom resource can reference multiple secrets that define custom environments. The operator loads each custom environment.Create the
APIcastcustom resource that adds the custom environment. For example, if you saved theAPIcastcustom resource in theapicast.yamlfile, run the following command:oc apply -f apicast.yaml
$ oc apply -f apicast.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
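For illustration only, an env1.lua such as the one referenced in this procedure might insert APIcast's built-in logging policy into every service's policy chain. This is a sketch, not a verbatim file; the logging format string and metrics port are assumptions:

```lua
local cjson = require('cjson')
local PolicyChain = require('apicast.policy_chain')
local policy_chain = context.policy_chain

-- Configuration for the built-in logging policy (illustrative values)
local logging_policy_config = cjson.decode([[
  {
    "enable_access_logs": false,
    "custom_logging": "\"{{request}}\" to service {{service.id}} and {{service.name}}"
  }
]])

-- Insert the logging policy at the head of every service's policy chain
policy_chain:insert(
  PolicyChain.load_policy('logging', 'builtin', logging_policy_config), 1)

return {
  policy_chain = policy_chain,
  port = { metrics = 9421 },
}
```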
Next steps
If you update your custom environment, be sure to re-create its secret so that the secret contains the update. The APIcast operator watches for updates and automatically redeploys when it finds one.
2.5.1.4. Injecting custom policies with the APIcast operator
In a 3scale installation that uses self-managed APIcast, you can use the APIcast operator to inject custom policies. Injecting a custom policy adds the policy code to APIcast. You can then use either of the following to add the custom policy to an API product’s policy chain:
- 3scale API
- Product custom resource
To use the 3scale Admin Portal to add the custom policy to a product’s policy chain, you must also register the custom policy’s schema with a CustomPolicyDefinition custom resource. Custom policy registration is a requirement only when you want to use the Admin Portal to configure a product’s policy chain.
You can inject a custom policy as part of or after APIcast installation. After injecting a custom policy, you can remove it and the APIcast operator reconciles the changes.
Prerequisites
- The APIcast operator is installed or you are in the process of installing it.
- You have defined a custom policy as described in Write your own policy. That is, you have already created, for example, the my-first-custom-policy.lua, apicast-policy.json, and init.lua files that define a custom policy.
Procedure
Create a secret from the files that define one custom policy. For example:
$ oc create secret generic my-first-custom-policy-secret \
    --from-file=./apicast-policy.json \
    --from-file=./init.lua \
    --from-file=./my-first-custom-policy.lua

If you have more than one custom policy, create a secret for each custom policy. A secret can contain only one custom policy.
Define an APIcast custom resource that references the secret you just created. The following example shows only the content relevant to referencing the secret that defines the custom policy:
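A sketch of the relevant fields, assuming the my-first-custom-policy-secret from the previous step (the policy version value and metadata name are assumptions):

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: apicast1
spec:
  # Each entry references one secret that defines a custom policy
  customPolicies:
    - name: my-first-custom-policy
      version: "0.1"
      secretRef:
        name: my-first-custom-policy-secret
```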
APIcastcustom resource can reference multiple secrets that define custom policies. The operator loads each custom policy.Create the
APIcastcustom resource that adds the custom policy. For example, if you saved theAPIcastcustom resource in theapicast.yamlfile, run the following command:oc apply -f apicast.yaml
$ oc apply -f apicast.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Next steps
If you update your custom policy, be sure to re-create its secret so that the secret contains the update. The APIcast operator watches for updates and automatically redeploys when it finds one.
Additional resources
2.5.1.5. Configuring OpenTracing with the APIcast operator
In a 3scale installation that uses self-managed APIcast, you can use the APIcast operator to configure OpenTracing. By enabling OpenTracing, you get more insight and better observability on the APIcast instance.
Prerequisites
- The APIcast operator is installed, or you are in the process of installing it.
Procedure
Define a secret that contains your OpenTracing configuration details in stringData.config. This is the only valid key for providing your OpenTracing configuration details; any other specification prevents APIcast from receiving them.

Create the secret. For example, if you saved your secret definition in the myjaeger.yaml file, run the following command:

$ oc create -f myjaeger.yaml

Define an APIcast custom resource that specifies the OpenTracing attributes. In the CR definition, set the spec.tracingConfigSecretRef.name attribute to the name of the secret that contains your OpenTracing configuration details.

Create the APIcast custom resource that configures OpenTracing. For example, if you saved the APIcast custom resource in the apicast1.yaml file, run the following command:

$ oc apply -f apicast1.yaml
Next steps
Depending on how OpenTracing is installed, you should see the traces in the Jaeger service user interface.
Additional resource
2.5.1.6. Setting the APICAST_SERVICE_CACHE_SIZE environment variable
You can specify the number of services that APIcast stores in the internal cache by adding an optional field in the APIcast custom resource (CR).
Prerequisites
- You have installed the APIcast operator, or you are in the process of installing it.
Procedure
- Add the optional serviceCacheSize field in the spec:
spec:
  # ...
  serviceCacheSize: 42
Verification
Type the following command to check the deployment:
$ oc get deployment/apicast-example-apicast -o yaml

Verify inclusion of the environment variable:
# ...
- name: APICAST_SERVICE_CACHE_SIZE
  value: '42'
# ...
Additional resource
2.6. Additional resources
To get information about the latest released and supported version of APIcast, see the articles:
Chapter 3. External Redis database configuration
Red Hat supports 3scale configurations that use an external Redis database. However, Red Hat does not officially support setting up Redis for zero downtime, or Redis database replication and sharding. That content is provided for reference only. Additionally, Redis Cluster mode is not supported in 3scale.
- Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
3scale uses multiple Redis databases:
- Backend storage database holds information about API services, application keys, metrics, limits, and usage data, including current usage and historical analytics. Utilization data is unique to the Backend storage database, while the database of the System component is the main source for other data, which you can recreate if needed.
- Backend queues database temporarily stores background job queues for reporting API usage, based on Resque. The backend listener creates these jobs, and the backend worker processes them.
- System Redis database is used for storage of data required by the System component, mainly for background job processing using Sidekiq, but also other internal purposes.
The OpenShift deployments that use Redis databases are: backend-listener, backend-worker, backend-cron, system-app, and system-sidekiq.
Starting from Red Hat 3scale API Management 2.16, all Redis databases must be provided by the user. The 3scale operator does not create or manage the Redis databases. The connection details for the Redis databases are set through OpenShift secrets backend-redis and system-redis, see Configuring Redis databases for details. The supported versions for the database can be consulted at Red Hat 3scale API Management Supported Configurations (3scale API Management 2.16 section).
Prerequisites
- A 3scale account with an administrator role.
3.1. Configuring Redis databases
This section provides information on how to configure the Redis databases used by 3scale API Management when deploying in an OpenShift Container Platform environment.
Redis databases are configured using the OpenShift secrets backend-redis and system-redis. The 3scale operator uses these secrets to set the environment variables in the corresponding Deployment resources for the Backend and System components. Each Deployment specification contains mappings from the secret fields to the environment variables used by the pods. For example, for the Backend Deployments:
- name: CONFIG_REDIS_SENTINEL_HOSTS
  valueFrom:
    secretKeyRef:
      key: REDIS_STORAGE_SENTINEL_HOSTS
      name: backend-redis
The complete mapping between the secret fields and the environment variables is described in the following sections.
3.1.1. backend-redis secret
The backend-redis secret contains the configuration for the Backend storage and Backend queues databases. The former is used by both the Backend and System components, while the latter is used only by the Backend components.
The following table lists all the supported fields in the backend-redis secret and how the values set in those fields are used in the pods of the System (system-app, and system-sidekiq) and Backend (backend-listener, backend-worker, backend-cron) components. For the format and example values for the fields, see Format and example values for Redis configuration.
| backend-redis secret field | Description | Env var in Backend | Env var in System |
|---|---|---|---|
| REDIS_STORAGE_URL | Backend storage database URL. Required. | CONFIG_REDIS_PROXY | BACKEND_REDIS_URL |
| REDIS_STORAGE_USERNAME | Username for the backend storage database authentication (or Redis master, if Redis Sentinel is used). | CONFIG_REDIS_USERNAME | BACKEND_REDIS_USERNAME |
| REDIS_STORAGE_PASSWORD | Password for the backend storage database authentication (or Redis master, if Redis Sentinel is used). | CONFIG_REDIS_PASSWORD | BACKEND_REDIS_PASSWORD |
| REDIS_STORAGE_SENTINEL_HOSTS | Comma-separated list of sentinel URLs, for backend storage database. | CONFIG_REDIS_SENTINEL_HOSTS | BACKEND_REDIS_SENTINEL_HOSTS |
| REDIS_STORAGE_SENTINEL_ROLE | The role of the instance to connect to via Sentinel (master or slave). | CONFIG_REDIS_SENTINEL_ROLE | BACKEND_REDIS_SENTINEL_ROLE |
| REDIS_STORAGE_SENTINEL_USERNAME | Username for the Redis sentinel authentication, for backend storage database. | CONFIG_REDIS_SENTINEL_USERNAME | BACKEND_REDIS_SENTINEL_USERNAME |
| REDIS_STORAGE_SENTINEL_PASSWORD | Password for the Redis sentinel authentication, for backend storage database. | CONFIG_REDIS_SENTINEL_PASSWORD | BACKEND_REDIS_SENTINEL_PASSWORD |
| REDIS_QUEUES_URL | Backend queues database URL. Required. | CONFIG_QUEUES_MASTER_NAME | |
| REDIS_QUEUES_USERNAME | Username for the backend queues database authentication (or Redis master, if Redis Sentinel is used). | CONFIG_QUEUES_USERNAME | |
| REDIS_QUEUES_PASSWORD | Password for the backend queues database authentication (or Redis master, if Redis Sentinel is used). | CONFIG_QUEUES_PASSWORD | |
| REDIS_QUEUES_SENTINEL_HOSTS | Comma-separated list of sentinel URLs, for backend queues database. | CONFIG_QUEUES_SENTINEL_HOSTS | |
| REDIS_QUEUES_SENTINEL_ROLE | The role of the instance to connect to via Sentinel (master or slave). | CONFIG_QUEUES_SENTINEL_ROLE | |
| REDIS_QUEUES_SENTINEL_USERNAME | Username for the Redis sentinel authentication, for backend queues database. | CONFIG_QUEUES_SENTINEL_USERNAME | |
| REDIS_QUEUES_SENTINEL_PASSWORD | Password for the Redis sentinel authentication, for backend queues database. | CONFIG_QUEUES_SENTINEL_PASSWORD |
Example of the backend-redis secret
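For illustration, a backend-redis secret using single Redis instances might look like the following sketch (hostnames and database numbers are placeholders; see the field table above for the full set of supported fields):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backend-redis
stringData:
  # Backend storage database (required)
  REDIS_STORAGE_URL: redis://backend-redis-storage.example.com:6379/0
  # Backend queues database (required)
  REDIS_QUEUES_URL: redis://backend-redis-queues.example.com:6379/1
type: Opaque
```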
3.1.2. system-redis secret
The system-redis secret contains the configuration for the Redis databases used by the System components of 3scale.
The following table lists all the supported fields in the system-redis secret and how the values set in those fields are used in the pods of the System components (system-app and system-sidekiq).
| system-redis secret field | Description | Env var in System |
|---|---|---|
| URL | System database URL. Required. | REDIS_URL |
| REDIS_USERNAME | Username for the system database authentication (or Redis master, if Redis Sentinel is used). | REDIS_USERNAME |
| REDIS_PASSWORD | Password for the system database authentication (or Redis master, if Redis Sentinel is used). | REDIS_PASSWORD |
| SENTINEL_HOSTS | Comma-separated list of sentinel URLs, for system database. | REDIS_SENTINEL_HOSTS |
| SENTINEL_ROLE | The role of the instance to connect to via Sentinel (master or slave). | REDIS_SENTINEL_ROLE |
| REDIS_SENTINEL_USERNAME | Username for the Redis sentinel authentication, for system database. | REDIS_SENTINEL_USERNAME |
| REDIS_SENTINEL_PASSWORD | Password for the Redis sentinel authentication, for system database. | REDIS_SENTINEL_PASSWORD |
Example of the system-redis secret
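For illustration, a system-redis secret for a single Redis instance might look like the following sketch (the hostname and database number are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: system-redis
stringData:
  # System database (required)
  URL: redis://system-redis.example.com:6379/0
type: Opaque
```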
3.1.3. Format and example values for Redis configuration
The following sections describe the format and provide example values for the fields in the backend-redis and system-redis secrets.
3.1.3.1. URL fields
The fields REDIS_STORAGE_URL, REDIS_QUEUES_URL in backend-redis secret, and URL in system-redis secret accept values in the following format:
redis[s]://[[username][:password]@]host-or-ip[:port][/db-number]
- Square brackets ([ ]) indicate an optional parameter sequence.
- host-or-ip can be a hostname or an IP address, when using a single Redis instance.
- When using Redis Sentinel, host-or-ip refers to the Redis master group name.
- db-number is an integer value representing the logical database number to connect to. If not specified, it defaults to 0. Note that some Redis providers do not support logical databases, in which case this part should be omitted.
- When the rediss:// scheme is used, the connection to Redis is made over TLS/SSL. If TLS mode is used, the corresponding fields in the secrets must be set for SSL CA, SSL Cert, and SSL Key (REDIS_SSL_(CA|CERT|KEY) or REDIS_SSL_QUEUES_(CA|CERT|KEY)).
- The credentials (username and password) can also be provided through their respective fields in the secrets (REDIS_STORAGE_USERNAME, REDIS_QUEUES_PASSWORD, and so on). If the credentials are provided both in the URL and through the corresponding secret fields, the ones in the URL take precedence.
- If port is omitted, it defaults to 6379.
Example values:
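For illustration, URL values in this format might look like the following (hosts and credentials are placeholders):

```
redis://backend-redis.example.com:6379/0
rediss://user:p4ssw0rd@backend-redis.example.com:6380/1
```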
3.1.3.2. SENTINEL_HOSTS fields
The fields REDIS_STORAGE_SENTINEL_HOSTS and REDIS_QUEUES_SENTINEL_HOSTS in backend-redis secret, and SENTINEL_HOSTS in system-redis secret accept a comma-separated list of sentinel connection strings in the following format:
[redis[s]://][[username][:password]@]sentinel-hostname-or-ip:[port]
- Square brackets ([ ]) indicate an optional parameter sequence.
- Specifying the scheme (redis:// or rediss://) is optional.
- When the rediss:// scheme is used, the connection to Redis Sentinel is made over TLS/SSL. If TLS mode is used, the corresponding fields in the secrets must be set for SSL CA, SSL Cert, and SSL Key (REDIS_SSL_(CA|CERT|KEY) or REDIS_SSL_QUEUES_(CA|CERT|KEY)).
- sentinel-hostname-or-ip can be a hostname or an IP address.
- Normally, there should be at least three Sentinel instances for high availability.
- The credentials (username and password) can also be provided through their respective fields in the secrets (REDIS_STORAGE_SENTINEL_USERNAME, REDIS_QUEUES_SENTINEL_PASSWORD, REDIS_SENTINEL_USERNAME, and so on). If the credentials are provided both in the URL and through the corresponding secret fields, the ones in the URL take precedence.
- If the credentials (username and password) are provided through the URL, all sentinels in the list must use the same credentials. If different credentials are provided for different sentinels, the credentials of the first sentinel in the list are used.
- If port is omitted, it defaults to 26379.
Example values:
redis://sentinel-user:sentinel-pass@sentinel1.example.com,redis://sentinel-user:sentinel-pass@sentinel2.example.com,redis://sentinel-user:sentinel-pass@sentinel3.example.com
sentinel1.3scale-databases.svc.cluster.local:26379,sentinel2.3scale-databases.svc.cluster.local:26379,sentinel3.3scale-databases.svc.cluster.local:26379
rediss://:mypass@10.119.4.45:16379,rediss://:mypass@10.119.8.252:16380,rediss://:mypass@10.119.7.48:16381
3.2. Setting up Redis for zero downtime
High availability is provided for most components of 3scale by the OpenShift Container Platform (OCP). The different components can be deployed in multiple replicas across different OpenShift nodes, so if one node goes down, the other nodes can continue to serve traffic.
In order to provide high availability (HA) for 3scale, it is important that the underlying databases, including the Redis databases, also work in high availability (HA) mode.
For the Redis database, it is not possible to provide high availability using simple OpenShift deployments (for example, Deployment or StatefulSet resources) alone. If the Redis pod stops, or if OpenShift Container Platform stops it, a new pod is automatically created. Persistent storage restores the data so the pod continues to work. In these scenarios, there is a small amount of downtime while the new pod starts. You can reduce downtime by preinstalling the Redis images onto all nodes that have Redis deployed to them. This speeds up pod restart time; however, it does not eliminate downtime completely.
To achieve high availability (HA) for Redis databases, more complex Redis configurations or managed cloud solutions are required. Example solutions include:
- Red Hat does not provide support for the above mentioned services. The mention of any such services does not imply endorsement by Red Hat of the products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) any external content.
- Ensure that the version provided by these services is compatible with 3scale. For more details, see Red Hat 3scale API Management Supported Configurations.
- Ensure that the Redis solution you choose does not use Redis Cluster topology, or sharding, as they are not supported by 3scale.
- Some Redis services do not support logical database separation (multiple databases in a single Redis instance). In such cases, you must use a different Redis instance for each database used by 3scale.
3.2.1. Configuring Redis in high availability (HA) mode
You can configure the Backend Redis databases, the System Redis database, or both, to work in high availability (HA) mode. It is more critical to run the Backend Redis databases in HA mode, because they are used to authorize API requests; downtime in these databases leads to downtime of the API services managed by 3scale. The System Redis database is mainly used for background job processing; downtime in this database does not bring down the API services, but it affects some functionality of the 3scale application.
To configure Redis (not Redis Sentinel), update the backend-redis and system-redis secrets with the connection details of the Redis instances. It is recommended to use a different Redis instance for each database. Update the URL field of the system-redis secret, and the REDIS_QUEUES_URL and REDIS_STORAGE_URL fields of the backend-redis secret, with the connection strings of the Redis instances. If needed, you can also provide authentication details (username and/or password) by either adding them to the Redis connection string or by setting the corresponding fields in the secrets. See Configuring Redis databases for details about the available fields.
3.2.1.1. Redis Sentinel configuration
To configure Redis Sentinel with 3scale, you need to update the backend-redis and system-redis secrets with the following connection details:
- REDIS_STORAGE_SENTINEL_HOSTS and REDIS_QUEUES_SENTINEL_HOSTS in the backend-redis secret, and SENTINEL_HOSTS in the system-redis secret, must contain a comma-separated list of Redis Sentinel hosts and ports.
- REDIS_STORAGE_SENTINEL_ROLE and REDIS_QUEUES_SENTINEL_ROLE in the backend-redis secret, and SENTINEL_ROLE in the system-redis secret, must contain the role of the Redis instance to connect to via Sentinel. Accepted values: master or slave.
- REDIS_STORAGE_URL and REDIS_QUEUES_URL in the backend-redis secret, and URL in the system-redis secret, must contain the name of the Redis master group configured in Redis Sentinel.
- Optionally, provide authentication details (username and/or password) by either adding them to the Redis connection string or by setting the corresponding fields in the secrets.
- See Configuring Redis databases for details about the available fields and their formats.
Example of the backend-redis secret using Redis Sentinel
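For illustration, such a secret might look like the following sketch (the master group names and sentinel hosts are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backend-redis
stringData:
  # URL holds the Redis master group name when Sentinel is used
  REDIS_STORAGE_URL: redis://backend-storage-group
  REDIS_STORAGE_SENTINEL_HOSTS: sentinel1.example.com:26379,sentinel2.example.com:26379,sentinel3.example.com:26379
  REDIS_STORAGE_SENTINEL_ROLE: master
  REDIS_QUEUES_URL: redis://backend-queues-group
  REDIS_QUEUES_SENTINEL_HOSTS: sentinel1.example.com:26379,sentinel2.example.com:26379,sentinel3.example.com:26379
  REDIS_QUEUES_SENTINEL_ROLE: master
type: Opaque
```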
Example of the system-redis secret using Redis Sentinel
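A corresponding sketch for the system-redis secret (again with placeholder master group name and sentinel hosts):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: system-redis
stringData:
  # URL holds the Redis master group name when Sentinel is used
  URL: redis://system-group
  SENTINEL_HOSTS: sentinel1.example.com:26379,sentinel2.example.com:26379,sentinel3.example.com:26379
  SENTINEL_ROLE: master
type: Opaque
```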
3.3. Additional information
- For more information about 3scale and Redis database support, see Red Hat 3scale API Management Supported Configurations.
- For more information about Amazon ElastiCache for Redis, see the official Amazon ElastiCache Documentation.
- For more information about Redis Sentinel, see the latest High availability with Redis Sentinel.
Chapter 4. Configuring an external MySQL database
Externalizing databases from a Red Hat 3scale API Management deployment provides isolation from the application and resilience against service disruptions at the database level. The resilience to service disruptions depends on the service level agreements (SLAs) provided by the infrastructure or platform provider hosting the databases; 3scale does not offer this. For more details on database externalization for your chosen deployment, see the associated documentation.
Red Hat supports 3scale configurations that use an external MySQL database. However, the database itself is not within the scope of support.
This guide provides information for externalizing the MySQL database. This is useful when infrastructure issues, such as network or filesystem problems, affect the default system-mysql pod.
Prerequisites
- Access to an OpenShift Container Platform 4.x cluster using an account with administrator privileges.
- A 3scale instance installation on the OpenShift cluster. See Installing 3scale API Management on OpenShift.
- An external (that is not part of the 3scale installation) MySQL database, configured according to the External MySQL database configuration.
To configure an external MySQL database, perform the steps outlined in the following sections:
4.1. External MySQL database configuration
When creating an external MySQL database, you need to configure it as explained below.
MySQL database user
The connection string that is used to configure the database connection (see System database secret to learn where to configure the connection string) for the external MySQL database must be in the following format:
mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}
{DB_PASSWORD} and {DB_PORT} are optional.
The user with username {DB_USER} must be created and granted all privileges to the database indicated as {DB_NAME}. Example commands for creating a user:
CREATE USER 'exampleuser'@'%' IDENTIFIED BY 'examplepass';
GRANT ALL PRIVILEGES ON exampledb.* to 'exampleuser'@'%';
In case of a new installation of 3scale, if the database {DB_NAME} does not exist, it will be created by the installation scripts.
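Combining the connection string format with the example user above, a complete connection string might look like the following (the hostname is a placeholder):

```
mysql2://exampleuser:examplepass@mysql.example.com:3306/exampledb
```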
Binary logging configuration
If binary logging is enabled on the MySQL server and the database user does not have the SUPER privilege, the global system variable log_bin_trust_function_creators must be set to 1. This is required because 3scale uses stored procedures and triggers.
Alternatively, if you choose to set SUPER privilege for the database user, note that it is deprecated as of MySQL 8.0 and will be removed in a future version of MySQL. See MySQL documentation for more information.
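For example, an administrator can set the log_bin_trust_function_creators variable at runtime (it can also be set in the server configuration file):

```sql
-- Run as a MySQL user with privileges to set global variables
SET GLOBAL log_bin_trust_function_creators = 1;
```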
Configuring secure connection (SSL/TLS)
If the MySQL server requires a secure connection (for example, when the system variable require_secure_transport is set to ON), you need to configure the MySQL client to use SSL/TLS. To do so, add the parameter ?ssl_mode=required to the connection string set in the System database secret. Example value:
mysql2://some-user:secure-password@db.example.com/system_database?ssl_mode=required
Refer to the MySQL documentation on client-side configuration for encrypted connections for more information about the possible values of the ssl_mode parameter.
4.2. Externalizing the MySQL database
Use the following steps to fully externalize the MySQL database.
This will cause downtime in the environment while the process is ongoing.
Prerequisites
- You have a running and working 3scale instance that uses the default internal MySQL database, and your 3scale installation supports your MySQL database version.
You have a MySQL instance with the following characteristics:
- Its version is supported for the current 3scale On-premises installation. See the supported configurations page.
- The current built-in MySQL database and the external one have the same version.
- The database must be accessible from the pods in the 3scale namespace. The database hostname or IP address is referred to as <ext_mysql_hostname>.
- The database is configured according to the requirements specified in External MySQL database configuration.
- An empty database exists. The database name is referred to as <ext_mysql_dbname>.
- A user with credentials referred to as <ext_mysql_username> and <ext_mysql_password> has full access to the <ext_mysql_dbname> database. The root user can also be used, but this is not recommended. Refer to External MySQL database configuration for an example of granting the required privileges to a user.
- You have the sed utility for manipulating text files.
Procedure
Log in to the OpenShift cluster where your 3scale On-premises instance is hosted and change to its project:
$ oc login <url> <authentication-parameters>
$ oc project <3scale-project>

Replace <url>, <authentication-parameters>, and <3scale-project> with your own OpenShift server URL, authentication parameters, and the project name where 3scale is installed. Authentication parameters can be either -u <username> or --token=<token>.

Back up the existing APIManager custom resource. To confirm the name of your APIManager resource, use the following command:
$ oc get apimanager
NAME         AGE
apimanager   151d

In the example above, the name of the APIManager custom resource is apimanager.

Export the resource for future reference using the following command (replace <apimanager-resource-name> with your resource name):

$ oc get apimanager <apimanager-resource-name> -o yaml > apimanager.backup.yml

Ensure that you are able to log in to the external MySQL database through the system-mysql pod before proceeding with the next steps:

$ oc rsh <system_mysql_pod_name> mysql -u <ext_mysql_username> -p -h <ext_mysql_hostname> <ext_mysql_dbname>
- <system_mysql_pod_name>: The name of the system-mysql pod (starting with system-mysql-). You can get the pod name using the command oc get pods --selector=deployment=system-mysql -o name.
- When prompted, enter the <ext_mysql_username> user password (<ext_mysql_password>).
- The CLI will now display a mysql> prompt. Type exit, then press Return.
- Stop the 3scale pods in the order described below.
Stop 3scale On-premises
Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.
$ oc scale deployment/threescale-operator-controller-manager-v2 --replicas=0

Scale down the following pods to 0 replicas. It is recommended to run these commands one by one and wait for each step to be completed before proceeding to the next one. Use the command oc get deployment/<deployment-name> to check the status of the pods in the deployment. You should expect the READY column to show 0/0 for each deployment before proceeding to the next one.

$ oc scale deployment/{apicast-production,apicast-staging} --replicas=0
$ oc get deployment/{apicast-production,apicast-staging}

$ oc scale deployment/{system-app,system-sidekiq} --replicas=0
$ oc get deployment/{system-app,system-sidekiq}

$ oc scale deployment/{backend-listener,backend-worker,backend-cron,system-memcache,system-redis,system-searchd,zync,zync-que} --replicas=0
$ oc get deployment/{backend-listener,backend-worker,backend-cron,system-memcache,system-redis,system-searchd,zync,zync-que}

$ oc scale deployment/{backend-redis,zync-database} --replicas=0
$ oc get deployment/{backend-redis,zync-database}

Ensure that all deployments except system-mysql are scaled down to 0 pods using the following command. The READY column should show 0/0 for all deployments except system-mysql.

$ oc get deployments

Perform a full MySQL dump using the following command:
$ oc rsh <system_mysql_pod_name> /bin/bash -c "mysqldump -u root --single-transaction --routines --triggers system" > system-mysql-dump-temp.sql

- Replace <system_mysql_pod_name> with your unique system-mysql pod name.

Validate that the file system-mysql-dump-temp.sql contains a valid MySQL dump.
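One way to sanity-check the dump is to look for the mysqldump header and table definitions. The sketch below creates a stand-in file so the checks can be shown end to end; in practice you would run only the head and grep commands against the real system-mysql-dump-temp.sql:

```shell
# Stand-in dump file for illustration; a real mysqldump file begins with a
# "-- MySQL dump" header and contains CREATE TABLE statements. The table
# name below is hypothetical.
cat > system-mysql-dump-temp.sql <<'EOF'
-- MySQL dump 10.13
CREATE TABLE `accounts` (`id` bigint NOT NULL);
EOF

head -n 1 system-mysql-dump-temp.sql                 # expect a "-- MySQL dump" header
grep -c 'CREATE TABLE' system-mysql-dump-temp.sql    # expect a non-zero count
```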
If the user on the external database is not root, remove the DEFINER clauses from the dump using the following command:

$ sed 's/DEFINER=`root`@`%`//g' system-mysql-dump-temp.sql > system-mysql-dump.sql
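To see what the sed expression does, here it is applied to a single stand-in line of the kind mysqldump emits for triggers and routines (the trigger text is illustrative, not taken from a real dump):

```shell
# A stand-in dump line containing a DEFINER clause.
line='/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`%`*/ /*!50003 TRIGGER t1 BEFORE INSERT ON accounts FOR EACH ROW SET @x = 1 */'

# The same substitution used in the step above.
cleaned=$(printf '%s\n' "$line" | sed 's/DEFINER=`root`@`%`//g')
printf '%s\n' "$cleaned"
```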
Scale down the system-mysql pod and leave it with 0 (zero) replicas:

$ oc scale deployment/system-mysql --replicas=0

Import the dump into the external server using the following command:
$ mysql -u <ext_mysql_username> -p -h <ext_mysql_hostname> <ext_mysql_dbname> < system-mysql-dump.sql

- When prompted, enter the <ext_mysql_username> user password (<ext_mysql_password>) on the external server.
Ensure that the <ext_mysql_dbname> database has been populated with the data from the dump:
$ mysql -u <ext_mysql_username> -p -h <ext_mysql_hostname> <ext_mysql_dbname> -e 'show tables;'

The output should list the 3scale system tables that were imported from the dump.
Perform a backup of the existing OpenShift secret system-database:

$ oc get secret system-database -o yaml > system-database.backup.yml

Update the connection string in the system-database secret to point to the external MySQL database. Replace <ext_mysql_username>, <ext_mysql_password>, <ext_mysql_dbname>, and <ext_mysql_hostname> with your own values:

$ oc create secret generic system-database --from-literal=URL=mysql2://<ext_mysql_username>:<ext_mysql_password>@<ext_mysql_hostname>/<ext_mysql_dbname> --dry-run=client -o yaml | oc replace -f -

Update the APIManager custom resource to indicate that the database of the system component is external. This will detach the
system-mysql deployment and the related PersistentVolumeClaim and ConfigMap resources from the 3scale operator. Run the following command (replace <apimanager-resource-name> with your APIManager resource name):

$ oc patch apimanager <apimanager-resource-name> --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'

- Use the following instructions to Start 3scale On-premises, which scales up all the pods in the correct order.
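Before starting the pods again, you can sanity-check the updated secret. The oc command below requires cluster access, so the decoding step is also illustrated on a stand-in value (the URL shown is a placeholder, not a real connection string):

```shell
# With cluster access, the stored URL can be read back like this:
#   oc get secret system-database -o jsonpath='{.data.URL}' | base64 -d
# It should print the external mysql2:// connection string you set above.

# Illustration of the decode step on a stand-in base64 value:
encoded=$(printf '%s' 'mysql2://exampleuser:examplepass@db.example.com/exampledb' | base64)
printf '%s' "$encoded" | base64 -d
```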
Scale up the following pods to 1 replica. It is recommended to run these commands one by one and wait for each step to be completed before proceeding to the next one. Use the command oc get deployment/<deployment-name> to check the status of the pods in the deployment. You should expect the READY column to show 1/1 for each deployment before proceeding to the next one.

$ oc scale deployment/{backend-redis,zync-database} --replicas=1
$ oc get deployment/{backend-redis,zync-database}

$ oc scale deployment/{backend-listener,backend-worker,backend-cron,system-memcache,system-redis,system-searchd,zync,zync-que} --replicas=1
$ oc get deployment/{backend-listener,backend-worker,backend-cron,system-memcache,system-redis,system-searchd,zync,zync-que}

$ oc scale deployment/{system-app,system-sidekiq} --replicas=1
$ oc get deployment/{system-app,system-sidekiq}

If the external MySQL database integration has been configured properly, the system-app and system-sidekiq pods should start without any errors. At this point you can try to access the Admin Portal and the Developer Portal to verify they work as expected. Proceed to the next commands after confirming that both portals are accessible and contain the expected data. If you observe any errors in the system-app or system-sidekiq pods, you can follow the instructions in Rolling back to revert the changes and restore the system to use the internal MySQL database.

$ oc scale deployment/{apicast-production,apicast-staging} --replicas=1
$ oc get deployment/{apicast-production,apicast-staging}

Scale up the deployment of the 3scale operator controller back to 1 replica.
$ oc scale deployment/threescale-operator-controller-manager-v2 --replicas=1

Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replicas number is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.

Verify that all the pods are up and running with the following command:
$ oc get deployments

All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2, except system-mysql.

- Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs are working as expected.
- Back up the system-mysql Deployment object. You may delete it after a few days, once you are sure everything is running properly. Deleting the system-mysql Deployment avoids confusion if this procedure is performed again in the future.
4.3. Rolling back
Perform a rollback procedure if some issue occurs when scaling up the pods and it cannot be fixed.
Prerequisites
- You have the backup file system-database.backup.yml created as part of Externalizing the MySQL database.
- You have the yq utility for manipulating YAML files.
Procedure
Run the following steps to roll back the changes made to externalize the MySQL database.
- Scale down all the pods following the instructions in Stop 3scale On-premises.
Restore the system-database secret to its original contents, pointing the system to the internal MySQL database. Run the following command:

$ cat system-database.backup.yml \
  | yq 'del(.metadata.annotations, .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid)' \
  | oc replace -f -

Update the APIManager custom resource to indicate that the database of the system component is internal. Run the following command (replace <apimanager-resource-name> with your APIManager resource name):

$ oc patch apimanager <apimanager-resource-name> --type=merge -p '{"spec": {"externalComponents": {"system": {"database": false}}}}'

- Scale all the pods up again following the instructions in Start 3scale On-premises.
4.4. Additional information
4.4.1. Red Hat 3scale API Management Supported Configurations
For more information about 3scale and MySQL database support, see Red Hat 3scale API Management Supported Configurations.
4.4.2. Deleting the internal MySQL database resources (optional)
After you have confirmed that the external MySQL database is working correctly with your 3scale installation, you can optionally delete the internal MySQL database resources to free up resources in your OpenShift cluster. Follow the steps below to delete the resources.
The data stored in the internal MySQL database will be permanently lost after performing these steps. If you think you might need to return to the internal MySQL database in the future, you will need to restore the data from the dump system-mysql-dump.sql that you created as part of Externalizing the MySQL database.
(Optional) Back up the OpenShift resources related to the internal MySQL database. This is useful only for reference, in case you applied any custom configuration to them. It is not recommended to restore these resources from the backup files. If you need to roll back to using the internal MySQL database, set the spec.externalComponents.system.database field of the APIManager CR to false and let the 3scale operator recreate the resources.

$ oc get deployment system-mysql -o yaml > system-mysql-deployment.backup.yml
$ oc get pvc mysql-storage -o yaml > mysql-storage.backup.yml
$ oc get cm mysql-extra-conf -o yaml > mysql-extra-conf.backup.yml
$ oc get cm mysql-main-conf -o yaml > mysql-main-conf.backup.yml
$ oc get service system-mysql -o yaml > system-mysql-service.backup.yml

Delete the OpenShift resources related to the internal MySQL database:
$ oc delete deployment system-mysql
$ oc delete pvc mysql-storage
$ oc delete cm mysql-extra-conf
$ oc delete cm mysql-main-conf
$ oc delete service system-mysql