Chapter 2. 3scale on OpenShift installation guide
2.1. Introduction
This guide walks you through steps to deploy Red Hat 3scale API Management - On-premises 2.5 on OpenShift.
The 3scale solution for on-premises deployment is composed of:
- Two API gateways: embedded APIcast
- One 3scale Admin Portal and Developer Portal with persistent storage
There are two ways to deploy a 3scale solution:
- Using a template, as described in Section 2.4, “Deploying 3scale on OpenShift using a template”.
- Using the 3scale operator, as described in Section 2.7, “Deploying 3scale using the operator”.
Note: The 3scale operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
2.1.1. Prerequisites
- You must configure 3scale servers for UTC (Coordinated Universal Time).
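A node is configured for UTC when its local offset from UTC is zero. The sketch below is one way to verify this from a shell; the helper name is illustrative, and on RHEL you would correct a mismatch with `timedatectl`:

```shell
# Sketch: report whether this node's clock is set to UTC.
# On RHEL, fix a mismatch with: sudo timedatectl set-timezone UTC
is_utc() {
  [ "$(date +%z)" = "+0000" ]
}
if is_utc; then
  echo "node is on UTC"
else
  echo "node is NOT on UTC"
fi
```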
2.2. System requirements
This section lists the requirements for the 3scale API Management OpenShift template.
2.2.1. Environment requirements
3scale API Management requires an environment specified in supported configurations.
Persistent volumes:
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
- 1 RWX (ReadWriteMany) persistent volume for CMS and System-app Assets
The RWX persistent volume must be configured to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.
2.2.2. Hardware requirements
Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. Following are the recommendations when configuring your environment for 3scale on OpenShift:
- Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
- Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
- Use separate nodes for routing and compute tasks.
- Dedicate compute nodes to 3scale-specific tasks.
- Set the PUMA_WORKERS variable of the backend listener to the number of cores in your compute node.
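For example, the value can be derived from the node's core count with nproc. The sketch below only prints the `oc set env` command that would apply it; dc/backend-listener is the backend listener's deployment configuration name, assumed from the template:

```shell
# Sketch: derive PUMA_WORKERS from the current node's core count and print
# the oc command that would apply it to the backend listener.
puma_workers_cmd() {
  cores=$(nproc)
  echo "oc set env dc/backend-listener PUMA_WORKERS=${cores}"
}
puma_workers_cmd
```

Run the printed command against your cluster from a logged-in `oc` session.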
2.3. Configuring nodes and entitlements
Before you can deploy 3scale on OpenShift, you must configure your nodes and the entitlements required for your environment to fetch images from Red Hat.
Perform the following steps to configure the entitlements:
- Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
- Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM), via the interface or the command line.
- Attach your nodes to your 3scale subscription using RHSM.
Install OpenShift on your nodes, complying with the following requirements:
- Use a supported OpenShift version.
- Configure persistent storage on a file system that supports multiple writes.
- Install the OpenShift command line interface.
Enable access to the rhel-7-server-3scale-amp-2.5-rpms repository using the subscription manager:
sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2.5-rpms
Install the 3scale-amp-template package. The template will be saved at /opt/amp/templates:
sudo yum install 3scale-amp-template
2.4. Deploying 3scale on OpenShift using a template
2.4.1. Prerequisites
- An OpenShift cluster configured as specified in Section 2.3, “Configuring nodes and entitlements”.
- A domain, preferably wildcard, that resolves to your OpenShift cluster.
- Access to the Red Hat container catalog.
- (Optional) A working SMTP server for email functionality.
Follow these procedures to install 3scale on OpenShift using a .yml template:
2.4.2. Importing the 3scale template
Perform the following steps to import the 3scale template into your OpenShift cluster:
From a terminal session, log in to OpenShift as the cluster administrator:
oc login
Select your project, or create a new project:
oc project <project_name>
oc new-project <project_name>
Enter the oc new-app command:
- Specify the --file option with the path to the amp.yml file you downloaded as part of Section 2.3, “Configuring nodes and entitlements”.
- Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster.
- Optionally, specify the --param option with the WILDCARD_POLICY parameter set to Subdomain to enable wildcard domain routing.
Without wildcard routing:
oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>
With wildcard routing:
oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN> --param WILDCARD_POLICY=Subdomain
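The two variants differ only by the WILDCARD_POLICY parameter. As a sketch, the hypothetical helper below assembles the command from a domain and an optional policy, printing it rather than running it:

```shell
# Sketch: build the oc new-app command for the amp.yml template.
# $1 = wildcard domain, $2 = optional wildcard policy (e.g. Subdomain).
amp_new_app_cmd() {
  cmd="oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=$1"
  if [ -n "${2:-}" ]; then
    cmd="${cmd} --param WILDCARD_POLICY=$2"
  fi
  echo "${cmd}"
}
amp_new_app_cmd 3scale-project.example.com Subdomain
```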
The terminal shows the master and tenant URLs and credentials for your newly created 3scale Admin Portal. This output should include the following information:
- master admin username
- master password
- master token information
- tenant username
- tenant password
- tenant token information
Example output:
Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

* With parameters:
* ADMIN_PASSWORD=xXxXyz123 # generated
* ADMIN_USERNAME=admin
* TENANT_NAME=user
* MASTER_NAME=master
* MASTER_USER=master
* MASTER_PASSWORD=xXxXyz123 # generated

--> Success
Access your application via route 'user-admin.3scale-project.example.com'
Access your application via route 'master-admin.3scale-project.example.com'
Access your application via route 'backend-user.3scale-project.example.com'
Access your application via route 'user.3scale-project.example.com'
Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
Access your application via route 'api-user-apicast-production.3scale-project.example.com'
Access your application via route 'apicast-wildcard.3scale-project.example.com'
Make a note of these details for future reference.
Note: You may need to wait a few minutes for 3scale to deploy fully on OpenShift before your login and credentials work.
More Information
For information about wildcard domains on OpenShift, visit Using Wildcard Routes (for a Subdomain).
2.4.3. Configuring SMTP variables (optional)
3scale uses email to send notifications and to invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the SMTP config map.
Perform the following steps to configure the SMTP variables in the SMTP config map:
If you are not already logged in, log in to OpenShift:
oc login
Configure variables for the SMTP config map. Use the oc patch command, specifying the configmap and smtp objects, followed by the -p option, and write the new values in JSON for the following variables:

Variable | Description |
---|---|
address | Allows you to specify a remote mail server as a relay |
username | Specify your mail server username |
password | Specify your mail server password |
domain | Specify a HELO domain |
port | Specify the port on which the mail server is listening for new connections |
authentication | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and a cryptographic Message Digest 5 algorithm to hash important information) |
openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none, peer, client_once, or fail_if_no_peer_cert. |

Example:
oc patch configmap smtp -p '{"data":{"address":"<your_address>"}}'
oc patch configmap smtp -p '{"data":{"username":"<your_username>"}}'
oc patch configmap smtp -p '{"data":{"password":"<your_password>"}}'
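The individual oc patch calls can also be collapsed into one patch whose JSON payload covers several variables at once. The sketch below only builds that payload; the helper name and all values are illustrative placeholders, not part of 3scale:

```shell
# Sketch: build a single JSON patch for several SMTP variables.
# Apply with: oc patch configmap smtp -p "$(smtp_patch_json ...)"
smtp_patch_json() {
  # $1 address, $2 username, $3 password, $4 HELO domain, $5 port
  printf '{"data":{"address":"%s","username":"%s","password":"%s","domain":"%s","port":"%s"}}' \
    "$1" "$2" "$3" "$4" "$5"
}
smtp_patch_json smtp.example.com mailuser s3cret example.com 587
```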
After you have set the config map variables, redeploy the system-app and system-sidekiq pods:
oc rollout latest dc/system-app
oc rollout latest dc/system-sidekiq
2.5. 3scale template parameters
Template parameters configure environment variables of the 3scale (amp.yml) template during and after deployment.
In 3scale 2.5, the PostgreSQL version has been updated from 9 to 10. We highly recommend that you make this update to your PostgreSQL configuration. Refer to Upgrade Zync Database PostgreSQL 9.5 to 10 in the Migrating 3scale documentation.
Name | Description | Default Value | Required? |
---|---|---|---|
APP_LABEL | Used for object app labels. | | yes |
ZYNC_DATABASE_PASSWORD | Password for the PostgreSQL connection user. Generated randomly if not provided. | N/A | yes |
ZYNC_SECRET_KEY_BASE | Secret key base for Zync. Generated randomly if not provided. | N/A | yes |
ZYNC_AUTHENTICATION_TOKEN | Authentication token for Zync. Generated randomly if not provided. | N/A | yes |
AMP_RELEASE | 3scale release tag. | | yes |
ADMIN_PASSWORD | A randomly generated 3scale administrator account password. | N/A | yes |
ADMIN_USERNAME | 3scale administrator account username. | | yes |
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes |
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no |
WILDCARD_DOMAIN | Root domain for the wildcard routes. | N/A | yes |
WILDCARD_POLICY | Enable wildcard routes to embedded APIcast gateways by setting the value to "Subdomain". | | yes |
TENANT_NAME | Tenant name under the root domain; the Admin Portal will be available at this name with the -admin suffix. | | yes |
MYSQL_USER | Username for the MySQL user that will be used to access the database. | | yes |
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes |
MYSQL_DATABASE | Name of the MySQL database accessed. | | yes |
MYSQL_ROOT_PASSWORD | Password for the MySQL root user. | N/A | yes |
SYSTEM_BACKEND_USERNAME | Username for internal 3scale API authentication. | | yes |
SYSTEM_BACKEND_PASSWORD | Password for internal 3scale API authentication. | N/A | yes |
REDIS_IMAGE | Redis image to use. | | yes |
MYSQL_IMAGE | MySQL image to use. | | yes |
MEMCACHED_IMAGE | Memcached image to use. | | yes |
POSTGRESQL_IMAGE | PostgreSQL image to use. | | yes |
AMP_SYSTEM_IMAGE | 3scale System image to use. | | yes |
AMP_BACKEND_IMAGE | 3scale Backend image to use. | | yes |
AMP_APICAST_IMAGE | 3scale APIcast image to use. | | yes |
AMP_ROUTER_IMAGE | 3scale Wildcard Router image to use. | | yes |
AMP_ZYNC_IMAGE | 3scale Zync image to use. | | yes |
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes |
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base. | N/A | yes |
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status, or debug. At least status is required for health checks. | | no |
APICAST_OPENSSL_VERIFY | Turn on/off OpenSSL peer verification when downloading the configuration. Can be set to true/false. | | no |
APICAST_RESPONSE_CODES | Enable logging of response codes in APIcast. | true | no |
APICAST_REGISTRY_URL | A URL that resolves to the location of APIcast policies. | | yes |
MASTER_USER | Master administrator account username. | | yes |
MASTER_NAME | The subdomain value for the master Admin Portal. | | yes |
MASTER_PASSWORD | A randomly generated master administrator password. | N/A | yes |
MASTER_ACCESS_TOKEN | A token with master-level permissions for API calls. | N/A | yes |
IMAGESTREAM_TAG_IMPORT_INSECURE | Set to true if the server may bypass certificate verification or connect directly over HTTP during image import. | | yes |
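Any of these parameters can be overridden at deploy time with repeated --param options on oc new-app. As a sketch, this hypothetical helper turns KEY=VALUE arguments into the flag string (parameter names are from the table above; values are placeholders):

```shell
# Sketch: turn KEY=VALUE pairs into a --param flag string for oc new-app.
amp_params() {
  out=""
  for kv in "$@"; do
    out="${out}--param ${kv} "
  done
  echo "${out% }"   # strip the trailing space
}
amp_params WILDCARD_DOMAIN=example.com TENANT_NAME=user
```

The output can be appended to `oc new-app --file /opt/amp/templates/amp.yml`.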
2.6. Using APIcast with 3scale on OpenShift
APIcast is available with the API Manager for 3scale Hosted and in on-premises installations on OpenShift Container Platform. The configuration procedures differ for each. This section explains how to deploy APIcast with the API Manager on OpenShift.
2.6.1. Deploying APIcast templates on an existing OpenShift cluster containing 3scale
3scale OpenShift templates contain two embedded APIcast API gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates on your OpenShift cluster.
Perform the following steps to deploy additional API gateways on your OpenShift cluster:
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value.
Import the APIcast template using the oc new-app command, specifying the --file option with the apicast.yml file:
oc new-app --file /opt/amp/templates/apicast.yml
Note: First install the APIcast template as described in Section 2.3, “Configuring nodes and entitlements”.
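The --password value in the secret is simply a URL composed from the access token, tenant name, and wildcard domain. The sketch below assembles it from placeholder values so each part is visible; the helper name is illustrative:

```shell
# Sketch: compose the APIcast configuration URL used as the secret password.
# $1 = access token, $2 = tenant name, $3 = wildcard domain.
apicast_config_url() {
  echo "https://$1@$2-admin.$3"
}
apicast_config_url 1a2b3c4d user 3scale-project.example.com
# Then: oc secret new-basicauth apicast-configuration-url-secret \
#   --password="$(apicast_config_url <token> <tenant> <domain>)"
```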
2.6.2. Connecting APIcast from a different OpenShift cluster
If you deploy APIcast on a different OpenShift cluster, outside your 3scale cluster, you must connect through the public route:
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value.
Deploy APIcast on the other OpenShift cluster with the oc new-app command. Specify the --file option and the path to your apicast.yml file:
oc new-app --file /path/to/file/apicast.yml
2.6.3. Changing the default behavior for embedded APIcast
In external APIcast deployments, you can modify default behavior by changing the template parameters in the APIcast OpenShift template.
In embedded APIcast deployments, 3scale and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change the default behavior for the embedded APIcast deployments.
2.6.4. Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes
If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.
You must have an OpenShift SDN plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed:
ovs-subnet
If you are using the ovs-subnet
OpenShift Software-Defined Networking (SDN) plugin, perform the following steps to connect over internal routes:
If not already logged in, log in to your OpenShift cluster:
oc login
Enter the following command to display the backend-listener route URL:
oc get route backend-listener
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
ovs-multitenant
If you are using the ovs-multitenant
OpenShift SDN plugin, perform the following steps to connect over internal routes:
If not already logged in, log in to your OpenShift cluster:
oc login
As admin, enter the oadm command with the pod-network and join-projects options to set up communication between both projects:
oadm pod-network join-projects --to=<3SCALE_PROJECT> <APICAST_PROJECT>
Enter the following command to display the backend-listener route URL:
oc get route backend-listener
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
More information
For information on OpenShift SDN and project network isolation, see Openshift SDN.
2.6.5. Connecting APIcast on other deployments
If you deploy APIcast on Docker, you can connect APIcast to 3scale deployed on OpenShift by setting the THREESCALE_PORTAL_ENDPOINT parameter to the URL and access token of your 3scale Admin Portal deployed on OpenShift. You do not need to set the BACKEND_ENDPOINT_OVERRIDE parameter in this case.
For more details, see Chapter 5, APIcast on the Docker containerized environment.
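As a sketch of what such a Docker deployment might look like, the helper below prints (rather than runs) a docker run command with THREESCALE_PORTAL_ENDPOINT set. All values and the image placeholder are assumptions; see the APIcast chapter for the exact image name and variables:

```shell
# Sketch: print a docker run command pointing APIcast at a 3scale Admin
# Portal on OpenShift. $1 = access token, $2 = tenant, $3 = wildcard domain.
apicast_docker_cmd() {
  echo "docker run -d -p 8080:8080 -e THREESCALE_PORTAL_ENDPOINT=https://$1@$2-admin.$3 <apicast-gateway-image>"
}
apicast_docker_cmd 1a2b3c4d user 3scale-project.example.com
```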
2.7. Deploying 3scale using the operator
This section takes you through installing and deploying the 3scale solution via the 3scale operator, using the APIManager custom resource.
The 3scale operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
2.7.1. Prerequisites
- The 3scale operator is installed.
- OpenShift Container Platform 3.11.
- A user account with administrator privileges in the OpenShift cluster.
2.7.2. Deploying the APIManager custom resource
Deploying the APIManager custom resource causes the operator to begin processing it and to deploy a 3scale solution from it.
Deploy an APIManager by creating a new YAML file with the following content:

apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  productVersion: "2.5"
  wildcardDomain: <wildcardDomain>
  wildcardPolicy: <None|Subdomain>
  resourceRequirementsEnabled: true

Note: The wildcardDomain parameter can be any desired name you wish to give that resolves to an IP address and is a valid DNS domain. The wildcardPolicy parameter can only be None or Subdomain. Be sure to remove the placeholder marks for your parameters: < >.

Enable wildcard routes at the OpenShift router level if wildcardPolicy is Subdomain. You can do so by running the following command:
oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true -n default

Note: For more information about the APIManager fields, refer to the Reference documentation.

Create the APIManager resource in your project:
export NAMESPACE="operator-test"
oc project ${NAMESPACE}
oc create -f <yaml-name>
- This should trigger the deployment of a 3scale solution in the operator-test project.
2.8. Troubleshooting
This section contains a list of common installation issues and provides guidance for their resolution.
- Section 2.8.1, “Previous deployment leaving dirty persistent volume claims”
- Section 2.8.2, “Incorrectly pulling from the Docker registry”
- Section 2.8.3, “Permission issues for MySQL when persistent volumes are mounted locally”
- Section 2.8.4, “Unable to upload logo or images”
- Section 2.8.5, “Test calls do not work on OpenShift”
- Section 2.8.6, “APIcast on a different project from 3scale”
2.8.1. Previous deployment leaving dirty persistent volume claims
Problem
A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.
Cause
Deleting a project in OpenShift does not clean the PVCs associated with it.
Solution
Find the PVC containing the erroneous MySQL data with the oc get pvc command:

# oc get pvc
NAME                    STATUS  VOLUME  CAPACITY  ACCESSMODES  AGE
backend-redis-storage   Bound   vol003  100Gi     RWO,RWX      4d
mysql-storage           Bound   vol006  100Gi     RWO,RWX      4d
system-redis-storage    Bound   vol008  100Gi     RWO,RWX      4d
system-storage          Bound   vol004  100Gi     RWO,RWX      4d
- Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
- Delete everything under the MySQL path to clean the volume.
- Start a new system-mysql deployment.
2.8.2. Incorrectly pulling from the Docker registry
Problem
The following error occurs during installation:
svc/system-redis - 1EX.AMP.LE.IP:6379
dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
deployment #1 failed 13 minutes ago: config change
Cause
OpenShift searches for and pulls container images by issuing the docker
command. This command refers to the docker.io
Docker registry instead of the registry.access.redhat.com
Red Hat container registry.
This occurs when the system contains an unexpected version of the Docker containerized environment.
Solution
Use the appropriate version of the Docker containerized environment.
2.8.3. Permission issues for MySQL when persistent volumes are mounted locally
Problem
The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod log displays the following error:
[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting
Cause
The MySQL process is started with inappropriate user permissions.
Solution
The directories used for the persistent volumes must have write permissions for the root group. Having rw permissions for the root user alone is not enough, as the MySQL service runs as a different user in the root group. Execute the following command as the root user:
chmod -R g+w /path/for/pvs
Execute the following command to prevent SELinux from blocking access:
chcon -Rt svirt_sandbox_file_t /path/for/pvs
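Together, the two commands look like the sketch below, which substitutes a temporary directory for your real persistent volume path; the chcon step is commented out because it requires an SELinux-enabled host:

```shell
# Sketch: make a persistent-volume directory writable by the root group.
PV_PATH=$(mktemp -d)          # stand-in for your real /path/for/pvs
chmod -R g+w "${PV_PATH}"
# chcon -Rt svirt_sandbox_file_t "${PV_PATH}"   # SELinux labeling step
ls -ld "${PV_PATH}"
```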
2.8.4. Unable to upload logo or images
Problem
Unable to upload a logo - system-app
logs display the following error:
Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2
Cause
Persistent volumes are not writable by OpenShift.
Solution
Ensure your persistent volume is writable by OpenShift. It should be owned by root group and be group writable.
2.8.5. Test calls do not work on OpenShift
Problem
Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.
Cause
3scale requires HTTPS routes by default, and OpenShift routes are not secured.
Solution
Ensure the secure route checkbox is selected in your OpenShift router settings.
2.8.6. APIcast on a different project from 3scale
Problem
APIcast deploy fails (pod does not turn blue). The following error appears in the logs:
update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready
The following error appears in the pod:
Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"
Cause
The secret was not properly set up.
Solution
When creating a secret with APIcast v3, specify apicast-configuration-url-secret:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>