Chapter 3. 3scale AMP On-premises Installation Guide
In this guide you’ll learn how to install 3scale 2.0 (on-premises) on OpenShift using OpenShift templates.
3.1. 3scale AMP OpenShift Templates
As of 3scale API Management Platform (AMP) 2.0, Red Hat provides an OpenShift template. You can use this template to deploy AMP onto OpenShift Container Platform 3.3 and 3.4.
The 3scale AMP template is composed of the following:
- Two built-in APIcast API gateways
- One AMP admin portal and developer portal with persistent storage
3.2. System Requirements
The 3scale AMP OpenShift template requires the following:
3.2.1. Environment Requirements
Persistent Volumes:
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
- 1 RWX (ReadWriteMany) persistent volume for CMS and System-app Assets
The RWX persistent volume must be configured to be group writable. Refer to the OpenShift documentation for a list of persistent volume types which support the required access modes.
3.2.2. Hardware Requirements
Hardware requirements depend on your usage needs. Red Hat recommends you test and configure your environment to meet your specific requirements. Consider the following recommendations when configuring your environment for 3scale on OpenShift:
- Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
- Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory needs exceed your current node’s available RAM.
- Separate nodes between routing and compute tasks
- Dedicate compute nodes to 3scale specific tasks
- Set the PUMA_WORKERS variable of the backend listener to the number of cores in your compute node
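A minimal sketch of how PUMA_WORKERS could be set from a node's core count. The deployment config name dc/backend-listener is an assumption here, so verify the actual name with oc get dc before running anything; the command is printed for review rather than executed:

```shell
# Determine the core count (assumes this runs on, or with knowledge of,
# the compute node hosting the backend listener).
CORES=$(nproc)

# 'dc/backend-listener' is a hypothetical deployment config name;
# check yours with `oc get dc`.
echo "oc env dc/backend-listener PUMA_WORKERS=${CORES}"
```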
3.3. Configure Nodes and Entitlements
Before you can deploy 3scale on OpenShift, you must configure your nodes and the entitlements required for your environment to fetch images from Red Hat.
Perform the following steps to configure entitlements:
- Install Red Hat Enterprise Linux (RHEL) onto each of your nodes
- Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM)
- Attach your nodes to your 3scale subscription using RHSM.
Install OpenShift onto your nodes, complying with the following requirements:
- You must use OpenShift version 3.3 or 3.4
- You must configure persistent storage on a file system that supports multiple writes.
- Install the OpenShift command line interface
- Enable access to the rhel-7-server-3scale-amp-2.0-rpms repository using the subscription manager:
sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2.0-rpms
- Install the 3scale-amp-template AMP template. The template will be saved in /opt/amp/templates.
sudo yum install 3scale-amp-template
3.4. Deploy the 3scale AMP on OpenShift using a Template
3.4.1. Prerequisites:
- An OpenShift cluster configured as specified in Section 3.3, Configure Nodes and Entitlements
- A domain, preferably wildcard, that resolves to your OpenShift cluster.
- Access to the Red Hat container catalog
- (Optional) A working SMTP server for email functionality
Follow these procedures to install AMP onto OpenShift using a .yml template:
3.4.2. Import the AMP Template
Once you have met the prerequisites, perform the following steps to import the AMP template into your OpenShift cluster:
- Download amp.yml from the 3scale GitHub page
- From a terminal session log in to OpenShift:
oc login
- Select your project, or create a new project:
oc project <project_name>
oc new-project <project_name>
- Enter the oc new-app command:
- Specify the --file option with the path to the amp.yml file
- Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:
oc new-app --file /path/to/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>
Note: If you encounter a timeout error after entering oc new-app, you may need to manually create persistent storage volumes. See Section 3.7.7, “Deployment Script is Unable to Create Persistent Storage Volumes” in the troubleshooting guide for information on how to do this.
The terminal will output the URL and credentials for your newly created AMP admin portal. Save these details for future reference.
Note: You may need to wait a few minutes for AMP to fully deploy on OpenShift before your login and credentials work.
3.4.3. Configure Wildcard Domains (Optional, Tech Preview)
Wildcard domains allow you to direct subdomain traffic through your wildcard domain.
The Wildcard Domain feature is a tech preview.
Use of the wildcard domain feature requires the following:
- OpenShift version 3.4
- a wildcard domain that is not being used for any other routes, or as another project’s namespace domain
- your router must be configured to allow wildcard routes
Perform the following steps to configure a wildcard domain:
- From a terminal session, log in to OpenShift:
oc login
- Create a wildcard router configured with your wildcard domain
- Download the wildcard.yml template from the 3scale GitHub page
- Switch to the project which contains your AMP deployment:
oc project <project_name>
- Enter the oc new-app command, specifying the following:
- the --file option and the path to the wildcard.yml template
- the --param option and the WILDCARD_DOMAIN of your OpenShift cluster
- the --param option and the TENANT_NAME from the project which contains your AMP deployment
oc new-app -f wildcard.yml --param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> --param TENANT_NAME=3scale
Once configured, your AMP deployment will connect to the built-in APIcast gateways automatically and direct all subdomain traffic through your wildcard domain.
Considerations
Consider the following limitations when using the wildcard domain feature:
- API endpoints must end with the -staging or -production suffix (e.g. api1-production.example.com, api1-staging.example.com). If your API endpoints do not have one of these suffixes, calls will return a 500 error code.
- You must deploy the router in same project as the AMP
- This template will work only with an amp.yml deployment
More Information
For information about wildcard domains on OpenShift, visit Using Wildcard Routes (for a Subdomain).
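The -production/-staging suffix rule from the considerations above can be checked mechanically. This is an illustrative sketch only (the hostnames are made-up examples), and it assumes the suffix must appear in the first DNS label of the endpoint:

```shell
# Returns 0 when the first DNS label of an endpoint hostname ends in
# -production or -staging, which the wildcard feature requires.
has_required_suffix() {
  label="${1%%.*}"   # strip everything after the first dot
  case "$label" in
    *-production|*-staging) return 0 ;;
    *) return 1 ;;
  esac
}

has_required_suffix "api1-production.example.com" && echo "api1-production: ok"
has_required_suffix "api1.example.com" || echo "api1: missing suffix, calls would return 500"
```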
3.4.4. Configure SMTP Variables (Optional)
OpenShift uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the SMTP config map.
Follow these steps to configure the SMTP variables in the SMTP config map:
If you are not already logged in, log in to OpenShift:
oc login
Configure variables for the SMTP config map. Use the oc patch command, specify the configmap and smtp objects, followed by the -p option, and write the new values in JSON for the following variables:
Variable | Description |
address | Allows you to specify a remote mail server as a relay |
username | Specify your mail server username |
password | Specify your mail server password |
domain | Specify a HELO domain |
port | Specify the port on which the mail server is listening for new connections |
authentication | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and uses a cryptographic Message Digest 5 algorithm to hash important information) |
openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none, peer, client_once, or fail_if_no_peer_cert. |
Example:
oc patch configmap smtp -p '{"data":{"address":"<your_address>"}}'
oc patch configmap smtp -p '{"data":{"username":"<your_username>"}}'
oc patch configmap smtp -p '{"data":{"password":"<your_password>"}}'
Once you have set the configmap variables, redeploy the system-app, system-resque, and system-sidekiq pods:
oc deploy system-app --latest
oc deploy system-resque --latest
oc deploy system-sidekiq --latest
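The separate oc patch calls can also be collapsed into one call by merging all keys into a single JSON document. A sketch with placeholder values (substitute your own mail server details); the payload is validated locally before it would be handed to oc patch:

```shell
# Placeholder SMTP settings -- replace every value with your mail server's.
PATCH='{"data":{"address":"smtp.example.com","username":"smtp-user","password":"smtp-pass","domain":"example.com","port":"587","authentication":"login","openssl.verify.mode":"peer"}}'

# Sanity-check the payload: a malformed -p document is rejected by oc patch.
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"

# Against your cluster you would then run:
#   oc patch configmap smtp -p "$PATCH"
```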
3.5. 3scale AMP Template Parameters
Template parameters configure environment variables of the AMP yml template during and after deployment.
Name | Description | Default Value | Required? |
AMP_RELEASE | AMP release tag. | 2.0.0-CR2-redhat-2 | yes |
ADMIN_PASSWORD | A randomly generated AMP administrator account password. | N/A | yes |
ADMIN_USERNAME | AMP administrator account username. | admin | yes |
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes |
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no |
WILDCARD_DOMAIN | Root domain for the wildcard routes. | N/A | yes |
TENANT_NAME | Tenant name under the root domain; the Admin UI will be available at the tenant name with the -admin suffix. | 3scale | yes |
MYSQL_USER | Username for MySQL user that will be used for accessing the database. | mysql | yes |
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes |
MYSQL_DATABASE | Name of the MySQL database accessed. | system | yes |
MYSQL_ROOT_PASSWORD | Password for Root user. | N/A | yes |
SYSTEM_BACKEND_USERNAME | Internal 3scale API username for internal 3scale api auth. | 3scale_api_user | yes |
SYSTEM_BACKEND_PASSWORD | Internal 3scale API password for internal 3scale api auth. | N/A | yes |
REDIS_IMAGE | Redis image to use | rhscl/redis-32-rhel7:3.2 | yes |
MYSQL_IMAGE | MySQL image to use | rhscl/mysql-56-rhel7:5.6-13.5 | yes |
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes |
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base | N/A | yes |
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status, or debug. At least status is required for health checks. | status | no |
APICAST_OPENSSL_VERIFY | Turn on/off the OpenSSL peer verification when downloading the configuration. Can be set to true/false. | false | no |
APICAST_RESPONSE_CODES | Enable logging response codes in APIcast. | true | no |
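Multiple parameters from the table above can be overridden in a single oc new-app invocation by repeating --param. A sketch assembling such a command (the parameter values are examples, not recommendations); the command is printed so it can be reviewed before use:

```shell
# Example overrides; each name appears in the parameter table above.
params=(
  "WILDCARD_DOMAIN=example.com"
  "TENANT_NAME=mytenant"
  "ADMIN_USERNAME=admin"
)

# Expand each entry into a `--param NAME=VALUE` pair.
args=()
for p in "${params[@]}"; do
  args+=(--param "$p")
done

echo "oc new-app --file /path/to/amp.yml ${args[*]}"
```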
3.6. Use APIcast with AMP on OpenShift
APIcast with AMP on OpenShift differs from APIcast with AMP hosted and requires unique configuration procedures.
The topics in this section explain how to deploy APIcast with AMP on OpenShift.
3.6.1. Deploy APIcast Templates on an Existing OpenShift Cluster Containing Your AMP
AMP OpenShift templates contain two built-in APIcast API gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates onto your OpenShift cluster.
Follow the steps below to deploy additional API gateways onto your OpenShift cluster:
- Create an access token (/docs/accounts/tokens) with the following configurations:
- scoped to Account Management API
- has read-only access
Log in to your APIcast Cluster:
oc login
- Create a secret, which allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin UI will be available with. The TENANT_NAME default value is "3scale". If you used a custom value in your AMP deployment, then you must input that value here.
- Import the APIcast template by downloading apicast.yml, located on the 3scale GitHub, and running the oc new-app command, specifying the --file option with the apicast.yml file:
oc new-app --file /path/to/file/apicast.yml
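The --password value in the secret above is simply a URL built from the access token, tenant name, and wildcard domain. A sketch with placeholder values showing how the pieces fit together:

```shell
# Placeholder values -- substitute your real token, tenant, and domain.
APICAST_ACCESS_TOKEN="abc123"
TENANT_NAME="3scale"
WILDCARD_DOMAIN="example.com"

# The admin-portal URL APIcast uses to fetch its configuration.
CONFIG_URL="https://${APICAST_ACCESS_TOKEN}@${TENANT_NAME}-admin.${WILDCARD_DOMAIN}"
echo "$CONFIG_URL"   # https://abc123@3scale-admin.example.com

# Against your cluster you would then run:
#   oc secret new-basicauth apicast-configuration-url-secret --password="$CONFIG_URL"
```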
3.6.2. Connect APIcast from an OpenShift Cluster Outside of an OpenShift Cluster Containing Your AMP
If you deploy APIcast onto a different OpenShift cluster, outside of your AMP cluster, you must connect over the public route.
- Create an access token (/docs/accounts/tokens) with the following configurations:
- scoped to Account Management API
- has read-only access
Log in to your APIcast Cluster:
oc login
- Create a secret, which allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin UI will be available with. The TENANT_NAME default value is "3scale". If you used a custom value in your AMP deployment, then you must input that value here.
- Deploy APIcast onto the outside OpenShift cluster with the oc new-app command. Specify the --file option and the file path of your apicast.yml file:
oc new-app --file /path/to/file/apicast.yml
- Update the APIcast BACKEND_ENDPOINT_OVERRIDE environment variable, setting it to the URL backend. followed by the wildcard domain of the OpenShift cluster containing your AMP deployment:
oc env dc/apicast --overwrite BACKEND_ENDPOINT_OVERRIDE=https://backend.<WILDCARD_DOMAIN>
3.6.3. Connect APIcast from Other Deployments
Once you have deployed APIcast on other platforms, such as the Docker containerized environment (/docs/deployment-options/apicast-docker) or a native installation (/docs/deployment-options/apicast-v2-self-managed), you can connect them to AMP on OpenShift by configuring the BACKEND_ENDPOINT_OVERRIDE environment variable in your AMP OpenShift cluster:
Log in to your AMP OpenShift Cluster:
oc login
- Configure the BACKEND_ENDPOINT_OVERRIDE environment variable:
If you are using a native installation:
BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain> bin/apicast
If you are using the Docker containerized environment:
docker run -e BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain>
3.6.4. Change Built-In APIcast Default Behavior
In external APIcast deployments, you can modify default behavior by changing template parameters in the APIcast OpenShift template.
In built-in APIcast deployments, AMP and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change default behavior for built-in APIcast deployments.
3.6.5. Connect Multiple APIcast Deployments on a Single OpenShift Cluster over Internal Service Routes
If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.
You must have an OpenShift SDN plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed.
ovs-subnet
If you are using the ovs-subnet OpenShift SDN plugin, follow these steps to connect over internal routes:
Log in to your OpenShift Cluster, if you have not already done so:
oc login
- Enter the oc new-app command with the path to the apicast.yml file:
- Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:
oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000
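The BACKEND_ENDPOINT_OVERRIDE value above follows OpenShift's standard internal service DNS pattern, <service>.<project>.svc.cluster.local. A sketch assembling it with a placeholder project name:

```shell
AMP_PROJECT="amp"   # placeholder; use the project containing your AMP deployment

# backend-listener is the 3scale backend service; 3000 is its listener port.
ENDPOINT="http://backend-listener.${AMP_PROJECT}.svc.cluster.local:3000"
echo "$ENDPOINT"    # http://backend-listener.amp.svc.cluster.local:3000

# Against your cluster you would then run:
#   oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE="$ENDPOINT"
```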
ovs-multitenant
If you are using the ovs-multitenant OpenShift SDN plugin, follow these steps to connect over internal routes:
Log in to your OpenShift Cluster, if you have not already done so:
oc login
- As admin, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:
oadm pod-network join-projects --to=<AMP_PROJECT> <APICAST_PROJECT>
- Enter the oc new-app command with the path to the apicast.yml file:
- Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:
oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000
More information
For information on OpenShift SDN and project network isolation, visit: OpenShift SDN
3.7. Troubleshooting
This section contains a list of common installation issues, and provides guidance for resolution.
- Section 3.7.1, “Previous Deployment Leaves Dirty Persistent Volume Claims”
- Section 3.7.2, “Incorrectly Pulling from the Docker Registry”
- Section 3.7.3, “Permissions Issues for MySQL when Persistent Volumes are Mounted Locally”
- Section 3.7.4, “Unable to Upload Logo or Images Because Persistent Volumes are not Writable by OpenShift”
- Section 3.7.5, “Create Secure Routes on OpenShift”
- Section 3.7.6, “APIcast on a Different Project from AMP Fails to Deploy Due to Problem with Secrets”
- Section 3.7.7, “Deployment Script is Unable to Create Persistent Storage Volumes”
3.7.1. Previous Deployment Leaves Dirty Persistent Volume Claims
Problem
A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC), causing the MySQL container to fail to start.
Cause
Deleting a project in OpenShift does not clean the PVCs associated with it.
Solution
- Find the PVC containing the erroneous MySQL data with oc get pvc:
# oc get pvc
NAME                    STATUS  VOLUME  CAPACITY  ACCESSMODES  AGE
backend-redis-storage   Bound   vol003  100Gi     RWO,RWX      4d
mysql-storage           Bound   vol006  100Gi     RWO,RWX      4d
system-redis-storage    Bound   vol008  100Gi     RWO,RWX      4d
system-storage          Bound   vol004  100Gi     RWO,RWX      4d
- Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
- Delete everything under the MySQL path to clean the volume.
- Start a new system-mysql deployment.
3.7.2. Incorrectly Pulling from the Docker Registry
Problem
The following error occurs during installation:
svc/system-redis - 1EX.AMP.LE.IP:6379 dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3 deployment #1 failed 13 minutes ago: config change
Cause
OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry, instead of the registry.access.redhat.com Red Hat container registry.
This occurs when the system contains an unexpected version of the Docker containerized environment.
Solution
Use the appropriate version of the Docker containerized environment.
3.7.3. Permissions Issues for MySQL when Persistent Volumes are Mounted Locally
Problem
The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod’s log displays the following error:
[ERROR] Can't start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting
Cause
The MySQL process is started with inappropriate user permissions.
Solution
The directories used for the persistent volumes MUST have write permissions for the root group. Having rw permissions for the root user is not enough, as the MySQL service runs as a different user in the root group. Execute the following command as the root user:
chmod -R g+w /path/for/pvs
Execute the following command to prevent SELinux from blocking access:
chcon -Rt svirt_sandbox_file_t /path/for/pvs
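A quick check that the group-write fix actually took effect is to inspect the mode string of the mounted path. The sketch below uses a temporary directory in place of /path/for/pvs so it can run anywhere (GNU stat assumed):

```shell
# A temp dir stands in for /path/for/pvs in this sketch.
dir=$(mktemp -d)
chmod g+w "$dir"

# In a mode string such as drwxrwxr-x, the sixth character is the
# group-write bit; it must be 'w' for the MySQL PV to work.
perms=$(stat -c %A "$dir")
case "$perms" in
  ?????w*) echo "group-writable: ok" ;;
  *)       echo "group-writable: MISSING" ;;
esac
rmdir "$dir"
```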
3.7.4. Unable to Upload Logo or Images Because Persistent Volumes are not Writable by OpenShift
Problem
Unable to upload a logo using OpenShift version 3.4. The system-app logs display the following error:
Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2
Cause
Persistent volumes are not writable by OpenShift.
Solution
Ensure your persistent volume is writable by OpenShift. It should be owned by root group and be group writable.
3.7.5. Create Secure Routes on OpenShift
Problem
Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.
Cause
3scale requires HTTPS routes by default, and OpenShift routes are not secured.
Solution
Ensure the "secure route" checkbox is enabled in your OpenShift router settings.
3.7.6. APIcast on a Different Project from AMP Fails to Deploy Due to Problem with Secrets
Problem
APIcast deploy fails (pod doesn’t turn blue). The following error appears in the logs:
update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready
The following error appears in the pod:
Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"
Cause
The secret was not properly set up.
Solution
When creating a secret with APIcast v3, specify apicast-configuration-url-secret:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
3.7.7. Deployment Script is Unable to Create Persistent Storage Volumes
Problem
A command creating a persistent volume claim in the amp.yml script might fail. If the command fails, you see the following error message:
timeout expired waiting for volumes to attach/mount for pod
Cause
This error is caused by a bug in Kubernetes.
Solution
Perform the following steps to correct the error and deploy AMP:
Remove the project that contains the failed installation:
oc delete project <project_name>
Create a new project or select an existing project:
oc new-project <project_name>
oc project <project_name>
- Download pvc.yml, located on the 3scale GitHub.
From a terminal session within the same folder as the downloaded template, run oc new-app, specifying the --file option and the pvc.yml file:
oc new-app --file pvc.yml
Note: The pvc.yml script may take a few minutes to create the persistent volume claims on your OpenShift cluster.
- Continue the installation process at the Section 3.4.2, “Import the AMP Template” task
During your next deployment, you may see an error related to persistent volume claims; you can safely ignore this message.