Chapter 3. 3scale AMP On-premises Installation Guide


In this guide you’ll learn how to install 3scale 2.0 (on-premises) on OpenShift using OpenShift templates.

3.1. 3scale AMP OpenShift Templates

As of 3scale API Management Platform (AMP) 2.0, Red Hat provides an OpenShift template. You can use this template to deploy AMP onto OpenShift Container Platform 3.3 and 3.4.

The 3scale AMP template is composed of the following:

  • Two built-in APIcast API gateways
  • One AMP admin portal and developer portal with persistent storage

3.2. System Requirements

The 3scale AMP OpenShift template requires the following:

3.2.1. Environment Requirements

Persistent Volumes:

  • 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
  • 1 RWX (ReadWriteMany) persistent volume for CMS and System-app Assets

The RWX persistent volume must be configured to be group writable. Refer to the OpenShift documentation for a list of persistent volume types which support the required access modes.
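For example, on an NFS-backed volume you could make the export group writable before creating the persistent volume. A minimal sketch, assuming a hypothetical export path of /exports/system-storage; run it on the host that serves the export:

    chmod -R g+w /exports/system-storage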

3.2.2. Hardware Requirements

Hardware requirements depend on your usage needs. Red Hat recommends you test and configure your environment to meet your specific requirements. Consider the following recommendations when configuring your environment for 3scale on OpenShift:

  • Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
  • Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory needs exceed your current node’s available RAM.
  • Separate nodes between routing and compute tasks.
  • Dedicate compute nodes to 3scale-specific tasks.
  • Set the PUMA_WORKERS variable of the backend listener to the number of cores in your compute node (see the sketch below).
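
One way to set PUMA_WORKERS on a deployed cluster is through the environment of the backend-listener deployment configuration. A sketch, assuming an 8-core compute node (the value is illustrative):

    oc env dc/backend-listener PUMA_WORKERS=8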

3.3. Configure Nodes and Entitlements

Before you can deploy 3scale on OpenShift, you must configure your nodes and the entitlements required for your environment to fetch images from Red Hat.

Perform the following steps to configure entitlements:

  1. Install Red Hat Enterprise Linux (RHEL) onto each of your nodes.
  2. Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM).
  3. Attach your nodes to your 3scale subscription using RHSM.
  4. Install OpenShift onto your nodes, complying with the following requirements:

    • You must use OpenShift version 3.3 or 3.4.
    • You must configure persistent storage on a file system that supports multiple writes.
  5. Install the OpenShift command line interface.
  6. Enable access to the rhel-7-server-3scale-amp-2.0-rpms repository using the subscription manager:

    sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2.0-rpms
  7. Install the 3scale-amp-template package. The template will be saved in /opt/amp/templates:

    sudo yum install 3scale-amp-template

3.4. Deploy the 3scale AMP on OpenShift Using a Template

3.4.1. Prerequisites

  • An OpenShift cluster with nodes and entitlements configured as described in Section 3.3, “Configure Nodes and Entitlements”

Follow these procedures to install AMP onto OpenShift using a .yml template:

3.4.2. Import the AMP Template

Once you meet the Prerequisites, you can import the AMP template into your OpenShift cluster.

Perform the following steps to import the AMP template into your OpenShift cluster:

  1. Download amp.yml from the 3scale GitHub page.
  2. From a terminal session, log in to OpenShift:

    oc login
  3. Select your project:

    oc project <project_name>

    Or create a new project:

    oc new-project <project_name>
  4. Enter the oc new-app command:

    • Specify the --file option with the path to the amp.yml file
    • Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:

      oc new-app --file /path/to/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>

      Note

      If you encounter a timeout error after entering oc new-app, you may need to manually create persistent storage volumes. See Section 3.7.7, “Deployment Script is Unable to Create Persistent Storage Volumes” for information on how to do this.

  5. The terminal will output the URL and credentials for your newly created AMP admin portal. Save these details for future reference.

    Note

    You may need to wait a few minutes for AMP to fully deploy on OpenShift for your login and credentials to work.
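
If you misplace the output, you can recover the admin portal URL by listing the routes in your project; the exact route names depend on your TENANT_NAME:

    oc get routes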

3.4.3. Configure Wildcard Domains (Optional, Tech Preview)

Wildcard domains allow you to direct subdomain traffic through your wildcard domain.

Note

The Wildcard Domain feature is a tech preview.

Use of the wildcard domain feature requires the following:

  • OpenShift version 3.4
  • A wildcard domain that is not being used for any other routes, or as another project’s namespace domain
  • A router configured to allow wildcard routes

Perform the following steps to configure a wildcard domain:

  1. From a terminal session, log in to OpenShift:

    oc login
  2. Create a wildcard router configured with your wildcard domain
  3. Download the wildcard.yml template from the 3scale GitHub page
  4. Switch to the project which contains your AMP deployment:

    oc project <project_name>
  5. Enter the oc new-app command, specifying the following:

    • the -f option and the path to the wildcard.yml template
    • the --param option and the WILDCARD_DOMAIN of your OpenShift cluster
    • the --param option and the TENANT_NAME from the project which contains your AMP deployment

      oc new-app -f wildcard.yml --param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> --param TENANT_NAME=3scale

Once configured, your AMP deployment will connect to the built-in APIcast gateways automatically and direct all subdomain traffic through your wildcard domain.

Considerations

Consider the following limitations when using the wildcard domain feature:

  • API endpoints must end with the -staging or -production suffix (for example, api1-production.example.com and api1-staging.example.com). If your API endpoints do not have one of these suffixes, calls will return a 500 error code.
  • You must deploy the router in the same project as the AMP.
  • This template works only with an amp.yml deployment.

More Information

For information about wildcard domains on OpenShift, visit Using Wildcard Routes (for a Subdomain).

3.4.4. Configure SMTP Variables (Optional)

3scale uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the SMTP config map.

Follow these steps to configure the SMTP variables in the SMTP config map:

  1. If you are not already logged in, log in to OpenShift:

    oc login
    Copy to Clipboard Toggle word wrap
  2. Configure variables for the SMTP config map. Use the oc patch command with the configmap and smtp objects, followed by the -p option and the new values in JSON for the following variables:

    Variable            | Description
    --------------------|------------------------------------------------------------
    address             | Allows you to specify a remote mail server as a relay
    username            | Specify your mail server username
    password            | Specify your mail server password
    domain              | Specify a HELO domain
    port                | Specify the port on which the mail server is listening for new connections
    authentication      | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and a cryptographic Message Digest 5 algorithm to hash important information)
    openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none, peer, client_once, or fail_if_no_peer_cert

    Example:

    oc patch configmap smtp -p '{"data":{"address":"<your_address>"}}'
    oc patch configmap smtp -p '{"data":{"username":"<your_username>"}}'
    oc patch configmap smtp -p '{"data":{"password":"<your_password>"}}'
  3. Once you have set the configmap variables, redeploy the system-app, system-resque, and system-sidekiq pods:

    oc deploy system-app --latest
    oc deploy system-resque --latest
    oc deploy system-sidekiq --latest
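
You can also set several SMTP values in a single patch. A sketch, assuming a hypothetical relay at smtp.example.com listening on port 587; the redeployment step above is still required for the changes to take effect:

    oc patch configmap smtp -p '{"data":{"address":"smtp.example.com","port":"587","domain":"example.com","authentication":"login"}}'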

3.5. 3scale AMP Template Parameters

Template parameters configure environment variables of the AMP yml template during and after deployment.

Name                         | Description                                                                                                         | Default Value                 | Required?
-----------------------------|---------------------------------------------------------------------------------------------------------------------|-------------------------------|----------
AMP_RELEASE                  | AMP release tag.                                                                                                    | 2.0.0-CR2-redhat-2            | yes
ADMIN_PASSWORD               | A randomly generated AMP administrator account password.                                                            | N/A                           | yes
ADMIN_USERNAME               | AMP administrator account username.                                                                                 | admin                         | yes
APICAST_ACCESS_TOKEN         | Read Only Access Token that APIcast will use to download its configuration.                                         | N/A                           | yes
ADMIN_ACCESS_TOKEN           | Admin Access Token with all scopes and write permissions for API access.                                            | N/A                           | no
WILDCARD_DOMAIN              | Root domain for the wildcard routes. For example, a root domain example.com will generate 3scale-admin.example.com. | N/A                           | yes
TENANT_NAME                  | Tenant name under the root domain; the Admin UI will be available at this name with the -admin suffix.              | 3scale                        | yes
MYSQL_USER                   | Username for the MySQL user that will be used for accessing the database.                                           | mysql                         | yes
MYSQL_PASSWORD               | Password for the MySQL user.                                                                                        | N/A                           | yes
MYSQL_DATABASE               | Name of the MySQL database accessed.                                                                                | system                        | yes
MYSQL_ROOT_PASSWORD          | Password for the MySQL root user.                                                                                   | N/A                           | yes
SYSTEM_BACKEND_USERNAME      | Username for internal 3scale API auth.                                                                              | 3scale_api_user               | yes
SYSTEM_BACKEND_PASSWORD      | Password for internal 3scale API auth.                                                                              | N/A                           | yes
REDIS_IMAGE                  | Redis image to use.                                                                                                 | rhscl/redis-32-rhel7:3.2      | yes
MYSQL_IMAGE                  | MySQL image to use.                                                                                                 | rhscl/mysql-56-rhel7:5.6-13.5 | yes
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system.                                                              | N/A                           | yes
SYSTEM_APP_SECRET_KEY_BASE   | System application secret key base.                                                                                 | N/A                           | yes
APICAST_MANAGEMENT_API       | Scope of the APIcast Management API. Can be disabled, status, or debug. At least status is required for health checks. | status                     | no
APICAST_OPENSSL_VERIFY       | Turn on/off OpenSSL peer verification when downloading the configuration. Can be set to true or false.              | false                         | no
APICAST_RESPONSE_CODES       | Enable logging of response codes in APIcast.                                                                        | true                          | no
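
Parameters are passed with repeated --param options when deploying the template. A sketch that overrides the tenant name and admin username in addition to the required wildcard domain (all values are illustrative):

    oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=example.com --param TENANT_NAME=mycompany --param ADMIN_USERNAME=ampadmin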

3.6. Use APIcast with AMP on OpenShift

APIcast with AMP on OpenShift differs from APIcast with AMP hosted and requires unique configuration procedures.

The topics in this section explain how to deploy APIcast with AMP on OpenShift.

3.6.1. Deploy Additional APIcast Gateways on the Cluster Containing Your AMP Deployment

AMP OpenShift templates contain two built-in APIcast API gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates onto your OpenShift cluster.

Follow the steps below to deploy additional API gateways onto your OpenShift cluster:

  1. Create an access token with the following configurations:

    • scoped to Account Management API
    • has read-only access
  2. Log in to your APIcast Cluster:

    oc login
  3. Create a secret, which allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:

    oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the tenant name under the root domain where the Admin UI is available. Its default value is "3scale". If you used a custom value in your AMP deployment, you must use that value here.

  4. Import the APIcast template by downloading the apicast.yml, located on the 3scale GitHub, and running the oc new-app command, specifying the --file option with the apicast.yml file:

    oc new-app --file /path/to/file/apicast.yml

3.6.2. Deploy APIcast on a Different OpenShift Cluster

If you deploy APIcast onto a different OpenShift cluster, outside your AMP cluster, you must connect over the public route:

  1. Create an access token with the following configurations:

    • scoped to Account Management API
    • has read-only access
  2. Log in to your APIcast Cluster:

    oc login
  3. Create a secret, which allows APIcast to communicate with AMP. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your AMP deployment:

    oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
    Note

    TENANT_NAME is the tenant name under the root domain where the Admin UI is available. Its default value is "3scale". If you used a custom value in your AMP deployment, you must use that value here.

  4. Deploy APIcast onto the OpenShift cluster outside your AMP cluster with the oc new-app command. Specify the --file option and the file path of your apicast.yml file:

    oc new-app --file /path/to/file/apicast.yml
  5. Update the APIcast BACKEND_ENDPOINT_OVERRIDE environment variable, setting it to the URL formed by backend. followed by the wildcard domain of the OpenShift cluster that contains your AMP deployment:

    oc env dc/apicast --overwrite BACKEND_ENDPOINT_OVERRIDE=https://backend.<WILDCARD_DOMAIN>
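
To confirm that the variable was applied, you can list the environment of the APIcast deployment configuration. A sketch:

    oc env dc/apicast --list | grep BACKEND_ENDPOINT_OVERRIDE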

3.6.3. Connect APIcast from Other Deployments

Once you have deployed APIcast on other platforms, such as the Docker containerized environment or a native installation, you can connect them to AMP on OpenShift by pointing the BACKEND_ENDPOINT_OVERRIDE environment variable at the backend route of your AMP OpenShift cluster:

  1. Log in to your AMP OpenShift Cluster:

    oc login
  2. Set the BACKEND_ENDPOINT_OVERRIDE environment variable for your APIcast deployment.

    If you are using a native installation:

    BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain> bin/apicast

    If you are using the Docker containerized environment (where <apicast_image> is a placeholder for the APIcast gateway image you deployed):

    docker run -e BACKEND_ENDPOINT_OVERRIDE=https://backend.<your_openshift_subdomain> <apicast_image>

3.6.4. Change Built-In APIcast Default Behavior

In external APIcast deployments, you can modify default behavior by changing template parameters in the APIcast OpenShift template.

In built-in APIcast deployments, AMP and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change default behavior for built-in APIcast deployments.

If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.

You must have an OpenShift SDN plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed.
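
If you are unsure which SDN plugin your cluster uses, one way to check is to inspect the master configuration file. A sketch, assuming the default master configuration path of an OpenShift 3.x installation:

    grep networkPluginName /etc/origin/master/master-config.yaml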

ovs-subnet

If you are using the ovs-subnet OpenShift SDN plugin, follow these steps to connect over internal routes:

  1. Log in to your OpenShift Cluster, if you have not already done so:

    oc login
  2. Enter the oc new-app command with the path to the apicast.yml file:

    • Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:
oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000

ovs-multitenant

If you are using the ovs-multitenant OpenShift SDN plugin, follow these steps to connect over internal routes:

  1. Log in to your OpenShift Cluster, if you have not already done so:

    oc login
  2. As admin, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:

    oadm pod-network join-projects --to=<AMP_PROJECT> <APICAST_PROJECT>
  3. Enter the oc new-app command with the path to the apicast.yml file:

    • Specify the --param option with the BACKEND_ENDPOINT_OVERRIDE parameter set to the domain of your OpenShift cluster’s AMP project:
oc new-app -f apicast.yml --param BACKEND_ENDPOINT_OVERRIDE=http://backend-listener.<AMP_PROJECT>.svc.cluster.local:3000

More information

For information on OpenShift SDN and project network isolation, visit OpenShift SDN.

3.7. Troubleshooting

This section contains a list of common installation issues, and provides guidance for resolution.

3.7.1. Previous Deployment Leaves Dirty Persistent Volume Claims

Problem

A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC), causing the MySQL container to fail to start.

Cause

Deleting a project in OpenShift does not clean the PVCs associated with it.

Solution

  1. Find the PVC containing the erroneous MySQL data with oc get pvc:

    # oc get pvc
    NAME                    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
    backend-redis-storage   Bound     vol003    100Gi      RWO,RWX       4d
    mysql-storage           Bound     vol006    100Gi      RWO,RWX       4d
    system-redis-storage    Bound     vol008    100Gi      RWO,RWX       4d
    system-storage          Bound     vol004    100Gi      RWO,RWX       4d
  2. Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
  3. Delete everything under the MySQL path to clean the volume (see the sketch after this list).
  4. Start a new system-mysql deployment.
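
A sketch of step 3, assuming the volume bound to mysql-storage is served from the hypothetical host path /exports/mysql-storage; run it on the host that backs the volume:

    rm -rf /exports/mysql-storage/*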

3.7.2. Incorrectly Pulling from the Docker Registry

Problem

The following error occurs during installation:

svc/system-redis - 1EX.AMP.LE.IP:6379
  dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
    deployment #1 failed 13 minutes ago: config change

Cause

OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry, instead of the registry.access.redhat.com Red Hat container registry.

This occurs when the system contains an unexpected version of the Docker containerized environment.

Solution

Use the appropriate version of the Docker containerized environment.

3.7.3. MySQL Pod Fails to Deploy Due to Permission Errors

Problem

The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod’s log displays the following error:

[ERROR] Can't start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting

Cause

The MySQL process is started with inappropriate user permissions.

Solution

  1. The directories used for the persistent volumes MUST be writable by the root group. Having rw permissions for the root user is not enough, as the MySQL service runs as a different user in the root group. Execute the following command as the root user:

    chmod -R g+w /path/for/pvs
  2. Execute the following command to prevent SELinux from blocking access:

    chcon -Rt svirt_sandbox_file_t /path/for/pvs

3.7.4. Unable to Upload Logo or Images

Problem

Unable to upload a logo using OpenShift version 3.4. The system-app logs display the following error:

Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2

Cause

Persistent volumes are not writable by OpenShift.

Solution

Ensure your persistent volume is writable by OpenShift. It should be owned by the root group and be group writable.

3.7.5. Create Secure Routes on OpenShift

Problem

Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.

Cause

3scale requires HTTPS routes by default, and OpenShift routes are not secured.

Solution

Ensure the "secure route" checkbox is enabled in your OpenShift router settings.

3.7.6. APIcast Fails to Deploy Due to a Missing Secret

Problem

APIcast deploy fails (the pod does not turn blue). The following error appears in the logs:

update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready

The following error appears in the pod:

Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"

Cause

The secret was not properly set up.

Solution

When creating a secret with APIcast v3, specify apicast-configuration-url-secret:

oc secret new-basicauth apicast-configuration-url-secret  --password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>

3.7.7. Deployment Script is Unable to Create Persistent Storage Volumes

Problem

A command creating a persistent volume claim in the amp.yml script might fail. If the command fails, you see the following error message:

timeout expired waiting for volumes to attach/mount for pod

Cause

This error is caused by a bug in Kubernetes.

Solution

Perform the following steps to correct the error and deploy AMP:

  1. Remove the project that contains the failed installation:

    oc delete project <project_name>
  2. Create a new project or select an existing project:

    oc new-project <project_name>
    oc project <project_name>
  3. Download pvc.yml, located on the 3scale GitHub.
  4. From a terminal session within the same folder as the downloaded template, run oc new-app, specifying the --file option and the pvc.yml file:

    oc new-app --file pvc.yml
    Note

    The pvc.yml script may take a few minutes to create the persistent volume claims on your OpenShift cluster.

  5. Continue the installation process at Section 3.4.2, “Import the AMP Template”.
Note

During your next deployment, you may see an error related to persistent volume claims; you can safely ignore this message.
