Chapter 1. Installing 3scale on OpenShift
This section walks you through steps to deploy Red Hat 3scale API Management 2.7 on OpenShift.
The Red Hat 3scale API Management solution for on-premises deployment is composed of:
- Two API gateways: embedded APIcast
- One 3scale Admin Portal and Developer Portal with persistent storage
There are two ways to deploy a 3scale solution: using the 3scale operator or using templates.
Whether deploying 3scale using the operator or via templates, you must first configure registry authentication to the Red Hat container registry. See Section 1.3.1, “Configuring registry authentication in OpenShift”.
Prerequisites
- You must configure 3scale servers for UTC (Coordinated Universal Time).
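For example, on RHEL nodes the system clock can typically be set to UTC with timedatectl; this is an illustrative sketch rather than the only supported method:
# Set the time zone to UTC and confirm the change (run on each 3scale node)
sudo timedatectl set-timezone UTC
timedatectl status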
To install 3scale on OpenShift, perform the steps outlined in the following sections:
- Section 1.1, “System requirements for installing 3scale on OpenShift”
- Section 1.2, “Configuring nodes and entitlements”
- Section 1.3, “Deploying 3scale on OpenShift using a template”
- Section 1.4, “Parameters of the 3scale template”
- Section 1.5, “Using APIcast with 3scale on OpenShift”
- Section 1.6, “Deploying 3scale using the operator”
- Section 1.7, “Troubleshooting common 3scale installation issues”
1.1. System requirements for installing 3scale on OpenShift
This section lists the requirements for the 3scale - OpenShift template.
- Environment requirements
- Red Hat 3scale API Management requires an environment specified in supported configurations.
Persistent volumes
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
- 1 RWX (ReadWriteMany) persistent volume for CMS and System-app Assets
Configure the RWX persistent volume to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.
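As an illustrative check (not part of the official procedure), you can confirm which access modes your existing persistent volumes offer with oc:
# List persistent volumes with their access modes and capacity
oc get pv -o custom-columns=NAME:.metadata.name,ACCESSMODES:.spec.accessModes,CAPACITY:.spec.capacity.storage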
- Hardware requirements
Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. The following are the recommendations when configuring your environment for 3scale on OpenShift:
- Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
- Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
- Separate nodes between routing and compute tasks.
- Dedicated computing nodes for 3scale specific tasks.
- Set the PUMA_WORKERS variable of the back-end listener to the number of cores in your compute node.
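For example, once 3scale is deployed you could set this variable on the backend-listener deployment configuration with oc set env; this sketch assumes an 8-core compute node and the default backend-listener deployment name:
# Set PUMA_WORKERS to match the number of cores on the compute node (8 in this example)
oc set env dc/backend-listener PUMA_WORKERS=8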
1.2. Configuring nodes and entitlements
Before deploying 3scale on OpenShift, you must configure the necessary nodes and the entitlements for the environment to fetch images from the Red Hat Container Registry. Perform the following steps to configure the nodes and entitlements:
Procedure
- Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
- Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM), via the interface or the command line.
- Attach your nodes to your 3scale subscription using RHSM.
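For example, registration and subscription attachment can be performed from the command line with subscription-manager; the pool ID below is a placeholder you must replace with the ID of your 3scale subscription:
# Register the node with Red Hat
sudo subscription-manager register
# Find the pool ID of your 3scale subscription
sudo subscription-manager list --available --matches '*3scale*'
# Attach the node to that subscription
sudo subscription-manager attach --pool=<POOL_ID>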
Install OpenShift on your nodes, complying with the following requirements:
- Use a supported OpenShift version.
- Configure persistent storage on a file system that supports multiple writes.
- Install the OpenShift command line interface.
- Enable access to the rhel-7-server-3scale-amp-2-rpms repository using the subscription manager:
sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2-rpms
- Install the 3scale template called 3scale-amp-template. It will be saved at /opt/amp/templates.
sudo yum install 3scale-amp-template
1.3. Deploying 3scale on OpenShift using a template
OpenShift Container Platform (OCP) 4.x supports deployment of 3scale using the operator only. See Deploying 3scale using the operator.
This section describes how to deploy 3scale on OpenShift using a template.
Prerequisites
- An OpenShift cluster configured as specified in the Configuring nodes and entitlements section.
- A domain that resolves to your OpenShift cluster.
- Note: OpenShift Container Platform (OCP) 3.11 supports deployment of 3scale using templates only.
- Access to the Red Hat container catalog.
- (Optional) A working SMTP server for email functionality.
Follow these procedures to install 3scale on OpenShift using a .yml template:
1.3.1. Configuring registry authentication in OpenShift
You must configure registry authentication to the Red Hat container registry before you can use the Red Hat 3scale API Management OpenShift image streams. Follow the instructions below to configure authentication to the container registry.
Procedure
Log in to the OpenShift server as an administrator, as follows:
oc login -u system:admin
Log in to the OpenShift project where you will be installing the image streams. Red Hat recommends that you use the openshift project for the 3scale OpenShift image streams.
oc project your-openshift-project
Create a docker-registry secret using the credentials you created in Creating registry service accounts. Note:
- Replace your-registry-service-account-username with the username created in the format 12345678|username (the username has a prefix that is a fixed, random string).
- Replace your-registry-service-account-password with the password string below the username, under the Token Information tab.
- Create a docker-registry secret for every new namespace where the image streams reside and which use registry.redhat.io.
oc create secret docker-registry threescale-registry-auth \
  --docker-server=registry.redhat.io \
  --docker-username="your-registry-service-account-username" \
  --docker-password="your-registry-service-account-password"
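Optionally, you can confirm that the secret exists before continuing; this is an illustrative check, not a required step:
# Verify the registry authentication secret was created in the current project
oc get secret threescale-registry-auth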
1.3.2. Creating registry service accounts
To use container images from registry.redhat.io in a shared environment with 3scale 2.7 deployed on OpenShift, you must use a Registry Service Account instead of an individual user’s Customer Portal credentials.
It is a requirement for 3scale 2.7 that you follow the steps outlined below before deploying either on OpenShift using a template or via the operator, as both options use registry authentication.
Procedure
- Navigate to the Registry Service Accounts page and log in.
- Click New Service Account.
- Fill in the form on the Create a New Registry Service Account page.
- Add a name for the service account.
Note: You will see a fixed-length, randomly generated number string before the form field.
- Enter a Description.
- Click Create.
- Navigate back to your Service Accounts.
- Click the Service Account you created.
- Make a note of the username, including the prefix string, for example 12345678|username, and your password.
- This username and password will be used to log in to registry.redhat.io.
There are tabs available on the Token Information page that show you how to use the authentication token. For example, the Token Information tab shows the username in the format 12345678|username and the password string below it.
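As an illustration of how these credentials are used outside OpenShift, you can verify them by logging in to the registry directly with docker (or podman); the username shown follows the 12345678|username format described above:
# Verify the registry service account credentials against registry.redhat.io
docker login registry.redhat.io -u '12345678|username' -p '<password>'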
1.3.3. Modifying registry service accounts
Service accounts can be modified or deleted. This can be done from the Registry Service Account page, using the pop-up menu to the right of each authentication token in the table.
The regeneration or removal of service accounts will impact systems that are using the token to authenticate and retrieve content from registry.redhat.io.
A description for each function is as follows:
- Regenerate token: Allows an authorized user to reset the password associated with the Service Account.
Note: The username for the Service Account cannot be changed.
- Update Description: Allows an authorized user to update the description for the Service Account.
- Delete Account: Allows an authorized user to remove the Service Account.
1.3.4. Importing the 3scale template
Wildcard routes have been removed as of 3scale 2.6.
- This functionality is handled by Zync in the background.
- When API providers are created, updated, or deleted, routes automatically reflect those changes.
Perform the following steps to import the 3scale template into your OpenShift cluster:
Procedure
From a terminal session log in to OpenShift as the cluster administrator:
oc login
Select your project, or create a new project:
oc project <project_name>
oc new-project <project_name>
Enter the oc new-app command:
- Specify the --file option with the path to the amp.yml file you downloaded as part of Configuring nodes and entitlements.
- Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:
oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>
The terminal shows the master and tenant URLs and credentials for your newly created 3scale Admin Portal. This output should include the following information:
- master admin username
- master password
- master token information
- tenant username
- tenant password
- tenant token information
Example output:
Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.
* With parameters:
 * ADMIN_PASSWORD=xXxXyz123 # generated
 * ADMIN_USERNAME=admin
 * TENANT_NAME=user
 * MASTER_NAME=master
 * MASTER_USER=master
 * MASTER_PASSWORD=xXxXyz123 # generated
--> Success
Access your application via route 'user-admin.3scale-project.example.com'
Access your application via route 'master-admin.3scale-project.example.com'
Access your application via route 'backend-user.3scale-project.example.com'
Access your application via route 'user.3scale-project.example.com'
Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
Access your application via route 'api-user-apicast-production.3scale-project.example.com'
Make a note of these details for future reference.
Note: Wait for 3scale to fully deploy on OpenShift for your login and credentials to work.
1.3.5. Getting the Admin Portal URL
When you deploy 3scale using the template, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}
The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is https://3scale-admin.3scale-project.example.com.
The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using this command:
xdg-open https://3scale-admin.3scale-project.example.com
Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
1.3.6. Configuring SMTP Variables (Optional)
OpenShift uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the SMTP ConfigMap.
Perform the following steps to configure the SMTP variables in the SMTP config map:
Procedure
If you are not already logged in, log in to OpenShift:
oc login
Configure variables for the SMTP config map. Use the oc patch command, specifying the configmap and smtp objects, followed by the -p option, and write the new values in JSON for the following variables:

Variable | Description |
---|---|
address | Allows you to specify a remote mail server as a relay |
username | Specify your mail server username |
password | Specify your mail server password |
domain | Specify a HELO domain |
port | Specify the port on which the mail server is listening for new connections |
authentication | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and a cryptographic Message Digest 5 algorithm to hash important information) |
openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none, peer, client_once, or fail_if_no_peer_cert. |

Example:
oc patch configmap smtp -p '{"data":{"address":"<your_address>"}}'
oc patch configmap smtp -p '{"data":{"username":"<your_username>"}}'
oc patch configmap smtp -p '{"data":{"password":"<your_password>"}}'
After you have set the config map variables, redeploy the system-app and system-sidekiq pods:
oc rollout latest dc/system-app
oc rollout latest dc/system-sidekiq
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-app
oc rollout status dc/system-sidekiq
1.4. Parameters of the 3scale template
Template parameters configure environment variables of the 3scale (amp.yml) template during and after deployment.
Name | Description | Default Value | Required? |
---|---|---|---|
APP_LABEL | Used for object app labels. | | yes |
ZYNC_DATABASE_PASSWORD | Password for the PostgreSQL connection user. Generated randomly if not provided. | N/A | yes |
ZYNC_SECRET_KEY_BASE | Secret key base for Zync. Generated randomly if not provided. | N/A | yes |
ZYNC_AUTHENTICATION_TOKEN | Authentication token for Zync. Generated randomly if not provided. | N/A | yes |
AMP_RELEASE | 3scale release tag. | | yes |
ADMIN_PASSWORD | A randomly generated 3scale administrator account password. | N/A | yes |
ADMIN_USERNAME | 3scale administrator account username. | | yes |
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes |
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no |
WILDCARD_DOMAIN | Root domain for the wildcard routes. For example, a root domain of example.com generates routes such as 3scale-admin.example.com. | N/A | yes |
TENANT_NAME | Tenant name under the root that the Admin Portal will be available with the -admin suffix. | 3scale | yes |
MYSQL_USER | Username for the MySQL user that will be used for accessing the database. | | yes |
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes |
MYSQL_DATABASE | Name of the MySQL database accessed. | | yes |
MYSQL_ROOT_PASSWORD | Password for the root user. | N/A | yes |
SYSTEM_BACKEND_USERNAME | Internal 3scale API username for internal 3scale API auth. | | yes |
SYSTEM_BACKEND_PASSWORD | Internal 3scale API password for internal 3scale API auth. | N/A | yes |
REDIS_IMAGE | Redis image to use. | | yes |
MYSQL_IMAGE | MySQL image to use. | | yes |
MEMCACHED_IMAGE | Memcached image to use. | | yes |
POSTGRESQL_IMAGE | PostgreSQL image to use. | | yes |
AMP_SYSTEM_IMAGE | 3scale System image to use. | | yes |
AMP_BACKEND_IMAGE | 3scale Backend image to use. | | yes |
AMP_APICAST_IMAGE | 3scale APIcast image to use. | | yes |
AMP_ZYNC_IMAGE | 3scale Zync image to use. | | yes |
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes |
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base. | N/A | yes |
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status or debug. At least status is required for health checks. | | no |
APICAST_OPENSSL_VERIFY | Turn on/off the OpenSSL peer verification when downloading the configuration. Can be set to true/false. | | no |
APICAST_RESPONSE_CODES | Enable logging response codes in APIcast. | true | no |
APICAST_REGISTRY_URL | A URL which resolves to the location of APIcast policies. | | yes |
MASTER_USER | Master administrator account username. | | yes |
MASTER_NAME | The subdomain value for the master Admin Portal, will be appended with the | | yes |
MASTER_PASSWORD | A randomly generated master administrator password. | N/A | yes |
MASTER_ACCESS_TOKEN | A token with master level permissions for API calls. | N/A | yes |
IMAGESTREAM_TAG_IMPORT_INSECURE | Set to true if the server may bypass certificate verification or connect directly over HTTP during image import. | | yes |
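For example, any of these parameters can be overridden on the oc new-app command line when importing the template; this sketch reuses the template path from Configuring nodes and entitlements and placeholder values consistent with the earlier example output:
# Import the template, overriding selected parameters
oc new-app --file /opt/amp/templates/amp.yml \
  --param WILDCARD_DOMAIN=3scale-project.example.com \
  --param TENANT_NAME=user \
  --param ADMIN_USERNAME=admin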
1.5. Using APIcast with 3scale on OpenShift
APIcast is available with API Manager for 3scale hosted, and in on-premises installations in OpenShift Container Platform. The configuration procedures differ for each.
This section explains how to deploy APIcast with API Manager on OpenShift.
- Deploying APIcast templates on an existing OpenShift cluster containing 3scale
- Connecting APIcast from a different OpenShift cluster
- Changing the default behavior for embedded APIcast
- Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes
- Connecting APIcast on other deployments
1.5.1. Deploying APIcast templates on an existing OpenShift cluster containing 3scale
3scale OpenShift templates contain two embedded APIcast gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates on your OpenShift cluster.
Prerequisites
- First install the APIcast template as described in Configuring nodes and entitlements.
Perform the following steps to deploy additional API gateways on your OpenShift cluster:
Procedure
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value here.
Import the APIcast template using the oc new-app command, specifying the --file option with the apicast.yml file:
oc new-app --file /opt/amp/templates/apicast.yml
1.5.2. Connecting APIcast from a different OpenShift cluster
If you deploy APIcast on a different OpenShift cluster, outside your 3scale cluster, you must connect through the public route:
Procedure
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify new-basicauth, apicast-configuration-url-secret, and the --password parameter with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<APICAST_ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value.
Deploy APIcast on a different OpenShift cluster using the oc new-app command. Specify the --file option and the path to your apicast.yml file:
oc new-app --file /path/to/file/apicast.yml
1.5.3. Changing the default behavior for embedded APIcast
In external APIcast deployments, you can modify default behavior by changing the template parameters in the APIcast OpenShift template.
In embedded APIcast deployments, 3scale and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change the default behavior for the embedded APIcast deployments.
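For example, an environment variable on the embedded staging gateway can be changed after deployment with oc set env; this is a sketch using APICAST_RESPONSE_CODES (listed in the template parameters above) and the default apicast-staging deployment configuration name:
# Enable response code logging on the embedded staging APIcast (this may trigger a new rollout)
oc set env dc/apicast-staging APICAST_RESPONSE_CODES=true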
1.5.4. Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes
If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.
You must have an OpenShift Software-Defined Networking (SDN) plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed:
ovs-subnet
If you are using the ovs-subnet OpenShift SDN plugin, perform the following steps to connect over internal routes:
Procedure
If not already logged in, log in to your OpenShift cluster:
oc login
Enter the following command to display the backend-listener route URL:
oc get route backend
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
ovs-multitenant
If you are using the ovs-multitenant OpenShift SDN plugin, perform the following steps to connect over internal routes:
Procedure
If not already logged in, log in to your OpenShift cluster:
oc login
As administrator, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:
oadm pod-network join-projects --to=<3SCALE_PROJECT> <APICAST_PROJECT>
Enter the following command to display the backend-listener route URL:
oc get route backend
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
Additional resources
For information on OpenShift SDN and project network isolation, see OpenShift SDN.
1.5.5. Connecting APIcast on other deployments
If you deploy APIcast on Docker, you can connect APIcast to 3scale deployed on OpenShift by setting the THREESCALE_PORTAL_ENDPOINT parameter to the URL and access token of your 3scale Admin Portal deployed on OpenShift. You do not need to set the BACKEND_ENDPOINT_OVERRIDE parameter in this case.
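A minimal sketch of such a Docker invocation follows; the image reference is a placeholder for the APIcast image that matches your 3scale release, and the endpoint uses the same access token, tenant name, and wildcard domain placeholders as the OpenShift procedures above:
# Run APIcast on Docker, pointing it at the 3scale Admin Portal on OpenShift
docker run --name apicast --rm -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN> \
  <APICAST_IMAGE>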
Additional resources
For more details, see Deploying APIcast on the Docker containerized environment.
1.6. Deploying 3scale using the operator
This section takes you through installing and deploying the 3scale solution via the 3scale operator, using the APIManager custom resource.
Wildcard routes have been removed since 3scale 2.6.
- This functionality is handled by Zync in the background.
- When API providers are created, updated, or deleted, routes automatically reflect those changes.
Prerequisites
- Configuring registry authentication in OpenShift
- Deploying 3scale using the operator first requires that you follow the steps in Installing the 3scale Operator on OpenShift
OpenShift Container Platform 4.x
- A user account with administrator privileges in the OpenShift cluster.
- Note: OCP 4 supports deployment of 3scale using the operator only.
- For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page.
Follow these procedures to deploy 3scale using the operator:
1.6.1. Deploying the APIManager custom resource
Deploying the APIManager custom resource causes the operator to begin processing it and to deploy a 3scale solution from it.
Procedure
The menu structure depends on the OpenShift version you are using:
- For OCP 4.1, click Catalog > Installed Operators.
- For OCP 4.2, click Operators > Installed Operators.
- From the list of Installed Operators, click 3scale Operator.
- Click the API Manager tab.
- Click Create APIManager.
Clear the sample content and add the following YAML definitions to the editor, then click Create.
Note: The wildcardDomain parameter can be any desired name that is a valid DNS domain and resolves to an IP address. Be sure to remove the placeholder marks for your parameters: < >.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: <wildcardDomain>
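Alternatively, the same resource can be applied from the command line; this sketch assumes the YAML above is saved as apimanager.yaml and that you are working in the project where the 3scale operator is installed:
# Switch to the project containing the 3scale operator, then create the APIManager resource
oc project <project-name>
oc create -f apimanager.yaml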
1.6.2. Getting the APIManager Admin Portal and Master Admin Portal credentials
To log in to either the 3scale Admin Portal or Master Admin Portal after the operator-based deployment, you need the credentials for each separate portal. To get these credentials:
Run the following commands to get the Admin Portal credentials:
oc get secret system-seed -o json | jq -r .data.ADMIN_USER | base64 -d
oc get secret system-seed -o json | jq -r .data.ADMIN_PASSWORD | base64 -d
- Log in as the Admin Portal administrator to verify these credentials are working.
Run the following commands to get the Master Admin Portal credentials:
oc get secret system-seed -o json | jq -r .data.MASTER_USER | base64 -d
oc get secret system-seed -o json | jq -r .data.MASTER_PASSWORD | base64 -d
- Log in as the Master Admin Portal administrator to verify these credentials are working.
Additional resources
For more information about the APIManager fields, refer to the Reference documentation.
1.6.3. Getting the Admin Portal URL
When you deploy 3scale using the operator, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}
The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is https://3scale-admin.3scale-project.example.com.
The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using this command:
xdg-open https://3scale-admin.3scale-project.example.com
Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
1.7. Troubleshooting common 3scale installation issues
This section contains a list of common installation issues and provides guidance for their resolution.
- Previous deployment leaving dirty persistent volume claims
- Wrong or missing credentials of the authenticated image registry
- Incorrectly pulling from the Docker registry
- Permission issues for MySQL when persistent volumes are mounted locally
- Unable to upload logo or images
- Test calls not working on OpenShift
- APIcast on a different project from 3scale failing to deploy
1.7.1. Previous deployment leaving dirty persistent volume claims
Problem
A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.
Cause
Deleting a project in OpenShift does not clean the PVCs associated with it.
Solution
Procedure
Find the PVC containing the erroneous MySQL data with the oc get pvc command:
# oc get pvc
NAME                    STATUS   VOLUME   CAPACITY   ACCESSMODES   AGE
backend-redis-storage   Bound    vol003   100Gi      RWO,RWX       4d
mysql-storage           Bound    vol006   100Gi      RWO,RWX       4d
system-redis-storage    Bound    vol008   100Gi      RWO,RWX       4d
system-storage          Bound    vol004   100Gi      RWO,RWX       4d
- Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
- Delete everything under the MySQL path to clean the volume.
- Start a new system-mysql deployment.
1.7.2. Wrong or missing credentials of the authenticated image registry
Problem
Pods are not starting. ImageStreams show the following error:
! error: Import failed (InternalError): ...unauthorized: Please login to the Red Hat Registry
Cause
While installing 3scale on OpenShift 4.x, OpenShift fails to start pods because ImageStreams cannot pull the images they reference. This happens because the pods cannot authenticate against the registries they point to.
Solution
Procedure
Type the following command to verify the configuration of your container registry authentication:
$ oc get secret
If your secret exists, you will see the following output in the terminal:
threescale-registry-auth kubernetes.io/dockerconfigjson 1 4m9s
- However, if you do not see the output, you must do the following:
- Use the credentials you previously set up while Creating a registry service account to create your secret.
- Use the steps in Configuring registry authentication in OpenShift, replacing <your-registry-service-account-username> and <your-registry-service-account-password> in the oc create secret command provided.
- Generate the threescale-registry-auth secret in the same namespace as the APIManager resource. You must run the following inside the <project-name>:
oc project <project-name>
oc create secret docker-registry threescale-registry-auth \
  --docker-server=registry.redhat.io \
  --docker-username="<your-registry-service-account-username>" \
  --docker-password="<your-registry-service-account-password>" \
  --docker-email="<email-address>"
Delete and recreate the APIManager resource:
$ oc delete -f apimanager.yaml
apimanager.apps.3scale.net "example-apimanager" deleted
$ oc create -f apimanager.yaml
apimanager.apps.3scale.net/example-apimanager created
Verification
Type the following command to confirm that deployments have a status of Starting or Ready. The pods then begin to spawn:
$ oc describe apimanager
(...)
Status:
  Deployments:
    Ready:
      apicast-staging
      system-memcache
      system-mysql
      system-redis
      zync
      zync-database
      zync-que
    Starting:
      apicast-production
      backend-cron
      backend-worker
      system-sidekiq
      system-sphinx
    Stopped:
      backend-listener
      backend-redis
      system-app
Type the following command to see the status of each pod:
$ oc get pods
NAME                               READY   STATUS      RESTARTS   AGE
3scale-operator-66cc6d857b-sxhgm   1/1     Running     0          17h
apicast-production-1-deploy        1/1     Running     0          17m
apicast-production-1-pxkqm         0/1     Pending     0          17m
apicast-staging-1-dbwcw            1/1     Running     0          17m
apicast-staging-1-deploy           0/1     Completed   0          17m
backend-cron-1-deploy              1/1     Running     0          17m
1.7.3. Incorrectly pulling from the Docker registry
Problem
The following error occurs during installation:
svc/system-redis - 1EX.AMP.LE.IP:6379
  dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3
    deployment #1 failed 13 minutes ago: config change
Cause
OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.redhat.io Red Hat container registry.
This occurs when the system contains an unexpected version of the Docker containerized environment.
Solution
Procedure
Use the appropriate version of the Docker containerized environment.
1.7.4. Permission issues for MySQL when persistent volumes are mounted locally
Problem
The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod log displays the following error:
[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting
Cause
The MySQL process is started with inappropriate user permissions.
Solution
Procedure
The directories used for the persistent volumes MUST have write permissions for the root group. Having read-write permissions for the root user is not enough, as the MySQL service runs as a different user in the root group. Execute the following command as the root user:
chmod -R g+w /path/for/pvs
Execute the following command to prevent SElinux from blocking access:
chcon -Rt svirt_sandbox_file_t /path/for/pvs
1.7.5. Unable to upload logo or images
Problem
Unable to upload a logo. The system-app logs display the following error:
Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2
Cause
Persistent volumes are not writable by OpenShift.
Solution
Procedure
Ensure your persistent volume is writable by OpenShift. It should be owned by the root group and be group writable.
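For example, on the host that provides the volume you can set root group ownership and group-write permissions, mirroring the MySQL case above; the path is a placeholder for your persistent volume directory:
# Give the root group ownership and write access to the persistent volume directories
chgrp -R root /path/for/pvs
chmod -R g+w /path/for/pvs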
1.7.6. Test calls not working on OpenShift
Problem
Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.
Cause
3scale requires HTTPS routes by default, and OpenShift routes are not secured.
Solution
Procedure
Ensure the secure route checkbox is selected in your OpenShift router settings.
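If you prefer the command line, a secured (edge-terminated) route can be created with oc create route; the route, service, and hostname below are placeholders:
# Create an edge-terminated (HTTPS) route for the service
oc create route edge <route_name> --service=<service_name> --hostname=<host>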
1.7.7. APIcast on a different project from 3scale failing to deploy
Problem
APIcast deploy fails (pod does not turn blue). You see the following error in the logs:
update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready
You see the following error in the pod:
Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"
Cause
The secret was not properly set up.
Solution
Procedure
When creating a secret with APIcast v3, specify apicast-configuration-url-secret:
oc secret new-basicauth apicast-configuration-url-secret --password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>