Chapter 2. Installing 3scale on OpenShift
This section walks you through steps to deploy Red Hat 3scale API Management 2.9 on OpenShift.
The Red Hat 3scale API Management solution for on-premises deployment is composed of:
- Two API gateways: embedded APIcast
- One 3scale Admin Portal and Developer Portal with persistent storage
There are two ways to deploy a 3scale solution: using the 3scale operator or using OpenShift templates.
- Whether deploying 3scale using the operator or via templates, you must first configure registry authentication to the Red Hat container registry. See Configuring container registry authentication.
- The 3scale Istio Adapter is available as an optional adapter that allows labeling a service running within the Red Hat OpenShift Service Mesh and integrating that service with Red Hat 3scale API Management. Refer to the 3scale adapter documentation for more information.
Prerequisites
- You must configure 3scale servers for UTC (Coordinated Universal Time).
- Create user credentials using the steps in Chapter 1, Registry service accounts for 3scale.
To install 3scale on OpenShift, perform the steps outlined in the following sections:
- Section 2.1, “System requirements for installing 3scale on OpenShift”
- Section 2.2, “Configuring nodes and entitlements”
- Section 2.3, “Deploying 3scale on OpenShift using a template”
- Section 2.5, “Parameters of the 3scale template”
- Section 2.6, “Using APIcast with 3scale on OpenShift”
- Section 2.7, “Deploying 3scale using the operator”
- Section 2.9, “Troubleshooting common 3scale installation issues”
2.1. System requirements for installing 3scale on OpenShift
This section lists the requirements for the 3scale - OpenShift template.
2.1.1. Environment requirements
Red Hat 3scale API Management requires an environment specified in supported configurations.
If you are using local filesystem storage:
Persistent volumes
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
- 1 RWX (ReadWriteMany) persistent volume for Developer Portal content and System-app Assets
Configure the RWX persistent volume to be group writable. For a list of persistent volume types that support the required access modes, see the OpenShift documentation.
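The following is a minimal sketch of an NFS-backed persistent volume that provides the required RWX access mode. The server address, export path, capacity, and name are placeholders for illustration, not values from this installation:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: system-storage-rwx
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany          # required for Developer Portal content and system-app assets
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/3scale    # placeholder export path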
If you are using an Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage:
Persistent volumes
- 3 RWO (ReadWriteOnce) persistent volumes for Redis and MySQL persistence
Storage
- 1 Amazon S3 bucket
2.1.2. Hardware requirements
Hardware requirements depend on your usage needs. Red Hat recommends that you test and configure your environment to meet your specific requirements. The following are the recommendations when configuring your environment for 3scale on OpenShift:
- Compute optimized nodes for deployments on cloud environments (AWS c4.2xlarge or Azure Standard_F8).
- Very large installations may require a separate node (AWS M4 series or Azure Av2 series) for Redis if memory requirements exceed your current node’s available RAM.
- Separate nodes between routing and compute tasks.
- Dedicated computing nodes for 3scale specific tasks.
- Set the PUMA_WORKERS variable of the back-end listener to the number of cores in your compute node.
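For example, on a template-based deployment you might set the variable with the oc client. This is a sketch: the DeploymentConfig name backend-listener reflects the default template objects, and 8 is a placeholder core count:
oc set env dc/backend-listener PUMA_WORKERS=8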
2.2. Configuring nodes and entitlements
Before deploying 3scale on OpenShift, you must configure the necessary nodes and the entitlements for the environment to fetch images from the Red Hat Ecosystem Catalog. Perform the following steps to configure the nodes and entitlements:
Procedure
- Install Red Hat Enterprise Linux (RHEL) on each of your nodes.
- Register your nodes with Red Hat using the Red Hat Subscription Manager (RHSM), via the interface or the command line.
- Attach your nodes to your 3scale subscription using RHSM.
Install OpenShift on your nodes, complying with the following requirements:
- Use a supported OpenShift version.
- Configure persistent storage on a file system that supports multiple writes.
- Install the OpenShift command line interface.
Enable access to the rhel-7-server-3scale-amp-2-rpms repository using the subscription manager:
sudo subscription-manager repos --enable=rhel-7-server-3scale-amp-2-rpms
Install the 3scale template called 3scale-amp-template. This will be saved at /opt/amp/templates.
sudo yum install 3scale-amp-template
2.2.1. Configuring Amazon Simple Storage Service
Skip this section if you are deploying 3scale with local filesystem storage.
If you want to use an Amazon Simple Storage Service (Amazon S3) bucket as storage, you must configure your bucket before you can deploy 3scale on OpenShift.
Perform the following steps to configure your Amazon S3 bucket for 3scale:
Create an Identity and Access Management (IAM) policy with the following minimum permissions:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::targetBucketName", "arn:aws:s3:::targetBucketName/*" ] } ] }
Create a CORS configuration with the following rules:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
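If you manage the bucket with the AWS CLI instead of the S3 console, the same rules can be applied with put-bucket-cors, which expects the configuration as JSON. This is a sketch; the bucket name is a placeholder:
aws s3api put-bucket-cors --bucket targetBucketName --cors-configuration '{
  "CORSRules": [
    { "AllowedOrigins": ["https://*"], "AllowedMethods": ["GET"] }
  ]
}'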
2.3. Deploying 3scale on OpenShift using a template
OpenShift Container Platform (OCP) 4.x supports deployment of 3scale using the operator only. See Deploying 3scale using the operator.
Prerequisites
- An OpenShift cluster configured as specified in the Configuring nodes and entitlements section.
- A domain that resolves to your OpenShift cluster.
- Access to the Red Hat Ecosystem Catalog.
- (Optional) An Amazon Simple Storage Service (Amazon S3) bucket for content management system (CMS) storage outside of the local filesystem.
(Optional) A deployment with PostgreSQL.
- This is the same as the default deployment on OpenShift; however, it uses PostgreSQL as the internal system database.
- (Optional) A working SMTP server for email functionality.
Deploying 3scale on OpenShift using a template is based on OpenShift Container Platform 3.11.
Follow these procedures to install 3scale on OpenShift using a .yml template:
2.4. Configuring container registry authentication
As a 3scale administrator, configure authentication with registry.redhat.io before you deploy 3scale container images on OpenShift.
Prerequisites
- Cluster administrator access to an OpenShift Container Platform cluster.
- The OpenShift oc client tool is installed. For more details, see the OpenShift CLI documentation.
Procedure
Log into your OpenShift cluster as administrator:
$ oc login -u system:admin
Open the project in which you want to deploy 3scale:
oc project your-openshift-project
Create a docker-registry secret using your Red Hat Customer Portal account, replacing threescale-registry-auth with the name of the secret to create:
$ oc create secret docker-registry threescale-registry-auth \
  --docker-server=registry.redhat.io \
  --docker-username=CUSTOMER_PORTAL_USERNAME \
  --docker-password=CUSTOMER_PORTAL_PASSWORD \
  --docker-email=EMAIL_ADDRESS
You will see the following output:
secret/threescale-registry-auth created
Link the secret to your service account to use the secret for pulling images. The service account name must match the name that the OpenShift pod uses. This example uses the default service account:
$ oc secrets link default threescale-registry-auth --for=pull
Link the secret to the builder service account to use the secret for pushing and pulling build images:
$ oc secrets link builder threescale-registry-auth
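To confirm the links, you can inspect both service accounts; the threescale-registry-auth secret should appear under the image pull secrets of default and under the secrets of builder. This is a verification sketch, not a required step:
$ oc describe serviceaccount default
$ oc describe serviceaccount builder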
Additional resources
For more details on authenticating with Red Hat for container images, see the Red Hat container registry authentication documentation.
2.4.1. Creating registry service accounts
To use container images from registry.redhat.io in a shared environment with 3scale 2.9 deployed on OpenShift, you must use a Registry Service Account instead of an individual user’s Customer Portal credentials.
It is a requirement for 3scale 2.8 and greater that you follow the steps outlined below before deploying either on OpenShift using a template or via the operator, as both options use registry authentication.
Procedure
- Navigate to the Registry Service Accounts page and log in.
Click New Service Account. Fill in the form on the Create a New Registry Service Account page.
Add a name for the service account.
Note: You will see a fixed-length, randomly generated number string before the form field.
- Enter a Description.
- Click Create.
- Navigate back to your Service Accounts.
- Click the Service Account you created.
Make a note of the username, including the prefix string, for example 12345678|username, and your password.
This username and password are used to log in to registry.redhat.io.
Note: There are tabs available on the Token Information page that show you how to use the authentication token. For example, the Token Information tab shows the username in the format 12345678|username and the password string below it.
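You can verify the token from a workstation before using it in OpenShift. This is a sketch that assumes the podman client is installed; the username shown is the placeholder format from the Token Information tab:
podman login -u '12345678|username' registry.redhat.io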
2.4.2. Modifying registry service accounts
Service accounts can be modified or deleted. This can be done from the Registry Service Accounts page using the pop-up menu to the right of each authentication token in the table.
The regeneration or removal of service accounts impacts systems that are using the token to authenticate and retrieve content from registry.redhat.io.
A description for each function is as follows:
Regenerate token: Allows an authorized user to reset the password associated with the Service Account.
Note: The username for the Service Account cannot be changed.
- Update Description: Allows an authorized user to update the description for the Service Account.
- Delete Account: Allows an authorized user to remove the Service Account.
2.4.3. Importing the 3scale template
Wildcard routes have been removed as of 3scale 2.6.
- This functionality is handled by Zync in the background.
- When API providers are created, updated, or deleted, routes automatically reflect those changes.
Perform the following steps to import the 3scale template into your OpenShift cluster:
Procedure
From a terminal session log in to OpenShift as the cluster administrator:
oc login
Select your project, or create a new project:
oc project <project_name>
oc new-project <project_name>
Enter the oc new-app command:
- Specify the --file option with the path to the amp.yml file you downloaded as part of Configuring nodes and entitlements.
- Specify the --param option with the WILDCARD_DOMAIN parameter set to the domain of your OpenShift cluster:
oc new-app --file /opt/amp/templates/amp.yml --param WILDCARD_DOMAIN=<WILDCARD_DOMAIN>
The terminal shows the master and tenant URLs and credentials for your newly created 3scale Admin Portal. This output should include the following information:
- master admin username
- master password
- master token information
- tenant username
- tenant password
- tenant token information
Example output:
Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.

* With parameters:
* ADMIN_PASSWORD=xXxXyz123 # generated
* ADMIN_USERNAME=admin
* TENANT_NAME=user
* MASTER_NAME=master
* MASTER_USER=master
* MASTER_PASSWORD=xXxXyz123 # generated

--> Success
Access your application via route 'user-admin.3scale-project.example.com'
Access your application via route 'master-admin.3scale-project.example.com'
Access your application via route 'backend-user.3scale-project.example.com'
Access your application via route 'user.3scale-project.example.com'
Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
Access your application via route 'api-user-apicast-production.3scale-project.example.com'
- Make a note of these details for future reference.
The 3scale deployment on OpenShift has been successful when the following command returns:
oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
Note: When the 3scale deployment on OpenShift has been successful, your login credentials will work.
2.4.4. Getting the Admin Portal URL
When you deploy 3scale using the template, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}
The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is https://3scale-admin.3scale-project.example.com.
The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using this command:
xdg-open https://3scale-admin.3scale-project.example.com
Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
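If you do not recall the exact Admin Portal route, you can also list the routes the template created and look for the one with the -admin suffix. This is a sketch; the route names follow the defaults shown in the earlier example output:
oc get routes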
2.4.5. Deploying 3scale with Amazon Simple Storage Service
Deploying 3scale with Amazon Simple Storage Service (Amazon S3) is an optional procedure. Deploy 3scale with Amazon S3 using the following steps:
Procedure
- Download amp-s3.yml.
Log in to OpenShift from a terminal session:
oc login
Select your project, or create a new project:
oc project <project_name>
OR
oc new-project <project_name>
Enter the oc new-app command:
- Specify the --file option with the path to the amp-s3.yml file.
- Specify the --param options with the following values:
  - WILDCARD_DOMAIN: the domain of your OpenShift cluster.
  - AWS_BUCKET: your target bucket name.
  - AWS_ACCESS_KEY_ID: your AWS credentials ID.
  - AWS_SECRET_ACCESS_KEY: your AWS credentials KEY.
  - AWS_REGION: the AWS region of your bucket.
  - AWS_HOSTNAME: Default: Amazon endpoints - AWS S3 compatible provider endpoint hostname.
  - AWS_PROTOCOL: Default: HTTPS - AWS S3 compatible provider endpoint protocol.
  - AWS_PATH_STYLE: Default: false - When set to true, the bucket name is always left in the request URI and never moved to the host as a subdomain.
- Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale:
oc new-app --file /path/to/amp-s3.yml \
  --param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
  --param TENANT_NAME=3scale \
  --param AWS_ACCESS_KEY_ID=<your-aws-access-key-id> \
  --param AWS_SECRET_ACCESS_KEY=<your-aws-access-key-secret> \
  --param AWS_BUCKET=<your-target-bucket-name> \
  --param AWS_REGION=<your-aws-bucket-region> \
  --param FILE_UPLOAD_STORAGE=s3
The terminal shows the master and tenant URLs, as well as credentials for your newly created 3scale Admin Portal. This output should include the following information:
- master admin username
- master password
- master token information
- tenant username
- tenant password
- tenant token information
Example output:
Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.
...
* With parameters:
* ADMIN_PASSWORD=xXxXyz123 # generated
* ADMIN_USERNAME=admin
* TENANT_NAME=user
...
* MASTER_NAME=master
* MASTER_USER=master
* MASTER_PASSWORD=xXxXyz123 # generated
...
--> Success
Access your application via route 'user-admin.3scale-project.example.com'
Access your application via route 'master-admin.3scale-project.example.com'
Access your application via route 'backend-user.3scale-project.example.com'
Access your application via route 'user.3scale-project.example.com'
Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
Access your application via route 'api-user-apicast-production.3scale-project.example.com'
Access your application via route 'apicast-wildcard.3scale-project.example.com'
...
- Make a note of these details for future reference.
The 3scale deployment on OpenShift has been successful when the following command returns:
oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
Note: When the 3scale deployment on OpenShift has been successful, your login credentials will work.
2.4.6. Deploying 3scale with PostgreSQL
Deploying 3scale with PostgreSQL is an optional procedure. Deploy 3scale with PostgreSQL using the following steps:
Procedure
- Download amp-postgresql.yml.
Log in to OpenShift from a terminal session:
oc login
Select your project, or create a new project:
oc project <project_name>
OR
oc new-project <project_name>
Enter the oc new-app command:
- Specify the --file option with the path to the amp-postgresql.yml file.
- Specify the --param options with the following values:
  - WILDCARD_DOMAIN: the domain of your OpenShift cluster.
- Optionally, specify the --param option with the TENANT_NAME parameter to set a custom name for the Admin Portal. If omitted, this defaults to 3scale:
oc new-app --file /path/to/amp-postgresql.yml \
  --param WILDCARD_DOMAIN=<a-domain-that-resolves-to-your-ocp-cluster.com> \
  --param TENANT_NAME=3scale
The terminal shows the master and tenant URLs, as well as the credentials for your newly created 3scale Admin Portal. This output should include the following information:
- master admin username
- master password
- master token information
- tenant username
- tenant password
- tenant token information
Example output:
Log in to https://user-admin.3scale-project.example.com as admin/xXxXyz123.
...
* With parameters:
* ADMIN_PASSWORD=xXxXyz123 # generated
* ADMIN_USERNAME=admin
* TENANT_NAME=user
...
* MASTER_NAME=master
* MASTER_USER=master
* MASTER_PASSWORD=xXxXyz123 # generated
...
--> Success
Access your application via route 'user-admin.3scale-project.example.com'
Access your application via route 'master-admin.3scale-project.example.com'
Access your application via route 'backend-user.3scale-project.example.com'
Access your application via route 'user.3scale-project.example.com'
Access your application via route 'api-user-apicast-staging.3scale-project.example.com'
Access your application via route 'api-user-apicast-production.3scale-project.example.com'
Access your application via route 'apicast-wildcard.3scale-project.example.com'
...
- Make a note of these details for future reference.
The 3scale deployment on OpenShift has been successful when the following command returns:
oc wait --for=condition=available --timeout=-1s $(oc get dc --output=name)
Note: When the 3scale deployment on OpenShift has been successful, your login credentials will work.
2.4.7. Configuring SMTP variables (optional)
OpenShift uses email to send notifications and invite new users. If you intend to use these features, you must provide your own SMTP server and configure SMTP variables in the system-smtp secret.
Perform the following steps to configure the SMTP variables in the system-smtp secret:
Procedure
If you are not already logged in, log in to OpenShift:
oc login
Using the oc patch command, specify the secret type where system-smtp is the name of the secret, followed by the -p option, and write the new values in JSON for the following variables:

Variable | Description |
---|---|
address | Allows you to specify a remote mail server as a relay |
username | Specify your mail server username |
password | Specify your mail server password |
domain | Specify a HELO domain |
port | Specify the port on which the mail server is listening for new connections |
authentication | Specify the authentication type of your mail server. Allowed values: plain (sends the password in the clear), login (sends the password Base64 encoded), or cram_md5 (exchanges information and a cryptographic Message Digest 5 algorithm to hash important information) |
openssl.verify.mode | Specify how OpenSSL checks certificates when using TLS. Allowed values: none or peer. |

Example

oc patch secret system-smtp -p '{"stringData":{"address":"<your_address>"}}'
oc patch secret system-smtp -p '{"stringData":{"username":"<your_username>"}}'
oc patch secret system-smtp -p '{"stringData":{"password":"<your_password>"}}'
After you have set the secret variables, redeploy the system-app and system-sidekiq pods:
oc rollout latest dc/system-app
oc rollout latest dc/system-sidekiq
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-app
oc rollout status dc/system-sidekiq
2.5. Parameters of the 3scale template
Template parameters configure environment variables of the 3scale (amp.yml) template during and after deployment.
Name | Description | Default Value | Required? |
---|---|---|---|
APP_LABEL | Used for object app labels | | yes |
ZYNC_DATABASE_PASSWORD | Password for the PostgreSQL connection user. Generated randomly if not provided. | N/A | yes |
ZYNC_SECRET_KEY_BASE | Secret key base for Zync. Generated randomly if not provided. | N/A | yes |
ZYNC_AUTHENTICATION_TOKEN | Authentication token for Zync. Generated randomly if not provided. | N/A | yes |
AMP_RELEASE | 3scale release tag. | | yes |
ADMIN_PASSWORD | A randomly generated 3scale administrator account password. | N/A | yes |
ADMIN_USERNAME | 3scale administrator account username. | | yes |
APICAST_ACCESS_TOKEN | Read Only Access Token that APIcast will use to download its configuration. | N/A | yes |
ADMIN_ACCESS_TOKEN | Admin Access Token with all scopes and write permissions for API access. | N/A | no |
WILDCARD_DOMAIN | Root domain for the wildcard routes. For example, a root domain of example.com generates the Admin Portal route 3scale-admin.example.com. | N/A | yes |
TENANT_NAME | Tenant name under the root that Admin Portal will be available with -admin suffix. | | yes |
MYSQL_USER | Username for MySQL user that will be used for accessing the database. | | yes |
MYSQL_PASSWORD | Password for the MySQL user. | N/A | yes |
MYSQL_DATABASE | Name of the MySQL database accessed. | | yes |
MYSQL_ROOT_PASSWORD | Password for Root user. | N/A | yes |
SYSTEM_BACKEND_USERNAME | Internal 3scale API username for internal 3scale api auth. | | yes |
SYSTEM_BACKEND_PASSWORD | Internal 3scale API password for internal 3scale api auth. | N/A | yes |
REDIS_IMAGE | Redis image to use | | yes |
MYSQL_IMAGE | MySQL image to use | | yes |
MEMCACHED_IMAGE | Memcached image to use | | yes |
POSTGRESQL_IMAGE | PostgreSQL image to use | | yes |
AMP_SYSTEM_IMAGE | 3scale System image to use | | yes |
AMP_BACKEND_IMAGE | 3scale Backend image to use | | yes |
AMP_APICAST_IMAGE | 3scale APIcast image to use | | yes |
AMP_ZYNC_IMAGE | 3scale Zync image to use | | yes |
SYSTEM_BACKEND_SHARED_SECRET | Shared secret to import events from backend to system. | N/A | yes |
SYSTEM_APP_SECRET_KEY_BASE | System application secret key base | N/A | yes |
APICAST_MANAGEMENT_API | Scope of the APIcast Management API. Can be disabled, status or debug. At least status required for health checks. | | no |
APICAST_OPENSSL_VERIFY | Turn on/off the OpenSSL peer verification when downloading the configuration. Can be set to true/false. | | no |
APICAST_RESPONSE_CODES | Enable logging response codes in APIcast. | true | no |
APICAST_REGISTRY_URL | A URL which resolves to the location of APIcast policies | | yes |
MASTER_USER | Master administrator account username | | yes |
MASTER_NAME | The subdomain value for the master Admin Portal, will be appended with the | | yes |
MASTER_PASSWORD | A randomly generated master administrator password | N/A | yes |
MASTER_ACCESS_TOKEN | A token with master level permissions for API calls | N/A | yes |
IMAGESTREAM_TAG_IMPORT_INSECURE | Set to true if the server may bypass certificate verification or connect directly over HTTP during image import. | | yes |
2.6. Using APIcast with 3scale on OpenShift
APIcast is available with API Manager for 3scale hosted, and in on-premises installations in OpenShift Container Platform. The configuration procedures are different for both.
This section explains how to deploy APIcast with API Manager on OpenShift.
- Deploying APIcast templates on an existing OpenShift cluster containing 3scale
- Connecting APIcast from a different OpenShift cluster
- Changing the default behavior for embedded APIcast
- Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes
- Connecting APIcast on other deployments
2.6.1. Deploying APIcast templates on an existing OpenShift cluster containing 3scale
3scale OpenShift templates contain two embedded APIcast gateways by default. If you require more API gateways, or require separate APIcast deployments, you can deploy additional APIcast templates on your OpenShift cluster.
Perform the following steps to deploy additional API gateways on your OpenShift cluster:
Procedure
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify the create secret and apicast-configuration-url-secret parameters with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value here.
Import the APIcast template using the oc new-app command, specifying the --file option with the apicast.yml file:
oc new-app --file /opt/amp/templates/apicast.yml
Note: First install the APIcast template as described in Configuring nodes and entitlements.
2.6.2. Connecting APIcast from a different OpenShift cluster
If you deploy APIcast on a different OpenShift cluster, outside your 3scale cluster, you must connect through the public route:
Procedure
Create an access token with the following configurations:
- Scoped to Account Management API
- Having read-only access
Log in to your APIcast cluster:
oc login
Create a secret that allows APIcast to communicate with 3scale. Specify the create secret and apicast-configuration-url-secret parameters with the access token, tenant name, and wildcard domain of your 3scale deployment:
oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>
Note: TENANT_NAME is the name under the root that the Admin Portal will be available with. The default value for TENANT_NAME is 3scale. If you used a custom value in your 3scale deployment, you must use that value.
Deploy APIcast on a different OpenShift cluster using the oc new-app command. Specify the --file option and the path to your apicast.yml file:
oc new-app --file /path/to/file/apicast.yml
2.6.3. Changing the default behavior for embedded APIcast
In external APIcast deployments, you can modify default behavior by changing the template parameters in the APIcast OpenShift template.
In embedded APIcast deployments, 3scale and APIcast are deployed from a single template. You must modify environment variables after deployment if you wish to change the default behavior for the embedded APIcast deployments.
2.6.4. Connecting multiple APIcast deployments on a single OpenShift cluster over internal service routes
If you deploy multiple APIcast gateways into the same OpenShift cluster, you can configure them to connect using internal routes through the backend listener service instead of the default external route configuration.
You must have an OpenShift Software-Defined Networking (SDN) plugin installed to connect over internal service routes. How you connect depends on which SDN you have installed:
ovs-subnet
If you are using the ovs-subnet OpenShift SDN plugin, perform the following steps to connect over internal routes:
Procedure
If not already logged in, log in to your OpenShift cluster:
oc login
Enter the following command to display the backend-listener route URL:
oc get route backend
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
ovs-multitenant
If you are using the ovs-multitenant OpenShift SDN plugin, perform the following steps to connect over internal routes:
Procedure
If not already logged in, log in to your OpenShift cluster:
oc login
As administrator, specify the oadm command with the pod-network and join-projects options to set up communication between both projects:
oadm pod-network join-projects --to=<3SCALE_PROJECT> <APICAST_PROJECT>
Enter the following command to display the backend-listener route URL:
oc get route backend
Enter the oc new-app command with the path to apicast.yml:
oc new-app -f apicast.yml
Additional resources
For information on OpenShift SDN and project network isolation, see OpenShift SDN.
2.6.5. Connecting APIcast on other deployments
If you deploy APIcast on Docker, you can connect APIcast to 3scale deployed on OpenShift by setting the THREESCALE_PORTAL_ENDPOINT parameter to the URL and access token of your 3scale Admin Portal deployed on OpenShift. You do not need to set the BACKEND_ENDPOINT_OVERRIDE parameter in this case.
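The following is a minimal sketch of such a Docker invocation. The image reference and port mapping are illustrative assumptions, and the access token, tenant name, and wildcard domain placeholders match the ones used earlier in this chapter:
# APIcast reads its configuration from the 3scale Admin Portal endpoint
docker run -d --name apicast -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN> \
  registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.9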
Additional resources
For more details, see Deploying APIcast on the Docker containerized environment.
2.7. Deploying 3scale using the operator
This section takes you through installing and deploying the 3scale solution via the 3scale operator, using the APIManager custom resource.
Wildcard routes have been removed since 3scale 2.6.
- This functionality is handled by Zync in the background.
- When API providers are created, updated, or deleted, routes automatically reflect those changes.
Prerequisites
- Configuring container registry authentication
- To make sure you receive automatic updates of micro releases for 3scale, you must have enabled the automatic approval functionality in the 3scale operator. Automatic is the default approval setting. To change this at any time based on your specific needs, use the steps for Configuring automated application of micro releases.
- Deploying 3scale using the operator first requires that you follow the steps in Installing the 3scale Operator on OpenShift
OpenShift Container Platform 4
- A user account with administrator privileges in the OpenShift cluster.
- Note: OCP 4 supports deployment of 3scale using the operator only.
- For more information about supported configurations, see the Red Hat 3scale API Management Supported Configurations page.
Follow these procedures to deploy 3scale using the operator:
2.7.1. Deploying the APIManager custom resource
Deploying the APIManager custom resource makes the operator begin processing it and deploys a 3scale solution from it.
Procedure
Click Operators > Installed Operators.
- From the list of Installed Operators, click 3scale Operator.
- Click the API Manager tab.
- Click Create APIManager.
Clear the sample content and add the following YAML definitions to the editor, then click Create.
Before 3scale 2.8, you could configure the automatic addition of replicas by setting the highAvailability field to true. From 3scale 2.8, the addition of replicas is controlled through the replicas field in the APIManager CR, as shown in the following example.
Note: The wildcardDomain parameter can be any name you wish that resolves to an IP address and is a valid DNS domain.
APIManager CR with minimum requirements:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  wildcardDomain: example.com
APIManager CR with replicas configured:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  system:
    appSpec:
      replicas: 1
    sidekiqSpec:
      replicas: 1
  zync:
    appSpec:
      replicas: 1
    queSpec:
      replicas: 1
  backend:
    cronSpec:
      replicas: 1
    listenerSpec:
      replicas: 1
    workerSpec:
      replicas: 1
  apicast:
    productionSpec:
      replicas: 1
    stagingSpec:
      replicas: 1
  wildcardDomain: example.com
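If you prefer the oc client to the web console, you can apply the same custom resource from a file. This is a sketch that assumes the YAML above is saved as apimanager-sample.yaml and that the 3scale operator is installed in the selected project:
oc project <project_name>
oc apply -f apimanager-sample.yaml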
2.7.2. Getting the APIManager Admin Portal and Master Admin Portal credentials
To log in to either the 3scale Admin Portal or Master Admin Portal after the operator-based deployment, you need the credentials for each separate portal. To get these credentials:
Run the following commands to get the Admin Portal credentials:
oc get secret system-seed -o json | jq -r .data.ADMIN_USER | base64 -d
oc get secret system-seed -o json | jq -r .data.ADMIN_PASSWORD | base64 -d
- Log in as the Admin Portal administrator to verify these credentials are working.
Run the following commands to get the Master Admin Portal credentials:
oc get secret system-seed -o json | jq -r .data.MASTER_USER | base64 -d
oc get secret system-seed -o json | jq -r .data.MASTER_PASSWORD | base64 -d
- Log in as the Master Admin Portal administrator to verify these credentials are working.
Additional resources
For more information about the APIManager fields, refer to the Reference documentation.
2.7.3. Getting the Admin Portal URL
When you deploy 3scale using the operator, a default tenant is created, with a fixed URL: 3scale-admin.${wildcardDomain}
The 3scale Dashboard shows the new portal URL of the tenant. As an example, if the <wildCardDomain> is 3scale-project.example.com, the Admin Portal URL is https://3scale-admin.3scale-project.example.com.
The wildcardDomain is the <wildCardDomain> parameter you provided during installation. Open this unique URL in a browser using this command:
xdg-open https://3scale-admin.3scale-project.example.com
Optionally, you can create new tenants on the MASTER portal URL: master.${wildcardDomain}
2.7.4. Configuring automated application of micro releases
To obtain micro release updates and have them be applied automatically, the 3scale operator’s approval strategy must be set to Automatic. The following describes the differences between Automatic and Manual settings and outlines the steps in a procedure to change from one to the other.
Automatic and manual:
- During installation, Automatic is the selected option by default. Installation of new updates occurs as they become available. You can change this during the installation or at any time afterwards.
- If you select the Manual option during installation or at any time afterwards, you are notified when updates are available, but you must approve the Install Plan and apply it yourself.
Procedure
- Click Operators > Installed Operators.
- Click 3scale API Management from the list of Installed Operators.
- Click the Subscription tab. Under the Subscription Details heading you will see the subheading Approval.
- Click the link below Approval. The link is set to Automatic by default. A modal with the heading Change Update Approval Strategy will pop up.
- Choose the option of your preference: Automatic (default) or Manual, and then click Save.
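Alternatively, the approval strategy can be inspected or changed with the oc client on the operator's Subscription resource. This is a sketch; the subscription name and project depend on how the operator was installed:
oc get subscription -n <project_name>
oc patch subscription <subscription_name> -n <project_name> \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'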
Additional resources
- See Approval Strategy under Operator installation with OperatorHub.
2.7.5. High availability in 3scale using the operator
High availability (HA) in 3scale using the operator aims to provide uninterrupted uptime if, for example, one or more databases were to fail.
.spec.highAvailability.enabled is only for external databases.
If you want HA in your 3scale operator-based deployment, note the following:
- Deploy and configure the 3scale critical databases externally, specifically the system database, system redis, and backend redis. Make sure you deploy and configure those databases so that they are highly available.
Specify the connection endpoints to those databases for 3scale by pre-creating their corresponding Kubernetes Secrets.
- See External databases installation for more information.
- See Enabling Pod Disruption Budgets for more information about non-database deployment configurations.
- Set the .spec.highAvailability.enabled attribute to true when deploying the APIManager CR to enable external database mode for the critical databases: system database, system redis, and backend redis.
Additionally, if you want the zync database to be highly available to avoid zync potentially losing queue jobs data on restart, note the following:
- Deploy and configure the zync database externally. Make sure you deploy and configure the database in a way that it is highly available.
Specify the connection endpoint to the zync database for 3scale by pre-creating its corresponding Kubernetes Secrets.
- See Zync database secret for more information.
- Deploy 3scale with the spec.highAvailability.externalZyncDatabaseEnabled attribute set to true to specify the zync database as an external database.
2.8. Deployment configuration options for 3scale on OpenShift using the operator
This section provides information about the deployment configuration options for Red Hat 3scale API Management on OpenShift using the operator.
Prerequisites
- Configuring container registry authentication
- Deploying 3scale using the operator first requires that you follow the steps in Installing the 3scale Operator on OpenShift
OpenShift Container Platform 4.x
- A user account with administrator privileges in the OpenShift cluster.
2.8.1. Proof of concept for evaluation deployment
The following sections describe the configuration options applicable to the proof of concept for an evaluation deployment of 3scale. This deployment uses internal databases by default.
The configuration for external databases is the standard deployment option for production environments.
2.8.1.1. Default deployment configuration
Containers will have Kubernetes resource limits and requests.
- This ensures a minimum performance level.
- It limits resources to allow external services and allocation of solutions.
- Deployment of internal databases.
File storage will be based on Persistent Volumes (PV).
- One persistent volume requires the RWX (ReadWriteMany) access mode.
- OpenShift configured to provide them upon request.
- Deploy MySQL as the internal relational database.
The default configuration option is suitable for proof of concept (PoC) or evaluation by a customer.
One, many, or all of the default configuration options can be overridden with specific field values in the APIManager custom resource. The 3scale operator allows all available combinations, whereas templates allow fixed deployment profiles. For example, the 3scale operator allows deployment of 3scale in evaluation mode and external databases mode. Templates do not allow this specific deployment configuration. Templates are only available for the most common configuration options.
2.8.1.2. Evaluation installation
For an evaluation installation, containers will not have Kubernetes resource limits and requests specified. For example:
- Small memory footprint
- Fast startup
- Runnable on laptop
- Suitable for presale/sales demos
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  resourceRequirementsEnabled: false
Check APIManager custom resource for reference.
2.8.2. External databases installation
An external databases installation is suitable for production use where high availability (HA) is a requirement or where you plan to reuse your own databases.
When enabling the 3scale external databases installation mode, all of the following databases are externalized:
- backend-redis
- system-redis
- system-database (mysql, postgresql, or oracle)
3scale 2.8 and above has been tested and is supported with the following database versions:
Database | Version |
---|---|
Redis | 5.0 |
MySQL | 5.7 |
PostgreSQL | 10.6 |
Before creating the APIManager custom resource to deploy 3scale, you must provide the following connection settings for the external databases using OpenShift secrets.
2.8.2.1. Backend Redis secret
Deploy two external Redis instances and fill in the connection settings as shown in the following example:
apiVersion: v1
kind: Secret
metadata:
  name: backend-redis
stringData:
  REDIS_STORAGE_URL: "redis://backend-redis-storage"
  REDIS_STORAGE_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  REDIS_STORAGE_SENTINEL_ROLE: "master"
  REDIS_QUEUES_URL: "redis://backend-redis-queues"
  REDIS_QUEUES_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  REDIS_QUEUES_SENTINEL_ROLE: "master"
type: Opaque
The Secret name must be backend-redis.
2.8.2.2. System Redis secret
Deploy two external Redis instances and fill in the connection settings as shown in the following example:
apiVersion: v1
kind: Secret
metadata:
  name: system-redis
stringData:
  URL: "redis://system-redis"
  SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  SENTINEL_ROLE: "master"
  NAMESPACE: ""
  MESSAGE_BUS_URL: "redis://system-redis-messagebus"
  MESSAGE_BUS_SENTINEL_HOSTS: "redis://sentinel-0.example.com:26379,redis://sentinel-1.example.com:26379,redis://sentinel-2.example.com:26379"
  MESSAGE_BUS_SENTINEL_ROLE: "master"
  MESSAGE_BUS_NAMESPACE: ""
type: Opaque
The Secret name must be system-redis.
2.8.2.3. System database secret
The Secret name must be system-database.
When you are deploying 3scale, you have three alternatives for your system database. Configure different attributes and values for each alternative’s related secret.
- MySQL
- PostgreSQL
- Oracle Database
To deploy a MySQL, PostgreSQL, or an Oracle Database system database secret, fill in the connection settings as shown in the following examples:
MySQL system database secret
apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque
PostgreSQL system database secret
apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
type: Opaque
Oracle system database secret
apiVersion: v1
kind: Secret
metadata:
  name: system-database
stringData:
  URL: "oracle-enhanced://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
  ORACLE_SYSTEM_PASSWORD: "{SYSTEM_PASSWORD}"
type: Opaque
- The Oracle system user executes commands with system privileges. Some are detailed in this GitHub repository. The latest can be executed in the Oracle Database initializer when the tables are initialized in the database. There may be other commands executed that are not listed in these links.
- The system user is also required for upgrades when there are any schema migrations to run, so other commands not included in the previous links may be executed.
user is also required for upgrades when there are any schema migrations to run, so other commands not included in the previous links may be executed. - Disclaimer: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
2.8.2.4. Zync database secret
In a zync database setup, when HighAvailability is enabled, and if the externalZyncDatabaseEnabled field is also enabled, you must pre-create a secret named zync. Then set zync with the DATABASE_URL and ZYNC_DATABASE_PASSWORD fields with the values pointing to your external database. The external database must be in high-availability mode. See the following example:
apiVersion: v1
kind: Secret
metadata:
  name: zync
stringData:
  DATABASE_URL: postgresql://<zync-db-user>:<zync-db-password>@<zync-db-host>:<zync-db-port>/zync_production
  ZYNC_DATABASE_PASSWORD: <zync-db-password>
type: Opaque
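As an alternative to applying a YAML file, the same secret can be created directly with the oc client. This is a sketch; the connection values are placeholders for your external zync database:
oc create secret generic zync \
  --from-literal=DATABASE_URL=postgresql://<zync-db-user>:<zync-db-password>@<zync-db-host>:<zync-db-port>/zync_production \
  --from-literal=ZYNC_DATABASE_PASSWORD=<zync-db-password>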
2.8.2.5. APIManager custom resources to deploy 3scale
- When you enable highAvailability, you must pre-create the backend-redis, system-redis, and system-database secrets.
- When you enable highAvailability and the externalZyncDatabaseEnabled fields together, you must pre-create the zync database secret.
- Choose only one type of database to externalize in the case of system-database.
Configuration of the APIManager custom resource will depend on whether or not your choice of database is external to your 3scale deployment.
If your backend Redis, system Redis, and system database will be external to 3scale, the APIManager custom resource must have highAvailability set to true. See the following example:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true
If your zync database will be external, the APIManager custom resource must have highAvailability set to true and externalZyncDatabaseEnabled must also be set to true. See the following example:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  highAvailability:
    enabled: true
    externalZyncDatabaseEnabled: true
2.8.3. Amazon Simple Storage Service 3scale Filestorage installation
The following examples show 3scale FileStorage using Amazon Simple Storage Service (Amazon S3) instead of a persistent volume claim (PVC).
Before creating the APIManager custom resource to deploy 3scale, you must provide the connection settings for the S3 service using an OpenShift secret.
2.8.3.1. Amazon S3 secret
In the following example, the Secret name can be any name, because it is referenced in the APIManager custom resource.
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: aws-auth
stringData:
  AWS_ACCESS_KEY_ID: 123456
  AWS_SECRET_ACCESS_KEY: 98765544
  AWS_BUCKET: mybucket.example.com
  AWS_REGION: eu-west-1
type: Opaque
Lastly, create the APIManager custom resource to deploy 3scale.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    fileStorage:
      simpleStorageService:
        configurationSecretRef:
          name: aws-auth
The Amazon S3 region and Amazon S3 bucket settings are provided in the referenced secret rather than directly in the APIManager custom resource. Only the Amazon S3 secret name is provided directly in the APIManager custom resource.
Check APIManager SystemS3Spec for reference.
2.8.4. PostgreSQL installation
A MySQL internal relational database is the default deployment. This deployment configuration can be overridden to use PostgreSQL instead.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: lvh.me
  system:
    database:
      postgresql: {}
Check APIManager DatabaseSpec for reference.
2.8.5. Customizing compute resource requirements at component level
Customize Kubernetes compute resource requirements in your 3scale solution through the APIManager custom resource attributes. Do this to customize the compute resource requirements, that is, CPU and memory, assigned to a specific APIManager component.
The following example outlines how to customize compute resource requirements for the system-master's system-provider container, for the backend-listener, and for the zync-database:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      resources:
        requests:
          memory: "150Mi"
          cpu: "300m"
        limits:
          memory: "500Mi"
          cpu: "1000m"
  system:
    appSpec:
      providerContainerResources:
        requests:
          memory: "111Mi"
          cpu: "222m"
        limits:
          memory: "333Mi"
          cpu: "444m"
  zync:
    databaseResources:
      requests:
        memory: "111Mi"
        cpu: "222m"
      limits:
        memory: "333Mi"
        cpu: "444m"
Additional resources
See APIManager CRD reference for more information about how to specify component-level custom resource requirements.
2.8.5.1. Default APIManager components compute resources
When you configure the APIManager spec.resourceRequirementsEnabled attribute as true, the default compute resources are set for the APIManager components.
The specific compute resources default values that are set for the APIManager components are shown in the following table.
2.8.5.1.1. CPU and memory units
The following list explains the units you will find mentioned in the compute resources default values table. For more information on CPU and memory units, see Managing Resources for Containers.
Resource units explanation
- m - milliCPU or millicore
- Mi - mebibytes
- Gi - gibibyte
- G - gigabyte
Component | CPU requests | CPU limits | Memory requests | Memory limits |
---|---|---|---|---|
system-app’s system-master | 50m | 1000m | 600Mi | 800Mi |
system-app’s system-provider | 50m | 1000m | 600Mi | 800Mi |
system-app’s system-developer | 50m | 1000m | 600Mi | 800Mi |
system-sidekiq | 100m | 1000m | 500Mi | 2Gi |
system-sphinx | 80m | 1000m | 250Mi | 512Mi |
system-redis | 150m | 500m | 256Mi | 32Gi |
system-mysql | 250m | No limit | 512Mi | 2Gi |
system-postgresql | 250m | No limit | 512Mi | 2Gi |
backend-listener | 500m | 1000m | 550Mi | 700Mi |
backend-worker | 150m | 1000m | 50Mi | 300Mi |
backend-cron | 50m | 150m | 40Mi | 80Mi |
backend-redis | 1000m | 2000m | 1024Mi | 32Gi |
apicast-production | 500m | 1000m | 64Mi | 128Mi |
apicast-staging | 50m | 100m | 64Mi | 128Mi |
zync | 150m | 1 | 250M | 512Mi |
zync-que | 250m | 1 | 250M | 512Mi |
zync-database | 50m | 250m | 250M | 2G |
2.8.6. Customizing node affinity and tolerations at component level
Customize Kubernetes Affinity and Tolerations in your Red Hat 3scale API Management solution through the APIManager custom resource attributes to customize where and how the different 3scale components of an installation are scheduled onto Kubernetes Nodes.
The following example sets a custom node affinity for the backend listener, and custom tolerations for the system-memcached:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "kubernetes.io/hostname"
                operator: In
                values:
                - ip-10-96-1-105
              - key: "beta.kubernetes.io/arch"
                operator: In
                values:
                - amd64
  system:
    memcachedTolerations:
    - key: key1
      value: value1
      operator: Equal
      effect: NoSchedule
    - key: key2
      value: value2
      operator: Equal
      effect: NoSchedule
Additional resources
See the APIManager CRD reference for a full list of attributes related to affinity and tolerations.
2.8.7. Reconciliation
Once 3scale has been installed, the 3scale operator enables updating a given set of parameters from the custom resource to modify system configuration options. Modifications are made by hot swapping, that is, without stopping or shutting down the system.
Not all the parameters of the APIManager custom resource definitions (CRDs) are reconcilable.
The following is a list of reconcilable parameters:
2.8.7.1. Resources
Resource limits and requests for all 3scale components.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  resourceRequirementsEnabled: true/false
2.8.7.2. Backend replicas
Backend components pod count.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  backend:
    listenerSpec:
      replicas: X
    workerSpec:
      replicas: Y
    cronSpec:
      replicas: Z
2.8.7.3. APIcast replicas
APIcast staging and production components pod count.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  apicast:
    productionSpec:
      replicas: X
    stagingSpec:
      replicas: Z
2.8.7.4. System replicas
System app and system sidekiq components pod count.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  system:
    appSpec:
      replicas: X
    sidekiqSpec:
      replicas: Z
2.8.7.5. Zync replicas
Zync app and que components pod count.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  zync:
    appSpec:
      replicas: X
    queSpec:
      replicas: Z
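Because these parameters are reconciled, you can change them on a running installation. The following is a sketch of patching the APIManager CR with the oc client; it assumes the CR is named example-apimanager and uses an illustrative replica count:
oc patch apimanager example-apimanager --type merge \
  -p '{"spec":{"apicast":{"productionSpec":{"replicas":3}}}}'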
2.9. Troubleshooting common 3scale installation issues
This section contains a list of common installation issues and provides guidance for their resolution.
- Previous deployment leaving dirty persistent volume claims
- Wrong or missing credentials of the authenticated image registry
- Incorrectly pulling from the Docker registry
- Permission issues for MySQL when persistent volumes are mounted locally
- Unable to upload logo or images
- Test calls not working on OpenShift
- APIcast on a different project from 3scale failing to deploy
2.9.1. Previous deployment leaving dirty persistent volume claims
Problem
A previous deployment attempt leaves a dirty Persistent Volume Claim (PVC) causing the MySQL container to fail to start.
Cause
Deleting a project in OpenShift does not clean the PVCs associated with it.
Solution
Procedure
Find the PVC containing the erroneous MySQL data with the oc get pvc command:
# oc get pvc
NAME                    STATUS   VOLUME   CAPACITY   ACCESSMODES   AGE
backend-redis-storage   Bound    vol003   100Gi      RWO,RWX       4d
mysql-storage           Bound    vol006   100Gi      RWO,RWX       4d
system-redis-storage    Bound    vol008   100Gi      RWO,RWX       4d
system-storage          Bound    vol004   100Gi      RWO,RWX       4d
- Stop the deployment of the system-mysql pod by clicking cancel deployment in the OpenShift UI.
- Delete everything under the MySQL path to clean the volume.
- Start a new system-mysql deployment.
2.9.2. Wrong or missing credentials of the authenticated image registry
Problem
Pods are not starting. ImageStreams show the following error:
! error: Import failed (InternalError): ...unauthorized: Please login to the Red Hat Registry
Cause
While installing 3scale on OpenShift 4.x, OpenShift fails to start pods because ImageStreams cannot pull the images they reference. This happens because the pods cannot authenticate against the registries they point to.
Solution
Procedure
Type the following command to verify the configuration of your container registry authentication:
$ oc get secret
If your secret exists, you will see the following output in the terminal:
threescale-registry-auth kubernetes.io/dockerconfigjson 1 4m9s
- However, if you do not see the output, you must do the following:
- Use the credentials you previously set up while Creating a registry service account to create your secret.
- Use the steps in Configuring registry authentication in OpenShift, replacing <your-registry-service-account-username> and <your-registry-service-account-password> in the oc create secret command provided.
- Generate the threescale-registry-auth secret in the same namespace as the APIManager resource. You must run the following inside the <project-name>:
oc project <project-name>
oc create secret docker-registry threescale-registry-auth \
  --docker-server=registry.redhat.io \
  --docker-username="<your-registry-service-account-username>" \
  --docker-password="<your-registry-service-account-password>" \
  --docker-email="<email-address>"
Delete and recreate the APIManager resource:
$ oc delete -f apimanager.yaml
apimanager.apps.3scale.net "example-apimanager" deleted
$ oc create -f apimanager.yaml
apimanager.apps.3scale.net/example-apimanager created
Verification
Type the following command to confirm that deployments have a status of Starting or Ready. The pods then begin to spawn:
$ oc describe apimanager
(...)
Status:
  Deployments:
    Ready:
      apicast-staging
      system-memcache
      system-mysql
      system-redis
      zync
      zync-database
      zync-que
    Starting:
      apicast-production
      backend-cron
      backend-worker
      system-sidekiq
      system-sphinx
    Stopped:
      backend-listener
      backend-redis
      system-app
Type the following command to see the status of each pod:
$ oc get pods
NAME                               READY   STATUS      RESTARTS   AGE
3scale-operator-66cc6d857b-sxhgm   1/1     Running     0          17h
apicast-production-1-deploy        1/1     Running     0          17m
apicast-production-1-pxkqm         0/1     Pending     0          17m
apicast-staging-1-dbwcw            1/1     Running     0          17m
apicast-staging-1-deploy           0/1     Completed   0          17m
backend-cron-1-deploy              1/1     Running     0          17m
2.9.3. Incorrectly pulling from the Docker registry
Problem
The following error occurs during installation:
svc/system-redis - 1EX.AMP.LE.IP:6379 dc/system-redis deploys docker.io/rhscl/redis-32-rhel7:3.2-5.3 deployment #1 failed 13 minutes ago: config change
Cause
OpenShift searches for and pulls container images by issuing the docker command. This command refers to the docker.io Docker registry instead of the registry.redhat.io Red Hat Ecosystem Catalog.
This occurs when the system contains an unexpected version of the Docker containerized environment.
Solution
Procedure
Use the appropriate version of the Docker containerized environment.
2.9.4. Permission issues for MySQL when persistent volumes are mounted locally
Problem
The system-mysql pod crashes and does not deploy, causing other systems dependent on it to fail deployment. The pod log displays the following error:
[ERROR] Cannot start server : on unix socket: Permission denied
[ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
[ERROR] Aborting
Cause
The MySQL process is started with inappropriate user permissions.
Solution
Procedure
The directories used for the persistent volumes MUST have the write permissions for the root group. Having read-write permissions for the root user is not enough as the MySQL service runs as a different user in the root group. Execute the following command as the root user:
chmod -R g+w /path/for/pvs
Execute the following command to prevent SElinux from blocking access:
chcon -Rt svirt_sandbox_file_t /path/for/pvs
2.9.5. Unable to upload logo or images
Problem
Unable to upload a logo - the system-app logs display the following error:
Errno::EACCES (Permission denied @ dir_s_mkdir - /opt/system/public//system/provider-name/2
Cause
Persistent volumes are not writable by OpenShift.
Solution
Procedure
Ensure your persistent volume is writable by OpenShift. It should be owned by the root group and be group writable.
2.9.6. Test calls not working on OpenShift
Problem
Test calls do not work after creation of a new service and routes on OpenShift. Direct calls via curl also fail, stating: service not available.
Cause
3scale requires HTTPS routes by default, and OpenShift routes are not secured.
Solution
Procedure
Ensure the secure route checkbox is clicked in your OpenShift router settings.
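If you manage routes from the command line instead of the console, an HTTPS route with edge TLS termination can be created with the oc client. This is a sketch; the route, service, and hostname values are placeholders:
oc create route edge <route_name> --service=<service_name> --hostname=<hostname>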
2.9.7. APIcast on a different project from 3scale failing to deploy
Problem
APIcast deploy fails (pod does not turn blue). You see the following error in the logs:
update acceptor rejected apicast-3: pods for deployment "apicast-3" took longer than 600 seconds to become ready
You see the following error in the pod:
Error synching pod, skipping: failed to "StartContainer" for "apicast" with RunContainerError: "GenerateRunContainerOptions: secrets \"apicast-configuration-url-secret\" not found"
Cause
The secret was not properly set up.
Solution
Procedure
When creating a secret with APIcast v3, specify apicast-configuration-url-secret:
oc create secret generic apicast-configuration-url-secret --from-literal=password=https://<ACCESS_TOKEN>@<TENANT_NAME>-admin.<WILDCARD_DOMAIN>