Chapter 7. Redis high availability (HA) support for 3scale


Important

Red Hat does not officially support setting up Redis for zero downtime, configuring back-end components for 3scale, or Redis database replication and sharding. The content is for reference only. Additionally, Redis cluster mode is not supported in 3scale.

High availability (HA) is provided for most components by the OpenShift Container Platform (OCP). For more information, see OpenShift Container Platform 3.11, Chapter 30. High Availability.

The database components for HA in Red Hat 3scale API Management include:

  • backend-redis: used for statistics storage and temporary job storage.
  • system-redis: provides temporary storage for background jobs for 3scale and is also used as a message bus for Ruby processes of system-app pods.

Both backend-redis and system-redis work with the supported Redis high availability variants: Redis Sentinel and Redis Enterprise.

If the Redis pod stops, or if OpenShift Container Platform stops it, a new pod is created automatically and persistent storage restores the data, so the pod continues to work. In these scenarios, there is a short period of downtime while the new pod starts, because Redis does not support a multi-master setup. You can reduce this downtime by preinstalling the Redis images onto all nodes that run Redis, which speeds up pod restarts.
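
One way to preinstall the Redis images is to run a DaemonSet that pulls the image onto every node where Redis might be scheduled. The following is a minimal sketch only; the image reference is a placeholder and must be replaced with the image that your backend-redis and system-redis pods actually use, and you might also want to restrict it with a node selector.

Example

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: redis-image-prepull
    spec:
      selector:
        matchLabels:
          app: redis-image-prepull
      template:
        metadata:
          labels:
            app: redis-image-prepull
        spec:
          containers:
          - name: prepull
            image: <your-3scale-redis-image>   # placeholder: the image used by backend-redis and system-redis
            command: ["sleep", "infinity"]     # keeps the pod running so the image stays cached on the node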

Set up Redis for zero downtime and configure back-end components for 3scale as described in the following sections.

Prerequisites

  • A 3scale account with an administrator role.

7.1. Setting up Redis for zero downtime

As a 3scale administrator, configure Redis outside of OCP if you require zero downtime. There are several ways to set it up by using the configuration options of the 3scale pods.

Note

Red Hat does not provide support for the third-party services mentioned in this document. The mention of any such services does not imply endorsement by Red Hat of the products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) any external content.

7.2. Configuring back-end components for 3scale

As a 3scale administrator, configure Redis HA (failover) for the back-end component environment variables in the following deployment configurations: backend-cron, backend-listener, and backend-worker. These configurations are necessary for Redis HA in 3scale.

Note

If you want to use Redis with sentinels, you must create the system-redis secret with all fields in order to configure the Redis instances you want to point to before deploying 3scale. The fields are not provided as parameters in the back end of 3scale.

7.2.1. Creating backend-redis and system-redis secrets

Create the backend-redis and system-redis secrets by following the steps in the sections that follow, according to whether you are deploying a fresh installation or migrating an existing deployment.

7.2.2. Deploying a fresh installation of 3scale for HA

To prevent key collisions when deploying with single-database Redis instances, set different namespaces for the sidekiq and message_bus Redis keys. This applies to both Redis Enterprise and Redis Sentinel.

For other deployments where sidekiq and message_bus read and write to different Redis databases, namespaces are not necessary.

The following parameters are used to set Redis key namespaces:

  • NAMESPACE: for entries related to job queues stored by system-app and system-sidekiq in the Redis database.
  • MESSAGE_BUS_NAMESPACE: for entries related to interprocess message_bus communication stored by system-app in the Redis database.

Procedure

  1. Create the backend-redis and system-redis secrets with the fields below. A sample command for creating both secrets follows this procedure:

    backend-redis

    REDIS_QUEUES_SENTINEL_HOSTS
    REDIS_QUEUES_SENTINEL_ROLE
    REDIS_QUEUES_URL
    REDIS_STORAGE_SENTINEL_HOSTS
    REDIS_STORAGE_SENTINEL_ROLE
    REDIS_STORAGE_URL

    system-redis

    MESSAGE_BUS_NAMESPACE
    MESSAGE_BUS_SENTINEL_HOSTS
    MESSAGE_BUS_SENTINEL_ROLE
    MESSAGE_BUS_URL
    NAMESPACE
    SENTINEL_HOSTS
    SENTINEL_ROLE
    URL

    • When configuring for Redis with sentinels, the corresponding URL fields in backend-redis and system-redis refer to the Redis group in the format redis://[:redis-password@]redis-group[/db], where [x] denotes optional element x and redis-password, redis-group, and db are variables to be replaced accordingly:

      Example

      redis://:redispwd@mymaster/5

    • The SENTINEL_HOSTS fields are comma-separated lists of sentinel connection strings in the following format:

      [redis://][:sentinel-password@]sentinel-hostname-or-ip:port

      • For each element of the list, [x] denotes optional element x and sentinel-password, sentinel-hostname-or-ip, and port are variables to be replaced accordingly:

        Example

        :sentinelpwd@123.45.67.009:2711,:sentinelpwd@other-sentinel:2722

    • The SENTINEL_ROLE fields are either master or slave.
  2. Deploy 3scale as indicated in Deploying 3scale on OpenShift using a template, using the latest version of the templates.

    1. Ignore the errors that occur because the backend-redis and system-redis secrets are already present.
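
The following is a minimal sketch of creating the two secrets with oc create secret generic for a sentinel-based setup. All host names, passwords, Redis group names, database numbers, and namespace values are placeholders; replace them with the values for your own Redis deployment.

Example

    oc create secret generic backend-redis \
      --from-literal=REDIS_STORAGE_URL="redis://:backendpwd@storage-group/0" \
      --from-literal=REDIS_STORAGE_SENTINEL_HOSTS=":sentinelpwd@sentinel-1:26379,:sentinelpwd@sentinel-2:26379" \
      --from-literal=REDIS_STORAGE_SENTINEL_ROLE="master" \
      --from-literal=REDIS_QUEUES_URL="redis://:backendpwd@queues-group/1" \
      --from-literal=REDIS_QUEUES_SENTINEL_HOSTS=":sentinelpwd@sentinel-1:26379,:sentinelpwd@sentinel-2:26379" \
      --from-literal=REDIS_QUEUES_SENTINEL_ROLE="master"

    oc create secret generic system-redis \
      --from-literal=URL="redis://:systempwd@system-group/5" \
      --from-literal=SENTINEL_HOSTS=":sentinelpwd@sentinel-1:26379,:sentinelpwd@sentinel-2:26379" \
      --from-literal=SENTINEL_ROLE="master" \
      --from-literal=MESSAGE_BUS_URL="redis://:systempwd@system-group/6" \
      --from-literal=MESSAGE_BUS_SENTINEL_HOSTS=":sentinelpwd@sentinel-1:26379,:sentinelpwd@sentinel-2:26379" \
      --from-literal=MESSAGE_BUS_SENTINEL_ROLE="master" \
      --from-literal=NAMESPACE="app" \
      --from-literal=MESSAGE_BUS_NAMESPACE="bus"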

7.2.3. Migrating a non-HA deployment of 3scale to HA

  1. Edit the backend-redis and system-redis secrets with all fields as shown in Deploying a fresh installation of 3scale for HA.
  2. Make sure the following backend-redis environment variables are defined for the back-end pods (a verification sketch follows this procedure).

    - name: BACKEND_REDIS_SENTINEL_HOSTS
      valueFrom:
        secretKeyRef:
          key: REDIS_STORAGE_SENTINEL_HOSTS
          name: backend-redis
    - name: BACKEND_REDIS_SENTINEL_ROLE
      valueFrom:
        secretKeyRef:
          key: REDIS_STORAGE_SENTINEL_ROLE
          name: backend-redis
  3. Make sure the following system-redis environment variables are defined for the system-(app|sidekiq|sphinx) pods.

    - name: REDIS_SENTINEL_HOSTS
      valueFrom:
        secretKeyRef:
          key: SENTINEL_HOSTS
          name: system-redis
    - name: REDIS_SENTINEL_ROLE
      valueFrom:
        secretKeyRef:
          key: SENTINEL_ROLE
          name: system-redis
    - name: MESSAGE_BUS_REDIS_SENTINEL_HOSTS
      valueFrom:
        secretKeyRef:
          key: MESSAGE_BUS_SENTINEL_HOSTS
          name: system-redis
    - name: MESSAGE_BUS_REDIS_SENTINEL_ROLE
      valueFrom:
        secretKeyRef:
          key: MESSAGE_BUS_SENTINEL_ROLE
          name: system-redis
  4. Proceed with instructions to continue Upgrading 3scale using templates.
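
To verify that the variables from steps 2 and 3 are in place, you can list the environment of the affected deployment configurations. The following commands are a sketch that assumes the default 3scale deployment configuration names.

Example

    oc set env dc/backend-listener --list | grep SENTINEL
    oc set env dc/backend-worker --list | grep SENTINEL
    oc set env dc/backend-cron --list | grep SENTINEL
    oc set env dc/system-app --list | grep SENTINEL
    oc set env dc/system-sidekiq --list | grep SENTINEL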

7.2.3.1. Using Redis Enterprise

  1. Use Redis Enterprise deployed in OpenShift, with three different redis-enterprise instances (a sketch of the resulting secret values follows this list):

    1. Edit system-redis secret:

      1. Set distinct values to MESSAGE_BUS_NAMESPACE and NAMESPACE.
      2. Set URL and MESSAGE_BUS_URL to the same database.
    2. In the backend-redis secret, set REDIS_QUEUES_URL to the second database.
    3. In the backend-redis secret, set REDIS_STORAGE_URL to the third database.
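
The following sketch shows how the secret values could map to the three Redis Enterprise databases. The host names, ports, and passwords are placeholders; replace them with the endpoints of your Redis Enterprise databases.

Example

    # system-redis: first database, shared by URL and MESSAGE_BUS_URL,
    # kept apart by distinct NAMESPACE and MESSAGE_BUS_NAMESPACE values
    URL: redis://:syspwd@redis-enterprise-db1:12000
    MESSAGE_BUS_URL: redis://:syspwd@redis-enterprise-db1:12000
    NAMESPACE: app
    MESSAGE_BUS_NAMESPACE: bus

    # backend-redis: second database for queues, third database for storage
    REDIS_QUEUES_URL: redis://:bepwd@redis-enterprise-db2:12001
    REDIS_STORAGE_URL: redis://:bepwd@redis-enterprise-db3:12002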

7.2.3.2. Using Redis Sentinel

  1. Use Redis Sentinel, with three or four different Redis databases (a verification sketch follows this procedure):

    1. Edit system-redis secret:

      1. Set distinct values to MESSAGE_BUS_NAMESPACE and NAMESPACE.
      2. Set URL and MESSAGE_BUS_URL to the proper Redis group, for example: redis://:redispwd@mymaster/5
      3. Set SENTINEL_HOSTS and MESSAGE_BUS_SENTINEL_HOSTS to a comma-separated list of sentinel hosts and ports, for example: :sentinelpwd@123.45.67.009:2711,:sentinelpwd@other-sentinel:2722
      4. Set SENTINEL_ROLE and MESSAGE_BUS_SENTINEL_ROLE to master.
  2. In the backend-redis secret, set the following values for the queues database:

    • REDIS_QUEUES_URL
    • REDIS_QUEUES_SENTINEL_ROLE
    • REDIS_QUEUES_SENTINEL_HOSTS
  3. In the backend-redis secret, set the following values for the third (storage) database:

    • REDIS_STORAGE_URL
    • REDIS_STORAGE_SENTINEL_ROLE
    • REDIS_STORAGE_SENTINEL_HOSTS
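
To confirm that the Redis group name and sentinel connection details used in the secrets are correct, you can query one of the sentinels directly. The following command is a sketch; the sentinel host, port, password, and group name (mymaster) are placeholders. It prints the address of the current master for the group, or nothing if the group name is wrong.

Example

    redis-cli -h sentinel-1 -p 26379 -a sentinelpwd SENTINEL get-master-addr-by-name mymaster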

Notes

  • The system-app and system-sidekiq components connect directly to back-end Redis for retrieving statistics.

    • As of 3scale 2.7, these system components can also connect to back-end Redis (storage) when using sentinels.
  • The system-app and system-sidekiq components use only backend-redis storage, not backend-redis queues.

    • Changes made to the system components support backend-redis storage with sentinels.

7.3. Redis database sharding and replication

Sharding, sometimes referred to as partitioning, separates large databases into smaller databases called shards. With replication, your database is set up with copies of the same dataset hosted on separate machines.

Sharding

Sharding facilitates adding more leader instances, and is useful when you have so much data that it does not fit in a single database, or when the CPU load is close to 100%.

With Redis HA for 3scale, sharding is important for the following two reasons:

  • Splitting and scaling large volumes of data, and adjusting the number of shards for a particular index, to help avoid bottlenecks.
  • Distributing operations across different nodes, thereby increasing performance, for example, when multiple machines are working on the same query.

The three main solutions for Redis database sharding with cluster mode disabled are:

  • Amazon ElastiCache
  • Standard Redis via Redis sentinels
  • Redis Enterprise

Replication

Redis database replication provides redundancy by keeping copies of your dataset on different machines. Replication keeps Redis working when the leader goes down: data can then be served from a replica, ensuring high availability.

With Redis HA for 3scale, database replication ensures high availability replicas of a primary shard. The principles of operation involve:

  • When the primary shard fails, the replica shard will automatically be promoted to the new primary shard.
  • Upon recovery of the original primary shard, it automatically becomes the replica shard of the new primary shard.

The three main solutions for Redis database replication are:

  • Redis Enterprise
  • Amazon ElastiCache
  • Standard Redis via Redis sentinels (a minimal Sentinel configuration sketch follows this list)
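
For the Standard Redis option, failover between the leader and its replicas is coordinated by Redis Sentinel. The following sentinel.conf is a minimal sketch, not a 3scale-specific configuration; the master address, group name (mymaster), quorum, and password are placeholders.

Example

    # Run on each sentinel node (at least three sentinels are recommended)
    port 26379
    sentinel monitor mymaster 203.0.113.10 6379 2
    sentinel auth-pass mymaster redispwd
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000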

Sharding with twemproxy

For Amazon ElastiCache and Standard Redis, sharding involves splitting data up based on keys. You need a proxy component that, given a particular key, knows which shard holds it; twemproxy is one example. Also known as nutcracker, twemproxy is a lightweight proxy for the Redis protocol that finds shards based on specific keys or on the server maps assigned to them. Adding sharding capabilities to your Amazon ElastiCache or Standard Redis instance with twemproxy has the following advantages (a configuration sketch follows this list):

  • The capability to shard data automatically across multiple servers.
  • Support for multiple hashing modes, and consistent hashing and distribution.
  • The capability to run in multiple instances, which allows clients to connect to the first available proxy server.
  • A reduction in the number of connections to the back-end caching servers.
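
The following is a minimal sketch of a twemproxy (nutcracker) configuration that shards keys across two Redis servers. The pool name, listen address, and server addresses are placeholders; adapt them to your own Redis instances.

Example

    # nutcracker.yml
    redis-shards:
      listen: 0.0.0.0:22121
      hash: fnv1a_64
      distribution: ketama
      redis: true
      auto_eject_hosts: true
      server_retry_timeout: 2000
      server_failure_limit: 1
      servers:
        - 203.0.113.11:6379:1
        - 203.0.113.12:6379:1
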
Note

Redis Enterprise uses its own proxy, so it does not need twemproxy.

