Chapter 2. Before You Begin
2.1. Functionality Differences for JDG for OpenShift Images
There are several major functionality differences in the JDG for OpenShift image:
- The JBoss Data Grid Management Console is not available to manage JDG for OpenShift images.
- The JBoss Data Grid Management CLI is only bound locally. This means that you can only access the Management CLI of a container from within the pod.
- Library mode is not supported.
- Only JDBC is supported for a backing cache-store. Support for remote cache stores is present only for data migration purposes.
2.2. Initial Setup
The tutorials in this guide assume an OpenShift instance similar to the one created in the OpenShift Primer.
2.3. Forming a Cluster using the JDG for OpenShift Images
Clustering is achieved through one of two discovery mechanisms: Kubernetes or DNS. This is accomplished by configuring the JGroups protocol stack in clustered-openshift.xml with either the <openshift.KUBE_PING/> or <openshift.DNS_PING/> element. KUBE_PING is the default, pre-configured, and supported protocol.
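For reference, the discovery protocol appears in the JGroups stack of clustered-openshift.xml roughly as follows. This is a minimal sketch only; the stack name and the full protocol list are abbreviated here and may differ from the actual file shipped in the image:

<stack name="tcp">
    <!-- TCP transport bound to the jgroups-tcp socket binding -->
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <!-- Kubernetes-based node discovery -->
    <protocol type="openshift.KUBE_PING"/>
    <!-- failure detection and state transfer protocols omitted for brevity -->
</stack>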
For KUBE_PING to work, the following steps must be taken:
- The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set (as seen in the Configuration Environment Variables). If this variable is not set, the server acts as a single-node cluster, that is, a cluster consisting of only one node.
- The OPENSHIFT_KUBE_PING_LABELS environment variable must be set (as seen in the Configuration Environment Variables). If this variable is not set, pods outside the application (but in the same namespace) will attempt to join (see the example below).
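For example, both variables can be set on the deployment configuration with oc set env. In this sketch, datagrid-app (the deployment configuration name) and the application=datagrid-app label are placeholder values:

$ oc set env dc/datagrid-app \
    OPENSHIFT_KUBE_PING_NAMESPACE=myproject \
    OPENSHIFT_KUBE_PING_LABELS=application=datagrid-app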
Authorization must be granted to the service account the pod is running under so that it can access the Kubernetes REST API. This is done on the command line:
Example 2.1. Policy commands
Using the default service account in the myproject namespace:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
Using the eap-service-account in the myproject namespace:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
Once the above is configured, images will automatically join the cluster as they are deployed. However, removing images from an active cluster, and therefore shrinking the cluster, is not supported.
2.4. Rolling Upgrades
In Red Hat JBoss Data Grid, rolling upgrades permit a cluster to be upgraded from one version to a new version without experiencing any downtime.
When performing a rolling upgrade, it is recommended not to update any cache entries in the source cluster, as this may lead to data inconsistency.
2.4.1. Rolling Upgrades Using Hot Rod
Rolling upgrades on Red Hat JBoss Data Grid running in remote client-server mode, using the Hot Rod connector, work consistently (that is, they allow seamless data migration with no downtime) from version 6.6.2 through 7.1.
Rolling upgrades on Red Hat JBoss Data Grid from version 6.1 through 6.6.1, using the Hot Rod connector, do not yet work correctly.
This is a known issue in Red Hat JBoss Data Grid 7.1, and no workaround exists at this time.
Since the JBoss Data Grid 6.5 for OpenShift image is based on version 6.5 of Red Hat JBoss Data Grid, rolling upgrades from JBoss Data Grid 6.5 for OpenShift to JBoss Data Grid 7.1 for OpenShift, using the Hot Rod connector, are not possible without data loss.
2.4.2. Rolling Upgrades Using REST
See Example Workflow: Performing JDG rolling upgrade from JDG 6.5 for OpenShift image to JDG 7.1 for OpenShift image using the REST connector for an end-to-end example of performing a JDG rolling upgrade using the REST connector.
2.5. Endpoints
Clients can access JBoss Data Grid via REST, Hot Rod, and memcached endpoints defined as usual in the cache’s configuration.
If a client attempting to access a cache via Hot Rod is in the same project, it will be able to receive the full cluster view and make use of consistent hashing. If the client is in another project, it will be unable to receive the cluster view. Additionally, a client located outside the project that contains the Hot Rod cache will experience additional latency, because extra network hops are required to reach the cache.
Only caches with an exposed REST endpoint will be accessible outside of OpenShift.
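To make a cache's REST endpoint reachable from outside the cluster, expose the service that fronts it as a route. A minimal sketch, assuming the REST endpoint is backed by a service named datagrid-app (a placeholder name):

$ oc expose service datagrid-app --name=datagrid-rest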
2.6. Configuring Caches
A list of caches may be defined by the CACHE_NAMES environment variable. By default the following caches are created:
- default
- memcached
Each cache’s behavior may be controlled through cache-specific environment variables, with each variable expecting the cache’s name as its prefix. For instance, for the default cache, any configuration applied must begin with the DEFAULT_ prefix; to define the number of owners for each entry in this cache, the DEFAULT_CACHE_OWNERS environment variable would be used.
A full list of these is found at Cache Environment Variables.
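For example, to add a cache named mycache alongside the defaults and set the number of owners for its entries, the variables could be supplied as follows (mycache and the deployment configuration name datagrid-app are placeholders; MYCACHE_CACHE_OWNERS follows the prefix pattern described above):

$ oc set env dc/datagrid-app \
    CACHE_NAMES=default,memcached,mycache \
    MYCACHE_CACHE_OWNERS=2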
2.6.1. Preserving Existing Content of the JBoss Data Grid Data Directory Across JDG for OpenShift Pod Restarts
The JBoss Data Grid server uses a specified data directory for persistent data file storage (it contains, for example, the ___protobuf_metadata.dat and ___script_cache.dat files, or the global state persistence configuration). When running on OpenShift, the data directory of the JBoss Data Grid server does not point to a persistent storage medium by default. This means the existing content of the data directory is deleted each time the JDG for OpenShift pod (the underlying JBoss Data Grid server) is restarted. To enable storing the data directory content to persistent storage, deploy the JDG for OpenShift image using the datagrid71-partition application template with the DATAGRID_SPLIT parameter set to true (the default setting).
Successful deployment of a JDG for OpenShift image using the datagrid71-partition template requires the ${APPLICATION_NAME}-datagrid-claim persistent volume claim to be available, and the ${APPLICATION_NAME}-datagrid-pvol persistent volume to be mounted at the /opt/datagrid/standalone/partitioned_data path. See Persistent Storage Examples for guidance on how to deploy persistent volumes and persistent volume claims using the different available plug-ins.
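A minimal deployment sketch using this template, assuming the datagrid71-partition template is available in the current project and using datagrid-app as a placeholder application name:

$ oc new-app --template=datagrid71-partition \
    -p APPLICATION_NAME=datagrid-app \
    -p DATAGRID_SPLIT=true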
2.7. Datasources
Datasources are automatically created based on the values of certain environment variables.
The most important variable is DB_SERVICE_PREFIX_MAPPING, which defines JNDI mappings for datasources. It must be set to a comma-separated list of <name>-<database_type>=<PREFIX> triplets, where <name> is used as the pool-name in the datasource, <database_type> determines which database driver to use, and <PREFIX> is the prefix used in the names of the environment variables that configure the datasource.
2.7.1. JNDI Mappings for Datasources
For each <name>-<database_type>=<PREFIX> triplet in the DB_SERVICE_PREFIX_MAPPING environment variable, a separate datasource will be created by the launch script, which is executed when running the image.
The <database_type> will determine the driver for the datasource. Currently, only postgresql and mysql are supported.
The <name> parameter can be chosen freely, but it must not contain any special characters.
The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING should be lowercase.
2.7.2. Database Drivers
The JDG for OpenShift image contains Java drivers for the MySQL, PostgreSQL, and MongoDB databases. Datasources are generated only for the MySQL and PostgreSQL databases.
No JNDI mappings are created for MongoDB databases, because MongoDB is not a SQL database.
2.7.3. Examples
The following examples demonstrate how datasources may be defined using the DB_SERVICE_PREFIX_MAPPING environment variable.
2.7.3.1. Single Mapping
Consider the value test-postgresql=TEST.
This will create a datasource named java:jboss/datasources/test_postgresql. Additionally, all of the required settings, such as username and password, will be expected to be provided as environment variables with the TEST_ prefix, such as TEST_USERNAME and TEST_PASSWORD.
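Putting this together, the mapping and its prefixed settings might be supplied as environment variables as follows (the deployment configuration name datagrid-app and the credential values are placeholders):

$ oc set env dc/datagrid-app \
    DB_SERVICE_PREFIX_MAPPING=test-postgresql=TEST \
    TEST_USERNAME=testuser \
    TEST_PASSWORD=testpassword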
2.7.3.2. Multiple Mappings
Multiple database mappings may also be specified; for instance, consider the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL.
Multiple datasource mappings should be separated with commas, as seen in the above example.
This will create two datasources:
- java:jboss/datasources/test_mysql
- java:jboss/datasources/cloud_postgresql
The MySQL datasource configuration, such as the username and password, will be expected with the TEST_MYSQL_ prefix, for example TEST_MYSQL_USERNAME. Similarly, the PostgreSQL datasource will expect environment variables defined with the CLOUD_ prefix, such as CLOUD_USERNAME.
2.7.4. Environment Variables
A full list of datasource environment variables may be found at Datasource Environment Variables.
2.8. Security Domains
To configure a new security domain, the SECDOMAIN_NAME environment variable must be defined; this results in the creation of a security domain named after the passed-in value. The domain may be configured through the use of the Security Environment Variables.
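For example, a minimal sketch in which datagrid-app (the deployment configuration name) and mydomain (the security domain name) are placeholder values:

$ oc set env dc/datagrid-app SECDOMAIN_NAME=mydomain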
2.9. Managing JDG for OpenShift Images
A major difference in managing a JDG for OpenShift image is that there is no Management Console exposed for the JBoss Data Grid installation inside the image. Because images are intended to be immutable, with modifications written to a non-persistent file system, the Management Console is not exposed.
However, the JBoss Data Grid Management CLI (JDG_HOME/bin/cli.sh) is still accessible from within the container for troubleshooting purposes.
First open a remote shell session to the running pod:
$ oc rsh <pod_name>
Then run the following from the remote shell session to launch the JBoss Data Grid Management CLI:
$ /opt/datagrid/bin/cli.sh
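For example, to connect and run a read-only query against the server's management model, the standard JBoss CLI options can be used (server-state is a standard attribute of the root management resource):

$ /opt/datagrid/bin/cli.sh --connect --commands=":read-attribute(name=server-state)"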
Any configuration changes made using the JBoss Data Grid Management CLI on a running container will be lost when the container restarts.
Making configuration changes to the JBoss Data Grid instance inside the JDG for OpenShift image differs from the process you may be used to with a regular release of JBoss Data Grid.