Red Hat Data Grid for OpenShift
Run Data Grid on OpenShift
Chapter 1. Red Hat Data Grid
Data Grid provides an elastically scalable in-memory data store for Red Hat OpenShift.
- Schemaless data structure: flexibility to store different objects as key-value pairs.
- Grid-based data storage: designed to distribute and replicate data across clusters.
- Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: store, retrieve, and query data in the grid from different endpoints.
1.1. Data Grid Documentation
Red Hat Data Grid documentation is available on the Red Hat Customer Portal.
1.2. Data Grid Repositories
- Data Grid 7 OpenShift Image holds the container image and resources for Data Grid for OpenShift.
- OpenShift Quickstart Tutorials provide code examples with how-to and best practice demonstrations for running Data Grid on OpenShift.
1.3. Data Grid Image Details
Red Hat Data Grid for OpenShift images are hosted on the Red Hat Container Registry, where you can find health indexes for the images along with information about each tagged version.
Chapter 2. Getting Started
2.1. System Requirements
To use Red Hat Data Grid for OpenShift, you need:

- A running Red Hat OpenShift cluster. For supported Red Hat OpenShift Container Platform versions, refer to Red Hat Data Grid Supported Configurations.

  Tip: Use the Red Hat Container Development Kit to create a local OpenShift cluster with minishift.

- An oc client in your $PATH.
2.2. Creating a Data Grid for OpenShift Project
Set up an OpenShift project where you can run Data Grid for OpenShift pods.
1. Log in to your OpenShift cluster.

   If you are new to OpenShift, try the following tutorial: Logging in to an OpenShift Cluster.

2. Create an OpenShift project with the oc new-project command, for example:

   $ oc new-project rhdg-helloworld --display-name="Red Hat Data Grid"
2.3. Setting Up Registry Authentication
You must authenticate with the Red Hat Container Catalog, registry.redhat.io, to pull Data Grid images.
Use one of the following:

- Red Hat customer account username and password. Pull resources from registry.redhat.io with the docker login command.
- Registry service account tokens. Use authentication tokens to configure multiple hosts. Do the following:
  - Log in to registry.redhat.io.
  - Create or select a Registry Service Account.
  - Generate authentication tokens.
2.3.1. Configuring Hosts with Authentication Tokens
Add authentication tokens from your Registry Service Account to hosts as follows:
1. Select the Docker Login tab and copy the command.
2. Run the docker login command on each host that pulls from registry.redhat.io.
3. Verify your Docker configuration:

   $ cat ~/.docker/config.json
   ...
   "registry.redhat.io": {
     "auth": "MTEwMDkx..."
   }
2.3.2. Creating Pull Secrets
To pull secured container images that are not available on the internal registry for OpenShift, do the following:

1. Log in to your OpenShift cluster.
2. Select your working project, for example:

   $ oc project rhdg-helloworld

3. Create a generic pull secret with your Docker configuration:

   $ oc create secret generic ${SECRET_NAME} \
       --from-file=.dockerconfigjson=path/to/.docker/config.json \
       --type=kubernetes.io/dockerconfigjson

4. Link the pull secret to your service account:

   $ oc secrets link default ${SECRET_NAME} --for=pull

5. Link the secret to the builder service account:

   $ oc secrets link builder ${SECRET_NAME}
For more information, including troubleshooting procedures, see Red Hat Container Registry Authentication.
Chapter 3. Configuring Authentication and Encryption
You need to configure authentication and encryption only if you are using a custom template or want to use your own keystores with the Data Grid deployment configuration templates.
3.1. Adding Keystores to Secrets
To configure authentication and encryption:

1. Create a keystore (.jks) with a trusted certificate.

   Both HTTPS and Hot Rod services can use the same keystore, or you can create separate keystores.

2. Add the keystore as an OpenShift secret.

   - Create a secret. For example, to create a secret named rhdg-https-secret from a keystore named rhdg-https.jks:

     $ oc create secret generic rhdg-https-secret \
         --from-file=rhdg-https.jks

   - Link the secret to the default service account:

     $ oc secrets link default rhdg-https-secret
3.2. Configuring Deployments
Instantiate one of the secure templates with the following parameters:

- Set up HTTP and HTTPS hostnames:
  HOSTNAME_HTTP=my.example.hostname
  HOSTNAME_HTTPS=secure-my.example.hostname
- Specify the name of the keystore:
  HTTPS_KEYSTORE=keystore.jks
- Specify the path to the keystore:
  HTTPS_KEYSTORE_DIR=/etc/datagrid-secret-volume
- Specify the name of the secret:
  HTTPS_SECRET=rhdg-https-secret
- Specify credentials for the keystore:
  HTTPS_NAME=${USERNAME}
  HTTPS_PASSWORD=${PASSWORD}
- Set the HTTP security domain for the user:
  REST_SECURITY_DOMAIN=SecurityRealm
- Enforce client certificate authentication:
  ENCRYPTION_REQUIRE_SSL_CLIENT_AUTH=true
- Enable authentication and encryption for the Hot Rod protocol:
  HOTROD_AUTHENTICATION=true

  Note: The template automatically sets HOTROD_ENCRYPTION=true if you set a value for HOSTNAME_HTTPS.
3.3. Setting Unique Keystores for the Hot Rod Protocol
To use a unique keystore for the Hot Rod protocol:

- Specify the path to the keystore:
  SSL_KEYSTORE_PATH=hr_keystore.jks
- Specify the keystore password:
  SSL_KEYSTORE_PASSWORD=${PASSWORD}
- If necessary, do the following:
  - Set a relative path to the keystore:
    SSL_KEYSTORE_RELATIVE_TO=path/to/keystore/
  - Specify the private key password, if different from the keystore password:
    SSL_KEY_PASSWORD=${PASSWORD}
  - Set the correct alias in the keystore if it contains multiple entries:
    SSL_KEYSTORE_ALIAS=cert_alias
- Specify authorization credentials if you have not already:
  USERNAME=${USERNAME}
  PASSWORD=${PASSWORD}

  Note: The Hot Rod endpoint always uses the ApplicationRealm to authorize users. If you want to use separate keystores for the Hot Rod and REST endpoints, you must set credentials with the USERNAME and PASSWORD parameters. Templates then configure the REST endpoint to use the jdg-openshift security realm. In this case the REST_SECURITY_DOMAIN environment variable does not take effect.
Chapter 4. Setting Up Data Grid for OpenShift Services
4.1. Data Grid for OpenShift Services
Data Grid services are stateful applications that you can easily scale up or down without losing data.
cache-service
- An easy-to-use Data Grid for OpenShift cluster designed to accelerate application response time with high-performance caching.
- Data in memory is distributed evenly across nodes. You define the initial size of the Data Grid cluster when you create the service. Distribution is also synchronous: when propagating data to another node, the sending node waits for the operation to complete before the thread continues.
- Single copies of cache entries by default. If a pod restarts, data in that pod is lost. For more resiliency with your data, you can easily enable replication when you create the service.
- Cache entries are stored off-heap for JVM efficiency. When the cache size reaches the amount of memory available to the pod, entries are evicted. You can optionally change the eviction policy to throw a ContainerFullException.

datagrid-service
- A full distribution of Data Grid for OpenShift that lets you create multiple different cache configurations. Gives you advanced capabilities such as indexing and querying as well as Prometheus monitoring.
4.1.1. Container Storage
cache-service and datagrid-service containers have storage volumes mounted at /opt/datagrid/standalone/data.
The volume size is 1GB by default. You can adjust the size with datagrid-service but not cache-service.
- Ephemeral or Permanent
- When you remotely create caches, you control whether they are ephemeral or permanent. Permanent caches survive container restarts because the cache definitions are saved in the storage volume. Default caches are always permanent.
- Persistent
- datagrid-service lets you persist cache entries and indexes to the storage volume. If you require more guarantees for your data, you can optionally persist to external file-based storage or a database via a cache store.
4.1.2. Partition Handling
By default, Data Grid for OpenShift services use a partition handling configuration to ensure data consistency.
- DENY_READ_WRITES conflict resolution strategy that denies read and write operations for cache entries unless all the owners of a segment are in the same partition.
- REMOVE_ALL merge policy that removes entries from the cache when conflicts are detected.
Network partitioning applies only when data is replicated across a cluster.
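Expressed against the Infinispan 9.x declarative schema, these defaults correspond to a cache configuration like the following sketch (the cache name is illustrative):

```xml
<distributed-cache name="mycache">
  <!-- Deny reads and writes on incomplete partitions; discard conflicting entries on merge -->
  <partition-handling when-split="DENY_READ_WRITES"
                      merge-policy="REMOVE_ALL"/>
</distributed-cache>
```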
4.1.3. Confirming Service Availability
The templates for cache-service and datagrid-service are available on Red Hat OpenShift Online and Red Hat OpenShift Container Platform in the openshift namespace.
Run the following command to verify that the service templates are available:
$ oc get templates -n openshift | grep 'cache-service\|datagrid-service'
4.1.3.1. Importing Templates
If necessary, import cache-service and datagrid-service as follows:
- Log in to your OpenShift cluster.
- Import the service templates.

  Tip: Overwrite existing templates with oc replace --force.
4.2. Creating Cache Services
Use cache-service to quickly set up clusters that give you optimal performance and ease of use with minimal configuration.
- Create the service with the new-app command.
- Set template parameters and environment variables as appropriate.

For example:

- Create cache-service with minimal configuration:

  $ oc new-app cache-service \
    -p APPLICATION_USER=${USERNAME} \
    -p APPLICATION_PASSWORD=${PASSWORD}

- Create a cache-service cluster with three nodes and data replication:

  $ oc new-app cache-service \
    -p APPLICATION_USER=${USERNAME} \
    -p APPLICATION_PASSWORD=${PASSWORD} \
    -p NUMBER_OF_INSTANCES=3 \
    -p REPLICATION_FACTOR=2
Template Parameters
- APPLICATION_NAME specifies a name for the application. The default is cache-service.
- NUMBER_OF_INSTANCES sets the number of nodes in the Data Grid for OpenShift cluster. The default is 1.
- TOTAL_CONTAINER_MEM configures the total amount of memory, in MiB, available to the container. The default is 512.
- APPLICATION_USER creates a user to securely access the cache. There is no default value. You must always create a user.
- APPLICATION_PASSWORD specifies a password for the user. If you do not set a password, the service template randomly generates one and stores it as a secret.
- REPLICATION_FACTOR specifies the number of copies for each cache entry. The default is 1.
- EVICTION_POLICY defines how cache-service behaves when the size of the cache reaches the amount of memory available to the pod. There are two values:
  - evict removes entries from the cache. This is the default.
  - reject throws ContainerFullException instead of adding new entries.
Environment Variables

- AB_PROMETHEUS_ENABLE allows you to collect JMX metrics to monitor and analyze Data Grid and has the following values:
  - false disables monitoring with the default Prometheus agent.
  - true enables monitoring with the default Prometheus agent. The Prometheus Operator must be installed and running. You must also Set Up Monitoring after you create the service.
- AB_PROMETHEUS_JMX_EXPORTER_PORT defines the port on which Data Grid publishes JMX metrics. The default is 9779.
Verifying the Application
Command output displays parameter values and resources when you create cache-service.
Try the Hello World Quickstart Tutorial.
4.3. Creating Data Grid Services
Use datagrid-service to set up a cluster that you can use with different cache configurations and more complex Data Grid capabilities.
- Create the service with the new-app command.
- Set template parameters and environment variables as appropriate.

For example:

- Create datagrid-service with minimal configuration:

  $ oc new-app datagrid-service \
    -p APPLICATION_USER=${USERNAME} \
    -p APPLICATION_PASSWORD=${PASSWORD}

- Create a datagrid-service cluster with three nodes and monitoring enabled:

  $ oc new-app datagrid-service \
    -p APPLICATION_USER=${USERNAME} \
    -p APPLICATION_PASSWORD=${PASSWORD} \
    -p NUMBER_OF_INSTANCES=3 \
    -e AB_PROMETHEUS_ENABLE=true
Template Parameters
- APPLICATION_NAME specifies a name for the application. The default is datagrid-service.
- NUMBER_OF_INSTANCES sets the number of nodes in the Data Grid for OpenShift cluster. The default is 1.
- TOTAL_CONTAINER_MEM configures the total amount of memory, in MiB, available to the container. The default is 512.
- APPLICATION_USER creates a user to securely access the cache. There is no default value. You must always create a user.
- APPLICATION_PASSWORD specifies a password for the user. If you do not set a password, the service template randomly generates one and stores it as a secret.
- REPLICATION_FACTOR specifies the number of copies for each cache entry. The default is 2.
- TOTAL_CONTAINER_STORAGE configures the size, in GiB, of the file-based storage volume. The default is 1.
Environment Variables

- AB_PROMETHEUS_ENABLE allows you to collect JMX metrics to monitor and analyze Data Grid and has the following values:
  - false disables monitoring with the default Prometheus agent.
  - true enables monitoring with the default Prometheus agent. The Prometheus Operator must be installed and running. You must also Set Up Monitoring after you create the service.
- AB_PROMETHEUS_JMX_EXPORTER_PORT defines the port on which Data Grid publishes JMX metrics. The default is 9779.
Verifying the Application
Command output displays parameter values and resources when you create datagrid-service.
Try the Hello World Quickstart Tutorial.
Chapter 5. Invoking the Data Grid REST API
Data Grid services expose a REST endpoint at port 8443.
By default, Data Grid requires user authentication for data access and encryption for client connections.
- Authentication
- Data Grid authorizes data access requests with credentials that you specify with the APPLICATION_USER and APPLICATION_PASSWORD parameters.
- Encryption
- When Data Grid pods start, they generate TLS certificate/key pairs and save them in the service-certs secret. The TLS certificates are signed by the OpenShift certificate authority (CA).
5.1. Creating External Routes to the REST API
REST clients running outside OpenShift access Data Grid pods through routes with reencrypt termination.
Procedure

1. Create a route with reencrypt termination:

   $ oc create route reencrypt ${ROUTE_NAME} \
     --port=https \
     --service ${APPLICATION_NAME}

   For example:

   $ oc create route reencrypt cache-service-https-route \
     --port=https \
     --service cache-service

2. Run oc get routes to find the HTTPS route hostname, for example:

   $ oc get routes
   NAME                       HOST/PORT
   cache-service-https-route  cache-service-https-route-rhdg-helloworld.192.0.2.0.nip.io
5.2. Making REST Calls
Prerequisite
Configure REST clients for authentication and encryption.
- On OpenShift
- Create truststores with the CA bundle mounted in the pod at /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt.
- Outside OpenShift
- Create truststores with the CA for your OpenShift environment.
Procedure
Invoke the Data Grid REST API as appropriate.
For example, invoke a PUT call to add a key:value pair:

$ curl -X PUT \
  -u ${USERNAME}:${PASSWORD} \
  -H 'Content-type: text/plain' \
  -d 'world' \
  https://${HOSTNAME_FOR_HTTPS_ROUTE}/rest/default/hello
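You can then read the value back with a GET call against the same key; a hedged companion sketch to the PUT example:

```shell
# Retrieve the value stored under the "hello" key in the default cache.
curl -X GET \
  -u ${USERNAME}:${PASSWORD} \
  https://${HOSTNAME_FOR_HTTPS_ROUTE}/rest/default/hello
```

The response body contains the stored value.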
5.2.1. Using the OpenShift CA to Make REST Calls
In cases where the CA certificate is not valid, such as local OpenShift clusters or Red Hat OpenShift Container Platform development installations, you can use service-ca.crt to make REST calls.
Procedure
1. Get service-ca.crt from Data Grid pods:

   $ oc rsync ${pod_name}:/var/run/secrets/kubernetes.io/serviceaccount/..data/service-ca.crt .

2. Pass service-ca.crt when you invoke REST calls.
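For example, curl accepts the certificate with its --cacert option; a sketch reusing the key and hostname from the earlier examples:

```shell
# Verify the server certificate against the OpenShift service CA.
curl --cacert ./service-ca.crt \
  -u ${USERNAME}:${PASSWORD} \
  https://${HOSTNAME_FOR_HTTPS_ROUTE}/rest/default/hello
```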
Chapter 6. Configuring Hot Rod Clients
Data Grid services expose a Hot Rod endpoint at port 11222.
By default, Data Grid requires user authentication for data access and encryption for client connections.
- Authentication
- Data Grid authorizes data access requests with credentials that you specify with the APPLICATION_USER and APPLICATION_PASSWORD parameters.
- Encryption
- When Data Grid pods start, they generate TLS certificate/key pairs and save them in the service-certs secret. The TLS certificates are signed by the OpenShift certificate authority (CA).
6.1. Configuring Truststores with Hot Rod
Set trustStorePath to the location of a valid certificate in PEM format in your Hot Rod client configuration. The Hot Rod Java client builds an in-memory Java keystore with all certificates found in the path.
On OpenShift

- Specify the OpenShift certificate authority (CA) bundle:

  /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

Outside OpenShift

1. Get tls.crt from the service-certs secret:

   $ oc get secret service-certs \
     -o jsonpath='{.data.tls\.crt}' \
     | base64 -d > tls.crt

2. Specify the path to tls.crt in your client configuration.
6.2. Client Intelligence
Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Data Grid pods.
On OpenShift
Clients can access the internal IP addresses for pods so you can use any client intelligence. The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.
Outside OpenShift
Use BASIC intelligence only.
6.3. Creating External Routes for Hot Rod
Hot Rod clients running outside OpenShift access Data Grid pods through routes with passthrough termination.
Prerequisites
- Configure Data Grid Server to encrypt client connections.
Procedure
1. Create a route with passthrough termination:

   $ oc create route passthrough ${ROUTE_NAME} \
     --port=hotrod \
     --service ${APPLICATION_NAME}

   For example:

   $ oc create route passthrough cache-service-hotrod-route \
     --port=hotrod \
     --service cache-service

2. Get the Hot Rod route hostname from .spec.host:

   $ oc get route cache-service-hotrod-route -o jsonpath="{.spec.host}"
   cache-service-hotrod-route-rhdg-helloworld.192.0.2.0.nip.io
6.4. Hostnames for Data Grid Services
Use the hostname for Data Grid that corresponds to the location of your Hot Rod client.
In the Same OpenShift Namespace
Use APPLICATION_NAME.
For example:
.host("cache-service")
In Different OpenShift Namespaces
Use the internal service DNS name in this form: APPLICATION_NAME.SERVICE_NAMESPACE.svc
For example:
.host("cache-service.rhdg-helloworld.svc")
Outside OpenShift
Use the Hot Rod route hostname.
For example:
.host("cache-service-hotrod-route-rhdg-helloworld.192.0.2.0.nip.io")
6.5. Configuring Hot Rod Clients Programmatically
Use the ConfigurationBuilder class to programmatically configure Hot Rod clients to access Data Grid clusters.
- Call the create() method to create a configuration bean that you can pass to the RemoteCacheManager.
- Use the authentication() and ssl() methods to configure authentication and encryption.
6.5.1. Hot Rod Configuration Builder On OpenShift
Configuration bean for Hot Rod clients running on OpenShift:
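A minimal sketch of such a configuration bean, based on the Infinispan Hot Rod client API; the hostname follows the earlier examples, and the realm and SASL mechanism values are assumptions rather than values taken from this guide:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
// Internal service DNS name: APPLICATION_NAME.SERVICE_NAMESPACE.svc
builder.addServer()
       .host("cache-service.rhdg-helloworld.svc")
       .port(11222);
// Credentials set with APPLICATION_USER and APPLICATION_PASSWORD
builder.security().authentication()
       .enable()
       .username("username")
       .password("changeme")
       .realm("ApplicationRealm")      // assumption
       .saslMechanism("DIGEST-MD5");   // assumption
// CA bundle mounted in every pod
builder.security().ssl()
       .enable()
       .trustStorePath("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt");

Configuration configuration = builder.build();
RemoteCacheManager cacheManager = new RemoteCacheManager(configuration);
```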
6.5.2. Hot Rod Configuration Builder Outside OpenShift
Configuration bean for Hot Rod clients running outside OpenShift:
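A minimal sketch of such a configuration bean, again based on the Infinispan Hot Rod client API; the route hostname follows the earlier examples, passthrough routes are reached on port 443, and the realm and SASL mechanism values are assumptions:

```java
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
// Hot Rod route hostname; passthrough routes are exposed on port 443
builder.addServer()
       .host("cache-service-hotrod-route-rhdg-helloworld.192.0.2.0.nip.io")
       .port(443);
// Only BASIC intelligence is supported outside OpenShift
builder.clientIntelligence(ClientIntelligence.BASIC);
builder.security().authentication()
       .enable()
       .username("username")
       .password("changeme")
       .realm("ApplicationRealm")      // assumption
       .saslMechanism("DIGEST-MD5");   // assumption
// tls.crt extracted from the service-certs secret
builder.security().ssl()
       .enable()
       .trustStorePath("path/to/tls.crt");

Configuration configuration = builder.build();
```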
6.6. Setting Hot Rod Client Properties
Use Hot Rod client configuration properties to specify Data Grid hostnames and ports, authentication details, and TLS certificates.
Procedure
- Create a hotrod-client.properties file that contains your Hot Rod client configuration.
- Add hotrod-client.properties to the classpath.
6.6.1. Hot Rod Configuration Properties On OpenShift
Configuration properties for Hot Rod clients running on OpenShift:
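A minimal hotrod-client.properties sketch for a client inside OpenShift, using standard Infinispan client property names; the credentials, realm, and SASL mechanism are assumptions:

```properties
# Internal service DNS name and Hot Rod port
infinispan.client.hotrod.server_list=cache-service.rhdg-helloworld.svc:11222
# Credentials set with APPLICATION_USER and APPLICATION_PASSWORD
infinispan.client.hotrod.auth_username=username
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.auth_realm=ApplicationRealm
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
# CA bundle mounted in every pod
infinispan.client.hotrod.trust_store_path=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
```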
6.6.2. Hot Rod Configuration Properties Outside OpenShift
Configuration properties for Hot Rod clients running outside OpenShift:
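A corresponding sketch for a client outside OpenShift; the route hostname follows the earlier examples, and the credentials and SASL settings are assumptions:

```properties
# Hot Rod route hostname; passthrough routes are exposed on port 443
infinispan.client.hotrod.server_list=cache-service-hotrod-route-rhdg-helloworld.192.0.2.0.nip.io:443
# Only BASIC intelligence is supported outside OpenShift
infinispan.client.hotrod.client_intelligence=BASIC
infinispan.client.hotrod.auth_username=username
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.auth_realm=ApplicationRealm
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
# tls.crt extracted from the service-certs secret
infinispan.client.hotrod.trust_store_path=path/to/tls.crt
```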
Chapter 7. Remotely Creating Caches
When you remotely create caches with the cache-service, you can configure whether caches are ephemeral or permanent and whether data is replicated across the cluster.
You can define any custom configuration when you remotely create caches with the datagrid-service.
Remotely create cache definitions with the cache-service and datagrid-service through the Hot Rod protocol as follows:
- Instantiate the RemoteCacheManager class to connect to the service.
- Call the createCache() method to create a cache.

  Note: If the named cache already exists, an exception is thrown. Alternatives are to:

  - Call the getOrCreateCache method in RemoteCacheManagerAdmin to return the cache instead of throwing an exception.
  - Call the removeCache method in RemoteCacheManagerAdmin to destroy the cache and then call createCache again.
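The steps above can be sketched as follows, using the Infinispan Hot Rod client API; the cache name is illustrative and "default" is an assumed server-side template name:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.api.CacheContainerAdmin;

RemoteCacheManager cacheManager = new RemoteCacheManager(configuration);
// Create a permanent cache from a server-side template;
// omit the PERMANENT flag for an ephemeral cache.
cacheManager.administration()
            .withFlags(CacheContainerAdmin.AdminFlag.PERMANENT)
            .createCache("mycache", "default");
RemoteCache<String, String> cache = cacheManager.getCache("mycache");
cache.put("hello", "world");
```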
Try one of the Quickstart Tutorials:
Chapter 8. Defining File-based Cache Stores
Define a file-based cache store with datagrid-service to persist data to external storage.
Use the XMLStringConfiguration class to provide XML configuration as a string through the Hot Rod interface.
- XML must be valid with the Data Grid configuration schema.
- The location of your file store should be under the storage volume mounted at /opt/datagrid/standalone/data, because the data folder is a PersistentVolume that allows data to survive when the container restarts.
As an example, the following main method creates a cache with a distributed configuration that includes a file store:
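A sketch of such a main method, assuming client settings are loaded from hotrod-client.properties on the classpath; the cache name and file-store path segment are illustrative:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;

public class CreateFileStoreCache {
    public static void main(String[] args) {
        // Connects using settings from hotrod-client.properties on the classpath
        RemoteCacheManager cacheManager = new RemoteCacheManager();
        String cacheName = "myFileStoreCache";
        // Distributed cache with a file store under the mounted data volume
        String xml = String.format(
            "<infinispan>"
          +   "<cache-container>"
          +     "<distributed-cache name=\"%1$s\">"
          +       "<persistence passivation=\"false\">"
          +         "<file-store "
          +           "shared=\"false\" "
          +           "fetch-state=\"true\" "
          +           "path=\"${jboss.server.data.dir}/datagrid-infinispan/%1$s\"/>"
          +       "</persistence>"
          +     "</distributed-cache>"
          +   "</cache-container>"
          + "</infinispan>", cacheName);
        cacheManager.administration()
                    .getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
        cacheManager.stop();
    }
}
```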
For information about valid file-store configuration options, see the Data Grid configuration schema.
See the Javadocs for more information:
Chapter 9. Using Data Grid Deployment Configuration Templates
9.1. Data Grid Deployment Configuration Templates
Data Grid provides a set of templates that can help you deploy Data Grid for OpenShift with different configurations.
As of Data Grid 7.3, these deployment configuration templates are deprecated. You should use cache-service or datagrid-service service templates instead. For more information see Red Hat Data Grid Supported Configurations.
| Template | Description |
|---|---|
| datagrid73-basic | Run Data Grid for OpenShift without authentication or encryption. |
| datagrid73-https | Run Data Grid for OpenShift with an HTTPS route to securely access caches. Requires an OpenShift secret for encrypting network traffic. |
| datagrid73-mysql | Run Data Grid for OpenShift with a MySQL database as an ephemeral cache store. Requires an OpenShift secret for encrypting network traffic. |
| datagrid73-mysql-persistent | Run Data Grid for OpenShift with a MySQL database as a persistent cache store. Requires an OpenShift secret for encrypting network traffic. |
| datagrid73-postgresql | Run Data Grid for OpenShift with a PostgreSQL database as an ephemeral cache store. Requires an OpenShift secret for encrypting network traffic. |
| datagrid73-postgresql-persistent | Run Data Grid for OpenShift with a PostgreSQL database as a persistent cache store. Requires an OpenShift secret for encrypting network traffic. |
| datagrid73-partition | Run Data Grid for OpenShift with a partitioned data directory that preserves metadata for cache entries when the pod restarts. |
9.2. Importing Deployment Configuration Templates
Import the Data Grid for OpenShift deployment configuration templates into OpenShift as follows:
- Log in to your OpenShift cluster.
- Import a specific template or all templates. For example, to import a specific template:

  $ oc create -f \
    https://raw.githubusercontent.com/jboss-container-images/jboss-datagrid-7-openshift-image/7.3-v1.8/templates/datagrid73-mysql.json

  Tip: Use oc create to import new templates. Use oc replace --force to overwrite existing templates.

  Specify the namespace into which to import the templates with the -n option. For example, -n openshift imports resources into the global openshift namespace and requires administrative permissions.

- Import the Data Grid image:

  $ oc -n openshift import-image jboss-datagrid73-openshift:1.9

- Verify the templates are available on OpenShift:

  $ oc get templates -n openshift | grep datagrid73
9.3. Importing OpenShift Secrets
Some Data Grid for OpenShift deployment configuration templates require HTTPS and JGroups keystores.
Data Grid for OpenShift provides HTTPS and JGroups keystores that you can import for evaluation purposes. However, you should not use this secret in production environments.
Import the secret with the keystores into your project namespace as follows:
$ oc create \
-f https://raw.githubusercontent.com/jboss-openshift/application-templates/master/secrets/datagrid-app-secret.json
For more information, see:
9.4. Deploying Data Grid for OpenShift
- Create a new deployment with the new-app command.
- Specify a template with the --template option.
- Set environment variables to configure the deployment with the -e option.

For example, to create a deployment with the datagrid73-basic template that includes a cache named mycache that starts eagerly, run the following command:

$ oc new-app --template=datagrid73-basic \
  -p USERNAME=${USERNAME} \
  -p PASSWORD=${PASSWORD} \
  -p CACHE_NAMES=mycache \
  -e MYCACHE_CACHE_START=EAGER

See Environment Variables for information about supported environment variables.
9.5. Configuring Data Grid for OpenShift
After you create a Data Grid for OpenShift deployment, you can configure it with environment variables.
For example, you have a deployment configuration (dc) named datagrid-app with a cache named mycache. Configure mycache to start lazily as follows:
$ oc env dc/datagrid-app -e MYCACHE_CACHE_START=LAZY
When you modify a deployment configuration, the replication controller deploys a new version. Get the updated deployment configuration as follows:
$ oc get pods
NAME READY STATUS RESTARTS AGE
datagrid-app-2-<id> 0/1 Running 0 58s
datagrid-app-2-deploy 1/1 Running 0 59s
Verify configuration changes as follows:
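One way is to list the environment variables on the deployment configuration with the --list option of oc env:

```shell
# Print the environment variables currently set on the deployment configuration.
oc env dc/datagrid-app --list
```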
Chapter 10. Setting Up Monitoring
Collect JMX metrics with the Prometheus Operator to monitor events and get statistics for Data Grid clusters.
At a high level, you set up monitoring capabilities as follows:

- Configure Data Grid with the AB_PROMETHEUS_ENABLE environment variable set to true.
- Install the Prometheus Operator and expose the Prometheus web UI.
- Export Data Grid metrics to Prometheus.
- Enable Prometheus to monitor Data Grid for metrics.
10.1. Deploying the Prometheus Operator
To install the Prometheus Operator, refer to the Prometheus Operator documentation.
10.2. Exposing Data Grid Metrics to Prometheus
Add a service that exposes JMX metrics from Data Grid to Prometheus.
Create a service-metrics.yaml file that defines a metrics service.

Apply service-metrics.yaml:

$ oc apply -f service-metrics.yaml
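The service definition itself is elided from this page. A minimal sketch, assuming an application named datagrid-app and the default JMX exporter port 9779, might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: datagrid-app-metrics
  labels:
    app: datagrid-app
spec:
  selector:
    deploymentConfig: datagrid-app
  ports:
    # Default port where AB_PROMETHEUS_JMX_EXPORTER_PORT publishes metrics.
    - name: metrics
      port: 9779
      protocol: TCP
```

The name, labels, and selector are illustrative assumptions; match them to your own deployment.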
10.3. Enabling Prometheus to Monitor Data Grid
A service monitor lets Prometheus connect to the Data Grid metrics service.
Create a service-monitor.yaml file that holds the definition for a ServiceMonitor object.

Apply service-monitor.yaml:

$ oc apply -f service-monitor.yaml
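The ServiceMonitor definition is likewise elided. A sketch, assuming the metrics service is labeled app: datagrid-app and exposes a port named metrics, might be:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: datagrid-service-monitor
spec:
  selector:
    matchLabels:
      app: datagrid-app
  endpoints:
    # Scrape the named metrics port on services matching the selector.
    - port: metrics
```

Adjust the label selector and port name to whatever your metrics service actually uses.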
Chapter 11. Configuring Data Grid for OpenShift Clusters
11.1. Configuring Cluster Discovery
Data Grid for OpenShift can use either the Kubernetes or DNS discovery mechanisms for clustering. These discovery mechanisms enable images to automatically join clusters.
Data Grid for OpenShift templates and services use DNS by default. If you deploy Data Grid for OpenShift directly from the image or custom template, you must configure the appropriate discovery mechanism.
11.1.1. Configuring DNS_PING
To configure the DNS discovery mechanism for clustering, do the following:
Set openshift.DNS_PING as the value for the JGROUPS_PING_PROTOCOL environment variable.

JGROUPS_PING_PROTOCOL=openshift.DNS_PING

Specify the name of the ping service for the cluster as the value for the OPENSHIFT_DNS_PING_SERVICE_NAME environment variable.

OPENSHIFT_DNS_PING_SERVICE_NAME=${PING_SERVICE_NAME}

Specify the port number where the ping service is exposed as the value for the OPENSHIFT_DNS_PING_SERVICE_PORT environment variable. The default value is 8888.

OPENSHIFT_DNS_PING_SERVICE_PORT=${PING_SERVICE_PORT}

Define a ping service that exposes the ping port.

Important: You must configure clusterIP: None so that the service is headless. Likewise, the ping port must be named and include the service.alpha.kubernetes.io/tolerate-unready-endpoints: 'true' annotation.
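A ping service that meets these requirements might look like the following sketch; the service name datagrid-ping and the deploymentConfig selector are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: datagrid-ping
  annotations:
    # Lets DNS queries return pods that are not yet ready.
    service.alpha.kubernetes.io/tolerate-unready-endpoints: 'true'
spec:
  # Headless service, required for DNS_PING discovery.
  clusterIP: None
  selector:
    deploymentConfig: datagrid-app
  ports:
    # The ping port must be named.
    - name: ping
      port: 8888
      protocol: TCP
```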
11.1.2. Configuring KUBE_PING
To configure the Kubernetes discovery mechanism for clustering, do the following:
Set openshift.KUBE_PING as the value for the JGROUPS_PING_PROTOCOL environment variable.

JGROUPS_PING_PROTOCOL=openshift.KUBE_PING

Specify the OpenShift project name as the value for the OPENSHIFT_KUBE_PING_NAMESPACE environment variable. If you do not set this variable, the server behaves like a single-node cluster.

OPENSHIFT_KUBE_PING_NAMESPACE=${PING_NAMESPACE}

Specify a cluster label with the OPENSHIFT_KUBE_PING_LABELS environment variable. If you do not set this variable, pods outside the application but in the same namespace attempt to join.

OPENSHIFT_KUBE_PING_LABELS=labelKey=labelValue

Grant authorization to the service account the pod is running under so that it can access the Kubernetes REST API. For example, grant authorization to datagrid-service-account as follows:

$ oc policy add-role-to-user view \
    system:serviceaccount:$(oc project -q):datagrid-service-account \
    -n $(oc project -q)

Ensure port 8888 is defined as a ping port on the pod container, as follows:

ports:
  - containerPort: 8888
    name: ping
    protocol: TCP
11.2. Configuring JGroups Encryption
Data Grid for OpenShift uses JGroups technology to secure traffic between clustered servers with the following options:
- Authentication

  Uses the JGroups AUTH protocol, which requires nodes to authenticate with a password when joining the cluster.

  You configure authentication with the JGROUPS_CLUSTER_PASSWORD environment variable. This environment variable sets a password for nodes to use when joining the cluster. The password must be the same across the cluster.

- Symmetric encryption

  Uses the JGroups SYM_ENCRYPT protocol to secure traffic with a JGroups keystore (.jceks). This is the default encryption protocol.

  The JGroups AUTH protocol is optional with symmetric encryption.

  The JGroups keystore contains credentials that each node in the cluster uses to secure communication.

- Asymmetric encryption

  Uses the JGroups ASYM_ENCRYPT protocol to secure traffic with public/private key encryption.

  The JGroups AUTH protocol is required with asymmetric encryption.

  The coordinator node generates a secret key. When a node joins the cluster, it requests the secret key from the coordinator and provides its public key. The coordinator encrypts the secret key with the public key and returns it to the node. The node then decrypts and installs the secret so that it can securely communicate with other nodes in the cluster.
11.2.1. Setting Up Symmetric Encryption
To use symmetric encryption, do the following:
Create a JGroups keystore (.jceks) that contains credentials to encrypt traffic.

You can use the Java keytool to generate a JGroups keystore.
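For example, the following keytool invocation is a sketch; the alias and password values are placeholders. JCEKS is used because the store holds a secret key:

```shell
$ keytool -genseckey -alias jgroups \
    -keyalg AES -keysize 128 \
    -storetype JCEKS \
    -keystore jgroups.jceks \
    -storepass ${KEYSTORE_PASSWORD} -keypass ${KEYSTORE_PASSWORD}
```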
Deploy the JGroups keystore to OpenShift as a secret.
- Log in to your OpenShift cluster.
Create a secret for the JGroups keystore. For example, to create a secret named jgroups-secret from a keystore named jgroups.jceks, do the following:

$ oc create secret generic jgroups-secret \
    --from-file=jgroups.jceks

Link the secret to the default service account.
$ oc secrets link default jgroups-secret

Mount the secret to the container.
$ oc set volumes dc/datagrid \
    --add -t secret \
    --secret-name='jgroups-secret' \
    --mount-path='/keystores/jgroups'
- Set the value of the JGROUPS_ENCRYPT_PROTOCOL environment variable to SYM_ENCRYPT for each node in the cluster.
- Configure each node in the cluster to use the JGroups keystore with the following environment variables:
  - JGROUPS_ENCRYPT_KEYSTORE - Specifies the JGroups keystore for encrypting cluster traffic.
  - JGROUPS_ENCRYPT_KEYSTORE_DIR - Specifies the directory where the JGroups keystore resides.
  - JGROUPS_ENCRYPT_SECRET - Matches the OpenShift secret for the keystore.
  - JGROUPS_ENCRYPT_NAME - Matches the username for the keystore.
  - JGROUPS_ENCRYPT_PASSWORD - Matches the keystore password.
- If required, set a password for nodes to use when joining the cluster with the JGROUPS_CLUSTER_PASSWORD environment variable.
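Put together, the variables might be applied to a deployment configuration in one command; the deployment name datagrid and the credential values below are placeholders:

```shell
$ oc set env dc/datagrid \
    JGROUPS_ENCRYPT_PROTOCOL=SYM_ENCRYPT \
    JGROUPS_ENCRYPT_KEYSTORE=jgroups.jceks \
    JGROUPS_ENCRYPT_KEYSTORE_DIR=/keystores/jgroups \
    JGROUPS_ENCRYPT_SECRET=jgroups-secret \
    JGROUPS_ENCRYPT_NAME=${KEYSTORE_USER} \
    JGROUPS_ENCRYPT_PASSWORD=${KEYSTORE_PASSWORD} \
    JGROUPS_CLUSTER_PASSWORD=${CLUSTER_PASSWORD}
```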
11.2.2. Setting Up Asymmetric Encryption
To use asymmetric encryption, do the following:
- Configure authentication with the JGROUPS_CLUSTER_PASSWORD environment variable.
- Set the value of the JGROUPS_ENCRYPT_PROTOCOL environment variable to ASYM_ENCRYPT for each node in the cluster.
Chapter 12. Customizing Data Grid for OpenShift
Use the Data Grid image with custom configuration, either through the Source-to-Image (S2I) process or through the ConfigMap API.
Red Hat encourages you to use the Data Grid for OpenShift image and datagrid-service and cache-service templates to create Data Grid clusters. While it is possible to create Data Grid pods with custom configuration, datagrid-service and cache-service are designed for high performance and are suitable for a wide range of use cases with little to no configuration required.
Go to Setting Up Data Grid for OpenShift Services and learn how to quickly create Data Grid clusters.
- Source-to-Image (S2I)
Use the S2I process and binary builds to create custom Data Grid images.
Add cache definitions and endpoint configurations to clustered-openshift.xml, then use S2I capabilities to build a custom Data Grid image that uses that configuration file. You can then create Data Grid pods with the image as required.

To modify the configuration you must build a new image. However, when you rebuild your custom image, the Data Grid pods automatically redeploy with the new configuration changes.
- ConfigMap API
Use the Red Hat OpenShift ConfigMap API to dynamically configure Data Grid pods.

Define custom configuration in standalone.xml that maps to a ConfigMap object in the same namespace as the Data Grid pod. You can modify the Data Grid configuration and then redeploy pods to load configuration changes. However, Data Grid pods do not automatically redeploy when you modify standalone.xml. You must manually redeploy pods to load the configuration changes.

You should explicitly define all cache and endpoint configuration in standalone.xml before you create Data Grid pods.

Environment variables for cache and endpoint configuration do not take effect unless the placeholder exists before deployment. For example, the following is a placeholder for the JGROUPS_PING_PROTOCOL:

<!-- ##JGROUPS_PING_PROTOCOL## -->

See clustered-openshift.xml to review available placeholders.
- To encrypt client to server traffic, you must configure the server identity in standalone.xml.
- The DATAGRID_SPLIT environment variable does not take effect with Data Grid for OpenShift pods that you customize via the ConfigMap API. These pods cannot use shared persistent volumes with DATAGRID_SPLIT.
12.1. Cloning the Data Grid Examples
Clone the Data Grid for OpenShift image source repository.
$ git clone git@github.com:jboss-container-images/jboss-datagrid-7-openshift-image.git .

View the contents of the docs/examples directory.

- S2I
  Injects clustered-openshift.xml to ${JBOSS_HOME}/standalone/configuration/ inside the Data Grid for OpenShift image.

  Tip: Add custom logging.properties and application-role.properties to the configuration directory to include them when you build a custom image.

- ConfigMap

  Maps standalone.xml to the /opt/datagrid/standalone/configuration/user directory inside the Data Grid for OpenShift image.
12.2. Creating S2I Builds with Custom Configuration
12.2.1. Setting Up the Data Grid Image
You need the Data Grid for OpenShift image as source for your customization.
Confirm that the Data Grid for OpenShift image is available.

$ oc get images | grep jboss-datagrid73-openshift

If the image is not available, create an image stream and import it.

$ oc create -f https://raw.githubusercontent.com/jboss-container-images/jboss-datagrid-7-openshift-image/7.3-v1.9/templates/datagrid73-image-stream.json
$ oc import-image jboss-datagrid73-openshift --from='registry.redhat.io/jboss-datagrid-7/datagrid73-openshift:1.9'
12.2.2. Creating Custom Data Grid Images
Create a new binary build using the Data Grid for OpenShift image.
$ oc new-build jboss-datagrid73-openshift:1.9 --binary=true --name=custom-datagrid

Navigate to the s2i directory so you can pass --from-dir=".". You must include the configuration directory so that the S2I process can detect your custom configuration.

To use the example configuration, do the following:

$ cd jboss-datagrid-7-openshift-image/docs/examples/s2i/

Build the Data Grid for OpenShift image with your custom configuration.

$ oc start-build custom-datagrid --from-dir="." --follow

Check that your image is available.

$ oc get images | grep custom-datagrid
12.2.3. Verifying Custom Data Grid Images
Confirm that the build pod is running.
$ oc get pods

NAME                      READY   STATUS      RESTARTS   AGE
custom-datagrid-1-build   0/1     Completed   0          38s

Create a Data Grid application with your custom configuration.
$ oc new-app custom-datagrid \
    -p APPLICATION_USER=${USERNAME} \
    -p APPLICATION_PASSWORD=${PASSWORD}

Wait for your custom Data Grid application to start running.
$ oc get pods -w

NAME                      READY   STATUS      RESTARTS   AGE
custom-datagrid-1-build   0/1     Completed   0          8m
custom-datagrid-1-<id>    1/1     Running     0          11s

Remotely access the bash shell for your custom Data Grid pod.
$ oc exec -it ${pod-name} -- /bin/bash

View clustered-openshift.xml to verify the configuration, for example:

$ cat /opt/datagrid/configuration/clustered-openshift.xml

If clustered-openshift.xml contains your custom configuration then the Data Grid pod is using it. You can optionally use the Data Grid command line interface to verify your configuration, for example:

$ /opt/datagrid/bin/cli.sh -c

End the remote session after you verify your configuration.

$ exit
12.3. Creating Custom Data Grid for OpenShift Pods with the ConfigMap API
Create a custom template for your Data Grid for OpenShift pod.
- Expose the required ports and services in your template.
- Add a configMap object to the custom template.
- Add a config volume for the container at /opt/datagrid/standalone/configuration/user.
- Import your custom template into OpenShift.

  To use the example template, do the following:

  $ cd jboss-datagrid-7-openshift-image/docs/examples/user-configuration/
  $ oc create -f user-config-template.yaml
Create a ConfigMap in your OpenShift project, for example:
$ oc create configmap user-config --from-file="."

Create Data Grid pods with your custom configuration.
$ oc new-app user-config \
    -p APPLICATION_NAME=${APPLICATION_NAME} \
    -e USER_CONFIG_MAP=true

Where:

- APPLICATION_NAME is a required parameter in the example template and defaults to custom-datagrid.
- USER_CONFIG_MAP=true applies the ConfigMap to the Data Grid pod. This is set in the example template as follows:

  - env:
    - name: USER_CONFIG_MAP
      value: "true"
12.3.1. Verifying Custom Data Grid for OpenShift Pods with the ConfigMap API
Wait for your custom Data Grid application to start running.
$ oc get pods -w

NAME            READY   STATUS    RESTARTS   AGE
user-config-0   0/1     Running   7          17m

Check the container logs.
$ oc logs ${pod-name} | grep standalone.xml

INFO Running jboss-datagrid-7/datagrid73-openshift image, version 1.9 with user standalone.xml
Try the Customizing Data Grid Service Deployments Quickstart Tutorial.
12.4. Configuring Persistent Datasources
Data Grid lets you persist data stored in the cache to a datasource. There are two types of datasources for Red Hat Data Grid for OpenShift:
- Internal datasources that run on OpenShift. These datasources are available through the Red Hat Container Registry and do not require you to configure additional environment files.

  Note: Internal datasources include PostgreSQL, MySQL, and MongoDB. However, Red Hat Data Grid for OpenShift currently supports PostgreSQL and MySQL only.
- External datasources that do not run on OpenShift. You must configure these external datasources with environment files that you add to OpenShift Secrets.
12.4.1. Configuring Internal Datasources
The DB_SERVICE_PREFIX_MAPPING environment variable defines JNDI mappings for internal datasources.
You can define multiple JNDI mappings as comma-separated values for the DB_SERVICE_PREFIX_MAPPING environment variable. When you run the Data Grid for OpenShift image, the launch script creates a separate datasource for each JNDI mapping. Data Grid for OpenShift then automatically discovers each datasource.
To define a JNDI mapping, specify a value for the environment variable in the following format:
${POOL_NAME}-${DATABASE_TYPE}=${PREFIX}
- ${POOL_NAME} is the pool-name attribute for the datasource. Use any alphanumeric value that is meaningful and easy to identify. The value cannot contain special characters. Likewise, the value must contain lowercase characters only.
- ${DATABASE_TYPE} specifies the database driver to use. The value must contain lowercase characters only.

  Note: Only mysql and postgresql are supported values for ${DATABASE_TYPE}.

- ${PREFIX} is used for the names of environment variables that configure the datasource.
12.4.1.1. Single Datasource Example
If you specify test-postgresql=TEST as the value for the DB_SERVICE_PREFIX_MAPPING environment variable, it creates a datasource with the following name:
java:jboss/datasources/test_postgresql
You must use the TEST prefix when specifying other environment variables for the datasource. For example, to set the username and password, use TEST_USERNAME and TEST_PASSWORD as the environment variables.
12.4.1.2. Multiple Datasource Example
If you specify cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL as the value for the DB_SERVICE_PREFIX_MAPPING environment variable, it creates two datasources with the following names:
- java:jboss/datasources/test_mysql
- java:jboss/datasources/cloud_postgresql
When specifying other environment variables for the datasources, you must use the TEST_MYSQL prefix to configure the MySQL datasource. For example, use TEST_MYSQL_USERNAME as the environment variable to specify the username.
Similarly, you must use the CLOUD prefix to configure the PostgreSQL datasource. For example, use CLOUD_USERNAME as the environment variable to specify the username.
12.4.2. Configuring External Datasources
To use an external datasource, you define a custom image template and then use the Source-to-Image (S2I) build tool to create an image. S2I is a framework that takes application source code as an input and produces a new image that runs the assembled application as output.
The following high-level steps provide an overview of the process:
Specify the CUSTOM_INSTALL_DIRECTORIES environment variable in the image template JSON. This variable defines the location where S2I artifacts reside, as in the following example:

{
    "name": "CUSTOM_INSTALL_DIRECTORIES",
    "value": "extensions/*"
}

Create an install.sh script in that directory. This script installs the modules and drivers for the external datasource in the image.

Include a modules subdirectory that contains a module.xml file and the driver for the datasource. The resulting image uses the module to load classes and define dependencies.

As an example, suppose you plan to use Derby as an external datasource. You need to obtain a driver such as derby-10.12.1.1.jar and place it in the following directory:

modules/org/apache/derby/main/

In the same directory, you also need to create a module.xml file that defines the driver as a resource and declares dependencies.

Define the driver configuration properties in a drivers.property environment variable file.

After you build and deploy the image, specify environment variables for the datasource.
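The original install.sh example is elided from this page. A sketch of what such a script might look like follows; the helper functions sourced from /usr/local/s2i/install-common.sh are an assumption carried over from similar JBoss-family S2I images, not a confirmed detail of this image:

```shell
#!/bin/bash
# S2I passes the directory that holds the injected artifacts as the first argument.
injected_dir=$1

# Helper functions provided by JBoss-family S2I images (assumption).
source /usr/local/s2i/install-common.sh

# Copy the JBoss module (driver JAR plus module.xml) into the server.
install_modules ${injected_dir}/modules

# Register the drivers described in the properties file.
configure_drivers ${injected_dir}/drivers.property
```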
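The original module.xml example is elided as well. For the Derby driver above, a sketch might look like the following; the dependency list is a typical minimum for a JDBC driver module, not an exact copy of the original example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.apache.derby">
    <resources>
        <!-- The driver JAR placed in modules/org/apache/derby/main/ -->
        <resource-root path="derby-10.12.1.1.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```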
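The original drivers.property example is also elided. In similar JBoss images this file names each driver and its implementation classes; a sketch for the Derby driver, using class names from the Derby distribution, might be:

```properties
#DRIVER
DRIVERS=DERBY

DERBY_DRIVER_NAME=derby
DERBY_DRIVER_MODULE=org.apache.derby
DERBY_DRIVER_CLASS=org.apache.derby.jdbc.EmbeddedDriver
DERBY_XA_DATASOURCE_CLASS=org.apache.derby.jdbc.EmbeddedXADataSource
```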
Specify the datasource definition itself with the DATASOURCES environment variable.
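The original DATASOURCES example is elided from this page. A sketch for the Derby datasource above, where every value is a hypothetical placeholder, might look like this:

```properties
# Hypothetical external Derby datasource definition.
DATASOURCES=ACCOUNTS_DERBY

ACCOUNTS_DERBY_DATABASE=accounts
ACCOUNTS_DERBY_JNDI=java:/accounts-ds
ACCOUNTS_DERBY_DRIVER=derby
ACCOUNTS_DERBY_USERNAME=derby
ACCOUNTS_DERBY_PASSWORD=derby
ACCOUNTS_DERBY_NONXA=true
# External datasources require an explicit connection URL.
ACCOUNTS_DERBY_URL=jdbc:derby:/opt/datagrid/standalone/data/databases/derby/accounts
```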
12.4.3. Datasource Environment Variables
DB_SERVICE_PREFIX_MAPPING - Defines a comma-separated list of datasources to configure. For example, DB_SERVICE_PREFIX_MAPPING=test-mysql=TEST_MYSQL. See Configuring Persistent Datasources for more information.
${NAME}_${DATABASE_TYPE}_SERVICE_HOST - Defines the database server hostname or IP for the datasource connection_url property. For example, EXAMPLE_MYSQL_SERVICE_HOST=192.0.2.0.
${NAME}_${DATABASE_TYPE}_SERVICE_PORT - Defines the database server port.
${PREFIX}_USERNAME - Defines the user for the datasource.
${PREFIX}_PASSWORD - Defines the password for the datasource.
${PREFIX}_DATABASE - Defines the database name for the datasource. For example, CLOUD_DATABASE=myDatabase.
${PREFIX}_DRIVER - Defines the Java database driver for the datasource. For example, CLOUD_DRIVER=postgresql.
${PREFIX}_BACKGROUND_VALIDATION - Specifies if a background thread validates database connections before they are used. The value is true or false (default). By default, the <validate-on-match> method is enabled.
${PREFIX}_BACKGROUND_VALIDATION_MILLIS - Specifies how often validation occurs, in milliseconds, if you set the ${PREFIX}_BACKGROUND_VALIDATION environment variable to true. The default value is 10000.
${PREFIX}_CONNECTION_CHECKER - Specifies a connection checker class that validates connections to the database. For example, CLOUD_CONNECTION_CHECKER=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker.
${PREFIX}_EXCEPTION_SORTER - Specifies the exception sorter class that detects and cleans up after fatal database connection exceptions. For example, CLOUD_EXCEPTION_SORTER=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter.
${PREFIX}_JNDI - Defines the JNDI name for the datasource. Defaults to java:jboss/datasources/<name>_<database_type>. The launch script automatically generates the value from the DB_SERVICE_PREFIX_MAPPING environment variable. For example, CLOUD_JNDI=java:jboss/datasources/test-postgresql.
${PREFIX}_JTA - Defines the Java Transaction API (JTA) option for non-XA datasources. The value is true (default) or false.
${PREFIX}_MAX_POOL_SIZE - Defines the maximum pool size for the datasource.
${PREFIX}_MIN_POOL_SIZE - Defines the minimum pool size for the datasource.
${PREFIX}_NONXA - Defines the datasource as a non-XA datasource. The value is true or false (default).
${PREFIX}_TX_ISOLATION - Defines the java.sql.Connection transaction isolation level for the database. For example, CLOUD_TX_ISOLATION=TRANSACTION_READ_UNCOMMITTED.
${PREFIX}_URL - Defines the connection URL for a non-XA datasource. If you do not specify a connection URL, the launch script automatically generates it from other environment variables as url="jdbc:${DRIVER}://${HOST}:${PORT}/${DATABASE}". However, the launch script constructs the correct connection URLs only for internal datasources such as PostgreSQL and MySQL. If you use any other non-XA datasource you must specify the connection URL. For example, CLOUD_URL=jdbc:postgresql://localhost:5432/postgresdb.
${PREFIX}_XA_CONNECTION_PROPERTY_<PROPERTY_NAME> - Defines connection properties for an XA datasource. Consult the appropriate driver documentation for your datasource to find which XA properties you can set on the connection. For example, CLOUD_XA_CONNECTION_PROPERTY_DatabaseName=/opt/eap/standalone/data/databases/db/accounts adds the following to the configuration:

<xa-datasource-property name="DatabaseName">/opt/eap/standalone/data/databases/db/accounts</xa-datasource-property>
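The URL generation described for ${PREFIX}_URL can be sketched in shell; the values below are the PostgreSQL example values from this section:

```shell
# Reproduce how the launch script assembles a connection URL from the
# datasource environment variables (internal datasources only).
DRIVER=postgresql
HOST=localhost
PORT=5432
DATABASE=postgresdb

url="jdbc:${DRIVER}://${HOST}:${PORT}/${DATABASE}"
echo "${url}"
```

Running the sketch prints jdbc:postgresql://localhost:5432/postgresdb, which matches the CLOUD_URL example above.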
Chapter 13. Embedded Data Grid (Library Mode)
Embedding Data Grid in a custom OpenShift application, or running in Library Mode, is intended for specific uses only:
- Using local or distributed caching in custom Java applications to retain full control of the cache lifecycle, or when using features that are available only with embedded Data Grid, such as distributed streams.
- Reducing network latency to improve the speed of cache operations. Note, however, that the Hot Rod protocol provides near-cache capabilities that achieve equivalent performance to a standard client-server architecture.
Embedding Data Grid also has some limitations:
- Persistence to a cache store is not currently supported. You can embed Data Grid for in-memory caching only.
- Only DNS_PING is supported for clustering.
- TCP is supported for the ping port.
- UDP is not supported with embedded Data Grid.
Red Hat highly discourages embedding Data Grid to build custom caching servers to handle remote client requests. Use the datagrid-service server implementation. It is fully supported and benefits from regular updates with performance improvements and security fixes.
Try the Embedded Data Grid Quickstart.
Chapter 14. Troubleshooting Data Grid for OpenShift
14.1. Launching the Command Line Interface (CLI)
Access the Data Grid Management CLI to troubleshoot Data Grid for OpenShift as follows:
Open a remote shell session to the running pod.
$ oc rsh ${POD_NAME}

Launch the Data Grid CLI from the remote shell.

$ /opt/datagrid/bin/cli.sh
The Data Grid Management CLI is bound to the pod in which it runs. Changes that you make with the CLI are not saved if the container restarts.
14.2. Viewing Pod Logs
Run the following command to view log messages for a running pod:
$ oc logs -f ${POD_NAME}
Chapter 15. Reference
15.1. Probes
Data Grid for OpenShift provides a liveness probe and a readiness probe to perform container health checks.
- Liveness probe
The liveness probe is located in the container at /opt/datagrid/bin/livenessProbe.sh.

The liveness probe tests server status and restarts pods if the following events occur:
- Data Grid for OpenShift boots with errors.
- Custom deployment configurations do not successfully deploy.
- One or more caches fail to instantiate, which usually occurs if the cache configuration is not valid.
- Readiness probe
The readiness probe is located in the container at /opt/datagrid/bin/readinessProbe.sh.

The readiness probe determines if the pod is ready to receive requests and checks Data Grid cache level MBeans to ensure the following:

- All cache instances are initialized.
- All cache instances have joined the cluster, if using distributed cache mode.
- Initial state transfer is complete. If state transfer is in progress, the pod is not marked as ready.
- All cache instances in the Cache manager are running.
To configure custom deployments to use the liveness probe and readiness probe, set the probe commands on the deployment configuration.
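The original commands are elided from this page. Assuming a deployment configuration named datagrid-app (a placeholder), the probes might be wired up with oc set probe as follows:

```shell
$ oc set probe dc/datagrid-app \
    --liveness -- /bin/bash -c /opt/datagrid/bin/livenessProbe.sh
$ oc set probe dc/datagrid-app \
    --readiness -- /bin/bash -c /opt/datagrid/bin/readinessProbe.sh
```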
15.2. Ports
Data Grid for OpenShift uses the following ports:
| Port Number | Protocol | Use |
|---|---|---|
| 8080 | TCP | HTTP Access |
| 8443 | TCP | HTTPS Access |
| 8888 | TCP | JGroups Ping |
| 11222 | TCP | Hot Rod Access |
Data Grid deployment configuration templates also use the following ports:
| Port Number | Protocol | Use |
|---|---|---|
| 11211 | TCP | Memcached Access |
| 11333 | TCP | External Hot Rod Access |
| 8778 | TCP | Remote JMX Access |
If you set the HOTROD_SERVICE_NAME environment variable with the deployment configuration templates, the Hot Rod external connector returns ${service_name}:11333 for the endpoint.
15.3. Administration Console
The Data Grid Administration Console is not supported on Red Hat OpenShift.
To monitor events and get statistics for Data Grid for OpenShift clusters, you should use Prometheus. See Setting Up Monitoring.
You can also use the Management CLI to troubleshoot Data Grid for OpenShift pods. See Launching the Command Line Interface.
15.4. Environment Variables
15.4.1. Monitoring and Logging
AB_PROMETHEUS_ENABLE
Allows you to collect JMX metrics to monitor and analyze Data Grid. The default value is false. Set the value to true to enable monitoring with the default Prometheus agent.
The Prometheus Operator must be installed and running. You must also set up monitoring. See Setting Up Monitoring.
AB_PROMETHEUS_JMX_EXPORTER_PORT
Defines the port on which Data Grid publishes JMX metrics. The default is 9779.
LOGGING_CATEGORIES
Adjusts the categories and levels for which Data Grid captures log messages, for example:
LOGGING_CATEGORIES=org.infinispan.core=WARN,org.infinispan.config=DEBUG
Log categories correspond to Java package names and use standard log levels: TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.
Important: If you specify LOGGING_CATEGORIES, Data Grid does not set the following default loggers. Instead, Data Grid uses the default level of INFO for all packages that you do not explicitly specify with LOGGING_CATEGORIES.
Table 15.1. Default Loggers
| Category | Level |
|---|---|
| com.arjuna | WARN |
| sun.rmi | WARN |
| org.jboss.as.config | DEBUG |
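Because overriding LOGGING_CATEGORIES drops the default loggers, one way to keep their levels is to restate them alongside your own categories. A minimal sketch (the org.infinispan.core level is an arbitrary choice for illustration):

```shell
# Re-declare the default loggers explicitly so their levels survive the
# override, then append any custom categories.
LOGGING_CATEGORIES="com.arjuna=WARN,sun.rmi=WARN,org.jboss.as.config=DEBUG,org.infinispan.core=DEBUG"
echo "$LOGGING_CATEGORIES"
```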
15.4.2. Containers
USERNAME
Creates a user in the security realm who is authorized to access data.
Note: By default, the Hot Rod endpoint uses the ApplicationRealm security realm and the REST endpoint uses the jdg-openshift security realm.
PASSWORD
Specifies the password for the user.
DATAGRID_SPLIT
Determines if the data directory for each node should be split in a mesh. The value is true or false (default).
If you set the value to true, you must also configure a persistent volume mounted on /opt/datagrid/standalone/partitioned_data.
JAVA_OPTS_APPEND
Appends options to the JAVA_OPTS environment variable on startup.
For example, JAVA_OPTS_APPEND=-Dfoo=bar
OPENSHIFT_KUBE_PING_LABELS
Specifies the clustering labels selector.
For example, OPENSHIFT_KUBE_PING_LABELS=application=eap-app
OPENSHIFT_KUBE_PING_NAMESPACE
Specifies the clustering project namespace.
TRANSPORT_LOCK_TIMEOUT
Sets the time to wait to acquire a distributed lock. The default value is 240000.
Data Grid uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can perform state transfer or rehashing at a time. This constraint is in place because more than one cache could be involved in a transaction.
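As a sketch, these variables can be applied to an existing deployment configuration with `oc set env`; the deployment name datagrid-app and the namespace are assumptions, not values from this section:

```shell
# Hypothetical deployment and project names; adjust to your environment.
# DATAGRID_SPLIT=true also requires a persistent volume mounted on
# /opt/datagrid/standalone/partitioned_data, as described above.
oc set env dc/datagrid-app \
  DATAGRID_SPLIT=true \
  JAVA_OPTS_APPEND=-Dfoo=bar \
  OPENSHIFT_KUBE_PING_LABELS=application=datagrid-app \
  OPENSHIFT_KUBE_PING_NAMESPACE=myproject
```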
15.4.3. Caches
Creating and configuring caches with cache-service and datagrid-service
Do not use environment variables to create and configure caches with cache-service or datagrid-service.
These environment variables are intended for use with the deployment configuration templates only and are deprecated.
With cache-service and datagrid-service, you should create caches remotely and dynamically through the Hot Rod endpoint. For more information, see Remotely Creating Caches.
CACHE_NAMES
Defines cache instances in your configuration.
If you are using the Data Grid deployment configuration templates and you do not define any cache instances, the launch script adds a default distributed cache in SYNC mode.
Tip: Give each cache instance in your configuration a unique name. Use underscore characters (_) and descriptive labels to help you distinguish between cache instances. This ensures that you do not have conflicts when applying cache-specific configuration.
For example, CACHE_NAMES=addressbook,addressbook_indexed
CACHE_CONTAINER_START
Configures how the cache container starts. Specify one of the following:
- LAZY: Starts the cache container when requested by a service or deployment. This is the default.
- EAGER: Starts the cache container when the server starts.
CACHE_CONTAINER_STATISTICS
Configures the cache container to collect statistics. The value is true (default) or false. You can set the value to false to improve performance.
DEFAULT_CACHE
Sets the default cache for the cache container.
15.4.3.1. Cache Container Security Configuration
CONTAINER_SECURITY_CUSTOM_ROLE_MAPPER_CLASS
Specifies the class of the custom principal to role mapper.
For example, CONTAINER_SECURITY_CUSTOM_ROLE_MAPPER_CLASS=com.acme.CustomRoleMapper
CONTAINER_SECURITY_ROLE_MAPPER
Sets a role mapper for this cache container with the following values:
- identity-role-mapper: Uses the Principal name as the role name. This is the default role mapper if you do not specify one and use the CONTAINER_SECURITY_ROLES environment variable to define role names.
- common-name-role-mapper: Uses the Common Name (CN) as the role name if the Principal name is a Distinguished Name (DN). For example, the DN cn=managers,ou=people,dc=example,dc=com is mapped to the managers role name.
- cluster-role-mapper: Uses the ClusterRegistry to store Principal name to role mappings.
- custom-role-mapper: Takes the fully-qualified class name of an implementation of the org.infinispan.security.impl.PrincipalRoleMapper interface.
CONTAINER_SECURITY_ROLES
Defines role names and assigns permissions to them.
For example, CONTAINER_SECURITY_ROLES=admin=ALL,reader=READ,writer=WRITE
15.4.3.2. Named Cache Configuration
You must specify the cache name as a prefix for the environment variable in capital letters (all caps); otherwise the configuration does not take effect.
For example, suppose you create two separate cache instances: MyCache and MYCACHE. If you set MyCache_CACHE_TYPE=replicated to configure the MyCache instance, the configuration does not take effect. However, if you set MYCACHE_CACHE_TYPE=replicated, the configuration takes effect for both the MyCache and MYCACHE instances.
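The prefix rule can be sketched in shell: derive the all-caps prefix from the cache name before composing the variable name. The cache name below is just an example:

```shell
# Example cache name taken from CACHE_NAMES; the env var prefix must be
# the all-caps form of the cache name.
cache_name=addressbook_indexed
prefix=$(printf '%s' "$cache_name" | tr '[:lower:]' '[:upper:]')
env_var="${prefix}_CACHE_TYPE=replicated"
echo "$env_var"
```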
${CACHE_NAME}_CACHE_TYPE
Determines whether this cache should be distributed or replicated. You can specify either distributed (default) or replicated.
${CACHE_NAME}_CACHE_START
Configures how the cache starts. Specify one of the following:
- LAZY: Starts the cache when requested by a service or deployment. This is the default.
- EAGER: Starts the cache when the server starts.
${CACHE_NAME}_CACHE_BATCHING
Enables invocation batching for this cache. The value is true or false (default).
${CACHE_NAME}_CACHE_STATISTICS
Configures the cache to collect statistics. The value is true (default) or false. You can set the value to false to improve performance.
${CACHE_NAME}_CACHE_MODE
Sets the clustered cache mode. Specify one of the following:
- ASYNC for asynchronous operations.
- SYNC for synchronous operations.
${CACHE_NAME}_CACHE_QUEUE_SIZE
Sets the threshold at which the replication queue is flushed when the cache is in ASYNC mode. The default value is 0 (flushing is disabled).
${CACHE_NAME}_CACHE_QUEUE_FLUSH_INTERVAL
Specifies the wakeup time, in milliseconds, for the thread that flushes the replication queue in ASYNC mode. The default value is 10.
${CACHE_NAME}_CACHE_REMOTE_TIMEOUT
Specifies the timeout, in milliseconds, to wait for acknowledgement when making remote calls in SYNC mode. If the timeout is reached, the remote call is aborted and an exception is thrown. The default value is 17500.
${CACHE_NAME}_CACHE_OWNERS
Specifies the number of cluster-wide replicas for each cache entry. The default value is 2.
${CACHE_NAME}_CACHE_SEGMENTS
Specifies the number of hash space segments per cluster. The recommended value is 10 * cluster size. The default value is 80.
${CACHE_NAME}_CACHE_L1_LIFESPAN
Specifies the maximum lifespan, in milliseconds, of an entry placed in the L1 cache. The default value is
0 (L1 is disabled).
${CACHE_NAME}_CACHE_MEMORY_EVICTION_TYPE
Defines the maximum limit for entries in the cache. You can set the following values:
- COUNT: Measures the number of entries in the cache. When the count exceeds the maximum, Data Grid evicts unused entries.
- MEMORY: Measures the amount of memory that all entries in the cache take up. When the total amount of memory exceeds the maximum, Data Grid evicts unused entries.
${CACHE_NAME}_CACHE_MEMORY_STORAGE_TYPE
Defines how Data Grid stores entries in the cache. You can set the following values:
| Storage Type | Description | Eviction Type | Policy |
|---|---|---|---|
| object | Stores entries as objects in the Java heap. This is the default storage type. | COUNT | TinyLFU |
| binary | Stores entries as bytes[] in the Java heap. | COUNT or MEMORY | TinyLFU |
| off-heap | Stores entries as bytes[] in native memory outside the Java heap. | COUNT or MEMORY | LRU |
${CACHE_NAME}_CACHE_MEMORY_EVICTION_SIZE
Configures the size of the cache before eviction starts. Set the value to a number greater than zero.
- For COUNT, the size is the maximum number of entries the cache can hold before eviction starts.
- For MEMORY, the size is the maximum number of bytes the cache can take from memory before eviction starts. For example, a value of 10000000000 is 10 GB.
Try different cache sizes to determine the optimal setting. A cache size that is too large can cause Data Grid to run out of memory. At the same time, a cache size that is too small wastes available memory.
Note: If you configure a JDBC store, passivation is automatically enabled when you set the eviction size to a value that is greater than zero.
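For MEMORY eviction, the size is expressed in bytes. As a quick check of the 10 GB figure above (using decimal gigabytes):

```shell
# 10 decimal gigabytes expressed in bytes, the unit MEMORY eviction uses.
gb=10
eviction_size=$((gb * 1000 * 1000 * 1000))
echo "$eviction_size"
```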
${CACHE_NAME}_CACHE_MEMORY_EVICTION_STRATEGY
Controls how Data Grid performs eviction. You can set the following values:
| Strategy | Description |
|---|---|
| NONE | Data Grid does not evict entries. This is the default setting unless you configure eviction. |
| REMOVE | Data Grid removes entries from memory so that the cache does not exceed the configured size. This is the default setting when you configure eviction. |
| MANUAL | Data Grid does not perform eviction. Eviction takes place manually by invoking the evict() method from the Cache API. |
| EXCEPTION | Data Grid does not write new entries to the cache if doing so would exceed the configured size. Instead of writing new entries to the cache, Data Grid throws a ContainerFullException. |
${CACHE_NAME}_CACHE_MEMORY_OFF_HEAP_ADDRESS_COUNT
Specifies the number of pointers that are available in the hash map to prevent collisions when using OFFHEAP storage. Preventing collisions in the hash map improves performance.
Set the value to a number that is greater than the number of cache entries. By default, address-count is 2^20, or 1048576. The parameter is always rounded up to a power of 2.
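The round-up behavior can be sketched as follows; for example, a target of 1,500,000 entries rounds up to 2,097,152 pointers (the entry count here is illustrative):

```shell
# Round the desired address count up to the next power of 2, mirroring
# how the address-count parameter is normalized.
entries=1500000
address_count=1
while [ "$address_count" -lt "$entries" ]; do
  address_count=$((address_count * 2))
done
echo "$address_count"
```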
${CACHE_NAME}_CACHE_EXPIRATION_LIFESPAN
Specifies the maximum lifespan, in milliseconds, of a cache entry, after which the entry is expired cluster-wide. The default value is -1 (entries never expire).
${CACHE_NAME}_CACHE_EXPIRATION_MAX_IDLE
Specifies the maximum idle time, in milliseconds, that cache entries are maintained in the cache. If the idle time is exceeded, the entry is expired cluster-wide. The default value is -1 (expiration is disabled).
${CACHE_NAME}_CACHE_EXPIRATION_INTERVAL
Specifies the interval, in milliseconds, between runs to purge expired entries from memory and any cache stores. The default value is 5000. Set -1 to disable expiration purging.
${CACHE_NAME}_JDBC_STORE_TYPE
Sets the type of JDBC store to configure. You can set the following values:
- string
- binary
${CACHE_NAME}_JDBC_STORE_DATASOURCE
Defines the JNDI name of the datasource.
For example, MYCACHE_JDBC_STORE_DATASOURCE=java:jboss/datasources/ExampleDS
${CACHE_NAME}_KEYED_TABLE_PREFIX
Defines the prefix prepended to the cache name when composing the name of the cache entry table. The default value is ispn_entry.
${CACHE_NAME}_CACHE_INDEX
Sets the indexing mode of the cache. You can set the following values:
- NONE: This is the default.
- LOCAL
- ALL
${CACHE_NAME}_INDEXING_PROPERTIES
Specifies a comma-separated list of properties to pass to the indexing system.
For example, MYCACHE_INDEXING_PROPERTIES=default.directory_provider=ram
${CACHE_NAME}_CACHE_SECURITY_AUTHORIZATION_ENABLED
Enables authorization checks for this cache. The value is true or false (default).
${CACHE_NAME}_CACHE_SECURITY_AUTHORIZATION_ROLES
Sets the roles required to access this cache.
For example, MYCACHE_CACHE_SECURITY_AUTHORIZATION_ROLES=admin, reader, writer
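Putting the container-level and cache-level security settings together, a sketch for a cache named MYCACHE (the role names and permissions are illustrative):

```shell
# Container level: define roles and the permissions they carry.
CONTAINER_SECURITY_ROLES="admin=ALL,reader=READ,writer=WRITE"
# Cache level: enable authorization checks and restrict access to two roles.
MYCACHE_CACHE_SECURITY_AUTHORIZATION_ENABLED=true
MYCACHE_CACHE_SECURITY_AUTHORIZATION_ROLES="admin,writer"
echo "${MYCACHE_CACHE_SECURITY_AUTHORIZATION_ENABLED}:${MYCACHE_CACHE_SECURITY_AUTHORIZATION_ROLES}"
```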
${CACHE_NAME}_CACHE_PARTITION_HANDLING_WHEN_SPLIT
Configures the strategy for handling partitions between nodes in a cluster when network events isolate nodes from each other. Partitions function as independent clusters until Data Grid merges cache entries to re-form a single cluster. You can set the following values:
| Partition Handling Strategy | Description |
|---|---|
| ALLOW_READ_WRITES | Nodes from any partition can read or write cache entries. This is the default value. |
| DENY_READ_WRITES | Nodes enter degraded mode if one or more hash space segments in the partition have no owners (the owners are the number of cluster-wide replicas for cache entries), or if the partition has less than half the nodes from the most recent stable cluster topology. In degraded mode, only nodes in the same partition can read or write cache entries. All owners, or copies, of a cache entry must exist in the same partition, otherwise the read or write operation fails with an AvailabilityException. |
| ALLOW_READS | Nodes enter degraded mode as with the DENY_READ_WRITES strategy, but nodes from any partition can read cache entries. In degraded mode, only nodes in the same partition can write cache entries. All owners, or copies, of a cache entry must exist in the same partition, otherwise the write operation fails with an AvailabilityException. |
${CACHE_NAME}_CACHE_PARTITION_MERGE_POLICY
Configures how Data Grid resolves conflicts between cache entries when merging partitions. You can set the following values:
| Merge Policy | Description |
|---|---|
| NONE | Do not resolve conflicts when merging partitions. This is the default value. |
| PREFERRED_ALWAYS | Always use the preferredEntry. The preferredEntry is the primary replica of a cache entry that resides in the partition that contains the most nodes. If the number of nodes is equal between partitions, the preferredEntry is the cache entry that resides in the partition with the highest topology ID, which means that topology is more recent. |
| PREFERRED_NON_NULL | Use the preferredEntry if it has a value (non-null). If the preferredEntry does not have a value, use the first entry defined in otherEntries. |
| REMOVE_ALL | Remove entries (key and value) from the cache if conflicts exist. |
${CACHE_NAME}_STATE_TRANSFER_TIMEOUT
Sets the amount of time, in milliseconds, to wait for other cache instances in the cluster to transfer state to the cache. If other cache instances do not transfer state before the timeout occurs, the application throws an exception and aborts startup. The default value is 240000 (4 minutes).
You must use a custom template to set this environment variable. It does not take effect if you set the state transfer timeout in the default Data Grid for OpenShift templates.
15.4.4. Security Domain
SECDOMAIN_NAME
Defines additional security domains.
For example, SECDOMAIN_NAME=myDomain
SECDOMAIN_PASSWORD_STACKING
Enables the password stacking module and sets the useFirstPass option. The value is true or false (default).
SECDOMAIN_LOGIN_MODULE
Specifies a login module to use. The default value is UsersRoles.
SECDOMAIN_USERS_PROPERTIES
Specifies the properties file that contains user definitions. The default value is users.properties.
SECDOMAIN_ROLES_PROPERTIES
Specifies the properties file that contains role definitions. The default value is roles.properties.
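As a sketch, a custom security domain backed by properties files might be declared like this (the domain name and file names are illustrative, not defaults from the product):

```shell
# Hypothetical values: a custom domain with its own properties files and
# password stacking enabled.
SECDOMAIN_NAME="myDomain"
SECDOMAIN_LOGIN_MODULE="UsersRoles"
SECDOMAIN_USERS_PROPERTIES="custom-users.properties"
SECDOMAIN_ROLES_PROPERTIES="custom-roles.properties"
SECDOMAIN_PASSWORD_STACKING=true
echo "${SECDOMAIN_NAME}:${SECDOMAIN_LOGIN_MODULE}"
```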
15.4.5. Endpoints
Clients can access Data Grid via REST, Hot Rod, and Memcached endpoints that you define in the cache configuration.
Clients that run in the same project as Data Grid for OpenShift can access the cache via Hot Rod and receive a full cluster view. These clients can also use consistent hashing capabilities.
However, when clients run in a different project to Data Grid for OpenShift, they need to access the Data Grid cluster using an OpenShift service that exposes the Hot Rod endpoint externally. Depending on your network configuration, clients might not have access to some pods and must use BASIC client intelligence. In these cases, clients might require extra network hops to access data, which can increase network latency.
External access to clients running in OpenShift requires routes with passthrough encryption termination. Clients must also use BASIC client intelligence and the fully qualified domain name as a TLS/SNI host name. Alternatively, you can expose the Data Grid cluster behind a Load Balancer service that is externally available.
Configure endpoints with the following environment variables:
INFINISPAN_CONNECTORS
Defines a comma-separated list of connectors to configure. Defaults to hotrod,memcached,rest. If authorization or authentication is enabled on the cache, you should remove memcached because this protocol is inherently insecure.
MEMCACHED_CACHE
Sets the cache name for the Memcached connector. Defaults to memcached if you do not specify a cache name with the CACHE_NAMES environment variable.
HOTROD_SERVICE_NAME
Defines the name of the OpenShift service for the external Hot Rod connector.
The external Hot Rod connector is available with deployment configuration templates only if you define this environment variable. cache-service and datagrid-service do not use the external Hot Rod connector.
For example, if you set HOTROD_SERVICE_NAME=DATAGRID_APP_HOTROD, the Hot Rod external connector returns DATAGRID_APP_HOTROD:11333.
REST_STORE_AS_STRING
Specifies whether Data Grid saves entries as Java strings when they are written to the cache through the REST API. The value is true or false (default).
Set the value to true if you are upgrading the image from a previous version and plan to read persisted cache entries.
Note: In Data Grid version 7.1 and earlier, when you write entries to the cache through the REST endpoint, Data Grid stores them as Java strings. In Data Grid version 7.2 and later, Data Grid stores cache entries as bytes[] to enable data interoperability between clients and protocols.
If you upgrade Data Grid for OpenShift images to version 7.2 or later, Data Grid returns null values when you attempt to read cache entries that are persisted to a data store. To resolve the null values, set REST_STORE_AS_STRING=true.
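For example, an upgraded deployment that drops the insecure Memcached connector and keeps previously persisted REST entries readable might combine the variables like this (a sketch; the values are illustrative):

```shell
# Drop memcached because authentication is enabled on the cache.
INFINISPAN_CONNECTORS="hotrod,rest"
# Keep cache entries written as strings by a pre-7.2 image readable.
REST_STORE_AS_STRING=true
echo "${INFINISPAN_CONNECTORS} ${REST_STORE_AS_STRING}"
```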