Running Data Grid on OpenShift
Data Grid Documentation
Abstract
Chapter 1. Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure: flexibility to store different objects as key-value pairs.
- Grid-based data storage: designed to distribute and replicate data across clusters.
- Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: store, retrieve, and query data in the grid from different endpoints.
1.1. Data Grid Documentation
Documentation for Data Grid is available on the Red Hat customer portal.
1.2. Data Grid Downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Chapter 2. Getting Started with Data Grid Operator
Data Grid Operator lets you create, configure, and manage Data Grid clusters.
Prerequisites
- Create a Data Grid Operator subscription.
- Have an oc client.
2.1. Infinispan Custom Resource (CR)
Data Grid Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Data Grid clusters as complex units on OpenShift.
You configure Data Grid clusters running on OpenShift by modifying the Infinispan CR.
The minimal Infinispan CR for Data Grid clusters is as follows:
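A minimal sketch, using the infinispan.org/v1 API and the example-rhdatagrid cluster name that appears in later examples:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-rhdatagrid
spec:
  replicas: 2
```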
2.2. Creating Data Grid Clusters
Use Data Grid Operator to create clusters of two or more Data Grid nodes.
Procedure
1. Specify the number of Data Grid nodes in the cluster with spec.replicas in your Infinispan CR. For example, create a cr_minimal.yaml file.
2. Apply your Infinispan CR:

   $ oc apply -f cr_minimal.yaml

3. Watch Data Grid Operator create the Data Grid nodes.
Next Steps
Try changing the value of replicas: and watching Data Grid Operator scale the cluster up or down.
2.3. Verifying Data Grid Clusters
Review log messages to ensure that Data Grid nodes receive clustered views.
Procedure
Do either of the following:

- Retrieve the cluster view from logs.
- Retrieve the Infinispan CR for Data Grid Operator:

  $ oc get infinispan -o yaml

  The response indicates that Data Grid pods have received clustered views:

  conditions:
  - message: 'View: [example-rhdatagrid-0, example-rhdatagrid-1]'
    status: "True"
    type: wellFormed

Use oc wait with the wellFormed condition for automated scripts:

$ oc wait --for condition=wellFormed --timeout=240s infinispan/example-rhdatagrid
Chapter 3. Creating Data Grid Services
Data Grid services are stateful applications that provide flexible and robust in-memory data storage.
3.1. Cache Service
Cache service provides a volatile, low-latency data store that dramatically increases application response rates.
Cache service nodes:
- Synchronously distribute data across the cluster to ensure consistency.
- Maintain single copies of cache entries to reduce size.
- Store cache entries off-heap and use eviction for JVM efficiency.
- Ensure data consistency with a default partition handling configuration.
You can create multiple cache definitions with Cache service but only as copies of the default configuration.
If you update Cache service nodes with the Infinispan CR or update the version, you lose all data in the cache.
3.1.1. Cache Configuration
Cache service nodes use the following cache configuration:
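The numbered annotations below refer to a distributed cache definition. As a sketch of what that configuration looks like (the off-heap size value is illustrative, and attribute names should be checked against your Data Grid schema version):

```xml
<infinispan>
  <cache-container>
    <!-- (1) cache name, (2) synchronous distribution, (3) one owner per entry -->
    <distributed-cache name="default" mode="SYNC" owners="1">
      <!-- (4) off-heap storage, (5) eviction when the memory limit is reached -->
      <memory>
        <off-heap size="100000000" eviction="MEMORY" strategy="REMOVE"/>
      </memory>
      <!-- (6) conflict resolution strategy, (7) merge policy -->
      <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="REMOVE_ALL"/>
    </distributed-cache>
  </cache-container>
</infinispan>
```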
1. Names the cache instance as "default".
2. Uses synchronous distribution for storing data across the cluster.
3. Configures one replica for each cache entry across the cluster.
4. Stores cache entries as bytes in native memory (off-heap).
5. Removes old entries to make space when adding new entries.
6. Specifies a conflict resolution strategy that allows read and write operations for cache entries even if segment owners are in different partitions.
7. Specifies a merge policy that removes entries from the cache when Data Grid detects conflicts.
3.2. Data Grid Service
Data Grid service provides a configurable Data Grid server distribution for OpenShift.
- Use with advanced capabilities like cross-site replication as well as indexing and querying.
Remotely access Data Grid service clusters from Hot Rod or REST clients and dynamically create caches using any Data Grid cache mode and configuration.
Note: Data Grid does not provide default caches for Data Grid service nodes. However, you can use cache configuration templates to get started.
3.3. Creating Data Grid Services
Define the .spec.service.type resource to create Cache service and Data Grid service nodes with Data Grid Operator.
By default, Data Grid Operator creates Data Grid clusters configured as a Cache service.
Procedure
- Specify the service type for Data Grid clusters with spec.service.type in your Infinispan CR and then apply the changes.

  For example, create Data Grid service clusters as follows:

  spec:
    ...
    service:
      type: DataGrid
You cannot change .spec.service.type after you create Data Grid clusters.
For example, if you create a cluster of Cache service nodes, you cannot change the service type to Data Grid service. In this case you must create a new cluster with Data Grid service nodes in a different OpenShift namespace.
3.3.1. Cache Service Resources
1. Names the Data Grid cluster.
2. Specifies the number of nodes in the cluster.
3. Creates Cache service clusters.
4. Adds an authentication secret with user credentials.
5. Adds a custom encryption secret for secure connections.
6. Allocates resources to nodes.
7. Configures logging.
8. Configures services for external traffic.
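An Infinispan CR matching the annotations above might look like the following sketch; the secret names, resource values, and log level are placeholders, and field names should be checked against your operator version:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-rhdatagrid             # (1) cluster name
spec:
  replicas: 2                          # (2) number of nodes
  service:
    type: Cache                        # (3) Cache service clusters
  security:
    endpointSecretName: connect-secret # (4) authentication secret
    endpointEncryption:                # (5) custom encryption secret
      type: secret
      certSecretName: tls-secret
  container:                           # (6) resource allocation
    cpu: "1000m"
    memory: 1Gi
  logging:                             # (7) logging configuration
    categories:
      org.infinispan: info
  expose:                              # (8) external traffic
    type: LoadBalancer
```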
3.3.2. Data Grid Service Resources
1. Names the Data Grid cluster.
2. Specifies the number of nodes in the cluster.
3. Creates Data Grid service clusters.
4. Configures the size of the persistent volume.
5. Provides connection information for backup locations.
6. Adds an authentication secret with user credentials.
7. Adds a custom encryption secret for secure connections.
8. Allocates resources to nodes.
9. Configures logging.
10. Configures services for external traffic.
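Similarly, a Data Grid service CR matching the annotations above might look like this sketch; site names, URLs, secrets, and resource values are placeholders:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-rhdatagrid              # (1) cluster name
spec:
  replicas: 6                           # (2) number of nodes
  service:
    type: DataGrid                      # (3) Data Grid service clusters
    container:
      storage: 2Gi                      # (4) persistent volume size
    sites:                              # (5) backup locations
      local:
        name: LON
        expose:
          type: LoadBalancer
      locations:
        - name: NYC
          url: openshift://api.nyc.example.com:6443
          secretName: nyc-token
  security:
    endpointSecretName: connect-secret  # (6) authentication secret
    endpointEncryption:                 # (7) custom encryption secret
      type: secret
      certSecretName: tls-secret
  container:                            # (8) resource allocation
    cpu: "1000m"
    memory: 1Gi
  logging:                              # (9) logging configuration
    categories:
      org.infinispan: info
  expose:                               # (10) external traffic
    type: LoadBalancer
```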
Chapter 4. Stopping and Starting Data Grid Services
Gracefully shut down Data Grid clusters to avoid data loss.
Cache configuration
Both Cache service and Data Grid service store permanent cache definitions in persistent volumes so they are still available after cluster restarts.
Data
Data Grid service nodes can write all cache entries to persistent storage during cluster shutdown if you add cache stores.
You should configure the storage size for Data Grid service nodes to ensure that the persistent volume can hold all your data.
If the available container storage is less than the amount of memory available to Data Grid service nodes, Data Grid writes an exception to logs and data loss occurs during shutdown.
4.1. Gracefully Shutting Down Data Grid Clusters
- Set the value of replicas to 0 and apply the changes.

  spec:
    replicas: 0
4.2. Restarting Data Grid Clusters
- Set the value of spec.replicas to the same number of nodes that were in the cluster before you shut it down.

  For example, you shut down a cluster of 6 nodes. When you restart the cluster, you must set:

  spec:
    replicas: 6
This allows Data Grid to restore the distribution of data across the cluster. When all nodes in the cluster are running, you can then add or remove nodes.
Chapter 5. Adjusting Container Specifications
You can allocate CPU and memory resources, specify JVM options, and configure storage for Data Grid nodes.
5.1. JVM, CPU, and Memory Resources
When Data Grid Operator creates Data Grid clusters, it uses spec.container.cpu and spec.container.memory to:
- Ensure that OpenShift has sufficient capacity to run the Data Grid node. By default, Data Grid Operator requests 512Mi of memory and 0.5 cpu from the OpenShift scheduler.
- Constrain node resource usage. Data Grid Operator sets the values of cpu and memory as resource limits.
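A sketch of the container settings in an Infinispan CR (the values are illustrative):

```yaml
spec:
  container:
    cpu: "1000m"
    memory: 1Gi
```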
Garbage collection logging
By default, Data Grid Operator does not log garbage collection (GC) messages. You can optionally add the following JVM options to direct GC messages to stdout:
extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"
5.2. Storage Resources
Configure the storage size for Data Grid service nodes in your Infinispan CR.
By default, Data Grid Operator allocates 1Gi for storage for both Cache service and Data Grid service nodes. You can configure storage size only for Data Grid service nodes.
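Assuming the operator exposes storage under spec.service.container (an assumption about the CR schema; verify against your operator version), a sketch:

```yaml
spec:
  service:
    type: DataGrid
    container:
      storage: 2Gi
```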
Persistence
Data Grid service lets you configure Single File cache stores for data persistence:
<persistence>
<file-store />
</persistence>
5.2.1. Persistent Volume Claims
Data Grid Operator mounts persistent volumes at /opt/datagrid/server/data.
Persistent volume claims use the ReadWriteOnce (RWO) access mode.
Chapter 6. Creating Network Services
Network services provide access to Data Grid clusters for client connections.
6.1. Getting the Service for Internal Connections
By default, Data Grid Operator creates a service that provides access to Data Grid clusters from clients running in OpenShift.
This internal service has the same name as your Data Grid cluster, for example:
metadata:
name: example-rhdatagrid
Procedure
Check that the internal service is available as follows:
$ oc get services

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
example-rhdatagrid   ClusterIP   192.0.2.0    <none>        11222/TCP
6.2. Exposing Data Grid to External Clients
Expose Data Grid clusters to clients running outside OpenShift with external services.
Procedure
Specify an external service type with spec.expose.type in your Infinispan CR and then apply the changes.

spec:
  ...
  expose:                # (1)
    type: LoadBalancer   # (2)
    nodePort: 30000      # (3)

1. Exposes an external service.
2. Specifies either a LoadBalancer or NodePort service resource type.
3. Defines the port where the external service is exposed. If you do not define a port and the service type is NodePort, the platform selects a port to use. If the service type is LoadBalancer, the exposed port is 11222 by default.
- LoadBalancer: Use for OpenShift clusters where a load balancer service is available to handle external network traffic. You can then use the URL for the load balancer service for client connections. To access Data Grid with unencrypted Hot Rod client connections you must use a load balancer service.
- NodePort: Use for local OpenShift clusters.
Verification
- Check that the -external service is available:

  $ oc get services | grep external

  NAME                          TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
  example-rhdatagrid-external   LoadBalancer   192.0.2.24   <none>        11222/TCP
Chapter 7. Configuring Authentication
Application users must authenticate with Data Grid clusters. Data Grid Operator generates default credentials or you can add your own.
7.1. Default Credentials
Data Grid Operator generates base64-encoded default credentials stored in an authentication secret named example-rhdatagrid-generated-secret.
| Username | Description |
|---|---|
| developer | Default application user. |
| operator | Internal user that interacts with Data Grid clusters. |
7.2. Retrieving Credentials
Get credentials from authentication secrets to access Data Grid clusters.
Procedure
1. Retrieve credentials from authentication secrets, as in the following example:

   $ oc get secret example-rhdatagrid-generated-secret

2. Base64-decode credentials.
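As a sketch of the decode step: extract a value from the secret and pipe it through base64 -d. The identities.yaml key name below is an assumption about the secret layout; inspect the secret to find the actual keys.

```shell
# Extract and decode a credential value (key name is hypothetical):
#   oc get secret example-rhdatagrid-generated-secret \
#     -o jsonpath='{.data.identities\.yaml}' | base64 -d
# The decoding itself works as follows:
echo 'dGVzdHBhc3N3b3Jk' | base64 -d   # prints "testpassword"
```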
7.3. Adding Custom Credentials
Add custom credentials to an authentication secret.
Procedure
1. Create an identities.yaml file that contains credentials for application users and the operator user for Data Grid Operator, for example:

   credentials:
   - username: testuser
     password: testpassword
   - username: operator
     password: supersecretoperatorpassword

2. Create an authentication secret with identities.yaml as follows:

   $ oc create secret generic --from-file=identities.yaml connect-secret

3. Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.

   spec:
     ...
     security:
       endpointSecretName: connect-secret
Chapter 8. Securing Data Grid Connections
Encrypt connections between clients and Data Grid nodes with Red Hat OpenShift service certificates or custom TLS certificates.
8.1. Using Red Hat OpenShift Service Certificates
Data Grid Operator automatically generates TLS certificates signed by the Red Hat OpenShift service CA. You can use these certificates to encrypt remote client connections.
Procedure
- Set the following spec.security.endpointEncryption configuration in your Infinispan CR and then apply the changes.
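A sketch of the service-certificate configuration; the field names follow the infinispan.org/v1 schema as best understood here and should be checked against your operator version:

```yaml
spec:
  security:
    endpointEncryption:
      type: service
      certServiceName: service.beta.openshift.io
      certSecretName: example-rhdatagrid-cert-secret
```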
Data Grid Operator stores the certificates in a -cert-secret secret that is prefixed with the Data Grid cluster name, for example:
metadata:
name: example-rhdatagrid
The preceding cluster name results in a secret named example-rhdatagrid-cert-secret.
8.1.1. Red Hat OpenShift Service Certificates
If the Red Hat OpenShift service CA is available, Data Grid Operator automatically generates a certificate, tls.crt, and key, tls.key, in PEM format.
Service certificates use the internal DNS name of the Data Grid cluster as the common name (CN), for example:
Subject: CN = example-infinispan.mynamespace.svc
For this reason, service certificates can be fully trusted only inside OpenShift. If you want to encrypt connections with clients running outside OpenShift, you should use custom TLS certificates.
Certificates are valid for one year and are automatically replaced before they expire.
8.1.2. Retrieving TLS Certificates
Get TLS certificates from encryption secrets to create client trust stores.
- Retrieve tls.crt from encryption secrets as follows:

  $ oc get secret example-rhdatagrid-cert-secret \
    -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
8.2. Using Custom TLS Certificates
Use custom PKCS12 keystore or TLS certificate/key pairs to encrypt connections between clients and Data Grid clusters.
Prerequisites
Create either a keystore or certificate secret.
Procedure
1. Add the encryption secret to your OpenShift namespace, for example:

   $ oc apply -f tls_secret.yaml

2. Specify the encryption secret with spec.security.endpointEncryption in your Infinispan CR and then apply the changes.
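For example, assuming an encryption secret named tls-secret, the configuration might be:

```yaml
spec:
  security:
    endpointEncryption:
      type: secret
      certSecretName: tls-secret
```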
8.2.1. Certificate Secrets
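A certificate secret contains a TLS certificate/key pair in PEM format. A sketch, with an illustrative name and placeholder PEM content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
stringData:
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```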
8.2.2. Keystore Secrets
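A keystore secret contains a PKCS12 keystore together with its alias and password. A sketch with placeholder values; the expected entry names are an assumption to verify against your operator version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
stringData:
  alias: server
  password: changeme
data:
  keystore.p12: <base64-encoded PKCS12 keystore>
```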
Chapter 9. Monitoring Data Grid with Prometheus
Data Grid exposes a metrics endpoint that provides statistics and events to Prometheus.
9.1. Setting Up Prometheus
Set up Prometheus so it can authenticate with and monitor Data Grid clusters.
Prerequisites
- Install the Prometheus Operator.
- Create a running Prometheus instance.
Procedure
Add an authentication secret to your Prometheus namespace.
This secret allows Prometheus to authenticate with your Data Grid cluster. You can find Data Grid credentials in the authentication secret in your Data Grid Operator namespace.
Create a service monitor instance that configures Prometheus to monitor your Data Grid cluster.
1. Names the service monitor.
2. Specifies your Prometheus namespace.
3. Specifies the name of the authentication secret that has Data Grid credentials.
4. Specifies the name of the authentication secret that has Data Grid credentials.
5. Specifies that Data Grid endpoints use encryption. If you do not use TLS, remove spec.endpoints.scheme.
6. Specifies the Common Name (CN) of the TLS certificate for Data Grid encryption. If you use an OpenShift service certificate, the CN matches the metadata.name resource for your Data Grid cluster. If you do not use TLS, remove spec.endpoints.tlsConfig.
7. Specifies the Data Grid Operator namespace.
8. Specifies the name of the Data Grid cluster.
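The steps above can be sketched as follows; the resource names, namespaces, credentials, port name, and label selector are all hypothetical and depend on how the operator labels its services:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: datagrid-auth
  namespace: my-prometheus                 # (2) Prometheus namespace
type: Opaque
stringData:
  username: developer
  password: changeme
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: datagrid-monitoring                # (1) service monitor name
  namespace: my-prometheus                 # (2) Prometheus namespace
spec:
  endpoints:
    - port: infinispan
      basicAuth:
        username:
          name: datagrid-auth              # (3) authentication secret
          key: username
        password:
          name: datagrid-auth              # (4) authentication secret
          key: password
      scheme: https                        # (5) remove if not using TLS
      tlsConfig:                           # (6) remove if not using TLS
        serverName: example-rhdatagrid     #     CN of the TLS certificate
        insecureSkipVerify: true
  namespaceSelector:
    matchNames:
      - my-datagrid                        # (7) Data Grid Operator namespace
  selector:
    matchLabels:
      clusterName: example-rhdatagrid      # (8) Data Grid cluster name
```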
Chapter 10. Connecting to Data Grid Clusters
Connect to Data Grid via the REST or Hot Rod endpoints. You can then remotely create and modify cache definitions and store data across Data Grid clusters.
The examples in this section use $SERVICE_HOSTNAME to denote the service that provides access to your Data Grid cluster.
Clients running in OpenShift can specify the name of the internal service that Data Grid Operator creates.
Clients running outside OpenShift should specify hostnames according to the type of external service and provider. For example, if using a load balancer service on AWS, the service hostname could be:
.status.loadBalancer.ingress[0].hostname
On GCP or Azure, hostnames might be as follows:
.status.loadBalancer.ingress[0].ip
10.1. Invoking the Data Grid REST API
You can invoke the Data Grid REST API with any appropriate HTTP client.
For convenience, the following examples show how to invoke the REST API with curl using unencrypted connections. It is beyond the scope of this document to describe how to configure HTTP clients to use encryption.
Procedure
1. Open a remote shell to a Data Grid node, for example:

   $ oc rsh example-rhdatagrid

2. Cache service provides a default cache instance, but Data Grid service does not. Before you can store data with Data Grid service clusters, you must create a cache.
3. Put an entry in the cache.
4. Verify the entry.
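As a sketch, steps 2 through 4 might look like the following transcript. The cache name mycache, the credentials, and the configuration template are illustrative, and the paths assume the Data Grid v2 REST API:

```
$ curl -X POST -u developer:changeme \
    "http://$SERVICE_HOSTNAME:11222/rest/v2/caches/mycache?template=org.infinispan.DIST_SYNC"

$ curl -X POST -u developer:changeme -H 'Content-Type: text/plain' \
    -d 'world' "http://$SERVICE_HOSTNAME:11222/rest/v2/caches/mycache/hello"

$ curl -u developer:changeme \
    "http://$SERVICE_HOSTNAME:11222/rest/v2/caches/mycache/hello"
world
```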
10.2. Configuring Hot Rod Clients
Configure Hot Rod Java clients to connect to Data Grid clusters.
Hot Rod client ConfigurationBuilder
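A minimal ConfigurationBuilder sketch, assuming the infinispan-client-hotrod dependency and a reachable Data Grid cluster; the hostname, cache name, and credentials are placeholders:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodExample {
    public static void main(String[] args) {
        // Point the client at the Data Grid service and authenticate.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer()
                   .host("example-rhdatagrid") // $SERVICE_HOSTNAME
                   .port(11222)
               .security().authentication()
                   .username("developer")
                   .password("changeme");
        try (RemoteCacheManager manager = new RemoteCacheManager(builder.build())) {
            RemoteCache<String, String> cache = manager.getCache("mycache");
            cache.put("hello", "world");
        }
    }
}
```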
Hot Rod client properties
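The equivalent hotrod-client.properties sketch, with the same illustrative values:

```properties
infinispan.client.hotrod.server_list = example-rhdatagrid:11222
infinispan.client.hotrod.auth_username = developer
infinispan.client.hotrod.auth_password = changeme
```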
Chapter 11. Monitoring Data Grid Logs
Set logging categories to different message levels to monitor, debug, and troubleshoot Data Grid clusters.
11.1. Configuring Data Grid Logging
Procedure
1. Specify logging configuration with spec.logging in your Infinispan CR and then apply the changes.

   Note: The root logging category is org.infinispan and is INFO by default.

2. Retrieve logs from Data Grid nodes as required.

   $ oc logs -f $POD_NAME
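The spec.logging configuration might look like the following sketch; the categories and levels are illustrative:

```yaml
spec:
  logging:
    categories:
      org.infinispan: debug
      org.jgroups: debug
```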
11.2. Log Levels
Log levels indicate the nature and severity of messages.
| Log level | Description |
|---|---|
| trace | Provides detailed information about the running state of applications. This is the most verbose log level. |
| debug | Indicates the progress of individual requests or activities. |
| info | Indicates overall progress of applications, including lifecycle events. |
| warn | Indicates circumstances that can lead to error or degrade performance. |
| error | Indicates error conditions that might prevent operations or activities from being successful but do not prevent applications from running. |
Chapter 12. Configuring Cross-Site Replication
Set up cross-site replication to back up data between Data Grid clusters running in different locations.
For example, you use Data Grid Operator to manage a Data Grid cluster at a data center in London, LON. At another data center in New York City, NYC, you also use Data Grid Operator to manage a Data Grid cluster. In this case, you can add LON and NYC as backup locations for each other.
Cross-site replication functionality is currently Technology Preview. Contact Red Hat support for more information.
Prerequisites
- Ensure that a load balancer service is available for OpenShift. This service allows external access to OpenShift Container Platform clusters. See Configuring ingress cluster traffic using a load balancer.
12.1. Data Grid Cluster and Project Naming
Data Grid Operator expects Data Grid clusters in each site to have the same cluster names and be running in matching namespaces.
For example, in the LON site you create a Data Grid cluster with metadata.name: mydatagrid in an OpenShift project named "my-xsite". In this case you must create Data Grid clusters in other backup locations, such as NYC, with identical names in matching namespaces.
In effect, you must create Data Grid cluster names and OpenShift namespaces at each backup location that mirror one another.
12.2. Creating Service Account Tokens
Traffic between independent OpenShift installations occurs through the Kubernetes API. OpenShift Container Platform clusters use tokens to authenticate with and access the API.
To enable cross-site replication between Data Grid clusters you must add tokens to the namespace on each site. For example, LON needs a secret with the token for NYC. NYC also needs a secret with the token for LON.
Procedure
1. Create service accounts on each OpenShift instance.

   For example, create a service account on LON as follows:

   $ oc create sa lon
   serviceaccount/lon created

2. Add the view role to service accounts.

   For example, if your Data Grid cluster runs in the "my-xsite" namespace, add the view role to the service account on LON as follows:

   $ oc policy add-role-to-user view system:serviceaccount:my-xsite:lon

3. Retrieve tokens from each service account.

   The following example shows the service account token for LON:

   $ oc sa get-token lon
   eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...

4. Create secrets that contain service account tokens for the backup locations.

   a. Log in to OpenShift Container Platform at NYC.
   b. Add the service account token to a lon-token secret:

      $ oc create secret generic lon-token --from-literal=token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...

   c. Repeat the preceding steps to create a nyc-token secret on LON.
After you add service account tokens to each backup location, the OpenShift instances can authenticate with each other so that Data Grid clusters can form cross-site views.
12.3. Adding Backup Locations to Data Grid Clusters
Configure Data Grid clusters as backup locations so that they can communicate over a dedicated JGroups transport channel for replicating data.
Procedure
1. Configure Data Grid clusters at each site with the Infinispan CR as necessary.

   For example, create lon.yaml to configure LON and nyc.yaml to configure NYC. Both configurations must include the following:

   - .spec.service.sites.local names the local site for Data Grid clusters.
   - .spec.service.sites.locations provides the location of all site masters. Data Grid nodes use this information to connect with each other and form cross-site views.

2. Instantiate Data Grid clusters at each site, for example:

   a. Apply the Infinispan CR for LON.

      $ oc apply -f lon.yaml

   b. Log in to OpenShift Container Platform at NYC.
   c. Apply the Infinispan CR for NYC.

      $ oc apply -f nyc.yaml

3. Verify that Data Grid clusters form a cross-site view.

   For example, do the following on LON:

   $ oc logs example-rhdatagrid-0 | grep x-site
   INFO [org.infinispan.XSITE] (jgroups-5,example-rhdatagrid-0-<id>) ISPN000439: Received new x-site view: [NYC]
   INFO [org.infinispan.XSITE] (jgroups-7,example-rhdatagrid-0-<id>) ISPN000439: Received new x-site view: [NYC, LON]
12.3.1. Cross-Site Replication Resources
1. Specifies Data Grid service. Data Grid supports cross-site replication with Data Grid service clusters only.
2. Names the local site for a Data Grid cluster.
3. Specifies LoadBalancer as the service that handles communication between backup locations.
4. Provides connection information for all backup locations.
5. Specifies a backup location that matches .spec.service.sites.local.name.
6. Specifies the URL of the Kubernetes API for the backup location.
7. Specifies the secret that contains the service account token for the backup site.
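Putting the annotations together, a sketch of the cross-site portion of an Infinispan CR for the LON site; the API URLs and secret names are placeholders:

```yaml
spec:
  service:
    type: DataGrid                      # (1) Data Grid service only
    sites:
      local:
        name: LON                       # (2) local site name
        expose:
          type: LoadBalancer            # (3) service for cross-site traffic
      locations:                        # (4) all backup locations
        - name: LON                     # (5) matches .spec.service.sites.local.name
          url: openshift://api.lon.example.com:6443   # (6) Kubernetes API URL
          secretName: lon-token         # (7) service account token secret
        - name: NYC
          url: openshift://api.nyc.example.com:6443
          secretName: nyc-token
```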
Chapter 13. Reference
Find useful information for Data Grid clusters that you create with Data Grid Operator.
13.1. Network Services
Internal service
- Allow Data Grid nodes to discover each other and form clusters.
- Provide access to Data Grid endpoints from clients in the same OpenShift namespace.
| Service | Port | Protocol | Description |
|---|---|---|---|
| <cluster_name> | 11222 | TCP | Internal access to Data Grid endpoints |
| <cluster_name>-ping | 8888 | TCP | Cluster discovery |
External service
Provides access to Data Grid endpoints from clients outside OpenShift or in different namespaces.
You must create the external service with Data Grid Operator. It is not available by default.
| Service | Port | Protocol | Description |
|---|---|---|---|
| <cluster_name>-external | 11222 | TCP | External access to Data Grid endpoints. |
Cross-site service
Allows Data Grid to back up data between clusters in different locations.
| Service | Port | Protocol | Description |
|---|---|---|---|
| <cluster_name>-site | 7900 | TCP | JGroups RELAY2 channel for cross-site communication. |
13.2. Data Grid Operator Upgrades
Data Grid Operator upgrades Data Grid when new versions become available.
To upgrade Data Grid clusters, Data Grid Operator checks the version of the image for Data Grid nodes. If Data Grid Operator determines that a new version of the image is available, it gracefully shuts down all nodes, applies the new image, and then restarts the nodes.
On Red Hat OpenShift, the Operator Lifecycle Manager (OLM) enables upgrades for Data Grid Operator. When you install Data Grid Operator, you select either Automatic or Manual updates with the Approval Strategy. This determines how Data Grid Operator upgrades clusters. See the OpenShift documentation for more information.
13.3. Technology Preview
Technology Preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information, see Red Hat Technology Preview Features Support Scope.