Data Grid Operator Guide
Create Data Grid clusters on OpenShift
Abstract
Red Hat Data Grid
Data Grid is a high-performance, distributed in-memory data store.
- Schemaless data structure: Flexibility to store different objects as key-value pairs.
- Grid-based data storage: Designed to distribute and replicate data across clusters.
- Elastic scaling: Dynamically adjust the number of nodes to meet demand without service disruption.
- Data interoperability: Store, retrieve, and query data in the grid from different endpoints.
Data Grid documentation
Documentation for Data Grid is available on the Red Hat customer portal.
Data Grid downloads
Access the Data Grid Software Downloads on the Red Hat customer portal.
You must have a Red Hat account to access and download Data Grid software.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Data Grid Operator
Data Grid Operator provides operational intelligence and reduces management complexity for deploying Data Grid on Kubernetes and Red Hat OpenShift.
1.1. Data Grid Operator deployments
When you install Data Grid Operator, it extends the Kubernetes API with Custom Resource Definitions (CRDs) for deploying and managing Data Grid clusters on Red Hat OpenShift.
To interact with Data Grid Operator, OpenShift users apply Custom Resources (CRs) through the OpenShift Web Console or oc client. Data Grid Operator listens for Infinispan CRs and automatically provisions native resources, such as StatefulSets and Secrets, that your Data Grid deployment requires. Data Grid Operator also configures Data Grid services according to the specifications in Infinispan CRs, including the number of pods for the cluster and backup locations for cross-site replication.
Figure 1.1. Custom resources
1.2. Cluster management
A single Data Grid Operator installation can manage multiple clusters with different Data Grid versions in separate namespaces. Each time a user applies CRs to modify the deployment, Data Grid Operator applies the changes globally to all Data Grid clusters.
Figure 1.2. Operator-managed clusters
1.3. Resource reconciliation
Data Grid Operator reconciles custom resources such as the Cache CR with resources on your Data Grid cluster.
Bidirectional reconciliation synchronizes your CRs with changes that you make to Data Grid resources through the Data Grid Console, command line interface (CLI), or other client applications, and vice versa. For example, if you create a cache through the Data Grid Console, Data Grid Operator adds a declarative Kubernetes representation.
To perform reconciliation, Data Grid Operator creates a listener pod for each Data Grid cluster that detects modifications to Infinispan resources.
Notes about reconciliation
- When you create a cache through the Data Grid Console, CLI, or other client application, Data Grid Operator creates a corresponding Cache CR with a unique name that conforms to the Kubernetes naming policy.
- Declarative Kubernetes representations of Data Grid resources that Data Grid Operator creates with the listener pod are linked to Infinispan CRs. Deleting Infinispan CRs removes any associated resource declarations.
Chapter 2. Installing the native Data Grid CLI as a client plugin
Data Grid provides a command line interface (CLI) compiled to a native executable that you can install as a plugin for oc clients. You can then use your oc client to:
- Create Data Grid Operator subscriptions and remove Data Grid Operator installations.
- Set up Data Grid clusters and configure services.
- Work with Data Grid resources via remote shells.
2.1. Installing the native Data Grid CLI plugin
Install the native Data Grid Command Line Interface (CLI) as a plugin for oc clients.
Prerequisites
- Have an oc client.
- Download the native Data Grid CLI distribution from the Data Grid software downloads.
Procedure
- Extract the .zip archive for the native Data Grid CLI distribution.
- Copy the native executable, or create a hard link, to a file named "kubectl-infinispan", for example:
  cp redhat-datagrid-cli kubectl-infinispan
- Add kubectl-infinispan to your PATH.
- Verify that the CLI is installed.
  oc plugin list
  The following compatible plugins are available:
    /path/to/kubectl-infinispan
- Use the oc infinispan --help command to view available commands.
  oc infinispan --help
2.2. kubectl-infinispan command reference
This topic provides details about the kubectl-infinispan plugin for oc clients.
Use the --help argument to view the complete list of available options and descriptions for each command.
For example, oc infinispan create cluster --help prints all command options for creating Data Grid clusters.
| Command | Description |
|---|---|
| oc infinispan install | Creates Data Grid Operator subscriptions and installs into the global namespace by default. |
| oc infinispan create cluster | Creates Data Grid clusters. |
| oc infinispan get clusters | Displays running Data Grid clusters. |
| oc infinispan shell | Starts an interactive remote shell session on a Data Grid cluster. |
| oc infinispan delete cluster | Removes Data Grid clusters. |
| oc infinispan uninstall | Removes Data Grid Operator installations and all managed resources. |
Chapter 3. Installing Data Grid Operator
Install Data Grid Operator into an OpenShift namespace to create and manage Data Grid clusters.
3.1. Installing Data Grid Operator on Red Hat OpenShift
Create subscriptions to Data Grid Operator on OpenShift so you can install different Data Grid versions and receive automatic updates.
Automatic updates apply to Data Grid Operator first and then to each Data Grid node. Data Grid Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before going on to the next node.
Prerequisites
- Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials.
- Have an OpenShift project for Data Grid Operator if you plan to install it into a specific namespace.
Procedure
- Log in to the OpenShift Web Console.
- Navigate to OperatorHub.
- Find and select Data Grid Operator.
- Select Install and continue to Create Operator Subscription.
Specify options for your subscription.
- Installation Mode: Install Data Grid Operator into a Specific namespace or All namespaces.
- Update Channel: Get updates for Data Grid Operator 8.4.x.
- Approval Strategies: Automatically install updates from the 8.4.x channel or require approval before installation.
- Select Subscribe to install Data Grid Operator.
- Navigate to Installed Operators to verify the Data Grid Operator installation.
3.2. Installing Data Grid Operator with the native CLI plugin
Install Data Grid Operator with the native Data Grid CLI plugin, kubectl-infinispan.
Prerequisites
- Have kubectl-infinispan on your PATH.
Procedure
- Run the oc infinispan install command to create Data Grid Operator subscriptions, for example:
  oc infinispan install --channel=8.4.x --source=redhat-operators --source-namespace=openshift-marketplace
- Verify the installation.
  oc get pods -n openshift-operators | grep infinispan-operator
  NAME                       READY  STATUS
  infinispan-operator-<id>   1/1    Running
Use oc infinispan install --help for command options and descriptions.
3.3. Installing Data Grid Operator with an OpenShift client
You can use the oc client to create Data Grid Operator subscriptions as an alternative to installing through the OperatorHub or with the native Data Grid CLI.
Prerequisites
- Have an oc client.
Procedure
- Set up projects.
  - Create a project for Data Grid Operator.
  - If you want Data Grid Operator to control a specific Data Grid cluster only, create a project for that cluster.
  oc new-project ${INSTALL_NAMESPACE}
  oc new-project ${WATCH_NAMESPACE}
- Create an OperatorGroup resource that controls either all Data Grid clusters or a specific Data Grid cluster.
- Create a subscription for Data Grid Operator.
  Note: If you want to manually approve updates from the 8.4.x channel, change the value of the spec.installPlanApproval field to Manual.
- Verify the installation.
  oc get pods -n ${INSTALL_NAMESPACE}
  NAME                       READY  STATUS
  infinispan-operator-<id>   1/1    Running
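The OperatorGroup and Subscription resources described in this procedure might be sketched as follows. This is a non-authoritative example: the metadata names and the Operator package name (datagrid) are assumptions, while the channel, source, and source namespace match the values used elsewhere in this guide.

```yaml
# OperatorGroup that controls a specific Data Grid cluster namespace.
# Omit spec.targetNamespaces to control all namespaces instead.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: datagrid            # illustrative name
  namespace: ${INSTALL_NAMESPACE}
spec:
  targetNamespaces:
    - ${WATCH_NAMESPACE}
---
# Subscription for Data Grid Operator on the 8.4.x update channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: datagrid-operator   # illustrative name
  namespace: ${INSTALL_NAMESPACE}
spec:
  channel: 8.4.x
  installPlanApproval: Automatic   # change to Manual to approve updates yourself
  name: datagrid                   # assumed package name in the catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply both resources with oc apply before verifying the installation.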
Chapter 4. Creating Data Grid clusters
Create Data Grid clusters running on OpenShift with the Infinispan CR or with the native Data Grid CLI plugin for oc clients.
4.1. Infinispan custom resource (CR)
Data Grid Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Data Grid clusters as complex units on OpenShift.
Data Grid Operator listens for Infinispan Custom Resources (CR) that you use to instantiate and configure Data Grid clusters and manage OpenShift resources, such as StatefulSets and Services.
Infinispan CR
| Field | Description |
|---|---|
| apiVersion | Declares the version of the Infinispan API. |
| kind | Declares the Infinispan CR. |
| metadata.name | Specifies a name for your Data Grid cluster. |
| spec.replicas | Specifies the number of pods in your Data Grid cluster. |
| spec.service.type | Specifies the type of Data Grid service to create. |
| spec.version | Specifies the Data Grid Server version of your cluster. |
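Putting these fields together, a minimal Infinispan CR might look like the following sketch; the cluster name and version number are illustrative:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan      # illustrative cluster name
spec:
  replicas: 2           # number of pods in the cluster
  service:
    type: DataGrid      # DataGrid or Cache
  version: 8.4.6-1      # illustrative Data Grid Server version
```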
4.2. Creating Data Grid clusters
Create Data Grid clusters with the native CLI plugin, kubectl-infinispan.
Prerequisites
- Install Data Grid Operator.
- Have kubectl-infinispan on your PATH.

Procedure
- Run the oc infinispan create cluster command. For example, create a Data Grid cluster with three pods as follows:
  oc infinispan create cluster --replicas=3 -Pservice.type=DataGrid infinispan
  Tip: Add the --version argument to control the Data Grid version of your cluster. For example, --version=8.4.6-1. If you do not specify the version, Data Grid Operator creates a cluster with the latest supported Data Grid version.
- Watch Data Grid Operator create the Data Grid pods.
  oc get pods -w
Next steps
After you create a Data Grid cluster, use the oc client to apply changes to the Infinispan CR and configure your Data Grid service.
You can also delete Data Grid clusters with kubectl-infinispan and re-create them as required.
oc infinispan delete cluster infinispan
4.3. Verifying Data Grid cluster views
Confirm that Data Grid pods have successfully formed clusters.
Prerequisites
- Create at least one Data Grid cluster.
Procedure
- Retrieve the Infinispan CR for Data Grid Operator.
  oc get infinispan -o yaml
  The response indicates that Data Grid pods have received clustered views, as in the following example:
  conditions:
  - message: 'View: [infinispan-0, infinispan-1]'
    status: "True"
    type: wellFormed
Do the following for automated scripts:
oc wait --for condition=wellFormed --timeout=240s infinispan/infinispan
Retrieving cluster view from logs
You can also get the cluster view from Data Grid logs as follows:
oc logs infinispan-0 | grep ISPN000094
4.4. Modifying Data Grid clusters
Configure Data Grid clusters by providing Data Grid Operator with a custom Infinispan CR.
Prerequisites
- Install Data Grid Operator.
- Create at least one Data Grid cluster.
- Have an oc client.

Procedure
- Create a YAML file that defines your Infinispan CR. For example, create a my_infinispan.yaml file that changes the number of Data Grid pods to two.
- Apply your Infinispan CR.
  oc apply -f my_infinispan.yaml
- Watch Data Grid Operator scale the Data Grid pods.
  oc get pods -w
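The my_infinispan.yaml file from this procedure might look like the following sketch; the cluster name is illustrative and must match your existing Infinispan CR:

```yaml
# my_infinispan.yaml: change the cluster to two pods.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan   # illustrative; must match your cluster name
spec:
  replicas: 2        # new number of Data Grid pods
```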
4.5. Stopping and starting Data Grid clusters
Stop and start Data Grid pods in a graceful, ordered fashion to correctly preserve cluster state.
Clusters of Data Grid service pods must restart with the same number of pods that existed before shutdown. This allows Data Grid to restore the distribution of data across the cluster. After Data Grid Operator fully restarts the cluster you can safely add and remove pods.
Procedure
- Change the spec.replicas field to 0 to stop the Data Grid cluster.
  spec:
    replicas: 0
- Ensure you have the correct number of pods before you restart the cluster.
  oc get infinispan infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'
- Change the spec.replicas field to the same number of pods to restart the Data Grid cluster.
  spec:
    replicas: 6
Chapter 5. Configuring Data Grid clusters
Apply custom Data Grid configuration to clusters that Data Grid Operator manages.
5.1. Applying custom configuration to Data Grid clusters
Add Data Grid configuration to a ConfigMap and make it available to Data Grid Operator. Data Grid Operator can then apply the custom configuration to your Data Grid cluster.
Data Grid Operator applies default configuration on top of your custom configuration to ensure it can continue to manage your Data Grid clusters.
Be careful when applying custom configuration outside the cache-container element or field. You can apply custom configuration to underlying Data Grid Server mechanisms such as endpoints, security realms, and cluster transport. Changing this configuration can result in errors and service downtime for your Data Grid deployment.
Use the Data Grid Helm chart to deploy clusters of fully configurable Data Grid Server instances on OpenShift.
Prerequisites
- Have valid Data Grid configuration in XML, YAML, or JSON format.
Procedure
- Add Data Grid configuration to an infinispan-config.[xml|yaml|json] key in the data field of your ConfigMap.
- Create the ConfigMap from your YAML file.
  oc apply -f cluster-config.yaml
- Specify the name of the ConfigMap with the spec.configMapName field in your Infinispan CR and then apply the changes.
  spec:
    configMapName: "cluster-config"
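As a sketch, a ConfigMap with a YAML-format configuration key might look like this; the ConfigMap name matches the procedure above, while the cache definition itself is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config
data:
  # The key name must follow the infinispan-config.[xml|yaml|json] pattern.
  infinispan-config.yaml: |
    infinispan:
      cacheContainer:
        caches:
          mycache:                 # illustrative cache name
            distributedCache:
              mode: "SYNC"
```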
Next steps
If your cluster is already running, Data Grid Operator restarts it to apply the configuration. Each time you modify the Data Grid configuration in the ConfigMap, Data Grid Operator detects the updates and restarts the cluster to apply the changes.
5.2. Custom Data Grid configuration
You can add Data Grid configuration to a ConfigMap in XML, YAML, or JSON format.
5.2.1. Cache template
You can define cache templates in XML, YAML, or JSON format.
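As one sketch in YAML format, a cache template inside the cache-container section might look like the following; the template name is illustrative, and the exact element names should be checked against the Infinispan configuration schema for your version:

```yaml
infinispan:
  cacheContainer:
    caches:
      base-template:                     # illustrative template name
        distributedCacheConfiguration:   # configuration template, not a runtime cache
          mode: "SYNC"
```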
5.2.2. Logging configuration
You can also include Apache Log4j configuration in XML format as part of your ConfigMap.
Use the spec.logging.categories field in your Infinispan CR to adjust logging levels for Data Grid clusters. Add Apache Log4j configuration only if you require advanced file-based logging capabilities.
5.3. Securing custom Data Grid configuration
Securely define and store custom Data Grid Server configuration. To protect sensitive text strings such as passwords, add the entries in a credential store rather than directly in the Data Grid Server configuration.
Prerequisites
- Have a valid Data Grid configuration in XML, YAML, or JSON format.
Procedure
- Create a credential store Secret file. Use the data field to specify the credentials and their aliases.
- Apply your Secret file.
  oc apply -f user-secret.yaml
- Open the Infinispan CR for editing.
- In the spec.security.credentialStoreSecretName field, specify the name of the credential store secret.
  spec:
    security:
      credentialStoreSecretName: user-secret
- Apply the changes.
- Open your Data Grid Server configuration for editing.
- Add a credential-reference to your configuration.
  - Specify credentials as the name of the store.
  - Specify the alias attribute as one of the keys defined in your credential secret.
  <credential-store>
    <credential-reference store="credentials" alias="postgres_cred"/>
  </credential-store>
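The user-secret.yaml file from this procedure might look like the following sketch; the alias keys and values are illustrative, with postgres_cred matching the alias referenced in the server configuration above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: user-secret       # referenced by spec.security.credentialStoreSecretName
type: Opaque
stringData:
  postgres_cred: changeme   # alias: credential pairs; values are placeholders
  ldap_cred: changeme
```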
Chapter 6. Upgrading Data Grid clusters
Data Grid Operator lets you upgrade Data Grid clusters from one version to another without downtime or data loss.
Hot Rod rolling upgrades are available as a technology preview feature.
6.1. Technology preview features
Technology preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using technology preview features or capabilities for production. These features provide early access to upcoming product features, which enables you to test functionality and provide feedback during the development process.
For more information, see Red Hat Technology Preview Features Support Scope.
6.2. Data Grid cluster upgrades
The spec.upgrades.type field controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. There are two types of cluster upgrade:
- Shutdown: Upgrades Data Grid clusters with service downtime. This is the default upgrade type.
- HotRodRolling: Upgrades Data Grid clusters without service downtime.
Shutdown upgrades
To perform a shutdown upgrade, Data Grid Operator does the following:
- Gracefully shuts down the existing cluster.
- Removes the existing cluster.
- Creates a new cluster with the target version.
Hot Rod rolling upgrades
To perform a Hot Rod rolling upgrade, Data Grid Operator does the following:
- Creates a new Data Grid cluster with the target version that runs alongside your existing cluster.
- Creates a remote cache store to transfer data from the existing cluster to the new cluster.
- Redirects all clients to the new cluster.
- Removes the existing cluster when all data and client connections are transferred to the new cluster.
You should not perform Hot Rod rolling upgrades with caches that enable passivation with persistent cache stores. In the event that the upgrade does not complete successfully, passivation can result in data loss when Data Grid Operator rolls back the target cluster.
If your cache configuration enables passivation you should perform a shutdown upgrade.
6.3. Upgrading Data Grid clusters with downtime
Upgrading Data Grid clusters with downtime results in service disruption but does not require any additional capacity.
Prerequisites
- The Data Grid Operator version you have installed supports the Data Grid target version.
- If required, configure a persistent cache store to preserve your data during the upgrade.
  Important: At the start of the upgrade process Data Grid Operator shuts down your existing cluster. This results in data loss if you do not configure a persistent cache store.
Procedure
- Specify the Data Grid version number in the spec.version field.
- Ensure that Shutdown is set as the value for the spec.upgrades.type field, which is the default.
  spec:
    version: 8.4.6-1
    upgrades:
      type: Shutdown
- Apply your changes, if necessary.
When a new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade.
6.4. Performing Hot Rod rolling upgrades for Data Grid clusters
Performing Hot Rod rolling upgrades lets you move to a new Data Grid version without service disruption. However, this upgrade type requires additional capacity and temporarily results in two Data Grid clusters with different versions running concurrently.
Prerequisite
- The Data Grid Operator version you have installed supports the Data Grid target version.
Procedure
- Specify the Data Grid version number in the spec.version field.
- Specify HotRodRolling as the value for the spec.upgrades.type field.
  spec:
    version: 8.4.6-1
    upgrades:
      type: HotRodRolling
- Apply your changes.
When a new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade.
6.4.1. Recovering from a failed Hot Rod rolling upgrade
You can roll back a failed Hot Rod rolling upgrade to the previous version if the original cluster is still present.
Prerequisites
- Hot Rod rolling upgrade is in progress and the initial Data Grid cluster is present.
Procedure
- Ensure the Hot Rod rolling upgrade is in progress.
  oc get infinispan <cr_name> -o yaml
  The status.hotRodRollingUpgradeStatus field must be present.
- Update the spec.version field of your Infinispan CR to the original cluster version defined in status.hotRodRollingUpgradeStatus.
  Data Grid Operator deletes the newly created cluster.
Chapter 7. Setting up Data Grid services
Use Data Grid Operator to create clusters of either Cache service or Data Grid service pods.
7.1. Service types
Services are stateful applications, based on the Data Grid Server image, that provide flexible and robust in-memory data storage. When you create Data Grid clusters you specify either DataGrid or Cache as the service type with the spec.service.type field.
- DataGrid service type: Deploy Data Grid clusters with full configuration and capabilities.
- Cache service type: Deploy Data Grid clusters with minimal configuration.
Red Hat recommends the DataGrid service type for clusters because it lets you:
- Back up data across global clusters with cross-site replication.
- Create caches with any valid configuration.
- Add file-based cache stores to save data in a persistent volume.
- Query values across caches using the Data Grid Query API.
- Use advanced Data Grid features and capabilities.
The Cache service type was designed to provide a convenient way to create a low-latency data store with minimal configuration. Additional development on the Infinispan CRD has shown that the DataGrid CR offers a better approach to achieving this goal, ultimately giving users more choice and less deployment overhead. For this reason, the Cache service type is planned for removal in the next version of the Infinispan CRD and is no longer under active development.
The DataGrid service type continues to benefit from new features and improved tooling to automate complex operations such as cluster upgrades and data migration.
7.2. Creating Data Grid service pods
To use custom cache definitions along with Data Grid capabilities such as cross-site replication, create clusters of Data Grid service pods.
Procedure
- Create an Infinispan CR that sets spec.service.type: DataGrid and configures any other Data Grid service resources.
  Important: You cannot change the spec.service.type field after you create pods. To change the service type, you must delete the existing pods and create new ones.
- Apply your Infinispan CR to create the cluster.
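An Infinispan CR for Data Grid service pods might be sketched as follows; the cluster name, replica count, and storage amount are illustrative:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan        # illustrative cluster name
spec:
  replicas: 2
  service:
    type: DataGrid        # cannot be changed after pods are created
    container:
      storage: 2Gi        # illustrative storage allocation for service pods
```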
7.2.1. Data Grid service CR
This topic describes the Infinispan CR for Data Grid service pods.
| Field | Description |
|---|---|
| metadata.name | Names your Data Grid cluster. |
| metadata.annotations.monitoring | Automatically creates a ServiceMonitor for your cluster. |
| spec.replicas | Specifies the number of pods in your cluster. |
| spec.version | Specifies the Data Grid Server version of your cluster. |
| spec.upgrades.type | Controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. |
| spec.service.type | Configures the type of Data Grid service. A value of DataGrid creates a cluster with Data Grid service pods. |
| spec.service.container | Configures the storage resources for Data Grid service pods. |
| spec.service.sites | Configures cross-site replication. |
| spec.security.endpointSecretName | Specifies an authentication secret that contains Data Grid user credentials. |
| spec.security.endpointEncryption | Specifies TLS certificates and keystores to encrypt client connections. |
| spec.container | Specifies JVM, CPU, and memory resources for Data Grid pods. |
| spec.logging | Configures Data Grid logging categories. |
| spec.expose | Controls how Data Grid endpoints are exposed on the network. |
| spec.configMapName | Specifies a ConfigMap that contains Data Grid configuration. |
| spec.configListener | Creates a listener pod in each Data Grid cluster that allows Data Grid Operator to reconcile server-side modifications with Data Grid resources. The listener pod is enabled by default. |
| spec.configListener.logging.level | Configures the logging level for the listener pod. |
| spec.affinity | Configures anti-affinity strategies that guarantee Data Grid availability. |
7.3. Allocating storage resources
You can allocate storage for Data Grid service pods but not Cache service pods.
By default, Data Grid Operator allocates 1Gi for the persistent volume claim. However, you should adjust the amount of storage available to Data Grid service pods so that Data Grid can preserve cluster state during shutdown.
If available container storage is less than the amount of available memory, data loss can occur.
Procedure
- Allocate storage resources with the spec.service.container.storage field.
- Configure either the ephemeralStorage field or the storageClassName field as required.
  Note: These fields are mutually exclusive. Add only one of them to your Infinispan CR.
- Apply the changes.
Ephemeral storage
Name of a StorageClass object
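The two variants above might be sketched as follows; the storage amount and storage class name are illustrative:

```yaml
# Ephemeral storage: data in storage is deleted when the cluster restarts.
spec:
  service:
    type: DataGrid
    container:
      storage: 2Gi
      ephemeralStorage: true
---
# Name of a StorageClass object to use for the persistent volume claim.
spec:
  service:
    type: DataGrid
    container:
      storage: 2Gi
      storageClassName: my-storage-class   # must be an existing StorageClass
```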
| Field | Description |
|---|---|
| spec.service.container.storage | Specifies the amount of storage for Data Grid service pods. |
| spec.service.container.ephemeralStorage | Defines whether storage is ephemeral or permanent. Set the value to true to use ephemeral storage, which means all data in storage is deleted when clusters shut down or restart. The default value is false, which means storage is permanent. |
| spec.service.container.storageClassName | Specifies the name of a StorageClass object to use for the persistent volume claim (PVC). If you include this field, you must specify an existing storage class. |
7.3.1. Persistent volume claims
Data Grid Operator creates a persistent volume claim (PVC) and mounts container storage at /opt/infinispan/server/data.
Caches
When you create caches, Data Grid permanently stores their configuration so your caches are available after cluster restarts. This applies to both Cache service and Data Grid service pods.
Data
Data is always volatile in clusters of Cache service pods. When you shut down the cluster, you permanently lose the data.
Use a file-based cache store, by adding the <file-store/> element to your Data Grid cache configuration, if you want Data Grid service pods to persist data during cluster shutdown.
7.4. Allocating CPU and memory
Allocate CPU and memory resources to Data Grid pods with the Infinispan CR.
Data Grid Operator requests 1Gi of memory from the OpenShift scheduler when creating Data Grid pods. CPU requests are unbounded by default.
Procedure
- Allocate the number of CPU units with the spec.container.cpu field.
- Allocate the amount of memory, in bytes, with the spec.container.memory field.
  The cpu and memory fields have values in the format of <limit>:<requests>. For example, cpu: "2000m:1000m" limits pods to a maximum of 2000m of CPU and requests 1000m of CPU for each pod at startup. Specifying a single value sets both the limit and request.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect.
spec:
container:
cpu: "2000m:1000m"
memory: "2Gi:1Gi"
7.5. Setting JVM options
Pass additional JVM options to Data Grid pods at startup.
Procedure
- Configure JVM options with the spec.container field in your Infinispan CR.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect.
JVM options
spec:
container:
extraJvmOpts: "-<option>=<value>"
routerExtraJvmOpts: "-<option>=<value>"
cliExtraJvmOpts: "-<option>=<value>"
| Field | Description |
|---|---|
| spec.container.extraJvmOpts | Specifies additional JVM options for the Data Grid Server. |
| spec.container.routerExtraJvmOpts | Specifies additional JVM options for the Gossip router. |
| spec.container.cliExtraJvmOpts | Specifies additional JVM options for the Data Grid CLI. |
7.6. Configuring pod probes
Optionally configure the values of the Liveness, Readiness, and Startup probes used by Data Grid pods.
Data Grid Operator automatically configures the probe values to sensible defaults. Provide your own values only after you determine that the default values do not match your requirements.
Procedure
- Configure probe values with the spec.service.container.*Probe fields in your Infinispan CR.
  Important: If no value is specified for a given probe field, the Data Grid Operator default is used.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so the changes take effect.
7.7. Configuring pod priority
Create one or more priority classes to indicate the importance of a pod relative to other pods. Pods with higher priority are scheduled ahead of pods with lower priority, ensuring prioritization of pods running critical workloads, especially when resources become constrained.
Prerequisites
- Have cluster-admin access to OpenShift.
Procedure
- Define a PriorityClass object by specifying its name and value in a high-priority.yaml file.
- Create the priority class.
oc create -f high-priority.yaml
- Reference the priority class name in the pod configuration in your Infinispan CR. You must reference an existing priority class name, otherwise the pod is rejected.
- Apply the changes.
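A high-priority.yaml definition for the steps above might look like the following; the name and value are illustrative.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Priority class for Data Grid pods running critical workloads."
```

Reference the class by its name, high-priority, in the pod configuration of your Infinispan CR. The exact field path for the reference depends on your Data Grid Operator version, so verify it against the Infinispan CRD.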
7.8. FIPS mode for your Infinispan CR
The Red Hat OpenShift Container Platform can use certain Federal Information Processing Standards (FIPS) components that ensure OpenShift clusters meet the requirements of a FIPS compliance audit.
If you enabled FIPS mode on your OpenShift cluster, Data Grid Operator automatically enables FIPS mode for your Infinispan custom resource (CR).
Client certificate authentication is not currently supported with FIPS mode. Attempts to create an Infinispan CR with spec.security.endpointEncryption.clientCert set to a value other than None will fail.
7.9. Adjusting log levels
Change levels for different Data Grid logging categories when you need to debug issues. You can also adjust log levels to reduce the number of messages for certain categories to minimize the use of container resources.
Procedure
- Configure Data Grid logging with the spec.logging.categories field in your Infinispan CR.
spec:
  logging:
    categories:
      org.infinispan: debug
      org.jgroups: debug
- Apply the changes.
- Retrieve logs from Data Grid pods as required.
oc logs -f $POD_NAME
7.9.1. Logging reference
Find information about log categories and levels.
| Root category | Description | Default level |
|---|---|---|
| org.infinispan | Data Grid messages | info |
| org.jgroups | Cluster transport messages | info |
| Log level | Description |
|---|---|
| trace | Provides detailed information about the running state of applications. This is the most verbose log level. |
| debug | Indicates the progress of individual requests or activities. |
| info | Indicates the overall progress of applications, including lifecycle events. |
| warn | Indicates circumstances that can lead to errors or degrade performance. |
| error | Indicates error conditions that might prevent operations or activities from succeeding but do not prevent applications from running. |
Garbage collection (GC) messages
Data Grid Operator does not log GC messages by default. You can direct GC messages to stdout with the following JVM options:
extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"
7.10. Creating Cache service pods
Create Data Grid clusters with Cache service pods for a volatile, low-latency data store with minimal configuration.
Cache service pods provide volatile storage only, which means you lose all data when you modify your Infinispan CR or update the version of your Data Grid cluster.
Procedure
- Create an Infinispan CR that sets spec.service.type: Cache and configures any other Cache service resources.
- Apply your Infinispan CR to create the cluster.
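A minimal Infinispan CR for a Cache service cluster can be sketched as follows, assuming a cluster named infinispan:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  service:
    type: Cache
```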
7.10.1. Cache service CR
This topic describes the Infinispan CR for Cache service pods.
| Field | Description |
|---|---|
| metadata.name | Names your Data Grid cluster. |
| metadata.annotations.monitoring | Automatically creates a ServiceMonitor for your cluster. |
| spec.replicas | Specifies the number of pods in your cluster. If you enable autoscaling capabilities, this field specifies the initial number of pods. |
| spec.version | Specifies the Data Grid Server version of your cluster. |
| spec.upgrades.type | Controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. |
| spec.service.type | Configures the type of Data Grid service. A value of Cache creates a cluster with Cache service pods. |
| spec.service.replicationFactor | Sets the number of copies for each entry across the cluster. The default for Cache service pods is two, which replicates each cache entry to avoid data loss. |
| spec.autoscale | Enables and configures automatic scaling. |
| spec.security.endpointSecretName | Specifies an authentication secret that contains Data Grid user credentials. |
| spec.security.endpointEncryption | Specifies TLS certificates and keystores to encrypt client connections. |
| spec.container | Specifies JVM, CPU, and memory resources for Data Grid pods. |
| spec.logging | Configures Data Grid logging categories. |
| spec.expose | Controls how Data Grid endpoints are exposed on the network. |
| spec.affinity | Configures anti-affinity strategies that guarantee Data Grid availability. |
7.11. Automatic scaling
Data Grid Operator can monitor the default cache on Cache service pods to automatically scale clusters up or down by creating or deleting pods based on memory usage.
Automatic scaling is available for clusters of Cache service pods only. Data Grid Operator does not perform automatic scaling for clusters of Data Grid service pods.
When you enable automatic scaling, you define memory usage thresholds that let Data Grid Operator determine when it needs to create or delete pods. Data Grid Operator monitors statistics for the default cache and, when memory usage reaches the configured thresholds, scales your clusters up or down.
Maximum threshold
This threshold sets an upper boundary for the amount of memory that pods in your cluster can use before scaling up or performing eviction. When Data Grid Operator detects that any node reaches the maximum amount of memory that you configure, it creates a new node if possible. If Data Grid Operator cannot create a new node then it performs eviction when memory usage reaches 100 percent.
Minimum threshold
This threshold sets a lower boundary for memory usage across your Data Grid cluster. When Data Grid Operator detects that memory usage falls below the minimum, it shuts down pods.
Default cache only
Autoscaling capabilities work with the default cache only. If you plan to add other caches to your cluster, you should not include the autoscale field in your Infinispan CR. In this case you should use eviction to control the size of the data container on each node.
7.11.1. Configuring automatic scaling
If you create clusters with Cache service pods, you can configure Data Grid Operator to automatically scale clusters.
Procedure
- Add the spec.autoscale resource to your Infinispan CR to enable automatic scaling.
Note: Set a value of true for the autoscale.disabled field to disable automatic scaling.
- Configure thresholds for automatic scaling with the following fields:
| Field | Description |
|---|---|
| spec.autoscale.maxMemUsagePercent | Specifies a maximum threshold, as a percentage, for memory usage on each node. |
| spec.autoscale.maxReplicas | Specifies the maximum number of Cache service pods for the cluster. |
| spec.autoscale.minMemUsagePercent | Specifies a minimum threshold, as a percentage, for cluster memory usage. |
| spec.autoscale.minReplicas | Specifies the minimum number of Cache service pods for the cluster. |
- For example, add the autoscale thresholds to your Infinispan CR.
- Apply the changes.
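A sketch of the autoscale configuration; the threshold values are illustrative:

```yaml
spec:
  replicas: 2
  autoscale:
    disabled: false
    maxMemUsagePercent: 70
    maxReplicas: 10
    minMemUsagePercent: 30
    minReplicas: 2
```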
7.12. Adding labels and annotations to Data Grid resources
Attach key/value labels and annotations to pods and services that Data Grid Operator creates and manages. Labels help you identify relationships between objects to better organize and monitor Data Grid resources. Annotations are arbitrary non-identifying metadata for client applications or deployment and management tooling.
Red Hat subscription labels are automatically applied to Data Grid resources.
Procedure
- Open your Infinispan CR for editing.
- Attach labels and annotations to Data Grid resources in the metadata section:
  - Define values for annotations in the metadata.annotations section.
  - Define values for labels with the metadata.labels field.
- Apply your Infinispan CR.
Custom annotations
Custom labels
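A sketch of custom annotations and labels in an Infinispan CR. The infinispan.org/targetAnnotations and infinispan.org/targetLabels annotations, which select the keys that propagate to managed resources, are assumptions here; verify these names against your Operator version.

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
  annotations:
    # Assumption: lists the annotation keys that propagate to Data Grid resources
    infinispan.org/targetAnnotations: my-annotation
    # Assumption: lists the label keys that propagate to Data Grid resources
    infinispan.org/targetLabels: my-label
    my-annotation: my-annotation-value
  labels:
    my-label: my-label-value
```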
7.13. Adding labels and annotations with environment variables
Set environment variables for Data Grid Operator to add labels and annotations that automatically propagate to all Data Grid pods and services.
Procedure
Add labels and annotations to your Data Grid Operator subscription with the spec.config.env field in one of the following ways:
- Use the oc edit subscription command:
oc edit subscription datagrid -n openshift-operators
- Use the Red Hat OpenShift Console:
- Navigate to Operators > Installed Operators > Data Grid Operator.
- From the Actions menu, select Edit Subscription.
Labels and annotations with environment variables
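A sketch of a spec.config.env entry in the subscription that propagates labels and annotations. The variable names INFINISPAN_OPERATOR_TARGET_LABELS and INFINISPAN_OPERATOR_TARGET_ANNOTATIONS are assumptions; verify them against your Operator version.

```yaml
spec:
  config:
    env:
      # Assumption: JSON maps of keys and values to propagate to pods and services
      - name: INFINISPAN_OPERATOR_TARGET_LABELS
        value: |
          {"my-label": "my-label-value"}
      - name: INFINISPAN_OPERATOR_TARGET_ANNOTATIONS
        value: |
          {"my-annotation": "my-annotation-value"}
```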
7.14. Defining environment variables in the Data Grid Operator subscription
You can define environment variables in your Data Grid Operator subscription either when you create or edit the subscription.
If you are using the Red Hat OpenShift Console, you must first install the Data Grid Operator and then edit the existing subscription.
spec.config.env field
- Includes the name and value fields to define environment variables.
ADDITIONAL_VARS variable
- Includes the names of environment variables in a JSON array. Environment variables within the value of the ADDITIONAL_VARS variable automatically propagate to each Data Grid Server pod that the associated Operator manages.
Prerequisites
- Ensure the Operator Lifecycle Manager (OLM) is installed.
- Have an oc client.
Procedure
Create a subscription definition YAML for your Data Grid Operator:
- Use the spec.config.env field to define environment variables.
- Within the ADDITIONAL_VARS variable, include environment variable names in a JSON array (subscription-datagrid.yaml). For example, use the environment variables to set the local time zone.
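A sketch of subscription-datagrid.yaml that sets the local time zone through the TZ environment variable; the channel and source values are assumptions for a typical installation and should be adjusted to your environment.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: datagrid
  namespace: openshift-operators
spec:
  channel: 8.4.x          # assumption: adjust to your update channel
  installPlanApproval: Automatic
  name: datagrid
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      # Names listed in ADDITIONAL_VARS propagate to each Data Grid Server pod
      - name: ADDITIONAL_VARS
        value: "[\"TZ\"]"
      - name: TZ
        value: "JST-9"
```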
Create a subscription for Data Grid Operator:
oc apply -f subscription-datagrid.yaml
Verification
- Retrieve the environment variables from the subscription:
oc get subscription datagrid -n openshift-operators -o jsonpath='{.spec.config.env[*].name}'
Next steps
- Use the oc edit subscription command to modify the environment variables:
oc edit subscription datagrid -n openshift-operators
- To ensure the changes take effect on your Data Grid clusters, recreate the existing clusters. Terminate the pods by deleting the StatefulSet associated with the existing Infinispan CRs.
- In the Red Hat OpenShift Console, navigate to Operators > Installed Operators > Data Grid Operator. From the Actions menu, select Edit Subscription.
Chapter 8. Configuring authentication
Application users need credentials to access Data Grid clusters. You can use default, generated credentials or add your own.
8.1. Default credentials
Data Grid Operator generates base64-encoded credentials for the following users:
| User | Secret name | Description |
|---|---|---|
| developer | <cluster_name>-generated-secret | Credentials for the default application user. |
| operator | <cluster_name>-generated-operator-secret | Credentials that Data Grid Operator uses to interact with Data Grid resources. |
8.2. Retrieving credentials
Get credentials from authentication secrets to access Data Grid clusters.
Procedure
- Retrieve credentials from authentication secrets.
oc get secret infinispan-generated-secret
- Base64-decode credentials.
oc get secret infinispan-generated-secret -o jsonpath="{.data.identities\.yaml}" | base64 --decode
8.3. Adding custom user credentials
Configure access to Data Grid cluster endpoints with custom credentials.
Modifying spec.security.endpointSecretName triggers a cluster restart.
Procedure
- Create an identities.yaml file with the credentials that you want to add.
credentials:
  - username: myfirstusername
    password: changeme-one
  - username: mysecondusername
    password: changeme-two
- Create an authentication secret from identities.yaml.
oc create secret generic --from-file=identities.yaml connect-secret
- Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.
spec:
  security:
    endpointSecretName: connect-secret
8.4. Changing the operator password
You can change the password for the operator user if you do not want to use the automatically generated password.
Procedure
- Update the password key in the infinispan-generated-operator-secret secret as follows:
oc patch secret infinispan-generated-operator-secret -p='{"stringData":{"password": "supersecretoperatorpassword"}}'
Note: Update only the password key in the secret. When you update the password, Data Grid Operator automatically refreshes the other keys in that secret.
8.5. Disabling user authentication
Allow users to access Data Grid clusters and manipulate data without providing credentials.
Do not disable authentication if endpoints are accessible from outside the OpenShift cluster via spec.expose.type. You should disable authentication for development environments only.
Procedure
- Set false as the value for the spec.security.endpointAuthentication field in your Infinispan CR.
spec:
  security:
    endpointAuthentication: false
- Apply the changes.
Chapter 9. Configuring client certificate authentication
Add client trust stores to your project and configure Data Grid to allow connections only from clients that present valid certificates. This increases security of your deployment by ensuring that clients are trusted by a public certificate authority (CA).
9.1. Client certificate authentication
Client certificate authentication restricts in-bound connections based on the certificates that clients present.
You can configure Data Grid to use trust stores with either of the following strategies:
Validate
To validate client certificates, Data Grid requires a trust store that contains any part of the certificate chain for the signing authority, typically the root CA certificate. Any client that presents a certificate signed by the CA can connect to Data Grid.
If you use the Validate strategy for verifying client certificates, you must also configure clients to provide valid Data Grid credentials if you enable authentication.
Authenticate
Requires a trust store that contains all public client certificates in addition to the root CA certificate. Only clients that present a signed certificate can connect to Data Grid.
If you use the Authenticate strategy for verifying client certificates, you must ensure that certificates contain valid Data Grid credentials as part of the distinguished name (DN).
9.2. Enabling client certificate authentication
To enable client certificate authentication, you configure Data Grid to use trust stores with either the Validate or Authenticate strategy.
Procedure
- Set either Validate or Authenticate as the value for the spec.security.endpointEncryption.clientCert field in your Infinispan CR. The default value is None.
- Specify the secret that contains the client trust store with the spec.security.endpointEncryption.clientCertSecretName field. By default, Data Grid Operator expects a trust store secret named <cluster-name>-client-cert-secret.
Note: The secret must be unique to each Infinispan CR instance in the OpenShift cluster. When you delete the Infinispan CR, OpenShift also automatically deletes the associated secret.
- Apply the changes.
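The resulting security configuration can be sketched as follows, assuming a cluster named infinispan and the Validate strategy; the type and certServiceName values assume encryption with OpenShift service certificates.

```yaml
spec:
  security:
    endpointEncryption:
      type: Service
      certServiceName: service.beta.openshift.io
      clientCert: Validate
      clientCertSecretName: infinispan-client-cert-secret
```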
Next steps
Provide Data Grid Operator with a trust store that contains all client certificates. Alternatively you can provide certificates in PEM format and let Data Grid generate a client trust store.
9.3. Providing client truststores
If you have a trust store that contains the required certificates you can make it available to Data Grid Operator.
Data Grid supports trust stores in PKCS12 format only.
Procedure
- Specify the name of the secret that contains the client trust store as the value of the metadata.name field.
Note: The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field.
- Provide the password for the trust store with the stringData.truststore-password field.
- Specify the trust store with the data.truststore.p12 field.
- Apply the changes.
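A sketch of the trust store secret, assuming a cluster named infinispan; replace the placeholder with your base64-encoded PKCS12 trust store.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: infinispan-client-cert-secret
type: Opaque
stringData:
  truststore-password: changeme
data:
  truststore.p12: "<base64-encoded PKCS12 trust store>"
```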
9.4. Providing client certificates
Data Grid Operator can generate a trust store from certificates in PEM format.
Procedure
- Specify the name of the secret that contains the client trust store as the value of the metadata.name field.
Note: The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field.
- Specify the signing certificate, or CA certificate bundle, as the value of the data.trust.ca field.
- If you use the Authenticate strategy to verify client identities, add the certificate for each client that can connect to Data Grid endpoints with the data.trust.cert.<name> field.
Note: Data Grid Operator uses the <name> value as the alias for the certificate when it generates the trust store.
- Optionally provide a password for the trust store with the stringData.truststore-password field. If you do not provide one, Data Grid Operator sets "password" as the trust store password.
- Apply the changes.
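A sketch of the certificate secret, assuming a cluster named infinispan and a single client; replace the placeholders with base64-encoded PEM content.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: infinispan-client-cert-secret
type: Opaque
stringData:
  truststore-password: changeme
data:
  trust.ca: "<base64-encoded CA certificate>"
  # Data Grid Operator uses "client1" as the certificate alias in the generated trust store
  trust.cert.client1: "<base64-encoded client certificate>"
```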
Chapter 10. Configuring encryption
Encrypt connections between clients and Data Grid pods with Red Hat OpenShift service certificates or custom TLS certificates.
10.1. Encryption with Red Hat OpenShift service certificates
Data Grid Operator automatically generates TLS certificates that are signed by the Red Hat OpenShift service CA. Data Grid Operator then stores the certificates and keys in a secret so you can retrieve them and use with remote clients.
If the Red Hat OpenShift service CA is available, Data Grid Operator adds the following spec.security.endpointEncryption configuration to the Infinispan CR:
| Field | Description |
|---|---|
| spec.security.endpointEncryption.certServiceName | Specifies the service that provides TLS certificates. |
| spec.security.endpointEncryption.certSecretName | Specifies a secret with a service certificate and key in PEM format. Defaults to <cluster_name>-cert-secret. |
Service certificates use the internal DNS name of the Data Grid cluster as the common name (CN), for example:
Subject: CN = example-infinispan.mynamespace.svc
For this reason, service certificates can be fully trusted only inside OpenShift. If you want to encrypt connections with clients running outside OpenShift, you should use custom TLS certificates.
Service certificates are valid for one year and are automatically replaced before they expire.
10.2. Retrieving TLS certificates
Get TLS certificates from encryption secrets to create client trust stores.
Procedure
- Retrieve tls.crt from encryption secrets as follows:
oc get secret infinispan-cert-secret -o jsonpath='{.data.tls\.crt}' | base64 --decode > tls.crt
10.3. Disabling encryption
You can disable encryption so clients do not need TLS certificates to establish connections with Data Grid.
Do not disable encryption if endpoints are accessible from outside the OpenShift cluster via spec.expose.type. You should disable encryption for development environments only.
Procedure
- Set None as the value for the spec.security.endpointEncryption.type field in your Infinispan CR.
spec:
  security:
    endpointEncryption:
      type: None
- Apply the changes.
10.4. Using custom TLS certificates
Use custom PKCS12 keystore or TLS certificate/key pairs to encrypt connections between clients and Data Grid clusters.
Prerequisites
- Create either a keystore or certificate secret.
Note: The secret must be unique to each Infinispan CR instance in the OpenShift cluster. When you delete the Infinispan CR, OpenShift also automatically deletes the associated secret.
Procedure
- Add the encryption secret to your OpenShift namespace, for example:
oc apply -f tls_secret.yaml
- Specify the encryption secret with the spec.security.endpointEncryption.certSecretName field in your Infinispan CR.
spec:
  security:
    endpointEncryption:
      type: Secret
      certSecretName: tls-secret
- Apply the changes.
10.4.1. Custom encryption secrets
Custom encryption secrets that add keystores or certificate/key pairs to secure Data Grid connections must contain specific fields.
Keystore secrets
| Field | Description |
|---|---|
| stringData.alias | Specifies an alias for the keystore. |
| stringData.password | Specifies the keystore password. |
| data.keystore.p12 | Adds a base64-encoded keystore. |
Certificate secrets
| Field | Description |
|---|---|
| data.tls.key | Adds a base64-encoded TLS key. |
| data.tls.crt | Adds a base64-encoded TLS certificate. |
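Sketches of both secret types, assuming the secret name tls-secret from the earlier example; replace the placeholders with base64-encoded values.

```yaml
# Keystore secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
stringData:
  alias: server
  password: changeme
data:
  keystore.p12: "<base64-encoded keystore>"
---
# Certificate secret
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
data:
  tls.key: "<base64-encoded TLS key>"
  tls.crt: "<base64-encoded TLS certificate>"
```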
Chapter 11. Configuring user roles and permissions
Secure access to Data Grid services by configuring role-based access control (RBAC) for users. This requires you to assign roles to users so that they have permission to access caches and Data Grid resources.
11.1. Enabling security authorization
By default, authorization is disabled to ensure backwards compatibility with existing Infinispan CR instances. Complete the following procedure to enable authorization and use role-based access control (RBAC) for Data Grid users.
Procedure
- Set true as the value for the spec.security.authorization.enabled field in your Infinispan CR.
spec:
  security:
    authorization:
      enabled: true
- Apply the changes.
11.2. User roles and permissions
Data Grid Operator provides a set of default roles that are associated with different permissions.
| Role | Permissions | Description |
|---|---|---|
| admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
| deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
| application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. |
| observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
| monitor | MONITOR | Can view statistics for Data Grid clusters. |
Data Grid Operator credentials
Data Grid Operator generates credentials that it uses to authenticate with Data Grid clusters to perform internal operations. By default Data Grid Operator credentials are automatically assigned the admin role when you enable security authorization.
11.3. Assigning roles and permissions to users
Assign roles to users to control whether they are authorized to access Data Grid cluster resources. Roles can have different permission levels, from read-only to unrestricted access.
Users gain authorization implicitly from their usernames. For example, a user named "admin" automatically has admin permissions, a user named "deployer" has the deployer role, and so on.
Procedure
- Create an identities.yaml file that assigns roles to users.
- Create an authentication secret from identities.yaml. If necessary, delete the existing secret first.
oc delete secret connect-secret --ignore-not-found
oc create secret generic --from-file=identities.yaml connect-secret
- Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.
spec:
  security:
    endpointSecretName: connect-secret
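An identities.yaml sketch that assigns roles; the usernames and passwords are illustrative, and the role names come from the table in section 11.2.

```yaml
credentials:
  - username: admin
    password: changeme-one
    roles:
      - admin
  - username: monitoring-user
    password: changeme-two
    roles:
      - monitor
```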
11.4. Adding custom roles and permissions
You can define custom roles with different combinations of permissions.
Procedure
- Open your Infinispan CR for editing.
- Specify custom roles and their associated permissions with the spec.security.authorization.roles field.
- Apply the changes.
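A sketch of a custom role definition; the role name is illustrative, and the permissions come from the table in section 11.2.

```yaml
spec:
  security:
    authorization:
      enabled: true
      roles:
        - name: my-role-name
          permissions:
            - ALL_READ
            - LISTEN
```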
Chapter 12. Configuring network access to Data Grid
Expose Data Grid clusters so you can access Data Grid Console, the Data Grid command line interface (CLI), REST API, and Hot Rod endpoint.
12.1. Getting the service for internal connections
By default, Data Grid Operator creates a service that provides access to Data Grid clusters from clients running on OpenShift.
This internal service has the same name as your Data Grid cluster, for example:
metadata:
  name: infinispan
Procedure
- Check that the internal service is available as follows:
oc get services
12.2. Exposing Data Grid through a LoadBalancer service
Use a LoadBalancer service to make Data Grid clusters available to clients running outside OpenShift.
To access Data Grid with unencrypted Hot Rod client connections you must use a LoadBalancer service.
Procedure
- Include spec.expose in your Infinispan CR.
- Specify LoadBalancer as the service type with the spec.expose.type field.
- Optionally specify the network port where the service is exposed with the spec.expose.port field.
spec:
  expose:
    type: LoadBalancer
    port: 65535
- Apply the changes.
- Verify that the -external service is available.
oc get services | grep external
12.3. Exposing Data Grid through a NodePort service
Use a NodePort service to expose Data Grid clusters on the network.
Procedure
- Include spec.expose in your Infinispan CR.
- Specify NodePort as the service type with the spec.expose.type field.
- Configure the port where Data Grid is exposed with the spec.expose.nodePort field.
spec:
  expose:
    type: NodePort
    nodePort: 30000
- Apply the changes.
- Verify that the -external service is available.
oc get services | grep external
12.4. Exposing Data Grid through a Route
Use an OpenShift Route with passthrough encryption to make Data Grid clusters available on the network.
To access Data Grid with a Hot Rod client, you must configure TLS with SNI.
Procedure
- Include spec.expose in your Infinispan CR.
- Specify Route as the service type with the spec.expose.type field.
- Optionally add a hostname with the spec.expose.host field.
spec:
  expose:
    type: Route
    host: www.example.org
- Apply the changes.
- Verify that the route is available.
oc get routes
Route ports
When you create a Route, it exposes a port on the network that accepts client connections and redirects traffic to Data Grid services that listen on port 11222.
The port where the Route is available depends on whether you use encryption or not.
| Port | Description |
|---|---|
| 80 | Encryption is disabled. |
| 443 | Encryption is enabled. |
12.5. Network services
Reference information for network services that Data Grid Operator creates and manages.
| Service | Port | Protocol | Description |
|---|---|---|---|
| <cluster_name> | 11222 | TCP | Access to Data Grid endpoints within the OpenShift cluster or from an OpenShift Route. |
| <cluster_name> | 11223 | TCP | Access to Data Grid endpoints within the OpenShift cluster for internal Data Grid Operator use. This port uses a different security realm to port 11222 and should not be accessed by user applications. |
| <cluster_name>-ping | 8888 | TCP | Cluster discovery for Data Grid pods. |
| <cluster_name>-external | 11222 | TCP | Access to Data Grid endpoints from a LoadBalancer or NodePort service. |
| <cluster_name>-site | 7900 | TCP | JGroups RELAY2 channel for cross-site communication. |
The Data Grid Console should only be accessed via OpenShift services or an OpenShift Route exposing port 11222.
Chapter 13. Setting up cross-site replication
Ensure availability with Data Grid Operator by configuring geographically distributed clusters as a unified service.
You can configure clusters to perform cross-site replication with:
- Connections that Data Grid Operator manages.
- Connections that you configure and manage.
You can use both managed and manual connections for Data Grid clusters in the same Infinispan CR. You must ensure that Data Grid clusters establish connections in the same way at each site.
13.1. Cross-site replication expose types
You can use a NodePort service, a LoadBalancer service, or an OpenShift Route to handle network traffic for backup operations between Data Grid clusters. Before you start setting up cross-site replication you should determine what expose type is available for your Red Hat OpenShift cluster. In some cases you may require an administrator to provision services before you can configure an expose type.
NodePort
A NodePort is a service that accepts network traffic at a static port, in the 30000 to 32767 range, on an IP address that is available externally to the OpenShift cluster.
To use a NodePort as the expose type for cross-site replication, an administrator must provision external IP addresses for each OpenShift node. In most cases, an administrator must also configure DNS routing for those external IP addresses.
LoadBalancer
A LoadBalancer is a service that directs network traffic to the correct node in the OpenShift cluster.
Whether you can use a LoadBalancer as the expose type for cross-site replication depends on the host platform. AWS supports network load balancers (NLB) while some other cloud platforms do not. To use a LoadBalancer service, an administrator must first create an ingress controller backed by an NLB.
Route
An OpenShift Route allows Data Grid clusters to connect with each other through a public secure URL.
Data Grid uses TLS with the SNI header to send backup requests between clusters through an OpenShift Route. To do this you must add a keystore with TLS certificates so that Data Grid can encrypt network traffic for cross-site replication.
When you specify Route as the expose type for cross-site replication, Data Grid Operator creates a route with TLS passthrough encryption for each Data Grid cluster that it manages. You can specify a hostname for the Route but you cannot specify a Route that you have already created.
13.2. Managed cross-site replication Copy linkLink copied to clipboard!
Data Grid Operator can discover Data Grid clusters running in different data centers to form global clusters.
When you configure managed cross-site connections, Data Grid Operator creates router pods in each Data Grid cluster. Data Grid pods use the <cluster_name>-site service to connect to these router pods and send backup requests.
Router pods maintain a record of all pod IP addresses and parse RELAY message headers to forward backup requests to the correct Data Grid cluster. If a router pod crashes then all Data Grid pods start using any other available router pod until OpenShift restores it.
To manage cross-site connections, Data Grid Operator uses the Kubernetes API. Each OpenShift cluster must have network access to the remote Kubernetes API and a service account token for each backup cluster.
Data Grid clusters do not start running until Data Grid Operator discovers all backup locations that you configure.
13.2.1. Creating service account tokens for managed cross-site connections Copy linkLink copied to clipboard!
Generate service account tokens on OpenShift clusters that allow Data Grid Operator to automatically discover Data Grid clusters and manage cross-site connections.
Prerequisites
Ensure all OpenShift clusters have access to the Kubernetes API.
Data Grid Operator uses this API to manage cross-site connections.
Note: Data Grid Operator does not modify remote Data Grid clusters. The service account tokens provide read-only access through the Kubernetes API.
Procedure
- Log in to an OpenShift cluster.
Create a service account.
For example, create a service account at LON:
oc create sa -n <namespace> lon
Add the view role to the service account with the following command:
oc policy add-role-to-user view -n <namespace> -z lon
If you use a NodePort service to expose Data Grid clusters on the network, you must also add the cluster-reader role to the service account:
oc adm policy add-cluster-role-to-user cluster-reader -z lon -n <namespace>
- Repeat the preceding steps on your other OpenShift clusters.
- Exchange service account tokens on each OpenShift cluster.
13.2.2. Exchanging service account tokens Copy linkLink copied to clipboard!
Generate service account tokens on your OpenShift clusters and add them into secrets at each backup location. The tokens that you generate in this procedure do not expire. For bound service account tokens, see Exchanging bound service account tokens.
Prerequisites
- You have created a service account.
Procedure
- Log in to your OpenShift cluster.
Create a service account token secret file as follows:
sa-token.yaml
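A minimal sketch of sa-token.yaml; the service account name lon follows the earlier example, and the secret name matches the retrieval command in the next step:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ispn-xsite-sa-token
  annotations:
    # Binds this token secret to the service account created earlier
    kubernetes.io/service-account.name: lon
type: kubernetes.io/service-account-token
```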
Create the secret in your OpenShift cluster:
oc -n <namespace> create -f sa-token.yaml
Retrieve the service account token:
oc -n <namespace> get secrets ispn-xsite-sa-token -o jsonpath="{.data.token}" | base64 -d
The command prints the token in the terminal.
- Copy the token for deployment in the backup OpenShift cluster.
- Log in to the backup OpenShift cluster.
Add the service account token for a backup location:
oc -n <namespace> create secret generic <token-secret> --from-literal=token=<token>
The <token-secret> is the name of the secret configured in the Infinispan CR.
Next steps
- Repeat the preceding steps on your other OpenShift clusters.
13.2.3. Exchanging bound service account tokens Copy linkLink copied to clipboard!
Create service account tokens with a limited lifespan and add them into secrets at each backup location. You must refresh the token periodically to prevent Data Grid Operator from losing access to the remote OpenShift cluster. For non-expiring tokens, see Exchanging service account tokens.
Prerequisites
- You have created a service account.
Procedure
- Log in to your OpenShift cluster.
Create a bound token for the service account:
oc -n <namespace> create token <service-account>
Note: By default, service account tokens are valid for one hour. Use the --duration option to specify the lifespan in seconds.
The command prints the token in the terminal.
- Copy the token for deployment in the backup OpenShift cluster(s).
- Log in to the backup OpenShift cluster.
Add the service account token for a backup location:
oc -n <namespace> create secret generic <token-secret> --from-literal=token=<token>
The <token-secret> is the name of the secret configured in the Infinispan CR.
- Repeat the steps on other OpenShift clusters.
Deleting expired tokens
When a token expires, delete the expired token secret, and then repeat the procedure to generate and exchange a new one.
- Log in to the backup OpenShift cluster.
Delete the expired secret <token-secret>:
oc -n <namespace> delete secrets <token-secret>
- Repeat the procedure to create a new token and generate a new <token-secret>.
13.2.4. Configuring managed cross-site connections Copy linkLink copied to clipboard!
Configure Data Grid Operator to establish cross-site views with Data Grid clusters.
Prerequisites
-
Determine a suitable expose type for cross-site replication.
If you use an OpenShift Route, you must add a keystore with TLS certificates and secure cross-site connections.
- Create and exchange Red Hat OpenShift service account tokens for each Data Grid cluster.
Procedure
- Create an Infinispan CR for each Data Grid cluster.
- Specify the name of the local site with spec.service.sites.local.name.
- Configure the expose type for cross-site replication. Set the value of the spec.service.sites.local.expose.type field to one of the following:
  - NodePort
  - LoadBalancer
  - Route
- Optionally specify a port or custom hostname with the following fields:
  - spec.service.sites.local.expose.nodePort if you use a NodePort service.
  - spec.service.sites.local.expose.port if you use a LoadBalancer service.
  - spec.service.sites.local.expose.routeHostName if you use an OpenShift Route.
- Specify the number of pods that can send RELAY messages with the service.sites.local.maxRelayNodes field.
  Tip: Configure all pods in your cluster to send RELAY messages for better performance. If all pods send backup requests directly, then no pods need to forward backup requests.
- Provide the name, URL, and secret for each Data Grid cluster that acts as a backup location with spec.service.sites.locations.
- If Data Grid cluster names or namespaces at the remote site do not match the local site, specify those values with the clusterName and namespace fields.
Important: Be sure to adjust logging categories in your Infinispan CR to decrease log levels for JGroups TCP and RELAY2 protocols. This prevents a large number of log files from using container storage.
spec:
  logging:
    categories:
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
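A hedged sketch of a managed cross-site configuration for the LON site; the Kubernetes API host, namespace, and secret name are illustrative assumptions, and the NYC cluster mirrors this definition with the site names reversed:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 3
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: LoadBalancer
        maxRelayNodes: 1
      locations:
        - name: NYC
          clusterName: infinispan
          namespace: rhdg-namespace
          # Managed connections point at the remote Kubernetes API
          url: openshift://api.nyc.example.com:6443
          secretName: nyc-token
  logging:
    categories:
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
```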
- Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes.
- Verify that Data Grid clusters form a cross-site view:
  - Retrieve the Infinispan CR:
    oc get infinispan -o yaml
  - Check for the type: CrossSiteViewFormed condition.
Next steps
If your clusters have formed a cross-site view, you can start adding backup locations to caches.
13.3. Manually configuring cross-site connections Copy linkLink copied to clipboard!
You can specify static network connection details to perform cross-site replication with Data Grid clusters running outside OpenShift. Manual cross-site connections are necessary in any scenario where access to the Kubernetes API is not available outside the OpenShift cluster where Data Grid runs.
Prerequisites
-
Determine a suitable expose type for cross-site replication.
If you use an OpenShift Route, you must add a keystore with TLS certificates and secure cross-site connections.
- Ensure you have the correct host names and ports for each Data Grid cluster and each <cluster-name>-site service.
Manually connecting Data Grid clusters to form cross-site views requires predictable network locations for Data Grid services, which means you need to know the network locations before they are created.
Procedure
- Create an Infinispan CR for each Data Grid cluster.
- Specify the name of the local site with spec.service.sites.local.name.
- Configure the expose type for cross-site replication. Set the value of the spec.service.sites.local.expose.type field to one of the following:
  - NodePort
  - LoadBalancer
  - Route
- Optionally specify a port or custom hostname with the following fields:
  - spec.service.sites.local.expose.nodePort if you use a NodePort service.
  - spec.service.sites.local.expose.port if you use a LoadBalancer service.
  - spec.service.sites.local.expose.routeHostName if you use an OpenShift Route.
- Provide the name and static URL for each Data Grid cluster that acts as a backup location with spec.service.sites.locations.
Important: Be sure to adjust logging categories in your Infinispan CR to decrease log levels for JGroups TCP and RELAY2 protocols. This prevents a large number of log files from using container storage.
spec:
  logging:
    categories:
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
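A hedged sketch of a manual cross-site configuration for the LON site, using a static URL for the NYC backup location; the infinispan+xsite:// address and port are illustrative and must point at the remote <cluster-name>-site service:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 3
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: NodePort
        maxRelayNodes: 1
      locations:
        - name: NYC
          # Static address of the remote site service (illustrative)
          url: infinispan+xsite://infinispan-nyc.example.com:7900
  logging:
    categories:
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
```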
- Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes.
- Verify that Data Grid clusters form a cross-site view:
  - Retrieve the Infinispan CR:
    oc get infinispan -o yaml
  - Check for the type: CrossSiteViewFormed condition.
Next steps
If your clusters have formed a cross-site view, you can start adding backup locations to caches.
13.4. Allocating CPU and memory for Gossip router pod Copy linkLink copied to clipboard!
Allocate CPU and memory resources to Data Grid Gossip router.
Prerequisite
-
Have the Gossip router enabled. The service.sites.local.discovery.launchGossipRouter property must be set to true, which is the default value.
Procedure
- Allocate the number of CPU units using the service.sites.local.discovery.cpu field.
- Allocate the amount of memory, in bytes, using the service.sites.local.discovery.memory field.
  The cpu and memory fields have values in the format <limit>:<requests>. For example, cpu: "2000m:1000m" limits pods to a maximum of 2000m of CPU and requests 1000m of CPU for each pod at startup. Specifying a single value sets both the limit and request.
- Apply your Infinispan CR.
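The discovery fields described above might be combined as in this sketch; the limit and request values are illustrative:

```yaml
spec:
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        discovery:
          launchGossipRouter: true
          # Limit of 2000m CPU, request of 1000m CPU for the Gossip router pod
          cpu: "2000m:1000m"
          # Limit of 1Gi memory, request of 512Mi memory
          memory: "1Gi:512Mi"
```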
13.5. Disabling local Gossip router and service Copy linkLink copied to clipboard!
The Data Grid Operator starts a Gossip router on each site, but you only need a single Gossip router to manage traffic between the Data Grid cluster members. You can disable the additional Gossip routers to save resources.
For example, you have Data Grid clusters in the LON and NYC sites. The following procedure shows how you can disable the Gossip router in the LON site and connect to the NYC site, which has the Gossip router enabled.
Procedure
- Create an Infinispan CR for each Data Grid cluster.
- Specify the name of the local site with the spec.service.sites.local.name field.
- For the LON cluster, set false as the value for the spec.service.sites.local.discovery.launchGossipRouter field.
- For the LON cluster, specify the url with the spec.service.sites.locations.url field to connect to the NYC site.
- In the NYC configuration, do not specify the spec.service.sites.locations.url field.
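A hedged sketch of the two configurations described above; the expose types, host name, and port are illustrative:

```yaml
# LON: Gossip router disabled; connects to the Gossip router running in NYC
spec:
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        discovery:
          launchGossipRouter: false
        expose:
          type: Route
      locations:
        - name: NYC
          # Address of the NYC Gossip router (illustrative)
          url: infinispan+xsite://nyc-gossip-router.example.com:443
---
# NYC: Gossip router enabled; the LON location is defined without a url
spec:
  service:
    type: DataGrid
    sites:
      local:
        name: NYC
        expose:
          type: Route
      locations:
        - name: LON
```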
If you have three or more sites, Data Grid recommends keeping the Gossip router enabled on all the remote sites. When you have multiple Gossip routers and one of them becomes unavailable, the remaining routers continue exchanging messages. If a single Gossip router is defined and it becomes unavailable, the connection between the remote sites breaks.
Next steps
If your clusters have formed a cross-site view, you can start adding backup locations to caches.
13.6. Resources for configuring cross-site replication Copy linkLink copied to clipboard!
The following tables provide fields and descriptions for cross-site resources.
| Field | Description |
|---|---|
| | Data Grid supports cross-site replication with Data Grid service clusters only. |
| Field | Description |
|---|---|
| service.sites.local.name | Names the local site where a Data Grid cluster runs. |
| service.sites.local.maxRelayNodes | Specifies the maximum number of pods that can send RELAY messages for cross-site replication. The default value is |
| service.sites.local.discovery.launchGossipRouter | If |
| service.sites.local.discovery.memory | Allocates the amount of memory in bytes. It uses the following format |
| service.sites.local.discovery.cpu | Allocates the number of CPU units. It uses the following format |
| service.sites.local.expose.type | Specifies the network service for cross-site replication. Data Grid clusters use this service to communicate and perform backup operations. You can set the value to |
| service.sites.local.expose.nodePort | Specifies a static port within the default range of |
| service.sites.local.expose.port | Specifies the network port for the service if you expose Data Grid through a |
| service.sites.local.expose.routeHostName | Specifies a custom hostname if you expose Data Grid through an OpenShift |
| Field | Description |
|---|---|
| service.sites.locations | Provides connection information for all backup locations. |
| service.sites.locations.name | Specifies a backup location that matches |
| service.sites.locations.url | Specifies the URL of the Kubernetes API for managed connections or a static URL for manual connections. |
| service.sites.locations.secretName | Specifies the secret that contains the service account token for the backup site. |
| service.sites.locations.clusterName | Specifies the cluster name at the backup location if it is different from the cluster name at the local site. |
| service.sites.locations.namespace | Specifies the namespace of the Data Grid cluster at the backup location if it does not match the namespace at the local site. |
Managed cross-site connections
Manual cross-site connections
13.7. Securing cross-site connections Copy linkLink copied to clipboard!
Add keystores and trust stores so that Data Grid clusters can secure cross-site replication traffic.
You must add a keystore to use an OpenShift Route as the expose type for cross-site replication. Securing cross-site connections is optional if you use a NodePort or LoadBalancer as the expose type.
Cross-site replication does not support the OpenShift CA service. You must provide your own certificates.
Prerequisites
Have a PKCS12 keystore that Data Grid can use to encrypt and decrypt RELAY messages.
You must provide a keystore for relay pods and router pods to secure cross-site connections.
The keystore can be the same for relay pods and router pods or you can provide separate keystores for each.
You can also use the same keystore for each Data Grid cluster or a unique keystore for each cluster.
- Have a PKCS12 trust store that contains part of the certificate chain or root CA certificate that verifies public certificates for Data Grid relay pods and router pods.
Procedure
Create cross-site encryption secrets.
- Create keystore secrets.
- Create trust store secrets.
- Modify the Infinispan CR for each Data Grid cluster to specify the secret name for the encryption.transportKeyStore.secretName and encryption.routerKeyStore.secretName fields.
- Configure any other fields to encrypt RELAY messages as required and then apply the changes.
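Putting the fields named above together, a hedged sketch of the encryption block; the protocol value and secret names are illustrative:

```yaml
spec:
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: Route
        encryption:
          protocol: TLSv1.2
          transportKeyStore:
            # Keystore secret used by relay pods
            secretName: transport-tls-secret
          routerKeyStore:
            # Keystore secret used by router pods
            secretName: router-tls-secret
          trustStore:
            # Trust store secret shared by relay and router pods
            secretName: truststore-tls-secret
```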
13.7.1. Resources for configuring cross-site encryption Copy linkLink copied to clipboard!
The following tables provide fields and descriptions for encrypting cross-site connections.
| Field | Description |
|---|---|
| service.sites.local.encryption.protocol | Specifies the TLS protocol to use for cross-site connections. The default value is |
| service.sites.local.encryption.transportKeyStore | Configures a keystore secret for relay pods. |
| service.sites.local.encryption.routerKeyStore | Configures a keystore secret for router pods. |
| service.sites.local.encryption.trustStore | Configures a trust store secret for relay pods and router pods. |
| Field | Description |
|---|---|
| service.sites.local.encryption.transportKeyStore.secretName | Specifies the secret that contains a keystore that relay pods can use to encrypt and decrypt RELAY messages. This field is required. |
| service.sites.local.encryption.transportKeyStore.alias | Optionally specifies the alias of the certificate in the keystore. The default value is |
| service.sites.local.encryption.transportKeyStore.filename | Optionally specifies the filename of the keystore. The default value is |
| Field | Description |
|---|---|
| service.sites.local.encryption.routerKeyStore.secretName | Specifies the secret that contains a keystore that router pods can use to encrypt and decrypt RELAY messages. This field is required. |
| service.sites.local.encryption.routerKeyStore.alias | Optionally specifies the alias of the certificate in the keystore. The default value is |
| service.sites.local.encryption.routerKeyStore.filename | Optionally specifies the filename of the keystore. The default value is |
| Field | Description |
|---|---|
| service.sites.local.encryption.trustStore.secretName | Specifies the secret that contains a trust store to verify public certificates for relay pods and router pods. This field is required. |
| service.sites.local.encryption.trustStore.filename | Optionally specifies the filename of the trust store. The default value is |
13.7.2. Cross-site encryption secrets Copy linkLink copied to clipboard!
Cross-site replication encryption secrets add keystores and trust stores for securing cross-site connections.
Cross-site encryption secrets
| Field | Description |
|---|---|
| | Specifies the password for the keystore or trust store. |
| | Optionally specifies the keystore or trust store type. The default value is |
| | Adds a base64-encoded keystore or trust store. |
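One plausible shape for such a secret, sketched under the assumption of a PKCS12 keystore; the secret name, password, and file name are illustrative, and the keystore content must be base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: transport-tls-secret
type: Opaque
stringData:
  # Password that protects the keystore
  password: changeme
data:
  # Base64-encoded PKCS12 keystore
  keystore.p12: <base64-encoded-keystore>
```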
13.8. Configuring sites in the same OpenShift cluster Copy linkLink copied to clipboard!
For evaluation and demonstration purposes, you can configure Data Grid to back up between pods in the same OpenShift cluster.
Using ClusterIP as the expose type for cross-site replication is intended for demonstration purposes only, such as a temporary proof-of-concept deployment on a laptop.
Procedure
- Create an Infinispan CR for each Data Grid cluster.
- Specify the name of the local site with spec.service.sites.local.name.
- Set ClusterIP as the value of the spec.service.sites.local.expose.type field.
- Provide the name of the Data Grid cluster that acts as a backup location with spec.service.sites.locations.clusterName.
- If both Data Grid clusters have the same name, specify the namespace of the backup location with spec.service.sites.locations.namespace.
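A hedged sketch for one of the two clusters in the same OpenShift cluster; the site, cluster, and namespace names are illustrative:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-clustera
spec:
  replicas: 1
  service:
    type: DataGrid
    sites:
      local:
        name: SiteA
        expose:
          # ClusterIP is for demonstration purposes only
          type: ClusterIP
        maxRelayNodes: 1
      locations:
        - name: SiteB
          clusterName: example-clusterb
          namespace: cluster-namespace
```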
- Configure your Infinispan CRs with any other Data Grid service resources and then apply the changes.
- Verify that Data Grid clusters form a cross-site view:
  - Retrieve the Infinispan CR:
    oc get infinispan -o yaml
  - Check for the type: CrossSiteViewFormed condition.
Chapter 14. Monitoring Data Grid services Copy linkLink copied to clipboard!
Data Grid exposes metrics that can be used by Prometheus and Grafana for monitoring and visualizing the cluster state.
This documentation explains how to set up monitoring on OpenShift Container Platform. If you're working with community Prometheus deployments, you might find these instructions useful as a general guide. However, you should refer to the Prometheus documentation for installation and usage instructions.
See the Prometheus Operator documentation.
14.1. Creating a Prometheus service monitor Copy linkLink copied to clipboard!
Data Grid Operator automatically creates a Prometheus ServiceMonitor that scrapes metrics from your Data Grid cluster.
Procedure
Enable monitoring for user-defined projects on OpenShift Container Platform.
When the Operator detects an Infinispan CR with the monitoring annotation set to true, which is the default, Data Grid Operator does the following:
- Creates a ServiceMonitor named <cluster_name>-monitor.
- Adds the infinispan.org/monitoring: 'true' annotation to your Infinispan CR metadata, if the value is not already explicitly set.
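The resulting annotation in the Infinispan CR metadata looks like this sketch:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
  annotations:
    # Enables the Prometheus ServiceMonitor (the default)
    infinispan.org/monitoring: 'true'
```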
To authenticate with Data Grid, Prometheus uses the operator credentials.
Verification
You can check that Prometheus is scraping Data Grid metrics as follows:
- In the OpenShift Web Console, select the </> Developer perspective and then select Monitoring.
- Open the Dashboard tab for the namespace where your Data Grid cluster runs.
Open the Metrics tab and confirm that you can query Data Grid metrics such as:
vendor_cache_manager_default_cluster_size
14.1.1. Disabling the Prometheus service monitor Copy linkLink copied to clipboard!
You can disable the ServiceMonitor if you do not want Prometheus to scrape metrics for your Data Grid cluster.
Procedure
- Set 'false' as the value for the infinispan.org/monitoring annotation in your Infinispan CR.
- Apply the changes.
14.1.2. Configuring Service Monitor Target Labels Copy linkLink copied to clipboard!
You can configure the generated ServiceMonitor to propagate Service labels to the underlying metrics using the ServiceMonitor spec.targetLabels field. Use the Service labels to filter and aggregate the metrics collected from the monitored endpoints.
Procedure
-
- Define labels to apply to your service by setting the infinispan.org/targetLabels annotation in your Infinispan CR.
- Specify a comma-separated list of the labels required in your metrics using the infinispan.org/serviceMonitorTargetLabels annotation on your Infinispan CR.
- Apply the changes.
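A hedged sketch of the two annotations; the label names app and environment are illustrative:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
  annotations:
    # Labels applied to the Service
    infinispan.org/targetLabels: "app,environment"
    # Subset of those labels propagated to metrics via ServiceMonitor spec.targetLabels
    infinispan.org/serviceMonitorTargetLabels: "environment"
```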
14.2. Installing the Grafana Operator Copy linkLink copied to clipboard!
To support various needs, Data Grid Operator integrates with the community version of the Grafana Operator to create dashboards for Data Grid services.
Until Grafana is integrated with OpenShift user workload monitoring, the only option is to rely on the community version. You can install the Grafana Operator on OpenShift from the OperatorHub and should create a subscription for the alpha channel.
However, as is the policy for all Community Operators, Red Hat does not certify the Grafana Operator and does not provide support for it in combination with Data Grid. When you install the Grafana Operator you are prompted to acknowledge a warning about the community version before you can continue.
14.3. Creating Grafana data sources Copy linkLink copied to clipboard!
Create a GrafanaDatasource CR so you can visualize Data Grid metrics in Grafana dashboards.
Prerequisites
-
Have an
occlient. -
Have
cluster-adminaccess to OpenShift Container Platform. - Enable monitoring for user-defined projects on OpenShift Container Platform.
-
Install the Grafana Operator from the alpha channel and create a
GrafanaCR.
Procedure
- Create a ServiceAccount that lets Grafana read Data Grid metrics from Prometheus:
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: infinispan-monitoring
- Apply the ServiceAccount:
  oc apply -f service-account.yaml
- Grant cluster-monitoring-view permissions to the ServiceAccount:
  oc adm policy add-cluster-role-to-user cluster-monitoring-view -z infinispan-monitoring
Create a Grafana data source.
- Retrieve the token for the ServiceAccount:
  oc serviceaccounts get-token infinispan-monitoring
- Define a GrafanaDataSource that includes the token in the spec.datasources.secureJsonData.httpHeaderValue1 field.
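A hedged sketch of such a GrafanaDataSource, assuming the integreatly.org/v1alpha1 API of the community Grafana Operator and the default Thanos querier endpoint on OpenShift:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: grafanadatasource
spec:
  name: datasource.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
      jsonData:
        httpHeaderName1: Authorization
        timeInterval: 5s
        tlsSkipVerify: true
      secureJsonData:
        # Paste the ServiceAccount token retrieved in the previous step
        httpHeaderValue1: "Bearer <token>"
      isDefault: true
```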
- Apply the GrafanaDataSource:
  oc apply -f grafana-datasource.yaml
Next steps
Enable Grafana dashboards with the Data Grid Operator configuration properties.
14.4. Configuring Data Grid dashboards Copy linkLink copied to clipboard!
Data Grid Operator provides global configuration properties that let you configure Grafana dashboards for Data Grid clusters.
You can modify global configuration properties while Data Grid Operator is running.
Prerequisites
- Data Grid Operator must watch the namespace where the Grafana Operator is running.
Procedure
- Create a ConfigMap named infinispan-operator-config in the Data Grid Operator namespace.
- Specify the namespace of your Data Grid cluster with the data.grafana.dashboard.namespace property.
  Note: Deleting the value for this property removes the dashboard. Changing the value moves the dashboard to that namespace.
- Specify a name for the dashboard with the data.grafana.dashboard.name property.
- If necessary, specify a monitoring key with the data.grafana.dashboard.monitoring.key property.
- Create infinispan-operator-config or update the configuration:
  oc apply -f infinispan-operator-config.yaml
- Open the Grafana UI, which is available at:
  oc get routes grafana-route -o jsonpath=https://"{.spec.host}"
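A hedged sketch of the ConfigMap described above; the property values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: infinispan-operator-config
data:
  # Namespace where the Data Grid cluster runs
  grafana.dashboard.namespace: infinispan
  # Name for the Grafana dashboard
  grafana.dashboard.name: infinispan
  # Optional monitoring key
  grafana.dashboard.monitoring.key: middleware
```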
14.5. Enabling JMX remote ports for Data Grid clusters Copy linkLink copied to clipboard!
Enable JMX remote ports to expose Data Grid MBeans and to integrate Data Grid with external monitoring systems such as Cryostat.
When you enable JMX for a Data Grid cluster, the following occurs:
- Each Data Grid server pod exposes an authenticated JMX endpoint on port 9999 using the "admin" security realm, which includes the Operator user credentials.
- The <cluster-name>-admin Service exposes port 9999.
You can enable or disable JMX only during the creation of the Infinispan CR. Once the CR instance is created, you cannot modify the JMX settings.
Procedure
- Enable JMX in your Infinispan CR.
- Retrieve the Operator user credentials to authenticate client JMX connections:
  oc get secret infinispan-generated-operator-secret -o jsonpath="{.data.identities\.yaml}" | base64 --decode
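Enabling JMX is a single field in the Infinispan CR; a minimal sketch:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  jmx:
    # Can only be set when the CR is created; not modifiable afterwards
    enabled: true
```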
Additional resources
14.6. Setting up JFR recordings with Cryostat Copy linkLink copied to clipboard!
Enable JDK Flight Recorder (JFR) monitoring for your Data Grid clusters that run on OpenShift.
JFR recordings with Cryostat
JFR provides insights into various aspects of JVM performance to ease cluster inspection and debugging. Depending on your requirements, you can store and analyze your recordings using the integrated tools provided by Cryostat or export the recordings to an external monitoring application.
Prerequisites
- Install the Cryostat Operator. You can install the Cryostat Operator in your OpenShift project by using Operator Lifecycle Manager (OLM).
- Have JMX enabled on your Data Grid cluster. You must enable JMX before deploying the cluster, as JMX settings cannot be modified after deployment.
Procedure
- Create a Cryostat CR in the same namespace as your Infinispan CR.
  Note: The Cryostat Operator requires cert-manager for traffic encryption. If cert-manager is enabled but not installed, the deployment fails. For details, see the Installing Cryostat guide.
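A hedged sketch of a minimal Cryostat CR, assuming the operator.cryostat.io/v1beta2 API of the Cryostat Operator; the name matches the commands in the following steps:

```yaml
apiVersion: operator.cryostat.io/v1beta2
kind: Cryostat
metadata:
  name: cryostat-sample
spec:
  # Requires cert-manager to be installed in the cluster
  enableCertManager: true
```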
- Wait for the Cryostat CR to be ready:
  oc wait -n <namespace> --for=condition=MainDeploymentAvailable cryostat/cryostat-sample
- Open the Cryostat status.applicationUrl:
  oc -n <namespace> get cryostat cryostat-sample
- Retrieve the Operator user credentials to authenticate client JMX connections in the Cryostat UI:
  oc get secret infinispan-generated-operator-secret -o jsonpath="{.data.identities\.yaml}" | base64 --decode
- In the Cryostat UI, navigate to the Security menu.
- Click the Add button. The Store Credentials window opens.
In the Match Expression field, enter match expression details in the following format:
target.labels['infinispan_cr'] == '<cluster_name>'
Chapter 15. Guaranteeing availability with anti-affinity Copy linkLink copied to clipboard!
Kubernetes includes anti-affinity capabilities that protect workloads from single points of failure.
15.1. Anti-affinity strategies Copy linkLink copied to clipboard!
Each Data Grid node runs in a pod, and each pod runs on an OpenShift node in the cluster. Each Red Hat OpenShift node runs on a physical host system. Anti-affinity works by distributing Data Grid nodes across OpenShift nodes, ensuring that your Data Grid clusters remain available even if hardware failures occur.
Data Grid Operator offers two anti-affinity strategies:
- kubernetes.io/hostname - Data Grid replica pods are scheduled on different OpenShift nodes.
- topology.kubernetes.io/zone - Data Grid replica pods are scheduled across multiple zones.
Fault tolerance
Anti-affinity strategies guarantee cluster availability in different ways.
The equations in the following section apply only if the number of OpenShift nodes or zones is greater than the number of Data Grid nodes.
Scheduling pods on different OpenShift nodes
Provides tolerance of x node failures for the following types of cache:
- Replicated: x = spec.replicas - 1
- Distributed: x = num_owners - 1
Scheduling pods across multiple zones
Provides tolerance of x zone failures when x zones exist for the following types of cache:
- Replicated: x = spec.replicas - 1
- Distributed: x = num_owners - 1
- spec.replicas: Defines the number of pods in each Data Grid cluster.
- num_owners: The cache configuration attribute that defines the number of replicas for each entry in the cache.
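As a worked example of these equations (values are hypothetical): a cluster with spec.replicas: 3 holding a replicated cache tolerates 2 node failures, while a distributed cache with num_owners: 2 on the same cluster tolerates only 1:

```yaml
# Hypothetical values for illustration only.
spec:
  replicas: 3   # replicated cache: x = 3 - 1 = 2 node failures tolerated
# Cache configuration attribute:
#   num_owners: 2   # distributed cache: x = 2 - 1 = 1 node failure tolerated
# The equations hold only when there are more OpenShift nodes (or zones)
# than Data Grid nodes, so that every pod lands on a distinct node or zone.
```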
15.2. Configuring anti-affinity
Specify where OpenShift schedules pods for your Data Grid clusters to ensure availability.
Procedure
- Add the spec.affinity block to your Infinispan CR.
- Configure anti-affinity strategies as necessary.
- Apply your Infinispan CR.
15.2.1. Anti-affinity strategy configurations
Configure anti-affinity strategies in your Infinispan CR to control where OpenShift schedules Data Grid replica pods.
| Topology keys | Description |
|---|---|
| topology.kubernetes.io/zone | Schedules Data Grid replica pods across multiple zones. |
| kubernetes.io/hostname | Schedules Data Grid replica pods on different OpenShift nodes. |
Schedule pods on different OpenShift nodes
The following is the anti-affinity strategy that Data Grid Operator uses if you do not configure the spec.affinity field in your Infinispan CR:
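The YAML sample for this default strategy did not survive extraction. A hedged sketch of a preferred, hostname-based pod anti-affinity block (the pod labels shown are illustrative assumptions):

```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod        # illustrative label
              clusterName: <cluster_name>
          topologyKey: "kubernetes.io/hostname"
```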
Requiring different nodes
In the following example, OpenShift does not schedule Data Grid pods if different nodes are not available:
To ensure that you can schedule Data Grid replica pods on different OpenShift nodes, the number of OpenShift nodes available must be greater than the value of spec.replicas.
Schedule pods across multiple OpenShift zones
The following example prefers multiple zones when scheduling pods but schedules Data Grid replica pods on different OpenShift nodes if it is not possible to schedule across zones:
Requiring multiple zones
The following example uses the zone strategy only when scheduling Data Grid replica pods:
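The corresponding YAML sample is missing from this extraction. A hedged sketch of a strategy that requires scheduling across zones (pod labels are illustrative assumptions):

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: infinispan-pod        # illustrative label
            clusterName: <cluster_name>
        topologyKey: "topology.kubernetes.io/zone"
```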
Chapter 16. Creating caches with Data Grid Operator
Use Cache CRs to add cache configuration with Data Grid Operator and control how Data Grid stores your data.
16.1. Data Grid caches
Cache configuration defines the characteristics and features of the data store and must conform to the Data Grid schema. Data Grid recommends creating standalone files in XML or JSON format that define your cache configuration. You should separate Data Grid configuration from application code for easier validation and to avoid the situation where you need to maintain XML snippets in Java or some other client language.
To create caches with Data Grid clusters running on OpenShift, you should:
- Use Cache CRs as the mechanism for creating caches through the OpenShift front end.
- Use Batch CRs to create multiple caches at a time from standalone configuration files.
- Access Data Grid Console and create caches in XML or JSON format.
You can use Hot Rod or HTTP clients, but Data Grid recommends Cache CRs or Batch CRs unless your specific use case requires programmatic remote cache creation.
Cache CRs
- Cache CRs apply to Data Grid service pods only.
- Each Cache CR corresponds to a single cache on the Data Grid cluster.
16.2. Creating caches with the Cache CR
Complete the following steps to create caches on Data Grid service clusters using valid configuration in XML or YAML format.
Procedure
- Create a Cache CR with a unique value in the metadata.name field.
- Specify the target Data Grid cluster with the spec.clusterName field.
- Name your cache with the spec.name field.

  Note: The name attribute in the cache configuration does not take effect. If you do not specify a name with the spec.name field, the cache uses the value of the metadata.name field.

- Add a cache configuration with the spec.template field.
- Apply the Cache CR, for example:

  oc apply -f mycache.yaml
  cache.infinispan.org/mycachedefinition created
Cache CR examples
XML
YAML
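The XML and YAML samples did not survive extraction. A hedged YAML sketch of a Cache CR that follows the procedure above (cluster name, cache name, and template settings are illustrative):

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycachedefinition      # unique CR name
spec:
  clusterName: infinispan      # target Data Grid cluster
  name: mycache                # cache name on the cluster
  template: |
    distributedCache:
      mode: "SYNC"
      owners: "2"
```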
16.3. Updating caches with the Cache CR
You can control how Data Grid Operator handles modifications to the cache configuration in the Cache CR.
Data Grid Operator attempts to update the cache configuration on the Data Grid Server at runtime. If the update fails, Data Grid Operator uses one of the following strategies:
- retain strategy
  The Operator updates the status of the Cache CR to Ready=False. You can manually delete the Cache CR and create a new cache configuration. This is the default strategy.
- recreate strategy
  The Operator deletes the cache from the Data Grid cluster and creates a new cache with the latest spec.template value from the Cache CR.

  Important: Configure the recreate strategy only if your deployment can tolerate data loss.
Prerequisites
- Have a valid Cache CR.
Procedure
- Use the spec.updates.strategy field to set the Cache CR strategy.

  mycache.yaml

  spec:
    updates:
      strategy: recreate

- Apply changes to the Cache CR, for example:

  oc apply -f mycache.yaml
16.4. Adding persistent cache stores
You can add persistent cache stores to Data Grid service pods to save data to the persistent volume.
Data Grid creates a Single File cache store, a .dat file, in the /opt/infinispan/server/data directory.
Procedure
- Add the <file-store/> element to the persistence configuration in your Data Grid cache, as in the following example:
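The example configuration did not survive extraction. A minimal sketch of a cache configuration with a Single File store (cache name and mode are illustrative):

```xml
<distributed-cache name="mycache" mode="SYNC">
  <persistence>
    <!-- Creates a Single File store in /opt/infinispan/server/data -->
    <file-store/>
  </persistence>
</distributed-cache>
```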
16.5. Adding caches to Cache service pods
Cache service pods include a default cache configuration with recommended settings. This default cache lets you start using Data Grid without the need to create caches.
Because the default cache provides recommended settings, you should create caches only as copies of the default. If you want multiple custom caches you should create Data Grid service pods instead of Cache service pods.
Procedure
- Access the Data Grid Console and provide a copy of the default configuration in XML or JSON format.
- Use the Data Grid CLI to create a copy from the default cache as follows:

  [//containers/default]> create cache --template=default mycache
16.5.1. Default cache configuration
This topic describes the default cache configuration for Cache service pods.
Default caches:
- Use synchronous distribution to store data across the cluster.
- Create two replicas of each entry on the cluster.
- Store cache entries as bytes in native memory (off-heap).
- Define the maximum size for the data container in bytes. Data Grid Operator calculates the maximum size when it creates pods.
- Evict cache entries to control the size of the data container. You can enable automatic scaling so that Data Grid Operator adds pods when memory usage increases instead of removing entries.
- Use a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions.
- Specify a merge policy that removes entries from the cache when Data Grid detects conflicts.
Chapter 17. Running batch operations
Data Grid Operator provides a Batch CR that lets you create Data Grid resources in bulk. Batch CR uses the Data Grid command line interface (CLI) in batch mode to carry out sequences of operations.
Modifying a Batch CR instance has no effect. Batch operations are "one-time" events that modify Data Grid resources. To update .spec fields for the CR, or when a batch operation fails, you must create a new instance of the Batch CR.
17.1. Running inline batch operations
Include your batch operations directly in a Batch CR if they do not require separate configuration artifacts.
Procedure
- Create a Batch CR.
- Specify the name of the Data Grid cluster where you want the batch operations to run as the value of the spec.cluster field.
- Add each CLI command to run on a line in the spec.config field.
- Apply your Batch CR.

  oc apply -f mybatch.yaml

- Wait for the Batch CR to succeed.

  oc wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch
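Putting the steps above together, a hedged sketch of an inline Batch CR (cluster name, cache name, and the built-in org.infinispan.DIST_SYNC template are illustrative choices):

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Batch
metadata:
  name: mybatch
spec:
  cluster: infinispan          # target Data Grid cluster
  config: |                    # one CLI command per line
    create cache --template=org.infinispan.DIST_SYNC mycache
    put --cache=mycache hello world
    put --cache=mycache hola mundo
```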
17.2. Creating ConfigMaps for batch operations
Create a ConfigMap so that additional files, such as Data Grid cache configuration, are available for batch operations.
Prerequisites
For demonstration purposes, you should add some configuration artifacts to your host filesystem before you start the procedure:
- Create a /tmp/mybatch directory where you can add some files.

  mkdir -p /tmp/mybatch

- Create a Data Grid cache configuration.
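The cache configuration sample is missing here. A minimal sketch of a /tmp/mybatch/mycache.xml that the batch file in this procedure can reference (all settings illustrative):

```xml
<distributed-cache name="mycache" mode="SYNC">
  <!-- Illustrative bounded cache; adjust to your needs -->
  <memory max-count="1000"/>
</distributed-cache>
```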
Procedure
- Create a batch file that contains all commands you want to run.

  For example, the following batch file creates a cache named "mycache" and adds two entries to it:

  create cache mycache --file=/etc/batch/mycache.xml
  put --cache=mycache hello world
  put --cache=mycache hola mundo

  Important: The ConfigMap is mounted in Data Grid pods at /etc/batch. You must prepend all --file= directives in your batch operations with that path.

- Ensure all configuration artifacts that your batch operations require are in the same directory as the batch file.

  ls /tmp/mybatch
  batch  mycache.xml

- Create a ConfigMap from the directory.

  oc create configmap mybatch-config-map --from-file=/tmp/mybatch
17.3. Running batch operations with ConfigMaps
Run batch operations that include configuration artifacts.
Prerequisites
- Create a ConfigMap that contains any files your batch operations require.
Procedure
- Create a Batch CR that specifies the name of a Data Grid cluster as the value of the spec.cluster field.
- Set the name of the ConfigMap that contains your batch file and configuration artifacts with the spec.configMap field.
- Apply your Batch CR.

  oc apply -f mybatch.yaml

- Wait for the Batch CR to succeed.

  oc wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch
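A hedged sketch of a Batch CR that references the ConfigMap created in the previous section (cluster and ConfigMap names are illustrative):

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Batch
metadata:
  name: mybatch
spec:
  cluster: infinispan              # target Data Grid cluster
  configMap: mybatch-config-map    # contains the batch file and artifacts
```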
17.4. Batch status messages
Verify and troubleshoot batch operations with the status.Phase field in the Batch CR.
| Phase | Description |
|---|---|
| Succeeded | All batch operations have completed successfully. |
| Initializing | Batch operations are queued and resources are initializing. |
| Ready | Batch operations are ready to start. |
| Running | Batch operations are in progress. |
| Failed | One or more batch operations were not successful. |
Failed operations
Batch operations are not atomic. If a command in a batch script fails, it does not affect the other operations or cause them to roll back.
If your batch operations have any server or syntax errors, you can view log messages in the status.Reason field of the Batch CR.
17.5. Example batch operations
Use these example batch operations as starting points for creating and modifying Data Grid resources with the Batch CR.
You can pass configuration files to Data Grid Operator only via a ConfigMap.
The ConfigMap is mounted in Data Grid pods at /etc/batch so you must prepend all --file= directives with that path.
17.5.1. Caches
- Create multiple caches from configuration files.
- Create a template from a file and then create caches from the template.
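The example batch files for this section did not survive extraction. A hedged sketch of what such a batch file might contain (cache names and file names are illustrative; files must be provided via the ConfigMap mounted at /etc/batch):

```
create cache sessions --file=/etc/batch/sessions.xml
create cache tokens --file=/etc/batch/tokens.xml
create cache --template=org.infinispan.DIST_SYNC mydistcache
```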
17.5.2. Counters
Use the Batch CR to create multiple counters that can increment and decrement to record the count of objects.
You can use counters to generate identifiers, act as rate limiters, or track the number of times a resource is accessed.
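The counter examples did not survive extraction. A hedged sketch based on the Data Grid CLI create counter command (counter names and option values are illustrative assumptions):

```
create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1
create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2
```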
17.5.3. Protobuf schema
Register Protobuf schema to query values in caches. Protobuf schema (.proto files) provide metadata about custom entities and control field indexing.
17.5.4. Tasks
Upload tasks that implement org.infinispan.tasks.ServerTask or scripts that are compatible with the javax.script scripting API.
Chapter 18. Backing up and restoring Data Grid clusters
Data Grid Operator lets you back up and restore Data Grid cluster state for disaster recovery and to migrate Data Grid resources between clusters.
18.1. Backup and Restore CRs
Backup and Restore CRs save in-memory data at runtime so you can easily recreate Data Grid clusters.
Applying a Backup or Restore CR creates a new pod that joins the Data Grid cluster as a zero-capacity member, which means it does not require cluster rebalancing or state transfer to join.
For backup operations, the pod iterates over cache entries and other resources and creates an archive, a .zip file, in the /opt/infinispan/backups directory on the persistent volume (PV).
Performing backups does not significantly impact performance because the other pods in the Data Grid cluster only need to respond to the backup pod as it iterates over cache entries.
For restore operations, the pod retrieves Data Grid resources from the archive on the PV and applies them to the Data Grid cluster.
When either the backup or restore operation completes, the pod leaves the cluster and is terminated.
Reconciliation
Data Grid Operator does not reconcile Backup and Restore CRs, which means that backup and restore operations are "one-time" events.
Modifying an existing Backup or Restore CR instance does not perform an operation or have any effect. If you want to update .spec fields, you must create a new instance of the Backup or Restore CR.
18.2. Backing up Data Grid clusters
Create a backup file that stores Data Grid cluster state to a persistent volume.
Prerequisites
- Create an Infinispan CR with spec.service.type: DataGrid.
- Ensure there are no active client connections to the Data Grid cluster.

  Data Grid backups do not provide snapshot isolation and data modifications are not written to the archive after the cache is backed up.
  To archive the exact state of the cluster, you should always disconnect any clients before you back it up.
Procedure
- Name the Backup CR with the metadata.name field.
- Specify the Data Grid cluster to back up with the spec.cluster field.
- Configure the persistent volume claim (PVC) that adds the backup archive to the persistent volume (PV) with the spec.volume.storage and spec.volume.storage.storageClassName fields.
- Optionally include spec.resources fields to specify which Data Grid resources you want to back up.

  If you do not include any spec.resources fields, the Backup CR creates an archive that contains all Data Grid resources. If you do specify spec.resources fields, the Backup CR creates an archive that contains those resources only. You can also use the * wildcard character.

- Apply your Backup CR.

  oc apply -f my-backup.yaml
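The YAML samples for this procedure are missing from this extraction. A hedged sketch of a Backup CR (cluster name, storage size, and storage class are illustrative):

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Backup
metadata:
  name: my-backup
spec:
  cluster: source-cluster          # Data Grid cluster to back up
  volume:
    storage: 1Gi                   # PVC size for the backup archive
    storageClassName: my-storage-class
  # Optional: restrict the archive to specific resources,
  # for example (field and resource names illustrative):
  # resources:
  #   caches:
  #     - "*"
```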
Verification
- Check that the status.phase field has a status of Succeeded in the Backup CR and that Data Grid logs have the following message:

  ISPN005044: Backup file created 'my-backup.zip'

- Run the following command to check that the backup is successfully created:

  oc describe Backup my-backup
18.3. Restoring Data Grid clusters
Restore Data Grid cluster state from a backup archive.
Prerequisites
- Create a Backup CR on a source cluster.
- Create a target Data Grid cluster of Data Grid service pods.

  Note: If you restore an existing cache, the operation overwrites the data in the cache but not the cache configuration.
  For example, you back up a distributed cache named mycache on the source cluster. You then restore mycache on a target cluster where it already exists as a replicated cache. In this case, the data from the source cluster is restored and mycache continues to have a replicated configuration on the target cluster.

- Ensure there are no active client connections to the target Data Grid cluster you want to restore.

  Cache entries that you restore from a backup can overwrite more recent cache entries.
  For example, a client performs a cache.put(k=2) operation and you then restore a backup that contains k=1.
Procedure
- Name the Restore CR with the metadata.name field.
- Specify a Backup CR to use with the spec.backup field.
- Specify the Data Grid cluster to restore with the spec.cluster field.
- Optionally add the spec.resources field to restore specific resources only.
- Apply your Restore CR.

  oc apply -f my-restore.yaml
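A hedged sketch of a Restore CR for this procedure (names are illustrative):

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Restore
metadata:
  name: my-restore
spec:
  backup: my-backup            # Backup CR to restore from
  cluster: target-cluster      # Data Grid cluster to restore
  # Optional: restore specific resources only (illustrative):
  # resources:
  #   caches:
  #     - mycache
```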
Verification
- Check that the status.phase field has a status of Succeeded in the Restore CR and that Data Grid logs have the following message:

  ISPN005045: Restore 'my-backup' complete
You should then open the Data Grid Console or establish a CLI connection to verify data and Data Grid resources are restored as expected.
18.4. Backup and restore status
Backup and Restore CRs include a status.phase field that provides the status for each phase of the operation.
| Status | Description |
|---|---|
| Initializing | The system has accepted the request and the controller is preparing the underlying resources to create the pod. |
| Initialized | The controller has prepared all underlying resources successfully. |
| Running | The pod is created and the operation is in progress on the Data Grid cluster. |
| Succeeded | The operation has completed successfully on the Data Grid cluster and the pod is terminated. |
| Failed | The operation did not successfully complete and the pod is terminated. |
| Unknown | The controller cannot obtain the status of the pod or determine the state of the operation. This condition typically indicates a temporary communication error with the pod. |
18.4.1. Handling failed backup and restore operations
If the status.phase field of the Backup or Restore CR is Failed, you should examine pod logs to determine the root cause before you attempt the operation again.
Procedure
- Examine the logs for the pod that performed the failed operation.

  Pods are terminated but remain available until you delete the Backup or Restore CR.

  oc logs <backup|restore_pod_name>

- Resolve any error conditions or other causes of failure as indicated by the pod logs.
- Create a new instance of the Backup or Restore CR and attempt the operation again.
Chapter 19. Deploying custom code to Data Grid
Add custom code, such as scripts and event listeners, to your Data Grid clusters.
Before you can deploy custom code to Data Grid clusters, you need to make it available. To do this you can copy artifacts from a persistent volume (PV), download artifacts from an HTTP or FTP server, or use both methods.
19.1. Copying code artifacts to Data Grid clusters
Add your artifacts to a persistent volume (PV) and then copy them to Data Grid pods.
This procedure explains how to use a temporary pod that mounts a persistent volume claim (PVC) that:
- Lets you add code artifacts to the PV (perform a write operation).
- Allows Data Grid pods to load code artifacts from the PV (perform a read operation).
To perform these read and write operations, you need certain PV access modes. However, support for different PVC access modes is platform dependent.
It is beyond the scope of this document to provide instructions for creating PVCs with different platforms. For simplicity, the following procedure shows a PVC with the ReadWriteMany access mode.
In some cases only the ReadOnlyMany or ReadWriteOnce access modes are available. You can use a combination of those access modes by reclaiming and reusing PVCs with the same spec.volumeName.
Using ReadWriteOnce access mode results in all Data Grid pods in a cluster being scheduled on the same OpenShift node.
Procedure
- Change to the namespace for your Data Grid cluster.

  oc project rhdg-namespace

- Create a PVC for your custom code artifacts, for example:
- Apply your PVC.
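The PVC manifest referenced in this procedure (saved as datagrid-libs.yaml) did not survive extraction. A hedged sketch using the ReadWriteMany access mode discussed above (name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datagrid-libs
spec:
  accessModes:
    - ReadWriteMany    # see the note above about platform-dependent modes
  resources:
    requests:
      storage: 100Mi   # illustrative size for code artifacts
```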
oc apply -f datagrid-libs.yaml
- Create a pod that mounts the PVC, for example:
- Add the pod to the Data Grid namespace and wait for it to be ready.
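The pod manifest (datagrid-libs-pod.yaml) is missing from this extraction. A hedged sketch of a temporary pod that mounts the PVC (image, command, and mount path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datagrid-libs-pod
spec:
  volumes:
    - name: lib-pv-storage
      persistentVolumeClaim:
        claimName: datagrid-libs
  containers:
    - name: lib-pv-container
      image: registry.access.redhat.com/ubi9/ubi-minimal  # illustrative image
      command: ["sleep", "3600"]     # keep the pod alive while copying files
      volumeMounts:
        - mountPath: /tmp/libs       # target for the oc cp command below
          name: lib-pv-storage
```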
  oc apply -f datagrid-libs-pod.yaml
  oc wait --for=condition=ready --timeout=2m pod/datagrid-libs-pod

- Copy your code artifacts to the pod so that they are loaded into the PVC.

  For example, to copy code artifacts from a local libs directory, do the following:

  oc cp --no-preserve=true libs datagrid-libs-pod:/tmp/

- Delete the pod.

  oc delete pod datagrid-libs-pod

- Specify the persistent volume with spec.dependencies.volumeClaimName in your Infinispan CR and then apply the changes.
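A hedged sketch of the resulting Infinispan CR (cluster name and replica count are illustrative):

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  dependencies:
    volumeClaimName: datagrid-libs   # PVC holding the code artifacts
  service:
    type: DataGrid
```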
If you update your custom code on the persistent volume, you must restart the Data Grid cluster so it can load the changes.
19.2. Downloading code artifacts
Add your artifacts to an HTTP or FTP server so that Data Grid Operator downloads them to the {lib_path} directory on each Data Grid node.
When downloading files, Data Grid Operator can automatically detect the file type. Data Grid Operator also extracts archived files, such as zip or tgz, to the filesystem after the download completes.
You can also download Maven artifacts using the groupId:artifactId:version format, for example org.postgresql:postgresql:42.3.1.
Each time Data Grid Operator creates a Data Grid node it downloads the artifacts to the node.
Prerequisites
- Host your code artifacts on an HTTP or FTP server or publish them to a Maven repository.
Procedure
- Add the spec.dependencies.artifacts field to your Infinispan CR.
- Do one of the following:
  - Specify the location of the file to download via HTTP or FTP as the value of the spec.dependencies.artifacts.url field.
  - Provide the Maven artifact to download with the groupId:artifactId:version format as the value of the spec.dependencies.artifacts.maven field.
- Optionally specify a checksum to verify the integrity of the download with the spec.dependencies.artifacts.hash field.

  The hash field requires a value in the format of <algorithm>:<checksum> where <algorithm> is sha1|sha224|sha256|sha384|sha512|md5.

- Apply the changes.
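Combining these steps, a hedged sketch of an Infinispan CR that downloads artifacts (the URL and checksum are placeholders; the Maven coordinates are taken from the text above):

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  dependencies:
    artifacts:
      - url: http://example.com:8080/artifacts/one.zip
        hash: sha256:<checksum>            # optional integrity check
      - maven: org.postgresql:postgresql:42.3.1
  service:
    type: DataGrid
```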
Chapter 20. Sending cloud events from Data Grid clusters
Configure Data Grid as a Knative source by sending CloudEvents to Apache Kafka topics.
Sending cloud events with Red Hat OpenShift Serverless is available as a technology preview feature.
20.1. Technology preview features
Technology preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using technology preview features or capabilities for production. These features provide early access to upcoming product features, which enables you to test functionality and provide feedback during the development process.
For more information, see Red Hat Technology Preview Features Support Scope.
This feature is deprecated and will have no effect on Data Grid releases after RHDG 8.4.x.
20.2. Cloud events
You can send CloudEvents from Data Grid clusters when entries in caches are created, updated, removed, or expired.
Data Grid sends structured events to Kafka in JSON format, as in the following example:
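The JSON sample did not survive extraction. The following sketch combines the standard CloudEvents envelope fields with the Data Grid fields described in the table below; all values are illustrative assumptions:

```json
{
  "specversion": "1.0",
  "source": "/infinispan/<cluster_name>/<cache_name>",
  "type": "org.infinispan.entry.created",
  "subject": "key-1",
  "id": "key-1:CommandInvocation:node-a:0",
  "data": {
    "property": "value"
  }
}
```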
| Field | Description |
|---|---|
| type | Prefixes events for Data Grid cache entries with org.infinispan.entry. |
| data | Entry value. |
| subject | Entry key, converted to string. |
| id | Generated identifier for the event. |
20.3. Enabling cloud events
Configure Data Grid to send CloudEvents.
Prerequisites
- Set up a Kafka cluster that listens for Data Grid topics.
Procedure
- Add spec.cloudEvents to your Infinispan CR.
  - Configure the number of acknowledgements with the spec.cloudEvents.acks field. Values are "0", "1", or "all".
  - List the Kafka servers to which Data Grid sends events with the spec.cloudEvents.bootstrapServers field.
  - Specify the Kafka topic for Data Grid events with the spec.cloudEvents.cacheEntriesTopic field.

  spec:
    cloudEvents:
      acks: "1"
      bootstrapServers: my-cluster-kafka-bootstrap_1.<namespace_1>.svc:9092,my-cluster-kafka-bootstrap_2.<namespace_2>.svc:9092
      cacheEntriesTopic: target-topic

- Apply your changes.
Chapter 21. Establishing remote client connections
Connect to Data Grid clusters from the Data Grid Console, Command Line Interface (CLI), and remote clients.
21.1. Client connection details
Client connections to Data Grid require the following information:
- Hostname
- Port
- Authentication credentials, if required
- TLS certificate, if you use encryption
Hostnames
The hostname you use depends on whether clients are running on the same OpenShift cluster as Data Grid.
Client applications running on the same OpenShift cluster use the internal service name for the Data Grid cluster.
metadata:
  name: infinispan
Client applications running on a different OpenShift cluster, or outside OpenShift, use a hostname that depends on how Data Grid is exposed on the network.
A LoadBalancer service uses the URL for the load balancer. A NodePort service uses the node hostname. A Red Hat OpenShift Route uses either a custom hostname that you define or a hostname that the system generates.
Ports
Client connections on OpenShift and through a LoadBalancer service use port 11222.
NodePort services use a port in the range of 30000 to 60000. Routes use either port 80 (unencrypted) or 443 (encrypted).
21.2. Connecting to Data Grid clusters with remote shells
Start a remote shell session to Data Grid clusters and use the command line interface (CLI) to work with Data Grid resources and perform administrative operations.
Prerequisites
- Have kubectl-infinispan on your PATH.
- Have valid Data Grid credentials.
Procedure
- Run the infinispan shell command to connect to your Data Grid cluster.

  oc infinispan shell <cluster_name>

  Note: If you have access to authentication secrets and there is only one Data Grid user, the kubectl-infinispan plugin automatically detects your credentials and authenticates to Data Grid. If your deployment has multiple Data Grid credentials, specify a user with the --username argument and enter the corresponding password when prompted.

- Perform CLI operations as required.

  Tip: Press the tab key or use the --help argument to view available options and help text.

- Use the quit command to end the remote shell session.
21.3. Accessing Data Grid Console
Access the console to create caches, perform administrative operations, and monitor your Data Grid clusters.
Prerequisites
- Expose Data Grid on the network so you can access the console through a browser.
  For example, configure a LoadBalancer service or create a Route.
Procedure
- Access the console from any browser at $HOSTNAME:$PORT.

  Replace $HOSTNAME:$PORT with the network location where Data Grid is available.
The Data Grid Console should only be accessed via OpenShift services or an OpenShift Route exposing port 11222.
21.4. Hot Rod clients
Hot Rod is a binary TCP protocol that Data Grid provides for high-performance data transfer capabilities with remote clients.
Client intelligence
The Hot Rod protocol includes a mechanism that provides clients with an up-to-date view of the cache topology. Client intelligence improves performance by reducing the number of network hops for read and write operations.
Clients running in the same OpenShift cluster can access internal IP addresses for Data Grid pods so you can use any client intelligence.
HASH_DISTRIBUTION_AWARE is the default intelligence mechanism and enables clients to route requests to primary owners, which provides the best performance for Hot Rod clients.
Clients running on a different OpenShift cluster, or outside OpenShift, can access Data Grid by using a LoadBalancer, NodePort, or OpenShift Route.
Hot Rod client connections via OpenShift Route require encryption. You must configure TLS with SNI otherwise the Hot Rod connection fails.
For unencrypted Hot Rod client connections, you must use a LoadBalancer service or a NodePort service.
Hot Rod clients must use BASIC intelligence in the following situations:
- Connecting to Data Grid through a LoadBalancer service, a NodePort service, or an OpenShift Route.
- Failing over to a different OpenShift cluster when using cross-site replication.
OpenShift cluster administrators can define network policies that restrict traffic to Data Grid. In some cases network isolation policies can require you to use BASIC intelligence even when clients are running in the same OpenShift cluster but a different namespace.
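In the situations above, client intelligence can be set explicitly when building the client configuration. The following is a minimal sketch using the Infinispan Java Hot Rod client; the hostname is a placeholder and port 11222 is assumed:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class BasicIntelligenceClient {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$HOSTNAME") // placeholder: LoadBalancer, NodePort, or Route address
               .port(11222)
             // Topology-aware routing is not possible through a LoadBalancer,
             // NodePort, or Route, so use BASIC intelligence.
             .clientIntelligence(ClientIntelligence.BASIC);
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Obtain and use remote caches here.
      }
   }
}
```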
21.4.1. Hot Rod client configuration API
You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface.
Replace $SERVICE_HOSTNAME in the following examples with the internal service name of your Data Grid cluster.
metadata:
name: infinispan
You can configure the connection either programmatically with the ConfigurationBuilder API or declaratively in a hotrod-client.properties file. The values differ depending on whether the client runs on the same OpenShift cluster as Data Grid or outside OpenShift.
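As a minimal sketch (assuming the default port 11222 and placeholder credentials), a client running on the same OpenShift cluster can connect as follows; a client outside OpenShift that connects through a Route additionally needs TLS with the SNI hostname of the Route:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class OnOpenShiftClient {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME") // internal service name of the Data Grid cluster
               .port(11222)
             .security().authentication()
               .username("developer")   // placeholder credentials
               .password("changeme");
      // For clients outside OpenShift connecting through a Route, also configure
      // TLS with SNI, for example:
      // builder.security().ssl()
      //        .sniHostName("$ROUTE_HOSTNAME")
      //        .trustStoreFileName("/path/to/truststore.p12")
      //        .trustStorePassword("secret".toCharArray());
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Obtain and use remote caches here.
      }
   }
}
```

The equivalent hotrod-client.properties fragment, with the same placeholder values:

```properties
infinispan.client.hotrod.server_list = $SERVICE_HOSTNAME:11222
infinispan.client.hotrod.auth_username = developer
infinispan.client.hotrod.auth_password = changeme
```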
21.4.2. Configuring Hot Rod clients for certificate authentication
If you enable client certificate authentication, clients must present valid certificates when negotiating connections with Data Grid.
Validate strategy
If you use the Validate strategy, you must configure clients with a keystore so they can present signed certificates. You must also configure clients with Data Grid credentials and any suitable authentication mechanism.
Authenticate strategy
If you use the Authenticate strategy, you must configure clients with a keystore that contains signed certificates and valid Data Grid credentials as part of the distinguished name (DN). Hot Rod clients must also use the EXTERNAL authentication mechanism.
If you enable security authorization, you should assign a role with the appropriate permissions to the Common Name (CN) from the client certificate.
The following example shows a Hot Rod client configuration for client certificate authentication with the Authenticate strategy:
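The sketch below assumes PKCS12 keystore and truststore files at placeholder paths with a placeholder password; the Authenticate strategy requires the EXTERNAL mechanism:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CertAuthClient {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port(11222)
             .security()
               .authentication()
                 // The Authenticate strategy requires the EXTERNAL mechanism.
                 .saslMechanism("EXTERNAL")
               .ssl()
                 // Keystore with the signed client certificate (placeholder path).
                 .keyStoreFileName("/path/to/keystore.p12")
                 .keyStorePassword("secret".toCharArray())
                 // Truststore with the server certificate (placeholder path).
                 .trustStoreFileName("/path/to/truststore.p12")
                 .trustStorePassword("secret".toCharArray());
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Obtain and use remote caches here.
      }
   }
}
```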
21.4.3. Creating caches from Hot Rod clients
You can remotely create caches on Data Grid clusters running on OpenShift with Hot Rod clients. However, Data Grid recommends that you create caches using Data Grid Console, the CLI, or with Cache CRs instead of with Hot Rod clients.
Programmatically creating caches
The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager:
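A minimal sketch of this approach, assuming a cache named my-cache created from the org.infinispan.DIST_SYNC template:

```java
import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCaches {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("$SERVICE_HOSTNAME").port(11222);
      // Declare a cache to create from the DIST_SYNC template
      // if it does not already exist on the server.
      builder.remoteCache("my-cache").templateName(DefaultTemplate.DIST_SYNC);
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
         cache.put("hello", "world");
      }
   }
}
```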
This example shows how to create a cache named CacheWithXMLConfiguration using the XMLStringConfiguration() method to pass the cache configuration as XML:
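A sketch of that technique, with a deliberately minimal distributed-cache configuration:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.commons.configuration.XMLStringConfiguration;

public class CreateCacheWithXml {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("$SERVICE_HOSTNAME").port(11222);
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         String cacheName = "CacheWithXMLConfiguration";
         String xml = String.format(
               "<infinispan><cache-container>" +
               "<distributed-cache name=\"%s\" mode=\"SYNC\"/>" +
               "</cache-container></infinispan>", cacheName);
         // Create the cache on the server, or return it if it already exists.
         cacheManager.administration()
                     .getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
      }
   }
}
```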
Using Hot Rod client properties
When you invoke cacheManager.getCache() for named caches that do not exist, Data Grid creates them from the Hot Rod client properties instead of returning null.
Add cache configuration to hotrod-client.properties as in the following example:
# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml
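With those properties on the classpath, a plain getCache() call triggers the creation. A sketch, relying on the no-argument RemoteCacheManager constructor loading hotrod-client.properties from the classpath:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class PropertiesDrivenCaches {
   public static void main(String[] args) {
      // Loads hotrod-client.properties from the classpath.
      try (RemoteCacheManager cacheManager = new RemoteCacheManager()) {
         // If "my-cache" does not exist yet, it is created from the
         // org.infinispan.DIST_SYNC template named in the properties file.
         RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
         cache.put("hello", "world");
      }
   }
}
```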
21.5. Accessing the REST API
Data Grid provides a RESTful interface that you can interact with using HTTP clients.
Prerequisites
-
Expose Data Grid on the network so you can access the REST API.
For example, configure a LoadBalancer service or create a Route.
Procedure
Access the REST API with any HTTP client at $HOSTNAME:$PORT/rest/v2.
Replace $HOSTNAME:$PORT with the network location where Data Grid listens for client connections.
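For example, a client can list the cache names on the cluster with a GET request to /rest/v2/caches. A sketch using the Java 11 HttpClient; the host is a placeholder, and depending on your security configuration the request may also need authentication headers:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientExample {
   public static void main(String[] args) throws Exception {
      String host = "localhost:11222"; // placeholder: replace with $HOSTNAME:$PORT
      // GET /rest/v2/caches returns the cache names on the cluster.
      HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://" + host + "/rest/v2/caches"))
            .GET()
            .build();
      HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println(response.body());
   }
}
```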