Chapter 7. Setting up Data Grid services
Use Data Grid Operator to create clusters of Data Grid service pods.
7.1. Service types
Services are stateful applications, based on the Data Grid Server image, that provide flexible and robust in-memory data storage. Data Grid Operator supports only the DataGrid service type, which deploys Data Grid clusters with full configuration and capabilities. The Cache service type is no longer supported.
The DataGrid service type lets you:
- Back up data across global clusters with cross-site replication.
- Create caches with any valid configuration.
- Add file-based cache stores to save data in a persistent volume.
- Query values across caches using the Data Grid Query API.
- Use advanced Data Grid features and capabilities.
7.2. Creating Data Grid service pods
To use custom cache definitions along with Data Grid capabilities such as cross-site replication, create clusters of Data Grid service pods.
Procedure
- Create an Infinispan CR that sets spec.service.type: DataGrid and configures any other Data Grid service resources, as in the sketch after this procedure.
  Important: You cannot change the spec.service.type field after you create pods. To change the service type, you must delete the existing pods and create new ones.
- Apply your Infinispan CR to create the cluster.
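The following is a minimal sketch of such an Infinispan CR; the cluster name and replica count are illustrative, and you can add other Data Grid service resources under spec as needed.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  replicas: 2
  service:
    type: DataGrid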
7.2.1. Data Grid service CR
This topic describes the Infinispan CR for Data Grid service pods.
Field | Description |
---|---|
metadata.name | Names your Data Grid cluster. |
metadata.annotations.infinispan.org/monitoring | Automatically creates a ServiceMonitor for your cluster. |
spec.replicas | Specifies the number of pods in your cluster. |
spec.version | Specifies the Data Grid Server version of your cluster. |
spec.upgrades.type | Controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. |
spec.service.type | Configures the type of Data Grid service. A value of DataGrid creates a cluster with Data Grid service pods. |
spec.service.container | Configures the storage resources for Data Grid service pods. |
spec.service.sites | Configures cross-site replication. |
spec.security.endpointSecretName | Specifies an authentication secret that contains Data Grid user credentials. |
spec.security.endpointEncryption | Specifies TLS certificates and keystores to encrypt client connections. |
spec.container | Specifies JVM, CPU, and memory resources for Data Grid pods. |
spec.logging.categories | Configures Data Grid logging categories. |
spec.expose | Controls how Data Grid endpoints are exposed on the network. |
spec.configMapName | Specifies a ConfigMap that contains Data Grid configuration. |
spec.configListener | Creates a listener pod that allows Data Grid Operator to reconcile server-side modifications, such as caches created at runtime, with Data Grid resources. The listener is enabled by default. |
spec.configListener.logging.level | Configures the logging level for the ConfigListener deployment. |
spec.affinity | Configures anti-affinity strategies that guarantee Data Grid availability. |
7.3. Allocating storage resources
By default, Data Grid Operator allocates 1Gi for the persistent volume claim. However, you should adjust the amount of storage available to Data Grid service pods so that Data Grid can preserve cluster state during shutdown.
If available container storage is less than the amount of available memory, data loss can occur.
Procedure
- Allocate storage resources with the spec.service.container.storage field. Configure either the ephemeralStorage field or the storageClassName field as required, as shown in the examples below.
  Note: These fields are mutually exclusive. Add only one of them to your Infinispan CR.
- Apply the changes.
Ephemeral storage
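A minimal sketch of ephemeral storage configuration; setting ephemeralStorage to true means all data in storage is deleted when the cluster shuts down or restarts.
spec:
  service:
    container:
      ephemeralStorage: true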
Name of a StorageClass object
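A sketch that names a StorageClass object for the persistent volume claim; the storage amount and the class name my-storage-class are illustrative placeholders.
spec:
  service:
    container:
      storage: 2Gi
      storageClassName: my-storage-class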
Field | Description |
---|---|
spec.service.container.storage | Specifies the amount of storage for Data Grid service pods. |
spec.service.container.ephemeralStorage | Defines whether storage is ephemeral or permanent. Set the value to true to use ephemeral storage, which means all data in storage is deleted when clusters shut down or restart. The default value is false, which means storage is permanent. |
spec.service.container.storageClassName | Specifies the name of a StorageClass object to use for the persistent volume claim (PVC). |
7.3.1. Persistent volume claims
Data Grid Operator creates a persistent volume claim (PVC) and mounts container storage at: /opt/infinispan/server/data
Caches
When you create caches, Data Grid permanently stores their configuration so your caches are available after cluster restarts.
Data
If you want Data Grid service pods to persist data during cluster shutdown, use a file-based cache store by adding the <file-store/> element to your Data Grid cache configuration, as in the sketch below.
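For illustration, a Cache CR sketch that adds a file-based store to a distributed cache; the CR name, cluster name, and cache configuration are hypothetical examples rather than values from this guide.
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
spec:
  clusterName: infinispan
  template: |
    <distributed-cache>
      <persistence>
        <file-store/>
      </persistence>
    </distributed-cache>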
7.4. Allocating CPU and memory
Allocate CPU and memory resources to Data Grid pods with the Infinispan CR.
Data Grid Operator requests 1Gi of memory from the OpenShift scheduler when creating Data Grid pods. CPU requests are unbounded by default.
Procedure
- Allocate the number of CPU units with the spec.container.cpu field.
- Allocate the amount of memory, in bytes, with the spec.container.memory field.
  The cpu and memory fields have values in the format of <limit>:<requests>. For example, cpu: "2000m:1000m" limits pods to a maximum of 2000m of CPU and requests 1000m of CPU for each pod at startup. Specifying a single value sets both the limit and request.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect.
spec:
container:
cpu: "2000m:1000m"
memory: "2Gi:1Gi"
7.5. Setting JVM options
Pass additional JVM options to Data Grid pods at startup.
Procedure
- Configure JVM options with the spec.container field in your Infinispan CR.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so changes take effect.
JVM options
spec:
container:
extraJvmOpts: "-<option>=<value>"
routerExtraJvmOpts: "-<option>=<value>"
cliExtraJvmOpts: "-<option>=<value>"
Field | Description |
---|---|
extraJvmOpts | Specifies additional JVM options for the Data Grid Server. |
routerExtraJvmOpts | Specifies additional JVM options for the Gossip router. |
cliExtraJvmOpts | Specifies additional JVM options for the Data Grid CLI. |
7.6. Configuring pod probes
Optionally configure the values of the liveness, readiness, and startup probes used by Data Grid pods.
Data Grid Operator automatically configures the probe values to sensible defaults. Provide your own values only after you determine that the default values do not match your requirements.
Procedure
- Configure probe values with the spec.service.container.*Probe fields, as in the sketch after this procedure.
  Important: If no value is specified for a given probe value, then the Data Grid Operator default is used.
- Apply your Infinispan CR.
  If your cluster is running, Data Grid Operator restarts the Data Grid pods so the changes take effect.
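A minimal sketch of the probe configuration, assuming the standard Kubernetes probe fields (failureThreshold, periodSeconds, timeoutSeconds, and so on) are available under each *Probe field; the values shown are illustrative, not recommendations.
spec:
  service:
    container:
      livenessProbe:
        failureThreshold: 5
        periodSeconds: 10
        timeoutSeconds: 10
      readinessProbe:
        failureThreshold: 5
        periodSeconds: 10
        timeoutSeconds: 10
      startupProbe:
        failureThreshold: 600
        periodSeconds: 1
        timeoutSeconds: 10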
7.7. Configuring pod priority
Create one or more priority classes to indicate the importance of a pod relative to other pods. Pods with higher priority are scheduled ahead of pods with lower priority, ensuring prioritization of pods running critical workloads, especially when resources become constrained.
Prerequisites
- Have cluster-admin access to OpenShift.
Procedure
- Define a PriorityClass object by specifying its name and value.
high-priority.yaml
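A minimal sketch of such a definition; the class name, value, and description are illustrative.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Priority class for Data Grid pods that run critical workloads."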
- Create the priority class.
oc create -f high-priority.yaml
- Reference the priority class name in the pod configuration.
Infinispan CR
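A sketch of how the reference might look, assuming the Infinispan CR exposes the priority class through a spec.scheduling.priorityClassName field; verify the exact field against the CRD for your Data Grid Operator version.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
spec:
  # Assumed field; check your Infinispan CRD for the supported location
  scheduling:
    priorityClassName: high-priority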
You must reference an existing priority class name, otherwise the pod is rejected.
- Apply the changes.
7.8. FIPS mode for your Infinispan CR
The Red Hat OpenShift Container Platform can use certain Federal Information Processing Standards (FIPS) components that ensure OpenShift clusters meet the requirements of a FIPS compliance audit.
If you enabled FIPS mode on your OpenShift cluster, Data Grid Operator automatically enables FIPS mode for your Infinispan custom resource (CR).
Client certificate authentication is not currently supported with FIPS mode. Attempts to create an Infinispan CR with spec.security.endpointEncryption.clientCert set to a value other than None will fail.
7.9. Adjusting log pattern
To customize the log display for Data Grid log traces, update the log pattern. If no custom pattern is set, the default format is: %d{HH:mm:ss,SSS} %-5p (%t) [%c] %m%throwable%n
Procedure
- Configure Data Grid logging with the spec.logging.pattern field in your Infinispan CR.
spec:
  logging:
    pattern: "%X{address} %X{user} [%d{dd/MMM/yyyy:HH:mm:ss Z}]"
- Apply the changes.
Retrieve logs from Data Grid pods as required.
oc logs -f $POD_NAME
7.10. Adjusting log levels
Change levels for different Data Grid logging categories when you need to debug issues. You can also adjust log levels to reduce the number of messages for certain categories to minimize the use of container resources.
Procedure
- Configure Data Grid logging with the spec.logging.categories field in your Infinispan CR.
spec:
  logging:
    categories:
      org.infinispan: debug
      org.jgroups: debug
- Apply the changes.
Retrieve logs from Data Grid pods as required.
oc logs -f $POD_NAME
7.10.1. Logging reference
Find information about log categories and levels.
Root category | Description | Default level |
---|---|---|
org.infinispan | Data Grid messages | info |
org.jgroups | Cluster transport messages | info |
Log level | Description |
---|---|
trace | Provides detailed information about the running state of applications. This is the most verbose log level. |
debug | Indicates the progress of individual requests or activities. |
info | Indicates overall progress of applications, including lifecycle events. |
warn | Indicates circumstances that can lead to error or degrade performance. |
error | Indicates error conditions that might prevent operations or activities from being successful but do not prevent applications from running. |
Garbage collection (GC) messages
Data Grid Operator does not log GC messages by default. You can direct GC messages to stdout with the following JVM options:
extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"
7.11. Adding labels and annotations to Data Grid resources
Attach key/value labels and annotations to pods and services that Data Grid Operator creates and manages. Labels help you identify relationships between objects to better organize and monitor Data Grid resources. Annotations are arbitrary non-identifying metadata for client applications or deployment and management tooling.
Red Hat subscription labels are automatically applied to Data Grid resources.
Procedure
- Open your Infinispan CR for editing.
- Attach labels and annotations to Data Grid resources in the metadata.annotations section.
  - Define values for annotations directly in the metadata.annotations section.
  - Define values for labels with the metadata.labels field.
- Apply your Infinispan CR.
Custom annotations
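A sketch of custom annotations, assuming the infinispan.org/targetAnnotations annotation lists which annotations Data Grid Operator propagates to the resources it creates; the annotation names and values are illustrative.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  annotations:
    # Assumed propagation mechanism; the annotation names below are placeholders
    infinispan.org/targetAnnotations: my-annotation-1, my-annotation-2
    my-annotation-1: some-value
    my-annotation-2: some-value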
Custom labels
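A corresponding sketch for custom labels, assuming the infinispan.org/targetLabels annotation lists which labels to propagate; the label names and values are illustrative.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  annotations:
    infinispan.org/targetLabels: my-label-1, my-label-2
  labels:
    my-label-1: some-value
    my-label-2: some-value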
7.12. Adding labels and annotations with environment variables
Set environment variables for Data Grid Operator to add labels and annotations that automatically propagate to all Data Grid pods and services.
Procedure
Add labels and annotations to your Data Grid Operator subscription with the spec.config.env field in one of the following ways:
- Use the oc edit subscription command.
  oc edit subscription datagrid -n openshift-operators
- Use the Red Hat OpenShift Console.
- Navigate to Operators > Installed Operators > Data Grid Operator.
- From the Actions menu, select Edit Subscription.
Labels and annotations with environment variables
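A sketch of the subscription fragment; the variable names OPERATOR_TARGET_LABELS and OPERATOR_TARGET_ANNOTATIONS are placeholders for illustration only, so substitute the environment variables that your Data Grid Operator version documents for propagating labels and annotations.
spec:
  config:
    env:
      # Placeholder variable names and values for illustration
      - name: OPERATOR_TARGET_LABELS
        value: |
          {"my-label": "some-value"}
      - name: OPERATOR_TARGET_ANNOTATIONS
        value: |
          {"my-annotation": "some-value"}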
7.13. Defining environment variables in the Data Grid Operator subscription
You can define environment variables in your Data Grid Operator subscription either when you create or edit the subscription.
If you are using the Red Hat OpenShift Console, you must first install the Data Grid Operator and then edit the existing subscription.
spec.config.env field
- Includes the name and value fields to define environment variables.
ADDITIONAL_VARS variable
- Includes the names of environment variables in a JSON array. Environment variables within the value of the ADDITIONAL_VARS variable automatically propagate to each Data Grid Server pod managed by the associated Operator.
Prerequisites
- Ensure the Operator Lifecycle Manager (OLM) is installed.
- Have an oc client.
Procedure
Create a subscription definition YAML for your Data Grid Operator:
- Use the spec.config.env field to define environment variables.
- Within the ADDITIONAL_VARS variable, include environment variable names in a JSON array. For example, you can use environment variables to set the local time zone.
subscription-datagrid.yaml
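A sketch of such a subscription, assuming Data Grid Operator is installed from the redhat-operators catalog into the openshift-operators namespace; the time zone value and other details are illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: datagrid
  namespace: openshift-operators
spec:
  name: datagrid
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      # TZ is listed in ADDITIONAL_VARS so that it propagates to Data Grid Server pods
      - name: ADDITIONAL_VARS
        value: "[\"TZ\"]"
      - name: TZ
        value: "Europe/Madrid"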
Create a subscription for Data Grid Operator:
oc apply -f subscription-datagrid.yaml
Verification
Retrieve the environment variables from the datagrid subscription:
oc get subscription datagrid -n openshift-operators -o jsonpath='{.spec.config.env[*].name}'
Next steps
- Use the oc edit subscription command to modify the environment variables:
  oc edit subscription datagrid -n openshift-operators
- To ensure the changes take effect on your Data Grid clusters, you must recreate the existing clusters. Terminate the pods by deleting the StatefulSet associated with the existing Infinispan CRs.
- Alternatively, in the Red Hat OpenShift Console, navigate to Operators > Installed Operators > Data Grid Operator. From the Actions menu, select Edit Subscription.