Chapter 1. Getting started with Data Grid Server
Install the server, create a user, and start your first Data Grid cluster. Data Grid Server can run either as a containerized image or as a standalone Java process.
1.1. Data Grid Server Container Image
Data Grid Server as a container image requires a container manager, such as Docker or Podman.
1.1.1. Container registries
The Data Grid Server container image is available at the following registries:
| Registry | URL |
|---|---|
| Docker Hub | |
| Quay.io | |
1.1.2. Container execution
Start an instance of Infinispan Server by executing one of the following commands:
Docker
docker run -p 11222:11222 --name infinispan infinispan/server
Podman
podman run -p 11222:11222 --net=host --name infinispan infinispan/server
When using Podman, the --net=host option must be passed when not executing as sudo.
By default, the image has authentication enabled on all exposed endpoints. When you execute the above command, the image automatically generates a username/password pair with the admin role, prints the values to stdout, and then starts the Infinispan server with the authenticated endpoints exposed on port 11222. You must therefore use the printed credentials when accessing the exposed endpoints from clients.
It is also possible to provide an administrator username/password combination via environment variables:
Docker
docker run -p 11222:11222 -e USER="admin" -e PASS="changeme" --name infinispan infinispan/server
Podman
podman run -p 11222:11222 -e USER="admin" -e PASS="changeme" --net=host --name infinispan infinispan/server
We recommend using the auto-generated credentials or the USER and PASS env variables for initial development only. Providing authentication and authorization configuration via an [Identities Batch file](#identities-batch) allows for much greater control.
1.1.3. Hot Rod Clients
When connecting a Hot Rod client to the image, the following SASL properties must be configured on your client (with the username and password properties changed as required):
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
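For the Java Hot Rod client, these properties are typically supplied in a hotrod-client.properties file on the client application's classpath. A minimal sketch of creating such a file; the server_list address and credentials shown are placeholder values to adapt:

```shell
# Write a hotrod-client.properties file for the Java Hot Rod client.
# The server address and credentials below are placeholders; adjust as needed.
cat > hotrod-client.properties <<'EOF'
infinispan.client.hotrod.server_list=127.0.0.1:11222
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
EOF
# Confirm the SASL mechanism is set:
grep sasl_mechanism hotrod-client.properties
```
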
1.1.4. Identities Batch
User identities and roles can be defined by providing a CLI batch file via the IDENTITIES_BATCH env variable. All the CLI commands defined in this file are executed before the server is started, therefore it is only possible to execute offline commands; otherwise the container will fail to start. For example, including create cache … in the batch would fail because it requires a connection to a running Infinispan server.
Data Grid provides implicit roles for some users.
Check the Infinispan documentation to learn more about implicit roles and authorization.
Below is an example identities batch CLI file, identities.batch, that defines four users and their roles:
user create "Alan Shearer" -p "striker9" -g admin
user create "observer" -p "secret1"
user create "deployer" -p "secret2"
user create "Rigoberta Baldini" -p "secret3" -g monitor
To run the image using a local identities.batch, execute:
Docker
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --name infinispan infinispan/server
Podman
podman run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --net=host --name infinispan infinispan/server
1.1.5. Server Configuration
The Infinispan image passes all container arguments to the created server, therefore it is possible to configure the server in the same manner as a non-containerised deployment.
Below shows how a local directory can be mounted in order to run the Infinispan image with the configuration file my-infinispan-config.xml located in the user's current working directory.
Docker
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --name infinispan infinispan/server -c /user-config/my-infinispan-config.xml
Podman
podman run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --net=host --name infinispan infinispan/server -c /user-config/my-infinispan-config.xml
1.1.5.1. Kubernetes/OpenShift Clustering
When running in a managed environment such as Kubernetes, it is not possible to use multicast for initial node discovery, so we must use the JGroups DNS_PING protocol to discover cluster members. To enable this, provide the jgroups.dns.query property and configure the kubernetes stack.
To use the kubernetes stack with DNS_PING, execute the following:
Docker
docker run -v $(pwd):/user-config --name infinispan infinispan/server --bind-address=0.0.0.0 -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
Podman
podman run -v $(pwd):/user-config --name infinispan infinispan/server --bind-address=0.0.0.0 -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
1.1.5.2. Java Properties
It is possible to provide additional Java properties and JVM options to the server images via the JAVA_OPTIONS env variable. For example, to quickly configure CORS without providing a server.yaml file, do the following:
Docker
docker run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" --name infinispan infinispan/server
Podman
podman run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" --net=host --name infinispan infinispan/server
Using JAVA_OPTIONS will append the options to those determined by the server launch script, such as those that configure the JVM memory sizing. You can completely override these options by setting the JAVA_OPTS env variable.
1.1.5.3. Deploying artifacts to the server lib directory
Deploy artifacts to the server lib directory using the SERVER_LIBS env variable. For example, to add the PostgreSQL JDBC driver to the server:
Docker
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" --name infinispan infinispan/server
Podman
podman run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" --name infinispan infinispan/server
The SERVER_LIBS variable supports multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz or .zip formats will be extracted. Refer to the CLI install command help to learn about all possible arguments and options.
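As a sketch of the multi-artifact form, the following mixes a Maven coordinate with an archive URL; the URL and archive name are illustrative placeholders, not real artifacts:

```shell
# SERVER_LIBS takes space-separated artifacts; Maven coordinates and URLs
# can be combined. The URL below is a placeholder for illustration only.
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1 https://example.com/my-extensions.tar.gz" \
  --name infinispan infinispan/server
```

The .tar.gz archive would be extracted into the server lib directory, per the extraction behavior described above.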
1.1.6. Kubernetes
1.1.6.1. Liveness and Readiness Probes
It is recommended to use Infinispan's REST endpoint to determine whether the server is live and ready. To do this, you can use Kubernetes httpGet probes as follows:
livenessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
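You can query the same health endpoint manually to see what the probes will observe. A sketch, assuming a server reachable on localhost and the placeholder credentials used earlier in this chapter:

```shell
# Query the health status endpoint used by the probes.
# Credentials are placeholders; a healthy server typically reports HEALTHY.
curl -u admin:changeme http://127.0.0.1:11222/rest/v2/cache-managers/default/health/status
```
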
1.2. Data Grid Server distribution
1.2.1. Data Grid Server distribution requirements
Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions.
1.2.2. Downloading Data Grid Server distributions
The Data Grid Server distribution is an archive of Java libraries (JAR files) and configuration files.
Procedure
- Access the Red Hat customer portal.
- Download Red Hat Data Grid 8.6 Server from the software downloads section.
- Run the md5sum or sha256sum command with the server download archive as the argument, for example:
  sha256sum jboss-datagrid-${version}-server.zip
- Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page.
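The comparison can also be automated with sha256sum -c, which verifies a file against a "checksum  filename" line. A self-contained sketch using a stand-in file; the real archive name and checksum will differ:

```shell
# Create a stand-in file to demonstrate checksum verification:
printf 'example archive contents' > example-server.zip
# Record its checksum in the "<checksum>  <filename>" format:
sha256sum example-server.zip > example-server.zip.sha256
# Verify the file against the recorded checksum;
# prints "example-server.zip: OK" when it matches:
sha256sum -c example-server.zip.sha256
```
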
Reference
- Data Grid Server README describes the contents of the server distribution.
1.2.3. Installing Data Grid Server
Install the Data Grid Server distribution on a host system.
Prerequisites
- Download a Data Grid Server distribution archive.
Procedure
- Use any appropriate tool to extract the Data Grid Server archive to the host filesystem.
unzip redhat-datagrid-8.6.0-server.zip
The resulting directory is your $RHDG_HOME.
1.2.4. Starting Data Grid Server
Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host.
Prerequisites
- Download and install the server distribution.
Procedure
- Open a terminal in $RHDG_HOME.
- Start Data Grid Server instances with the server script.
  Linux:
  bin/server.sh
  Microsoft Windows:
  bin\server.bat
Data Grid Server is running successfully when it logs the following messages:
ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222
ISPN080034: Server '...' listening on http://127.0.0.1:11222
ISPN080001: Data Grid Server <version> started in <mm>ms
Verification
- Open 127.0.0.1:11222/console/ in any browser.
- Enter your credentials at the prompt and continue to Data Grid Console.
1.2.5. Passing Data Grid Server configuration at startup
Specify custom configuration when you start Data Grid Server.
Data Grid Server can parse multiple configuration files that you overlay on startup with the --server-config argument. You can use as many configuration overlay files as required, in any order. Configuration overlay files:
- Must be valid Data Grid configuration and contain the root server element or field.
- Do not need to be full configuration as long as your combination of overlay files results in a full configuration.
Data Grid Server does not detect conflicting configuration between overlay files. Each overlay file overwrites any conflicting configuration in the preceding configuration.
If you pass cache configuration to Data Grid Server on startup, it does not dynamically create those caches across the cluster. You must manually propagate caches to each node.
Additionally, cache configuration that you pass to Data Grid Server on startup must include the infinispan and cache-container elements.
Prerequisites
- Download and install the server distribution.
- Add custom server configuration to the server/conf directory of your Data Grid Server installation.
Procedure
- Open a terminal in $RHDG_HOME.
- Specify one or more configuration files with the --server-config= or -c argument, for example:
  bin/server.sh -c infinispan.xml -c datasources.yaml -c security-realms.json
1.3. Creating Data Grid users
Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. Before you can access the Data Grid Console or perform cache operations you must create at least one user with the Data Grid command line interface (CLI).
Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment.
Prerequisites
- Download and install Data Grid Server.
Procedure
- Open a terminal in $RHDG_HOME.
- Create an admin user, belonging to the admin group, with the user create command.
  bin/cli.sh user create admin -p changeme -g admin

Tip: Run help user from a CLI session to get complete command details.
Verification
Open users.properties and confirm the user exists.
cat server/conf/users.properties
admin=scram-sha-1\:BYGcIAwvf6b...
Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster.
1.3.1. Granting roles to users
Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources.
Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions.
Prerequisites
- Have ADMIN permissions for Data Grid.
- Create Data Grid users.
Procedure
- Create a CLI connection to Data Grid.
- Assign roles to users with the user roles grant command, for example:
  user roles grant --roles=deployer katie
Verification
List roles that you grant to users with the user roles ls command.
user roles ls katie
["deployer"]
1.3.1.1. Adding users to groups
Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.
You use groups as part of a property realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password.
Prerequisites
- Have ADMIN permissions for Data Grid.
- Create Data Grid users.
Procedure
- Create a CLI connection to Data Grid.
- Use the user create command to create a group.
  - Specify a group name with the --groups argument.
  - Set a username and password for the group.
  user create --groups=developers developers -p changeme
- List groups.
  user ls --groups
- Grant a role to the group.
  user roles grant --roles=application developers
- List roles for the group.
  user roles ls developers
- Add users to the group one at a time.
  user groups john --groups=developers
Verification
Open groups.properties and confirm the group exists.
cat server/conf/groups.properties
1.3.2. Data Grid user roles and permissions
Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources.
| Role | Permissions | Description |
|---|---|---|
| admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
| deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Data Grid resources in addition to application permissions. |
| application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Data Grid resources in addition to observer permissions. |
| observer | ALL_READ, MONITOR | Has read access to Data Grid resources in addition to monitor permissions. |
| monitor | MONITOR | Can view statistics via JMX and the metrics endpoint. |
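The effect of these roles can be observed through the REST API: a user with read-only permissions should be rejected when attempting a write. A sketch, assuming a running server, a cache named mycache, and the observer user created in the earlier identities batch example:

```shell
# A write as a read-only (observer) user should be rejected with HTTP 403:
curl -u observer:secret1 -X PUT --data "value" \
  http://127.0.0.1:11222/rest/v2/caches/mycache/somekey
# The same write performed by an admin user should succeed:
curl -u admin:changeme -X PUT --data "value" \
  http://127.0.0.1:11222/rest/v2/caches/mycache/somekey
```
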
1.4. Verifying cluster views
Data Grid Server instances on the same network automatically discover each other and form clusters.
Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters.
This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production.
Prerequisites
- Have one instance of Data Grid Server running.
Procedure
- Open a terminal in $RHDG_HOME.
- Copy the root directory to server2.
  cp -r server server2
- Specify a port offset and the server2 directory.
  bin/server.sh -o 100 -s server2
Verification
You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership.
Data Grid also logs the following messages when nodes join clusters:
INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>)
ISPN000094: Received new cluster view for channel cluster:
[<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>]
INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>)
ISPN100000: Node <server2_hostname> joined the cluster
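Cluster membership can also be checked over REST. A sketch, assuming a running cluster and placeholder admin credentials:

```shell
# The health endpoint reports the cluster name, overall health,
# and the names of the nodes that have joined:
curl -u admin:changeme http://127.0.0.1:11222/rest/v2/cache-managers/default/health
```
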
1.5. Shutting down Data Grid Server
Stop individually running servers or bring down clusters gracefully.
Procedure
- Create a CLI connection to Data Grid.
- Shut down Data Grid Server in one of the following ways:
  - Stop all nodes in a cluster with the shutdown cluster command, for example:
    shutdown cluster
    This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache.
  - Stop individual server instances with the shutdown server command and the server hostname, for example:
    shutdown server <my_server01>
The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time.
Run help shutdown for more details about using the command.
Verification
Data Grid logs the following messages when you shut down servers:
ISPN080002: Data Grid Server stopping
ISPN000080: Disconnecting JGroups channel cluster
ISPN000390: Persisted state, version=<$version> timestamp=YYYY-MM-DDTHH:MM:SS
ISPN080003: Data Grid Server stopped
1.5.1. Shutdown and restart of Data Grid clusters
Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes.
Cluster shutdown
Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can also use the shutdown cluster command for clusters with a single node.
When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores from purging on startup.
During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster have the view from the time they join. You can monitor availability of caches using the Data Grid Console or REST API.
However, in cases where waiting for all nodes is neither necessary nor desired, you can make a cache available with the current topology. This is possible through the CLI or the REST API.
Manually installing a topology can lead to data loss; only perform this operation if the initial topology cannot be recreated.
Server shutdown
After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries.
If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache.
For example, if you shut down server1 and then shut down server2, you should first start server2 and then start server1. However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries.
1.6. Data Grid Server installation directory structure
Data Grid Server uses the following folders on the host filesystem under $RHDG_HOME:
├── bin
├── boot
├── docs
├── lib
├── server
└── static
See the Data Grid Server README for descriptions of each folder in your $RHDG_HOME directory as well as system properties you can use to customize the filesystem.
1.6.1. Server root directory
Apart from resources in the bin and docs folders, the only folder under $RHDG_HOME that you should interact with is the server root directory, which is named server by default.
You can create multiple nodes under the same $RHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem:
├── server
├── server1
├── server2
├── server3
└── server4
Each server root directory should contain the following folders:
├── server
│ ├── conf
│ ├── data
│ ├── lib
│ └── log
server/conf
Holds infinispan.xml configuration files for a Data Grid Server instance.
Data Grid separates configuration into two layers:
- Dynamic
  Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur.
- Static
  Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources.
server/data
Provides internal storage that Data Grid Server uses to maintain cluster state.
Never directly delete or modify content in server/data.
Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown.
server/lib
Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on.
server/log
Holds Data Grid Server log files.
Ansible collection
Automate installation of Data Grid clusters with our Ansible collection that optionally includes Keycloak caches and cross-site replication configuration. The Ansible collection also lets you inject Data Grid caches into the static configuration for each server instance during installation.
The Ansible collection for Data Grid is available from the Red Hat Automation Hub.