Chapter 1. Getting started with Data Grid Server


Install the server distribution, create a user, and start your first Data Grid cluster.

Ansible collection

Automate installation of Data Grid clusters with the Ansible collection, which can optionally configure Keycloak caches and cross-site replication. The Ansible collection also lets you inject Data Grid caches into the static configuration for each server instance during installation.

The Ansible collection for Data Grid is available from the Red Hat Automation Hub.
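
A minimal sketch of installing the collection and running a playbook with it; the collection name, playbook, and inventory file shown here are assumptions, so confirm the exact collection name on the Red Hat Automation Hub:

# Install the collection (name is an assumption; verify it on Automation Hub)
ansible-galaxy collection install middleware_automation.infinispan

# Run a playbook that uses the collection against your inventory
# (playbook and inventory file names are placeholders)
ansible-playbook -i inventory.yml datagrid-install.yml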

1.1. Data Grid Server requirements

Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions.

1.2. Downloading Data Grid Server distributions

The Data Grid Server distribution is an archive of Java libraries (JAR files) and configuration files.

Procedure

  1. Access the Red Hat customer portal.
  2. Download Red Hat Data Grid 8.5 Server from the software downloads section.
  3. Run the md5sum or sha256sum command with the server download archive as the argument, for example:

    sha256sum redhat-datagrid-8.5.0-server.zip
  4. Compare the output with the MD5 or SHA-256 checksum value on the Data Grid Software Details page (see the example after this procedure).
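
For example, you can compare against the published value in a single step; the checksum below is a placeholder:

echo "<published_sha256_checksum>  redhat-datagrid-8.5.0-server.zip" | sha256sum -c -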

1.3. Installing Data Grid Server

Install the Data Grid Server distribution on a host system.

Prerequisites

  • Download a Data Grid Server distribution archive.

Procedure

  • Use any appropriate tool to extract the Data Grid Server archive to the host filesystem.
    unzip redhat-datagrid-8.5.0-server.zip

The resulting directory is your $RHDG_HOME.
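
For convenience, you can export $RHDG_HOME as an environment variable; the directory name below assumes the archive extracts to a folder that matches the archive name:

export RHDG_HOME=$(pwd)/redhat-datagrid-8.5.0-server  # adjust to the actual extracted directory
cd $RHDG_HOME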

1.4. Starting Data Grid Server

Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host.

Prerequisites

  • Download and install the server distribution.

Procedure

  1. Open a terminal in $RHDG_HOME.
  2. Start Data Grid Server instances with the server script.

    Linux
    bin/server.sh
    Microsoft Windows
    bin\server.bat
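
    By default the server binds to 127.0.0.1. If clients connect from other hosts, you can pass a different bind address with the -b option, for example:

    bin/server.sh -b 0.0.0.0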

Data Grid Server is running successfully when it logs the following messages:

ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222
ISPN080034: Server '...' listening on http://127.0.0.1:11222
ISPN080001: Data Grid Server <version> started in <mm>ms

Verification

  1. Open 127.0.0.1:11222/console/ in any browser.
  2. Enter your credentials at the prompt and continue to Data Grid Console.

1.5. Passing Data Grid Server configuration at startup

Specify custom configuration when you start Data Grid Server.

Data Grid Server can parse multiple configuration files that you overlay on startup with the --server-config argument. You can use as many configuration overlay files as required, in any order. Configuration overlay files:

  • Must be valid Data Grid configuration and contain the root server element or field.
  • Do not need to be full configuration as long as your combination of overlay files results in a full configuration.
Important

Data Grid Server does not detect conflicting configuration between overlay files. Each overlay file overrides any conflicting configuration in the files that precede it.

Note

If you pass cache configuration to Data Grid Server on startup, the server does not dynamically create those caches across the cluster. You must manually propagate the caches to each node.

Additionally, cache configuration that you pass to Data Grid Server on startup must include the infinispan and cache-container elements.

Prerequisites

  • Download and install the server distribution.
  • Add custom server configuration to the server/conf directory of your Data Grid Server installation.

Procedure

  1. Open a terminal in $RHDG_HOME.
  2. Specify one or more configuration files with the --server-config= or -c argument, for example:

    bin/server.sh -c infinispan.xml -c datasources.yaml -c security-realms.json
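
For example, you can create a cache overlay file and start the server with it. In the following sketch the overlay file name and cache name are illustrative, and you should match element names to the configuration schema of your installation:

# Hypothetical overlay file that adds a distributed cache definition
cat > server/conf/my-caches.xml <<'EOF'
<infinispan>
  <cache-container>
    <distributed-cache name="mycache"/>
  </cache-container>
</infinispan>
EOF

# Start the server with the default configuration plus the overlay
bin/server.sh -c infinispan.xml -c my-caches.xml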

1.6. Creating Data Grid users

Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. Before you can access the Data Grid Console or perform cache operations you must create at least one user with the Data Grid command line interface (CLI).

Tip

Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment.

Prerequisites

  • Download and install Data Grid Server.

Procedure

  1. Open a terminal in $RHDG_HOME.
  2. Create an admin user with the user create command.

    bin/cli.sh user create admin -p changeme
    Tip

    Run help user from a CLI session to get complete command details.
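
    You can also create additional users non-interactively and list existing ones; the output shown is illustrative:

    bin/cli.sh user create developer -p changeme
    bin/cli.sh user ls
    ["admin","developer"]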

Verification

Open users.properties and confirm the user exists.

cat server/conf/users.properties

admin=scram-sha-1\:BYGcIAwvf6b...
Note

Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster.
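
For example, you might copy the credential files to each of the other nodes yourself; the host name and destination path below are placeholders:

scp server/conf/users.properties server/conf/groups.properties \
    user@node2:/path/to/rhdg/server/conf/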

1.6.1. Granting roles to users

Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources.

Tip

Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions.

Prerequisites

  • Have ADMIN permissions for Data Grid.
  • Create Data Grid users.

Procedure

  1. Create a CLI connection to Data Grid.
  2. Assign roles to users with the user roles grant command, for example:

    user roles grant --roles=deployer katie

Verification

List roles that you grant to users with the user roles ls command.

user roles ls katie
["deployer"]

1.6.2. Adding users to groups

Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role.

Note

You use groups as part of a properties realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password.

Prerequisites

  • Have ADMIN permissions for Data Grid.
  • Create Data Grid users.

Procedure

  1. Create a CLI connection to Data Grid.
  2. Use the user create command to create a group.

    1. Specify a group name with the --groups argument.
    2. Set a username and password for the group.

      user create --groups=developers developers -p changeme
  3. List groups.

    user ls --groups
  4. Grant a role to the group.

    user roles grant --roles=application developers
  5. List roles for the group.

    user roles ls developers
  6. Add users to the group one at a time.

    user groups john --groups=developers

Verification

Open groups.properties and confirm the group exists.

cat server/conf/groups.properties
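
The contents below are illustrative for the group and user created in this procedure; the exact entries depend on your realm configuration:

developers=developers
john=developers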

1.6.3. Data Grid user roles and permissions

Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources.

Role         Permissions                                          Description

admin        ALL                                                  Superuser with all permissions including control of the Cache Manager lifecycle.

deployer     ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE   Can create and delete Data Grid resources in addition to application permissions.

application  ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR           Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts.

observer     ALL_READ, MONITOR                                    Has read access to Data Grid resources in addition to monitor permissions.

monitor      MONITOR                                              Can view statistics via JMX and the metrics endpoint.

1.7. Verifying cluster views

Data Grid Server instances on the same network automatically discover each other and form clusters.

Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters.

Note

This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. For example, specifying a port offset on the command line is not a reliable way to configure cluster transport for production.

Prerequisites

  • Have one instance of Data Grid Server running.

Procedure

  1. Open a terminal in $RHDG_HOME.
  2. Copy the root directory to server2.

    cp -r server server2
  3. Start a second server instance, specifying a port offset and the server2 directory.

    bin/server.sh -o 100 -s server2

Verification

You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership.
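
You can also check membership over the REST API; this sketch assumes the admin credentials created earlier and the v2 cache manager health endpoint, whose path can vary between versions:

curl -u admin:changeme http://127.0.0.1:11222/rest/v2/cache-managers/default/health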

Data Grid also logs the following messages when nodes join clusters:

INFO  [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>)
ISPN000094: Received new cluster view for channel cluster:
[<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>]

INFO  [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>)
ISPN100000: Node <server2_hostname> joined the cluster

1.8. Shutting down Data Grid Server

Stop individually running servers or bring down clusters gracefully.

Procedure

  1. Create a CLI connection to Data Grid.
  2. Shut down Data Grid Server in one of the following ways:

    • Stop all nodes in a cluster with the shutdown cluster command, for example:

      shutdown cluster

      This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache.

    • Stop individual server instances with the shutdown server command and the server hostname, for example:

      shutdown server <my_server01>
Important

The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time.

Tip

Run help shutdown for more details about using the command.
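
A minimal interactive session, assuming a server at the default address and the credentials created earlier (the connected prompt is abbreviated):

bin/cli.sh
[disconnected]> connect
Username: admin
Password: ********
[...]> shutdown cluster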

Verification

Data Grid logs the following messages when you shut down servers:

ISPN080002: Data Grid Server stopping
ISPN000080: Disconnecting JGroups channel cluster
ISPN000390: Persisted state, version=<$version> timestamp=YYYY-MM-DDTHH:MM:SS
ISPN080003: Data Grid Server stopped

1.8.1. Shutdown and restart of Data Grid clusters

Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes.

Cluster shutdown

Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can also use the shutdown cluster command for clusters with a single node.

When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores from purging on startup.

During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster retain the view from the time they joined. You can monitor availability of caches using the Data Grid Console or REST API.

However, when waiting for all nodes is neither necessary nor desired, you can make a cache available with the current topology instead. You can do this through the CLI, as sketched below, or the REST API.

Important

Manually installing a topology can lead to data loss. Perform this operation only if the initial topology cannot be recreated.
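
A sketch of making a cache available with the current topology from a CLI session; the availability command, its --mode option, and the cache name are assumptions, so confirm support with help availability:

[...]> availability mycache
DEGRADED
[...]> availability --mode=AVAILABLE mycache
[...]> availability mycache
AVAILABLE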

Server shutdown

After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries.

If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache.
For example, if you shut down server1 and then shut down server2, you should first start server2 and then start server1. However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries.

1.9. Data Grid Server installation directory structure

Data Grid Server uses the following folders on the host filesystem under $RHDG_HOME:

├── bin
├── boot
├── docs
├── lib
├── server
└── static
Tip

See the Data Grid Server README for descriptions of each folder in your $RHDG_HOME directory as well as system properties you can use to customize the filesystem.

1.9.1. Server root directory

Apart from resources in the bin and docs folders, the only folder under $RHDG_HOME that you should interact with is the server root directory, which is named server by default.

You can create multiple nodes under the same $RHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem:

├── server
├── server1
├── server2
├── server3
└── server4

Each server root directory should contain the following folders:

├── server
│   ├── conf
│   ├── data
│   ├── lib
│   └── log

server/conf

Holds infinispan.xml configuration files for a Data Grid Server instance.

Data Grid separates configuration into two layers:

Dynamic
Create mutable cache configurations for data scalability.
Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur.
Static
Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources.

server/data

Provides internal storage that Data Grid Server uses to maintain cluster state.

Important

Never directly delete or modify content in server/data.

Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which prevents clusters from restarting after shutdown.

server/lib

Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on.
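
For example, to make a JDBC driver available to the server you can copy its JAR into this folder before startup; the file name below is illustrative:

cp postgresql-42.7.3.jar $RHDG_HOME/server/lib/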

server/log

Holds Data Grid Server log files.
