Chapter 17. Establishing remote client connections
Connect to Data Grid clusters from the Data Grid Console, Command Line Interface (CLI), and remote clients.
17.1. Client connection details
Before you can connect to Data Grid, you need to retrieve the following pieces of information:
- Service hostname
- Port
- Authentication credentials, if required
- TLS certificate, if you use encryption
Service hostnames
The service hostname depends on how you expose Data Grid on the network or whether your clients are running on OpenShift.
For clients running on OpenShift, you can use the name of the internal service that Data Grid Operator creates.
For clients running outside OpenShift, the service hostname is the location URL of the load balancer if you use a load balancer service. For a node port service, the service hostname is the node host name. For a route, the service hostname is either a custom hostname or a system-defined hostname.
Ports
Client connections on OpenShift and through load balancers use port 11222.
Node port services use a port in the range of 30000 to 60000. Routes use either port 80 (unencrypted) or 443 (encrypted).
17.2. Data Grid caches
Cache configuration defines the characteristics and features of the data store and must conform to the Data Grid schema. Data Grid recommends creating standalone files in XML or JSON format that define your cache configuration. Keeping Data Grid configuration separate from application code makes validation easier and avoids maintaining XML snippets in Java or another client language.
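For example, a standalone XML file for a simple distributed cache might look like the following; the cache name and encoding are illustrative:
<infinispan>
    <cache-container>
        <distributed-cache name="mycache" mode="SYNC">
            <encoding media-type="application/x-protostream"/>
        </distributed-cache>
    </cache-container>
</infinispan>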
To create caches with Data Grid clusters running on OpenShift, you should:
- Use Cache CR as the mechanism for creating caches through the OpenShift front end.
- Use Batch CR to create multiple caches at a time from standalone configuration files.
- Access Data Grid Console and create caches in XML or JSON format.
You can use Hot Rod or HTTP clients, but Data Grid recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation.
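As a sketch of the Cache CR approach, a resource like the following creates a cache on an existing cluster; the cluster name, cache name, and configuration are placeholders, and the exact fields depend on your Data Grid Operator version:
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
spec:
  clusterName: example-infinispan
  template: |
    <distributed-cache name="mycache" mode="SYNC"/>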
17.3. Connecting the Data Grid CLI
Use the command line interface (CLI) to connect to your Data Grid cluster and perform administrative operations.
Prerequisites
- Download a CLI distribution so you can connect to Data Grid clusters on OpenShift.
The Data Grid CLI is available with the server distribution or as a native executable.
Follow the instructions in Getting Started with Data Grid Server for information on downloading and installing the CLI as part of the server distribution. For the native CLI, you should follow the installation instructions in the README file that is included in the ZIP download.
You can open a remote shell to a Data Grid node and access the CLI.
$ oc rsh example-infinispan-0
However, using the CLI in this way consumes memory allocated to the container, which can lead to out of memory exceptions.
Procedure
- Create a CLI connection to your Data Grid cluster.
Using the server distribution
$ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall
Using the native CLI
$ ./redhat-datagrid-cli -c https://$SERVICE_HOSTNAME:$PORT --trustall
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.
- Enter your Data Grid credentials when prompted.
- Perform CLI operations as required, for example:
List caches configured on the cluster with the ls command.
[//containers/default]> ls caches
mycache
View cache configuration with the describe command.
[//containers/default]> describe caches/mycache
17.4. Accessing Data Grid Console
Access the console to create caches, perform administrative operations, and monitor your Data Grid clusters.
Prerequisites
- Expose Data Grid on the network so you can access the console through a browser. For example, configure a load balancer service or create a route.
Procedure
- Access the console from any browser at $SERVICE_HOSTNAME:$PORT.
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.
17.5. Hot Rod clients
Hot Rod is a binary TCP protocol that Data Grid provides for high-performance data transfer with remote clients.
Client intelligence
Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Data Grid pods.
Hot Rod clients running on OpenShift can access internal IP addresses for Data Grid pods, so you can use any client intelligence. The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.
Hot Rod clients running outside OpenShift must use BASIC intelligence.
17.5.1. Hot Rod client configuration API
You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface.
In the following examples, $SERVICE_HOSTNAME:$PORT denotes the hostname and port that allow access to your Data Grid cluster. Replace these variables with the actual hostname and port for your environment.
On OpenShift
Hot Rod clients running on OpenShift can use the following configuration:
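The following sketch shows the general shape of that configuration, assuming the cluster requires authentication and the client pod can read the OpenShift service CA certificate; the credentials, realm, and SASL mechanism are placeholders for your environment:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
         // Internal service that Data Grid Operator creates
         .host("$SERVICE_HOSTNAME")
         .port(11222)
       .security()
         .authentication()
           .username("username")
           .password("changeme")
           .realm("default")
           .saslMechanism("SCRAM-SHA-512")
         .ssl()
           .sniHostName("$SERVICE_HOSTNAME")
           // Service CA certificate available inside the client pod
           .trustStorePath("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt");
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());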
Outside OpenShift
Hot Rod clients running outside OpenShift can use the following configuration:
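A comparable sketch for clients outside OpenShift; the port, credentials, and truststore path are placeholders, and the client explicitly switches to BASIC intelligence:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
// Clients outside OpenShift must use BASIC intelligence
builder.clientIntelligence(ClientIntelligence.BASIC);
builder.addServer()
         // Hostname and port of the route or load balancer that exposes Data Grid
         .host("$SERVICE_HOSTNAME")
         .port(443)
       .security()
         .authentication()
           .username("username")
           .password("changeme")
           .realm("default")
           .saslMechanism("SCRAM-SHA-512")
         .ssl()
           .sniHostName("$SERVICE_HOSTNAME")
           // TLS certificate retrieved from the Data Grid cluster
           .trustStorePath("/path/to/tls.crt");
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());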
17.5.2. Hot Rod client properties
You can configure Hot Rod client connections with the hotrod-client.properties file on the application classpath.
In the following examples, $SERVICE_HOSTNAME:$PORT denotes the hostname and port that allow access to your Data Grid cluster. Replace these variables with the actual hostname and port for your environment.
On OpenShift
Hot Rod clients running on OpenShift can use the following properties:
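For example, a hotrod-client.properties sketch for clients on OpenShift; the credentials and realm are placeholders:
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:11222

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=username
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption with the service CA certificate
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
infinispan.client.hotrod.trust_store_path=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt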
Outside OpenShift
Hot Rod clients running outside OpenShift can use the following properties:
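A corresponding sketch for clients outside OpenShift; the port and truststore path are placeholders:
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT
# Clients outside OpenShift must use BASIC intelligence
infinispan.client.hotrod.client_intelligence=BASIC

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=username
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.auth_realm=default
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption with the TLS certificate retrieved from the cluster
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
infinispan.client.hotrod.trust_store_path=/path/to/tls.crt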
17.5.3. Configuring Hot Rod clients for certificate authentication
If you enable client certificate authentication, clients must present valid certificates when negotiating connections with Data Grid.
Validate strategy
If you use the Validate strategy, you must configure clients with a keystore so they can present signed certificates. You must also configure clients with Data Grid credentials and any suitable authentication mechanism.
Authenticate strategy
If you use the Authenticate strategy, you must configure clients with a keystore that contains signed certificates and valid Data Grid credentials as part of the distinguished name (DN). Hot Rod clients must also use the EXTERNAL authentication mechanism.
If you enable security authorization, you should assign the Common Name (CN) from the client certificate a role with the appropriate permissions.
The following example shows a Hot Rod client configuration for client certificate authentication with the Authenticate strategy:
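A minimal sketch, assuming keystore and truststore paths and passwords specific to your environment; the client authenticates with its certificate through the EXTERNAL mechanism:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
         .host("$SERVICE_HOSTNAME")
         .port(11222)
       .security()
         .authentication()
           // Client certificate authentication uses the EXTERNAL mechanism
           .saslMechanism("EXTERNAL")
         .ssl()
           .sniHostName("$SERVICE_HOSTNAME")
           // Keystore that contains the signed client certificate
           .keyStoreFileName("/path/to/keystore.p12")
           .keyStorePassword("keystorepassword".toCharArray())
           // Truststore for the Data Grid server certificate
           .trustStoreFileName("/path/to/truststore.p12")
           .trustStorePassword("truststorepassword".toCharArray());
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());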
17.5.4. Creating caches from Hot Rod clients
You can remotely create caches on Data Grid clusters running on OpenShift with Hot Rod clients. However, Data Grid recommends that you create caches with Data Grid Console, the CLI, or Cache CRs instead of with Hot Rod clients.
Programmatically creating caches
The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager:
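A sketch of that pattern; the connection settings, cache names, and configurations are placeholders:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
// Connection settings as in the previous examples
builder.addServer().host("$SERVICE_HOSTNAME").port(11222);

// Create a cache from a Data Grid template
builder.remoteCache("my-cache")
       .templateName("org.infinispan.DIST_SYNC");
// Create a cache from an inline XML configuration
builder.remoteCache("another-cache")
       .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>");

try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
    // Data Grid creates the caches, if they do not already exist, when the client first accesses them
    RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
    cache.put("hello", "world");
}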
This example shows how to create a cache named CacheWithXMLConfiguration using the XMLStringConfiguration() method to pass the cache configuration as XML:
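A sketch of that call, assuming cacheManager is an existing RemoteCacheManager connected to your cluster:
import org.infinispan.commons.configuration.XMLStringConfiguration;

// Cache configuration passed to the server as an XML string
String cacheName = "CacheWithXMLConfiguration";
String xml = String.format(
        "<infinispan>" +
          "<cache-container>" +
            "<distributed-cache name=\"%s\" mode=\"SYNC\">" +
              "<encoding media-type=\"application/x-protostream\"/>" +
            "</distributed-cache>" +
          "</cache-container>" +
        "</infinispan>", cacheName);

// Create the cache on the server, or return it if it already exists
cacheManager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));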
Using Hot Rod client properties
When you invoke cacheManager.getCache() for named caches that do not exist, Data Grid creates them from the Hot Rod client properties instead of returning null.
Add cache configuration to hotrod-client.properties as in the following example:
# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml
17.6. Accessing the REST API
Data Grid provides a RESTful interface that you can interact with using HTTP clients.
Prerequisites
- Expose Data Grid on the network so you can access the REST API. For example, configure a load balancer service or create a route.
Procedure
- Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2.
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.
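For example, a curl invocation that lists cache names; the credentials are placeholders and -k skips certificate verification for a self-signed certificate:
$ curl -k -u username:changeme https://$SERVICE_HOSTNAME:$PORT/rest/v2/caches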
17.7. Adding caches to Cache service pods
Cache service pods include a default cache configuration with recommended settings. This default cache lets you start using Data Grid without the need to create caches.
Because the default cache provides recommended settings, you should create caches only as copies of the default. If you want multiple custom caches, you should create Data Grid service pods instead of Cache service pods.
Procedure
- Access the Data Grid Console and provide a copy of the default configuration in XML or JSON format.
- Use the Data Grid CLI to create a copy from the default cache as follows:
[//containers/default]> create cache --template=default mycache
17.7.1. Default cache configuration
This topic describes the default cache configuration for Cache service pods. A configuration sketch that reflects these settings follows the list.
Default caches:
- Use synchronous distribution to store data across the cluster.
- Create two replicas of each entry on the cluster.
- Store cache entries as bytes in native memory (off-heap).
- Define the maximum size for the data container in bytes. Data Grid Operator calculates the maximum size when it creates pods.
- Evict cache entries to control the size of the data container. You can enable automatic scaling so that Data Grid Operator adds pods when memory usage increases instead of removing entries.
- Use a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions.
- Specify a merge policy that removes entries from the cache when Data Grid detects conflicts.
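The following sketch reflects these settings; the values shown are illustrative, and Data Grid Operator calculates the actual maximum size for each pod:
<distributed-cache name="default" mode="SYNC" owners="2">
    <!-- Store entries off-heap and evict when the data container reaches its maximum size -->
    <memory storage="OFF_HEAP" max-size="400MB" when-full="REMOVE"/>
    <!-- Allow reads and writes during network partitions and remove conflicting entries on merge -->
    <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="REMOVE_ALL"/>
</distributed-cache>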