Chapter 17. Establishing remote client connections


Connect to Data Grid clusters from the Data Grid Console, Command Line Interface (CLI), and remote clients.

17.1. Client connection details

Before you can connect to Data Grid, you need to retrieve the following pieces of information:

  • Service hostname
  • Port
  • Authentication credentials, if required
  • TLS certificate, if you use encryption

Service hostnames

The service hostname depends on how you expose Data Grid on the network and on whether your clients run inside OpenShift.

For clients running on OpenShift, you can use the name of the internal service that Data Grid Operator creates.

For clients running outside OpenShift, the service hostname depends on how you expose Data Grid: with a load balancer, it is the hostname or IP address that the load balancer exposes; with a node port service, it is the hostname of the OpenShift node; with a route, it is either a custom hostname or a system-defined hostname.

Ports

Client connections on OpenShift and through load balancers use port 11222.

Node port services use a port in the range of 30000 to 60000. Routes use either port 80 (unencrypted) or 443 (encrypted).

17.2. Data Grid caches

Cache configuration defines the characteristics and features of the data store and must conform to the Data Grid schema. Data Grid recommends creating standalone files in XML or JSON format that define your cache configuration. Keeping Data Grid configuration separate from application code makes validation easier and avoids the situation where you need to maintain XML snippets in Java or another client language.
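
For example, a standalone cache configuration file might look like the following. This is a minimal sketch; the cache name and settings are illustrative:

<!-- mycache.xml: the name and settings are illustrative -->
<distributed-cache name="mycache" mode="SYNC">
  <encoding media-type="application/x-protostream"/>
  <memory max-count="1000" when-full="REMOVE"/>
</distributed-cache>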

To create caches with Data Grid clusters running on OpenShift, use one of the following approaches:

  • Use Cache CR as the mechanism for creating caches through the OpenShift front end.
  • Use Batch CR to create multiple caches at a time from standalone configuration files.
  • Access Data Grid Console and create caches in XML or JSON format.

You can use Hot Rod or HTTP clients, but Data Grid recommends Cache CR or Batch CR unless your use case specifically requires programmatic remote cache creation.
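
For example, the following Cache CR creates a cache named mycache on a Data Grid cluster named example-infinispan. This is a minimal sketch; verify the fields against the Cache CR schema for your Data Grid Operator version:

apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
spec:
  # Name of the Infinispan CR that defines the target cluster.
  clusterName: example-infinispan
  # Cache configuration; the settings are illustrative.
  template: |
    <distributed-cache name="mycache" mode="SYNC"/>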

17.3. Connecting the Data Grid CLI

Use the command line interface (CLI) to connect to your Data Grid cluster and perform administrative operations.

Prerequisites

  • Download a CLI distribution so you can connect to Data Grid clusters on OpenShift.

The Data Grid CLI is available with the server distribution or as a native executable.

Follow the instructions in Getting Started with Data Grid Server to download and install the CLI as part of the server distribution. For the native CLI, follow the installation instructions in the README file that is included in the ZIP download.

Note

It is possible to open a remote shell to a Data Grid node and access the CLI.

$ oc rsh example-infinispan-0

However, using the CLI in this way consumes memory allocated to the container, which can lead to out-of-memory exceptions.

Procedure

  1. Create a CLI connection to your Data Grid cluster.

    Using the server distribution

    $ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall

    Using the native CLI

    $ ./redhat-datagrid-cli -c https://$SERVICE_HOSTNAME:$PORT --trustall

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.

  2. Enter your Data Grid credentials when prompted.
  3. Perform CLI operations as required, for example:

    1. List caches configured on the cluster with the ls command.

      [//containers/default]> ls caches
      mycache
    2. View cache configuration with the describe command.

      [//containers/default]> describe caches/mycache
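
You can also create caches from standalone configuration files with the CLI. A minimal sketch, assuming a configuration file named mycache.xml in the current directory:

    [//containers/default]> create cache --file=mycache.xml mycache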

17.4. Accessing Data Grid Console

Access the console to create caches, perform administrative operations, and monitor your Data Grid clusters.

Prerequisites

  • Expose Data Grid on the network so you can access the console through a browser.
    For example, configure a load balancer service or create a route.

Procedure

  • Access the console from any browser at $SERVICE_HOSTNAME:$PORT.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.

17.5. Hot Rod clients

Hot Rod is a binary TCP protocol that Data Grid provides for high-performance data transfer between remote clients and Data Grid clusters.

Client intelligence

Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Data Grid pods.

Hot Rod clients running on OpenShift can access internal IP addresses for Data Grid pods so you can use any client intelligence. The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.

Hot Rod clients running outside OpenShift must use BASIC intelligence.

17.5.1. Hot Rod client configuration API

You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface.

Note

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Data Grid cluster. You should replace these variables with the actual hostname and port for your environment.

On OpenShift

Hot Rod clients running on OpenShift can use the following configuration:

import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
             .security().authentication()
               .username("username")
               .password("changeme")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               .trustStoreFileName("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt")
               .trustStoreType("pem");
Outside OpenShift

Hot Rod clients running outside OpenShift can use the following configuration:

import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port("$PORT")
             .security().authentication()
               .username("username")
               .password("changeme")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               //Create a client trust store with tls.crt from your project.
               .trustStoreFileName("/path/to/truststore.pkcs12")
               .trustStorePassword("trust_store_password")
               .trustStoreType("PCKS12");
      builder.clientIntelligence(ClientIntelligence.BASIC);

17.5.2. Hot Rod client properties

You can configure Hot Rod client connections with the hotrod-client.properties file on the application classpath.

Note

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Data Grid cluster. You should replace these variables with the actual hostname and port for your environment.

On OpenShift

Hot Rod clients running on OpenShift can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
infinispan.client.hotrod.trust_store_file_name=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
infinispan.client.hotrod.trust_store_type=pem
Outside OpenShift

Hot Rod clients running outside OpenShift can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Client intelligence
infinispan.client.hotrod.client_intelligence=BASIC

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Create a client trust store with tls.crt from your project.
infinispan.client.hotrod.trust_store_file_name=/path/to/truststore.pkcs12
infinispan.client.hotrod.trust_store_password=trust_store_password
infinispan.client.hotrod.trust_store_type=PKCS12
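
When hotrod-client.properties is on the application classpath, the no-argument RemoteCacheManager constructor picks it up automatically. A minimal sketch:

import org.infinispan.client.hotrod.RemoteCacheManager;
...

// The no-argument constructor reads hotrod-client.properties from the classpath.
RemoteCacheManager cacheManager = new RemoteCacheManager();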

17.5.3. Configuring Hot Rod clients for certificate authentication

If you enable client certificate authentication, clients must present valid certificates when negotiating connections with Data Grid.

Validate strategy

If you use the Validate strategy, you must configure clients with a keystore so they can present signed certificates. You must also configure clients with Data Grid credentials and any suitable authentication mechanism.

Authenticate strategy

If you use the Authenticate strategy, you must configure clients with a keystore that contains signed certificates and valid Data Grid credentials as part of the distinguished name (DN). Hot Rod clients must also use the EXTERNAL authentication mechanism.

Note

If you enable security authorization, you should assign the Common Name (CN) from the client certificate a role with the appropriate permissions.

The following example shows a Hot Rod client configuration for client certificate authentication with the Authenticate strategy:

import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.security()
             .authentication()
               .saslMechanism("EXTERNAL")
             .ssl()
               .keyStoreFileName("/path/to/keystore")
               .keyStorePassword("keystorepassword".toCharArray())
               .keyStoreType("PCKS12");

17.5.4. Creating caches from Hot Rod clients

You can remotely create caches on Data Grid clusters running on OpenShift with Hot Rod clients. However, Data Grid recommends that you create caches using Data Grid Console, the CLI, or with Cache CRs instead of with Hot Rod clients.

Programmatically creating caches

The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager:

import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      // Create a cache named "my-cache" from the DIST_SYNC template.
      builder.remoteCache("my-cache")
             .templateName(DefaultTemplate.DIST_SYNC);
      // Create a cache named "another-cache" from an inline XML configuration.
      builder.remoteCache("another-cache")
             .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"><encoding media-type=\"application/x-protostream\"/></distributed-cache></cache-container></infinispan>");
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Get a remote cache that does not exist.
         // Rather than return null, the client creates the cache from the template.
         RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
         // Store a value.
         cache.put("hello", "world");
         // Retrieve the value and print it.
         System.out.printf("key = %s\n", cache.get("hello"));
      }

This example shows how to create a cache named CacheWithXMLConfiguration by passing the cache configuration as XML with the XMLStringConfiguration class:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;
...

// 'manager' is an existing RemoteCacheManager field on the enclosing class.
private void createCacheWithXMLConfiguration() {
    String cacheName = "CacheWithXMLConfiguration";
    String xml = String.format("<distributed-cache name=\"%s\">" +
                                  "<encoding media-type=\"application/x-protostream\"/>" +
                                  "<locking isolation=\"READ_COMMITTED\"/>" +
                                  "<transaction mode=\"NON_XA\"/>" +
                                  "<expiration lifespan=\"60000\" interval=\"20000\"/>" +
                                "</distributed-cache>"
                                , cacheName);
    // Create the cache if it does not exist, or return the existing cache.
    manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
    System.out.println("Cache with configuration exists or is created.");
}
Using Hot Rod client properties

When you invoke cacheManager.getCache() for a named cache that does not exist, Data Grid creates it from the Hot Rod client properties instead of returning null.

Add cache configuration to hotrod-client.properties as in the following example:

# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml

17.6. Accessing the REST API

Data Grid provides a RESTful interface that you can interact with using HTTP clients.

Prerequisites

  • Expose Data Grid on the network so you can access the REST API.
    For example, configure a load balancer service or create a route.

Procedure

  • Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Data Grid is available on the network.
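
For example, you can list the cache names on the cluster with curl. A minimal sketch, assuming Basic authentication and a test environment where you skip TLS trust verification with the --insecure flag:

    # Credentials are illustrative; use your own values and trust configuration.
    $ curl --insecure -u developer:$PASSWORD https://$SERVICE_HOSTNAME:$PORT/rest/v2/caches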

17.7. Adding caches to Cache service pods

Cache service pods include a default cache configuration with recommended settings. This default cache lets you start using Data Grid without the need to create caches.

Note

Because the default cache provides recommended settings, you should create caches only as copies of the default. If you want multiple custom caches, you should create Data Grid service pods instead of Cache service pods.

Procedure

  • Access the Data Grid Console and provide a copy of the default configuration in XML or JSON format.
  • Use the Data Grid CLI to create a copy from the default cache as follows:

    [//containers/default]> create cache --template=default mycache

17.7.1. Default cache configuration

This topic describes the default cache configuration for Cache service pods.

<distributed-cache name="default"
                   mode="SYNC"
                   owners="2">
  <memory storage="OFF_HEAP"
          max-size="<maximum_size_in_bytes>"
          when-full="REMOVE" />
  <partition-handling when-split="ALLOW_READ_WRITES"
                      merge-policy="REMOVE_ALL"/>
</distributed-cache>

Default caches:

  • Use synchronous distribution to store data across the cluster.
  • Create two replicas of each entry on the cluster.
  • Store cache entries as bytes in native memory (off-heap).
  • Define the maximum size for the data container in bytes. Data Grid Operator calculates the maximum size when it creates pods.
  • Evict cache entries to control the size of the data container. You can enable automatic scaling so that Data Grid Operator adds pods when memory usage increases instead of removing entries.
  • Use a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions.
  • Specify a merge policy that removes entries from the cache when Data Grid detects conflicts.