Chapter 35. Integration with Apache Hadoop
35.1. Integration with Apache Hadoop
The JBoss Data Grid connector allows JBoss Data Grid to act as a Hadoop-compliant data source. It accomplishes this integration by providing implementations of Hadoop's InputFormat and OutputFormat interfaces, allowing applications to read data from and write data to a JBoss Data Grid server with optimal data locality. While JBoss Data Grid's implementations of InputFormat and OutputFormat allow traditional Hadoop Map/Reduce jobs to be run, they may also be used with any tool or utility that supports Hadoop's InputFormat data source.
35.2. Hadoop Dependencies
The JBoss Data Grid implementations of Hadoop’s formats are found in the following Maven dependency:
<dependency>
    <groupId>org.infinispan.hadoop</groupId>
    <artifactId>infinispan-hadoop-core</artifactId>
    <version>0.3.0.Final-redhat-9</version>
</dependency>
35.3. Supported Hadoop Configuration Parameters
The following parameters are supported:
| Parameter Name | Description | Default Value |
|---|---|---|
| hadoop.ispn.input.filter.factory | The name of the filter factory deployed on the server to pre-filter data before reading. | null (no filtering enabled) |
| hadoop.ispn.input.cache.name | The name of the cache where data will be read. | ___defaultcache |
| hadoop.ispn.input.remote.cache.servers | List of servers of the input cache, in the format: host1:port;host2:port2 | localhost:11222 |
| hadoop.ispn.output.cache.name | The name of the cache where data will be written. | default |
| hadoop.ispn.output.remote.cache.servers | List of servers of the output cache, in the format: host1:port;host2:port2 | null (no output cache) |
| hadoop.ispn.input.read.batch | Batch size when reading from the cache. | 5000 |
| hadoop.ispn.output.write.batch | Batch size when writing to the cache. | 500 |
| hadoop.ispn.input.converter | Class name with an implementation of KeyValueConverter, applied after reading from the cache. | null (no converting enabled) |
| hadoop.ispn.output.converter | Class name with an implementation of KeyValueConverter, applied before writing to the cache. | null (no converting enabled) |
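These parameters are ordinary Hadoop Configuration properties, so they may be set programmatically before a job is submitted. The following is a minimal sketch using the property names from the table above; the filter factory name myFilterFactory is a hypothetical deployment, not something shipped with the connector:

import org.apache.hadoop.conf.Configuration;

[...]

Configuration configuration = new Configuration();
// Pre-filter data on the server side before it is read (hypothetical factory name)
configuration.set("hadoop.ispn.input.filter.factory", "myFilterFactory");
// Tune the batch sizes used when reading from and writing to the cache
configuration.setInt("hadoop.ispn.input.read.batch", 10000);
configuration.setInt("hadoop.ispn.output.write.batch", 1000);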
35.4. Using the Hadoop Connector
InfinispanInputFormat and InfinispanOutputFormat

In Hadoop, the InputFormat interface indicates how a specific data source is partitioned, along with how to read data from each of the partitions, while the OutputFormat interface specifies how to write data.

There are two methods of importance defined in the InputFormat interface:
The getSplits method defines a data partitioner, returning one or more InputSplit instances that contain information regarding a certain section of the data.

List<InputSplit> getSplits(JobContext context);

The InputSplit can then be used to obtain a RecordReader, which will be used to iterate over the resulting dataset.

RecordReader<K,V> createRecordReader(InputSplit split, TaskAttemptContext context);
These two operations allow for parallelization of data processing across multiple nodes, resulting in Hadoop’s high throughput over large datasets.
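A simplified sketch of how a Map/Reduce framework drives these two methods in sequence is shown below. It is illustrative only, not actual framework code, and record handling is reduced to a hypothetical process method:

// Illustrative only: how getSplits and createRecordReader work together
List<InputSplit> splits = inputFormat.getSplits(jobContext);
for (InputSplit split : splits) {
    // Each split is normally assigned to its own map task and processed in parallel
    RecordReader<K, V> reader = inputFormat.createRecordReader(split, taskAttemptContext);
    reader.initialize(split, taskAttemptContext);
    while (reader.nextKeyValue()) {
        process(reader.getCurrentKey(), reader.getCurrentValue()); // hypothetical handler
    }
    reader.close();
}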
In the case of JBoss Data Grid, partitions are generated based on segment ownership, meaning that each partition is a set of segments on a certain server. By default there will be as many partitions as there are servers in the cluster, and each partition will contain all of the segments associated with that specific server.
Running a Hadoop Map/Reduce job on JBoss Data Grid

Example of configuring a Map/Reduce job targeting a JBoss Data Grid cluster:
import org.infinispan.hadoop.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

[...]

Configuration configuration = new Configuration();
configuration.set(InfinispanConfiguration.INPUT_REMOTE_CACHE_SERVER_LIST, "localhost:11222");
configuration.set(InfinispanConfiguration.INPUT_REMOTE_CACHE_NAME, "map-reduce-in");

configuration.set(InfinispanConfiguration.OUTPUT_REMOTE_CACHE_SERVER_LIST, "localhost:11222");
configuration.set(InfinispanConfiguration.OUTPUT_REMOTE_CACHE_NAME, "map-reduce-out");

Job job = Job.getInstance(configuration, "Infinispan Integration");
[...]
In order to target the JBoss Data Grid, the job needs to be configured with the InfinispanInputFormat and InfinispanOutputFormat classes:
[...]
// Define the Map and Reduce classes
job.setMapperClass(MapClass.class);
job.setReducerClass(ReduceClass.class);

// Define the JBoss Data Grid implementations
job.setInputFormatClass(InfinispanInputFormat.class);
job.setOutputFormatClass(InfinispanOutputFormat.class);
[...]
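The MapClass and ReduceClass referenced above are ordinary Hadoop classes and are not provided by the connector. As a minimal, hypothetical illustration, the following word-count pair assumes the input cache entries are read as Text values (for instance via a configured converter); the key and value types should be adjusted to match the actual contents of the cache. Both classes would typically be nested inside the job's driver class:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical Mapper: splits each cache value into words and emits (word, 1)
public static class MapClass extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

// Hypothetical Reducer: sums the counts emitted for each word
public static class ReduceClass extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

Once the mapper, reducer, and format classes are set, the job is submitted in the usual Hadoop way, for example with job.waitForCompletion(true).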