Chapter 3. Workflow

OpenStack Data Processing provisions and scales Hadoop clusters using pre-configured cluster templates that define purpose-built instances. These instances form the individual nodes that make up a Hadoop cluster; you can then use these clusters to run the jobs and binaries that process your data.

If you intend to use OpenStack Data Processing, you should already be familiar with the necessary components for working within the Hadoop framework. As such, the general workflow described in this section assumes that you already have the following components prepared:

  • A Hadoop image; specifically, a Red Hat Enterprise Linux image containing a Hadoop data processing plug-in. See Chapter 1, Overview for a list of supported plug-ins.
  • The input data you wish to process, preferably uploaded to the Object Storage service.
  • The job binaries and libraries you will use to process the input data, preferably uploaded to the Object Storage service.
Note

For details on how to upload content to the Object Storage service, see Upload an Object.
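
For example, the following minimal sketch uploads a local input file to a new Object Storage container using python-swiftclient with a keystoneauth1 session. All credentials, endpoints, and container and object names are illustrative, and the exact client arguments can vary between releases.

    # Minimal sketch: upload input data to the Object Storage service.
    # Assumes python-swiftclient and keystoneauth1 are installed; all
    # credentials, URLs, and names below are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    import swiftclient

    auth = v3.Password(
        auth_url="http://controller:5000/v3",
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    swift = swiftclient.client.Connection(session=sess)

    # Create a container for the input data, then upload a local file into it.
    swift.put_container("job-input")
    with open("input-data.txt", "rb") as f:
        swift.put_object("job-input", "input-data.txt", contents=f)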

In addition, you should have a general idea of the computational resources required to run the job. This will help you determine the type and number of nodes you need.

The following high-level workflow describes how to configure and use the OpenStack Data Processing service to launch clusters and run jobs on those clusters:

  1. Create an image containing the necessary plug-in components for OpenStack Data Processing (Chapter 4, Create Hadoop Image). This will be your Hadoop image.

    The procedure for creating this image differs depending on your chosen Hadoop plug-in.

  2. Register the following required components to the OpenStack Data Processing service (see the first sketch after this list):

    • Hadoop image
    • Data sources (namely, your input data and where the output data should go)
  3. Create node group templates (see the second sketch after this list). Each template defines the Hadoop-specific settings for each node in a given node group, most notably:

    • What Hadoop plug-in and version should the node group use?
    • Which processes should run on the node?
  4. Create or upload cluster templates (also shown in the second sketch after this list). A cluster template defines, among other things:

    • Node group composition: namely, how many nodes of each node group should make up the cluster.
    • Cluster-scoped Hadoop configurations: specific parameters you need to set for each Hadoop component (Hive, Ambari, HDFS, and the like).
  5. Launch a Hadoop cluster (using a cluster template) and run a job on the cluster, that is, run a registered job binary on a data source. You can also scale the cluster (add or remove nodes of any type) as needed.
  6. Register job binaries, scripts, or libraries to the OpenStack Data Processing service, create jobs, and launch them on Hadoop clusters. Jobs define which job binaries, scripts, or libraries should be used to process registered data sources. (The final sketch after this list illustrates steps 5 and 6.)
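
The sketches below illustrate steps 2 through 6 using the python-saharaclient library. They are minimal examples only: all IDs, names, plug-in versions, and flavors are hypothetical, and the exact method signatures can differ between releases. This first sketch registers the Hadoop image and the input and output data sources (step 2), reusing the keystoneauth1 session from the Object Storage example above.

    # Minimal sketch: register a Hadoop image and data sources with the
    # OpenStack Data Processing service using python-saharaclient.
    # 'sess' is the keystoneauth1 session created earlier; the image ID,
    # plug-in name and version, and credentials are placeholders.
    from saharaclient.api.client import Client as SaharaClient

    sahara = SaharaClient(session=sess)

    # Register the Hadoop image by its Image service ID, then tag it with
    # the plug-in name and version so the service can match it to a plug-in.
    IMAGE_ID = "11111111-2222-3333-4444-555555555555"
    sahara.images.update_image(IMAGE_ID, user_name="cloud-user", desc="Hadoop image")
    sahara.images.update_tags(IMAGE_ID, ["vanilla", "2.7.1"])

    # Register the input and output data sources stored in Object Storage.
    input_ds = sahara.data_sources.create(
        name="job-input",
        description="Input data",
        data_source_type="swift",
        url="swift://job-input/input-data.txt",
        credential_user="demo",
        credential_pass="secret",
    )
    output_ds = sahara.data_sources.create(
        name="job-output",
        description="Output location",
        data_source_type="swift",
        url="swift://job-output/",
        credential_user="demo",
        credential_pass="secret",
    )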
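
The next sketch creates two node group templates and a cluster template that composes them (steps 3 and 4). The flavor ID, process names, and configuration values depend on your environment and chosen plug-in, and are shown only as placeholders.

    # Minimal sketch: node group templates define per-node settings
    # (plug-in, version, flavor, and which Hadoop processes run on the node).
    master_ngt = sahara.node_group_templates.create(
        name="master",
        plugin_name="vanilla",
        hadoop_version="2.7.1",
        flavor_id="2",
        node_processes=["namenode", "resourcemanager"],
    )
    worker_ngt = sahara.node_group_templates.create(
        name="worker",
        plugin_name="vanilla",
        hadoop_version="2.7.1",
        flavor_id="2",
        node_processes=["datanode", "nodemanager"],
    )

    # The cluster template sets the node group composition (one master,
    # three workers) and any cluster-scoped Hadoop configuration values.
    cluster_template = sahara.cluster_templates.create(
        name="hadoop-cluster-template",
        plugin_name="vanilla",
        hadoop_version="2.7.1",
        node_groups=[
            {"name": "master", "node_group_template_id": master_ngt.id, "count": 1},
            {"name": "worker", "node_group_template_id": worker_ngt.id, "count": 3},
        ],
        cluster_configs={"HDFS": {"dfs.replication": 2}},
    )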
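
The final sketch launches a cluster from the template, registers a job binary already uploaded to Object Storage, defines a job, and runs it against the registered data sources (steps 5 and 6). A real job also needs plug-in-specific values in the configs argument (for example, mapper and reducer classes for a MapReduce job); the empty dictionary here is only a placeholder.

    # Minimal sketch: launch a cluster, register a job binary, define a
    # job, and run it. The image ID, network ID, binary URL, and
    # credentials are placeholders.
    cluster = sahara.clusters.create(
        name="hadoop-cluster",
        plugin_name="vanilla",
        hadoop_version="2.7.1",
        cluster_template_id=cluster_template.id,
        default_image_id=IMAGE_ID,
        net_id="66666666-7777-8888-9999-000000000000",
    )

    # Register a job binary previously uploaded to Object Storage.
    binary = sahara.job_binaries.create(
        name="example-job.jar",
        url="swift://job-binaries/example-job.jar",
        description="Example MapReduce job",
        extra={"user": "demo", "password": "secret"},
    )

    # Define the job and launch it on the cluster, mapping the registered
    # input and output data sources created earlier.
    job = sahara.jobs.create(
        name="example-job",
        type="MapReduce",
        mains=[],
        libs=[binary.id],
        description="Example job",
    )
    sahara.job_executions.create(
        job_id=job.id,
        cluster_id=cluster.id,
        input_id=input_ds.id,
        output_id=output_ds.id,
        configs={},  # plug-in/EDP-specific settings go here
    )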

The next few sections describe each workflow step in greater detail.
