
Chapter 1. Architecture overview


OpenShift Container Platform is a cloud-based Kubernetes container platform. Because OpenShift Container Platform is built on Kubernetes, the two share the same underlying technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture.

1.1. About installation and updates

As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster on either of the following types of infrastructure:

  • Installer-provisioned infrastructure
  • User-provisioned infrastructure
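
For an installer-provisioned cluster, for example, the installation program reads an install-config.yaml file that describes the cluster you want. The following is a minimal sketch only; the base domain, cluster name, platform, pull secret, and SSH key values are placeholders:

    apiVersion: v1
    baseDomain: example.com              # placeholder base domain
    metadata:
      name: demo-cluster                 # placeholder cluster name
    platform:
      aws:                               # example platform; other platforms are supported
        region: us-east-1
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      replicas: 3
    pullSecret: '<pull_secret>'          # your Red Hat pull secret
    sshKey: '<ssh_public_key>'           # public key installed on cluster nodes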

1.2. About the control plane

The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines that are based on the resources that they handle, such as control plane components or user workloads. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types.
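
For illustration, the following is a simplified sketch of the default worker machine config pool, which groups machines by the worker role; the selectors are abbreviated from what the cluster actually contains:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: worker
    spec:
      machineConfigSelector:             # machine configs that apply to this pool
        matchLabels:
          machineconfiguration.openshift.io/role: worker
      nodeSelector:                      # nodes that belong to this pool
        matchLabels:
          node-role.kubernetes.io/worker: ""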

You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services:

  • Perform health checks
  • Provide ways to watch applications
  • Manage over-the-air updates
  • Ensure applications stay in the specified state
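
For example, with Operator Lifecycle Manager you typically install and update an Operator by creating a Subscription object. The following sketch uses a hypothetical Operator name and the default Red Hat catalog source:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: example-operator             # hypothetical Operator name
      namespace: openshift-operators
    spec:
      channel: stable                    # update channel to follow
      name: example-operator             # package name in the catalog
      source: redhat-operators           # catalog source
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic     # apply over-the-air updates automatically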

1.3. About containerized applications for developers

As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example:

  • Use various build-tool, base-image, and registry options to build a simple container application (see the build example after this list).
  • Use supporting components such as OperatorHub and templates to develop your application.
  • Package and deploy your application as an Operator.
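
The following is a sketch of one of those options, a source-to-image BuildConfig that builds an application image from a Git repository; the application name, repository URL, and builder image are placeholders:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: example-app                  # hypothetical application name
    spec:
      source:
        git:
          uri: https://github.com/example/app.git   # placeholder repository
      strategy:
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: nodejs:latest          # example builder image
            namespace: openshift
      output:
        to:
          kind: ImageStreamTag
          name: example-app:latest       # image pushed to the internal registry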

You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application.
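
As an illustration, the following minimal manifest defines a pod and a service that exposes it; the names, labels, image, and ports are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod                    # placeholder pod name
      labels:
        app: hello                       # label matched by the service selector
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:latest   # placeholder container image
        ports:
        - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-service                # stable name and internal IP for the pods
    spec:
      selector:
        app: hello
      ports:
      - port: 80
        targetPort: 8080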

1.4. About Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition

As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks:

  • Learn about the next generation of single-purpose container operating system technology.
  • Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS).
  • Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS):

    • Installer-provisioned deployment
    • User-provisioned deployment

The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines.

You can learn how Ignition works, follow the Ignition process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, view Ignition configuration files, and change Ignition configuration after an installation.
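
Ignition configuration files are JSON documents generated by the installation program. After installation, one common way to change machine configuration is to create a MachineConfig object, which the Machine Config Operator renders into Ignition for the selected pool. The following sketch writes a hypothetical file to worker nodes; the file path, contents, and Ignition version are illustrative:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-example-file       # hypothetical MachineConfig name
      labels:
        machineconfiguration.openshift.io/role: worker   # apply to the worker pool
    spec:
      config:
        ignition:
          version: 3.2.0                 # Ignition specification version
        storage:
          files:
          - path: /etc/example.conf      # hypothetical file to create
            mode: 0644
            contents:
              source: data:,example%20setting%0A   # URL-encoded file contents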

1.5. About CI/CD methodology

You can use the continuous integration/continuous delivery (CI/CD) methodology to automate all the stages of application development. You can also use GitOps methodology to create repeatable and predictable processes for managing and recreating OpenShift Container Platform clusters and applications.
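
For example, OpenShift Pipelines, which is based on Tekton, describes CI/CD stages as declarative pipeline objects. The following is a minimal sketch only; the pipeline name, parameter, and the git-clone task it references are assumptions for illustration:

    apiVersion: tekton.dev/v1
    kind: Pipeline
    metadata:
      name: build-and-test               # hypothetical pipeline name
    spec:
      params:
      - name: git-url                    # repository to build, passed at run time
        type: string
      workspaces:
      - name: shared-workspace           # storage shared between tasks
      tasks:
      - name: fetch-source
        taskRef:
          name: git-clone                # assumes a git-clone Task is installed
        params:
        - name: url
          value: $(params.git-url)
        workspaces:
        - name: output
          workspace: shared-workspace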

1.6. About ArgoCD

You can use ArgoCD, which is a declarative, GitOps continuous delivery tool for cluster resources.
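
For example, an Argo CD Application object declares which Git repository holds the desired state and where to apply it. The following sketch assumes the default OpenShift GitOps namespace and uses placeholder repository and namespace values:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app                  # hypothetical application name
      namespace: openshift-gitops        # assumed GitOps instance namespace
    spec:
      project: default
      source:
        repoURL: https://github.com/example/config.git   # placeholder Git repository
        path: manifests                  # directory containing the manifests
        targetRevision: main
      destination:
        server: https://kubernetes.default.svc
        namespace: example-app           # namespace to deploy into
      syncPolicy:
        automated:
          prune: true                    # delete resources removed from Git
          selfHeal: true                 # revert manual drift to match Git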

1.7. About admission plug-ins

You can use admission plug-ins to regulate how OpenShift Container Platform functions. After a resource request is authenticated and authorized, admission plug-ins intercept the request to the master API to validate it and to ensure that scaling policies are adhered to. Admission plug-ins are used to enforce security policies, resource limitations, or configuration requirements.
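
In addition to the default plug-ins, webhook admission plug-ins let you add your own validation logic. The following is a sketch of a validating webhook registration; the webhook name, rules, and backing service are hypothetical:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: example-policy               # hypothetical configuration name
    webhooks:
    - name: policy.example.com           # must be a fully qualified name
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]       # resources this webhook validates
      clientConfig:
        service:
          namespace: example-namespace   # namespace of the webhook server
          name: example-webhook          # service that serves the webhook
          path: /validate
      failurePolicy: Fail                # reject requests if the webhook is unreachable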
