Chapter 2. Infrastructure Components
2.1. Kubernetes Infrastructure
2.1.1. Overview
Within OpenShift Online, Kubernetes manages containerized applications across a set of containers or hosts and provides mechanisms for deployment, maintenance, and application scaling. The Docker service packages, instantiates, and runs containerized applications. A Kubernetes cluster consists of one or more masters and a set of nodes.
OpenShift Online uses Kubernetes 1.9 and Docker 1.13.
2.1.2. Masters
The master is the host or hosts that contain the master components, including the API server, controller manager server, and etcd. The master manages nodes in its Kubernetes cluster and schedules pods to run on nodes.
Component | Description |
---|---|
API Server | The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also assigns pods to nodes and synchronizes pod information with service configuration. Can be run as a standalone process. |
etcd | etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the desired state. etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services. |
Controller Manager Server | The controller manager server watches etcd for changes to replication controller objects and then uses the API to enforce the desired state. Can be run as a standalone process. Several such processes create a cluster with one active leader at a time. |
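Where cluster-admin access is available (the masters in OpenShift Online are managed for you, so this does not normally apply to developers), the health of these components can be inspected with a standard Kubernetes query. The following is an illustrative sketch, not an OpenShift Online procedure:

# Requires cluster-admin privileges; not available to developers on OpenShift Online
$ oc get componentstatuses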
2.1.3. Nodes
A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the Docker service, a kubelet, and a service proxy.
OpenShift Online creates nodes from a cloud provider, physical systems, or virtual systems. Kubernetes interacts with node objects that are a representation of those nodes. The master uses the information from node objects to validate nodes with health checks. A node is ignored until it passes the health checks, and the master continues checking nodes until they are valid. The Kubernetes documentation has more information on node management.
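Node objects can be listed and inspected with the CLI. Viewing them requires cluster-level permissions that developers on OpenShift Online do not normally have, so treat the following as an illustrative sketch; the node name is a placeholder:

$ oc get nodes
# Shows the node's conditions (for example, Ready), which reflect the master's health checks
$ oc describe node node1.example.com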
2.1.3.1. Kubelet
Each node has a kubelet that updates the node as specified by a container manifest, which is a YAML file that describes a pod. The kubelet uses a set of manifests to ensure that its containers are started and that they continue to run.
A container manifest can be provided to a kubelet by:
- A file path on the command line that is checked every 20 seconds.
- An HTTP endpoint passed on the command line that is checked every 20 seconds.
- The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
- The kubelet listening for HTTP and responding to a simple API to submit a new manifest.
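A manifest provided through the file path or HTTP options is an ordinary pod definition. The following is a minimal sketch; the pod name, image, and port are illustrative and not part of any OpenShift Online default:

apiVersion: v1
kind: Pod
metadata:
  name: static-web          # illustrative pod name
spec:
  containers:
  - name: web
    image: nginx            # any image the node can pull
    ports:
    - containerPort: 80     # port the container listens on

The kubelet re-reads the manifest source on the configured interval and restarts the containers if they stop.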
2.1.3.2. Service Proxy
Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends.
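The forwarding rules come from Service objects. As a rough illustration (the service name, selector, and ports are placeholders), a Service such as the following causes the proxy on each node to forward TCP traffic on the service port to the selected back-end pods:

apiVersion: v1
kind: Service
metadata:
  name: frontend            # illustrative service name
spec:
  selector:
    app: frontend           # label selecting the back-end pods
  ports:
  - protocol: TCP
    port: 80                # port exposed by the service
    targetPort: 8080        # port the back-end pods listen on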
2.1.3.3. Node Object Definition
The following is an example node object definition in Kubernetes:
apiVersion: v1 # 1
kind: Node # 2
metadata:
  creationTimestamp: null
  labels: # 3
    kubernetes.io/hostname: node1.example.com
  name: node1.example.com # 4
spec:
  externalID: node1.example.com # 5
status:
  nodeInfo:
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: ""
    kubeletVersion: ""
    machineID: ""
    osImage: ""
    systemUUID: ""
1. `apiVersion` defines the API version to use.
2. `kind` set to `Node` identifies this as a definition for a node object.
3. `metadata.labels` lists any labels that have been added to the node.
4. `metadata.name` is a required value that defines the name of the node object. This value is shown in the `NAME` column when running the `oc get nodes` command.
5. `spec.externalID` defines the fully-qualified domain name where the node can be reached. Defaults to the `metadata.name` value when empty.
2.2. Container Registry
2.2.1. Overview
OpenShift Online can utilize any server implementing the Docker registry API as a source of images, including the Docker Hub, private registries run by third parties, and the integrated OpenShift Online registry.
2.2.2. Integrated OpenShift Container Registry
OpenShift Online provides an integrated container registry called OpenShift Container Registry (OCR) that adds the ability to automatically provision new image repositories on demand. This provides users with a built-in location for their application builds to push the resulting images.
Whenever a new image is pushed to OCR, the registry notifies OpenShift Online about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different pieces of OpenShift Online react to new images, creating new builds and deployments.
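For example, a deployment configuration can opt into these reactions with an image change trigger. The excerpt below is an illustrative sketch (the container name and image stream tag are placeholders) showing the part of a DeploymentConfig that redeploys when a new image is pushed to the watched tag in OCR:

triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true            # roll out automatically when the tag updates
    containerNames:
    - myapp                    # container whose image is replaced (placeholder)
    from:
      kind: ImageStreamTag
      name: myapp:latest       # image stream tag to watch (placeholder)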
2.2.3. Third Party Registries
OpenShift Online can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift Online registry. In this situation, OpenShift Online fetches tags from the remote registry when the image stream is created. To refresh the fetched tags, run `oc import-image <stream>`. When new images are detected, the previously described build and deployment reactions occur.
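As a sketch, assuming a hypothetical external repository at registry.example.com/team/app, an image stream can be created from it and refreshed later; the stream name and repository are placeholders:

# Create the image stream from the remote repository (add --all to import every tag)
$ oc import-image mystream --from=registry.example.com/team/app --confirm
# Re-import later to pick up new or updated tags
$ oc import-image mystream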
2.2.3.1. Authentication
OpenShift Online can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift to push and pull images to and from private repositories. The Authentication topic has more information.
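As a sketch of how such credentials are typically supplied (the secret name, registry host, and account details are placeholders, not values defined by OpenShift Online), a docker-registry secret can be created and linked to the service accounts that pull or build images:

# Store the registry credentials in a secret (all values are placeholders)
$ oc create secret docker-registry my-registry-secret \
    --docker-server=registry.example.com \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email>
# Allow deployments to pull with these credentials
$ oc secrets link default my-registry-secret --for=pull
# Allow builds to use these credentials
$ oc secrets link builder my-registry-secret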
2.3. Web Console
2.3.1. Overview
The OpenShift Online web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects.
JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets.
From the About page in the web console, you can check the cluster’s version number.
2.3.2. Project Overviews
After logging in, the web console provides developers with an overview for the currently selected project:
Figure 2.1. Web Console Project Overview
- The project selector allows you to switch between projects you have access to.
- To quickly find services from within the project view, type in your search criteria.
- Create new applications using a source repository or service from the service catalog.
- Notifications related to your project.
- The Overview tab (currently selected) visualizes the contents of your project with a high-level view of each component.
- Applications tab: Browse and perform actions on your deployments, pods, services, and routes.
- Builds tab: Browse and perform actions on your builds and image streams.
- Resources tab: View your current quota consumption and other resources.
- Storage tab: View persistent volume claims and request storage for your applications.
- Monitoring tab: View logs for builds, pods, and deployments, as well as event notifications for all objects in your project.
- Catalog tab: Quickly get to the catalog from within a project.
2.3.3. JVM Console
For pods based on Java images, the web console also exposes access to a hawt.io-based JVM console for viewing and managing any relevant integration components. A Connect link is displayed in the pod’s details on the Browse → Pods page.
Figure 2.2. Pod with a Link to the JVM Console
After connecting to the JVM console, different pages are displayed depending on which components are relevant to the connected pod.
Figure 2.3. JVM Console
The following pages are available:
Page | Description |
---|---|
JMX | View and manage JMX domains and mbeans. |
Threads | View and monitor the state of threads. |
ActiveMQ | View and manage Apache ActiveMQ brokers. |
Camel | View and manage Apache Camel routes and dependencies. |
OSGi | View and manage the JBoss Fuse OSGi environment. |
2.3.4. StatefulSets
A `StatefulSet` controller provides a unique identity to its pods and determines the order of deployments and scaling. `StatefulSet` is useful for unique network identifiers, persistent storage, graceful deployment and scaling, and graceful deletion and termination.
Figure 2.4. StatefulSet in OpenShift Online
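A minimal StatefulSet definition looks like the following sketch; the name, headless service, and image are illustrative. The controller creates the pods in order with stable names such as web-0, web-1, and web-2:

apiVersion: apps/v1            # apps/v1beta2 may be required on older clusters
kind: StatefulSet
metadata:
  name: web                    # illustrative name
spec:
  serviceName: web             # headless service providing the stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx           # illustrative image
        ports:
        - containerPort: 80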