Chapter 1. OpenShift Container Platform 4.6 Documentation
Welcome to the official OpenShift Container Platform 4.6 documentation, where you can find information to help you learn about OpenShift Container Platform and start exploring its features.
To navigate the OpenShift Container Platform 4.6 documentation, you can either
- Use the left navigation bar to browse the documentation or
- Select the activity that interests you from the contents of this Welcome page
You can start with Architecture and Security and compliance. Then, see Release notes.
1.1. Cluster installer activities
If you are setting out to install an OpenShift Container Platform 4.6 cluster, this documentation helps you:
- Install a cluster on AWS: You have the most installation options when you deploy a cluster on Amazon Web Services (AWS). You can deploy clusters with default settings or custom AWS settings. You can also deploy a cluster on AWS infrastructure that you provisioned yourself. You can modify the provided AWS CloudFormation templates to meet your needs.
- Install a cluster on Azure: You can deploy clusters with default settings, custom Azure settings, or custom networking settings in Microsoft Azure. You can also provision OpenShift Container Platform into an Azure Virtual Network or use Azure Resource Manager Templates to provision your own infrastructure.
- Install a cluster on GCP: You can deploy clusters with default settings or custom GCP settings on Google Cloud Platform (GCP). You can also perform a GCP installation where you provision your own infrastructure.
- Install a cluster on VMware vSphere: You can install OpenShift Container Platform on supported versions of vSphere.
- Install a cluster on bare metal: If none of the available platform and cloud providers meet your needs, you can install OpenShift Container Platform on bare metal.
- Install an installer-provisioned cluster on bare metal: You can install OpenShift Container Platform on bare metal with an installer-provisioned architecture.
- Create Red Hat Enterprise Linux CoreOS (RHCOS) machines on bare metal: You can install RHCOS machines using ISO or PXE in a fully live environment and configure them with kernel arguments, Ignition configs, or the coreos-installer command; a brief sketch follows this list.
- Install a cluster on Red Hat OpenStack Platform (RHOSP): You can install a cluster on RHOSP with customizations.
- Install a cluster on Red Hat Virtualization (RHV): You can deploy clusters on Red Hat Virtualization (RHV) with a quick install or an install with customizations.
- Install a cluster in a restricted network: If your cluster that uses user-provisioned infrastructure on AWS, GCP, vSphere, or bare metal does not have full access to the internet, you can mirror the OpenShift Container Platform installation images and install a cluster in a restricted network.
- Install a cluster in an existing network: If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Azure, you can install a cluster.
- Install a private cluster: If your cluster does not require external internet access, you can install a private cluster on AWS, Azure, or GCP. Internet access is still required to access the cloud APIs and installation media.
- Check installation logs: Access installation logs to evaluate issues that occur during OpenShift Container Platform 4.6 installation.
- Access OpenShift Container Platform: Use credentials output at the end of the installation process to log in to the OpenShift Container Platform cluster from the command line or web console.
- Install Red Hat OpenShift Container Storage: You can install Red Hat OpenShift Container Storage as an Operator to provide highly integrated and simplified persistent storage management for containers.
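The RHCOS bare metal item above refers to the coreos-installer command. As a minimal sketch, assuming the Ignition config for this machine's role is served from a hypothetical bastion host, installing RHCOS from the live ISO or PXE environment might look like this:

```
# Run from the RHCOS live environment on the target host.
# Write RHCOS to the target disk and embed the Ignition config
# generated for this machine's role (bootstrap, master, or worker).
$ sudo coreos-installer install /dev/sda \
    --ignition-url https://bastion.example.com/worker.ign

# Reboot into the installed system.
$ sudo reboot
```

Kernel arguments can also be persisted at install time, for example with the coreos-installer install --append-karg option.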
1.2. Developer activities
Ultimately, OpenShift Container Platform is a platform for developing and deploying containerized applications. If you are an application developer, this documentation helps you:
- Understand OpenShift Container Platform development: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.
- Work with projects: Create projects from the web console or CLI to organize and share the software you develop.
- Work with applications: Use the Developer perspective in the OpenShift Container Platform web console to easily create and deploy applications.
Use the Topology view to visually interact with your applications, monitor status, connect and group components, and modify your code base.
- Use the developer CLI tool (odo): The odo CLI tool lets developers create single or multi-component applications easily and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing developers to focus on developing their applications.
- Create CI/CD Pipelines: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservices-based architecture.
- Deploy Helm charts: Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI; a short example follows this list.
- Understand Operators: Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.6. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects.
- Understand image builds: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (from places like Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds.
- Create container images: A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. S2I containers let you insert your source code into a base container that is set up to run code of a particular type (such as Ruby, Node.js, or Python).
- Create deployments: Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Use the Workloads page or the oc CLI to manage deployments; a short oc sketch follows this list. Learn rolling, recreate, and custom deployment strategies.
- Create templates: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports, and other content that defines how an application can be run or built.
- Create Operators: Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.6. Learn the workflow for building, testing, and deploying Operators. Then create your own Operators based on Ansible or Helm, or configure built-in Prometheus monitoring using the Operator SDK.
- REST API reference: Lists OpenShift Container Platform application programming interface endpoints.
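The Deploy Helm charts item above describes the Helm 3 CLI. A minimal example, assuming a hypothetical chart repository and chart name, looks like this:

```
# Add a chart repository and refresh the local index.
$ helm repo add example-charts https://charts.example.com
$ helm repo update

# Install a chart as a named release into the current project,
# then list the releases that are deployed.
$ helm install my-release example-charts/sample-app
$ helm list
```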
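The Create deployments item above refers to Deployment objects and the oc CLI. The following sketch uses placeholder project, application, and image names to show the basic command-line workflow:

```
# Create a project to hold the application.
$ oc new-project demo

# Create a Deployment from an existing container image
# (the image reference is a placeholder).
$ oc create deployment hello --image=quay.io/example/hello:1.0

# Scale out and watch the rollout complete.
$ oc scale deployment/hello --replicas=3
$ oc rollout status deployment/hello

# Expose the application through a Service and a Route.
$ oc expose deployment/hello --port=8080
$ oc expose service/hello
```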
1.3. Cluster administrator activities
Ongoing tasks on your OpenShift Container Platform 4.6 cluster include managing machines, providing services to users, and working with the monitoring and logging features that watch over the cluster. If you are a cluster administrator, this documentation helps you:
- Understand OpenShift Container Platform management: Learn about components of the OpenShift Container Platform 4.6 control plane. See how OpenShift Container Platform masters and workers are managed and updated through the Machine API and Operators.
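For example, assuming you are logged in as a user with cluster-admin privileges, you can inspect those components from the CLI:

```
# List the cluster Operators that manage control plane components.
$ oc get clusteroperators

# List the machines and machine sets managed through the Machine API.
$ oc get machines -n openshift-machine-api
$ oc get machinesets -n openshift-machine-api

# Show the nodes that back those machines.
$ oc get nodes
```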
1.3.1. Manage cluster components
- Manage machines: Manage machines in your cluster on AWS, Azure, or GCP by deploying health checks and applying autoscaling to machines.
- Manage container registries: Each OpenShift Container Platform cluster includes a built-in container registry for storing its images. You can also configure a separate Red Hat Quay registry to use with OpenShift Container Platform. The Quay.io website provides a public container registry that stores OpenShift Container Platform containers and Operators.
- Manage users and groups: Add users and groups that have different levels of permissions to use or modify clusters.
- Manage authentication: Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers, including HTPasswd, Keystone, LDAP, basic authentication, request header, GitHub, GitLab, Google, and OpenID.
- Manage ingress, API server, and service certificates: OpenShift Container Platform creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. At some point, you might need to change, add, or rotate these certificates.
- Manage networking: Networking in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic; a minimal network policy sketch follows this list.
- Manage storage: OpenShift Container Platform allows cluster administrators to configure persistent storage using Red Hat OpenShift Container Storage, AWS Elastic Block Store, NFS, iSCSI, Container Storage Interface (CSI), and more. As needed, you can expand persistent volumes, configure dynamic provisioning, and use CSI to configure and clone persistent storage.
- Manage Operators: Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters. Once installed, you can run, upgrade, back up or otherwise manage the Operator on your cluster (based on what the Operator is designed to do).
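As a minimal illustration of the network policy capability mentioned above, the following policy (namespace and labels are placeholders) allows ingress to pods labeled app=backend only from pods labeled app=frontend in the same namespace:

```
$ oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: demo
spec:
  # Select the pods this policy protects.
  podSelector:
    matchLabels:
      app: backend
  ingress:
    # Permit traffic only from pods labeled app=frontend.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
EOF
```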
1.3.2. Change cluster components
- Use custom resource definitions (CRDs) to modify the cluster: Cluster features that are implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs; a minimal sketch follows this list.
- Set resource quotas: Choose from CPU, memory and other system resources to set quotas.
- Prune and reclaim resources: You can reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs.
- Scale and tune clusters: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.
- Update a cluster: To upgrade your OpenShift Container Platform cluster to a later version, use the Cluster Version Operator (CVO). If an update is available from the OpenShift Container Platform update service, you apply that cluster update from either the web console or the CLI; a brief oc adm upgrade sketch follows this list.
- Understanding the OpenShift Update Service: Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in restricted network environments.
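The custom resource definitions item above mentions creating a CRD and managing resources from it. As a hedged sketch using the classic CronTab example (the group, kind, and fields are invented for illustration), you register the new type and then work with it like any other resource:

```
# Register a new CronTab resource type with the cluster.
$ oc apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
EOF

# Create and list objects of the new type.
$ oc apply -f - <<EOF
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab
spec:
  cronSpec: "*/5 * * * *"
  image: quay.io/example/cron-image:latest
EOF
$ oc get crontabs
```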
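For the update workflow described above, the command-line path is driven by oc adm upgrade; a brief sketch:

```
# Show the current version and any updates recommended for
# this cluster's update channel.
$ oc adm upgrade

# Apply the most recent recommended update; the Cluster Version
# Operator then rolls the new version out across the cluster.
$ oc adm upgrade --to-latest=true
```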
1.3.3. Monitor the cluster
- Work with cluster logging: Learn about cluster logging and configure the cluster logging components, such as Elasticsearch, Fluentd, Kibana, and Curator.
- Monitor clusters: Learn to configure the monitoring stack. After monitoring is configured, use the web console to access monitoring dashboards. In addition to infrastructure metrics, you can also scrape and view metrics for your own services; a hedged ServiceMonitor sketch follows this list.
- Remote health monitoring: OpenShift Container Platform collects anonymized aggregated information about your cluster and reports it to Red Hat via Telemetry and the Insights Operator. This information allows Red Hat to improve OpenShift Container Platform and to react to issues that impact customers more quickly. You can view the data collected by remote health monitoring.
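For the "scrape and view metrics for your own services" capability mentioned above, a cluster administrator must first enable monitoring for user-defined projects; after that, a ServiceMonitor resource tells the monitoring stack what to scrape. A hedged sketch, with placeholder namespace, labels, and port name:

```
$ oc apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app-monitor
  namespace: demo
spec:
  # Scrape every Service in this namespace labeled app=demo-app.
  selector:
    matchLabels:
      app: demo-app
  endpoints:
    # The Service must expose a port with this name that serves /metrics.
    - port: metrics
      interval: 30s
EOF
```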