1.2. New features and enhancements
This release adds improvements related to the following components and concepts.
1.2.1. Installation and upgrade
1.2.1.1. Installing a cluster on vSphere using installer-provisioned infrastructure
OpenShift Container Platform 4.5 introduces support for installing a cluster on vSphere using installer-provisioned infrastructure.
1.2.1.3. Three-node bare metal deployments
You can install and run three-node clusters in OpenShift Container Platform with no workers. This provides smaller, more resource-efficient clusters for deployment, development, and testing.
This feature, previously available as a Technology Preview, is now fully supported in OpenShift Container Platform 4.5.
For more information, see Running a three-node cluster.
1.2.1.4. Restricted network cluster upgrade improvements
The Cluster Version Operator (CVO) can now verify release images if the image signature is available as a config map in the cluster during the upgrade process for a restricted network cluster. This removes the need for the --force flag during upgrades in a restricted network environment.
This improved upgrade workflow is completed by running the enhanced oc adm release mirror command, which performs the following actions:
- Pulls the image signature from the release during the mirroring process.
- Applies the signature config map directly to the connected cluster.
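For example, a mirroring invocation for this workflow might look like the following sketch. The release version, registry host, and repository are placeholders, and the signature-related flag shown reflects the behavior described above; confirm the exact options with oc adm release mirror --help for your oc version:

$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.5.4-x86_64 \
    --to=mirror.example.com:5000/ocp/release \
    --to-release-image=mirror.example.com:5000/ocp/release:4.5.4 \
    --apply-release-image-signature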
1.2.1.5. Migrating Azure private DNS zones
There is now a new openshift-install migrate command available for migrating Azure private DNS zones. If you installed an OpenShift Container Platform version 4.2 or 4.3 cluster on Azure that uses installer-provisioned infrastructure, your cluster might use a legacy private DNS zone. If it does, you must migrate it to the new type of private DNS zone.
1.2.1.6. Built-in help for supported install-config.yaml fields
There is a new openshift-install explain command available that lists all the fields for supported install-config.yaml file versions, including a short description of each resource. It also indicates which fields are mandatory and specifies their default values. Using the explain command reduces the need to continually look up configuration options when creating or customizing the install-config.yaml file.
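For example, you can list the top-level fields and then drill into a specific section; the dotted field path shown here is illustrative:

$ openshift-install explain installconfig
$ openshift-install explain installconfig.platform.aws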
1.2.1.7. Encrypt EBS instance volumes with a KMS key
You can now define a KMS key to encrypt EBS instance volumes. This is useful if you have explicit compliance and security guidelines when deploying to AWS. The KMS key can be configured in the install-config.yaml file by setting the optional kmsKeyARN field. For example:
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    aws:
      rootVolume:
        kmsKeyARN: arn:aws:kms:us-east-2:563456982459:key/4f5265b4-16f7-xxxx-xxxx-xxxxxxxxxxxx
...
If no key is specified, the account’s default KMS key for that particular region is used.
1.2.1.8. Install to pre-existing VPC with multiple CIDRs on AWS
You can now install OpenShift Container Platform to a VPC with more than one CIDR on AWS. This lets you select secondary CIDRs for the machine network. When the VPC is provisioned by the installer, it does not create multiple CIDRs or configure the routing between subnets. Installing to a pre-existing VPC with multiple CIDRs is supported for both user-provisioned and installer-provisioned infrastructure installation workflows.
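As an illustrative sketch, the relevant portion of an install-config.yaml file for a pre-existing AWS VPC with two CIDRs might look like the following; the CIDR ranges and subnet IDs are placeholders:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
  - cidr: 10.1.0.0/16
platform:
  aws:
    subnets:
    - subnet-0123456789abcdef0
    - subnet-0fedcba9876543210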
1.2.1.9. Adding custom domain names to AWS Virtual Private Cloud (VPC) DHCP option sets
Custom domain names can now be added to AWS Virtual Private Cloud (VPC) DHCP option sets. This enables certificate signing request (CSR) approval of new nodes when custom DHCP options are used.
1.2.1.10. Provisioning bare metal hosts using IPv6 with Ironic
Binaries required for IPv6 provisioning using the UEFI networking stack have now been introduced in Ironic, so you can now provision bare metal hosts using IPv6 with Ironic. The snponly.efi bootloader executable and compatible iPXE binaries are now included in the tftpboot directory.
1.2.1.11. Custom networks and subnets for clusters on RHOSP
OpenShift Container Platform 4.5 introduces support for installing clusters on Red Hat OpenStack Platform (RHOSP) that rely on preexisting networks and subnets.
1.2.1.12. Additional networks for clusters on RHOSP
OpenShift Container Platform 4.5 introduces support for multiple networks in clusters that run on RHOSP. You can specify these networks for both control plane and compute machines during installation.
1.2.1.13. Improved RHOSP load balancer upgrade experience for clusters that use Kuryr
Clusters that use Kuryr now have improved support for Octavia load-balancing services on RHOSP clusters that were upgraded from version 13 to version 16. For example, these clusters now support the Octavia OVN provider driver.
For more information, see The Octavia OVN driver.
1.2.1.14. Multiple version schemes accepted when installing RPM packages
When installing RPM packages, OpenShift Container Platform now accepts both three-part and two-part version schemes. A three-part version scheme follows the x.y.z format, whereas a two-part version scheme follows the x.y format. Packages that use either scheme can be installed. See BZ#1826213 for more information.
1.2.1.15. SSH configuration no longer required for debug information
Gathering debug information from the bootstrap host no longer requires SSH configuration. See BZ#1811453 for more information.
1.2.1.16. Master nodes can be named any valid hostname
Master nodes can now be given any valid hostname. See BZ#1804944 for more information.
1.2.1.17. Octavia OVN provider driver supported on previous RHOSP versions
OpenShift Container Platform clusters that were deployed before RHOSP supported the Octavia OVN provider driver can now use the driver. See BZ#1847181 for more information.
1.2.1.18. Octavia OVN provider driver supports listeners on same port
The ovn-octavia driver now supports listeners on the same port for different protocols. This was previously not supported on the ovn-octavia driver, but it is now supported and no longer needs to be blocked. As a result, a service such as DNS can expose port 53 over both TCP and UDP when using ovn-octavia. See BZ#1846452 for more information.
1.2.2. Security
1.2.2.1. Using the oauth-proxy image stream in restricted network installations
The oauth-proxy image can now be consumed by external components in restricted network installations by using the oauth-proxy image stream.
1.2.3. Images
1.2.3.1. Mirroring release images to and from files
You can now mirror release images from a registry to a file and from a file to a registry.
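As a rough sketch of the registry-to-file direction, assuming the --to-dir option of oc adm release mirror in your oc version (the release version and path are placeholders); the file-to-registry direction uses the corresponding from-disk options, which you can confirm with oc adm release mirror --help:

$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:4.5.4-x86_64 \
    --to-dir=/path/to/mirror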
1.2.3.2. Mirroring release image signatures
The oc adm release mirror command was extended to also create and apply config map manifests that contain the release image signature, which the Cluster Version Operator can use to verify the mirrored release.
1.2.4. Machine API
1.2.4.1. AWS machine sets support spot instances
AWS machine sets now support spot instances. This lets you create a machine set that deploys machines as spot instances, so you can save costs compared to on-demand instance prices. You can configure spot instances by adding the following line under the providerSpec field in the machine set YAML file:
providerSpec:
  value:
    spotMarketOptions: {}
1.2.4.2. Autoscaling the minimum number of machines to 0
You can now set the minimum number of replicas for a machine autoscaler to 0. This allows the autoscaler to be more cost-effective by scaling between zero machines and the number of machines required by your workloads.
For more information, see the MachineAutoscaler resource definition.
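As a minimal sketch, a MachineAutoscaler that can scale a machine set down to zero might look like the following; the machine set name and replica maximum are placeholders:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 0
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-us-east-1a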
1.2.4.3. MachineHealthCheck resource with empty selector monitors all machines
A MachineHealthCheck resource that contains an empty selector field now monitors all machines.
For more information on the selector field in the MachineHealthCheck resource, see the Sample MachineHealthCheck resource.
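As an illustrative sketch, a MachineHealthCheck with an empty selector, which now matches all machines, might look like the following; the condition and maxUnhealthy values are example assumptions:

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: all-machines-health-check
  namespace: openshift-machine-api
spec:
  selector: {}
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
  maxUnhealthy: 40%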
1.2.4.4. Describing machine and machine set fields by using oc explain
A full OpenAPI schema is now provided for machine and machine set custom resources. The oc explain command now provides descriptions for the fields included in the machine and machine set API resources.
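For example, you can now inspect these schemas directly; the field paths shown here are illustrative:

$ oc explain machine.spec
$ oc explain machineset.spec.replicas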
1.2.5. Nodes
1.2.5.1. New descheduler strategy is available (Technology Preview)
The descheduler now allows you to configure the RemovePodsHavingTooManyRestarts strategy. This strategy removes pods from nodes after they have been restarted too many times. The Descheduler Operator now also supports the full upstream descheduler strategy names, allowing for one-to-one configuration.
See Descheduler strategies for more information.
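The following is a rough sketch of enabling the strategy through the KubeDescheduler custom resource; the apiVersion, parameter names, and threshold value are assumptions, so check the Descheduler Operator documentation for the exact schema in your release:

apiVersion: operator.openshift.io/v1beta1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  strategies:
  - name: RemovePodsHavingTooManyRestarts
    params:
    - name: podRestartThreshold
      value: "10"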
1.2.5.2. Vertical Pod Autoscaler Operator (Technology Preview)
OpenShift Container Platform 4.5 introduces the Vertical Pod Autoscaler Operator (VPA). The VPA reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. You create individual custom resources (CRs) to instruct the VPA to update all of the pods associated with a workload object, such as a Deployment, DeploymentConfig, StatefulSet, Job, DaemonSet, ReplicaSet, or ReplicationController. The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle.
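As a minimal sketch, a VPA custom resource targeting a Deployment might look like the following; the apiVersion shown and the workload name are assumptions to verify against the CRDs installed by the Operator:

apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"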
1.2.5.3. Anti-affinity control plane node scheduling on RHOSP
If separate physical hosts are available on an RHOSP deployment, control plane nodes will be scheduled across all of them.
1.2.6. Cluster monitoring
1.2.6.1. Monitor your own services (Technology Preview)
The following improvements are now available to further enhance monitoring your own services:
- Allow cross-correlation of the metrics of your own service with cluster metrics.
- Allow using metrics of services in user namespaces in recording and alerting rules.
- Add multi-tenancy support for the Alertmanager API.
- Add the ability to deploy user recording and alerting rules with higher availability.
- Add the ability to introspect Thanos Stores using the Thanos Querier.
- Access metrics of all services together in the web console from a single view.
For more information, see Monitoring your own services.
1.2.7. Cluster logging
1.2.7.1. Elasticsearch version upgrade
Cluster logging in OpenShift Container Platform 4.5 now uses Elasticsearch 6.8.1 as the default log store.
The new Elasticsearch version introduces a new Elasticsearch data model. With the new data model, data is no longer indexed by type (infrastructure and application) and project. Data is only indexed by type:
- The application logs that were previously in the project- indices in OpenShift Container Platform 4.4 are in a set of indices prefixed with app-.
- The infrastructure logs that were previously in the .operations- indices are now in the infra- indices.
- The audit logs are stored in the audit- indices.
Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.
Elasticsearch 6.x also includes a new security plug-in, Open Distro for Elasticsearch. Open Distro for Elasticsearch provides a comprehensive set of advanced security features designed to keep your data secure.
1.2.7.2. New Elasticsearch log retention feature
The new index management feature relies on the Elasticsearch rollover feature to maintain indices. You can configure how long to retain data before it is removed from the cluster. The index management feature replaces Curator. In OpenShift Container Platform 4.5, Curator only removes data that is in the Elasticsearch index formats from before OpenShift Container Platform 4.5; Curator itself will be removed in a later release.
1.2.7.3. Kibana link in web console moved
The link to launch Kibana has been moved from the Monitoring menu to the Application Launcher at the top of the OpenShift Container Platform console.
1.2.8. Web console
1.2.8.1. New Infrastructure Features filters for Operators in OperatorHub
You can now filter Operators by Infrastructure Features in OperatorHub. For example, select Disconnected to see Operators that work in disconnected environments.
1.2.8.2. Developer Perspective
You can now use the Developer perspective to:
- Make informed decisions about installing Helm charts in the Developer Catalog by using their descriptions and documentation.
- Uninstall, upgrade, and roll back Helm releases.
- Create and delete dynamic Knative event sources.
- Deploy virtual machines, launch applications in them, or delete the virtual machines.
- Provide Git webhooks, Triggers, and Workspaces, manage credentials for private Git repositories, and troubleshoot using improved logs for OpenShift Pipelines.
- Add health checks during or after application deployment.
- Navigate efficiently and pin frequently searched items.
1.2.8.3. Streamlined steps for configuring alerts from cluster dashboard
For AlertManagerReceiversNotConfigured alerts that display on the cluster dashboard of the web console, a new Configure link is available. This link goes to the Alertmanager configuration page, which streamlines the steps it takes to configure your alerts. For more information, see BZ#1826489.
1.2.9. Scale
1.2.9.1. Cluster maximums
Updated guidance around Cluster maximums for OpenShift Container Platform 4.5 is now available.
Use the OpenShift Container Platform Limit Calculator to estimate cluster limits for your environment.
1.2.10. Networking
1.2.10.1. Migrating from the OpenShift SDN default CNI network provider (Technology Preview)
You can now migrate to the OVN-Kubernetes default Container Network Interface (CNI) network provider from the OpenShift SDN default CNI network provider.
For more information, see Migrate from the OpenShift SDN default CNI network provider.
1.2.10.2. Ingress Controller enhancements
There are two noteworthy Ingress Controller enhancements introduced in OpenShift Container Platform 4.5:
1.2.10.3. HAProxy upgraded to version 2.0.14
The HAProxy used for the Ingress Controller has been upgraded from version 2.0.13 to 2.0.14. This upgrade provides a router reload performance improvement. The router reload optimization is most beneficial for clusters with thousands of routes.
1.2.10.4. HTTP/2 Ingress support
You can now enable transparent end-to-end HTTP/2 connectivity in HAProxy. This feature allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.
You can enable HTTP/2 connectivity in HAProxy for an individual Ingress Controller or for the entire cluster. For more information, see HTTP/2 Ingress connectivity.
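For example, the linked documentation describes enabling HTTP/2 with an annotation; the following commands sketch the per-Ingress-Controller and cluster-wide variants:

$ oc -n openshift-ingress-operator annotate ingresscontrollers/default \
    ingress.operator.openshift.io/default-enable-http2=true

$ oc annotate ingresses.config/cluster \
    ingress.operator.openshift.io/default-enable-http2=true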
To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.
The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction comes from the fact that HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.
A connection that uses the HTTP/2 protocol cannot be upgraded to the WebSocket protocol. If you have a back-end application that is designed to allow WebSocket connections, it must not allow a connection to negotiate use of the HTTP/2 protocol or else WebSocket connections will fail.
1.2.11. Developer experience
1.2.11.1. oc new-app now produces Deployment resources
The oc new-app command now produces Deployment resources instead of DeploymentConfig resources by default. If you prefer to create DeploymentConfig resources, you can pass the --as-deployment-config flag when invoking oc new-app. For more information, see Understanding Deployments and DeploymentConfigs.
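For example, assuming a placeholder image reference:

$ oc new-app registry.example.com/myteam/myimage:latest
$ oc new-app registry.example.com/myteam/myimage:latest --as-deployment-config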
1.2.11.2. Support node affinity scheduler in image registry CRD
Node affinity scheduling is now supported to ensure that image registry deployments complete even when an infrastructure node does not exist. Node affinity must be configured manually.
See Controlling pod placement on nodes using node affinity rules for more information.
1.2.11.3. Virtual hosted buckets for custom S3 endpoints
Virtual hosted buckets are now supported, which lets you deploy clusters to new or hidden AWS regions.
1.2.11.4. Node pull credentials during build and image stream import
Builds and image stream imports now automatically use the pull secret that was used to install the cluster if a pull secret is not explicitly set, so developers do not need to copy this pull secret into their namespaces.
1.2.12. Backup and restore
1.2.12.1. Gracefully shutting down and restarting a cluster
You can now gracefully shut down and restart your OpenShift Container Platform 4.5 cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs.
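As an illustrative sketch of one part of the procedure, nodes can be shut down with oc debug after you back up etcd; the delay value is an example, and the linked procedure remains the authoritative set of steps:

$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
      oc debug node/${node} -- chroot /host shutdown -h 1
  done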
See Shutting down the cluster gracefully for more information.
1.2.13. Disaster recovery
1.2.13.1. Automatic control plane certificate recovery
First introduced in OpenShift Container Platform 4.4.8, OpenShift Container Platform can now automatically recover from expired control plane certificates. The exception is that you must manually approve pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
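For example, pending CSRs can be listed and approved with the following commands, where <csr_name> is the name of a pending node-bootstrapper CSR:

$ oc get csr
$ oc adm certificate approve <csr_name>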
See Recovering from expired control plane certificates for more information.
1.2.14. Storage
1.2.14.1. Persistent storage using the AWS EBS CSI Driver Operator (Technology Preview)
You can now use the Container Storage Interface (CSI) to deploy the CSI driver you need for provisioning AWS Elastic Block Store (EBS) persistent storage. This Operator is in Technology Preview. For more information, see AWS Elastic Block Store CSI Driver Operator.
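As a minimal sketch, a storage class that uses the AWS EBS CSI driver could look like the following; the class name and parameters are illustrative assumptions, while ebs.csi.aws.com is the driver's provisioner name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-csi
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer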
1.2.14.2. Persistent storage using the OpenStack Manila CSI Driver Operator
You can now use CSI to provision a persistent volume using the CSI driver for the OpenStack Manila shared file system service. For more information, see OpenStack Manila CSI Driver Operator.
1.2.14.3. Persistent storage using CSI inline ephemeral volumes (Technology Preview)
You can now use CSI to specify volumes directly in the pod specification, rather than in a persistent volume. This feature is in Technology Preview and is available by default when using CSI drivers. For more information, see CSI inline ephemeral volumes.
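For example, a pod can declare a CSI volume inline in its specification rather than through a persistent volume claim. The driver name, image, and volume attributes below are placeholders for whatever CSI driver supports ephemeral volumes in your cluster:

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/myimage:latest
    volumeMounts:
    - mountPath: /data
      name: my-csi-inline-vol
  volumes:
  - name: my-csi-inline-vol
    csi:
      driver: example.csi.driver.io
      volumeAttributes:
        size: 1Gi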
1.2.14.4. Persistent storage using CSI volume cloning
Volume cloning using CSI, previously in Technology Preview, is now fully supported in OpenShift Container Platform 4.5. For more information, see CSI volume cloning.
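For example, a clone is requested by creating a new persistent volume claim whose dataSource references an existing claim; the names, storage class, and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-clone
spec:
  storageClassName: csi-storageclass
  dataSource:
    name: pvc-original
    kind: PersistentVolumeClaim
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi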
1.2.14.5. External provisioner for AWS EFS (Technology Preview) feature has been removed
The Amazon Web Services (AWS) Elastic File System (EFS) Technology Preview feature has been removed and is no longer supported.
1.2.15. Operators
1.2.15.1. Bundle Format for packaging Operators and the opm CLI tool
The Bundle Format for Operators is a new packaging format introduced by the Operator Framework that is supported starting with OpenShift Container Platform 4.5. To improve scalability and better enable upstream users hosting their own catalogs, the Bundle Format specification simplifies the distribution of Operator metadata.
While the legacy Package Manifest Format is deprecated in OpenShift Container Platform 4.5, it is still supported and Operators provided by Red Hat are currently shipped using the Package Manifest Format.
An Operator bundle represents a single version of an Operator and can be scaffolded with the Operator SDK. On-disk bundle manifests are containerized and shipped as a bundle image, a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker, and container registries like Quay.
See Packaging formats for more details on the Bundle Format.
The new opm CLI tool is also introduced alongside the Bundle Format. The opm CLI allows you to create and maintain catalogs of Operators from a list of bundles, called an index, which is equivalent to a repository. The result is a container image, called an index image, which can be stored in a container registry and then installed on a cluster.
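For example, a sketch of building an index image from a bundle image and pushing it to a registry; the image references are placeholders:

$ opm index add \
    --bundles quay.io/example/my-operator-bundle:v0.1.0 \
    --tag quay.io/example/my-operator-index:1.0.0
$ podman push quay.io/example/my-operator-index:1.0.0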
An index contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, OLM can use the index image as a catalog by referencing it in a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster.
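A minimal CatalogSource referencing such an index image might look like the following sketch; the image reference, display name, and poll interval are assumptions:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/my-operator-index:1.0.0
  displayName: My Operator Catalog
  updateStrategy:
    registryPoll:
      interval: 30m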
See Managing custom catalogs for more details on opm usage.
1.2.15.2. v1 CRD support in Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) now supports Operators using v1 custom resource definitions (CRDs) when loading Operators into catalogs and deploying them on cluster. Previously, OLM only supported v1beta1 CRDs; OLM now manages both v1 and v1beta1 CRDs in the same way.
To support this feature, OLM now makes CRD upgrades safer by ensuring that existing CRD storage versions are not missing from the upgraded CRD, which avoids potential data loss.
1.2.15.3. Report etcd member status conditions
The etcd cluster Operator now reports etcd member status conditions.
1.2.15.4. Admission webhook support in OLM
Validating and mutating admission webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
See Managing admission webhooks in Operator Lifecycle Manager for more details.
1.2.15.5. Config map configurations added from the openshift-config namespace
Config map configurations are now collected from the openshift-config namespace by the Insights Operator. This allows you to see whether certificates are used for the cluster certificate authority and to gather other cluster-related settings from the openshift-config namespace.
1.2.15.6. Read-only Operator API (Technology Preview)
The new Operator API is now available as a Technology Preview feature in read-only mode. Previously, installing Operators using Operator Lifecycle Manager (OLM) required cluster administrators to be aware of multiple APIs, including CatalogSource, Subscription, ClusterServiceVersion, and InstallPlan objects. This single Operator API resource is a first step toward a more simplified experience for discovering and managing the lifecycle of Operators in an OpenShift Container Platform cluster.
Currently only available by using the CLI and requiring a few manual steps to enable, this feature previews interacting with Operators as a first-class API object. Cluster administrators can discover previously installed Operators using this API in read-only mode, for example by using the oc get operators command.
To enable this Technology Preview feature:
Procedure
Disable Cluster Version Operator (CVO) management of the OLM:
$ oc patch clusterversion version \
    --type=merge -p \
    '{
       "spec":{
          "overrides":[
             {
                "kind":"Deployment",
                "name":"olm-operator",
                "namespace":"openshift-operator-lifecycle-manager",
                "unmanaged":true,
                "group":"apps/v1"
             }
          ]
       }
     }'
Add the OperatorLifecycleManagerV2=true feature gate to the OLM Operator. Edit the OLM Operator's deployment:
$ oc -n openshift-operator-lifecycle-manager \
    edit deployment olm-operator
Add the following flag to the deployment's args section:

...
spec:
  containers:
  - args:
    ...
    - --feature-gates
    - OperatorLifecycleManagerV2=true
- Save your changes.
- Install an Operator using the normal OperatorHub method if you have not already; this example uses an etcd Operator installed in the test-project project.
- Create a new Operator resource for the installed etcd Operator.
Save the following to a file named etcd-test-op.yaml:

apiVersion: operators.coreos.com/v2alpha1
kind: Operator
metadata:
  name: etcd-test
Create the resource:
$ oc create -f etcd-test-op.yaml
To have the installed Operator opt in to the new API, apply the operators.coreos.com/etcd-test label to the following objects related to your Operator:
- Subscription
- InstallPlan
- ClusterServiceVersion
- Any CRDs owned by the Operator
Note: In a future release, these objects will be automatically labeled for any Operator whose CSV was installed using a Subscription object.
For example:
$ oc label sub etcd operators.coreos.com/etcd-test="" -n test-project
$ oc label ip install-6c5mr operators.coreos.com/etcd-test="" -n test-project
$ oc label csv etcdoperator.v0.9.4 operators.coreos.com/etcd-test="" -n test-project
$ oc label crd etcdclusters.etcd.database.coreos.com operators.coreos.com/etcd-test=""
$ oc label crd etcdbackups.etcd.database.coreos.com operators.coreos.com/etcd-test=""
$ oc label crd etcdrestores.etcd.database.coreos.com operators.coreos.com/etcd-test=""
- Verify that your Operator has opted in to the new API.
List all operators resources:

$ oc get operators

NAME        AGE
etcd-test   17m
Inspect your Operator’s details and note that the objects you labeled are represented:
$ oc describe operators etcd-test
Name:         etcd-test
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  operators.coreos.com/v2alpha1
Kind:         Operator
Metadata:
  Creation Timestamp:  2020-07-02T05:51:17Z
  Generation:          1
  Resource Version:    37727
  Self Link:           /apis/operators.coreos.com/v2alpha1/operators/etcd-test
  UID:                 6a441a4d-75fe-4224-a611-7b6c83716909
Status:
  Components:
    Label Selector:
      Match Expressions:
        Key:       operators.coreos.com/etcd-test
        Operator:  Exists
    Refs:
      API Version:  apiextensions.k8s.io/v1
      Conditions:
        Last Transition Time:  2020-07-02T05:50:40Z
        Message:               no conflicts found
        Reason:                NoConflicts
        Status:                True
        Type:                  NamesAccepted
        Last Transition Time:  2020-07-02T05:50:41Z
        Message:               the initial names have been accepted
        Reason:                InitialNamesAccepted
        Status:                True
        Type:                  Established
      Kind:                    CustomResourceDefinition
      Name:                    etcdclusters.etcd.database.coreos.com
      ...
      API Version:             operators.coreos.com/v1alpha1
      Conditions:
        Last Transition Time:  2020-07-02T05:50:39Z
        Message:               all available catalogsources are healthy
        Reason:                AllCatalogSourcesHealthy
        Status:                False
        Type:                  CatalogSourcesUnhealthy
      Kind:                    Subscription
      Name:                    etcd
      Namespace:               test-project
      ...
      API Version:             operators.coreos.com/v1alpha1
      Conditions:
        Last Transition Time:  2020-07-02T05:50:43Z
        Last Update Time:      2020-07-02T05:50:43Z
        Status:                True
        Type:                  Installed
      Kind:                    InstallPlan
      Name:                    install-mhzm8
      Namespace:               test-project
      ...
      Kind:                    ClusterServiceVersion
      Name:                    etcdoperator.v0.9.4
      Namespace:               test-project
Events:  <none>
1.2.15.7. Upgrading metering and support for respecting a cluster-wide proxy configuration
You can now upgrade the Metering Operator to 4.5 from versions 4.2 through 4.4. Previously, you had to uninstall your current metering installation and then reinstall the new version of the Metering Operator. For more information, see Upgrading metering.
With this update, support for respecting a cluster-wide proxy configuration is available. Additionally, the upstream repository moved from the operator-framework organization to kube-reporting.
1.2.16. OpenShift Virtualization
1.2.16.1. OpenShift Virtualization support on OpenShift Container Platform 4.5
Red Hat OpenShift Virtualization is supported to run on OpenShift Container Platform 4.5. Previously known as container-native virtualization, OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.