This documentation is for a release that is no longer maintained
See documentation for the latest supported version 3 or the latest supported version 4.
Chapter 3. Upgrading OpenShift
3.1. Overview
When new versions of OpenShift are released, you can upgrade your cluster to apply the latest enhancements and bug fixes. See the OpenShift Enterprise 3.0 Release Notes to review the latest changes.
Unless noted otherwise, nodes and masters within a major version are forward and backward compatible, so upgrading your cluster should go smoothly. However, you should not run mismatched versions any longer than necessary to upgrade the entire cluster.
Starting with OpenShift 3.0.2, if you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the upgrade process. Alternatively, you can upgrade OpenShift manually.
This topic pertains to RPM-based installations only (i.e., the quick and advanced installation methods) and does not currently cover container-based installations.
3.2. Using the Automated Upgrade Playbook
Starting with OpenShift 3.0.2, if you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the upgrade process. This playbook performs the following steps for you:
- Applies the latest configuration by re-running the installation playbook.
- Upgrades and restarts master services.
- Upgrades and restarts node services.
- Applies the latest cluster policies.
- Updates the default router if one exists.
- Updates the default registry if one exists.
- Updates default image streams and InstantApp templates.
The upgrade playbook re-runs cluster configuration steps; therefore, any settings that are not stored in your inventory file are overwritten. The playbook creates a backup of any files that it changes, and you should carefully review the differences after the playbook finishes to ensure that your environment is configured as expected.
Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.
Ensure that you have the latest openshift-ansible code checked out, then run the playbook using the default ansible hosts file located in /etc/ansible/hosts. If your hosts file is located elsewhere, add the -i flag to specify its location:
# cd ~/openshift-ansible
# git pull https://github.com/openshift/openshift-ansible master
# ansible-playbook [-i /path/to/hosts/file] playbooks/adhoc/upgrades/upgrade.yml
After the upgrade playbook finishes, verify that all nodes are marked as Ready and that you are running the expected versions of the docker-registry and router images:
After upgrading, you can use the experimental diagnostics tool to look for common issues:
# openshift ex diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.
3.3. Upgrading Manually
As an alternative to using the automated upgrade playbook, you can manually upgrade your OpenShift cluster. To manually upgrade without disruption, it is important to upgrade each component as documented in this topic. Before you begin your upgrade, familiarize yourself with the entire procedure. Specific releases may require additional steps to be performed at key points during the standard upgrade process.
3.3.1. Upgrading Masters
Upgrade your masters first. On each master host, upgrade the openshift-master package:
# yum upgrade openshift-master
Then, restart the openshift-master service and review its logs to ensure services have been restarted successfully:
# systemctl restart openshift-master
# journalctl -r -u openshift-master
3.3.2. Updating Policy Definitions
After a cluster upgrade, the recommended default cluster roles may have been updated. To check if an update is recommended for your environment, you can run:
# oadm policy reconcile-cluster-roles
This command outputs a list of roles that are out of date and their new proposed values. For example:
Your output will vary based on the OpenShift version and any local customizations you have made. Review the proposed policy carefully.
You can either modify this output to re-apply any local policy changes you have made, or you can automatically apply the new policy by running:
# oadm policy reconcile-cluster-roles --confirm
3.3.3. Upgrading Nodes
After upgrading your masters, you can upgrade your nodes. When the openshift-node service restarts, there is a brief disruption of outbound network connectivity from running pods to services while the service proxy restarts. This disruption should be very short, though its length scales with the number of services in the entire cluster.
On each node host, upgrade all openshift packages:
# yum upgrade openshift\*
Then, restart the openshift-node service:
# systemctl restart openshift-node
As a user with cluster-admin privileges, verify that all nodes are showing as Ready:
# oc get nodes
NAME LABELS STATUS
master.example.com kubernetes.io/hostname=master.example.com Ready,SchedulingDisabled
node1.example.com kubernetes.io/hostname=node1.example.com Ready
node2.example.com kubernetes.io/hostname=node2.example.com Ready
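With many nodes, the readiness check can be scripted. A minimal sketch, where `check_not_ready` is a hypothetical helper (not an OpenShift command) and the sample output piped in is illustrative; on a real cluster you would pipe in `oc get nodes` instead:

```shell
# Print any node whose STATUS column does not begin with "Ready".
check_not_ready() {
  awk 'NR > 1 && $NF !~ /^Ready/ { print $1 }'
}

# Sample input standing in for `oc get nodes`:
printf '%s\n' \
  'NAME                LABELS                                     STATUS' \
  'master.example.com  kubernetes.io/hostname=master.example.com  Ready,SchedulingDisabled' \
  'node1.example.com   kubernetes.io/hostname=node1.example.com   Ready' \
  'node2.example.com   kubernetes.io/hostname=node2.example.com   NotReady' \
  | check_not_ready
```

Only `node2.example.com` is printed here; nodes reporting `Ready,SchedulingDisabled` still count as ready.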
3.3.4. Upgrading the Router
If you have previously deployed a router, the router deployment configuration must be upgraded to apply updates contained in the router image. To upgrade your router without disrupting services, you must have previously deployed a highly-available routing service.
If you are upgrading to OpenShift Enterprise 3.0.1.0 or 3.0.2.0, first see the Additional Manual Instructions per Release section for important steps specific to your upgrade, then continue with the router upgrade as described in this section.
Edit your router’s deployment configuration. For example, if it has the default router name:
# oc edit dc/router
Apply the following changes:
1. Adjust the image version to match the version you are upgrading to.
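The change referenced by the callout is the container image tag in the router's deployment configuration. A minimal sketch of the relevant fragment; the image name and the v3.0.2.0 tag shown are assumptions, so substitute the image and tag for your target release:

```yaml
spec:
  template:
    spec:
      containers:
      - name: router
        # Assumed image name/tag; set the tag to the version you are upgrading to (callout 1)
        image: registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0
```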
You should see one router pod updated and then the next.
3.3.5. Upgrading the Registry
The registry must also be upgraded for changes to take effect in the registry image. If you have used a PersistentVolumeClaim or a host mount point, you may restart the registry without losing its contents. The registry installation topic details how to configure persistent storage.
Edit your registry’s deployment configuration:
# oc edit dc/docker-registry
Apply the following changes:
1. Adjust the image version to match the version you are upgrading to.
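As with the router, the callout refers to the container image tag in the registry's deployment configuration. A sketch of the relevant fragment, with the image name, container name, and tag all assumed; use the values for your target release:

```yaml
spec:
  template:
    spec:
      containers:
      - name: registry
        # Assumed image name/tag; set the tag to the version you are upgrading to (callout 1)
        image: registry.access.redhat.com/openshift3/ose-docker-registry:v3.0.2.0
```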
Images that are being pushed or pulled from the internal registry at the time of upgrade will fail and should be restarted automatically. This will not disrupt pods that are already running.
3.3.6. Updating the Default Image Streams and Templates
By default, the quick installation and advanced installation methods automatically create default image streams, QuickStart templates, and database service templates in the openshift project, which is a default project to which all users have view access. These objects were created during installation from the JSON files located under /usr/share/openshift/examples. Running the latest installer will copy newer files into place, but it does not currently update the openshift project.
You can update the openshift project by running the following commands. It is expected that you will receive warnings about items that already exist.
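The exact commands were not preserved in this copy. As a sketch, assuming the example-file layout under /usr/share/openshift/examples described above, with a hypothetical `update_openshift_project` helper:

```shell
# Sketch: re-create the default image streams and templates in the
# openshift project from the installer-provided JSON files.
# "already exists" warnings for unchanged objects are expected.
update_openshift_project() {
  base=${1:-/usr/share/openshift/examples}
  for f in "$base"/image-streams/*.json \
           "$base"/quickstart-templates/*.json \
           "$base"/db-templates/*.json; do
    [ -e "$f" ] || continue      # skip patterns that matched nothing
    oc create -f "$f" -n openshift
  done
}
# On a master host, run:
#   update_openshift_project
```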
3.3.7. Importing the Latest Images
After updating the default image streams, you may also want to ensure that the images within those streams are updated. For each image stream in the default openshift project, you can run:
# oc import-image -n openshift <imagestream>
For example, get the list of all image streams in the default openshift project:
Update each image stream one at a time:
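The per-stream imports can also be scripted. A sketch, where `import_all_streams` is a hypothetical helper and the `--no-headers` flag is assumed to be supported by your oc client:

```shell
# Import the latest images for every image stream in the openshift
# project, one at a time.
import_all_streams() {
  oc get is -n openshift --no-headers | awk '{ print $1 }' |
  while read -r stream; do
    oc import-image -n openshift "$stream"
  done
}
# Run as a cluster administrator:
#   import_all_streams
```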
To update your S2I-based applications, you must manually trigger a new build of those applications after importing the new images, using oc start-build <app-name>.
3.4. Additional Manual Steps Per Release
Some OpenShift releases may have additional instructions specific to that release that must be performed to fully apply the updates across the cluster. Read through the following sections carefully depending on your upgrade path, as you may be required to perform certain steps at key points during the standard upgrade process described earlier in this topic.
See the OpenShift Enterprise 3.0 Release Notes to review the latest release notes.
3.4.1. OpenShift Enterprise 3.0.1.0
The following steps are required for the OpenShift Enterprise 3.0.1.0 release.
Creating a Service Account for the Router
The default HAProxy router was updated to utilize host ports and requires that a service account be created and made a member of the privileged security context constraint (SCC). Additionally, "down-then-up" rolling upgrades have been added and are now the preferred strategy for upgrading routers.
After upgrading your master and nodes but before updating to the newer router, you must create a service account for the router. As a cluster administrator, ensure you are operating on the default project:
# oc project default
Delete any existing router service account and create a new one:
# oc delete serviceaccount/router
serviceaccounts/router
# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
serviceaccounts/router
Edit the privileged SCC:
# oc edit scc privileged
Apply the following changes:
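The change needed here is to add the new router service account to the privileged SCC's users list. A sketch of the relevant fragment, with the field layout assumed from the v1 SCC schema; keep any entries already present in your SCC:

```yaml
# Append the router service account to the existing users list:
users:
- system:serviceaccount:default:router
```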
Edit your router’s deployment configuration:
# oc edit dc/router
Apply the following changes:
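The deployment configuration change is to run the router pod under the new service account. A sketch of the relevant fragment, with field placement assumed from the v1 pod template spec:

```yaml
spec:
  template:
    spec:
      # Run the router under the service account created above
      serviceAccount: router
```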
Now upgrade your router per the standard router upgrade steps.
3.4.2. OpenShift Enterprise 3.0.2.0
The following steps are required for the OpenShift Enterprise 3.0.2.0 release.
Switching the Router to Use the Host Network Stack
The default HAProxy router was updated to use the host networking stack by default, instead of the former behavior of using the container network stack, which proxied traffic to the router, which in turn proxied it to the target service and container. This new default behavior benefits performance because network traffic from remote clients no longer needs to take multiple hops through user space to reach the target service and container.
Additionally, the new default behavior enables the router to get the actual source IP address of the remote connection. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
Existing router deployments will continue to use the container network stack unless modified to switch to using the host network stack.
To switch the router to use the host network stack, edit your router’s deployment configuration:
# oc edit dc/router
Apply the following changes:
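A sketch of the relevant change, with field placement assumed from the v1 pod template spec:

```yaml
spec:
  template:
    spec:
      # Switch from the container network stack to the host's
      hostNetwork: true
```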
Now upgrade your router per the standard router upgrade steps.
Configuring serviceNetworkCIDR for the SDN
Add the serviceNetworkCIDR parameter to the networkConfig section in /etc/openshift/master/master-config.yaml. This value should match the servicesSubnet value in the kubernetesMasterConfig section:
kubernetesMasterConfig:
servicesSubnet: 172.30.0.0/16
...
networkConfig:
serviceNetworkCIDR: 172.30.0.0/16
Adding the Scheduler Configuration API Version
The scheduler configuration file incorrectly lacked kind and apiVersion fields when deployed using the quick or advanced installation methods. This will affect future upgrades, so it is important to add those values if they do not exist.
Modify the /etc/openshift/master/scheduler.json file to add the kind and apiVersion fields:
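A sketch of the added fields; Policy and v1 are the kind and apiVersion used by the Kubernetes scheduler policy configuration API. The predicates and priorities arrays are shown empty here only for brevity, so keep your file's existing entries unchanged:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [],
  "priorities": []
}
```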