Chapter 3. Restoring OpenShift Container Platform components
3.1. Overview
In OpenShift Container Platform, you can restore your cluster and its components by recreating cluster elements, including nodes and applications, from separate storage.
To restore a cluster, you must first back it up.
The following process describes a generic way of restoring applications and the OpenShift Container Platform cluster. It cannot take into account custom requirements. You might need to take additional actions to restore your cluster.
3.2. Restoring a cluster
To restore a cluster, first reinstall OpenShift Container Platform.
Procedure
- Reinstall OpenShift Container Platform in the same way that you originally installed it.
- Run all of your custom post-installation steps, such as changing services outside of the control of OpenShift Container Platform or installing extra services like monitoring agents.
3.3. Restoring a master host backup
After creating a backup of important master host files, if they become corrupted or accidentally removed, you can restore them by copying the files back to the master host, ensuring they contain the proper content, and restarting the affected services.
Procedure
- Restore the /etc/origin/master/master-config.yaml file:

    # MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    # cp /etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml.old
    # cp /backup/$(hostname)/$(date +%Y%m%d)/origin/master/master-config.yaml /etc/origin/master/master-config.yaml
    # master-restart api
    # master-restart controllers

  Warning: Restarting the master services can lead to downtime. However, you can remove the master host from the highly available load balancer pool, then perform the restore operation. Once the service has been properly restored, you can add the master host back to the load balancer pool.

  Note: Perform a full reboot of the affected instance to restore the iptables configuration.
- If you cannot restart OpenShift Container Platform because packages are missing, reinstall the packages.

  - Get the list of the currently installed packages:

      $ rpm -qa | sort > /tmp/current_packages.txt

  - View the differences between the package lists:

      $ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
      > ansible-2.4.0.0-5.el7.noarch

  - Reinstall the missing packages:

      # yum reinstall -y <packages>

    Replace <packages> with the packages that are different between the package lists.
- Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and running update-ca-trust:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo cp ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/<certificate> /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust

  Replace <certificate> with the file name of the system certificate to restore.

  Note: Always ensure that the user ID and group ID, as well as the SELinux context, are restored when the files are copied back.
3.4. Restoring a node host backup
After creating a backup of important node host files, if they become corrupted or accidentally removed, you can restore the file by copying it back, ensuring it contains the proper content, and restarting the affected services.
Procedure
- Restore the /etc/origin/node/node-config.yaml file:

    # MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    # cp /etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml.old
    # cp /backup/$(hostname)/$(date +%Y%m%d)/etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml
    # reboot

  Warning: Restarting the services can lead to downtime. See Node maintenance for tips on how to ease the process.

  Note: Perform a full reboot of the affected instance to restore the iptables configuration.
- If you cannot restart OpenShift Container Platform because packages are missing, reinstall the packages.

  - Get the list of the currently installed packages:

      $ rpm -qa | sort > /tmp/current_packages.txt

  - View the differences between the package lists:

      $ diff /tmp/current_packages.txt ${MYBACKUPDIR}/packages.txt
      > ansible-2.4.0.0-5.el7.noarch

  - Reinstall the missing packages:

      # yum reinstall -y <packages>

    Replace <packages> with the packages that are different between the package lists.
- Restore a system certificate by copying the certificate to the /etc/pki/ca-trust/source/anchors/ directory and running update-ca-trust:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo cp ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/<certificate> /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust

  Replace <certificate> with the file name of the system certificate to restore.

  Note: Always ensure that the user ID and group ID, as well as the SELinux context, are restored when the files are copied back.
3.5. Restoring etcd
3.5.1. Restoring the etcd configuration file
If an etcd host has become corrupted and the /etc/etcd/etcd.conf file is lost, restore it by using the following procedure:
- Access your etcd host:

    $ ssh master-0

  Replace master-0 with the name of your etcd host.
- Copy the backup etcd.conf file to /etc/etcd/:

    # cp /backup/etcd-config-<timestamp>/etcd/etcd.conf /etc/etcd/etcd.conf
- Set the required permissions and SELinux context on the file:

    # restorecon -RvF /etc/etcd/etcd.conf
In this example, the backup file is stored at /backup/etcd-config-<timestamp>/etcd/etcd.conf. The backup location can also be an external NFS share, an S3 bucket, or another storage solution.
After the etcd configuration file is restored, you must restart the static pod. This is done after you restore the etcd data.
3.5.2. Restoring etcd data
Before restoring etcd on a static pod:
- The etcdctl binary must be available or, in containerized installations, the rhel7/etcd container must be available.

  You can install the etcdctl binary with the etcd package by running the following command:

    # yum install etcd

  The package also installs the systemd service. Disable and mask the service so that it does not run as a systemd service when etcd runs in a static pod. By disabling and masking the service, you ensure that you do not accidentally start it and prevent it from automatically restarting when you reboot the system.

    # systemctl disable etcd.service
    # systemctl mask etcd.service
To restore etcd on a static pod:
- If the pod is running, stop the etcd pod by moving the pod manifest YAML file to another directory:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/etcd.yaml /etc/origin/node/pods-stopped
- Move all old data:

    # mv /var/lib/etcd /var/lib/etcd.old

  You use etcdctl to recreate the data on the node where you restore the pod.
- Restore the etcd snapshot to the mount path for the etcd pod:

    # export ETCDCTL_API=3

  Run the etcdctl snapshot restore command against your backup, as in the sketch that follows this step. Obtain the appropriate values for your cluster from your backup etcd.conf file.
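The restore command itself is not reproduced here. As a non-authoritative sketch, assuming the snapshot file is at /backup/etcd/snapshot.db and using placeholder member name, peer URL, and cluster token values that you must replace with the values from your backup etcd.conf, it might look like:

    # etcdctl snapshot restore /backup/etcd/snapshot.db \
        --data-dir /var/lib/etcd \
        --name master-0.example.com \
        --initial-cluster "master-0.example.com=https://192.168.55.8:2380" \
        --initial-cluster-token "etcd-cluster-1" \
        --initial-advertise-peer-urls https://192.168.55.8:2380

This writes the restored data into /var/lib/etcd, which the next step relabels before the pod is restarted.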
- Set the required permissions and SELinux context on the data directory:

    # restorecon -RvF /var/lib/etcd/
- Restart the etcd pod by moving the pod manifest YAML file to the required directory:

    # mv /etc/origin/node/pods-stopped/etcd.yaml /etc/origin/node/pods/
3.6. Adding an etcd node
After you restore etcd, you can add more etcd nodes to the cluster. You can add an etcd host either by using an Ansible playbook or by following manual steps.
3.6.1. Adding a new etcd host using Ansible
Procedure
- In the Ansible inventory file, create a new group named [new_etcd] and add the new host. Then, add the new_etcd group as a child of the [OSEv3] group, as in the sketch that follows this step.

  Note: Replace the old etcd host entry with the new etcd host entry in the inventory file. While replacing the older etcd host, you must create a copy of the /etc/etcd/ca/ directory. Alternatively, you can redeploy the etcd CA and certificates before scaling up the etcd hosts.
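The inventory snippet itself is not reproduced here. A minimal sketch, assuming three existing masters that also run etcd and a new host named etcd0.example.com, might look like this:

    [OSEv3:children]
    masters
    nodes
    etcd
    new_etcd

    [etcd]
    master-0.example.com
    master-1.example.com
    master-2.example.com

    [new_etcd]
    etcd0.example.com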
- From the host that installed OpenShift Container Platform and hosts the Ansible inventory file, change to the playbook directory and run the etcd scaleup playbook:

    $ cd /usr/share/ansible/openshift-ansible
    $ ansible-playbook playbooks/openshift-etcd/scaleup.yml
- After the playbook runs, modify the inventory file to reflect the current status by moving the new etcd host from the [new_etcd] group to the [etcd] group, as in the sketch that follows this step.
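Continuing the hypothetical inventory above, after the scaleup the same hosts would be grouped as follows, with the [new_etcd] group left empty:

    [OSEv3:children]
    masters
    nodes
    etcd
    new_etcd

    [etcd]
    master-0.example.com
    master-1.example.com
    master-2.example.com
    etcd0.example.com

    [new_etcd]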
- If you use Flannel, modify the flanneld service configuration on every OpenShift Container Platform host, located at /etc/sysconfig/flanneld, to include the new etcd host:

    FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
- Restart the flanneld service:

    # systemctl restart flanneld.service
3.6.2. Manually adding a new etcd host
If you do not run etcd as static pods on master nodes, you might need to add another etcd host.
Procedure
Modify the current etcd cluster
To create the etcd certificates, run the openssl command, replacing the values with those from your environment.
- Create some environment variables, as in the sketch that follows this step.

  Note: The custom openssl extensions used as etcd_v3_ca_* include the $SAN environment variable as subjectAltName. See /etc/etcd/ca/openssl.cnf for more information.
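The variable definitions are not reproduced here. A plausible sketch, assuming the new host etcd0.example.com with an example IP address of 10.3.9.222, the existing etcd CA configuration under /etc/etcd/ca/, and a helper variable OPENSSLCFG introduced here only for convenience:

    # export NEW_ETCD_HOSTNAME="etcd0.example.com"
    # export NEW_ETCD_IP="10.3.9.222"
    # export CN=${NEW_ETCD_HOSTNAME}
    # export SAN="IP:${NEW_ETCD_IP}, DNS:${NEW_ETCD_HOSTNAME}"
    # export PREFIX="/etc/etcd/generated_certs/etcd-${CN}/"
    # export OPENSSLCFG="/etc/etcd/ca/openssl.cnf"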
- Create the directory to store the configuration and certificates:

    # mkdir -p ${PREFIX}
- Create the server certificate request and sign it to produce server.csr and server.crt, as in the sketch that follows this step.
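The exact invocation is not reproduced here. Assuming the etcd CA configuration referenced by ${OPENSSLCFG} defines a CA section named etcd_ca, a request section named etcd_v3_req, and a server extensions section named etcd_v3_ca_server (all assumptions based on the etcd_v3_ca_* naming mentioned in the note above), a sketch looks like:

    # openssl req -new -config ${OPENSSLCFG} \
        -keyout ${PREFIX}server.key \
        -out ${PREFIX}server.csr \
        -reqexts etcd_v3_req -batch -nodes \
        -subj /CN=${CN}

    # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
        -out ${PREFIX}server.crt \
        -in ${PREFIX}server.csr \
        -extensions etcd_v3_ca_server -batch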
- Create the peer certificate request and sign it to produce peer.csr and peer.crt, as in the sketch that follows this step.
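Under the same assumptions as the server certificate sketch, with a peer extensions section assumed to be named etcd_v3_ca_peer:

    # openssl req -new -config ${OPENSSLCFG} \
        -keyout ${PREFIX}peer.key \
        -out ${PREFIX}peer.csr \
        -reqexts etcd_v3_req -batch -nodes \
        -subj /CN=${CN}

    # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
        -out ${PREFIX}peer.crt \
        -in ${PREFIX}peer.csr \
        -extensions etcd_v3_ca_peer -batch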
- Copy the current etcd configuration and ca.crt files from the current node as examples to modify later:

    # cp /etc/etcd/etcd.conf ${PREFIX}
    # cp /etc/etcd/ca.crt ${PREFIX}
- While still on the surviving etcd host, add the new host to the cluster. To add additional etcd members to the cluster, you must first adjust the default localhost peer in the peerURLs value for the first member:

  - Get the member ID for the first member using the member list command:

      # etcdctl --cert-file=/etc/etcd/peer.crt \
          --key-file=/etc/etcd/peer.key \
          --ca-file=/etc/etcd/ca.crt \
          --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \
          member list

    Ensure that you specify the URLs of only active etcd members in the --peers parameter value.
  - Obtain the IP address where etcd listens for cluster peers:

      $ ss -l4n | grep 2380
  - Update the value of peerURLs using the etcdctl member update command by passing the member ID and IP address obtained from the previous steps:

      # etcdctl --cert-file=/etc/etcd/peer.crt \
          --key-file=/etc/etcd/peer.key \
          --ca-file=/etc/etcd/ca.crt \
          --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \
          member update 511b7fb6cc0001 https://172.18.1.18:2380
  - Re-run the member list command and ensure the peer URLs no longer include localhost.
- Add the new host to the etcd cluster. Note that the new host is not yet configured, so the status stays as unstarted until you configure the new host. See the member add sketch that follows this warning.

  Warning: You must add each member and bring it online one at a time. When you add each additional member to the cluster, you must adjust the peerURLs list for the current peers. The peerURLs list grows by one for each member added. The etcdctl member add command outputs the values that you must set in the etcd.conf file as you add each member, as described in the following instructions.

  In the member add command, 10.3.9.222 is a label for the etcd member. You can specify the host name, IP address, or a simple name.
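The member add command is not reproduced here. A sketch, reusing the certificate flags and peer URLs from the earlier member list example and the example label 10.3.9.222 with a matching placeholder peer URL:

    # etcdctl --cert-file=/etc/etcd/peer.crt \
        --key-file=/etc/etcd/peer.key \
        --ca-file=/etc/etcd/ca.crt \
        --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \
        member add 10.3.9.222 https://10.3.9.222:2380

The command prints the ETCD_NAME, ETCD_INITIAL_CLUSTER, and ETCD_INITIAL_CLUSTER_STATE values that you use in the next step.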
 
- Update the sample ${PREFIX}/etcd.conf file (see the illustrative sketch after this list):

  - Replace the following values with the values generated in the previous step:

    - ETCD_NAME
    - ETCD_INITIAL_CLUSTER
    - ETCD_INITIAL_CLUSTER_STATE
 
  - Modify the following variables with the new host IP from the output of the previous step. You can use ${NEW_ETCD_IP} as the value.

    - ETCD_LISTEN_PEER_URLS
    - ETCD_LISTEN_CLIENT_URLS
    - ETCD_INITIAL_ADVERTISE_PEER_URLS
    - ETCD_ADVERTISE_CLIENT_URLS
  - If you previously used the member system as an etcd node, you must overwrite the current values in the /etc/etcd/etcd.conf file.

  - Check the file for syntax errors or missing IP addresses, otherwise the etcd service might fail:

      # vi ${PREFIX}/etcd.conf
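For illustration only, after these edits the affected lines of ${PREFIX}/etcd.conf might resemble the following. Every name, IP address, and URL here is a placeholder: the first three values come from the member add output, and 10.3.9.222 stands for the new host IP (${NEW_ETCD_IP}).

    ETCD_NAME="10.3.9.222"
    ETCD_INITIAL_CLUSTER="master-0.example.com=https://172.18.1.18:2380,10.3.9.222=https://10.3.9.222:2380"
    ETCD_INITIAL_CLUSTER_STATE="existing"
    ETCD_LISTEN_PEER_URLS="https://10.3.9.222:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.3.9.222:2379"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.3.9.222:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.3.9.222:2379"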
 
- On the node that hosts the installation files, update the [etcd] hosts group in the /etc/ansible/hosts inventory file. Remove the old etcd hosts and add the new ones.
- Create a tgz file that contains the certificates, the sample configuration file, and the CA, and copy it to the new host:

    # tar -czvf /etc/etcd/generated_certs/${CN}.tgz -C ${PREFIX} .
    # scp /etc/etcd/generated_certs/${CN}.tgz ${CN}:/tmp/
Modify the new etcd host
- Install iptables-services to provide iptables utilities to open the required ports for etcd:

    # yum install -y iptables-services
- Create the OS_FIREWALL_ALLOW firewall rules to allow etcd to communicate (see the sketch after this step):

  - Port 2379/tcp for clients
  - Port 2380/tcp for peer communication

  Note: In this example, a new chain OS_FIREWALL_ALLOW is created, which is the standard chain name that the OpenShift Container Platform installer uses for firewall rules.

  Warning: If the environment is hosted in an IaaS environment, modify the security groups for the instance to allow incoming traffic to those ports as well.
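The rules themselves are not reproduced here. A sketch of one way to open the two ports in an OS_FIREWALL_ALLOW chain with iptables-services, assuming rules are persisted to /etc/sysconfig/iptables:

    # systemctl enable iptables.service --now
    # iptables -N OS_FIREWALL_ALLOW
    # iptables -t filter -I INPUT -j OS_FIREWALL_ALLOW
    # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
    # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
    # iptables-save | tee /etc/sysconfig/iptables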
 
- Install etcd:

    # yum install -y etcd

  Ensure version etcd-2.3.7-4.el7.x86_64 or greater is installed.
- Ensure the etcd service is not running by removing the etcd pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
- Remove any etcd configuration and data:

    # rm -Rf /etc/etcd/*
    # rm -Rf /var/lib/etcd/*
- Extract the certificates and configuration files:

    # tar xzvf /tmp/etcd0.example.com.tgz -C /etc/etcd/
- Start etcd on the new host:

    # systemctl enable etcd --now
- Verify that the host is part of the cluster and check the current cluster health:

  - If you use the v2 etcd api, run the cluster-health command, as in the sketch that follows.
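A sketch of a v2 health check, reusing the certificate flags from the earlier examples and the endpoint list used for Flannel in this chapter:

    # etcdctl --cert-file=/etc/etcd/peer.crt \
        --key-file=/etc/etcd/peer.key \
        --ca-file=/etc/etcd/ca.crt \
        --peers="https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379" \
        cluster-health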
  - If you use the v3 etcd api, run the endpoint health command, as in the sketch that follows.
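A sketch of the equivalent v3 health check:

    # ETCDCTL_API=3 etcdctl --cert=/etc/etcd/peer.crt \
        --key=/etc/etcd/peer.key \
        --cacert=/etc/etcd/ca.crt \
        --endpoints="https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379" \
        endpoint health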
 
Modify each OpenShift Container Platform master
- Modify the master configuration in the etcdClientInfo section of the /etc/origin/master/master-config.yaml file on every master. Add the new etcd host to the list of the etcd servers OpenShift Container Platform uses to store the data, and remove any failed etcd hosts. A sketch of the section follows this step.
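The YAML itself is not reproduced here. A sketch of the etcdClientInfo section, where the certificate file names are assumptions and the urls list is the same endpoint set used elsewhere in this chapter:

    etcdClientInfo:
      ca: master.etcd-ca.crt
      certFile: master.etcd-client.crt
      keyFile: master.etcd-client.key
      urls:
        - https://master-0.example.com:2379
        - https://master-1.example.com:2379
        - https://master-2.example.com:2379
        - https://etcd0.example.com:2379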
- Restart the master API service on every master:

    # master-restart api
    # master-restart controllers

  Warning: The number of etcd nodes must be odd, so you must add at least two hosts.
 
- If you use Flannel, modify the flanneld service configuration located at /etc/sysconfig/flanneld on every OpenShift Container Platform host to include the new etcd host:

    FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
- Restart the flanneld service:

    # systemctl restart flanneld.service
3.7. Bringing OpenShift Container Platform services back online
After you finish your changes, bring OpenShift Container Platform back online.
Procedure
- On each OpenShift Container Platform master, restore your master and node configuration from backup and enable and restart all relevant services, as in the sketch that follows this step.
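The commands are not reproduced here. A sketch, assuming the backup layout used earlier in this chapter (${MYBACKUPDIR}); the exact source paths depend on how your backup was created:

    # MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    # cp ${MYBACKUPDIR}/origin/master/master-config.yaml /etc/origin/master/master-config.yaml
    # cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml
    # master-restart api
    # master-restart controllers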
- On each OpenShift Container Platform node, update the node configuration maps as needed, and enable and restart the atomic-openshift-node service:

    # cp /etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
    # systemctl enable atomic-openshift-node
    # systemctl start atomic-openshift-node
3.8. Restoring a project
To restore a project, create the new project, then restore any exported files by running oc create -f <file_name>.
Procedure
- Create the project:

    $ oc new-project <project_name>

  The <project_name> value must match the name of the project that was backed up.
- Import the project objects:

    $ oc create -f project.yaml
- Import any other resources that you exported when backing up the project, such as role bindings, secrets, service accounts, and persistent volume claims:

    $ oc create -f <object>.yaml

  Some resources might fail to import if they require another object to exist. If this occurs, review the error message to identify which resources must be imported first.
Some resources, such as pods and default service accounts, can fail to be created.
3.9. Restoring application data
You can restore application data by using the oc rsync command, assuming rsync is installed within the container image. The Red Hat rhel7 base image contains rsync. Therefore, all images that are based on rhel7 contain it as well. See Troubleshooting and Debugging CLI Operations - rsync.
This is a generic restoration of application data and does not take into account application-specific backup procedures, for example, special export and import procedures for database systems.
Other means of restoration might exist depending on the type of the persistent volume you use, for example, Cinder, NFS, or Gluster.
Procedure
Example of restoring a Jenkins deployment’s application data
- Verify the backup:

    $ ls -la /tmp/jenkins-backup/
    total 8
    drwxrwxr-x.  3 user user   20 Sep  6 11:14 .
    drwxrwxrwt. 17 root root 4096 Sep  6 11:16 ..
    drwxrwsrwx. 12 user user 4096 Sep  6 11:14 jenkins
- Use the oc rsync tool to copy the data into the running pod:

    $ oc rsync /tmp/jenkins-backup/jenkins jenkins-1-37nux:/var/lib

  Note: Depending on the application, you may be required to restart the application.
- Optionally, restart the application with new data:

    $ oc delete pod jenkins-1-37nux

  Alternatively, you can scale down the deployment to 0, and then up again:

    $ oc scale --replicas=0 dc/jenkins
    $ oc scale --replicas=1 dc/jenkins
3.10. Restoring Persistent Volume Claims
This topic describes two methods for restoring data. The first involves deleting the file, then placing the file back in the expected location. The second example shows migrating persistent volume claims. The migration would occur in the event that the storage needs to be moved or in a disaster scenario when the backend storage no longer exists.

Consult the restore procedures for the specific application for any steps required to restore its data.
3.10.1. Restoring files to an existing PVC
Procedure
- Delete the file on the pod (see the sketch that follows this step).
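The deletion command is not reproduced here. A sketch, using the example pod demo-2-fxx6d and the example file ocp_sop.txt that appear later in this procedure:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ rm /opt/app-root/src/uploaded/ocp_sop.txt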
- Replace the file from the server that contains the rsync backup of the files that were in the PVC:

    $ oc rsync uploaded demo-2-fxx6d:/opt/app-root/src/
- Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt
3.10.2. Restoring data to a new PVC
The following steps assume that a new PVC has been created.
Procedure
- Overwrite the currently defined claim-name:

    $ oc set volume dc/demo --add --name=persistent-volume \
        --type=persistentVolumeClaim --claim-name=filestore \
        --mount-path=/opt/app-root/src/uploaded --overwrite
- Validate that the pod is using the new PVC (see the sketch that follows this step).
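For example, you might list the volumes on the deployment configuration to confirm that the persistent-volume volume now references the filestore claim, and then find the name of the redeployed pod:

    $ oc set volume dc/demo
    $ oc get pods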
- Now that the deployment configuration uses the new PVC, run oc rsync to place the files onto the new PVC, as in the sketch that follows this step.
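A sketch, reusing the uploaded backup directory from the previous procedure and the example pod name demo-3-2b8gs shown in the next step:

    $ oc rsync uploaded demo-3-2b8gs:/opt/app-root/src/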
- Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-3-2b8gs
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt