Chapter 9. Backup and Restore for Ansible Automation Platform from GCP Marketplace
- You must restore with the same operational image version as the backup.
- You can only restore into a new VPC network. Backup and restore into an existing VPC network is not currently supported.
- Do not delete the backed up environment before a restore. The database backups are stored in the deployment, so deleting your deployment deletes the database backups.
- Versioned or point-in-time backups are not supported by the ansible-on-clouds-ops container. Only the most recent backup of a deployment is used to restore the deployment.
To back up and restore your Ansible Automation Platform deployment, you must record your existing Ansible Automation Platform administration secret name and value somewhere safe.
It is also important to take regular manual backups of the Cloud SQL database instance and filestore backups, to ensure a deployment can be restored as close as possible to its previous working state.
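As an illustration, such manual backups can be taken with gcloud. The instance, share, zone, and region names below are placeholders rather than values from your deployment, so this sketch only composes and prints the commands:

```shell
# Placeholder names; substitute the instances from your own project.
SQL_INSTANCE=aap-db-instance
FILESTORE_INSTANCE=aap-filestore
ZONE=us-east1-b
REGION=us-east1

# On-demand Cloud SQL backup command:
SQL_CMD="gcloud sql backups create --instance=${SQL_INSTANCE}"

# Filestore backup command (backup and file-share names are placeholders too):
FS_CMD="gcloud filestore backups create aap-fs-backup \
  --instance=${FILESTORE_INSTANCE} --instance-zone=${ZONE} \
  --file-share=share1 --region=${REGION}"

# Printed here for reference; run them once the names are real.
echo "$SQL_CMD"
echo "$FS_CMD"
```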
The backup and restore playbooks provide backup and restore support for the Ansible Automation Platform from GCP Marketplace foundation deployment.
The restore process deploys a new Ansible Automation Platform, with the filestore and SQL database instance restored to the specified backup.
9.1. The Backup Process
A backup saves your environment's database and shared file system. During a restore, a new environment is created using the saved shared file system. When the new environment is in place, the process restores the database.
The backup and restore process must use the same version. If your backup was done with an earlier version, you must use the restore process of that version. Then, if required, you can run an upgrade.
You must also make a backup before an upgrade. For further information, see Upgrading your deployment.
The backup process involves taking a backup of the Cloud SQL database and filestore instances at a given point in time. The backup playbook requires an active Ansible Automation Platform from GCP Marketplace foundation deployment to be running.
A bucket must be created in your project, because restore information is stored in that bucket.
Only one backup per version is kept in the bucket. If you need to retain multiple backup versions, you must create a new bucket for each version of the backup.
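As a sketch, such a bucket can be created with gsutil. The bucket name and location below are hypothetical, so the command is composed and printed rather than executed:

```shell
# Hypothetical bucket name and location; use your own values.
BUCKET=aap-backup-bucket
LOCATION=us-east1
MB_CMD="gsutil mb -l ${LOCATION} gs://${BUCKET}"
echo "$MB_CMD"
```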
The following procedures describe how to backup the Ansible Automation Platform from GCP Marketplace deployment.
9.1.1. Pulling the ansible-on-clouds-ops 2.3 container image
Procedure
Pull the docker image for the ansible-on-clouds-ops 2.3 container with the same tag as the foundation deployment.
Note: Before pulling the docker image, make sure you are logged in to registry.redhat.io using docker. Use the following command to log in to registry.redhat.io.
$ docker login registry.redhat.io
For more information about registry login, see Registry Authentication.
$ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-rhel8:2.3.20230221
$ docker pull $IMAGE --platform=linux/amd64
For EMEA regions (Europe, Middle East, Africa) run the following command instead:
$ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-emea-rhel8:2.3.20230221
$ docker pull $IMAGE --platform=linux/amd64
9.1.2. Setting up the environment
Procedure
Create a folder to hold the configuration files.
$ mkdir command_generator_data
9.1.3. Creating the backup data file
Procedure
Populate the command_generator_data directory with the configuration file template:
$ docker run --rm -v $(pwd)/command_generator_data/:/data $IMAGE \
    command_generator_vars gcp_backup_deployment --output-data-file /data/backup.yml
This produces the following output:
===============================================
Playbook: gcp_backup_deployment
Description: This playbook is used to backup the Ansible Automation Platform from GCP Marketplace environment.
-----------------------------------------------
This playbook is used to backup the Ansible Automation Platform from GCP Marketplace environment.
For more information regarding backup and restore, visit our official documentation -
https://access.redhat.com/documentation/en-us/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_from_gcp_marketplace_guide/assembly-gcp-backup-and-restore
-----------------------------------------------
After running the command, a command_generator_data/backup.yml template file is created. This template file resembles the following:
gcp_backup_deployment:
  cloud_credentials_path:
  deployment_name:
  extra_vars:
    gcp_bucket_backup_name:
    gcp_compute_region:
    gcp_compute_zone:
9.1.4. Parameters in the backup.yml file
You must populate the data file before triggering the backup. The following parameters are listed in the data file:

- cloud_credentials_path is the path for your Google Cloud service account credentials file. This must be an absolute path.
- gcp_deployment_name is the name of the AAP deployment manager deployment you want to back up.
- gcp_bucket_backup_name is the bucket that was previously created for the backup. Only the most recent backup is stored in the bucket; every subsequent backup to the same bucket overwrites the backup files with the latest backup.
- gcp_compute_region is the GCP region where the foundation deployment is deployed. This can be retrieved by checking the Deployments config in Deployment Manager.
- gcp_compute_zone is the GCP zone where the foundation deployment is deployed. This can be retrieved by checking the Deployments config in Deployment Manager.
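As an illustration only, a populated backup.yml might resemble the following; every value here is a hypothetical example, not a value from your project:

```yaml
gcp_backup_deployment:
  cloud_credentials_path: /home/user/gcp-service-account.json   # absolute path
  deployment_name: aap-deployment
  extra_vars:
    gcp_bucket_backup_name: aap-backup-bucket
    gcp_compute_region: us-east1
    gcp_compute_zone: us-east1-b
```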
9.1.5. Running the backup playbook
Procedure
To run the backup, use the command generator to generate the backup command:
$ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE \
    command_generator gcp_backup_deployment --data-file /data/backup.yml
This results in the following output:
-----------------------------------------------
Command to run playbook:
docker run --rm --env PLATFORM=GCP -v </path/to/gcp/service-account.json>:/home/runner/.gcp/credentials:ro \
--env ANSIBLE_CONFIG=../gcp-ansible.cfg $IMAGE redhat.ansible_on_clouds.gcp_backup_deployment \
-e 'gcp_service_account_credentials_json_path=/home/runner/.gcp/credentials \
gcp_deployment_name=<deployment_name> gcp_compute_region=<region> \
gcp_compute_zone=<zone> gcp_bucket_backup_name=<bucket>'
Run the supplied backup command to trigger the backup.
$ docker run --rm --env PLATFORM=GCP -v </path/to/gcp/service-account.json>:/home/runner/.gcp/credentials:ro \
--env ANSIBLE_CONFIG=../gcp-ansible.cfg $IMAGE redhat.ansible_on_clouds.gcp_backup_deployment \
-e 'gcp_service_account_credentials_json_path=/home/runner/.gcp/credentials \
gcp_deployment_name=<deployment_name> gcp_compute_region=<region> \
gcp_compute_zone=<zone> gcp_bucket_backup_name=<bucket>'
When the playbook has finished running, the output resembles the following:
TASK [redhat.ansible_on_clouds.standalone_gcp_backup : [backup_deployment] Print vars required for restore process] ***
ok: [localhost] =>
  msg:
  - AAP on GCP Backup successful. Please note below the bucket name which is required for restore process.
  - <bucket_name>

PLAY RECAP **********************************************************************************************
localhost : ok=33 changed=6 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
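After the playbook reports success, you can confirm that backup objects are present in the bucket with gsutil; the bucket name below is a placeholder, so the command is composed and printed rather than executed:

```shell
BUCKET=aap-backup-bucket   # placeholder; use the bucket from backup.yml
LS_CMD="gsutil ls -r gs://${BUCKET}"
echo "$LS_CMD"
```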
9.2. Restore Process
The restore process deploys a new deployment, and restores the filestore and SQL database instance to the specified backup.
- You must restore with the same operational image version that was used for the backup.
- Backup and restore into an existing VPC network is not currently supported. You can only restore into a new VPC network.
- Do not delete the backed up environment before a restore. The database backups are stored in the deployment, so deleting your deployment deletes the database backups.
- Versioned or point-in-time backups are not supported by the ansible-on-clouds-ops container. Only the most recent backup of a deployment is used to restore the deployment.
The following procedures describe how to restore the Ansible Automation Platform from GCP Marketplace deployment.
9.2.1. Pulling the ansible-on-clouds-ops 2.3 container image
Procedure
Pull the docker image for the ansible-on-clouds-ops 2.3 container with the same tag as the foundation deployment.
Note: Before pulling the docker image, make sure you are logged in to registry.redhat.io using docker. Use the following command to log in to registry.redhat.io.
$ docker login registry.redhat.io
For more information about registry login, see Registry Authentication.
$ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-rhel8:2.3.20230221
$ docker pull $IMAGE --platform=linux/amd64
For EMEA regions (Europe, Middle East, Africa) run the following command instead:
$ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-emea-rhel8:2.3.20230221
$ docker pull $IMAGE --platform=linux/amd64
9.2.2. Setting up the environment
Procedure
Create a folder to hold the configuration files.
$ mkdir command_generator_data
9.2.3. Generating the restore.yml file
Procedure
Run the command generator command_generator_vars to generate restore.yml:
$ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE \
    command_generator_vars gcp_restore_deployment --output-data-file /data/restore.yml
This provides the following output:
===============================================
Playbook: gcp_restore_deployment
Description: This playbook is used to restore the Ansible Automation Platform from GCP Marketplace environment from a backup.
-----------------------------------------------
This playbook is used to restore the Ansible Automation Platform from GCP Marketplace environment from a backup.
For more information regarding backup and restore, visit our official documentation -
https://access.redhat.com/documentation/en-us/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_from_gcp_marketplace_guide/assembly-gcp-backup-and-restore
-----------------------------------------------
Command generator template:
docker run --rm -v <local_data_file_directory>:/data $IMAGE command_generator gcp_restore_deployment --data-file /data/restore.yml
The template resembles the following:
gcp_restore_deployment:
  cloud_credentials_path:
  deployment_name:
  extra_vars:
    gcp_bucket_backup_name:
    gcp_cloud_sql_peering_network:
    gcp_compute_region:
    gcp_compute_zone:
    gcp_controller_internal_ip_address:
    gcp_existing_vpc:
    gcp_filestore_ip_range:
    gcp_hub_internal_ip_address:
    gcp_restored_deployment_name:
9.2.4. Parameters of the restore.yml file
You can only restore into a new VPC network.
For a new VPC
If you want to restore using a new VPC, set the following parameter:

- gcp_existing_vpc must be set to false.

The following parameters must be removed:

- gcp_filestore_ip_range
- gcp_cloud_sql_peering_network
- gcp_controller_internal_ip_address
- gcp_hub_internal_ip_address

Provide values for the following parameters:

- cloud_credentials_path is the path for your Google Cloud service account credentials file.
- gcp_deployment_name is the name of the AAP deployment manager deployment that was backed up.
- gcp_restored_deployment_name is the name under which the deployment must be restored. A new deployment is created with this name. A deployment must not already exist with this name.
- gcp_bucket_backup_name is the bucket name you used for the backup.
- gcp_compute_region is the region where the backup was taken. This can be retrieved by checking the Deployments config in Deployment Manager.
- gcp_compute_zone is the zone where the backup was taken. This can be retrieved by checking the Deployments config in Deployment Manager.
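For a new-VPC restore, a populated restore.yml might therefore resemble the following, with the four removed parameters deleted; every value here is a hypothetical example, not a value from your project:

```yaml
gcp_restore_deployment:
  cloud_credentials_path: /home/user/gcp-service-account.json
  deployment_name: aap-deployment
  extra_vars:
    gcp_bucket_backup_name: aap-backup-bucket
    gcp_compute_region: us-east1
    gcp_compute_zone: us-east1-b
    gcp_existing_vpc: false
    gcp_restored_deployment_name: aap-deployment-restored
```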
9.2.5. Running the restore command
When restore.yml is populated, you can use the command generator to create the restore command.
Procedure
Run the command generator.
Note: <local_data_file_directory> must be set to the directory that contains restore.yml; the command below assumes it was generated in $(pwd)/command_generator_data.
$ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE \
    command_generator gcp_restore_deployment --data-file /data/restore.yml
This generates a new command containing all the required volumes, environment variables, and parameters.
The generated command resembles the following:
docker run --rm --env PLATFORM=GCP -v <local_credential_file>:/home/runner/.gcp/credentials:ro \
--env ANSIBLE_CONFIG=../gcp-ansible.cfg $IMAGE redhat.ansible_on_clouds.gcp_restore_deployment \
-e 'gcp_service_account_credentials_json_path=/home/runner/.gcp/credentials \
gcp_deployment_name=<former_deployment_name> gcp_restored_deployment_name=<new_deployment_name> \
gcp_compute_region=<region> gcp_compute_zone=<zone> gcp_bucket_backup_name=<bucket> gcp_existing_vpc=False'
Run the generated command.
$ docker run --rm --env PLATFORM=GCP -v <local_credential_file>:/home/runner/.gcp/credentials:ro \
--env ANSIBLE_CONFIG=../gcp-ansible.cfg $IMAGE redhat.ansible_on_clouds.gcp_restore_deployment \
-e 'gcp_service_account_credentials_json_path=/home/runner/.gcp/credentials \
gcp_deployment_name=<former_deployment_name> gcp_restored_deployment_name=<new_deployment_name> \
gcp_compute_region=<region> gcp_compute_zone=<zone> gcp_bucket_backup_name=<bucket> gcp_existing_vpc=False'
When the playbook has completed, the output resembles the following:
TASK [redhat.ansible_on_clouds.standalone_gcp_restore : Display internal IP addresses] ***
ok: [localhost] =>
  msg:
  - 'Hub internal IP: 192.168.240.21'
  - 'Controller internal IP: 192.168.240.20'

PLAY RECAP *********************************************************************
localhost : ok=33 changed=8 unreachable=0 failed=0 skipped=6 rescued=0 ignored=2
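Once the play recap shows no failures, the restored deployment can be inspected in Deployment Manager; the deployment name below is a placeholder for your gcp_restored_deployment_name, so the command is composed and printed rather than executed:

```shell
NEW_DEPLOYMENT=aap-deployment-restored   # placeholder; use your restored deployment name
DESC_CMD="gcloud deployment-manager deployments describe ${NEW_DEPLOYMENT}"
echo "$DESC_CMD"
```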