Chapter 12. Advanced migration options
12.1. Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController custom resource (CR).
Procedure
Patch the ForkliftController CR:

$ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv \
    -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge <1>

<1> Specify the precopy interval in minutes. The default value is 60.
You do not need to restart the forklift-controller pod.
12.2. Creating custom rules for the Validation service
The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.
You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.
12.2.1. About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.
Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {“category”, “label”, “assessment”} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.
The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:
drs_enabled.rego example
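The drs_enabled.rego file itself is not reproduced in this extract. A sketch of such a rule, following the {"category", "label", "assessment"} pattern described above (the inventory attribute path and the message wording are illustrative, not verbatim):

```rego
package io.konveyor.forklift.vmware

# Assumed inventory attribute path; check the attribute names that the
# Provider Inventory service actually returns for your provider.
drs_enabled {
    input.cluster.drsEnabled
}

concerns[flag] {
    drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization."
    }
}
```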
12.2.2. Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.
Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
Procedure
Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>

Go to the OPA policies directory for your provider:

$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> <1>

<1> Specify vmware or ovirt.

Search for the default policies:

$ grep -R "default" *
12.2.3. Creating a validation rule
You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.
- If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
- If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example
Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.
For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].
The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:
"numaNodeAffinity": [
    "0",
    "1"
],
You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:
`count(input.numaNodeAffinity) != 0`
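In plain terms, this query flags any VM whose numaNodeAffinity list is non-empty. A minimal Python sketch of the same check (the helper name is illustrative and not part of MTV):

```python
def has_numa_affinity_concern(vm_inventory: dict) -> bool:
    """Mirror of the Rego query: count(input.numaNodeAffinity) != 0."""
    return len(vm_inventory.get("numaNodeAffinity", [])) != 0

print(has_numa_affinity_concern({"numaNodeAffinity": ["0", "1"]}))  # True
print(has_numa_affinity_concern({"numaNodeAffinity": []}))          # False
```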
Procedure
Create a config map CR according to the following example:
<1> Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
<2> Specify the concerns name and the Rego query.
<3> Specify the concerns name and the flag parameter values.
<4> Allowed values are Critical, Warning, and Information.
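The config map example itself is not reproduced in this extract. Based on the callout descriptions above, a sketch of such a config map (the rule file name, label, and assessment text are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: forklift-validation-config
  namespace: openshift-mtv
data:
  vmware_multiple_disks.rego: |-
    package io.konveyor.forklift.vmware   # <1>

    has_multiple_disks {                  # <2>
      count(input.disks) > 1
    }

    concerns[flag] {
      has_multiple_disks                  # <3>
      flag := {
        "category": "Warning",            # <4>
        "label": "Multiple disks detected",
        "assessment": "Multiple disks detected on this VM."
      }
    }
```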
Stop the Validation pod by scaling the forklift-controller deployment to 0:

$ oc scale -n openshift-mtv --replicas=0 deployment/forklift-controller

Start the Validation pod by scaling the forklift-controller deployment to 1:

$ oc scale -n openshift-mtv --replicas=1 deployment/forklift-controller

Check the Validation pod log to verify that the pod started:

$ oc logs -f <validation_pod>

If the custom rule conflicts with a default rule, the Validation pod will not start.

Remove the source provider:

$ oc delete provider <provider> -n openshift-mtv

Add the source provider to apply the new rule.
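The provider manifest is not shown in this extract. A sketch of a source Provider CR (field values are placeholders; verify the apiVersion against your installed Forklift CRDs):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: openshift-mtv
spec:
  type: vsphere        # or ovirt
  url: <api_endpoint_url>
  secret:
    name: <secret>
    namespace: openshift-mtv
```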
You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.
12.2.4. Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.
The rules version is recorded in a rules_version.rego file for each provider.
Procedure
Retrieve the current rules version:

$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

Example output:

{
  "result": {
    "rules_version": 5
  }
}

Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>

Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

Log out of the Validation pod terminal.

Verify the updated rules version:

$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

Example output:

{
  "result": {
    "rules_version": 6
  }
}
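The rules_version.rego file is a small Rego module. After the update in this example, it might read as follows (a sketch for the vmware provider; package name depends on your provider):

```rego
package io.konveyor.forklift.vmware

rules_version = 6
```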
12.3. Retrieving the Inventory service JSON
You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.
You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
Procedure
Retrieve the routes for the project:

$ oc get route -n openshift-mtv

Retrieve the Inventory service route:

$ oc get route <inventory_service> -n openshift-mtv

Retrieve the access token:

$ TOKEN=$(oc whoami -t)

Trigger an HTTP GET request (for example, using Curl):

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k

Retrieve the UUID of a provider:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k

Retrieve the VMs of a provider:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k

Retrieve the details of a VM:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
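The full VM details output is lengthy and is not reproduced in this extract. An abbreviated, illustrative sketch of the kind of attributes it contains (all values are invented; only the attribute names mentioned in this section are taken from the text):

```json
{
  "id": "vm-1234",
  "name": "test-vm",
  "snapshot": {
    "kind": "VirtualMachineSnapshot"
  },
  "numaNodeAffinity": [],
  "disks": [
    { "capacity": 17179869184 }
  ]
}
```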
12.4. Adding hooks to an MTV migration plan
You can add hooks to a Migration Toolkit for Virtualization (MTV) migration plan to perform automated operations on a VM, either before or after you migrate it.
12.4.1. About hooks for MTV migration plans
You can add hooks to Migration Toolkit for Virtualization (MTV) migration plans using either the MTV CLI or the MTV user interface, which is located in the Red Hat OpenShift web console.
- Pre-migration hooks are hooks that perform operations on a VM that is located on a provider. This prepares the VM for migration.
- Post-migration hooks are hooks that perform operations on a VM that has migrated to OpenShift Virtualization.
12.4.1.1. Default hook image
The default hook image for an MTV hook is quay.io/konveyor/hook-runner. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.
12.4.1.2. Hook execution
An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster in the openshift-mtv namespace using the ServiceAccount you choose.
When you add a hook, you must specify the namespace where the Hook CR is located, the name of the hook, and whether the hook is a pre-migration hook or a post-migration hook.
In order for a hook to run on a VM, the VM must be started and available using SSH.
The illustration that follows shows the general process of using a migration hook. For specific procedures, see Adding a migration hook to a migration plan using the Red Hat OpenShift web console and Adding a migration hook to a migration plan using the CLI.
Figure 12.1. Adding a hook to a migration plan
Process:
- Input your Ansible hook and credentials.
- Input an Ansible hook image to the MTV controller using either the UI or the CLI:
  - In the UI, specify the ansible-runner and enter the playbook.yml that contains the hook.
  - In the CLI, input the hook image, which specifies the playbook that runs the hook.
If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.
Note: This Secret is not the same as the Secret CR that contains the credentials of your source provider.
- The MTV controller creates a ConfigMap, which contains:
  - workload.yml, which contains information about the VMs.
  - playbook.yml, the raw string playbook you want to execute.
  - plan.yml, which is the Plan CR.
  The ConfigMap contains the name of the VM and instructs the playbook what to do.
- The MTV controller creates a job that starts the user-specified image and mounts the ConfigMap to the container.
- The Ansible hook imports the Secret that the user previously entered.
The job runs a pre-migration hook or a post-migration hook as follows:
- For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
- For a post-migration hook, the job logs into the VMs on OpenShift Virtualization using SSH and runs the hook.
12.4.2. Adding a migration hook to a migration plan using the Red Hat OpenShift web console
You can add a migration hook to an existing migration plan using the Red Hat OpenShift web console. Note that you need to run one command in the Migration Toolkit for Virtualization (MTV) CLI.
For example, you can create a hook to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.
Prerequisites
- Migration plan
- Migration hook file, whose contents you copy and paste into the web console
- File containing the Secret for the source provider
- Red Hat OpenShift service account that is called by the hook and has at least write access for the namespace you are working in
- SSH access for VMs you want to migrate with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
- In the Red Hat OpenShift web console, click Migration > Plans for virtualization and then click the migration plan you want to add the hook to.
- Click Hooks.
For a pre-migration hook, perform the following steps:
- In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that has an ansible-runner.
- Paste your hook as a YAML file in the Ansible playbook text box.
For a post-migration hook, perform the following steps:
- In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that has an ansible-runner.
- Paste your hook as a YAML file in the Ansible playbook text box.
- At the top of the tab, click Update hooks.
In a terminal, enter the following command to associate each hook with your Red Hat OpenShift service account:
$ oc -n openshift-mtv patch hook <name_of_hook> \
    -p '{"spec":{"serviceAccount":"<service_account>"}}' --type merge
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs 2 tasks: stopping the Maria database and generating a text file.
Example migration hook
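The playbook itself is not reproduced in this extract. A sketch of such a hook, based on the description above (the Secret name, group name, and file path are illustrative):

```yaml
- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
    - name: Load SSH credentials from a Secret (name is illustrative)
      k8s_info:
        api_version: v1
        kind: Secret
        name: ssh-credentials
        namespace: openshift-mtv
      register: ssh_credentials

    - name: Ensure the SSH directory exists
      file:
        path: ~/.ssh
        state: directory
        mode: "0750"

    - name: Create an SSH key from the Secret
      copy:
        dest: ~/.ssh/id_rsa
        content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
        mode: "0600"

    - name: Add the VM to the in-memory inventory
      add_host:
        name: "{{ vm.ipaddress }}"
        ansible_user: root
        groups: vms

- name: Run tasks on the VM over SSH
  hosts: vms
  tasks:
    - name: Stop the Maria database
      service:
        name: mariadb
        state: stopped

    - name: Generate a text file
      copy:
        dest: /premigration.txt
        content: "Migration of {{ vm.name }} is about to start."
        mode: "0644"
```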
12.4.3. Adding a migration hook to a migration plan using the CLI
You can use a Hook CR to add a pre-migration hook or a post-migration hook to an existing migration plan using the Migration Toolkit for Virtualization (MTV) CLI.
For example, you can create a Hook CR to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR, but a Plan CR contains data for all the hooks it uses.
You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s module.
Prerequisites
- Migration plan
- Migration hook image or the playbook containing the hook
- File containing the Secret for the source provider
- Red Hat OpenShift service account called by the hook and that has at least write access for the namespace you are working in
- SSH access for VMs you want to migrate with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
If needed, create a Secret with an SSH private key for the VM.
- Choose an existing key or generate a key pair.
- Install the public key on the VM.
Base64-encode the private key and add it to the Secret.
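The Secret manifest is not shown in this extract. A sketch (the Secret name and data key are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssh-credentials
  namespace: openshift-mtv
data:
  key: <base64_encoded_private_key>
```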
Encode your playbook by concatenating a file and piping it for Base64 encoding, for example:

$ cat playbook.yml | base64 -w0
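As a quick local sanity check, you can round-trip a string through the same encoding (GNU coreutils base64 is assumed for the -w0 flag):

```shell
# Encode a one-line stand-in for playbook.yml, then decode it back
encoded=$(printf 'hosts: localhost\n' | base64 -w0)
echo "$encoded"
echo "$encoded" | base64 -d
```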
Create a Hook CR:

Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

Note: To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:

$ oc get -n konveyor-forklift hook playbook -o \
    go-template='{{ .spec.playbook }}' | base64 -d

In the Plan CR of the migration, for each VM, add the following section to the end of the CR:

<1> Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
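The Hook CR example referenced above is not reproduced in this extract. A sketch (the name and placeholder values are illustrative; verify the apiVersion against your installed Forklift CRDs):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: openshift-mtv
spec:
  image: quay.io/konveyor/hook-runner
  playbook: <base64_encoded_playbook>
  serviceAccount: <service_account>
```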
In order for a PreHook to run on a VM, the VM must be started and available via SSH.
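The per-VM section of the Plan CR described above is not reproduced in this extract. A sketch of what it might look like under spec.vms (field names are illustrative except step, whose allowed values are given in the callout above):

```yaml
spec:
  vms:
    - id: <vm_id>
      hooks:
        - hook:
            namespace: openshift-mtv
            name: <hook>
          step: PreHook   # or PostHook
```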
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs 2 tasks: stopping the Maria database and generating a text file.
Example migration hook