Chapter 12. Advanced migration options
12.1. Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController custom resource (CR).
Procedure
Patch the ForkliftController CR:

$ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge 1

1 Specify the precopy interval in minutes. The default value is 60.
You do not need to restart the forklift-controller pod.
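You can optionally confirm that the new interval was applied by reading the value back from the CR. This is a quick check with a standard oc query; the field name is the same one used in the patch command above:

$ oc get forkliftcontroller/<forklift-controller> -n openshift-mtv -o jsonpath='{.spec.controller_precopy_interval}'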
12.2. Creating custom rules for the Validation service
The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.
12.2.1. About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.
Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.
The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:
drs_enabled.rego example
package io.konveyor.forklift.vmware

has_drs_enabled {
    input.host.cluster.drsEnabled
}

concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
    }
}
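You can also test a rule of this kind locally before adding it to the Validation service. The following is a minimal sketch that assumes the opa CLI is installed on your workstation and that you create a small sample input file by hand; it is not part of the standard procedure:

$ echo '{"host": {"cluster": {"drsEnabled": true}}}' > input.json
$ opa eval --data drs_enabled.rego --input input.json "data.io.konveyor.forklift.vmware.concerns"

If the condition is met, the query result contains the concern hash defined by the rule.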
12.2.2. Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
Procedure
Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>
Go to the OPA policies directory for your provider:
$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> 1

1 Specify vmware or ovirt.
Search for the default policies:
$ grep -R "default" *
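For example, if you plan to create a rule named has_multiple_disks, as in the next section, you can also confirm that the name is not already used by a default rule. This is an optional check that uses the same approach:

$ grep -R "has_multiple_disks" *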
12.2.3. Creating a validation rule
You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.
- If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
- If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example
Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

"numaNodeAffinity": [ "0", "1" ],

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

count(input.numaNodeAffinity) != 0
Procedure
Create a config map CR according to the following example:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: <forklift-validation-config>
  namespace: openshift-mtv
data:
  vmware_multiple_disks.rego: |-
    package <provider_package> 1

    has_multiple_disks { 2
      count(input.disks) > 1
    }

    concerns[flag] {
      has_multiple_disks 3
      flag := {
        "category": "<Information>", 4
        "label": "Multiple disks detected",
        "assessment": "Multiple disks detected on this VM."
      }
    }
EOF
1 Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
2 Specify the concerns name and Rego query.
3 Specify the concerns name and flag parameter values.
4 Allowed values are Critical, Warning, and Information.
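Optionally, before restarting the service, you can confirm that the config map exists and contains the new rule. This is a quick check with a standard oc command:

$ oc get configmap <forklift-validation-config> -n openshift-mtv -o yaml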
Stop the Validation pod by scaling the forklift-controller deployment to 0:

$ oc scale -n openshift-mtv --replicas=0 deployment/forklift-controller
Start the Validation pod by scaling the forklift-controller deployment to 1:

$ oc scale -n openshift-mtv --replicas=1 deployment/forklift-controller
Check the Validation pod log to verify that the pod started:

$ oc logs -f <validation_pod>

If the custom rule conflicts with a default rule, the Validation pod will not start.

Remove the source provider:
$ oc delete provider <provider> -n openshift-mtv
Add the source provider to apply the new rule:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: openshift-mtv
spec:
  type: <provider_type>
  url: <api_end_point>
  secret:
    name: <secret>
    namespace: openshift-mtv
EOF
You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.
12.2.4. Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

The rules version is recorded in a rules_version.rego file for each provider.
Procedure
Retrieve the current rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output
{ "result": { "rules_version": 5 } }
Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>
- Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.
- Log out of the Validation pod terminal.

Verify the updated rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output
{ "result": { "rules_version": 6 } }
12.3. Retrieving the Inventory service JSON
You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
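For example, a rule that flags VMs that still have a snapshot could follow the same pattern as the drs_enabled.rego example shown earlier. This is an illustrative sketch; the label and assessment text are not part of the default ruleset:

package io.konveyor.forklift.vmware

has_snapshot {
    input.snapshot.kind == "VirtualMachineSnapshot"
}

concerns[flag] {
    has_snapshot
    flag := {
        "category": "Warning",
        "label": "VM snapshot detected",
        "assessment": "The VM has at least one snapshot."
    }
}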
Procedure
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
Retrieve the Inventory service route:

$ oc get route <inventory_service> -n openshift-mtv
Retrieve the access token:
$ TOKEN=$(oc whoami -t)
Trigger an HTTP GET request (for example, using curl):
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
Retrieve the UUID of a provider:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k
Retrieve the VMs of a provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
Retrieve the details of a VM:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
Example output
{ "input": { "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431", "id": "vm-431", "parent": { "kind": "Folder", "id": "group-v22" }, "revision": 1, "name": "iscsi-target", "revisionValidated": 1, "isTemplate": false, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-33" } ], "disks": [ { "key": 2000, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 17179869184, "shared": false, "rdm": false }, { "key": 2001, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 10737418240, "shared": false, "rdm": false } ], "concerns": [], "policyVersion": 5, "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49", "firmware": "bios", "powerState": "poweredOn", "connectionState": "connected", "snapshot": { "kind": "VirtualMachineSnapshot", "id": "snapshot-3034" }, "changeTrackingEnabled": false, "cpuAffinity": [ 0, 2 ], "cpuHotAddEnabled": true, "cpuHotRemoveEnabled": false, "memoryHotAddEnabled": false, "faultToleranceEnabled": false, "cpuCount": 2, "coresPerSocket": 1, "memoryMB": 2048, "guestName": "Red Hat Enterprise Linux 7 (64-bit)", "balloonedMemory": 0, "ipAddress": "10.19.2.96", "storageUsed": 30436770129, "numaNodeAffinity": [ "0", "1" ], "devices": [ { "kind": "RealUSBController" } ], "host": { "id": "host-29", "parent": { "kind": "Cluster", "id": "domain-c26" }, "revision": 1, "name": "IP address or host name of the vCenter host or RHV Engine host", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29", "status": "green", "inMaintenance": false, "managementServerIp": "10.19.2.96", "thumbprint": <thumbprint>, "timezone": "UTC", "cpuSockets": 2, "cpuCores": 16, "productName": "VMware ESXi", "productVersion": "6.5.0", "networking": { "pNICs": [ { "key": "key-vim.host.PhysicalNic-vmnic0", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic1", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic2", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic3", "linkSpeed": 10000 } ], "vNICs": [ { "key": "key-vim.host.VirtualNic-vmk2", "portGroup": "VM_Migration", "dPortGroup": "", "ipAddress": "192.168.79.13", "subnetMask": "255.255.255.0", "mtu": 9000 }, { "key": "key-vim.host.VirtualNic-vmk0", "portGroup": "Management Network", "dPortGroup": "", "ipAddress": "10.19.2.13", "subnetMask": "255.255.255.128", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk1", "portGroup": "Storage Network", "dPortGroup": "", "ipAddress": "172.31.2.13", "subnetMask": "255.255.0.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk3", "portGroup": "", "dPortGroup": "dvportgroup-48", "ipAddress": "192.168.61.13", "subnetMask": "255.255.255.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk4", "portGroup": "VM_DHCP_Network", "dPortGroup": "", "ipAddress": "10.19.2.231", "subnetMask": "255.255.255.128", "mtu": 1500 } ], "portGroups": [ { "key": "key-vim.host.PortGroup-VM Network", "name": "VM Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-Management Network", "name": "Management Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-VM_10G_Network", "name": "VM_10G_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_Storage", "name": "VM_Storage", "vSwitch": 
"key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_DHCP_Network", "name": "VM_DHCP_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-Storage Network", "name": "Storage Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_Isolated_67", "name": "VM_Isolated_67", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" }, { "key": "key-vim.host.PortGroup-VM_Migration", "name": "VM_Migration", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" } ], "switches": [ { "key": "key-vim.host.VirtualSwitch-vSwitch0", "name": "vSwitch0", "portGroups": [ "key-vim.host.PortGroup-VM Network", "key-vim.host.PortGroup-Management Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic4" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch1", "name": "vSwitch1", "portGroups": [ "key-vim.host.PortGroup-VM_10G_Network", "key-vim.host.PortGroup-VM_Storage", "key-vim.host.PortGroup-VM_DHCP_Network", "key-vim.host.PortGroup-Storage Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic2", "key-vim.host.PhysicalNic-vmnic0" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch2", "name": "vSwitch2", "portGroups": [ "key-vim.host.PortGroup-VM_Isolated_67", "key-vim.host.PortGroup-VM_Migration" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic3", "key-vim.host.PhysicalNic-vmnic1" ] } ] }, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": "Datastore", "id": "datastore-63" } ], "vms": null, "networkAdapters": [], "cluster": { "id": "domain-c26", "parent": { "kind": "Folder", "id": "group-h23" }, "revision": 1, "name": "mycluster", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26", "folder": "group-h23", "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": "Datastore", "id": "datastore-63" } ], "hosts": [ { "kind": "Host", "id": "host-44" }, { "kind": "Host", "id": "host-29" } ], "dasEnabled": false, "dasVms": [], "drsEnabled": true, "drsBehavior": "fullyAutomated", "drsVms": [], "datacenter": null } } } }
12.4. Adding hooks to an MTV migration plan
You can add hooks to a Migration Toolkit for Virtualization (MTV) migration plan to perform automated operations on a VM, either before or after you migrate it.
12.4.1. About hooks for MTV migration plans
You can add hooks to Migration Toolkit for Virtualization (MTV) migration plans using either the MTV CLI or the MTV user interface, which is located in the Red Hat OpenShift web console.
- Pre-migration hooks are hooks that perform operations on a VM that is located on a source provider, to prepare the VM for migration.
- Post-migration hooks are hooks that perform operations on a VM that has been migrated to OpenShift Virtualization.
12.4.1.1. Default hook image
The default hook image for an MTV hook is quay.io/kubev2v/hook-runner. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.
12.4.1.2. Hook execution
An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, in the openshift-mtv namespace, using the ServiceAccount that you choose.

When you add a hook, you must specify the namespace where the Hook CR is located, the name of the hook, and whether the hook is a pre-migration hook or a post-migration hook.
For a hook to run on a VM, the VM must be running and accessible using SSH.
The illustration that follows shows the general process of using a migration hook. For specific procedures, see Adding a migration hook to a migration plan using the Red Hat OpenShift web console and Adding a migration hook to a migration plan using the CLI.
Figure 12.1. Adding a hook to a migration plan

Process:
Input your Ansible hook and credentials.
Input an Ansible hook image to the MTV controller using either the UI or the CLI.
- In the UI, specify the ansible-runner and enter the playbook.yml that contains the hook.
- In the CLI, input the hook image, which specifies the playbook that runs the hook.
If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.
Note: This Secret is not the same as the Secret CR that contains the credentials of your source provider.
The MTV controller creates the ConfigMap, which contains:
- workload.yml, which contains information about the VMs.
- playbook.yml, the raw string playbook you want to execute.
- plan.yml, which is the Plan CR.
The ConfigMap contains the name of the VM and instructs the playbook what to do.
The MTV controller creates a job that starts the user-specified image and mounts the ConfigMap to the container.
The Ansible hook imports the Secret that the user previously entered.
The job runs a pre-migration hook or a post-migration hook as follows:
- For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
- For a post-migration hook, the job logs into the VMs on OpenShift Virtualization using SSH and runs the hook.
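After a migration plan with a hook runs, you can inspect the objects that the MTV controller generated for the hook. This is an optional check with standard oc commands; the job and ConfigMap names are generated by the controller:

$ oc get jobs,configmaps -n openshift-mtv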
12.4.2. Adding a migration hook to a migration plan using the Red Hat OpenShift web console
You can add a migration hook to an existing migration plan using the Red Hat OpenShift web console. Note that you need to run one command in the Migration Toolkit for Virtualization (MTV) CLI.
For example, you can create a hook to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.
Prerequisites
- Migration plan
- Migration hook file, whose contents you copy and paste into the web console
- File containing the Secret for the source provider
- Red Hat OpenShift service account called by the hook and that has at least write access for the namespace you are working in
- SSH access to the VMs that you want to migrate, with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
- In the Red Hat OpenShift web console, click Migration > Plans for virtualization and then click the migration plan you want to add the hook to.
- Click Hooks.
For a pre-migration hook, perform the following steps:
- In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that includes ansible-runner.
. - Paste your hook as a YAML file in the Ansible playbook text box.
For a post-migration hook, perform the following steps:
- In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that includes ansible-runner.
. - Paste your hook as a YAML file in the Ansible playbook text box.
- At the top of the tab, click Update hooks.
In a terminal, enter the following command to associate each hook with your Red Hat OpenShift service account:
$ oc -n openshift-mtv patch hook <name_of_hook> \
  -p '{"spec":{"serviceAccount":"<service_account>"}}' --type merge
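You can optionally confirm that the service account was set by reading it back from the Hook CR:

$ oc get hook <name_of_hook> -n openshift-mtv -o jsonpath='{.spec.serviceAccount}'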
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
Example migration hook
- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - k8s_info:
      api_version: v1
      kind: Secret
      name: privkey
      namespace: openshift-mtv
    register: ssh_credentials
  - name: Ensure SSH directory exists
    file:
      path: ~/.ssh
      state: directory
      mode: 0750
  - name: Create SSH key
    copy:
      dest: ~/.ssh/id_rsa
      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      mode: 0600
  - add_host:
      name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
      ansible_user: root
      groups: vms

- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - name: Stop MariaDB
    service:
      name: mariadb
      state: stopped
  - name: Create Test File
    copy:
      dest: /premigration.txt
      content: "Migration from {{ provider.source.name }} of {{ vm.vm1.vm0.id }} has finished\n"
      mode: 0644
12.4.3. Adding a migration hook to a migration plan using the CLI
You can use a Hook CR to add a pre-migration hook or a post-migration hook to an existing migration plan using the Migration Toolkit for Virtualization (MTV) CLI.

For example, you can create a Hook CR to install the cloud-init service on a VM and write a file before migration.

You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR, but a Plan CR contains data for all the hooks it uses.

You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s module.
Prerequisites
- Migration plan
- Migration hook image or the playbook containing the hook
- File containing the Secret for the source provider
- Red Hat OpenShift service account called by the hook and that has at least write access for the namespace you are working in
- SSH access to the VMs that you want to migrate, with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
If needed, create a Secret with an SSH private key for the VM.
- Choose an existing key or generate a key pair.
- Install the public key on the VM.
Encode the private key in the Secret to base64:

apiVersion: v1
data:
  key: VGhpcyB3YXMgZ2Vu...
kind: Secret
metadata:
  name: ssh-credentials
  namespace: openshift-mtv
type: Opaque
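Alternatively, you can create an equivalent Secret directly from a private key file instead of encoding the key by hand. This is a sketch that assumes the private key is stored in a local file:

$ oc create secret generic ssh-credentials -n openshift-mtv --from-file=key=<private_key_file>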
Encode your playbook by concatenating the file and piping it to base64, for example:
$ cat playbook.yml | base64 -w0
Create a Hook CR:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/kubev2v/hook-runner
  serviceAccount: <service_account>
  playbook: |
    LS0tCi0gbm...
EOF
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

Note: To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:
$ oc get -n konveyor-forklift hook playbook -o \
  go-template='{{ .spec.playbook }}' | base64 -d
In the Plan CR of the migration, for each VM, add the following section to the end of the CR:

  vms:
    - id: <vm_id>
      hooks:
        - hook:
            namespace: <namespace>
            name: <name_of_hook>
          step: <type_of_hook> 1
1 Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
For a PreHook to run on a VM, the VM must be running and accessible using SSH.
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
Example migration hook
- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - k8s_info:
      api_version: v1
      kind: Secret
      name: privkey
      namespace: openshift-mtv
    register: ssh_credentials
  - name: Ensure SSH directory exists
    file:
      path: ~/.ssh
      state: directory
      mode: 0750
  - name: Create SSH key
    copy:
      dest: ~/.ssh/id_rsa
      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      mode: 0600
  - add_host:
      name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
      ansible_user: root
      groups: vms

- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - name: Stop MariaDB
    service:
      name: mariadb
      state: stopped
  - name: Create Test File
    copy:
      dest: /premigration.txt
      content: "Migration from {{ provider.source.name }} of {{ vm.vm1.vm0.id }} has finished\n"
      mode: 0644