
Chapter 6. Advanced migration options


6.1. Changing precopy intervals for warm migration

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

Procedure

  • Patch the ForkliftController CR:

    $ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge 1
    1
    Specify the precopy interval in minutes. The default value is 60.

    You do not need to restart the forklift-controller pod.
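
    For example, to set a 30-minute interval on a controller with the default name, the command might look like the following (the controller name and the value are illustrative):

    $ oc patch forkliftcontroller/forklift-controller -n openshift-mtv -p '{"spec": {"controller_precopy_interval": 30}}' --type=merge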

6.2. Creating custom rules for the Validation service

The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.

6.2.1. About Rego files

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.
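
For example, you can list the rule files for the VMware provider without opening a shell in the pod (the pod name is a placeholder):

$ oc exec <validation_pod> -- ls /usr/share/opa/policies/io/konveyor/forklift/vmware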

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

The following .rego file example checks whether distributed resource scheduling (DRS) is enabled in the cluster of a VMware VM:

drs_enabled.rego example

package io.konveyor.forklift.vmware 1

has_drs_enabled {
    input.host.cluster.drsEnabled 2
}

concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
    }
}

1
Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
2
Query parameters are based on the input key of the Validation service JSON.

6.2.2. Checking the default validation rules

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
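
The conflicting pair would look like this in Rego (comments added for illustration; OPA does not allow two default definitions for the same rule in one package):

default valid_input = false   # existing default rule
default valid_input = true    # custom rule: the Validation service fails to start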

Procedure

  1. Connect to the terminal of the Validation pod:

    $ oc rsh <validation_pod>
  2. Go to the OPA policies directory for your provider:

    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> 1
    1
    Specify vmware or ovirt.
  3. Search for the default policies:

    $ grep -R "default" *

6.2.3. Retrieving the Inventory service JSON

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
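
For instance, a minimal rule built on the snapshot attribute could flag VMs that have an existing snapshot. The sketch below follows the drs_enabled.rego pattern; the category, label, and assessment text are illustrative:

package io.konveyor.forklift.vmware

has_snapshot {
    input.snapshot.kind == "VirtualMachineSnapshot"
}

concerns[flag] {
    has_snapshot
    flag := {
        "category": "Information",
        "label": "VM has a snapshot",
        "assessment": "The VM has at least one snapshot."
    }
}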

Procedure

  1. Retrieve the routes for the project:

    $ oc get route -n openshift-mtv
  2. Retrieve the Inventory service route:

    $ oc get route <inventory_service> -n openshift-mtv
  3. Retrieve the access token:

    $ TOKEN=$(oc whoami -t)
  4. Trigger an HTTP GET request (for example, using curl):

    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
  5. Retrieve the UUID of a provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k 1
    1
    Allowed values for the provider are vsphere, ovirt, and openstack.
  6. Retrieve the VMs of a provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
  7. Retrieve the details of a VM:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k

    Example output

    {
        "input": {
            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
            "id": "vm-431",
            "parent": {
                "kind": "Folder",
                "id": "group-v22"
            },
            "revision": 1,
            "name": "iscsi-target",
            "revisionValidated": 1,
            "isTemplate": false,
            "networks": [
                {
                    "kind": "Network",
                    "id": "network-31"
                },
                {
                    "kind": "Network",
                    "id": "network-33"
                }
            ],
            "disks": [
                {
                    "key": 2000,
                    "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
                    "datastore": {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    },
                    "capacity": 17179869184,
                    "shared": false,
                    "rdm": false
                },
                {
                    "key": 2001,
                    "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
                    "datastore": {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    },
                    "capacity": 10737418240,
                    "shared": false,
                    "rdm": false
                }
            ],
            "concerns": [],
            "policyVersion": 5,
            "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
            "firmware": "bios",
            "powerState": "poweredOn",
            "connectionState": "connected",
            "snapshot": {
                "kind": "VirtualMachineSnapshot",
                "id": "snapshot-3034"
            },
            "changeTrackingEnabled": false,
            "cpuAffinity": [
                0,
                2
            ],
            "cpuHotAddEnabled": true,
            "cpuHotRemoveEnabled": false,
            "memoryHotAddEnabled": false,
            "faultToleranceEnabled": false,
            "cpuCount": 2,
            "coresPerSocket": 1,
            "memoryMB": 2048,
            "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
            "balloonedMemory": 0,
            "ipAddress": "10.19.2.96",
            "storageUsed": 30436770129,
            "numaNodeAffinity": [
                "0",
                "1"
            ],
            "devices": [
                {
                    "kind": "RealUSBController"
                }
            ],
            "host": {
                "id": "host-29",
                "parent": {
                    "kind": "Cluster",
                    "id": "domain-c26"
                },
                "revision": 1,
                "name": "IP address or host name of the vCenter host or RHV Engine host",
                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
                "status": "green",
                "inMaintenance": false,
                "managementServerIp": "10.19.2.96",
                "thumbprint": <thumbprint>,
                "timezone": "UTC",
                "cpuSockets": 2,
                "cpuCores": 16,
                "productName": "VMware ESXi",
                "productVersion": "6.5.0",
                "networking": {
                    "pNICs": [
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic0",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic1",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic2",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic3",
                            "linkSpeed": 10000
                        }
                    ],
                    "vNICs": [
                        {
                            "key": "key-vim.host.VirtualNic-vmk2",
                            "portGroup": "VM_Migration",
                            "dPortGroup": "",
                            "ipAddress": "192.168.79.13",
                            "subnetMask": "255.255.255.0",
                            "mtu": 9000
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk0",
                            "portGroup": "Management Network",
                            "dPortGroup": "",
                            "ipAddress": "10.19.2.13",
                            "subnetMask": "255.255.255.128",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk1",
                            "portGroup": "Storage Network",
                            "dPortGroup": "",
                            "ipAddress": "172.31.2.13",
                            "subnetMask": "255.255.0.0",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk3",
                            "portGroup": "",
                            "dPortGroup": "dvportgroup-48",
                            "ipAddress": "192.168.61.13",
                            "subnetMask": "255.255.255.0",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk4",
                            "portGroup": "VM_DHCP_Network",
                            "dPortGroup": "",
                            "ipAddress": "10.19.2.231",
                            "subnetMask": "255.255.255.128",
                            "mtu": 1500
                        }
                    ],
                    "portGroups": [
                        {
                            "key": "key-vim.host.PortGroup-VM Network",
                            "name": "VM Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                        },
                        {
                            "key": "key-vim.host.PortGroup-Management Network",
                            "name": "Management Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_10G_Network",
                            "name": "VM_10G_Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Storage",
                            "name": "VM_Storage",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_DHCP_Network",
                            "name": "VM_DHCP_Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-Storage Network",
                            "name": "Storage Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Isolated_67",
                            "name": "VM_Isolated_67",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Migration",
                            "name": "VM_Migration",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                        }
                    ],
                    "switches": [
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch0",
                            "name": "vSwitch0",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM Network",
                                "key-vim.host.PortGroup-Management Network"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic4"
                            ]
                        },
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch1",
                            "name": "vSwitch1",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM_10G_Network",
                                "key-vim.host.PortGroup-VM_Storage",
                                "key-vim.host.PortGroup-VM_DHCP_Network",
                                "key-vim.host.PortGroup-Storage Network"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic2",
                                "key-vim.host.PhysicalNic-vmnic0"
                            ]
                        },
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch2",
                            "name": "vSwitch2",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM_Isolated_67",
                                "key-vim.host.PortGroup-VM_Migration"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic3",
                                "key-vim.host.PhysicalNic-vmnic1"
                            ]
                        }
                    ]
                },
                "networks": [
                    {
                        "kind": "Network",
                        "id": "network-31"
                    },
                    {
                        "kind": "Network",
                        "id": "network-34"
                    },
                    {
                        "kind": "Network",
                        "id": "network-57"
                    },
                    {
                        "kind": "Network",
                        "id": "network-33"
                    },
                    {
                        "kind": "Network",
                        "id": "dvportgroup-47"
                    }
                ],
                "datastores": [
                    {
                        "kind": "Datastore",
                        "id": "datastore-35"
                    },
                    {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    }
                ],
                "vms": null,
                "networkAdapters": [],
                "cluster": {
                    "id": "domain-c26",
                    "parent": {
                        "kind": "Folder",
                        "id": "group-h23"
                    },
                    "revision": 1,
                    "name": "mycluster",
                    "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
                    "folder": "group-h23",
                    "networks": [
                        {
                            "kind": "Network",
                            "id": "network-31"
                        },
                        {
                            "kind": "Network",
                            "id": "network-34"
                        },
                        {
                            "kind": "Network",
                            "id": "network-57"
                        },
                        {
                            "kind": "Network",
                            "id": "network-33"
                        },
                        {
                            "kind": "Network",
                            "id": "dvportgroup-47"
                        }
                    ],
                    "datastores": [
                        {
                            "kind": "Datastore",
                            "id": "datastore-35"
                        },
                        {
                            "kind": "Datastore",
                            "id": "datastore-63"
                        }
                    ],
                    "hosts": [
                        {
                            "kind": "Host",
                            "id": "host-44"
                        },
                        {
                            "kind": "Host",
                            "id": "host-29"
                        }
                    ],
                    "dasEnabled": false,
                    "dasVms": [],
                    "drsEnabled": true,
                    "drsBehavior": "fullyAutomated",
                    "drsVms": [],
                    "datacenter": null
                }
            }
        }
    }

6.2.4. Creating a validation rule

You create a validation rule by applying a config map that contains the rule to the Validation service.

Important
  • If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
  • If you create a rule that contradicts a default rule, the Validation service will not start.
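
This OR behavior follows from Rego itself: rules that share a name are evaluated incrementally, and the rule succeeds if any one of its bodies succeeds. A minimal sketch, using attribute names from the inventory example in the previous section:

package io.konveyor.forklift.vmware

has_hot_add_enabled { input.cpuHotAddEnabled }
has_hot_add_enabled { input.memoryHotAddEnabled }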

Validation rule example

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

"numaNodeAffinity": [
    "0",
    "1"
],

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

count(input.numaNodeAffinity) != 0
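
Wrapped in the standard rule structure, the query might look like the following sketch (the category, label, and assessment text are illustrative):

package io.konveyor.forklift.vmware

has_numa_affinity {
    count(input.numaNodeAffinity) != 0
}

concerns[flag] {
    has_numa_affinity
    flag := {
        "category": "Warning",
        "label": "NUMA node affinity detected",
        "assessment": "NUMA node affinity is configured on this VM."
    }
}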

Procedure

  1. Create a config map according to the following example:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <forklift-validation-config>
      namespace: openshift-mtv
    data:
      vmware_multiple_disks.rego: |-
        package <provider_package> 1
    
        has_multiple_disks { 2
          count(input.disks) > 1
        }
    
        concerns[flag] {
          has_multiple_disks 3
            flag := {
              "category": "<Information>", 4
              "label": "Multiple disks detected",
              "assessment": "Multiple disks detected on this VM."
            }
        }
    EOF
    1
    Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
    2
    Specify the concern name and Rego query.
    3
    Specify the concern name and flag parameter values.
    4
    Allowed values are Critical, Warning, and Information.
  2. Stop the Validation pod by scaling the forklift-controller deployment to 0:

    $ oc scale -n openshift-mtv --replicas=0 deployment/forklift-controller
  3. Start the Validation pod by scaling the forklift-controller deployment to 1:

    $ oc scale -n openshift-mtv --replicas=1 deployment/forklift-controller
  4. Check the Validation pod log to verify that the pod started:

    $ oc logs -f <validation_pod>

    If the custom rule conflicts with a default rule, the Validation pod will not start.

  5. Remove the source provider:

    $ oc delete provider <provider> -n openshift-mtv
  6. Add the source provider to apply the new rule:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <provider>
      namespace: openshift-mtv
    spec:
      type: <provider_type> 1
      url: <api_end_point> 2
      secret:
        name: <secret> 3
        namespace: openshift-mtv
    EOF
    1
    Allowed values are ovirt, vsphere, and openstack.
    2
    Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for RHV, or https://<identity_service>/v3 for OpenStack.
    3
    Specify the name of the provider Secret CR.

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

6.2.5. Updating the inventory rules version

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

The rules version is recorded in a rules_version.rego file for each provider.

Procedure

  1. Retrieve the current rules version:

    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version 1
    1
    Specify vmware or ovirt.

    Example output

    {
       "result": {
           "rules_version": 5
       }
    }

  2. Connect to the terminal of the Validation pod:

    $ oc rsh <validation_pod>
  3. Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file (see the sketch after this procedure).
  4. Log out of the Validation pod terminal.
  5. Verify the updated rules version:

    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

    Example output

    {
       "result": {
           "rules_version": 6
       }
    }
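
The rules_version.rego file itself is a small Rego module. A sketch of what the updated file might contain for the VMware provider, assuming the package path mirrors the data URL used in the queries above (this layout is an inference, not taken from the product):

package io.konveyor.forklift.vmware.rules_version

rules_version = 6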

6.3. Adding hooks to a migration plan

You can add hooks to a migration plan from the command line by using the Migration Toolkit for Virtualization API.

6.3.1. API-based hooks for MTV migration plans

You can add hooks to a migration plan from the command line by using the Migration Toolkit for Virtualization API.

Default hook image

The default hook image for an MTV hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.

Hook execution

An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.
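
Because the hook runs as a job, you can inspect the job and the mounted playbook config map with standard commands after a migration runs, for example (the konveyor-forklift namespace is taken from the examples below):

$ oc get jobs,configmaps -n konveyor-forklift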

PreHooks and PostHooks

You specify hooks per VM and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that is run before a migration and a PostHook is a hook that is run after a migration.

When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and whether the hook is a PreHook or a PostHook.

Important

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

Example PreHook:

kind: Plan
apiVersion: forklift.konveyor.io/v1beta1
metadata:
  name: test
  namespace: konveyor-forklift
spec:
  vms:
    - id: vm-2861
      hooks:
        - hook:
            namespace: konveyor-forklift
            name: playbook
          step: PreHook

6.3.2. Adding Hook CRs to a VM migration by using the MTV API

You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Migration Toolkit for Virtualization API. A PreHook runs before a migration; a PostHook runs after it.

Note

You can retrieve additional information stored in a secret or in a configMap by using a k8s module.

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.

Procedure

  1. If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.

    apiVersion: v1
    data:
      key: VGhpcyB3YXMgZ2VuZXJhdGVkIHdpdGggc3NoLWtleWdlbiBwdXJlbHkgZm9yIHRoaXMgZXhhbXBsZS4KSXQgaXMgbm90IHVzZWQgYW55d2hlcmUuCi0tLS0tQkVHSU4gT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCmIzQmxibk56YUMxclpYa3RkakVBQUFBQUJHNXZibVVBQUFBRWJtOXVaUUFBQUFBQUFBQUJBQUFCbHdBQUFBZHpjMmd0Y24KTmhBQUFBQXdFQUFRQUFBWUVBMzVTTFRReDBFVjdPTWJQR0FqcEsxK2JhQURTTVFuK1NBU2pyTGZLNWM5NGpHdzhDbnA4LwovRHErZHFBR1pxQkg2ZnAxYmVJM1BZZzVWVDk0RVdWQ2RrTjgwY3dEcEo0Z1R0NHFUQ1gzZUYvY2x5VXQyUC9zaTNjcnQ0CjBQdi9wVnZXU1U2TlhHaDJIZC93V0MwcGh5Z0RQOVc5SHRQSUF0OFpnZmV2ZnUwZHpraVl6OHNVaElWU2ZsRGpaNUFqcUcKUjV2TVVUaGlrczEvZVlCeTdiMkFFSEdzYU8xN3NFbWNiYUlHUHZuUFVwWmQrdjkyYU1JdWZoYjhLZkFSbzZ3Ty9ISW1VbQovdDdHWFBJUmxBMUhSV0p1U05odTQzZS9DY3ZYd3Z6RnZrdE9kYXlEQzBMTklHMkpVaURlNWd0UUQ1WHZXc1p3MHQvbEs1CklacjFrZXZRNUJsYWNISmViV1ZNYUQvdllpdFdhSFo4OEF1Y0czaGh2bjkrOGNSTGhNVExiVlFSMWh2UVpBL1JtQXN3eE0KT3VJSmRaUmtxTThLZlF4Z28zQThRNGJhQW1VbnpvM3Zwa0FWdC9uaGtIOTRaRE5rV2U2RlRhdThONStyYTJCZkdjZVA4VApvbjFEeTBLRlpaUlpCREVVRVc0eHdTYUVOYXQ3c2RDNnhpL1d5OURaQUFBRm1NRFBXeDdBejFzZUFBQUFCM056YUMxeWMyCkVBQUFHQkFOK1VpMDBNZEJGZXpqR3p4Z0k2U3RmbTJnQTBqRUova2dFbzZ5M3l1WFBlSXhzUEFwNmZQL3c2dm5hZ0JtYWcKUituNmRXM2lOejJJT1ZVL2VCRmxRblpEZk5ITUE2U2VJRTdlS2t3bDkzaGYzSmNsTGRqLzdJdDNLN2VORDcvNlZiMWtsTwpqVnhvZGgzZjhGZ3RLWWNvQXovVnZSN1R5QUxmR1lIM3IzN3RIYzVJbU0vTEZJU0ZVbjVRNDJlUUk2aGtlYnpGRTRZcExOCmYzbUFjdTI5Z0JCeHJHanRlN0JKbkcyaUJqNzV6MUtXWGZyL2RtakNMbjRXL0Nud0VhT3NEdnh5SmxKdjdleGx6eUVaUU4KUjBWaWJrallidU4zdnduTDE4TDh4YjVMVG5Xc2d3dEN6U0J0aVZJZzN1WUxVQStWNzFyR2NOTGY1U3VTR2E5WkhyME9RWgpXbkJ5WG0xbFRHZy83MklyVm1oMmZQQUxuQnQ0WWI1L2Z2SEVTNFRFeTIxVUVkWWIwR1FQMFpnTE1NVERyaUNYV1VaS2pQCkNuME1ZS053UEVPRzJnSmxKODZONzZaQUZiZjU0WkIvZUdRelpGbnVoVTJydkRlZnEydGdYeG5Iai9FNko5UTh0Q2hXV1UKV1FReEZCRnVNY0VtaERXcmU3SFF1c1l2MXN2UTJRQUFBQU1CQUFFQUFBR0JBSlZtZklNNjdDQmpXcU9KdnFua2EvakRrUwo4TDdpSE5mekg1TnRZWVdPWmRMTlk2L0lRa1pDeFcwTWtSKzlUK0M3QUZKZzBNV2Q5ck5PeUxJZDkxNjZoOVJsNG0xdFJjCnViZ1o2dWZCZ3hGVDlXS21mSEdCNm4zelh5b2pQOEFJTnR6ODVpaUVHVXFFRWtVRVdMd0RGSmdvcFllQ3l1VmZ2ZE92MUgKRm1WWmEwNVo0b3NQNkNENXVmc2djQ1RYQTR6VnZ5ZHVCYkxqdHN5RjdYZjNUdjZUQ1QxU0swZHErQk1OOXRvb0RZaXpwagpzbDh6NzlybXp3eUFyWFlVcnFUUkpsNmpwRkNrWHJLcy9LeG96MHhhbXlMY2RORk9hWE51LzlnTkpjRERsV2hPcFRqNHk4CkpkNXBuV1Jueis1RHJLRFdhY0loUW1CMUxVd2ZLWmQwbVFxaUpzMUMxcXZVUmlKOGExaThKUTI4bHFuWTFRRk9wbk13emcKWEpla2FndThpT1ExRFJlQkhaM0NkcVJUYnY3bVJZSGxramx0dXJmZGc4M3hvM0ErZ1JSR001eUVOcW5xSkplQjhJQVB5UwptMFp0dGdqbHNqNTJ2K1B1NmExMHoxZndKK1VML2N6dTRKeEpOYlp6WTFIMnpLODJBaVI1T3JYNmx2aUEvSWFSRVcwUUFBCkFNQndVeUJpcUc5bEZCUnltL2UvU1VORVMzdHpicUZNdTdIcy84WTV5SnAxKzR6OXUxNGtJR2ttV0Y5eE5HT3hrY3V0cWwKeHVUcndMbjFUaFNQTHQrTjUwTGhVdzR4ZjBhNUxqemdPbklPU0FRbm5HY1Nxa0dTRDlMR21obGE2WmpydFBHY29lQ3JHdAo5M1Vvcmx5YkxNRzFFRFAxWmpKS1RaZzl6OUMwdDlTTGd3ei9DbFhydW9UNXNQVUdKWnUrbHlIZXpSTDRtcHl6OEZMcnlOCkdNci9leVM5bWdISjNVVkZEYjNIZ3BaK1E1SUdBRU5rZVZEcHIwMGhCZXZndGd6YWtBQUFEQkFQVXQ1RitoMnBVby94V1YKenRkcVQvMzA4dFB5MXVMMU1lWFoydEJPQmRwSDJyd0JzdWt0aTIySGtWZUZXQjJFdUlFUXppMzY3MGc1UGdxR1p4Vng4dQpobEE0Rkg4ZXN1NTNQckZqVW9EeFJhb3d3WXBFcFh5Y2pnNUE1MStwR1VQcWljWjB0YjliaWlhc3BWWXZhWW5sdGlnVG5iClN0UExMY29nemNiL0dGcVYyaXlzc3lwTlMwKzBNRTUxcEtxWGNaS2swbi8vVHpZWWs4TW8vZzRsQ3pmUEZQUlZrVVM5blIKWU1pQzRlcEk0TERmbVdnM0xLQ2N1Zk85all3aWgwYlFBQUFNRUE2WEtldDhEMHNvc0puZVh5WFZGd0dyVyszNlhBVGRQTwpMWDdjaStjYzFoOGV1eHdYQWx3aTJJNFhxSmJBVjBsVEhuVGEycXN3Uy9RQlpJUUJWSkZlVjVyS1daZTc4R2F3d1pWTFZNCldETmNwdFFyRTFaM2pGNS9TdUVzdlVxSDE0Tkc5RUFXWG1iUkNzelE0Vlk3NzQrSi9sTFkvMnlDT1diNzlLYTJ5OGxvYUoKVXczWWVtSld3blp2R3hKNldsL3BmQ2xYN3lEVXlXUktLdGl0cWNjbmpCWVkyRE1tZURwdURDYy9ZdDZDc3dLRmRkMkJ1UwpGZGt5cDlZY3VMaDlLZEFBQUFIR3BoYzI5dVFFRlVMVGd3TWxVdWJXOXVkR3hsYjI0dWFX
NTBjbUVCQWdNRUJRWT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCgo=
    kind: Secret
    metadata:
      name: ssh-credentials
      namespace: konveyor-forklift
    type: Opaque
  2. Encode your playbook by concatenating the playbook file and piping it to base64, for example:

    $ cat playbook.yml | base64 -w0
    Note

    You can also use a here document to encode a playbook:

    $ cat << EOF | base64 -w0
    - hosts: localhost
      tasks:
      - debug:
          msg: test
    EOF
  3. Create a Hook CR:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: playbook
      namespace: konveyor-forklift
    spec:
      image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
      playbook: LSBuYW1lOiBNYWluCiAgaG9zdHM6IGxvY2FsaG9zdAogIHRhc2tzOgogIC0gbmFtZTogTG9hZCBQbGFuCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHBsYW4ueW1sCiAgICAgIG5hbWU6IHBsYW4KCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3ZhcnM6CiAgICAgIGZpbGU6IHdvcmtsb2FkLnltbAogICAgICBuYW1lOiB3b3JrbG9hZAoKICAtIG5hbWU6IAogICAgZ2V0ZW50OgogICAgICBkYXRhYmFzZTogcGFzc3dkCiAgICAgIGtleTogInt7IGFuc2libGVfdXNlcl9pZCB9fSIKICAgICAgc3BsaXQ6ICc6JwoKICAtIG5hbWU6IEVuc3VyZSBTU0ggZGlyZWN0b3J5IGV4aXN0cwogICAgZmlsZToKICAgICAgcGF0aDogfi8uc3NoCiAgICAgIHN0YXRlOiBkaXJlY3RvcnkKICAgICAgbW9kZTogMDc1MAogICAgZW52aXJvbm1lbnQ6CiAgICAgIEhPTUU6ICJ7eyBhbnNpYmxlX2ZhY3RzLmdldGVudF9wYXNzd2RbYW5zaWJsZV91c2VyX2lkXVs0XSB9fSIKCiAgLSBrOHNfaW5mbzoKICAgICAgYXBpX3ZlcnNpb246IHYxCiAgICAgIGtpbmQ6IFNlY3JldAogICAgICBuYW1lOiBzc2gtY3JlZGVudGlhbHMKICAgICAgbmFtZXNwYWNlOiBrb252ZXlvci1mb3JrbGlmdAogICAgcmVnaXN0ZXI6IHNzaF9jcmVkZW50aWFscwoKICAtIG5hbWU6IENyZWF0ZSBTU0gga2V5CiAgICBjb3B5OgogICAgICBkZXN0OiB+Ly5zc2gvaWRfcnNhCiAgICAgIGNvbnRlbnQ6ICJ7eyBzc2hfY3JlZGVudGlhbHMucmVzb3VyY2VzWzBdLmRhdGEua2V5IHwgYjY0ZGVjb2RlIH19IgogICAgICBtb2RlOiAwNjAwCgogIC0gYWRkX2hvc3Q6CiAgICAgIG5hbWU6ICJ7eyB3b3JrbG9hZC52bS5pcGFkZHJlc3MgfX0iCiAgICAgIGFuc2libGVfdXNlcjogcm9vdAogICAgICBncm91cHM6IHZtcwoKLSBob3N0czogdm1zCiAgdGFza3M6CiAgLSBuYW1lOiBJbnN0YWxsIGNsb3VkLWluaXQKICAgIGRuZjoKICAgICAgbmFtZToKICAgICAgLSBjbG91ZC1pbml0CiAgICAgIHN0YXRlOiBsYXRlc3QKCiAgLSBuYW1lOiBDcmVhdGUgVGVzdCBGaWxlCiAgICBjb3B5OgogICAgICBkZXN0OiAvdGVzdC50eHQKICAgICAgY29udGVudDogIkhlbGxvIFdvcmxkIgogICAgICBtb2RlOiAwNjQ0Cg==
      serviceAccount: forklift-controller 1
    1
    Specify a serviceAccount to run the hook with, in order to control access to resources on the cluster.
    Note

    To decode an attached playbook, retrieve the resource with custom output and pipe it to base64 -d. For example:

     $ oc get -n konveyor-forklift hook playbook -o \
       go-template='{{ .spec.playbook }}' | base64 -d

    The playbook encoded here runs the following:

    - name: Main
      hosts: localhost
      tasks:
      - name: Load Plan
        include_vars:
          file: plan.yml
          name: plan
    
      - name: Load Workload
        include_vars:
          file: workload.yml
          name: workload
    
      - name:
        getent:
          database: passwd
          key: "{{ ansible_user_id }}"
          split: ':'
    
      - name: Ensure SSH directory exists
        file:
          path: ~/.ssh
          state: directory
          mode: 0750
        environment:
          HOME: "{{ ansible_facts.getent_passwd[ansible_user_id][4] }}"
    
      - k8s_info:
          api_version: v1
          kind: Secret
          name: ssh-credentials
          namespace: konveyor-forklift
        register: ssh_credentials
    
      - name: Create SSH key
        copy:
          dest: ~/.ssh/id_rsa
          content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
          mode: 0600
    
      - add_host:
          name: "{{ workload.vm.ipaddress }}"
          ansible_user: root
          groups: vms
    
    - hosts: vms
      tasks:
      - name: Install cloud-init
        dnf:
          name:
          - cloud-init
          state: latest
    
      - name: Create Test File
        copy:
          dest: /test.txt
          content: "Hello World"
          mode: 0644
  4. Create a Plan CR using the hook:

    kind: Plan
    apiVersion: forklift.konveyor.io/v1beta1
    metadata:
      name: test
      namespace: konveyor-forklift
    spec:
      map:
        network:
          namespace: "konveyor-forklift"
          name: "network"
        storage:
          namespace: "konveyor-forklift"
          name: "storage"
      provider:
        source:
          namespace: "konveyor-forklift"
          name: "boston"
        destination:
          namespace: "konveyor-forklift"
          name: host
      targetNamespace: "konveyor-forklift"
      vms:
        - id: vm-2861
          hooks:
            - hook:
                namespace: konveyor-forklift
                name: playbook
              step: PreHook 1
    1
    Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
Important

In order for a PreHook to run on a VM, the VM must be started and available via SSH.
