Chapter 6. Advanced migration options
6.1. Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController
custom resource (CR).
Procedure
Patch the ForkliftController CR:

$ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge

Specify the precopy interval in minutes. The default value is 60.

You do not need to restart the forklift-controller pod.
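For reference, the patched field lives directly under spec. A minimal sketch of the resulting CR fragment, with an illustrative interval value:

```yaml
# Sketch of the ForkliftController spec after patching.
# 30 is an illustrative value in minutes; the default is 60.
spec:
  controller_precopy_interval: 30
```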
6.2. Creating custom rules for the Validation service
The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.
6.2.1. About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

The following .rego file example checks for distributed resource scheduling (DRS) enabled in the cluster of a VMware VM:
drs_enabled.rego example
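The example file itself is not reproduced here; the following sketch shows what such a rule can look like, following the {"category", "label", "assessment"} structure described above (the attribute path, label, and assessment strings are illustrative):

```rego
package io.konveyor.forklift.vmware

# True when the cluster hosting the VM has DRS enabled.
# The attribute path is illustrative; check the inventory JSON
# of your VM for the actual attribute name.
has_drs_enabled {
    input.host.cluster.drsEnabled
}

# Add a concern when DRS is enabled. The hash is appended to the
# "concerns" key in the VM's inventory record.
concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization."
    }
}
```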
6.2.2. Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
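Schematically, the conflict looks like this (valid_input is a hypothetical rule name):

```rego
# Default rule shipped with the Validation service (schematic):
default valid_input = false

# A custom rule that declares a different default for the same name
# conflicts with it, and OPA fails to load the policies:
# default valid_input = true
```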
Procedure
Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>

Go to the OPA policies directory for your provider:

$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider>

Specify vmware or ovirt.

Search for the default policies:

$ grep -R "default" *
6.2.3. Creating a validation rule
You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

- If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
- If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example
Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

"numaNodeAffinity": [
    "0",
    "1"
],

You create a Rego query based on this attribute and add it to the forklift-validation-config config map:

count(input.numaNodeAffinity) != 0
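Put together, a complete custom rule based on this query could look like the following sketch (the category, label, and assessment values are illustrative):

```rego
package io.konveyor.forklift.vmware

# True when the VM has at least one NUMA node affinity entry.
has_numa_affinity {
    count(input.numaNodeAffinity) != 0
}

concerns[flag] {
    has_numa_affinity
    flag := {
        "category": "Warning",
        "label": "VM has a NUMA node affinity",
        "assessment": "NUMA node affinity is not preserved by the migration."
    }
}
```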
Procedure
Create a config map CR according to the following example:

1. Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
2. Specify the concerns name and Rego query.
3. Specify the concerns name and flag parameter values.
4. Allowed values are Critical, Warning, and Information.
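The config map itself is not reproduced here; a sketch matching the numbered callouts, assuming the NUMA rule from the previous example (all names and strings are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: forklift-validation-config
  namespace: openshift-mtv
data:
  vmware_numa_affinity.rego: |-
    package io.konveyor.forklift.vmware   # (1) provider package name

    has_numa_affinity {                   # (2) concerns name and Rego query
        count(input.numaNodeAffinity) != 0
    }

    concerns[flag] {                      # (3) concerns name and flag values
        has_numa_affinity
        flag := {
            "category": "Warning",        # (4) Critical, Warning, or Information
            "label": "VM has a NUMA node affinity",
            "assessment": "NUMA node affinity is not preserved by the migration."
        }
    }
```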
Stop the Validation pod by scaling the forklift-controller deployment to 0:

$ oc scale -n openshift-mtv --replicas=0 deployment/forklift-controller

Start the Validation pod by scaling the forklift-controller deployment to 1:

$ oc scale -n openshift-mtv --replicas=1 deployment/forklift-controller

Check the Validation pod log to verify that the pod started:

$ oc logs -f <validation_pod>

If the custom rule conflicts with a default rule, the Validation pod will not start.

Remove the source provider:

$ oc delete provider <provider> -n openshift-mtv

Add the source provider to apply the new rule:
You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.
6.2.4. Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

The rules version is recorded in a rules_version.rego file for each provider.
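The file is a small Rego module; a sketch of its shape, with an example version number:

```rego
package io.konveyor.forklift.vmware

default rules_version = 5
```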
Procedure
Retrieve the current rules version:

$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

Example output

{ "result": { "rules_version": 5 } }

Connect to the terminal of the Validation pod:

$ oc rsh <validation_pod>

Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

Log out of the Validation pod terminal.

Verify the updated rules version:

$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

Example output

{ "result": { "rules_version": 6 } }
6.3. Retrieving the Inventory service JSON
You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
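For instance, a hypothetical condition keyed on that attribute might look like the following sketch (the compared value is illustrative; check the inventory JSON of your VM for real values):

```rego
# Hypothetical condition based on an "input" attribute. The attribute
# path comes from the Inventory service JSON of the VM.
has_snapshot {
    input.snapshot.kind == "VirtualMachineSnapshot"
}
```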
Procedure
Retrieve the routes for the project:

$ oc get route -n openshift-mtv

Retrieve the Inventory service route:

$ oc get route <inventory_service> -n openshift-mtv

Retrieve the access token:

$ TOKEN=$(oc whoami -t)

Trigger an HTTP GET request (for example, using curl):

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k

Retrieve the UUID of a provider:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k

Retrieve the VMs of a provider:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k

Retrieve the details of a VM:

$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
6.4. Adding hooks to a migration plan
You can add hooks to a migration plan from the command line by using the Migration Toolkit for Virtualization API.
6.4.1. API-based hooks for MTV migration plans
You can add hooks to a migration plan from the command line by using the Migration Toolkit for Virtualization API.
Default hook image
The default hook image for an MTV hook is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.
Hook execution
An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster, using the default ServiceAccount in the konveyor-forklift namespace.
PreHooks and PostHooks
You specify hooks per VM, and you can run each as a PreHook or a PostHook. In this context, a PreHook is a hook that runs before a migration and a PostHook is a hook that runs after a migration.
When you add a hook, you must specify the namespace where the hook CR is located, the name of the hook, and whether the hook is a PreHook or a PostHook.
In order for a PreHook to run on a VM, the VM must be started and available via SSH.
Example PreHook:
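The example Hook CR is not reproduced here; a minimal sketch of a PreHook, with illustrative names:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: prehook
  namespace: konveyor-forklift
spec:
  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
  playbook: |
    <base64-encoded Ansible playbook>
```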
6.4.2. Adding Hook CRs to a VM migration by using the MTV API
You can add a PreHook or a PostHook Hook CR when you migrate a virtual machine from the command line by using the Migration Toolkit for Virtualization API. A PreHook runs before a migration; a PostHook runs after.

You can retrieve additional information stored in a secret or in a configMap by using a k8s module.

For example, you can create a hook CR to install cloud-init on a VM and write a file before migration.
Procedure
If needed, create a secret with an SSH private key for the VM. You can either use an existing key or generate a key pair, install the public key on the VM, and base64 encode the private key in the secret.
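A sketch of such a secret, with illustrative name and key (the data value is the base64-encoded private key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssh-credentials
  namespace: konveyor-forklift
type: Opaque
data:
  key: <base64-encoded SSH private key>
```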
Encode your playbook by concatenating the file and piping it to base64, for example:

$ cat playbook.yml | base64 -w0
Note: You can also use a here document to encode a playbook:
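For example, a here document piped to base64 produces the encoded string without an intermediate file (the playbook content is illustrative):

```shell
# Encode an inline playbook; -w0 (GNU coreutils) disables line
# wrapping so the result can be pasted into the Hook CR as one line.
encoded=$(cat << 'EOF' | base64 -w0
- hosts: localhost
  tasks:
    - ansible.builtin.debug:
        msg: Hello from a migration hook
EOF
)
echo "$encoded"
```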
Create a Hook CR:
Specify a serviceAccount to run the hook with in order to control access to resources on the cluster.
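The Hook CR is not reproduced here; a sketch carrying the encoded playbook and the serviceAccount described above (names are illustrative):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: playbook
  namespace: konveyor-forklift
spec:
  image: registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel8:v1.8.2-2
  playbook: <output of `cat playbook.yml | base64 -w0`>
  serviceAccount: forklift-hook  # controls access to cluster resources
```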
Note: To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:

$ oc get -n konveyor-forklift hook playbook -o \
  go-template='{{ .spec.playbook }}' | base64 -d
The playbook encoded in the Hook CR runs the following tasks:
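The decoded playbook is not reproduced here; a sketch of what it might contain, matching the cloud-init example mentioned earlier (package and file names are illustrative):

```yaml
# Illustrative playbook: install cloud-init and write a marker file
# before the migration starts.
- hosts: localhost
  tasks:
    - name: Install cloud-init
      ansible.builtin.package:
        name: cloud-init
        state: present
    - name: Write a marker file
      ansible.builtin.copy:
        dest: /tmp/pre_migration
        content: "Migration from MTV is about to start"
```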
Create a Plan CR that uses the hook:
Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
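The Plan CR itself is not reproduced here; a sketch of a plan that attaches the hook to one VM as a PreHook (provider, map, and VM names are illustrative):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: test-plan
  namespace: konveyor-forklift
spec:
  map:
    network:
      name: network-map
      namespace: konveyor-forklift
    storage:
      name: storage-map
      namespace: konveyor-forklift
  provider:
    source:
      name: vmware-provider
      namespace: konveyor-forklift
    destination:
      name: host
      namespace: konveyor-forklift
  targetNamespace: default
  vms:
    - id: vm-123
      hooks:
        - hook:
            name: playbook
            namespace: konveyor-forklift
          step: PreHook   # or PostHook, to run after the migration
```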
In order for a PreHook to run on a VM, the VM must be started and available via SSH.