12.2. Running cluster checkups
Important
12.2.1. About the cluster checkup framework
12.2.2. Running checkups in the web console
12.2.2.1.
12.2.2.2.
12.2.3. Running checkups using the command line
12.2.3.1. Running a latency checkup using the command line

Use the latency checkup to measure network latency between two virtual machines that are attached to a secondary network.
Example 12.1. Latency checkup ServiceAccount, Role, and RoleBinding manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get", "create", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kubevirt-vm-latency-checker
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kiagnose-configmap-access
  apiGroup: rbac.authorization.k8s.io

Apply the manifest in the target namespace:

$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml

Create a ConfigMap that contains the input parameters for the checkup:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-vm-latency
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"  # NetworkAttachmentDefinition to measure over
  spec.param.maxDesiredLatencyMilliseconds: "10"  # optional: the checkup fails if measured latency exceeds this value
  spec.param.sampleDurationSeconds: "5"  # optional: duration of the latency measurement
  spec.param.sourceNode: "worker1"  # optional: node on which the source virtual machine is scheduled
  spec.param.targetNode: "worker2"  # optional: node on which the target virtual machine is scheduled

$ oc apply -n <target_namespace> -f <latency_config_map>.yaml

Create a Job that runs the checkup:

apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
  labels:
    kiagnose/checkup-type: kubevirt-vm-latency
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
      - name: vm-latency-checkup
        image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.15.0
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: "RuntimeDefault"
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target_namespace>
        - name: CONFIGMAP_NAME
          value: kubevirt-vm-latency-checkup-config
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

Apply the Job and wait for it to complete:

$ oc apply -n <target_namespace> -f <latency_job>.yaml
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m

Retrieve the results:

$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

Example output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  namespace: <target_namespace>
  labels:
    kiagnose/checkup-type: kubevirt-vm-latency
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"
  spec.param.maxDesiredLatencyMilliseconds: "10"
  spec.param.sampleDurationSeconds: "5"
  spec.param.sourceNode: "worker1"
  spec.param.targetNode: "worker2"
  status.succeeded: "true"
  status.failureReason: ""
  status.startTimestamp: "2022-01-01T09:00:00Z"
  status.completionTimestamp: "2022-01-01T09:00:07Z"
  status.result.avgLatencyNanoSec: "177000"
  status.result.maxLatencyNanoSec: "244000"  # maximum measured latency, in nanoseconds
  status.result.measurementDurationSec: "5"
  status.result.minLatencyNanoSec: "135000"
  status.result.sourceNode: "worker1"
  status.result.targetNode: "worker2"

View the checkup log:

$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>

Delete the checkup resources when you no longer need them:

$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
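While the result ConfigMap still exists, you can read a single measurement directly instead of parsing the full YAML; a minimal sketch using oc's jsonpath output (the backslashes escape the dots that are part of the key name itself):

$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> \
    -o jsonpath='{.data.status\.result\.avgLatencyNanoSec}'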
12.2.3.2. Running a storage checkup using the command line

Use the storage checkup to verify that the cluster storage is configured optimally for virtual machines.
Bind the cluster-reader cluster role to the checkup service account so that the checkup can read cluster-scoped resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-storage-checkup-clustereader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- kind: ServiceAccount
  name: storage-checkup-sa
  namespace: <target_namespace>  # namespace in which the checkup runs
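To confirm that the binding is in place before you run the checkup, one option is service account impersonation with oc auth can-i; a quick sketch (storageclasses stands in for any cluster-scoped resource that cluster-reader can read):

$ oc auth can-i get storageclasses \
    --as=system:serviceaccount:<target_namespace>:storage-checkup-sa

The command prints yes once the ClusterRoleBinding is effective.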
Example 12.2. Storage checkup ServiceAccount, Role, and RoleBinding manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: storage-checkup-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachines"]
  verbs: ["create", "delete"]
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume"]
  verbs: ["update"]
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstancemigrations"]
  verbs: ["create"]
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-checkup-role
subjects:
- kind: ServiceAccount
  name: storage-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: storage-checkup-role

Apply the manifest in the target namespace:

$ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml

Create the ConfigMap and Job that run the checkup. Substitute $CHECKUP_NAMESPACE with the target namespace before applying:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  namespace: $CHECKUP_NAMESPACE
data:
  spec.timeout: 10m
  spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization
  spec.param.vmiTimeout: 3m
---
apiVersion: batch/v1
kind: Job
metadata:
  name: storage-checkup
  namespace: $CHECKUP_NAMESPACE
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: storage-checkup-sa
      restartPolicy: Never
      containers:
      - name: storage-checkup
        image: quay.io/kiagnose/kubevirt-storage-checkup:main
        imagePullPolicy: Always
        env:
        - name: CONFIGMAP_NAMESPACE
          value: $CHECKUP_NAMESPACE
        - name: CONFIGMAP_NAME
          value: storage-checkup-config

Apply the manifest and wait for the Job to complete:

$ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
$ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m

Retrieve the results:

$ oc get configmap storage-checkup-config -n <target_namespace> -o yaml

Example output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-storage
data:
  spec.timeout: 10m
  status.succeeded: "true"  # whether the checkup completed successfully
  status.failureReason: ""  # reason for failure, if any
  status.startTimestamp: "2023-07-31T13:14:38Z"
  status.completionTimestamp: "2023-07-31T13:19:41Z"
  status.result.cnvVersion: 4.16.2
  status.result.defaultStorageClass: trident-nfs
  status.result.goldenImagesNoDataSource: <data_import_cron_list>
  status.result.goldenImagesNotUpToDate: <data_import_cron_list>
  status.result.ocpVersion: 4.16.0
  status.result.pvcBound: "true"
  status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list>
  status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list>
  status.result.storageProfilesWithSmartClone: <storage_profile_list>
  status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list>
  status.result.storageProfilesWithRWX: |-
    ocs-storagecluster-ceph-rbd
    ocs-storagecluster-ceph-rbd-virtualization
    ocs-storagecluster-cephfs
    trident-iscsi
    trident-minio
    trident-nfs
    windows-vms
  status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted
  status.result.vmHotplugVolume: |-
    VMI "vmi-under-test-dhkb8" hotplug volume ready
    VMI "vmi-under-test-dhkb8" hotplug volume removed
  status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed
  status.result.vmVolumeClone: 'DV cloneType: "csi-clone"'
  status.result.vmsWithNonVirtRbdStorageClass: <vm_list>
  status.result.vmsWithUnsetEfsStorageClass: <vm_list>

Delete the checkup resources when you no longer need them:

$ oc delete job -n <target_namespace> storage-checkup
$ oc delete configmap -n <target_namespace> storage-checkup-config
$ oc delete -f <storage_sa_roles_rolebinding>.yaml
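If status.succeeded reports "false", the Job log usually names the sub-check that failed; inspect it before deleting the checkup resources, using the same pattern as the latency checkup:

$ oc logs job/storage-checkup -n <target_namespace>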
12.2.3.3. Running a DPDK checkup using the command line

Use the DPDK checkup to verify that a cluster node can run a virtual machine with a DPDK workload with zero packet loss.
Example 12.3. DPDK checkup ServiceAccount, Role, and RoleBinding manifest
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-dpdk-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["create", "get", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-dpdk-checker
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-dpdk-checker

Apply the manifest in the target namespace:

$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml

Create a ConfigMap that contains the input parameters for the checkup:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>  # NetworkAttachmentDefinition of the SR-IOV NICs the checkup uses
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.3.1"  # traffic generator container disk image
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.3.1"  # container disk image for the VM under test

$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml

Create a Job that runs the checkup:

apiVersion: batch/v1
kind: Job
metadata:
  name: dpdk-checkup
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: dpdk-checkup-sa
      restartPolicy: Never
      containers:
      - name: dpdk-checkup
        image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.15.0
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: "RuntimeDefault"
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target_namespace>
        - name: CONFIGMAP_NAME
          value: dpdk-checkup-config
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

Apply the Job and wait for it to complete:

$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m

Retrieve the results:

$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml

Example output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.3.1"
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.3.1"
  status.succeeded: "true"
  status.failureReason: ""
  status.startTimestamp: "2023-07-31T13:14:38Z"
  status.completionTimestamp: "2023-07-31T13:19:41Z"
  status.result.trafficGenSentPackets: "480000000"  # packets sent by the traffic generator
  status.result.trafficGenOutputErrorPackets: "0"
  status.result.trafficGenInputErrorPackets: "0"
  status.result.trafficGenActualNodeName: worker-dpdk1
  status.result.vmUnderTestActualNodeName: worker-dpdk2
  status.result.vmUnderTestReceivedPackets: "480000000"  # packets received by the VM under test
  status.result.vmUnderTestRxDroppedPackets: "0"
  status.result.vmUnderTestTxDroppedPackets: "0"

Delete the checkup resources when you no longer need them:

$ oc delete job -n <target_namespace> dpdk-checkup
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
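Before deleting the result ConfigMap, you can verify zero packet loss by comparing the sent and received counters; a shell sketch built on the result keys shown in the example output above:

$ sent=$(oc get configmap dpdk-checkup-config -n <target_namespace> \
    -o jsonpath='{.data.status\.result\.trafficGenSentPackets}')
$ received=$(oc get configmap dpdk-checkup-config -n <target_namespace> \
    -o jsonpath='{.data.status\.result\.vmUnderTestReceivedPackets}')
$ [ "$sent" = "$received" ] && echo "no packet loss" || echo "packet loss detected"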
12.2.3.3.1. DPDK checkup config map parameters

You can set the following parameters in the data stanza of the checkup ConfigMap:
| Parameter | Description |
| spec.timeout | Duration, for example 10m, after which the checkup fails if it has not completed. |
| spec.param.networkAttachmentDefinitionName | Name of the NetworkAttachmentDefinition that connects the checkup virtual machines to the SR-IOV NICs. |
| spec.param.trafficGenContainerDiskImage | Container disk image for the traffic generator virtual machine. |
| spec.param.vmUnderTestContainerDiskImage | Container disk image for the virtual machine under test. |
12.2.3.3.2. Building a container disk image for RHEL virtual machines

You can build a custom RHEL 8 virtual machine disk image in qcow2 format and use it as the container disk image for the DPDK checkup virtual machines.
Install libguestfs-tools, which provides the virt-customize utility:

# dnf install libguestfs-tools
List the available distributions:

# composer-cli distros list

Note: To run composer-cli commands as a non-root user, add the user to the weldr group and start a new session:

# usermod -a -G weldr user
$ newgrp weldr

Create the blueprint file for the image:

$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"

[[customizations.user]]
name = "root"
password = "redhat"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF

Push the blueprint and start the compose in qcow2 format:

# composer-cli blueprints push dpdk-vm.toml
# composer-cli compose start dpdk_image qcow2

Check the compose status and, when it finishes, download the disk image:

# composer-cli compose status
# composer-cli compose image <UUID>

Create a script that configures hugepages, VFIO, and the QEMU guest agent inside the image:

$ cat <<EOF >customize-vm
#!/bin/bash

# Setup hugepages mount
mkdir -p /mnt/huge
echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab

# Create vfio-noiommu.conf
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf

# Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
sed -i '/^BLACKLIST_RPC=/ { s/guest-exec-status//; s/guest-exec//g }' /etc/sysconfig/qemu-ga
sed -i '/^BLACKLIST_RPC=/ { s/,\+/,/g; s/^,\|,$//g }' /etc/sysconfig/qemu-ga
EOF

Run the script against the downloaded disk image:

$ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel

Wrap the customized disk in a container disk image and push it to a registry that the cluster can reach:

$ cat << EOF > Dockerfile
FROM scratch
COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
EOF

$ podman build . -t dpdk-rhel:latest
$ podman push dpdk-rhel:latest
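If the resulting container disk does not boot, a first sanity check is that the downloaded artifact really is a qcow2 disk; a quick sketch with qemu-img (shipped in the qemu-img package, which the steps above do not install):

$ qemu-img info <UUID>-disk.qcow2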