12.2. Running cluster checkups


  • Important

12.2.1.

Important

12.2.2.

12.2.2.1.

12.2.2.2.

12.2.3. Running checkups in the command line

12.2.3.1. Running a virtual machine latency checkup

  1. Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

    Example 12.1.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the manifest in the namespace where the checkup will run:

    $ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
  4. Apply the ConfigMap manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.17.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid

  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. Retrieve the checkup results:

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

    Example output:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.completionTimestamp: "2022-01-01T09:00:00Z"
      status.startTimestamp: "2022-01-01T09:00:07Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000"
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"
  9. View the checkup log:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and the ConfigMap when you no longer need them:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup

    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Delete the ServiceAccount, Role, and RoleBinding manifest that you previously created:

    $ oc delete -f <latency_sa_roles_rolebinding>.yaml
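The latency results in the ConfigMap are reported in nanoseconds. As a rough illustration, separate from the checkup itself (the temporary file name is arbitrary and the values are copied from the example output above), the average latency can be extracted and converted to milliseconds with standard shell tools:

```shell
# Sample data copied from the checkup's example output; in practice you would
# save the output of:
#   oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
cat <<'EOF' > /tmp/latency-result.yaml
  status.result.avgLatencyNanoSec: "177000"
  status.result.maxLatencyNanoSec: "244000"
EOF

# Extract the nanosecond value and convert it to milliseconds.
avg_ns=$(grep 'avgLatencyNanoSec' /tmp/latency-result.yaml | tr -d ' "' | cut -d: -f2)
awk -v ns="$avg_ns" 'BEGIN { printf "average latency: %.3f ms\n", ns / 1000000 }'
```

With the sample value of 177000 ns, this prints an average latency of 0.177 ms, well under the 10 ms threshold configured in spec.param.maxDesiredLatencyMilliseconds.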

12.2.3.2. Running a storage checkup

  • A ClusterRoleBinding that binds the cluster-reader ClusterRole to the checkup service account, where <target_namespace> is the namespace in which the checkup runs:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubevirt-storage-checkup-clustereader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-reader
    subjects:
    - kind: ServiceAccount
      name: storage-checkup-sa
      namespace: <target_namespace>

  1. Create a ServiceAccount, Role, and RoleBinding manifest for the storage checkup:

    Example 12.2.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: storage-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: storage-checkup-role
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: ["get", "update"]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachines" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "get" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume" ]
        verbs: [ "update" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstancemigrations" ]
        verbs: [ "create" ]
      - apiGroups: [ "cdi.kubevirt.io" ]
        resources: [ "datavolumes" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "" ]
        resources: [ "persistentvolumeclaims" ]
        verbs: [ "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: storage-checkup-role
    subjects:
      - kind: ServiceAccount
        name: storage-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: storage-checkup-role
  2. Apply the manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
  3. Create a ConfigMap and Job manifest for the checkup:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      namespace: $CHECKUP_NAMESPACE
    data:
      spec.timeout: 10m
      spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization
      spec.param.vmiTimeout: 3m
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: storage-checkup
      namespace: $CHECKUP_NAMESPACE
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccount: storage-checkup-sa
          restartPolicy: Never
          containers:
            - name: storage-checkup
              image: quay.io/kiagnose/kubevirt-storage-checkup:main
              imagePullPolicy: Always
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: $CHECKUP_NAMESPACE
                - name: CONFIGMAP_NAME
                  value: storage-checkup-config

  4. Apply the ConfigMap and Job manifest:

    $ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
  5. Wait for the job to complete:

    $ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m
  6. Retrieve the checkup results:

    $ oc get configmap storage-checkup-config -n <target_namespace> -o yaml

    Example output:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-storage
    data:
      spec.timeout: 10m
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2023-07-31T13:14:38Z"
      status.completionTimestamp: "2023-07-31T13:19:41Z"
      status.result.cnvVersion: 4.17.2
      status.result.defaultStorageClass: trident-nfs
      status.result.goldenImagesNoDataSource: <data_import_cron_list>
      status.result.goldenImagesNotUpToDate: <data_import_cron_list>
      status.result.ocpVersion: 4.17.0
      status.result.pvcBound: "true"
      status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list>
      status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list>
      status.result.storageProfilesWithSmartClone: <storage_profile_list>
      status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list>
      status.result.storageProfilesWithRWX: |-
        ocs-storagecluster-ceph-rbd
        ocs-storagecluster-ceph-rbd-virtualization
        ocs-storagecluster-cephfs
        trident-iscsi
        trident-minio
        trident-nfs
        windows-vms
      status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted
      status.result.vmHotplugVolume: |-
        VMI "vmi-under-test-dhkb8" hotplug volume ready
        VMI "vmi-under-test-dhkb8" hotplug volume removed
      status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed
      status.result.vmVolumeClone: 'DV cloneType: "csi-clone"'
      status.result.vmsWithNonVirtRbdStorageClass: <vm_list>
      status.result.vmsWithUnsetEfsStorageClass: <vm_list>
  7. Delete the job and the ConfigMap when you no longer need them:

    $ oc delete job -n <target_namespace> storage-checkup

    $ oc delete configmap -n <target_namespace> storage-checkup-config
  8. Delete the ServiceAccount, Role, and RoleBinding manifest that you previously created:

    $ oc delete -f <storage_sa_roles_rolebinding>.yaml
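A simple way to gate automation on the checkup outcome is to test the status.succeeded key. A minimal sketch, assuming the ConfigMap dump has been saved to a file (the file below merely replays two fields from the example output above; in a live cluster you would save the output of the oc get configmap command from step 6):

```shell
# Sample data copied from the example output above.
cat <<'EOF' > /tmp/storage-result.yaml
  status.succeeded: "true"
  status.failureReason: ""
EOF

# Exit non-zero unless the checkup reported success.
if grep -q 'status.succeeded: "true"' /tmp/storage-result.yaml; then
  echo "storage checkup passed"
else
  echo "storage checkup failed" >&2
  exit 1
fi
```

When status.succeeded is anything other than "true", status.failureReason carries the explanation, so printing that field alongside the failure message is a natural extension.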

12.2.3.3. Running a DPDK checkup

  1. Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:

    Example 12.3.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "get", "update" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kiagnose-configmap-access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-dpdk-checker
    rules:
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "create", "get", "delete" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/console" ]
        verbs: [ "get" ]
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "create", "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-dpdk-checker
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubevirt-dpdk-checker
  2. Apply the manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name>
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
  4. Apply the ConfigMap manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dpdk-checkup
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: dpdk-checkup-sa
          restartPolicy: Never
          containers:
            - name: dpdk-checkup
              image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.17.0
              imagePullPolicy: Always
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: dpdk-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid

  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <dpdk_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
  8. Retrieve the checkup results:

    $ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml

    Example output:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2023-07-31T13:14:38Z"
      status.completionTimestamp: "2023-07-31T13:19:41Z"
      status.result.trafficGenSentPackets: "480000000"
      status.result.trafficGenOutputErrorPackets: "0"
      status.result.trafficGenInputErrorPackets: "0"
      status.result.trafficGenActualNodeName: worker-dpdk1
      status.result.vmUnderTestActualNodeName: worker-dpdk2
      status.result.vmUnderTestReceivedPackets: "480000000"
      status.result.vmUnderTestRxDroppedPackets: "0"
      status.result.vmUnderTestTxDroppedPackets: "0"
  9. Delete the job and the ConfigMap when you no longer need them:

    $ oc delete job -n <target_namespace> dpdk-checkup

    $ oc delete configmap -n <target_namespace> dpdk-checkup-config
  10. Delete the ServiceAccount, Role, and RoleBinding manifest that you previously created:

    $ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
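The DPDK checkup reports raw packet counters rather than a loss figure. As an illustration using the sample values from the example output above, the packet loss percentage can be derived from trafficGenSentPackets and vmUnderTestReceivedPackets:

```shell
# Counter values taken from the example output above.
sent=480000000
received=480000000

# Percentage of generated packets that never reached the VM under test.
awk -v s="$sent" -v r="$received" \
  'BEGIN { printf "packet loss: %.4f%%\n", (s - r) * 100 / s }'
```

With the sample counters the loss is 0.0000%, which together with the zero Rx/Tx dropped-packet counters indicates a clean run.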
12.2.3.3.1. DPDK checkup config map parameters

Table 12.1.

12.2.3.3.2. Building a container disk image for RHEL virtual machines

  • You have installed guestfs-tools, which provides the virt-customize utility:

    # dnf install guestfs-tools

  1. Verify that image builder is installed by listing the available distributions:

    # composer-cli distros list

    Note

    To run the composer-cli commands as a non-root user, add your user to the weldr group and start a new session:

    # usermod -a -G weldr <user>

    $ newgrp weldr
  2. Create an image blueprint file in TOML format:

    $ cat << EOF > dpdk-vm.toml
    name = "dpdk_image"
    description = "Image to use with the DPDK checkup"
    version = "0.0.1"
    distro = "rhel-9.4"
    
    [[customizations.user]]
    name = "root"
    password = "redhat"
    
    [[packages]]
    name = "dpdk"
    
    [[packages]]
    name = "dpdk-tools"
    
    [[packages]]
    name = "driverctl"
    
    [[packages]]
    name = "tuned-profiles-cpu-partitioning"
    
    [customizations.kernel]
    append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"
    
    [customizations.services]
    disabled = ["NetworkManager-wait-online", "sshd"]
    EOF
  3. Push the blueprint file to image builder:

    # composer-cli blueprints push dpdk-vm.toml

  4. Start the compose to generate the system image in qcow2 format:

    # composer-cli compose start dpdk_image qcow2

  5. Check the status of the compose and wait for it to finish:

    # composer-cli compose status

  6. When the compose status is FINISHED, download the image file, where <UUID> is the UUID of the compose:

    # composer-cli compose image <UUID>
  7. Create a customization script:

    $ cat <<EOF >customize-vm
    #!/bin/bash
    
    # Setup hugepages mount
    mkdir -p /mnt/huge
    echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab
    
    # Create vfio-noiommu.conf
    echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
    
    # Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
    sed -i 's/\(--allow-rpcs=[^"]*\)/\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga
    
    # Disable Bracketed-paste mode
    echo "set enable-bracketed-paste off" >> /root/.inputrc
    EOF
  8. Apply the customization script to the image by using virt-customize:

    $ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
  9. Create a Dockerfile that copies the disk image into a container disk:

    $ cat << EOF > Dockerfile
    FROM scratch
    COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
    EOF

  10. Build the container disk image:

    $ podman build . -t dpdk-rhel:latest

  11. Push the container disk image to a registry that is accessible from your cluster:

    $ podman push dpdk-rhel:latest
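The pushed container disk is what the DPDK checkup boots as the VM under test, so it is referenced from the checkup ConfigMap created earlier. Assuming the image was pushed to a registry path of the form <registry>/dpdk-rhel:latest (a placeholder, not a value from this document), the relevant key would look like:

```yaml
data:
  spec.param.vmUnderTestContainerDiskImage: "<registry>/dpdk-rhel:latest"
```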