Virtualization


OpenShift Container Platform 4.17

Installing, using, and release information for OpenShift Virtualization

Red Hat OpenShift Documentation Team

Abstract

This document provides information about how to use OpenShift Virtualization in OpenShift Container Platform.

Chapter 1. About

1.1. About OpenShift Virtualization

Learn about the features and support scope of OpenShift Virtualization.

1.1.1. What you can do with OpenShift Virtualization

OpenShift Virtualization provides scalable, enterprise-grade virtualization functionality in Red Hat OpenShift. You can use it to manage virtual machines (VMs) alongside container workloads.

Note

If you have a Red Hat OpenShift Virtualization Engine subscription, you can run an unlimited number of VMs on subscribed hosts, but you cannot run application instances in containers. For details, see the section about Red Hat OpenShift Virtualization Engine and related products in the Subscription Guide.

OpenShift Virtualization adds new objects to your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. The following virtualization tasks are supported:

  • Creating and managing Linux and Windows VMs
  • Running pod and VM workloads alongside each other in a cluster
  • Connecting to VMs through a variety of console and CLI tools
  • Importing and cloning existing VMs
  • Managing the network interface controllers and storage disks attached to VMs
  • Live migrating VMs between nodes

You can manage the cluster and its virtualization resources by using the Virtualization perspective of the OpenShift Container Platform web console and the OpenShift CLI (oc).

OpenShift Virtualization is designed and tested to work smoothly with Red Hat OpenShift Data Foundation features.

Important

When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. For details, see Optimizing ODF PersistentVolumes for Windows VMs.

You can use OpenShift Virtualization with OVN-Kubernetes or with one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins.

You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies.
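As a sketch of that setup, a ScanSettingBinding that runs both moderate profiles with the Compliance Operator's default scan setting might look as follows. The profile names come from the text; the binding name is an assumption.

```yaml
# Hypothetical ScanSettingBinding: binds the ocp4-moderate and
# ocp4-moderate-node profiles to the default ScanSetting so that both
# platform-level and node-level compliance checks run on a schedule.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance        # assumed name
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```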

1.1.2. Comparing OpenShift Virtualization to VMware vSphere

If you are familiar with VMware vSphere, the following table lists the OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying OpenShift Container Platform, OpenShift Virtualization does not have a direct counterpart for every vSphere concept or component.

Table 1.1. Mapping of vSphere concepts to their closest OpenShift Virtualization counterparts
vSphere concept | OpenShift Virtualization | Description

Datastore

Persistent volume (PV) +
Persistent volume claim (PVC)

Stores VM disks. A PV represents existing storage and is attached to a VM through a PVC. When created with the ReadWriteMany (RWX) access mode, a PVC can be mounted simultaneously by multiple VMs.

Dynamic Resource Scheduling (DRS)

Pod eviction policy +
Descheduler

Provides active resource balancing. A combination of pod eviction policies and the Descheduler lets you live migrate VMs to more suitable nodes to manage node resource utilization.

NSX

Multus +
OVN-Kubernetes +
Third-party container network interface (CNI) plugins

Provides overlay network configurations. There is no direct equivalent for NSX in OpenShift Virtualization, but you can use the OVN-Kubernetes network provider or install certified third-party CNI plugins.

Storage Policy Based Management (SPBM)

Storage class

Provides policy-based storage selection. Storage classes represent various storage types and describe storage capabilities such as quality of service, backup policy, reclaim policy, and whether volume expansion is allowed. A PVC can request a specific storage class to satisfy application requirements.

vCenter

OpenShift metrics and monitoring

Provides host and VM metrics. You can view and monitor the overall health of the cluster and of VMs by using the OpenShift Container Platform web console.

vMotion

Live migration

Moves a running VM to another node without interruption. Live migration requires that the PVCs attached to the VM have the ReadWriteMany (RWX) access mode.

vSwitch
DvSwitch

NMState Operator +
Multus

Provides physical network configuration. You can use the NMState Operator to apply state-driven network configuration and manage a variety of network interface types, including Linux bridges and network bonds. With Multus, you can attach multiple network interfaces and connect VMs to external networks.

1.1.3. Supported cluster versions for OpenShift Virtualization

OpenShift Virtualization 4.17 is supported for use on OpenShift Container Platform 4.17 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

1.1.4. About volume and access modes for virtual machine disks

If you use a known storage provider with its storage API, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access modes.

For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:

  • Live migration requires the ReadWriteMany (RWX) access mode.
  • The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

    For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
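As a minimal sketch of those recommendations, a DataVolume that requests a Block-mode, RWX disk might look as follows. The DataVolume name, size, and the Ceph RBD storage class name are assumptions for illustration.

```yaml
# Hypothetical DataVolume: requests a ReadWriteMany, Block-mode PVC so that
# the resulting VM disk supports live migration and avoids the extra
# filesystem and disk-image layers of Filesystem mode.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-disk-rwx                # assumed name
spec:
  source:
    blank: {}
  storage:
    accessModes:
      - ReadWriteMany
    volumeMode: Block
    resources:
      requests:
        storage: 30Gi              # assumed size
    storageClassName: ocs-storagecluster-ceph-rbd   # assumed Ceph RBD class
```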

Important

You cannot live migrate virtual machines with the following configurations:

  • Storage volumes with the ReadWriteOnce (RWO) access mode
  • Passthrough features such as GPUs

Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots.
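A minimal fragment showing where that field lives, assuming a VM named vm-with-gpu (the name and the surrounding spec details are illustrative):

```yaml
# Fragment of a VirtualMachine that cannot be live migrated (for example,
# one with an RWO disk or a GPU passthrough device): evictionStrategy: None
# powers the VM down during node reboots instead of attempting migration.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-with-gpu                # assumed name
spec:
  runStrategy: Always
  template:
    spec:
      evictionStrategy: None       # do not live migrate; shut down on drain
      domain:
        devices: {}
```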

1.1.5. Single-node OpenShift differences

You can install OpenShift Virtualization on single-node OpenShift.

However, single-node OpenShift does not support the following features:

  • High availability
  • Pod disruption
  • Live migration
  • Virtual machines or templates that have an eviction strategy configured

1.2. Supported limits

You can refer to the tested object maximums when planning your OpenShift Container Platform environment for OpenShift Virtualization. However, approaching the maximum values can reduce performance and increase latency. Plan for your specific use case and consider all factors that can affect cluster scaling.

For more information about cluster configurations and options that affect performance, see the OpenShift Virtualization - Tuning & Scaling Guide in the Red Hat Knowledgebase.

1.2.1. Tested maximums for OpenShift Virtualization

The following limits apply to a large-scale OpenShift Virtualization 4.x environment. They are based on a single cluster of the largest possible size. When you plan an environment, remember that multiple smaller clusters might be the best option for your use case.

1.2.1.1. Virtual machine maximums

The following maximums apply to virtual machines (VMs) running on OpenShift Virtualization. These values are subject to the limits specified in Virtualization limits for Red Hat Enterprise Linux with KVM.

Objective (per VM) | Tested limit | Theoretical limit

Virtual CPUs

Note

1.2.1.2.

1.2.1.3.

1.3.

1.3.1.

1.3.2.

1.3.3.

1.3.3.1.

Table 1.2.

1.3.3.2.

1.3.3.2.1.
Table 1.3.

Table 1.4.

Table 1.5.

1.3.3.2.2.
Table 1.6.

Table 1.7.

1.3.3.3.

$ oc get scc kubevirt-controller -o yaml

$ oc get clusterrole kubevirt-controller -o yaml

1.4.

1.4.1.

Table 1.8.

1.4.2.

Table 1.9.

1.4.3.

Table 1.10.

1.4.4.

Table 1.11.

1.4.5.

1.4.6.

Table 1.12.

Chapter 2.

2.1.

2.1.1.

2.1.2.

2.1.2.1.

2.1.2.2.

2.1.2.3.

2.1.3.

2.1.4.

2.1.4.1.
2.1.4.2.
  • Note

2.1.4.3.
2.1.4.4.

2.1.5.

2.1.5.1.

2.1.5.2.

2.1.6.

  • Note

2.1.7.

Chapter 3.

3.1.

Note

3.1.1.

3.1.2.

3.1.3.

  • Note

3.1.4.

Note

3.1.5.

3.2.

3.2.1.

3.2.1.1.

      1. $ tar -xvf <virtctl-version-distribution.arch>.tar.gz
      2. $ chmod +x <path/virtctl-file-name>
      3. $ echo $PATH
      4. $ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig
      1. C:\> path
      1. echo $PATH
3.2.1.2.

  1. # subscription-manager repos --enable cnv-4.17-for-rhel-8-x86_64-rpms
  2. # yum install kubevirt-virtctl

3.2.2.

Note

3.2.2.1.

Table 3.1.

3.2.2.2.

Table 3.2.

3.2.2.3.

Table 3.3.

3.2.2.4.

Table 3.4.

3.2.2.5.

Table 3.5.

3.2.2.6.

Table 3.6.

3.2.2.7.

  • $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type json -p '[{"op": "add", "path": "/spec/featureGates", "value": "HotplugVolumes"}]'

$ virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \
  --volume=<volume_name> --output=<output_file>
Table 3.7.

3.2.2.8.

Table 3.8.

3.2.2.9.

Table 3.9.

3.2.3.

  • $ virtctl guestfs -n <namespace> <pvc_name>
3.2.3.1.

Note

3.2.4.

Chapter 4.

4.1.

Important

4.1.1.

  • Important

4.1.1.1.

Note

  • Important

  • Table 4.1.

4.1.2.

4.1.2.1.
  • Note

4.1.2.2.
  • Note

4.1.2.3.
Note

4.1.2.3.1.

Important

4.1.3.

  • Note

    Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)

4.1.4.

Important

Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB

Memory overhead per virtual machine ≈ (1.002 × requested memory)
              + 218 MiB
              + 8 MiB × (number of vCPUs)
              + 16 MiB × (number of graphics devices)
              + (additional memory overhead)

CPU overhead for infrastructure nodes ≈ 4 cores

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Aggregated storage overhead per node ≈ 10 GiB

4.1.5.

4.1.6.

4.1.7.

  • Note

  • Note

4.2.

Important

4.2.1.

4.2.1.1.

    1. Warning

    2. Warning

4.2.1.2.

4.2.1.2.1.

  1. apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.17.7
      channel: "stable"
  2. $ oc apply -f <file_name>.yaml
Note

4.2.1.2.2.

  1. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. $ oc apply -f <file_name>.yaml

  • $ watch oc get csv -n openshift-cnv

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.17.7   OpenShift Virtualization   4.17.7                Succeeded

4.2.2.

4.3.

4.3.1.

Important

4.3.1.1.

4.3.1.2.

  1. Note

4.3.1.3.

4.3.1.4.

4.3.2.

  1. $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
  3. $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
  4. $ oc delete namespace openshift-cnv
  5. $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

    customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)

  6. $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

Chapter 5.

5.1.

5.2.

Note

5.2.1.

5.2.2.

  1. $ oc edit <resource_type> <resource_name> -n openshift-cnv

5.2.3.

5.2.3.1.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.17.7
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.17.7
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
5.2.3.2.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
5.2.3.3.

Warning

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value

5.3.

5.3.1.

5.3.2.

5.3.2.1.

  • apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy
    spec:
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with eth1 as a port
            type: linux-bridge
            state: up
            ipv4:
              enabled: false
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1
5.3.2.2.

Warning

  1. Note

5.3.3.

5.3.3.1.

  1. apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network
      namespace: openshift-cnv
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1",
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts",
          "range": "10.200.5.0/24"
        }
      }'
  2. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network>
    
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...


  • $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
5.3.3.2.

5.3.4.

5.3.4.1.

Note

  1. apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: <name>
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: <sriov_resource_name>
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      priority: <priority>
      mtu: <mtu>
      numVfs: <num>
      nicSelector:
        vendor: "<vendor_code>"
        deviceID: "<device_id>"
        pfNames: ["<pf_name>", ...]
        rootDevices: ["<pci_bus_id>", "..."]
      deviceType: vfio-pci
      isRdma: false
    Note

  2. $ oc create -f <name>-sriov-node-network.yaml

  3. $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'

5.3.5.

5.4.

5.4.1.

Important

5.4.1.1.

Note

  1. apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi
    provisioner: kubevirt.io.hostpath-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      storagePool: my-storage-pool
  2. $ oc create -f storageclass_csi.yaml

5.5.

Note

5.5.1.

Important

Note

    1. apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        name: custom-config
      spec:
        machineConfigPoolSelector:
          matchLabels:
            pools.operator.machineconfiguration.openshift.io/worker: ''  # MCP
            #machine.openshift.io/cluster-api-machine-role: worker # machine
            #node-role.kubernetes.io/worker: '' # node
        kubeletConfig:
          failSwapOn: false

    2. $ oc wait mcp worker --for condition=Updated=True --timeout=-1s
  1. apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 90-worker-swap
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Provision and enable swap
                ConditionFirstBoot=no
                ConditionPathExists=!/var/tmp/swapfile
    
                [Service]
                Type=oneshot
                Environment=SWAP_SIZE_MB=5000
                ExecStart=/bin/sh -c "sudo dd if=/dev/zero of=/var/tmp/swapfile count=${SWAP_SIZE_MB} bs=1M && \
                sudo chmod 600 /var/tmp/swapfile && \
                sudo mkswap /var/tmp/swapfile && \
                sudo swapon /var/tmp/swapfile && \
                free -h"
    
                [Install]
                RequiredBy=kubelet-dependencies.target
              enabled: true
              name: swap-provision.service
            - contents: |
                [Unit]
                Description=Restrict swap for system slice
                ConditionFirstBoot=no
    
                [Service]
                Type=oneshot
                ExecStart=/bin/sh -c "sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\"/ 50ms\""
    
                [Install]
                RequiredBy=kubelet-dependencies.target
              enabled: true
              name: cgroup-system-slice-config.service

    NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1)

    NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1)
                   = 16 GB * (1.5 - 1)
                   = 16 GB * (0.5)
                   =  8 GB

  2. $ oc adm new-project wasp
    $ oc create sa -n wasp wasp
    $ oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp
    $ oc adm policy add-scc-to-user -n wasp privileged -z wasp
  3. $ oc wait mcp worker --for condition=Updated=True --timeout=-1s
  4. $ oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(".*wasp-agent.*")) | .image'
  5. kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: wasp-agent
      namespace: wasp
      labels:
        app: wasp
        tier: node
    spec:
      selector:
        matchLabels:
          name: wasp
      template:
        metadata:
          annotations:
            description: >-
              Configures swap for workloads
          labels:
            name: wasp
        spec:
          containers:
            - env:
                - name: SWAP_UTILIZATION_THRESHOLD_FACTOR
                  value: "0.8"
                - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND
                  value: "1000000000"
                - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND
                  value: "1000000000"
                - name: AVERAGE_WINDOW_SIZE_SECONDS
                  value: "30"
                - name: VERBOSITY
                  value: "1"
                - name: FSROOT
                  value: /host
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              image: >-
                quay.io/openshift-virtualization/wasp-agent:v4.17
    
              imagePullPolicy: Always
              name: wasp-agent
              resources:
                requests:
                  cpu: 100m
                  memory: 50M
              securityContext:
                privileged: true
              volumeMounts:
                - mountPath: /host
                  name: host
                - mountPath: /rootfs
                  name: rootfs
          hostPID: true
          hostUsers: true
          priorityClassName: system-node-critical
          serviceAccountName: wasp
          terminationGracePeriodSeconds: 5
          volumes:
            - hostPath:
                path: /
              name: host
            - hostPath:
                path: /
              name: rootfs
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 10%
          maxSurge: 0
  6. apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      labels:
        tier: node
        wasp.io: ""
      name: wasp-rules
      namespace: wasp
    spec:
      groups:
        - name: alerts.rules
          rules:
            - alert: NodeHighSwapActivity
              annotations:
                description: High swap activity detected at {{ $labels.instance }}. The rate
                  of swap out and swap in exceeds 200 in both operations in the last minute.
                  This could indicate memory pressure and may affect system performance.
                runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/docs/runbooks/NodeHighSwapActivity.md
                summary: High swap activity detected at {{ $labels.instance }}.
              expr: rate(node_vmstat_pswpout[1m]) > 200 and rate(node_vmstat_pswpin[1m]) >
                200
              for: 1m
              labels:
                kubernetes_operator_component: kubevirt
                kubernetes_operator_part_of: kubevirt
                operator_health_impact: warning
                severity: warning
  7. $ oc label namespace wasp openshift.io/cluster-monitoring="true"
      • $ oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' \
          -p='[{"op": "replace", "path": "/spec/higherWorkloadDensity/memoryOvercommitPercentage", "value": 150}]'

        hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

  1. $ oc rollout status ds wasp-agent -n wasp

    daemon set "wasp-agent" successfully rolled out

    1. $ oc get nodes -l node-role.kubernetes.io/worker
    2. $ oc debug node/<selected_node> -- free -m

      Table 5.1.
  2. $ oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{"\n"}'

    {"memoryOvercommitPercentage":150}
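The NODE_SWAP_SPACE formula shown earlier in this procedure can be checked with plain shell arithmetic. This sketch just reproduces the 16 GB node with 150% overcommit worked example; the variable names are illustrative.

```shell
# NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1),
# computed with integer arithmetic in whole GB.
node_ram_gb=16
overcommit_percent=150

node_swap_gb=$(( node_ram_gb * (overcommit_percent - 100) / 100 ))
echo "NODE_SWAP_SPACE = ${node_swap_gb} GB"
# → NODE_SWAP_SPACE = 8 GB
```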

5.5.2.

averageSwapInPerSecond > maxAverageSwapInPagesPerSecond
&&
averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond

nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory × thresholdFactor
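The two conditions above can be evaluated with shell arithmetic. The variable names mirror the expressions in the text; the sample values are invented, and thresholdFactor 0.8 is expressed as a percentage to keep the arithmetic integral.

```shell
# First condition: both swap-in and swap-out rates exceed their maxima.
avg_swap_in=1200;  max_swap_in=1000
avg_swap_out=1500; max_swap_out=1000
if [ "$avg_swap_in" -gt "$max_swap_in" ] && [ "$avg_swap_out" -gt "$max_swap_out" ]; then
  echo "swap traffic limit exceeded"
fi

# Second condition: working set plus swap usage compared against
# totalNodeMemory + totalSwapMemory * thresholdFactor (all values in GiB).
node_working_set=14; node_swap_usage=6
total_node_memory=16; total_swap_memory=8
threshold_factor_pct=80   # thresholdFactor 0.8

limit=$(( total_node_memory + total_swap_memory * threshold_factor_pct / 100 ))
usage=$(( node_working_set + node_swap_usage ))
if [ "$usage" -lt "$limit" ]; then
  echo "within the memory threshold"
fi
```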

5.5.2.1.


Chapter 6.

6.1.

6.1.1.

6.1.1.2.
Important

6.1.1.3.
6.1.1.4.

6.1.1.4.1.

Important

6.1.2.

Note

  1. $ oc get csv -n openshift-cnv
  2. VERSION  REPLACES                                        PHASE
    4.9.0    kubevirt-hyperconverged-operator.v4.8.2         Installing
    4.9.0    kubevirt-hyperconverged-operator.v4.9.0         Replacing

  3. $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

    ReconcileComplete  True  Reconcile completed successfully
    Available          True  Reconcile completed successfully
    Progressing        False Reconcile completed successfully
    Degraded           False Reconcile completed successfully
    Upgradeable        True  Reconcile completed successfully

6.1.3.

Note

Note

6.1.3.1.

  • Note

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods:
        - LiveMigrate
        - Evict
        batchEvictionSize: 10
        batchEvictionInterval: "1m0s"
    # ...
    Note

6.1.3.2.

Note

  • $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Note

6.1.4.

6.1.4.1.

Note

6.1.4.2.

Important

$ oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads

$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces

  1. $ oc get kv kubevirt-kubevirt-hyperconverged \
      -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'
  2. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]'

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

  3. $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"

    Example 6.1.

    [
      {
        "lastTransitionTime": "2022-12-09T16:29:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "ReconcileComplete"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Available"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Progressing"
      },
      {
        "lastTransitionTime": "2022-12-09T16:39:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Degraded"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Upgradeable"
      }
    ]
  4. $ oc adm upgrade

    • $ oc get clusterversion
      Note

  5. $ oc get csv -n openshift-cnv
  6. $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions"

    [
      {
        "name": "operator",
        "version": "4.17.7"
      }
    ]

  7. $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"
  8. $ oc get clusterversion
  9. $ oc get csv -n openshift-cnv

  10. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
      "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]"

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

    • $ oc get vmim -A

6.1.5.

6.1.5.1.

6.1.5.2.

6.1.5.3.

6.1.6.

Important

Chapter 7.

7.1.

7.1.1.

Important

7.1.1.1.

7.1.1.1.1.

7.1.1.1.2.

7.1.1.2.

7.1.1.3.

7.1.1.3.1.

7.1.1.3.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      commonBootImageNamespace: <custom_namespace>
    # ...

7.1.2.

7.1.2.1.

7.1.2.1.1.

Note

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 1
  memory:
    guest: 128Mi

$ virtctl create instancetype --cpu 2 --memory 256Mi

Tip

$ virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -
7.1.2.1.2.

7.1.2.2.

Table 7.1.

7.1.2.3.

7.1.2.4.

    • Note

7.1.3.

7.1.3.1.

Note

7.1.3.2.

7.1.3.2.1.
Table 7.2.

Note

7.1.3.2.2.

Note

7.1.3.2.3.

7.1.3.2.4.

7.1.4.

Note

7.1.4.1.

  1. $ virtctl create vm --name rhel-9-minimal --volume-datasource src:openshift-virtualization-os-images/rhel9
  2. Note

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: rhel-9-minimal
    spec:
      dataVolumeTemplates:
      - metadata:
          name: imported-volume-mk4lj
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources: {}
      instancetype:
        inferFromVolume: imported-volume-mk4lj
        inferFromVolumeFailurePolicy: Ignore
      preference:
        inferFromVolume: imported-volume-mk4lj
        inferFromVolumeFailurePolicy: Ignore
      runStrategy: Always
      template:
        spec:
          domain:
            devices: {}
            memory:
              guest: 512Mi
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: imported-volume-mk4lj
            name: imported-volume-mk4lj
  3. $ oc create -f <vm_manifest_file>.yaml
  4. $ virtctl start <vm_name>

7.2.

7.2.1.

Important

7.2.2.

Important

Important

7.2.2.1.

Note

  1. $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/
    RUN chmod 0440 /disk/*

    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
  2. $ podman build -t <registry>/<container_disk_name>:latest .
  3. $ podman push <registry>/<container_disk_name>:latest
7.2.2.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      storageImport:
        insecureRegistries:
          - "private-registry-example-1:5000"
          - "private-registry-example-2:5000"
7.2.2.3.

7.2.2.4.

  1. $ virtctl create vm --name vm-rhel-9 --instancetype u1.small --preference rhel.9 --volume-containerdisk src:registry.redhat.io/rhel9/rhel-guest-image:9.5
  2. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-rhel-9
    spec:
      instancetype:
        name: u1.small
      preference:
        name: rhel.9
      runStrategy: Always
      template:
        metadata:
          creationTimestamp: null
        spec:
          domain:
            devices: {}
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - containerDisk:
              image: registry.redhat.io/rhel9/rhel-guest-image:9.5
            name: vm-rhel-9-containerdisk-0
  3. $ oc create -f <vm_manifest_file>.yaml

  1. $ oc get vm <vm_name>

    NAME        AGE   STATUS    READY
    vm-rhel-9   18s   Running   True

  2. $ virtctl console <vm_name>

    Successfully connected to vm-rhel-9 console. The escape sequence is ^]

7.2.3.

Important

7.2.3.1.

7.2.3.2.

  1. $ virtctl create vm --name vm-rhel-9 --instancetype u1.small --preference rhel.9 --volume-import type:http,url:https://example.com/rhel9.qcow2,size:10Gi
  2. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-rhel-9
    spec:
      dataVolumeTemplates:
      - metadata:
          name: imported-volume-6dcpf
        spec:
          source:
            http:
              url: https://example.com/rhel9.qcow2
          storage:
            resources:
              requests:
                storage: 10Gi
      instancetype:
        name: u1.small
      preference:
        name: rhel.9
      runStrategy: Always
      template:
        spec:
          domain:
            devices: {}
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: imported-volume-6dcpf
            name: imported-volume-6dcpf
  3. $ oc create -f <vm_manifest_file>.yaml

  1. $ oc get pods
  2. $ oc get dv <data_volume_name>

    NAME                    PHASE       PROGRESS   RESTARTS   AGE
    imported-volume-6dcpf   Succeeded   100.0%                18s

  3. $ virtctl console <vm_name>

    Successfully connected to vm-rhel-9 console. The escape sequence is ^]

7.2.4.

Important

7.2.4.1.

7.2.4.1.1.

  1. $ virtctl stop <my_vm_name>
  2. $ oc get vm <my_vm_name> -o jsonpath="{.spec.template.spec.volumes}{'\n'}"

    [{"dataVolume":{"name":"<my_vm_volume>"},"name":"rootdisk"},{"cloudInitNoCloud":{...}]

  3. $ oc get pvc

    NAME            STATUS   VOLUME  CAPACITY   ACCESS MODES  STORAGECLASS     AGE
    <my_vm_volume> Bound  …

    Note

  4. $ virtctl guestfs <my_vm_volume> --uid 107

  5. $ virt-sysprep -a disk.img

7.2.4.2.

7.2.4.2.1.

  1. %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm

7.2.4.2.2.

7.2.4.3.

  1. $ virtctl image-upload dv <datavolume_name> \
       --size=<datavolume_size> \
       --image-path=</path/to/image>
    Note
  2. $ oc get dvs

7.2.5.

7.2.5.1.
7.2.5.1.1.

  1. $ yum install -y qemu-guest-agent
  2. $ systemctl enable --now qemu-guest-agent

  • $ oc get vm <vm_name>
7.2.5.1.2.

  1. $ net start
7.2.5.2.

Table 7.3.

7.2.5.2.1.

7.2.5.2.2.

7.2.5.2.3.

Note

7.2.5.2.4.

Note

7.2.5.2.5.

Tip

  1. # ...
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
      volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    • $ virtctl start <vm> -n <namespace>
    • $ oc apply -f <vm.yaml>
7.2.5.3.
7.2.5.3.1.

7.2.6.

Important

7.2.6.1.

7.2.6.2.

7.2.7.

7.2.7.1.

7.2.7.1.1.

7.2.7.1.2.

7.2.7.1.3.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible
    cdi.kubevirt.io/clonePhase: Succeeded
    cdi.kubevirt.io/cloneType: copy

NAMESPACE   LAST SEEN   TYPE      REASON                    OBJECT                              MESSAGE
test-ns     0s          Warning   IncompatibleVolumeModes   persistentvolumeclaim/test-target   The volume modes of source and target are incompatible

7.2.7.2.

7.2.7.3.

7.2.7.3.1.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: golden-volumesnapshot
  namespace: golden-ns
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: golden-snap-source

spec:
  source:
    snapshot:
      namespace: golden-ns
      name: golden-volumesnapshot
7.2.7.3.2.

Note

    • kind: VolumeSnapshotClass
      apiVersion: snapshot.storage.k8s.io/v1
      driver: openshift-storage.rbd.csi.ceph.com
      # ...

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      # ...
      provisioner: openshift-storage.rbd.csi.ceph.com

  1. apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <datavolume>
    spec:
      source:
        pvc:
          namespace: "<source_namespace>"
          name: "<my_vm_disk>"
      storage: {}
  2. $ oc create -f <datavolume>.yaml
    Note

7.2.7.3.3.

  1. $ virtctl create vm --name rhel-9-clone --volume-import type:pvc,src:my-project/imported-volume-q5pr9
  2. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: rhel-9-clone
    spec:
      dataVolumeTemplates:
      - metadata:
          name: imported-volume-h4qn8
        spec:
          source:
            pvc:
              name: imported-volume-q5pr9
              namespace: my-project
          storage:
            resources: {}
      instancetype:
        inferFromVolume: imported-volume-h4qn8
        inferFromVolumeFailurePolicy: Ignore
      preference:
        inferFromVolume: imported-volume-h4qn8
        inferFromVolumeFailurePolicy: Ignore
      runStrategy: Always
      template:
        spec:
          domain:
            devices: {}
            memory:
              guest: 512Mi
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: imported-volume-h4qn8
            name: imported-volume-h4qn8
  3. $ oc create -f <vm_manifest_file>.yaml

7.3.

7.3.1.

7.3.1.1.

Note

7.3.1.2.

Note

  1. $ virtctl vnc <vm_name>
  2. $ virtctl vnc <vm_name> -v 4
7.3.1.3.

Note

  1. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
  2. $ curl --header "Authorization: Bearer ${TOKEN}" \
         "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"

    { "token": "eyJhb..." }
  3. $ export VNC_TOKEN="<token>"

  1. $ oc login --token ${VNC_TOKEN}
  2. $ virtctl vnc <vm_name> -n <namespace>
Warning

$ virtctl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access"
7.3.1.3.1.

    • $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="${USER_NAME}"
    • $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}"

7.3.2.

Note

7.3.2.1.

Note

7.3.2.2.

Note

  1. $ virtctl console <vm_name>
  2. $ virtctl vnc <vm_name>
  3. $ virtctl vnc <vm_name> -v 4

7.3.3.

7.3.3.1.

Note

7.4.

7.4.1.

  1. $ virtctl create vm --instancetype <my_instancetype> --preference <my_preference>
  2. $ virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>

7.4.2.

  • $ virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference

7.4.3.

  • $ oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>

7.5.

7.5.1.

7.5.2.

7.5.2.1.

Note

7.5.2.2.

Note

7.5.2.2.1.

7.5.2.2.2.

    • Note

7.5.2.2.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      dataVolumeTemplates:
        - metadata:
            name: example-vm-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources: {}
      instancetype:
        name: u1.medium
      preference:
        name: rhel.9
      running: true
      template:
        spec:
          domain:
            devices: {}
          volumes:
            - dataVolume:
                name: example-vm-volume
              name: rootdisk
            - cloudInitNoCloud:
                userData: |-
                  #cloud-config
                  user: cloud-user
              name: cloudinitdisk
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: authorized-keys
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: authorized-keys
    data:
      key: c3NoLXJzYSB...

  2. $ oc create -f <manifest_file>.yaml
  3. $ virtctl start example-vm -n example-namespace

  • $ oc describe vm example-vm -n example-namespace

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      template:
        spec:
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: authorized-keys
    # ...

7.5.2.3.

Note

7.5.2.3.1.

Note

7.5.2.3.2.

Note

    • Note

7.5.2.3.3.

7.5.2.3.4.

Note

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      dataVolumeTemplates:
        - metadata:
            name: example-vm-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources: {}
      instancetype:
        name: u1.medium
      preference:
        name: rhel.9
      running: true
      template:
        spec:
          domain:
            devices: {}
          volumes:
            - dataVolume:
                name: example-vm-volume
              name: rootdisk
            - cloudInitNoCloud:
                userData: |-
                  #cloud-config
                  runcmd:
                  - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ]
              name: cloudinitdisk
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users: ["cloud-user"]
                source:
                  secret:
                    secretName: authorized-keys
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: authorized-keys
    data:
      key: c3NoLXJzYSB...

  2. $ oc create -f <manifest_file>.yaml
  3. $ virtctl start example-vm -n example-namespace

  • $ oc describe vm example-vm -n example-namespace

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      template:
        spec:
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users: ["cloud-user"]
                source:
                  secret:
                    secretName: authorized-keys
    # ...

7.5.2.4.

  • $ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key>

    $ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key

Tip

7.5.3.

  1. Host vm/*
      ProxyCommand virtctl port-forward --stdio=true %h %p
  2. $ ssh <user>@vm/<vm_name>.<namespace>
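The `Host vm/*` stanza above routes every hostname that begins with `vm/` through `virtctl port-forward`; the `%h` and `%p` tokens expand to the requested host and port. You can inspect that expansion locally with `ssh -G`, which prints the resolved configuration without connecting (the VM and namespace names below are made up):

```shell
# Write the stanza to a throwaway config file and print the resolved
# ProxyCommand for a hypothetical VM host.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
EOF
ssh -G -F "$cfg" cloud-user@vm/example-vm.my-ns | grep -i '^proxycommand'
# shows the proxycommand line with %h/%p already expanded to
# "vm/example-vm.my-ns 22"
rm "$cfg"
```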

7.5.4.

7.5.4.1.

Note

7.5.4.2.

7.5.4.2.1.

7.5.4.2.2.

7.5.4.2.3.

  • $ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port>

    $ virtctl expose vm example-vm --name example-service --type NodePort --port 22

  • $ oc get service

7.5.4.2.4.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
        special: key
    # ...
    Note

  2. apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      namespace: example-namespace
    spec:
    # ...
      selector:
        special: key
      type: NodePort
      ports:
      - protocol: TCP
        port: 80
        targetPort: 9376
        nodePort: 30000
  3. $ oc create -f example-service.yaml

  • $ oc get service -n example-namespace
7.5.4.3.

  • $ ssh <user_name>@<ip_address> -p <port>

7.5.5.

Important

7.5.5.1.

7.5.5.2.

  1. $ oc describe vm <vm_name> -n <namespace>

    # ...
    Interfaces:
      Interface Name:  eth0
      Ip Address:      10.244.0.37/24
      Ip Addresses:
        10.244.0.37/24
        fe80::858:aff:fef4:25/64
      Mac:             0a:58:0a:f4:00:25
      Name:            default
    # ...

  2. $ ssh <user_name>@<ip_address> -i <ssh_key>

    $ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user

Note

7.6.

7.6.1.

Note

7.6.2.

7.6.3.

  1. $ oc edit vm <vm_name>
    • $ oc apply -f <vm_name>.yaml -n <namespace>

7.6.4.

Note

7.6.4.1.

Note

7.6.5.

7.6.6.

7.7.

7.7.1.

Note

7.7.2.

Note

7.7.3.

  1. $ oc edit vm <vm_name> -n <namespace>
  2. disks:
      - bootOrder: 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2
        macAddress: '02:96:c4:00:00'
        masquerade: {}
        name: default

7.7.4.

Note

7.8.

7.8.1.

7.8.2.

  • $ oc delete vm <vm_name>
    Note

7.9.

Note

7.9.1.

  1. apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io"
        kind: VirtualMachine
        name: example-vm
      ttlDuration: 1h

  2. $ oc create -f example-export.yaml
  3. $ oc get vmexport example-export -o yaml

    apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
      namespace: example
    spec:
      source:
        apiGroup: ""
        kind: PersistentVolumeClaim
        name: example-pvc
      tokenSecretRef: example-token
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:10:09Z"
        reason: podReady
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:09:02Z"
        reason: pvcBound
        status: "True"
        type: PVCReady
      links:
        external:
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
            - format: gzip
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
            name: example-disk
        internal:
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
            - format: gzip
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
            name: example-disk
      phase: Ready
      serviceName: virt-export-example-export


7.9.2.

  • Note

    1. $ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt
  1. $ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode
  2. $ oc get vmexport <export_name> -o yaml
  3. apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io"
        kind: VirtualMachine
        name: example-vm
      tokenSecretRef: example-token
    status:
    #...
      links:
        external:
    #...
          manifests:
          - type: all
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all
          - type: auth-header-secret
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret
        internal:
    #...
          manifests:
          - type: all
            url: https://virt-export-export-pvc.default.svc/internal/manifests/all
          - type: auth-header-secret
            url: https://virt-export-export-pvc.default.svc/internal/manifests/secret
      phase: Ready
      serviceName: virt-export-example-export

  4. $ curl --cacert cacert.crt <secret_manifest_url> -H \
    "x-kubevirt-export-token:token_decode" -H \
    "Accept:application/yaml"

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
  5. $ curl --cacert cacert.crt <all_manifest_url> -H \
    "x-kubevirt-export-token:token_decode" -H \
    "Accept:application/yaml"

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
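Step 1 above extracts the export token by piping the secret's `data.token` field through `base64 --decode`. The decode step itself can be sanity-checked locally with a made-up value (a real token comes from the `export-token-<export_name>` secret):

```shell
# Sample base64 value only, standing in for the secret's data.token.
TOKEN_B64='c2FtcGxlLXRva2Vu'
echo "$TOKEN_B64" | base64 --decode
# prints: sample-token
```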

7.10.

7.10.1.

Note

7.10.2.

  • $ oc get vmis -A

7.10.3.

Note

7.10.4.

7.10.5.

  • $ oc delete vmi <vmi_name>

7.10.6.

7.11.

7.11.1.

Note

7.11.2.

7.11.3.

Important

7.11.4.

7.11.5.

7.12.

Important

7.12.1.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  vmStateStorageClass: <storage_class_name>

# ...
Note

7.12.2.

  1. $ oc edit vm <vm_name> -n <namespace>
  2. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
        name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              tpm:
                persistent: true
    
    # ...

7.13.

7.13.1.

7.13.2.

Table 7.4.

Note

7.13.3.

Note

7.13.3.1.

7.13.3.2.

  1. apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      generateName: windows11-installer-run-
      labels:
        pipelinerun: windows11-installer-run
    spec:
        params:
        -   name: winImageDownloadURL
            value: <windows_image_download_url>
        -   name: acceptEula
            value: "false"
        pipelineRef:
            params:
            -   name: catalog
                value: redhat-pipelines
            -   name: type
                value: artifact
            -   name: kind
                value: pipeline
            -   name: name
                value: windows-efi-installer
            -   name: version
                value: 4.17
            resolver: hub
        taskRunSpecs:
        -   pipelineTaskName: modify-windows-iso-file
            podTemplate:
                securityContext:
                    fsGroup: 107
                    runAsUser: 107
  2. $ oc create -f windows11-installer-run.yaml

7.14.

7.14.1.

7.14.1.1.

Note

$ oc annotate namespace my-virtualization-project alpha.kubevirt.io/auto-memory-limits-ratio=1.2
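The annotation value is the request-to-limit ratio: with `1.2`, a VM requesting 1024Mi of memory would receive a limit of roughly 1228Mi. A sketch of that arithmetic (this reading of the annotation is an assumption; the values are illustrative):

```shell
# Assumed rule: limit = request * ratio, truncated to whole mebibytes.
REQUEST_MI=1024
RATIO=1.2
awk -v r="$REQUEST_MI" -v k="$RATIO" 'BEGIN { printf "%d\n", r * k }'
# prints: 1228
```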

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. spec:
      featureGates:
        autoResourceLimits: true
7.14.1.1.1.

Warning

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: with-limits
    spec:
      running: false
      template:
        spec:
          domain:
    # ...
            resources:
              requests:
                memory: 128Mi
              limits:
                memory: 256Mi

7.14.2.

7.14.2.1.

7.14.2.1.1.

  • apiVersion: aaq.kubevirt.io/v1alpha1
    kind: ApplicationAwareResourceQuota
    metadata:
      name: example-resource-quota
    spec:
      hard:
        requests.memory: 1Gi
        limits.memory: 1Gi
        requests.cpu/vmi: "1"
        requests.memory/vmi: 1Gi
    # ...

  • apiVersion: aaq.kubevirt.io/v1alpha1
    kind: ApplicationAwareClusterResourceQuota
    metadata:
      name: example-resource-quota
    spec:
      quota:
        hard:
          requests.memory: 1Gi
          limits.memory: 1Gi
          requests.cpu/vmi: "1"
          requests.memory/vmi: 1Gi
      selector:
        annotations: null
        labels:
          matchLabels:
            kubernetes.io/metadata.name: default
    # ...

    Note

Important

7.14.2.2.

  • $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
     --type json -p '[{"op": "add", "path": "/spec/featureGates/enableApplicationAwareQuota", "value": true}]'
7.14.2.3.

  • $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{
      "spec": {
        "applicationAwareConfig": {
          "vmiCalcConfigName": "DedicatedVirtualResources",
          "namespaceSelector": {
            "matchLabels": {
              "app": "my-app"
            }
          },
          "allowApplicationAwareClusterResourceQuota": true
        }
      }
    }'

7.14.3.

7.14.3.1.

Note

Note

7.14.3.2.

7.14.3.2.1.

Warning

metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
# ...

7.14.3.2.2.

metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: example-key-1
                operator: In
                values:
                - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: example-key-2
                  operator: In
                  values:
                  - example-value-2
              topologyKey: kubernetes.io/hostname
# ...

7.14.3.2.3.

metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-key
                operator: In
                values:
                - example-value-1
                - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example-node-label-key
                operator: In
                values:
                - example-node-label-value
# ...

7.14.3.2.4.

Note

metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
# ...

7.14.4.

Important

7.14.4.1.
7.14.4.2.

7.14.4.2.1.

Note

7.14.4.2.2.

7.14.4.3.

7.14.4.4.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
    • apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
          ksmConfiguration:
            nodeLabelSelector: {}
      # ...
    • apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
          ksmConfiguration:
            nodeLabelSelector:
              matchLabels:
                <first_example_key>: "true"
                <second_example_key>: "true"
      # ...
    • apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
      # ...

7.14.5.

7.14.5.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      certConfig:
        ca:
          duration: 48h0m0s
          renewBefore: 24h0m0s
        server:
          duration: 24h0m0s
          renewBefore: 12h0m0s
7.14.5.2.

certConfig:
   ca:
     duration: 4h0m0s
     renewBefore: 1h0m0s
   server:
     duration: 4h0m0s
     renewBefore: 4h0m0s

error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration

7.14.6.

7.14.6.1.

Note

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      defaultCPUModel: "EPYC"

7.14.7.

7.14.7.1.

7.14.7.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        special: vm-secureboot
      name: vm-secureboot
    spec:
      template:
        metadata:
          labels:
            special: vm-secureboot
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            features:
              acpi: {}
              smm:
                enabled: true
            firmware:
              bootloader:
                efi:
                  secureBoot: true
    # ...

  2. $ oc create -f <file_name>.yaml
7.14.7.3.

  • $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
7.14.7.4.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm
    spec:
      template:
        spec:
          domain:
            firmware:
              bootloader:
                efi:
                  persistent: true
    # ...

7.14.8.

7.14.8.1.
7.14.8.2.

    1. apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "name": "pxe-net-conf",
            "type": "bridge",
            "bridge": "bridge-interface",
            "macspoofchk": false,
            "vlan": 100,
            "disableContainerInterface": true,
            "preserveDefaultVlan": false
          }
  1. $ oc create -f pxe-net-conf.yaml
    1. interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
      Note

    2. devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  2. $ oc create -f vmi-pxe-boot.yaml

      virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created

  3. $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  4. $ virtctl vnc vmi-pxe-boot
  5. $ virtctl console vmi-pxe-boot

  1. $ ip addr

    ...
    3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
       link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
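The `config` field of the NetworkAttachmentDefinition earlier in this procedure embeds a CNI configuration as a JSON string, so a stray comma or quote only surfaces when the network attaches. A quick local syntax check before applying it (the file path is hypothetical and the contents abbreviated):

```shell
# Validate the embedded CNI JSON with Python's stdlib JSON parser
# before pasting it into the NetworkAttachmentDefinition.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "pxe-net-conf",
  "type": "bridge",
  "bridge": "bridge-interface",
  "macspoofchk": false,
  "vlan": 100
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
rm "$cfg"
```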

7.14.8.3.

7.14.9.

7.14.9.1.
7.14.9.2.

7.14.9.3.

Note

  1. kind: VirtualMachine
    # ...
    spec:
      domain:
        resources:
          requests:
            memory: "4Gi"
        memory:
          hugepages:
            pageSize: "1Gi"
    # ...
  2. $ oc apply -f <virtual_machine>.yaml

7.14.10.

7.14.10.1.

7.14.10.2.
7.14.10.3.

7.14.11.

7.14.11.1.


7.14.11.2.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              features:
                - name: apic
                  policy: require
7.14.11.3.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: Conroe
7.14.11.4.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: host-model
7.14.11.5.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-fedora
    spec:
      running: true
      template:
        spec:
          schedulerName: my-scheduler
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
    # ...

    1. $ oc get pods

      NAME                             READY   STATUS    RESTARTS   AGE
      virt-launcher-vm-fedora-dpc87    2/2     Running   0          24m

    2. $ oc describe pod virt-launcher-vm-fedora-dpc87

      [...]
      Events:
        Type    Reason     Age   From              Message
        ----    ------     ----  ----              -------
        Normal  Scheduled  21m   my-scheduler  Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01
      [...]

7.14.12.

7.14.12.1.

7.14.12.1.1.

  • $ oc label node <node_name> nvidia.com/gpu.deploy.operands=false

  1. $ oc describe node <node_name>
    1. $ oc get pods -n nvidia-gpu-operator

      NAME                             READY   STATUS        RESTARTS   AGE
      gpu-operator-59469b8c5c-hw9wj    1/1     Running       0          8d
      nvidia-sandbox-validator-7hx98   1/1     Running       0          8d
      nvidia-sandbox-validator-hdb7p   1/1     Running       0          8d
      nvidia-sandbox-validator-kxwj7   1/1     Terminating   0          9d
      nvidia-vfio-manager-7w9fs        1/1     Running       0          8d
      nvidia-vfio-manager-866pz        1/1     Running       0          8d
      nvidia-vfio-manager-zqtck        1/1     Terminating   0          9d

    2. $ oc get pods -n nvidia-gpu-operator

      NAME                             READY   STATUS    RESTARTS   AGE
      gpu-operator-59469b8c5c-hw9wj    1/1     Running   0          8d
      nvidia-sandbox-validator-7hx98   1/1     Running   0          8d
      nvidia-sandbox-validator-hdb7p   1/1     Running   0          8d
      nvidia-vfio-manager-7w9fs        1/1     Running   0          8d
      nvidia-vfio-manager-866pz        1/1     Running   0          8d

7.14.12.2.
7.14.12.2.1.

7.14.12.2.2.

  1. apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 100-worker-iommu
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on
    # ...
  2. $ oc create -f 100-worker-kernel-arg-iommu.yaml

  • $ oc get MachineConfig
7.14.12.2.3.

  1. $ lspci -nnv | grep -i nvidia

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

  2. Note

    variant: openshift
    version: 4.17.0
    metadata:
      name: 100-worker-vfiopci
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
      - path: /etc/modprobe.d/vfio.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            options vfio-pci ids=10de:1eb8
      - path: /etc/modules-load.d/vfio-pci.conf
        mode: 0644
        overwrite: true
        contents:
          inline: vfio-pci

  3. $ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
  4. $ oc apply -f 100-worker-vfiopci.yaml
  5. $ oc get MachineConfig

    NAME                             GENERATEDBYCONTROLLER                      IGNITIONVERSION  AGE
    00-master                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    00-worker                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    100-worker-iommu                                                            3.2.0            30s
    100-worker-vfiopci-configuration                                            3.2.0            30s

  • $ lspci -nnk -d 10de:

    04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1eb8]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau

7.14.12.2.4.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
        - pciDeviceSelector: "8086:6F54"
          resourceName: "intel.com/qat"
          externalResourceProvider: true
    # ...

    Note

  • $ oc describe node <node_name>

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250

7.14.12.2.5.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
    # ...

  • $ oc describe node <node_name>

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250

7.14.12.3.

7.14.12.3.1.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4
            name: hostdevices1

  • $ lspci -nnk | grep NVIDIA

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

7.14.13.

7.14.13.1.

Note

7.14.13.2.

7.14.13.2.1.

  1. apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 100-worker-iommu
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on
    # ...
  2. $ oc create -f 100-worker-kernel-arg-iommu.yaml

  • $ oc get MachineConfig
7.14.13.3.

Note

7.14.13.3.1.

7.14.13.3.2.

  • Important

  • kind: ClusterPolicy
    apiVersion: nvidia.com/v1
    metadata:
      name: gpu-cluster-policy
    spec:
      operator:
        defaultRuntime: crio
        use_ocp_driver_toolkit: true
        initContainer: {}
      sandboxWorkloads:
        enabled: true
        defaultWorkload: vm-vgpu
      driver:
        enabled: false
      dcgmExporter: {}
      dcgm:
        enabled: true
      daemonsets: {}
      devicePlugin: {}
      gfd: {}
      migManager:
        enabled: true
      nodeStatusExporter:
        enabled: true
      mig:
        strategy: single
      toolkit:
        enabled: true
      validator:
        plugin:
          env:
            - name: WITH_WORKLOAD
              value: "true"
      vgpuManager:
        enabled: true
        repository: <vgpu_container_registry>
        image: <vgpu_image_name>
        version: nvidia-vgpu-manager
      vgpuDeviceManager:
        enabled: false
        config:
          name: vgpu-devices-config
          default: default
      sandboxDevicePlugin:
        enabled: false
      vfioManager:
        enabled: false

7.14.13.4.

# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-222
  - nvidia-228
  - nvidia-105
  - nvidia-108
# ...

nvidia-105
# ...
nvidia-108
nvidia-217
nvidia-299
# ...

# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
# ...

7.14.13.5.

7.14.13.5.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    Example 7.1.

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes:
        - nvidia-231
        nodeMediatedDeviceTypes:
        - mediatedDeviceTypes:
          - nvidia-233
          nodeSelector:
            kubernetes.io/hostname: node-11.redhat.com
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
        - mdevNameSelector: GRID T4-8Q
          resourceName: nvidia.com/GRID_T4-8Q
    # ...
  2. # ...
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes:
        - <device_type>
        nodeMediatedDeviceTypes:
        - mediatedDeviceTypes:
          - <device_type>
          nodeSelector:
            <node_selector_key>: <node_selector_value>
    # ...

    Important

    1. $ oc get $NODE -o json \
        | jq '.status.allocatable
          | with_entries(select(.key | startswith("nvidia.com/")))
          | with_entries(select(.value != "0"))'
  3. # ...
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
    # ...

  • $ oc describe node <node_name>
7.14.13.5.2.

  • Note

7.14.13.5.3.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes:
          - nvidia-231
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q

7.14.13.6.

7.14.13.6.1.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          gpus:
          - deviceName: nvidia.com/TU104GL_Tesla_T4
            name: gpu1
          - deviceName: nvidia.com/GRID_T4-2Q
            name: gpu2

  • $ lspci -nnk | grep <device_name>
7.14.13.6.2.

Note

7.14.14.

7.14.14.1.

  1. $ lsusb
  2. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
       name: kubevirt-hyperconverged
       namespace: openshift-cnv
    spec:
      configuration:
        permittedHostDevices:
          usbHostDevices:
            - resourceName: kubevirt.io/peripherals
              selectors:
                - vendor: "045e"
                  product: "07a5"
                - vendor: "062a"
                  product: "4102"
                - vendor: "072f"
                  product: "b100"

7.14.14.2.

  1. $ ls /dev/serial/by-id/usb-VENDOR_device_name
  2. $ oc edit vmi vmi-usb
  3. apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-usb
      name: vmi-usb
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: kubevirt.io/peripherals
            name: local-peripherals
    # ...

7.14.15.

7.14.15.1.

7.14.15.2.

Important

      1. Important

7.14.15.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        metadata:
          annotations:
            descheduler.alpha.kubernetes.io/evict: "true"
  2. apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600
      profiles:
      - LongLifecycle
      mode: Predictive
      profileCustomizations:
        devEnableEvictionsInBackground: true

7.14.16.

7.14.17.

7.14.17.1.

  • $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \
      "value": "highBurst"}]'

  • $ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \
      -n openshift-cnv -o go-template --template='{{range $config, \
      $value := .spec.configuration}} {{if eq $config "apiConfiguration" \
      "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \
      {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}'
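
The patch above flips a single field. The same tuning policy can also be kept declaratively in the HyperConverged custom resource; a minimal sketch of the equivalent fragment, inferred from the patch path (`/spec/tuningPolicy`):

```yaml
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  # equivalent to the "op": "add" patch shown above
  tuningPolicy: highBurst
```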

7.14.18.

7.14.18.1.

7.14.18.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. # ...
    spec:
      resourceRequirements:
        vmiCPUAllocationRatio: 1
    # ...

7.14.19.

Note

7.14.19.1.
7.14.19.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          networkInterfaceMultiqueue: true

7.14.20.

7.15.

7.15.1.

Note

7.15.1.1.

7.15.1.2.

  • $ virtctl addvolume <virtual-machine|virtual-machine-instance> \
      --volume-name=<datavolume|PVC> \
      [--persist] [--serial=<label-name>]
  • $ virtctl removevolume <virtual-machine|virtual-machine-instance> \
      --volume-name=<datavolume|PVC>

7.15.2.

7.15.2.1.

  1. $ oc edit pvc <pvc_name>
  2. apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
       name: vm-disk-expand
    spec:
      accessModes:
         - ReadWriteMany
      resources:
        requests:
           storage: 3Gi
    # ...
7.15.2.2.

  1. apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: blank-image-datavolume
    spec:
      source:
        blank: {}
      storage:
        resources:
          requests:
            storage: <2Gi>
      storageClassName: "<storage_class>"

  2. $ oc create -f <blank-image-datavolume>.yaml
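
Once created, the blank DataVolume can be attached to a VM like any other data volume. A minimal sketch, where the VM name `example-vm` and the disk name `blank-disk` are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: blank-disk
      volumes:
      # references the blank DataVolume created above
      - dataVolume:
          name: blank-image-datavolume
        name: blank-disk
```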

7.15.3.

7.15.3.1.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <vm_name>
    spec:
      template:
    # ...
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
                errorPolicy: report
                disk1: disk_one
              - disk:
                  bus: virtio
                name: cloudinitdisk
                disk2: disk_two
                shareable: true
              interfaces:
              - masquerade: {}
                name: default
7.15.3.2.

Important

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-0
    spec:
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: sata
                name: rootdisk
              - errorPolicy: report
                lun:
                  bus: scsi
                  reservation: true
                name: na-shared
                serial: shared1234
          volumes:
          - dataVolume:
              name: vm-0
            name: rootdisk
          - name: na-shared
            persistentVolumeClaim:
              claimName: pvc-na-share
7.15.3.2.1.

7.15.3.2.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-0
    spec:
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: sata
                name: rootdisk
              - errorPolicy: report
                lun:
                  bus: scsi
                  reservation: true
                name: na-shared
                serial: shared1234
          volumes:
          - dataVolume:
              name: vm-0
            name: rootdisk
          - name: na-shared
            persistentVolumeClaim:
              claimName: pvc-na-share
7.15.3.3.

7.15.3.3.1.

7.15.3.3.2.

  1. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
    '[{"op":"replace","path":"/spec/featureGates/persistentReservation", "value": true}]'
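
The feature gate can also be enabled by editing the HyperConverged CR directly instead of patching it; a sketch of the equivalent fragment, derived from the patch path (`/spec/featureGates/persistentReservation`):

```yaml
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    # equivalent to the "op": "replace" patch shown above
    persistentReservation: true
```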

Chapter 8.

8.1.

Note

Figure 8.1.

Note

8.1.1.

8.1.2.

8.1.3.

  1. Note

Note

8.1.3.1.

Table 8.1.

8.1.4.

8.1.5.

8.1.6.

8.2.

Note

8.2.1.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - name: default
                  masquerade: {}
                  ports:
                    - port: 80
    # ...
          networks:
          - name: default
            pod: {}

    Note

  2. $ oc create -f <vm-name>.yaml

8.2.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm-ipv6
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - name: default
                  masquerade: {}
                  ports:
                    - port: 80
    # ...
          networks:
          - name: default
            pod: {}
          volumes:
          - cloudInitNoCloud:
              networkData: |
                version: 2
                ethernets:
                  eth0:
                    dhcp4: true
                    addresses: [ fd10:0:2::2/120 ]
                    gateway6: fd10:0:2::1
  2. $ oc create -f example-vm-ipv6.yaml

$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"

8.2.3.

Note

8.3.

8.3.1.

Note

8.3.2.

8.3.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key
    # ...

    Note

  2. apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      namespace: example-namespace
    spec:
    # ...
      selector:
        special: key
      type: NodePort
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30000
  3. $ oc create -f example-service.yaml

  • $ oc get service -n example-namespace

8.4.

Important

8.4.1.

  1. apiVersion: v1
    kind: Service
    metadata:
      name: mysubdomain
    spec:
      selector:
        expose: me
      clusterIP: None
      ports:
      - protocol: TCP
        port: 1234
        targetPort: 1234
  2. $ oc create -f headless_service.yaml

8.4.2.

  1. $ oc edit vm <vm_name>

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-fedora
    spec:
      template:
        metadata:
          labels:
            expose: me
        spec:
          hostname: "myvm"
          subdomain: "mysubdomain"
    # ...

8.4.3.

  1. $ virtctl console vm-fedora
  2. $ ping myvm.mysubdomain.<namespace>.svc.cluster.local

    PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data.
    64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms

8.5.

Note

8.5.1.

  • apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy
    spec:
      desiredState:
        interfaces:
          - name: br1
            description: Linux bridge with eth1 as a port
            type: linux-bridge
            state: up
            ipv4:
              enabled: false
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1

8.5.2.

8.5.2.1.

Warning

  1. Note

8.5.2.2.

Warning

  1. apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "bridge-network",
          "type": "bridge",
          "bridge": "br1",
          "macspoofchk": false,
          "vlan": 100,
          "disableContainerInterface": true,
          "preserveDefaultVlan": false
        }

    Note

  2. $ oc create -f network-attachment-definition.yaml

  • $ oc get network-attachment-definition bridge-network

8.5.3.

8.5.3.1.


8.5.3.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - bridge: {}
                  name: bridge-net
    # ...
          networks:
            - name: bridge-net
              multus:
                networkName: bridge-network
  2. $ oc apply -f example-vm.yaml

8.6.

8.6.1.

Note

  1. apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: <name>
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: <sriov_resource_name>
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      priority: <priority>
      mtu: <mtu>
      numVfs: <num>
      nicSelector:
        vendor: "<vendor_code>"
        deviceID: "<device_id>"
        pfNames: ["<pf_name>", ...]
        rootDevices: ["<pci_bus_id>", "..."]
      deviceType: vfio-pci
      isRdma: false

    Note

  2. $ oc create -f <name>-sriov-node-network.yaml

  3. $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'

8.6.2.

Note

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name>
  namespace: openshift-sriov-network-operator
spec:
  resourceName: <sriov_resource_name>
  networkNamespace: <target_namespace>
  vlan: <vlan>
  spoofChk: "<spoof_check>"
  linkState: <link_state>
  maxTxRate: <max_tx_rate>
  minTxRate: <min_tx_rate>
  vlanQoS: <vlan_qos>
  trust: "<trust_vf>"
  capabilities: <capabilities>
  1. $ oc create -f <name>-sriov-network.yaml
  2. $ oc get net-attach-def -n <namespace>

8.6.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      domain:
        devices:
          interfaces:
          - name: nic1
            sriov: {}
      networks:
      - name: nic1
        multus:
          networkName: sriov-network
    # ...
  2. $ oc apply -f <vm_sriov>.yaml

8.6.4.

8.7.

8.7.1.

    1. $ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""
    2. apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfigPool
      metadata:
        name: worker-dpdk
        labels:
          machineconfiguration.openshift.io/role: worker-dpdk
      spec:
        machineConfigSelector:
          matchExpressions:
            - key: machineconfiguration.openshift.io/role
              operator: In
              values:
                - worker
                - worker-dpdk
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker-dpdk: ""

  1. apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: profile-1
    spec:
      cpu:
        isolated: 4-39,44-79
        reserved: 0-3,40-43
      globallyDisableIrqLoadBalancing: true
      hugepages:
        defaultHugepagesSize: 1G
        pages:
        - count: 8
          node: 0
          size: 1G
      net:
        userLevelNetworking: true
      nodeSelector:
        node-role.kubernetes.io/worker-dpdk: ""
      numa:
        topologyPolicy: single-numa-node

    Note

  2. $ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
  3. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
        --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]'
    Note

  4. $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
        --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]'
    Note

  5. apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: policy-1
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: intel_nics_dpdk
      deviceType: vfio-pci
      mtu: 9000
      numVfs: 4
      priority: 99
      nicSelector:
        vendor: "8086"
        deviceID: "1572"
        pfNames:
          - eno3
        rootDevices:
          - "0000:19:00.2"
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"

8.7.1.1.

  1. $ oc label node <node_name> node-role.kubernetes.io/worker-dpdk-
  2. $ oc delete mcp worker-dpdk

8.7.2.

  1. $ oc create ns dpdk-checkup-ns
  2. apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: dpdk-sriovnetwork
      namespace: openshift-sriov-network-operator
    spec:
      ipam: |
        {
          "type": "host-local",
          "subnet": "10.56.217.0/24",
          "rangeStart": "10.56.217.171",
          "rangeEnd": "10.56.217.181",
          "routes": [{
            "dst": "0.0.0.0/0"
          }],
          "gateway": "10.56.217.1"
        }
      networkNamespace: dpdk-checkup-ns
      resourceName: intel_nics_dpdk
      spoofChk: "off"
      trust: "on"
      vlan: 1019

8.7.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: rhel-dpdk-vm
    spec:
      running: true
      template:
        metadata:
          annotations:
            cpu-load-balancing.crio.io: disable
            cpu-quota.crio.io: disable
            irq-load-balancing.crio.io: disable
        spec:
          domain:
            cpu:
              sockets: 1
              cores: 5
              threads: 2
              dedicatedCpuPlacement: true
              isolateEmulatorThread: true
            interfaces:
              - masquerade: {}
                name: default
              - model: virtio
                name: nic-east
                pciAddress: '0000:07:00.0'
                sriov: {}
              networkInterfaceMultiqueue: true
              rng: {}
          memory:
            hugepages:
              pageSize: 1Gi
              guest: 8Gi
          networks:
            - name: default
              pod: {}
            - multus:
                networkName: dpdk-net
              name: nic-east
    # ...

  2. $ oc apply -f <file_name>.yaml
    1. $ grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8"
    2. $ dnf install -y tuned-profiles-cpu-partitioning
      $ echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf

      $ tuned-adm profile cpu-partitioning
    3. $ dnf install -y driverctl
      $ driverctl set-override 0000:07:00.0 vfio-pci

8.8.

Note

  1. Note

8.8.1.

Note

8.8.1.1.

  1. apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: l2-network
      namespace: my-namespace
    spec:
      config: |-
        {
                "cniVersion": "0.3.1",
                "name": "my-namespace-l2-network",
                "type": "ovn-k8s-cni-overlay",
                "topology": "layer2",
                "mtu": 1300,
                "netAttachDefName": "my-namespace/l2-network"
        }

    Note

  2. $ oc apply -f <filename>.yaml
8.8.1.2.

  1. apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: mapping
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      desiredState:
        ovn:
          bridge-mappings:
          - localnet: localnet-network
            bridge: br-ex
            state: present

    Note

  2. apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: localnet-network
      namespace: default
    spec:
      config: |-
        {
                "cniVersion": "0.3.1",
                "name": "localnet-network",
                "type": "ovn-k8s-cni-overlay",
                "topology": "localnet",
                "netAttachDefName": "default/localnet-network"
        }
  3. $ oc apply -f <filename>.yaml
8.8.1.3.

8.8.1.4.

8.8.2.

8.8.2.1.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-server
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              interfaces:
              - name: secondary
                bridge: {}
            resources:
              requests:
                memory: 1024Mi
          networks:
          - name: secondary
            multus:
              networkName: <nad_name>
          nodeSelector:
            node-role.kubernetes.io/worker: ''
    # ...
  2. $ oc apply -f <filename>.yaml

8.9.

Note

8.9.1.

Note

8.9.2.

  1. $ virtctl start <vm_name> -n <namespace>
  2. $ oc edit vm <vm_name>

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-fedora
    template:
      spec:
        domain:
          devices:
            interfaces:
            - name: defaultnetwork
              masquerade: {}
            # new interface
            - name: <secondary_nic>
              bridge: {}
        networks:
        - name: defaultnetwork
          pod: {}
        # new network
        - name: <secondary_nic>
          multus:
            networkName: <nad_name>
    # ...

  3. $ virtctl migrate <vm_name>

  1. $ oc get VirtualMachineInstanceMigration -w

    NAME                        PHASE             VMI
    kubevirt-migrate-vm-lj62q   Scheduling        vm-fedora
    kubevirt-migrate-vm-lj62q   Scheduled         vm-fedora
    kubevirt-migrate-vm-lj62q   PreparingTarget   vm-fedora
    kubevirt-migrate-vm-lj62q   TargetReady       vm-fedora
    kubevirt-migrate-vm-lj62q   Running           vm-fedora
    kubevirt-migrate-vm-lj62q   Succeeded         vm-fedora

  2. $ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }"

    [
      {
        "infoSource": "domain, guest-agent",
        "interfaceName": "eth0",
        "ipAddress": "10.130.0.195",
        "ipAddresses": [
          "10.130.0.195",
          "fd02:0:0:3::43c"
        ],
        "mac": "52:54:00:0e:ab:25",
        "name": "default",
        "queueCount": 1
      },
      {
        "infoSource": "domain, guest-agent, multus-status",
        "interfaceName": "eth1",
        "mac": "02:d8:b8:00:00:2a",
        "name": "bridge-interface", 
    1
    
        "queueCount": 1
      }
    ]

    1

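The interface list returned by `oc get vmi … -ojsonpath="{ @.status.interfaces }"` can be inspected programmatically, for example to find hot-plugged interfaces for which the guest agent has not yet reported an IP address. A minimal sketch, assuming the JSON shown above is saved or piped in (the helper function name is illustrative, not part of any API):

```python
import json

# Sample of the `oc get vmi <vm_name> -ojsonpath='{ @.status.interfaces }'` output above
raw = '''
[
  {"infoSource": "domain, guest-agent", "interfaceName": "eth0",
   "ipAddress": "10.130.0.195", "name": "default", "queueCount": 1},
  {"infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1",
   "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", "queueCount": 1}
]
'''

def interfaces_without_ip(interfaces):
    """Return names of interfaces that have no reported IP address yet."""
    return [i["name"] for i in interfaces if not i.get("ipAddress")]

ifaces = json.loads(raw)
print(interfaces_without_ip(ifaces))  # ['bridge-interface']
```

Here the hot-plugged bridge interface appears in the status list but has no `ipAddress` field until the guest configures it.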
8.9.3.

Note

  1. $ oc edit vm <vm_name>

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-fedora
    template:
      spec:
        domain:
          devices:
            interfaces:
              - name: defaultnetwork
                masquerade: {}
              # set the interface state to absent
              - name: <secondary_nic>
            state: absent
            bridge: {}
        networks:
          - name: defaultnetwork
            pod: {}
          - name: <secondary_nic>
            multus:
              networkName: <nad_name>
    # ...
  2. $ virtctl migrate <vm_name>

8.10.

8.10.1.

Important

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-istio
      name: vm-istio
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-istio
            app: vm-istio
          annotations:
            sidecar.istio.io/inject: "true"
        spec:
          domain:
            devices:
              interfaces:
              - name: default
                masquerade: {}
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
            resources:
              requests:
                memory: 1024M
          networks:
          - name: default
            pod: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - containerDisk:
              image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel
            name: containerdisk
  2. $ oc apply -f <vm_name>.yaml
  3. apiVersion: v1
    kind: Service
    metadata:
      name: vm-istio
    spec:
      selector:
        app: vm-istio
      ports:
        - port: 8080
          name: http
          protocol: TCP
  4. $ oc create -f <service_name>.yaml

8.11.

8.11.1.

  1. apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network
      namespace: openshift-cnv
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1",
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts",
          "range": "10.200.5.0/24"
        }
      }'
  2. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network>
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...

  • $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'

8.11.2.

8.12.

8.12.1.

8.12.1.1.

Note

    • kind: VirtualMachine
      spec:
      # ...
        template:
        # ...
          spec:
            volumes:
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    eth1:
                      dhcp4: true
    • kind: VirtualMachine
      spec:
      # ...
        template:
        # ...
          spec:
            volumes:
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    eth1:
                      addresses:
                      - 10.10.10.14/24

8.12.2.

8.12.2.1.

Note

8.12.2.2.

Note

  • $ oc describe vmi <vmi_name>

    # ...
    Interfaces:
       Interface Name:  eth0
       Ip Address:      10.244.0.37/24
       Ip Addresses:
         10.244.0.37/24
         fe80::858:aff:fef4:25/64
       Mac:             0a:58:0a:f4:00:25
       Name:            default
       Interface Name:  v2
       Ip Address:      1.1.1.7/24
       Ip Addresses:
         1.1.1.7/24
         fe80::f4d9:70ff:fe13:9089/64
       Mac:             f6:d9:70:13:90:89
       Interface Name:  v1
       Ip Address:      1.1.1.1/24
       Ip Addresses:
         1.1.1.1/24
         1.1.1.2/24
         1.1.1.4/24
         2001:de7:0:f101::1/64
         2001:db8:0:f101::1/64
         fe80::1420:84ff:fe10:17aa/64
       Mac:             16:20:84:10:17:aa

8.13.

Important

8.13.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
        featureGates:
          deployKubeSecondaryDNS: true
    # ...
  3. $ oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \
      --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'
  4. $ oc get service -n openshift-cnv

    NAME       TYPE             CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
    dns-lb     LoadBalancer     172.30.27.5    10.46.41.94      53:31829/TCP     5s

  5. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  6. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      featureGates:
        deployKubeSecondaryDNS: true
      kubeSecondaryDNSNameServerIP: "10.46.41.94"
    # ...
  7. $ oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'

    openshift.example.com

  8. vm.<FQDN>. IN NS ns.vm.<FQDN>.
    ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP>

8.13.2.

  • $ oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain

  1. $ oc get vm -n <namespace> <vm_name> -o yaml

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              interfaces:
                - bridge: {}
                  name: example-nic
    # ...
          networks:
          - multus:
              networkName: bridge-conf
            name: example-nic
  2. $ ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>
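The SSH command above targets a fully qualified domain name assembled from the interface name, VM name, namespace, and the cluster base domain retrieved in the previous step. A minimal sketch of how the FQDN is composed (the function name and the `cloud-user` login are illustrative assumptions):

```python
def secondary_dns_fqdn(interface, vm, namespace, cluster_fqdn):
    """Build the FQDN resolved by the secondary DNS service for a VM interface:
    <interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>"""
    return f"{interface}.{vm}.{namespace}.vm.{cluster_fqdn}"

# Using the example VM and base domain from this section
fqdn = secondary_dns_fqdn("example-nic", "example-vm", "example-namespace",
                          "openshift.example.com")
print(f"ssh cloud-user@{fqdn}")
```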

8.14.

Note

8.14.1.

  • $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore
  • $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-

Chapter 9.

9.1.

9.1.1.

9.1.2.

9.1.3.

9.1.4.

9.2.

Important

9.2.1.

Warning

  1. $ oc edit storageprofile <storage_class>
  2. apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: <unknown_provisioner_class>
    # ...
    spec:
      claimPropertySets:
      - accessModes:
        - ReadWriteOnce
        volumeMode: Filesystem
    status:
      provisioner: <unknown_provisioner>
      storageClass: <unknown_provisioner_class>
9.2.1.1.

9.2.1.2.

  • apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: ocs-storagecluster-ceph-rbd-virtualization
    spec:
      snapshotClass: ocs-storagecluster-rbdplugin-snapclass
  • $ oc patch VolumeSnapshotClass ocs-storagecluster-cephfsplugin-snapclass --type=merge -p '{"metadata":{"annotations":{"snapshot.storage.kubernetes.io/is-default-class":"true"}}}'
9.2.1.3.

  1. $ oc get storageprofile
  2. $ oc describe storageprofile <name>

    Name:         ocs-storagecluster-ceph-rbd-virtualization
    Namespace:
    Labels:       app=containerized-data-importer
                  app.kubernetes.io/component=storage
                  app.kubernetes.io/managed-by=cdi-controller
                  app.kubernetes.io/part-of=hyperconverged-cluster
                  app.kubernetes.io/version=4.17.2
                  cdi.kubevirt.io=
    Annotations:  <none>
    API Version:  cdi.kubevirt.io/v1beta1
    Kind:         StorageProfile
    Metadata:
      Creation Timestamp:  2023-11-13T07:58:02Z
      Generation:          2
      Owner References:
        API Version:           cdi.kubevirt.io/v1beta1
        Block Owner Deletion:  true
        Controller:            true
        Kind:                  CDI
        Name:                  cdi-kubevirt-hyperconverged
        UID:                   2d6f169a-382c-4caf-b614-a640f2ef8abb
      Resource Version:        4186799537
      UID:                     14aef804-6688-4f2e-986b-0297fd3aaa68
    Spec:
    Status:
      Claim Property Sets:
        accessModes:
          ReadWriteMany
        volumeMode:  Block
        accessModes:
          ReadWriteOnce
        volumeMode:  Block
        accessModes:
          ReadWriteOnce
        volumeMode:                   Filesystem
      Clone Strategy:                  csi-clone
      Data Import Cron Source Format:  snapshot
      Provisioner:                     openshift-storage.rbd.csi.ceph.com
      Snapshot Class:                  ocs-storagecluster-rbdplugin-snapclass
      Storage Class:                   ocs-storagecluster-ceph-rbd-virtualization
    Events:                            <none>
9.2.1.4.

Note

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
  cloneStrategy: csi-clone
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>

9.3.

9.3.1.

9.3.1.1.

Note

    • $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
        --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
    • $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
        --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'

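Both `oc patch` commands above pass the same JSON Patch (RFC 6902) document and differ only in the boolean value. A minimal sketch of how such a payload can be generated (the helper function is illustrative, not part of any tooling):

```python
import json

def feature_gate_patch(gate, value):
    """Build the JSON Patch document that toggles a HyperConverged feature gate."""
    return json.dumps([{"op": "replace",
                        "path": f"/spec/featureGates/{gate}",
                        "value": value}])

# The string passed to `oc patch ... --type json -p`
print(feature_gate_patch("enableCommonBootImageImport", False))
```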
9.3.2.

Important

9.3.2.1.

Important

    1. $ oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubevirt.io/is-default-virt-class"=="true")|.name'
    2. $ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "false"}}}'
    3. $ oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubernetes.io/is-default-class"=="true")|.name'
    4. $ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
    1. $ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}'
    2. $ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  1. $ oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron
  2. $ oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat
9.3.2.2.

Important

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          name: rhel9-image-cron
        spec:
          template:
            spec:
              storage:
                storageClassName: <storage_class>
          schedule: "0 */12 * * *"
          managedDataSource: <data_source>
    # ...
    Note

  3. $ oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron
  4. $ oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat
9.3.2.3.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          name: centos-stream9-image-cron
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
        spec:
          schedule: "0 */12 * * *"
          template:
            spec:
              source:
                registry:
                  url: docker://quay.io/containerdisks/centos-stream:9
              storage:
                resources:
                  requests:
                    storage: 30Gi
          garbageCollect: Outdated
          managedDataSource: centos-stream9
9.3.2.4.

Note

  1. $ oc edit storageprofile <storage_class>
  2. apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
    # ...
    spec:
      dataImportCronSourceFormat: snapshot

  1. $ oc get storageprofile <storage_class>  -oyaml

9.3.3.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
    1. Note

    2. apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
      spec:
        dataImportCronTemplates:
        - metadata:
            annotations:
              dataimportcrontemplate.kubevirt.io/enable: 'false'
            name: rhel8-image-cron
      # ...

9.3.4.

  1. $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
    # ...
    status:
    # ...
      dataImportCronTemplates:
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: centos-9-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: centos-stream9
          schedule: 55 8/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  url: docker://quay.io/containerdisks/centos-stream:9
              storage:
                resources:
                  requests:
                    storage: 30Gi
            status: {}
        status:
          commonTemplate: true
    # ...
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: user-defined-dic
        spec:
          garbageCollect: Outdated
          managedDataSource: user-defined-centos-stream9
          schedule: 55 8/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  pullMethod: node
                  url: docker://quay.io/containerdisks/centos-stream:9
              storage:
                resources:
                  requests:
                    storage: 30Gi
            status: {}
        status: {}
    # ...

9.4.

9.4.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. # ...
    spec:
      filesystemOverhead:
        global: "<new_global_value>" 
    1
    
        storageClass:
          <storage_class_name>: "<new_value_for_this_storage_class>" 
    2
    1
    2

  • $ oc get cdiconfig -o yaml

    $ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'

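Because CDI reserves the configured filesystem-overhead fraction inside a filesystem-mode PVC, a claim must be slightly larger than the disk image it holds. A minimal sketch of the sizing arithmetic, assuming the CDI default overhead of 0.055 when no override is set in the HyperConverged CR (the function name is illustrative):

```python
import math

def required_pvc_size(image_bytes, overhead=0.055):
    """Smallest PVC size (bytes) that leaves room for an image after CDI
    reserves the `overhead` fraction of the filesystem for metadata."""
    return math.ceil(image_bytes / (1 - overhead))

# A 30 GiB disk image needs a somewhat larger filesystem-mode claim
gib = 1024 ** 3
print(required_pvc_size(30 * gib) / gib)
```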
9.5.

9.5.1.

Important

  1. apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools:
      - name: any_name
        path: "/var/myvolumes"
      workload:
        nodeSelector:
          kubernetes.io/os: linux
  2. $ oc create -f hpp_cr.yaml
9.5.1.1.

Note

9.5.1.2.

Note

  1. apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi
    provisioner: kubevirt.io.hostpath-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      storagePool: my-storage-pool
  2. $ oc create -f storageclass_csi.yaml

9.5.2.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iso-pvc
spec:
  volumeMode: Block
  storageClassName: my-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

9.5.2.1.

Important

  1. apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools:
      - name: my-storage-pool
        path: "/var/myvolumes"
        pvcTemplate:
          volumeMode: Block
          storageClassName: my-storage-class
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
      workload:
        nodeSelector:
          kubernetes.io/os: linux
  2. $ oc create -f hpp_pvc_template_pool.yaml

9.6.

9.6.1.

Note

  1. apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <datavolume-cloner>
    rules:
    - apiGroups: ["cdi.kubevirt.io"]
      resources: ["datavolumes/source"]
      verbs: ["*"]
  2. $ oc create -f <datavolume-cloner.yaml>
  3. apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <allow-clone-to-user>
      namespace: <source_namespace>
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: <destination_namespace>
    roleRef:
      kind: ClusterRole
      name: datavolume-cloner
      apiGroup: rbac.authorization.k8s.io
  4. $ oc create -f <datavolume-cloner.yaml>

9.7.

9.7.1.

9.7.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      resourceRequirements:
        storageWorkloads:
          limits:
            cpu: "500m"
            memory: "2Gi"
          requests:
            cpu: "250m"
            memory: "1Gi"

9.8.

9.8.1.

Note

9.8.2.


9.8.3.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      scratchSpaceStorageClass: "<storage_class>"

9.8.4.


9.9.

9.9.1.

9.9.2.

  • apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: preallocated-datavolume
    spec:
      source:
        registry:
          url: <image_url>
      storage:
        resources:
          requests:
            storage: 1Gi
      preallocation: true
    # ...

9.10.

9.10.1.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    v1.multus-cni.io/default-network: bridge-network
# ...

9.11.

9.11.1.

Chapter 10.

10.1.

10.1.1.

  • Note

    Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)

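The memory-headroom rule quoted above can be sketched as a small calculation; a minimal example, where the function name and sample values are illustrative, not from the source:

```python
def live_migration_headroom(max_parallel_drain_nodes, node_vm_memory_requests_gib):
    """Memory (GiB) to keep available for node drains: the product of the
    maximum number of nodes that can drain in parallel and the highest total
    VM memory request allocation across nodes."""
    return max_parallel_drain_nodes * max(node_vm_memory_requests_gib)

# e.g. two nodes may drain at once; the busiest node runs 64 GiB of VM requests
print(live_migration_headroom(2, [48, 64, 32]))  # 128
```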
10.1.2.

Note

10.1.3.

10.1.4.

10.2.

10.2.1.

  • $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        bandwidthPerMigration: 64Mi
        completionTimeoutPerGiB: 800
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
        allowPostCopy: false
Note

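Since `completionTimeoutPerGiB` applies per GiB of guest memory, the effective point at which a migration is canceled scales with the size of the VM. A minimal sketch of that relationship (the function name is illustrative):

```python
def migration_timeout_seconds(completion_timeout_per_gib, vm_memory_gib):
    """Seconds allowed before a migration is canceled; completionTimeoutPerGiB
    is applied per GiB of memory being migrated."""
    return completion_timeout_per_gib * vm_memory_gib

# An 8 GiB VM with the completionTimeoutPerGiB: 800 setting shown above
print(migration_timeout_seconds(800, 8))  # 6400
```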
10.2.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        bandwidthPerMigration: 0Mi
        completionTimeoutPerGiB: 150
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 1
        progressTimeout: 150
        allowPostCopy: true
Note

10.2.4.

Tip

10.2.4.1.

Note

    1. $ oc edit vm <vm_name>
    2. apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: <vm_name>
        namespace: default
        labels:
          app: my-app
          environment: production
      spec:
        template:
          metadata:
            labels:
              kubevirt.io/domain: <vm_name>
              kubevirt.io/size: large
              kubevirt.io/environment: production
      # ...
  1. apiVersion: migrations.kubevirt.io/v1alpha1
    kind: MigrationPolicy
    metadata:
      name: <migration_policy>
    spec:
      selectors:
        namespaceSelector:
          hpc-workloads: "True"
          xyz-workloads-type: ""
        virtualMachineInstanceSelector:
          kubevirt.io/environment: "production"
  2. $ oc create -f <migration_policy>.yaml

10.3.

Tip

10.3.1.

10.3.1.1.

Note

10.3.1.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: <migration_name>
    spec:
      vmiName: <vm_name>
  2. $ oc create -f <migration_name>.yaml

  • $ oc describe vmi <vm_name> -n <namespace>

    # ...
    Status:
      Conditions:
        Last Probe Time:       <nil>
        Last Transition Time:  <nil>
        Status:                True
        Type:                  LiveMigratable
      Migration Method:  LiveMigration
      Migration State:
        Completed:                    true
        End Timestamp:                2018-12-24T06:19:42Z
        Migration UID:                d78c8962-0743-11e9-a540-fa163e0c69f1
        Source Node:                  node2.example.com
        Start Timestamp:              2018-12-24T06:19:35Z
        Target Node:                  node1.example.com
        Target Node Address:          10.9.0.18:43891
        Target Node Domain Detected:  true

10.3.2.

10.3.2.1.

10.3.2.2.

  • $ oc delete vmim migration-job

Chapter 11.

11.1.

Note

Important

Note

11.1.1.

Important

Table 11.1.

11.1.1.1.

Important

  1. $ oc edit vm <vm_name> -n <namespace>

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <vm_name>
    spec:
      template:
        spec:
          evictionStrategy: LiveMigrateIfPossible
    # ...
  2. $ virtctl restart <vm_name> -n <namespace>
11.1.1.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      evictionStrategy: LiveMigrate
    # ...

11.1.2.

Important

11.1.2.1.

Table 11.2.

Note

11.1.2.2.

Important

  • $ oc edit vm <vm_name> -n <namespace>

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      runStrategy: Always
    # ...

11.1.3.

11.2.

11.2.1.

Example 11.1.

"486"
Conroe
athlon
core2duo
coreduo
kvm32
kvm64
n270
pentium
pentium2
pentium3
pentiumpro
phenom
qemu32
qemu64

11.2.2.

  • Example 11.2.

    apic
    clflush
    cmov
    cx16
    cx8
    de
    fpu
    fxsr
    lahf_lm
    lm
    mca
    mce
    mmx
    msr
    mtrr
    nx
    pae
    pat
    pge
    pni
    pse
    pse36
    sep
    sse
    sse2
    sse4.1
    ssse3
    syscall
    tsc

    Example 11.3.

    aes
    apic
    avx
    avx2
    bmi1
    bmi2
    clflush
    cmov
    cx16
    cx8
    de
    erms
    fma
    fpu
    fsgsbase
    fxsr
    hle
    invpcid
    lahf_lm
    lm
    mca
    mce
    mmx
    movbe
    msr
    mtrr
    nx
    pae
    pat
    pcid
    pclmuldq
    pge
    pni
    popcnt
    pse
    pse36
    rdtscp
    rtm
    sep
    smep
    sse
    sse2
    sse4.1
    sse4.2
    ssse3
    syscall
    tsc
    tsc-deadline
    x2apic
    xsave
  • Example 11.4.

    aes
    avx
    avx2
    bmi1
    bmi2
    erms
    fma
    fsgsbase
    hle
    invpcid
    movbe
    pcid
    pclmuldq
    popcnt
    rdtscp
    rtm
    sse4.2
    tsc-deadline
    x2apic
    xsave

11.2.3.

  • apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      obsoleteCPUs:
        cpuModels:
          - "<obsolete_cpu_1>"
          - "<obsolete_cpu_2>"
        minCPUModel: "<minimum_cpu_model>"

11.3.

11.3.1.

  • $ oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true

11.4.

11.4.1.

11.4.2.

  1. $ oc adm cordon <node_name>
  2. $ oc adm drain <node_name> --force=true

  3. $ oc delete node <node_name>

11.4.3.

11.4.3.1.

  • $ oc get vmis -A

Chapter 12.

12.1.

12.2.

  • Important

Important

12.2.1.

Important

12.2.2.

12.2.2.1.

12.2.2.2.

12.2.3.

12.2.3.1.

  1. Example 12.1.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. $ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml
  3. apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
  4. $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.17.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid

  6. $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.completionTimestamp: "2022-01-01T09:00:00Z"
      status.startTimestamp: "2022-01-01T09:00:07Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000"
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"
  9. $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. $ oc delete -f <latency_sa_roles_rolebinding>.yaml
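The checkup reports latencies in nanoseconds (`status.result.maxLatencyNanoSec`), while the limit is configured in milliseconds (`spec.param.maxDesiredLatencyMilliseconds`). A minimal sketch of the unit conversion behind the pass/fail result (the function name is illustrative):

```python
def latency_within_limit(max_latency_nanosec, max_desired_ms):
    """Compare the measured maximum latency (ns) with the configured limit (ms)."""
    return max_latency_nanosec / 1_000_000 <= max_desired_ms

# Values from the example ConfigMap above: 244000 ns measured, 10 ms allowed
print(latency_within_limit(244000, 10))  # True (0.244 ms)
```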
12.2.3.2.

  • apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubevirt-storage-checkup-clustereader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-reader
    subjects:
    - kind: ServiceAccount
      name: storage-checkup-sa
      namespace: <target_namespace>

  1. Example 12.2.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: storage-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: storage-checkup-role
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: ["get", "update"]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachines" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "get" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume" ]
        verbs: [ "update" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstancemigrations" ]
        verbs: [ "create" ]
      - apiGroups: [ "cdi.kubevirt.io" ]
        resources: [ "datavolumes" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "" ]
        resources: [ "persistentvolumeclaims" ]
        verbs: [ "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: storage-checkup-role
    subjects:
      - kind: ServiceAccount
        name: storage-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: storage-checkup-role
  2. $ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
  3. ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      namespace: $CHECKUP_NAMESPACE
    data:
      spec.timeout: 10m
      spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization
      spec.param.vmiTimeout: 3m
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: storage-checkup
      namespace: $CHECKUP_NAMESPACE
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccount: storage-checkup-sa
          restartPolicy: Never
          containers:
            - name: storage-checkup
              image: quay.io/kiagnose/kubevirt-storage-checkup:main
              imagePullPolicy: Always
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: $CHECKUP_NAMESPACE
                - name: CONFIGMAP_NAME
                  value: storage-checkup-config

  4. $ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
  5. $ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m
  6. $ oc get configmap storage-checkup-config -n <target_namespace> -o yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-storage
    data:
      spec.timeout: 10m
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2023-07-31T13:14:38Z"
      status.completionTimestamp: "2023-07-31T13:19:41Z"
      status.result.cnvVersion: 4.17.2
      status.result.defaultStorageClass: trident-nfs
      status.result.goldenImagesNoDataSource: <data_import_cron_list>
      status.result.goldenImagesNotUpToDate: <data_import_cron_list>
      status.result.ocpVersion: 4.17.0
      status.result.pvcBound: "true"
      status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list>
      status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list>
      status.result.storageProfilesWithSmartClone: <storage_profile_list>
      status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list>
      status.result.storageProfilesWithRWX: |-
        ocs-storagecluster-ceph-rbd
        ocs-storagecluster-ceph-rbd-virtualization
        ocs-storagecluster-cephfs
        trident-iscsi
        trident-minio
        trident-nfs
        windows-vms
      status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted
      status.result.vmHotplugVolume: |-
        VMI "vmi-under-test-dhkb8" hotplug volume ready
        VMI "vmi-under-test-dhkb8" hotplug volume removed
      status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed
      status.result.vmVolumeClone: 'DV cloneType: "csi-clone"'
      status.result.vmsWithNonVirtRbdStorageClass: <vm_list>
      status.result.vmsWithUnsetEfsStorageClass: <vm_list>
  7. $ oc delete job -n <target_namespace> storage-checkup
    $ oc delete config-map -n <target_namespace> storage-checkup-config
  8. $ oc delete -f <storage_sa_roles_rolebinding>.yaml
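When the storage checkup is run from automation, the overall result can be gated on `status.succeeded` without reading the whole ConfigMap by eye. A minimal sketch, assuming the ConfigMap output was saved locally as `storage-checkup.yaml` (a hypothetical file name; the data lines mirror the sample above):

```shell
# Sketch: extract status.succeeded from a saved copy of the checkup ConfigMap.
cat > storage-checkup.yaml <<'EOF'
data:
  spec.timeout: 10m
  status.succeeded: "true"
  status.failureReason: ""
EOF
succeeded=$(awk -F'"' '/status.succeeded:/ { print $2 }' storage-checkup.yaml)
echo "checkup succeeded: ${succeeded}"
```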
12.2.3.3.

  1. Example 12.3.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "get", "update" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kiagnose-configmap-access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-dpdk-checker
    rules:
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "create", "get", "delete" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/console" ]
        verbs: [ "get" ]
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "create", "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-dpdk-checker
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubevirt-dpdk-checker
  2. $ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
  3. apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name>
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
  4. $ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
  5. apiVersion: batch/v1
    kind: Job
    metadata:
      name: dpdk-checkup
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: dpdk-checkup-sa
          restartPolicy: Never
          containers:
            - name: dpdk-checkup
              image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.17.0
              imagePullPolicy: Always
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target-namespace>
                - name: CONFIGMAP_NAME
                  value: dpdk-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid

  6. $ oc apply -n <target_namespace> -f <dpdk_job>.yaml
  7. $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
  8. $ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2023-07-31T13:14:38Z"
      status.completionTimestamp: "2023-07-31T13:19:41Z"
      status.result.trafficGenSentPackets: "480000000"
      status.result.trafficGenOutputErrorPackets: "0"
      status.result.trafficGenInputErrorPackets: "0"
      status.result.trafficGenActualNodeName: worker-dpdk1
      status.result.vmUnderTestActualNodeName: worker-dpdk2
      status.result.vmUnderTestReceivedPackets: "480000000"
      status.result.vmUnderTestRxDroppedPackets: "0"
      status.result.vmUnderTestTxDroppedPackets: "0"
  9. $ oc delete job -n <target_namespace> dpdk-checkup
    $ oc delete config-map -n <target_namespace> dpdk-checkup-config
  10. $ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
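A DPDK checkup run is usually judged on packet loss between the traffic generator and the VM under test. This sketch (not part of the checkup) computes the loss from the `status.result.trafficGenSentPackets` and `status.result.vmUnderTestReceivedPackets` fields, using the sample values shown above:

```shell
# Sketch: packet-loss arithmetic over the sample DPDK checkup results.
sent=480000000      # status.result.trafficGenSentPackets
received=480000000  # status.result.vmUnderTestReceivedPackets
lost=$((sent - received))
loss_pct=$(awk -v l="$lost" -v s="$sent" 'BEGIN { printf "%.4f", (l / s) * 100 }')
echo "lost=${lost} (${loss_pct}%)"
```

A non-zero loss generally indicates the VM under test could not keep up with the generated traffic.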
12.2.3.3.1.

Table 12.1.

12.2.3.3.2.

  • # dnf install guestfs-tools

  1. # composer-cli distros list
    Note

    # usermod -a -G weldr <user>
    $ newgrp weldr
  2. $ cat << EOF > dpdk-vm.toml
    name = "dpdk_image"
    description = "Image to use with the DPDK checkup"
    version = "0.0.1"
    distro = "rhel-9.4"
    
    [[customizations.user]]
    name = "root"
    password = "redhat"
    
    [[packages]]
    name = "dpdk"
    
    [[packages]]
    name = "dpdk-tools"
    
    [[packages]]
    name = "driverctl"
    
    [[packages]]
    name = "tuned-profiles-cpu-partitioning"
    
    [customizations.kernel]
    append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"
    
    [customizations.services]
    disabled = ["NetworkManager-wait-online", "sshd"]
    EOF
  3. # composer-cli blueprints push dpdk-vm.toml
  4. # composer-cli compose start dpdk_image qcow2
  5. # composer-cli compose status
  6. # composer-cli compose image <UUID>
  7. $ cat <<EOF >customize-vm
    #!/bin/bash
    
    # Setup hugepages mount
    mkdir -p /mnt/huge
    echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab
    
    # Create vfio-noiommu.conf
    echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
    
    # Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
    sed -i 's/\(--allow-rpcs=[^"]*\)/\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga
    
    # Disable Bracketed-paste mode
    echo "set enable-bracketed-paste off" >> /root/.inputrc
    EOF
  8. $ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
  9. $ cat << EOF > Dockerfile
    FROM scratch
    COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
    EOF

  10. $ podman build . -t dpdk-rhel:latest
  11. $ podman push dpdk-rhel:latest
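The Dockerfile in step 9 hard-codes the compose UUID. A small sketch for templating it from a variable instead of editing it by hand (the UUID value here is a placeholder):

```shell
# Sketch: generate the step 9 Dockerfile for an arbitrary compose UUID.
UUID=1a2b3c4d
cat > Dockerfile <<EOF
FROM scratch
COPY --chown=107:107 ${UUID}-disk.qcow2 /disk/
EOF
cat Dockerfile
```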

12.3.

12.3.1.

12.3.2.


12.3.3.

Note


12.3.4.

Note

12.3.4.1.

Note

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0

12.3.4.2.

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0

12.3.4.3.
12.3.4.3.1.

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0

12.3.4.3.2.

kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}

kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}

12.3.4.3.3.

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0

12.3.4.4.

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0

Note

12.3.4.5.

12.4.

12.4.1.

  1. kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service
      namespace: dynamation
      labels:
        servicetype: metrics
    spec:
      ports:
        - name: exmet
          protocol: TCP
          port: 9100
          targetPort: 9100
      type: ClusterIP
      selector:
        monitor: metrics
  2. $ oc create -f node-exporter-service.yaml

12.4.2.

  1. $ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
  2. $ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
        --directory /usr/bin --strip 1 "*/node_exporter"
  3. [Unit]
    Description=Prometheus Metrics Exporter
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=multi-user.target
  4. $ sudo systemctl enable node_exporter.service
    $ sudo systemctl start node_exporter.service

  • $ curl http://localhost:9100/metrics

    go_gc_duration_seconds{quantile="0"} 1.5244e-05
    go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
    go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
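Individual values can be pulled out of the exporter's text format with `awk` alone. A minimal sketch, assuming the `/metrics` response was saved to a local file (`metrics.txt` is a hypothetical name; the lines mirror the sample output above):

```shell
# Sketch: extract the median GC pause from saved node_exporter output.
cat > metrics.txt <<'EOF'
go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
EOF
median=$(awk '$1 ~ /quantile="0.5"/ { print $2 }' metrics.txt)
echo "median GC pause: ${median}s"
```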

12.4.3.

  1. spec:
      template:
        metadata:
          labels:
            monitor: metrics
12.4.3.1.

  1. $ oc get service -n <namespace> <node-exporter-service>
  2. $ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"

    node_arp_entries{device="eth0"} 1
    node_boot_time_seconds 1.643153218e+09
    node_context_switches_total 4.4938158e+07
    node_cooling_device_cur_state{name="0",type="Processor"} 0
    node_cooling_device_max_state{name="0",type="Processor"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
    node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
    node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
    node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
    node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
    node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
    node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
    node_cpu_seconds_total{cpu="0",mode="system"} 464.15
    node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
    node_disk_discard_time_seconds_total{device="vda"} 0
    node_disk_discard_time_seconds_total{device="vdb"} 0
    node_disk_discarded_sectors_total{device="vda"} 0
    node_disk_discarded_sectors_total{device="vdb"} 0
    node_disk_discards_completed_total{device="vda"} 0
    node_disk_discards_completed_total{device="vdb"} 0
    node_disk_discards_merged_total{device="vda"} 0
    node_disk_discards_merged_total{device="vdb"} 0
    node_disk_info{device="vda",major="252",minor="0"} 1
    node_disk_info{device="vdb",major="252",minor="16"} 1
    node_disk_io_now{device="vda"} 0
    node_disk_io_now{device="vdb"} 0
    node_disk_io_time_seconds_total{device="vda"} 174
    node_disk_io_time_seconds_total{device="vdb"} 0.054
    node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
    node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
    node_disk_read_bytes_total{device="vda"} 3.71867136e+08
    node_disk_read_bytes_total{device="vdb"} 366592
    node_disk_read_time_seconds_total{device="vda"} 19.128
    node_disk_read_time_seconds_total{device="vdb"} 0.039
    node_disk_reads_completed_total{device="vda"} 5619
    node_disk_reads_completed_total{device="vdb"} 96
    node_disk_reads_merged_total{device="vda"} 5
    node_disk_reads_merged_total{device="vdb"} 0
    node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
    node_disk_write_time_seconds_total{device="vdb"} 0
    node_disk_writes_completed_total{device="vda"} 71584
    node_disk_writes_completed_total{device="vdb"} 0
    node_disk_writes_merged_total{device="vda"} 19761
    node_disk_writes_merged_total{device="vdb"} 0
    node_disk_written_bytes_total{device="vda"} 2.007924224e+09
    node_disk_written_bytes_total{device="vdb"} 0
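The `grep -vE "^#|^$"` filter used in the `curl` command above simply drops comment (`# HELP`/`# TYPE`) and blank lines, leaving only metric samples. A self-contained illustration on a local sample (`sample.txt` is a hypothetical file):

```shell
# Sketch: the comment/blank-line filter from the curl step, applied locally.
cat > sample.txt <<'EOF'
# HELP node_arp_entries ARP entries by device
# TYPE node_arp_entries gauge
node_arp_entries{device="eth0"} 1

node_boot_time_seconds 1.643153218e+09
EOF
kept=$(grep -vE "^#|^$" sample.txt | wc -l)
echo "metric lines kept: ${kept}"
```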

12.4.4.

  1. apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: node-exporter-metrics-monitor
      name: node-exporter-metrics-monitor
      namespace: dynamation
    spec:
      endpoints:
      - interval: 30s
        port: exmet
        scheme: http
      selector:
        matchLabels:
          servicetype: metrics
  2. $ oc create -f node-exporter-metrics-monitor.yaml
12.4.4.1.

  1. $ oc expose service -n <namespace> <node_exporter_service_name>
  2. $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

    NAME                    DNS
    node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

  3. $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

    go_gc_duration_seconds{quantile="0"} 1.5382e-05
    go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
    go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
    go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
    go_gc_duration_seconds{quantile="1"} 0.000189423

12.5.

Note

12.5.1.

12.5.1.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
    • apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
          featureGates:
            downwardMetrics: true
      # ...
    • apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
          featureGates:
            downwardMetrics: false
      # ...
12.5.1.2.

    • $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
        --type json -p '[{"op": "replace", "path": \
        "/spec/featureGates/downwardMetrics", \
        "value": true}]'
    • $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
        --type json -p '[{"op": "replace", "path": \
        "/spec/featureGates/downwardMetrics", \
        "value": false}]'

12.5.2.

  • apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: fedora
      namespace: default
    spec:
      dataVolumeTemplates:
        - metadata:
            name: fedora-volume
          spec:
            sourceRef:
              kind: DataSource
              name: fedora
              namespace: openshift-virtualization-os-images
            storage:
              resources: {}
              storageClassName: hostpath-csi-basic
      instancetype:
        name: u1.medium
      preference:
        name: fedora
      running: true
      template:
        metadata:
          labels:
            app.kubernetes.io/name: headless
        spec:
          domain:
            devices:
              downwardMetrics: {}
          subdomain: headless
          volumes:
            - dataVolume:
                name: fedora-volume
              name: rootdisk
            - cloudInitNoCloud:
                userData: |
                  #cloud-config
                  chpasswd:
                    expire: false
                  password: '<password>'
                  user: fedora
              name: cloudinitdisk


12.5.3.

Note

12.5.3.1.

  • $ sudo sh -c 'printf "GET /metrics/XML\n\n" > /dev/virtio-ports/org.github.vhostmd.1'
    $ sudo cat /dev/virtio-ports/org.github.vhostmd.1
12.5.3.2.

Note

  1. $ sudo dnf install -y vm-dump-metrics
  2. $ sudo vm-dump-metrics

    <metrics>
      <metric type="string" context="host">
        <name>HostName</name>
        <value>node01</value>
    [...]
      <metric type="int64" context="host" unit="s">
        <name>Time</name>
        <value>1619008605</value>
      </metric>
      <metric type="string" context="host">
        <name>VirtualizationVendor</name>
        <value>kubevirt.io</value>
      </metric>
    </metrics>
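Single values can be pulled from the `vm-dump-metrics` XML without installing XML tooling in the guest. A hedged sketch using `sed`, with the sample saved as `metrics.xml` (a hypothetical file name):

```shell
# Sketch: extract the first <value> element from saved vm-dump-metrics output.
cat > metrics.xml <<'EOF'
<metrics>
  <metric type="string" context="host">
    <name>VirtualizationVendor</name>
    <value>kubevirt.io</value>
  </metric>
</metrics>
EOF
vendor=$(sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p' metrics.xml | head -n1)
echo "vendor=${vendor}"
```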

12.6.

12.6.1.

12.6.1.1.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            httpGet:
              port: 1500
              path: /healthz
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            initialDelaySeconds: 120
            periodSeconds: 20
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 3
    # ...
  2. $ oc create -f <file_name>.yaml
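The probe fields combine into a worst-case time before the VM is first marked unhealthy: roughly `initialDelaySeconds + periodSeconds * failureThreshold`. This is plain arithmetic over the example values above, not an API call:

```shell
# Sketch: worst-case time until the probe reports failure, from the spec above.
initialDelaySeconds=120
periodSeconds=20
failureThreshold=3
worst_case=$((initialDelaySeconds + periodSeconds * failureThreshold))
echo "worst case until failure is reported: ${worst_case}s"
```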
12.6.1.2.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            initialDelaySeconds: 120
            periodSeconds: 20
            tcpSocket:
              port: 1500
            timeoutSeconds: 10
    # ...
  2. $ oc create -f <file_name>.yaml
12.6.1.3.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          livenessProbe:
            initialDelaySeconds: 120
            periodSeconds: 20
            httpGet:
              port: 1500
              path: /healthz
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            timeoutSeconds: 10
    # ...
  2. $ oc create -f <file_name>.yaml

12.6.2.

  • Note

Note

12.6.2.1.

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
      name: <vm-name>
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm2-rhel84-watchdog
        spec:
          domain:
            devices:
              watchdog:
                name: <watchdog>
                i6300esb:
                  action: "poweroff"
    # ...

  2. $ oc apply -f <file_name>.yaml
Important

  1. $ lspci | grep watchdog -i
    • # echo c > /proc/sysrq-trigger
    • # pkill -9 watchdog
12.6.2.2.

  1. # yum install watchdog
  2. #watchdog-device = /dev/watchdog
  3. # systemctl enable --now watchdog.service

12.6.3.

Important

  1. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            guestAgentPing: {}
            initialDelaySeconds: 120
            periodSeconds: 20
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 3
    # ...
  2. $ oc create -f <file_name>.yaml

12.7.


Chapter 13.

13.1.

13.1.1.

13.1.1.1.

13.1.1.1.1.

13.1.1.2.

13.1.2.

Table 13.1.

13.2.

13.2.1.

13.2.2.

13.2.3.

  • $ oc adm must-gather \
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
      -- /usr/bin/gather
13.2.3.1.

13.2.3.1.1.

Important

13.2.3.1.2.

Table 13.2.

$ oc adm must-gather --all-images

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
  -- <environment_variable_1> <environment_variable_2> <script_name>

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
  -- PROS=5 /usr/bin/gather 

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
  -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
  -- /usr/bin/gather --images

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.7 \
  -- /usr/bin/gather --instancetypes

13.3.

13.3.1.

  • $ oc get events -n <namespace>

    $ oc describe <resource> <resource_name>

13.3.2.

13.3.2.1.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      logVerbosityConfig:
        kubevirt:
          virtAPI: 5
          virtController: 4
          virtHandler: 3
          virtLauncher: 2
          virtOperator: 6
13.3.2.2.

13.3.2.3.

  1. $ oc get pods -n openshift-cnv

    Example 13.1.

    NAME                               READY   STATUS    RESTARTS   AGE
    disks-images-provider-7gqbc        1/1     Running   0          32m
    disks-images-provider-vg4kx        1/1     Running   0          32m
    virt-api-57fcc4497b-7qfmc          1/1     Running   0          31m
    virt-api-57fcc4497b-tx9nc          1/1     Running   0          31m
    virt-controller-76c784655f-7fp6m   1/1     Running   0          30m
    virt-controller-76c784655f-f4pbd   1/1     Running   0          30m
    virt-handler-2m86x                 1/1     Running   0          30m
    virt-handler-9qs6z                 1/1     Running   0          30m
    virt-operator-7ccfdbf65f-q5snk     1/1     Running   0          32m
    virt-operator-7ccfdbf65f-vllz8     1/1     Running   0          32m
  2. $ oc logs -n openshift-cnv <pod_name>
    Note

    Example 13.2.

    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"}
    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"}
    {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"}
    {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"}
    {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"}
    {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"}
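Because the component logs are structured JSON, they are easy to triage with `grep` once saved locally. A sketch, with the log saved as `virt-handler.log` (a hypothetical file; the lines mirror the sample output above):

```shell
# Sketch: count warning-level entries in a saved virt-handler log.
cat > virt-handler.log <<'EOF'
{"component":"virt-handler","level":"info","msg":"node-labeller is running"}
{"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model"}
EOF
warnings=$(grep -c '"level":"warning"' virt-handler.log)
echo "warnings=${warnings}"
```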

13.3.3.

Important

13.3.3.1.

13.3.3.2.

  1. $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      virtualMachineOptions:
        disableSerialConsoleLog: true
    #...
13.3.3.3.

13.3.3.4.

  1. $ oc edit vm <vm_name>
  2. apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              logSerialConsole: true
    #...
  3. $ oc apply -f <vm_name>.yaml
  4. $ virtctl restart <vm_name> -n <namespace>
13.3.3.5.

13.3.3.6.

  • $ oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log

13.3.4.

13.3.4.1.

13.3.4.2.

Note

Table 13.3.

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="storage"

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="deployment"

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="network"

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="compute"

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="schedule"

{log_type=~".+",kubernetes_container_name=~"<container>|<container>"}
|json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"

{log_type=~".+", kubernetes_container_name="compute"}|json
|!= "custom-ga-command"

Table 13.4.

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|= "error" != "timeout"

13.3.5.

13.3.6.

13.3.6.1.

$ oc describe dv <DataVolume>

13.3.6.2.

  • Status:
      Conditions:
        Last Heart Beat Time:  2020-07-15T03:58:24Z
        Last Transition Time:  2020-07-15T03:58:24Z
        Message:               PVC win10-rootdisk Bound
        Reason:                Bound
        Status:                True
        Type:                  Bound
    ...
      Events:
        Type     Reason     Age    From                   Message
        ----     ------     ----   ----                   -------
        Normal   Bound      24s    datavolume-controller  PVC example-dv Bound

  • Status:
      Conditions:
        Last Heart Beat Time:  2020-07-15T04:31:39Z
        Last Transition Time:  2020-07-15T04:31:39Z
        Message:               Import Complete
        Reason:                Completed
        Status:                False
        Type:                  Running
    ...
      Events:
        Type     Reason       Age                From                   Message
        ----     ------       ----               ----                   -------
        Warning  Error        12s (x2 over 14s)  datavolume-controller  Unable to connect
        to http data source: expected status code 200, got 404. Status: 404 Not Found

  • Status:
      Conditions:
        Last Heart Beat Time: 2020-07-15T04:31:39Z
        Last Transition Time:  2020-07-15T04:31:39Z
        Status:                True
        Type:                  Ready
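Scripts often need the same yes/no answer a human gets from reading the conditions: has the DataVolume reached `Ready`? A hedged sketch over a saved `oc describe dv` snippet (`dv-status.txt` is a hypothetical file name):

```shell
# Sketch: decide readiness from a saved "oc describe dv" conditions snippet.
cat > dv-status.txt <<'EOF'
Status:
  Conditions:
    Status:                True
    Type:                  Ready
EOF
if grep -q 'Type: *Ready' dv-status.txt && grep -q 'Status: *True' dv-status.txt; then
  ready=true
else
  ready=false
fi
echo "ready=${ready}"
```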

Chapter 14.

14.1.

Important

14.1.1.

  • Important

14.1.2.

14.1.3.

14.1.3.1.

14.1.3.2.

  • $ oc get kubevirt kubevirt-hyperconverged -n openshift-cnv -o yaml

    spec:
      developerConfiguration:
        featureGates:
          - Snapshot

  1. apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineSnapshot
    metadata:
      name: <snapshot_name>
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: <vm_name>
  2. $ oc create -f <snapshot_name>.yaml

    1. $ oc wait vmsnapshot/<snapshot_name> --for=condition=Ready
      • Note

  1. $ oc describe vmsnapshot <snapshot_name>

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineSnapshot
    metadata:
      creationTimestamp: "2020-09-30T14:41:51Z"
      finalizers:
      - snapshot.kubevirt.io/vmsnapshot-protection
      generation: 5
      name: mysnap
      namespace: default
      resourceVersion: "3897"
      selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot
      uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:42:03Z"
        reason: Operation complete
        status: "False"
        type: Progressing
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:42:03Z"
        reason: Operation complete
        status: "True"
        type: Ready
      creationTime: "2020-09-30T14:42:03Z"
      readyToUse: true
      sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
      virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d
      indications:
        - Online
      includedVolumes:
        - name: rootdisk
          kind: PersistentVolumeClaim
          namespace: default
        - name: datadisk1
          kind: DataVolume
          namespace: default

14.1.4.

14.1.5.

14.1.5.1.

14.1.5.2.

  1. apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineRestore
    metadata:
      name: <vm_restore>
    spec:
      target:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: <vm_name>
      virtualMachineSnapshotName: <snapshot_name>
  2. $ oc create -f <vm_restore>.yaml
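
As a filled-in sketch of the manifest above, using the example names `my-vm` and `my-vmsnapshot` that appear in the sample output in this section (the resource name `my-vmrestore` is illustrative), the restore object would be:

```yaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: my-vmrestore
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm
  virtualMachineSnapshotName: my-vmsnapshot
```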

  • $ oc get vmrestore <vm_restore>

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineRestore
    metadata:
      creationTimestamp: "2020-09-30T14:46:27Z"
      generation: 5
      name: my-vmrestore
      namespace: default
      ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: true
        controller: true
        kind: VirtualMachine
        name: my-vm
        uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
      resourceVersion: "5512"
      selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore
      uid: 71c679a8-136e-46b0-b9b5-f57175a6a041
    spec:
      target:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm
      virtualMachineSnapshotName: my-vmsnapshot
    status:
      complete: true
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:46:28Z"
        reason: Operation complete
        status: "False"
        type: Progressing
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:46:28Z"
        reason: Operation complete
        status: "True"
        type: Ready
      deletedDataVolumes:
      - test-dv1
      restoreTime: "2020-09-30T14:46:28Z"
      restores:
      - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
        persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
        volumeName: datavolumedisk1
        volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1

14.1.6.

14.1.6.1.

14.1.6.2.

  • $ oc delete vmsnapshot <snapshot_name>

  • $ oc get vmsnapshot

14.2.

Important

Note

14.2.1.

  1. Warning

14.2.2.

  • Note

  1. apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
      configuration:
        velero:
          defaultPlugins:
            - kubevirt
            - gcp
            - csi
            - openshift
          resourceTimeout: 10m
        nodeAgent:
          enable: true
          uploaderType: kopia
          podConfig:
            nodeSelector: <node_selector>
      backupLocations:
        - velero:
            provider: gcp
            default: true
            credential:
              key: cloud
              name: <default_secret>
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
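
The `backupLocations` entry above references provider credentials through `credential.key` and `credential.name`. Before creating the `DataProtectionApplication`, a Secret with that name and key must exist in the `openshift-adp` namespace. A hedged sketch (the credentials file path is a placeholder; the secret name must match the `name` value referenced in the manifest):

```shell
$ oc create secret generic <default_secret> \
    --namespace openshift-adp \
    --from-file=cloud=<path_to_credentials_file>
```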

  1. $ oc get all -n openshift-adp

    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s

  2. $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

  3. $ oc get backupstoragelocations.velero.io -n openshift-adp

    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

14.3.

14.3.1.

14.3.1.1.

14.3.1.2.

14.3.2.

14.3.2.1.

14.3.2.2.

Important

14.3.3.

14.3.4.

14.3.4.1.

  • Tip

14.3.4.2.

Legal Notice

Copyright © Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of the OpenJS Foundation.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
