Installing on OpenStack


OpenShift Container Platform 4.15

Installing OpenShift Container Platform on OpenStack

Red Hat OpenShift Documentation Team

Abstract

This document describes how to install OpenShift Container Platform on OpenStack.

Chapter 1. Preparing to install on OpenStack

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP).

1.1. Prerequisites

1.2. Choosing a method of installing OpenShift Container Platform on OpenStack

You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.

See Installation process for more information about installer-provisioned and user-provisioned installation processes.

1.2.1. Installing a cluster on installer-provisioned infrastructure

You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:

  • Installing a cluster on OpenStack with customizations: You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
  • Installing a cluster on OpenStack in a restricted network: You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.

1.2.2. Installing a cluster on user-provisioned infrastructure

You can install a cluster on RHOSP infrastructure that you provision, by using the following method:

  • Installing a cluster on OpenStack on your own infrastructure: You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, such as Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process.

1.3. Scanning RHOSP endpoints for legacy HTTPS certificates

Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field.

Important

OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in degraded cluster functionality.

Prerequisites

Procedure

  1. Save the following script on your machine:

    #!/usr/bin/env bash
    
    set -Eeuo pipefail
    
    declare catalog san
    catalog="$(mktemp)"
    san="$(mktemp)"
    readonly catalog san
    
    declare invalid=0
    
    openstack catalog list --format json --column Name --column Endpoints \
    	| jq -r '.[] | .Name as $name | .Endpoints[] | select(.interface=="public") | [$name, .interface, .url] | join(" ")' \
    	| sort \
    	> "$catalog"
    
    while read -r name interface url; do
    	# Ignore HTTP
    	if [[ ${url#"http://"} != "$url" ]]; then
    		continue
    	fi
    
    	# Remove the schema from the URL
    	noschema=${url#"https://"}
    
    	# If the schema was not HTTPS, error
    	if [[ "$noschema" == "$url" ]]; then
    		echo "ERROR (unknown schema): $name $interface $url"
    		exit 2
    	fi
    
    	# Remove the path and only keep host and port
    	noschema="${noschema%%/*}"
    	host="${noschema%%:*}"
    	port="${noschema##*:}"
    
    	# Add the port if it was implicit
    	if [[ "$port" == "$host" ]]; then
    		port='443'
    	fi
    
    	# Get the SAN fields
    	openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
    		| openssl x509 -noout -ext subjectAltName \
    		> "$san"
    
    	# openssl returns the empty string if no SAN is found.
    	# If a SAN is found, openssl is expected to return something like:
    	#
    	#    X509v3 Subject Alternative Name:
    	#        DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2
    	if [[ "$(grep -c "Subject Alternative Name" "$san" || true)" -gt 0 ]]; then
    		echo "PASS: $name $interface $url"
    	else
    		invalid=$((invalid+1))
    		echo "INVALID: $name $interface $url"
    	fi
    done < "$catalog"
    
    # clean up temporary files
    rm "$catalog" "$san"
    
    if [[ $invalid -gt 0 ]]; then
    	echo "${invalid} legacy certificates were detected. Update your certificates to include a SAN field."
    	exit 1
    else
    	echo "All HTTPS certificates for this cloud are valid."
    fi
  2. Run the script.
  3. Replace any certificates that the script reports as INVALID with certificates that contain a SAN field.
Important

You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message:

x509: certificate relies on legacy Common Name field, use SANs instead
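To replace a certificate that the script reports as INVALID, you need one that carries a SAN field. The following is a minimal self-signed sketch with `openssl`; the hostname and file names are placeholders, and for production endpoints you should instead request a SAN-bearing certificate from your certificate authority. Note that `-addext` requires OpenSSL 1.1.1 or later.

```shell
# Sketch only: issue a self-signed certificate that includes a SAN field.
# "osp1.example.net" and the output file names are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=osp1.example.net" \
    -addext "subjectAltName=DNS:osp1.example.net"

# Print the SAN extension; a legacy CommonName-only certificate
# would produce no output here.
openssl x509 -noout -ext subjectAltName -in server.crt
```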

1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually

Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field.

Important

OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Perform the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in degraded cluster functionality.

Procedure

  1. From a command line, run the following command to view the URL of RHOSP public endpoints:

    $ openstack catalog list

    Record the URL of each HTTPS endpoint that the command returns.

  2. For each public endpoint, note the host and the port.

    Tip

    Determine the host of an endpoint by removing the scheme, the port, and the path.

  3. For each endpoint, run the following commands to extract the SAN field of the certificate:

    1. Set a host variable:

      $ host=<host_name>
    2. Set a port variable:

      $ port=<port_number>

    3. $ openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
          | openssl x509 -noout -ext subjectAltName

      X509v3 Subject Alternative Name:
          DNS:your.host.example.net
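The host and port extraction performed in the previous steps can be sketched with shell parameter expansion, mirroring the logic of the scanning script above. The endpoint URL below is a placeholder.

```shell
# Sketch: split an HTTPS endpoint URL into host and port, as the scanning
# script above does. The URL is a placeholder.
url="https://keystone.example.net:5000/v3"

noschema=${url#"https://"}   # strip the scheme
noschema=${noschema%%/*}     # strip the path, keeping host[:port]
host=${noschema%%:*}         # host part
port=${noschema##*:}         # port part, if present

# If the URL carried no explicit port, default to 443.
if [ "$port" = "$host" ]; then
    port='443'
fi

echo "$host $port"
```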

Important

x509: certificate relies on legacy Common Name field, use SANs instead

Chapter 2.

2.1.

2.1.1.

2.1.2.

2.2.

2.2.1.

Note

  • Note

  1. $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external
  2. $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external
  3. $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio
  4. $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink

2.3.

2.4.

Chapter 3.

3.1.

3.2.

Table 3.1.

Important

Note

3.2.1.

3.2.2.

Tip

3.2.3.

3.2.4.

Note

    Important

    Table 3.2.

    Note

    Tip

    Table 3.3.

    Note

3.2.4.1.

Note

Example 3.1.

global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443
  bind *:6443
  mode tcp
  option  httpchk GET /readyz HTTP/1.0
  option  log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443
  bind *:443
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:443 check inter 1s
  server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80
  bind *:80
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:80 check inter 1s
  server compute1 compute1.ocp4.example.com:80 check inter 1s
Note

Tip

3.3.

Important

3.4.

Important

Important

  1. $ openstack role add --user <user> --project <project> swiftoperator

3.5.

  1. apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-csi-storageclass
    provisioner: cinder.csi.openstack.org
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    parameters:
      availability: <availability_zone_name>
    Note

  2. $ oc apply -f <storage_class_file_name>

    storageclass.storage.k8s.io/custom-csi-storageclass created

  3. apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-pvc-imageregistry
      namespace: openshift-image-registry
      annotations:
        imageregistry.openshift.io: "true"
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 100Gi
      storageClassName: <your_custom_storage_class>
  4. $ oc apply -f <pvc_file_name>

    persistentvolumeclaim/csi-pvc-imageregistry created

  5. $ oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'

    config.imageregistry.operator.openshift.io/cluster patched

  1. $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

    ...
    status:
        ...
        managementState: Managed
        pvc:
          claim: csi-pvc-imageregistry
    ...

  2. $ oc get pvc -n openshift-image-registry csi-pvc-imageregistry

    NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    csi-pvc-imageregistry  Bound    pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5   100Gi      RWO            custom-csi-storageclass  11m

3.6.

  1. $ openstack network list --long -c ID -c Name -c "Router Type"

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

Important

Warning

Note

3.7.

    • Important

    • clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
    1. clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      $ oc edit configmap -n openshift-config cloud-provider-config

3.8.

  1. $ openshift-install --dir <destination_directory> create manifests
  2. $ vi openshift/manifests/cloud-provider-config.yaml
  3. #...
    [LoadBalancer]
    lb-provider = "amphora"
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a"
    create-monitor = True
    monitor-delay = 10s
    monitor-timeout = 10s
    monitor-max-retries = 1
    #...
    Important

    Important

  4. Tip

    $ oc edit configmap -n openshift-config cloud-provider-config

3.9.

  1. Important
  2. $ tar -xvf openshift-install-linux.tar.gz
Tip

3.10.

    1. $ ./openshift-install create install-config --dir <installation_directory>

      1. Note

  1. Important

3.10.1.

  • Note

  1. apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port>
      httpsProxy: https://<username>:<pswd>@<ip>:<port>
      noProxy: example.com
    additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
    Note

    Note

    $ ./openshift-install wait-for install-complete --log-level debug

Note

3.10.2.

Note

Important

3.10.3.

Note

    1. controlPlane:
          platform:
            openstack:
              type: <bare_metal_control_plane_flavor>
      ...
      compute:
        - architecture: amd64
          hyperthreading: Enabled
          name: worker
          platform:
            openstack:
              type: <bare_metal_compute_flavor>
          replicas: 3
      ...
      platform:
          openstack:
            machinesSubnet: <subnet_UUID>
      ...

Note

$ ./openshift-install wait-for install-complete --log-level debug

3.10.4.

Note

3.10.4.1.

  • Tip

  • Tip
    $ openstack network create --project openshift
    $ openstack subnet create --project openshift

    Important

  • $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
3.10.4.2.

Important

        ...
        platform:
          openstack:
            apiVIPs:
              - 192.0.2.13
            ingressVIPs:
              - 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

Tip

3.10.5.

Important

Example 3.2.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

Example 3.3.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.25.0/24
  - cidr: fd2e:6f44:5dd8:c956::/64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiVIPs:
    - 192.168.25.10
    - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955
    ingressVIPs:
    - 192.168.25.132
    - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb
    controlPlanePort:
      fixedIPs:
      - subnet:
          name: openshift-dual4
      - subnet:
          name: openshift-dual6
      network:
        name: openshift-dual
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

3.10.6.

Note

3.10.6.1.

  1. Note

    1. apiVersion: v1
      baseDomain: mydomain.test
      compute:
      - name: worker
        platform:
          openstack:
            type: m1.xlarge
        replicas: 3
      controlPlane:
        name: master
        platform:
          openstack:
            type: m1.xlarge
        replicas: 3
      metadata:
        name: mycluster
      networking:
        machineNetwork:
        - cidr: "192.168.25.0/24"
        - cidr: "fd2e:6f44:5dd8:c956::/64"
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        - cidr: fd01::/48
          hostPrefix: 64
        serviceNetwork:
        - 172.30.0.0/16
        - fd02::/112
      platform:
        openstack:
          ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad']
          apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36']
          controlPlanePort:
            fixedIPs:
            - subnet:
                name: subnet-v4
                id: subnet-v4-id
            - subnet:
                name: subnet-v6
                id: subnet-v6-id
            network:
              name: dualstack
              id: network-id
    2. apiVersion: v1
      baseDomain: mydomain.test
      compute:
      - name: worker
        platform:
          openstack:
            type: m1.xlarge
        replicas: 3
      controlPlane:
        name: master
        platform:
          openstack:
            type: m1.xlarge
        replicas: 3
      metadata:
        name: mycluster
      networking:
        machineNetwork:
        - cidr: "fd2e:6f44:5dd8:c956::/64"
        - cidr: "192.168.25.0/24"
        clusterNetwork:
        - cidr: fd01::/48
          hostPrefix: 64
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        serviceNetwork:
        - fd02::/112
        - 172.30.0.0/16
      platform:
        openstack:
          ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79']
          apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199']
          controlPlanePort:
            fixedIPs:
            - subnet:
                name: subnet-v6
                id: subnet-v6-id
            - subnet:
                name: subnet-v4
                id: subnet-v4-id
            network:
              name: dualstack
              id: network-id
    1. [connection]
      type=ethernet
      [ipv6]
      addr-gen-mode=eui64
      method=auto
    1. [connection]
      ipv6.addr-gen-mode=0

3.10.7.

apiVersion: v1
baseDomain: mydomain.test
compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
metadata:
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.10.0/24
platform:
  openstack:
    cloud: mycloud
    machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a
    apiVIPs:
    - 192.168.10.5
    ingressVIPs:
    - 192.168.10.7
    loadBalancer:
      type: UserManaged

3.11.

Important

  1. $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
    Note

  2. $ cat <path>/<file_name>.pub

    $ cat ~/.ssh/id_ed25519.pub
  3. Note

    1. $ eval "$(ssh-agent -s)"

      Agent pid 31874

      Note

  4. $ ssh-add <path>/<file_name>

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

3.12.

3.12.1.

  1. $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

Tip

3.12.2.

Note

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

3.13.

Important

  • $ ./openshift-install create cluster --dir <installation_directory> \
        --log-level=info

Important

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important

3.14.

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

  2. $ oc get nodes
  3. $ oc get clusterversion
  4. $ oc get clusteroperator
  5. $ oc get pods -A

3.15.

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig
  2. $ oc whoami

    system:admin

3.16.

3.17.

Chapter 4.

4.1.

4.2.

Important

4.3.

Table 4.1.

Important

Note

4.3.1.

4.3.2.

Tip

4.3.3.

4.4.

Note

    1. $ sudo subscription-manager register # If not done already
    2. $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. $ sudo subscription-manager repos --disable=* # If not done already
    4. $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  1. $ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack
  2. $ sudo alternatives --set python /usr/bin/python3

4.5.

  • $ xargs -n 1 curl -O <<< '
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/common.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-bootstrap.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-compute-nodes.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-control-plane.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-containers.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/inventory.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/network.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/security-groups.yaml
            https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/update-network-resources.yaml'

Important

Important

4.6.

  1. Important
  2. $ tar -xvf openshift-install-linux.tar.gz
Tip

4.7.

Important

Note

  1. $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
    Note

  2. $ cat <path>/<file_name>.pub

    $ cat ~/.ssh/id_ed25519.pub
  3. Note

    1. $ eval "$(ssh-agent -s)"

      Agent pid 31874

      Note

  4. $ ssh-add <path>/<file_name>

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

4.8.

  1. Important

  2. Note

    $ file <name_of_downloaded_file>
  3. $ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos
    Important

    Warning

4.9.

  1. $ openstack network list --long -c ID -c Name -c "Router Type"

    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

Note

4.10.

4.10.1.

  1. $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. $ openstack floating ip create --description "bootstrap machine" <external_network>
  4. api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

Tip

4.10.2.

Note

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

4.11.

    • Important

    • clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
    1. clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      $ oc edit configmap -n openshift-config cloud-provider-config

4.12.

  1. $ ansible-playbook -i inventory.yaml network.yaml
    Note

    Note

4.13.

    1. $ ./openshift-install create install-config --dir <installation_directory>

      1. Note

  1. Important

4.13.1.

Note

Important

4.13.2.

Important

Example 4.1.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

Example 4.2.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  machineNetwork:
  - cidr: 192.168.25.0/24
  - cidr: fd2e:6f44:5dd8:c956::/64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiVIPs:
    - 192.168.25.10
    - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955
    ingressVIPs:
    - 192.168.25.132
    - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb
    controlPlanePort:
      fixedIPs:
      - subnet:
          name: openshift-dual4
      - subnet:
          name: openshift-dual6
      network:
        name: openshift-dual
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

4.13.3.

    • $ python -c 'import yaml
      path = "install-config.yaml"
      data = yaml.safe_load(open(path))
      inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"]
      machine_net = [{"cidr": inventory["os_subnet_range"]}]
      api_vips = [inventory["os_apiVIP"]]
      ingress_vips = [inventory["os_ingressVIP"]]
      ctrl_plane_port = {"network": {"name": inventory["os_network"]}, "fixedIPs": [{"subnet": {"name": inventory["os_subnet"]}}]}
      if inventory.get("os_subnet6"):
          machine_net.append({"cidr": inventory["os_subnet6_range"]})
          api_vips.append(inventory["os_apiVIP6"])
          ingress_vips.append(inventory["os_ingressVIP6"])
          data["networking"]["networkType"] = "OVNKubernetes"
          data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]})
          data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"])
          ctrl_plane_port["fixedIPs"].append({"subnet": {"name": inventory["os_subnet6"]}})
      data["networking"]["machineNetwork"] = machine_net
      data["platform"]["openstack"]["apiVIPs"] = api_vips
      data["platform"]["openstack"]["ingressVIPs"] = ingress_vips
      data["platform"]["openstack"]["controlPlanePort"] = ctrl_plane_port
      del data["platform"]["openstack"]["externalDNS"]
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'

4.13.4.

    • $ python -c '
      import yaml;
      path = "install-config.yaml";
      data = yaml.safe_load(open(path));
      data["compute"][0]["replicas"] = 0;
      open(path, "w").write(yaml.dump(data, default_flow_style=False))'

4.13.5.

Note

4.13.5.1.

  • Tip

  • Tip
    $ openstack network create --project openshift
    $ openstack subnet create --project openshift

    Important

  • $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
4.13.5.2.

Important

        ...
        platform:
          openstack:
            apiVIPs:
              - 192.0.2.13
            ingressVIPs:
              - 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24

Warning

Tip

4.14.

Important

  1. $ ./openshift-install create manifests --dir <installation_directory>
  2. $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml

  3. $ ./openshift-install create ignition-configs --dir <installation_directory>

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
  4. $ export INFRA_ID=$(jq -r .infraID metadata.json)
Tip

4.15.

  1. import base64
    import json
    import os
    
    with open('bootstrap.ign', 'r') as f:
        ignition = json.load(f)
    
    files = ignition['storage'].get('files', [])
    
    infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
    hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
    files.append(
    {
        'path': '/etc/hostname',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
        }
    })
    
    ca_cert_path = os.environ.get('OS_CACERT', '')
    if ca_cert_path:
        with open(ca_cert_path, 'r') as f:
            ca_cert = f.read().encode()
            ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
    
        files.append(
        {
            'path': '/opt/openshift/tls/cloud-ca-cert.pem',
            'mode': 420,
            'contents': {
                'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
            }
        })
    
    ignition['storage']['files'] = files;
    
    with open('bootstrap.ign', 'w') as f:
        json.dump(ignition, f)
  2. $ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>
  3. $ openstack image show <image_name>

    Note

  4. $ openstack catalog show image
  5. $ openstack token issue -c id -f value
  6. {
      "ignition": {
        "config": {
          "merge": [{
            "source": "<storage_url>",
            "httpHeaders": [{
              "name": "X-Auth-Token",
              "value": "<token_ID>"
            }]
          }]
        },
        "security": {
          "tls": {
            "certificateAuthorities": [{
              "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>"
            }]
          }
        },
        "version": "3.2.0"
      }
    }
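The Ignition shim above can also be assembled programmatically. The following is a minimal Python sketch for illustration only: the storage URL, token, and certificate values are placeholders, and the field layout simply mirrors the JSON in the previous step.

```python
import base64
import json

def build_merge_ignition(storage_url, token_id, ca_pem):
    """Assemble a minimal Ignition v3.2.0 config that merges a remote
    bootstrap config and trusts the given CA certificate."""
    ca_b64 = base64.standard_b64encode(ca_pem.encode()).decode()
    return {
        "ignition": {
            "config": {
                "merge": [{
                    "source": storage_url,
                    "httpHeaders": [{"name": "X-Auth-Token", "value": token_id}],
                }]
            },
            "security": {
                "tls": {
                    "certificateAuthorities": [{
                        "source": "data:text/plain;charset=utf-8;base64," + ca_b64
                    }]
                }
            },
            "version": "3.2.0",
        }
    }

# Placeholder values, shown only to demonstrate the shape of the output:
stub = build_merge_ignition(
    "https://image.example.com/v2/images/<image_ID>/file",
    "<token_ID>",
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
)
print(json.dumps(stub, indent=2))
```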

Warning

4.16.

Note

  • $ for index in $(seq 0 2); do
        MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
        python -c "import base64, json, sys;
    ignition = json.load(sys.stdin);
    storage = ignition.get('storage', {});
    files = storage.get('files', []);
    files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
    storage['files'] = files;
    ignition['storage'] = storage
    json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
    done
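The one-liner above embeds each hostname as a base64 `data:` URL. As a sanity check of that encoding (the `INFRA_ID` value below is a made-up placeholder), the round trip looks like this:

```python
import base64

infra_id = "mycluster-abc12"  # placeholder; the real value comes from metadata.json
hostname = f"{infra_id}-master-0\n".encode()

# Encode the way the snippet above does:
encoded = base64.standard_b64encode(hostname).decode().strip()
source = "data:text/plain;charset=utf-8;base64," + encoded

# Decode to confirm the node will see the expected hostname:
decoded = base64.standard_b64decode(source.split("base64,", 1)[1]).decode()
print(decoded.strip())  # mycluster-abc12-master-0
```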

4.17.

  1. ...
          # The public network providing connectivity to the cluster. If not
          # provided, the cluster external connectivity must be provided in another
          # way.
    
          # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
          os_external_network: 'external'
    ...

    Important

  2. ...
          # OpenShift API floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the Control Plane to
          # serve the OpenShift API.
          os_api_fip: '203.0.113.23'
    
          # OpenShift Ingress floating IP address. If this value is non-empty, the
          # corresponding floating IP will be attached to the worker nodes to serve
          # the applications.
          os_ingress_fip: '203.0.113.19'
    
          # If this value is non-empty, the corresponding floating IP will be
          # attached to the bootstrap machine. This is needed for collecting logs
          # in case of install failure.
          os_bootstrap_fip: '203.0.113.20'

    Important

  3. $ ansible-playbook -i inventory.yaml security-groups.yaml
  4. $ ansible-playbook -i inventory.yaml update-network-resources.yaml
  5. $ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

4.17.1.

Note

    1. all:
        hosts:
          localhost:
            ansible_connection: local
            ansible_python_interpreter: "{{ansible_playbook_python}}"

            # User-provided values
            os_subnet_range: '10.0.0.0/16'
            os_flavor_master: 'my-bare-metal-flavor'
            os_flavor_worker: 'my-bare-metal-flavor'
            os_image_rhcos: 'rhcos'
            os_external_network: 'external'
      ...

Note

$ ./openshift-install wait-for install-complete --log-level debug

4.18.

  1. $ ansible-playbook -i inventory.yaml bootstrap.yaml
  2. $ openstack console log show "$INFRA_ID-bootstrap"

4.19.

  1. $ ansible-playbook -i inventory.yaml control-plane.yaml
  2. $ openshift-install wait-for bootstrap-complete

    INFO API v1.28.5 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    ...
    INFO It is now safe to remove the bootstrap resources

4.20.

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig
  2. $ oc whoami

    system:admin

4.21.

  1. $ ansible-playbook -i inventory.yaml down-bootstrap.yaml

Warning

4.22.

  1. $ ansible-playbook -i inventory.yaml compute-nodes.yaml

4.23.

  1. $ oc get nodes

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.28.5
    master-1  Ready     master  63m  v1.28.5
    master-2  Ready     master  64m  v1.28.5

    Note

  2. $ oc get csr

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

  3. Note

    Note

    • $ oc adm certificate approve <csr_name>
    • $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

  4. $ oc get csr

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

    • $ oc adm certificate approve <csr_name>
    • $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  5. $ oc get nodes

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.28.5
    master-1  Ready     master  73m  v1.28.5
    master-2  Ready     master  74m  v1.28.5
    worker-0  Ready     worker  11m  v1.28.5
    worker-1  Ready     worker  11m  v1.28.5

    Note

4.24.

  • $ openshift-install --log-level debug wait-for install-complete

4.25.

4.26.

Chapter 5.

5.1.

  • Important

5.2.

5.2.1.

5.3.

Table 5.1.

Important

Note

5.3.1.

5.3.2.

Tip

5.3.3.

5.4.

5.5.

Important

Important

  1. $ openstack role add --user <user> --project <project> swiftoperator

5.6.

    • Important

    • clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
    1. clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
      Tip

      $ oc edit configmap -n openshift-config cloud-provider-config

5.7.

  1. $ openshift-install --dir <destination_directory> create manifests
  2. $ vi openshift/manifests/cloud-provider-config.yaml
  3. #...
    [LoadBalancer]
    lb-provider = "amphora"
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a"
    create-monitor = True
    monitor-delay = 10s
    monitor-timeout = 10s
    monitor-max-retries = 1
    #...

    Important

    Important

  4. Tip

    $ oc edit configmap -n openshift-config cloud-provider-config

5.8.

  1. Important

  2. Note

    $ file <name_of_downloaded_file>
  3. $ openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-${RHCOS_VERSION}
    Important

    Warning

5.9.

    1. $ ./openshift-install create install-config --dir <installation_directory>

      1. Note

  1. platform:
      openstack:
          clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
    1. pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

    2. additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----

    3. imageContentSources:
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: registry.redhat.io/ocp/release

    4. publish: Internal

  2. Important

5.9.1.

  • Note

  1. apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port>
      httpsProxy: https://<username>:<pswd>@<ip>:<port>
      noProxy: example.com
    additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>

    Note

    Note

    $ ./openshift-install wait-for install-complete --log-level debug

Note

5.9.2.

Important

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
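Conceptually, each `imageContentSources` entry redirects pulls from `source` to the listed `mirrors`. The following toy Python sketch shows only the mapping logic, not how the cluster actually implements it; the mirror host name is a placeholder:

```python
# Placeholder mirror entries mirroring the structure of imageContentSources.
image_content_sources = [
    {"mirrors": ["mirror.example.com:5000/ocp4/release"],
     "source": "quay.io/openshift-release-dev/ocp-release"},
]

def resolve(image: str) -> str:
    """Return the mirrored reference for an image, or the image unchanged."""
    for entry in image_content_sources:
        src = entry["source"]
        if image.startswith(src):
            return entry["mirrors"][0] + image[len(src):]
    return image

print(resolve("quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64"))
# mirror.example.com:5000/ocp4/release:4.15.0-x86_64
```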

5.10.

Important

Note

  1. $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
    Note

  2. $ cat <path>/<file_name>.pub

    $ cat ~/.ssh/id_ed25519.pub
  3. Note

    1. $ eval "$(ssh-agent -s)"

      Agent pid 31874

      Note

  4. $ ssh-add <path>/<file_name>

    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

5.11.

5.11.1.

  1. $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>
    Note

Tip

5.11.2.

Note

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

5.12.

Important

  • $ ./openshift-install create cluster --dir <installation_directory> \
        --log-level=info

Important

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important

5.13.

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

  2. $ oc get nodes
  3. $ oc get clusterversion
  4. $ oc get clusteroperator
  5. $ oc get pods -A

5.14.

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig
  2. $ oc whoami

    system:admin

5.15.

  • $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip

5.16.

5.17.

Chapter 6.

6.1.

Note

  1. $ openstack port show <cluster_name>-<cluster_ID>-ingress-port
  2. $ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
  3. *.apps.<cluster_name>.<base_domain>  IN  A  <apps_FIP>
Note

<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain>
<apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain>
<apps_FIP> oauth-openshift.apps.<cluster name>.<base domain>
<apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain>
<apps_FIP> <app name>.apps.<cluster name>.<base domain>

6.2.

Note

  1. apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: "hwoffload9"
      namespace: openshift-sriov-network-operator
    spec:
      deviceType: netdevice
      isRdma: true
      nicSelector:
        pfNames:
        - ens6
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: 'true'
      numVfs: 1
      priority: 99
      resourceName: "hwoffload9"

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: "hwoffload10"
      namespace: openshift-sriov-network-operator
    spec:
      deviceType: netdevice
      isRdma: true
      nicSelector:
        pfNames:
        - ens5
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: 'true'
      numVfs: 1
      priority: 99
      resourceName: "hwoffload10"
  2. apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9
      name: hwoffload9
      namespace: default
    spec:
        config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6"
        }'

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10
      name: hwoffload10
      namespace: default
    spec:
        config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5"
        }'

  3. apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-testpmd
      namespace: default
      annotations:
        irq-load-balancing.crio.io: disable
        cpu-quota.crio.io: disable
        k8s.v1.cni.cncf.io/networks: hwoffload9,hwoffload10
    spec:
      restartPolicy: Never
      containers:
      - name: dpdk-testpmd
        image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest

6.3.

  1. spec:
      additionalNetworks:
      - name: hwoffload1
        namespace: cnf
        rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}'
        type: Raw

    $ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator
  2. $ oc apply -f network.yaml

6.4.

Important

  • $ openstack port set --no-security-group --disable-port-security <compute_ipv6_port>
    Important

6.5.

  1. apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-openshift
      namespace: ipv6
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
             - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - hello-openshift
      replicas: 2
      selector:
        matchLabels:
          app: hello-openshift
      template:
        metadata:
          labels:
            app: hello-openshift
          annotations:
            k8s.v1.cni.cncf.io/networks: ipv6
        spec:
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          containers:
          - name: hello-openshift
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
            image: quay.io/openshift/origin-hello-openshift
            ports:
            - containerPort: 8080
  2. $ oc create -f <ipv6_enabled_resource>

6.6.

  1. $ oc edit networks.operator.openshift.io cluster
  2. ...
    spec:
      additionalNetworks:
      - name: ipv6
        namespace: ipv6
        rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4"}'
        type: Raw
    Note

  • $ oc get network-attachment-definitions -A

    NAMESPACE       NAME            AGE
    ipv6            ipv6            21h

Chapter 7.

7.1.

Note

[Global]
use-clouds = true
clouds-file = /etc/openstack/secret/clouds.yaml
cloud = openstack
...

[LoadBalancer]
enabled = true

7.2.

Important

apiVersion: v1
data:
  cloud.conf: |
    [Global]
    secret-name = openstack-credentials
    secret-namespace = kube-system
    region = regionOne
    [LoadBalancer]
    enabled = True
kind: ConfigMap
metadata:
  creationTimestamp: "2022-12-20T17:01:08Z"
  name: cloud-conf
  namespace: openshift-cloud-controller-manager
  resourceVersion: "2519"
  uid: cbbeedaf-41ed-41c2-9f37-4885732d3677

7.2.1.

Note

7.2.2.


Chapter 8.

Important

8.1.

Warning

  1. $ openstack flavor create --ram 16384 --disk 0 --ephemeral 10 --vcpus 4 <flavor_name>
  2. # ...
    controlPlane:
      name: master
      platform:
        openstack:
          type: ${CONTROL_PLANE_FLAVOR}
          rootVolume:
            size: 25
            types:
            - ${CINDER_TYPE}
      replicas: 3
    # ...

  3. $ openshift-install create cluster --dir <installation_directory>
  4. $ oc wait clusteroperators --all --for=condition=Progressing=false
  5. $ oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p '
    [
        {
          "op": "add",
          "path": "/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices",
          "value": [
            {
              "name": "etcd",
              "sizeGiB": 10,
              "storage": {
                "type": "Local"
              }
            }
          ]
        }
      ]
    '
    1. $ oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster
    2. $ oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster
    3. $ oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster
    4. $ oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false
    $ cp_machines=$(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name)

    if [[ $(echo "${cp_machines}" | wc -l) -ne 3 ]]; then
      exit 1
    fi

    for machine in ${cp_machines}; do
      if ! oc get machine -n openshift-machine-api "${machine}" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then
        exit 1
      fi
    done
  6. Warning

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 98-var-lib-etcd
    spec:
      config:
        ignition:
          version: 3.4.0
        systemd:
          units:
          - contents: |
              [Unit]
              Description=Mount local-etcd to /var/lib/etcd

              [Mount]
              What=/dev/disk/by-label/local-etcd
              Where=/var/lib/etcd
              Type=xfs
              Options=defaults,prjquota

              [Install]
              WantedBy=local-fs.target
            enabled: true
            name: var-lib-etcd.mount
          - contents: |
              [Unit]
              Description=Create local-etcd filesystem
              DefaultDependencies=no
              After=local-fs-pre.target
              ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd

              [Service]
              Type=oneshot
              RemainAfterExit=yes
              ExecStart=/bin/bash -c "[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )"
              ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0

              [Install]
              RequiredBy=dev-disk-by\x2dlabel-local\x2detcd.device
            enabled: true
            name: create-local-etcd.service
          - contents: |
              [Unit]
              Description=Migrate existing data to local etcd
              After=var-lib-etcd.mount
              Before=crio.service

              Requisite=var-lib-etcd.mount
              ConditionPathExists=!/var/lib/etcd/member
              ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member

              [Service]
              Type=oneshot
              RemainAfterExit=yes

              ExecStart=/bin/bash -c "if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi"

              ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate
              ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member

              [Install]
              RequiredBy=var-lib-etcd.mount
            enabled: true
            name: migrate-to-local-etcd.service
          - contents: |
              [Unit]
              Description=Relabel /var/lib/etcd

              After=migrate-to-local-etcd.service
              Before=crio.service

              [Service]
              Type=oneshot
              RemainAfterExit=yes

              ExecCondition=/bin/bash -c "[ -n \"$(restorecon -nv /var/lib/etcd)\" ]"

              ExecStart=/usr/sbin/restorecon -R /var/lib/etcd

              [Install]
              RequiredBy=var-lib-etcd.mount
            enabled: true
            name: relabel-var-lib-etcd.service
  7. $ oc create -f 98-var-lib-etcd.yaml
    Note

    1. $ oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master
    2. $ oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s
    3. $ oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false
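The machine config in step 6 pins `create-local-etcd.service` to `dev-disk-by\x2dlabel-local\x2detcd.device`, which is the systemd-escaped device unit name for `/dev/disk/by-label/local-etcd` (`/` becomes `-`, and a literal `-` becomes `\x2d`). A simplified Python sketch of that escaping, shown here for illustration only (the real `systemd-escape` tool also handles characters this sketch ignores):

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path into a systemd device unit name
    (simplified: handles '/' and '-' only, which suffices here)."""
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")        # path separators become dashes
        elif ch == "-":
            out.append("\\x2d")    # literal dashes are hex-escaped
        else:
            out.append(ch)
    return "".join(out) + ".device"

print(systemd_escape_path("/dev/disk/by-label/local-etcd"))
# dev-disk-by\x2dlabel-local\x2detcd.device
```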

Chapter 9.

9.1.

Note

Note

  1. $ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info
    Note

Chapter 10.

10.1.

Note

    1. $ sudo subscription-manager register # If not done already
    2. $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    3. $ sudo subscription-manager repos --disable=* # If not done already
    4. $ sudo subscription-manager repos \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
        --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms
  1. $ sudo yum install python3-openstackclient ansible python3-openstacksdk
  2. $ sudo alternatives --set python /usr/bin/python3

10.2.

  1. $ ansible-playbook -i inventory.yaml  \
    	down-bootstrap.yaml      \
    	down-control-plane.yaml  \
    	down-compute-nodes.yaml  \
    	down-load-balancers.yaml \
    	down-network.yaml        \
    	down-security-groups.yaml

Chapter 11.

11.1.

Note

11.1.1.

Table 11.1.
apiVersion:

baseDomain:

metadata:

metadata:
  name:

platform:

pullSecret:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
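Each entry under `auths` carries a base64-encoded `user:password` credential in its `auth` field. The following hedged Python sketch uses made-up credentials (not a real token) to show how such an entry is assembled and read back:

```python
import base64
import json

# Hypothetical credentials, for illustration only.
auth = base64.standard_b64encode(b"myuser:mypassword").decode()
pull_secret = {
    "auths": {
        "quay.io": {"auth": auth, "email": "you@example.com"},
    }
}

# A registry client decodes the auth field back to user:password.
raw = json.dumps(pull_secret)
entry = json.loads(raw)["auths"]["quay.io"]["auth"]
user, _, password = base64.standard_b64decode(entry).decode().partition(":")
print(user, password)  # myuser mypassword
```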

11.1.2.

Note

Table 11.2.
networking:

Note

networking:
  networkType:

networking:
  clusterNetwork:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
networking:
  clusterNetwork:
    cidr:

networking:
  clusterNetwork:
    hostPrefix:

networking:
  serviceNetwork:

networking:
  serviceNetwork:
   - 172.30.0.0/16
networking:
  machineNetwork:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
networking:
  machineNetwork:
    cidr:

Note

11.1.3.

Table 11.3.
additionalTrustBundle:

capabilities:

capabilities:
  baselineCapabilitySet:

capabilities:
  additionalEnabledCapabilities:

cpuPartitioningMode:

compute:

compute:
  architecture:

compute:
  hyperthreading:

Important

compute:
  name:

compute:
  platform:

compute:
  replicas:

featureSet:

controlPlane:

controlPlane:
  architecture:

controlPlane:
  hyperthreading:

Important

controlPlane:
  name:

controlPlane:
  platform:

controlPlane:
  replicas:

credentialsMode:

fips:

Important

Note

imageContentSources:

imageContentSources:
  source:

imageContentSources:
  mirrors:

platform:
  aws:
    lbType:

publish:

sshKey:

Note

  1. Note

    Important

11.1.4.

Table 11.4.
compute:
  platform:
    aws:
      amiID:

compute:
  platform:
    aws:
      iamRole:

compute:
  platform:
    aws:
      rootVolume:
        iops:

compute:
  platform:
    aws:
      rootVolume:
        size:

compute:
  platform:
    aws:
      rootVolume:
        type:

compute:
  platform:
    aws:
      rootVolume:
        kmsKeyARN:

compute:
  platform:
    aws:
      type:

compute:
  platform:
    aws:
      zones:

compute:
  aws:
    region:

aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge
Important

controlPlane:
  platform:
    aws:
      amiID:

controlPlane:
  platform:
    aws:
      iamRole:

controlPlane:
  platform:
    aws:
      rootVolume:
        iops:

controlPlane:
  platform:
    aws:
      rootVolume:
        size:

controlPlane:
  platform:
    aws:
      rootVolume:
        type:

controlPlane:
  platform:
    aws:
      rootVolume:
        kmsKeyARN:

controlPlane:
  platform:
    aws:
      type:

controlPlane:
  platform:
    aws:
      zones:

controlPlane:
  aws:
    region:

platform:
  aws:
    amiID:

platform:
  aws:
    hostedZone:

platform:
  aws:
    hostedZoneRole:

platform:
  aws:
    serviceEndpoints:
      - name:
        url:

platform:
  aws:
    userTags:

Note

platform:
  aws:
    propagateUserTags:

platform:
  aws:
    subnets:

platform:
  aws:
    preserveBootstrapIgnition:

11.1.5.

Table 11.5.
compute:
  platform:
    openstack:
      rootVolume:
        size:

compute:
  platform:
    openstack:
      rootVolume:
        types:

compute:
  platform:
    openstack:
      rootVolume:
        type:

compute:
  platform:
    openstack:
      rootVolume:
        zones:

controlPlane:
  platform:
    openstack:
      rootVolume:
        size:

controlPlane:
  platform:
    openstack:
      rootVolume:
        types:

controlPlane:
  platform:
    openstack:
      rootVolume:
        type:

controlPlane:
  platform:
    openstack:
      rootVolume:
        zones:

platform:
  openstack:
    cloud:

platform:
  openstack:
    externalNetwork:

platform:
  openstack:
    computeFlavor:

11.1.6.

Table 11.6.
compute:
  platform:
    openstack:
      additionalNetworkIDs:

compute:
  platform:
    openstack:
      additionalSecurityGroupIDs:

compute:
  platform:
    openstack:
      zones:

compute:
  platform:
    openstack:
      serverGroupPolicy:

controlPlane:
  platform:
    openstack:
      additionalNetworkIDs:

controlPlane:
  platform:
    openstack:
      additionalSecurityGroupIDs:

controlPlane:
  platform:
    openstack:
      zones:

controlPlane:
  platform:
    openstack:
      serverGroupPolicy:

platform:
  openstack:
    clusterOSImage:

platform:
  openstack:
    clusterOSImageProperties:

platform:
  openstack:
    defaultMachinePlatform:

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}
platform:
  openstack:
    ingressFloatingIP:

platform:
  openstack:
    apiFloatingIP:

platform:
  openstack:
    externalDNS:

platform:
  openstack:
    loadbalancer:

platform:
  openstack:
    machinesSubnet:

11.1.7.

Table 11.7.
controlPlane:
  platform:
    gcp:
      osImage:
        project:

controlPlane:
  platform:
    gcp:
      osImage:
        name:

compute:
  platform:
    gcp:
      osImage:
        project:

compute:
  platform:
    gcp:
      osImage:
        name:

platform:
  gcp:
    network:

platform:
  gcp:
    networkProjectID:

platform:
  gcp:
    projectID:

platform:
  gcp:
    region:

platform:
  gcp:
    controlPlaneSubnet:

platform:
  gcp:
    computeSubnet:

platform:
  gcp:
    defaultMachinePlatform:
      zones:

Important

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        diskSizeGB:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        diskType:

platform:
  gcp:
    defaultMachinePlatform:
      osImage:
        project:

platform:
  gcp:
    defaultMachinePlatform:
      osImage:
        name:

platform:
  gcp:
    defaultMachinePlatform:
      tags:

platform:
  gcp:
    defaultMachinePlatform:
      type:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        encryptionKey:
          kmsKey:
            name:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        encryptionKey:
          kmsKey:
            keyRing:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        encryptionKey:
          kmsKey:
            location:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        encryptionKey:
          kmsKey:
            projectID:

platform:
  gcp:
    defaultMachinePlatform:
      osDisk:
        encryptionKey:
          kmsKeyServiceAccount:

platform:
  gcp:
    defaultMachinePlatform:
      secureBoot:

platform:
  gcp:
    defaultMachinePlatform:
      confidentialCompute:

platform:
  gcp:
    defaultMachinePlatform:
      onHostMaintenance:

controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            name:

controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            keyRing:

controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            location:

controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            projectID:

controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKeyServiceAccount:

controlPlane:
  platform:
    gcp:
      osDisk:
        diskSizeGB:

controlPlane:
  platform:
    gcp:
      osDisk:
        diskType:

controlPlane:
  platform:
    gcp:
      tags:

controlPlane:
  platform:
    gcp:
      type:

controlPlane:
  platform:
    gcp:
      zones:

Important

controlPlane:
  platform:
    gcp:
      secureBoot:

controlPlane:
  platform:
    gcp:
      confidentialCompute:

controlPlane:
  platform:
    gcp:
      onHostMaintenance:

compute:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            name:

compute:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            keyRing:

compute:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            location:

compute:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            projectID:

compute:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKeyServiceAccount:

compute:
  platform:
    gcp:
      osDisk:
        diskSizeGB:

compute:
  platform:
    gcp:
      osDisk:
        diskType:

compute:
  platform:
    gcp:
      tags:

compute:
  platform:
    gcp:
      type:

compute:
  platform:
    gcp:
      zones:

Important

compute:
  platform:
    gcp:
      secureBoot:

compute:
  platform:
    gcp:
      confidentialCompute:

compute:
  platform:
    gcp:
      onHostMaintenance:

Legal Notice

Copyright © Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of the OpenJS Foundation.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
