Chapter 20. Configuring for Azure


20.1. Overview

OpenShift Container Platform can be configured to access a Microsoft Azure infrastructure, including using Azure disk as persistent storage for application data. After Microsoft Azure is configured properly, some additional configurations need to be completed on the OpenShift Container Platform hosts.

20.2. Permissions

Configuring Microsoft Azure for OpenShift Container Platform requires the following role:

Contributor

To create and manage all types of Microsoft Azure resources.

For more information about adding administrator roles, see Add or change Azure subscription administrators.
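
As a hedged illustration (not part of the official procedure), the Contributor role can be granted when creating the Azure Active Directory (AAD) service principal that the cluster later uses for its aadClientId and aadClientSecret values. The name and scope below are hypothetical placeholders, and this assumes the Azure CLI (az) is installed and logged in:

$ az ad sp create-for-rbac \
  --name ocp-cloud-provider \
  --role Contributor \
  --scopes /subscriptions/<subscription_id>/resourceGroups/<resource_group>

The command output includes an appId and password, which correspond to the aadClientId and aadClientSecret values used in the Azure configuration file described later in this chapter.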

20.3. Prerequisites

  • If you are using Microsoft Azure Disk as a persistent volume in OpenShift Container Platform version 3.5 or later, you must enable the Azure Cloud Provider.
  • All OpenShift Container Platform node virtual machines (VMs) running in Microsoft Azure must belong to a single resource group.
  • Microsoft Azure VMs must have the same names as their corresponding OpenShift Container Platform nodes, and those names cannot include capital letters.
  • If you plan to use Azure Managed Disks:

    • OpenShift Container Platform version 3.7 or later is required.
    • You must create VMs with Azure Managed Disks (see the example after this list).
  • If you plan to use unmanaged disks:

    • You must create VMs with unmanaged disks.
  • If you are using a custom DNS configuration for your OpenShift Container Platform cluster or your cluster nodes are in different Microsoft Azure Virtual Networks (VNet), you must configure DNS so that each node in the cluster can resolve IP addresses for other nodes.
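
As referenced above, the following is a hedged sketch of creating a node VM that satisfies the naming and managed-disk requirements, assuming the Azure CLI (az) is available; the resource group, VM name, image alias, size, and user name are hypothetical placeholders. Recent versions of az vm create use managed disks by default:

$ az vm create \
  --resource-group ocp-cluster-rg \
  --name ocp-node-1 \
  --image RHEL \
  --size Standard_D4s_v3 \
  --admin-username cloud-user \
  --generate-ssh-keys

Note that the VM name is all lowercase so that it can match the corresponding OpenShift Container Platform node name.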

20.4. The Azure Configuration File

Configuring OpenShift Container Platform for Azure requires the /etc/azure/azure.conf file on each node host.

If the file does not exist, create it, and add the following:

tenantId: <> 1
subscriptionId: <> 2
aadClientId: <> 3
aadClientSecret: <> 4
aadTenantId: <> 5
resourceGroup: <> 6
cloud: <> 7
location: <> 8
vnetName: <> 9
securityGroupName: <> 10
primaryAvailabilitySetName: <> 11
1 The AAD tenant ID for the subscription that the cluster is deployed in.
2 The Azure subscription ID that the cluster is deployed in.
3 The client ID for an AAD application with RBAC access to talk to Azure RM APIs.
4 The client secret for an AAD application with RBAC access to talk to Azure RM APIs.
5 The AAD tenant ID; ensure this is the same as the tenantId value (optional).
6 The Azure resource group name that the Azure VMs belong to.
7 The cloud environment. For example, AzurePublicCloud.
8 The compact-style Azure region. For example, southeastasia (optional).
9 The virtual network containing the instances and used when creating load balancers.
10 The security group name associated with the instances and load balancers.
11 The availability set to use when creating resources such as load balancers (optional).
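
For reference, a completed file might look like the following sketch; every value shown is a hypothetical placeholder and must be replaced with the details of your own subscription and resource group:

tenantId: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
subscriptionId: 11111111-2222-3333-4444-555555555555
aadClientId: 22222222-3333-4444-5555-666666666666
aadClientSecret: <client_secret>
aadTenantId: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
resourceGroup: ocp-cluster-rg
cloud: AzurePublicCloud
location: southeastasia
vnetName: ocp-vnet
securityGroupName: ocp-nsg
primaryAvailabilitySetName: ocp-availability-set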
Important

The NIC used for accessing the instance must have an internal-dns-name set, or the node will not be able to rejoin the cluster, build logs will not display in the console, and oc rsh will not work correctly.

20.5. Configuring Masters

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:

kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
  controllerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
Important

When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node containers. Therefore, master-config.yaml should be in /etc/origin/master instead of /etc/.
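
As a quick sanity check (not part of the official procedure), you can confirm that the arguments are present on each master:

# grep -A 1 -E 'cloud-provider|cloud-config' /etc/origin/master/master-config.yaml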

20.6. Configuring Nodes

  1. Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:

    kubeletArguments:
      cloud-provider:
        - "azure"
      cloud-config:
        - "/etc/azure/azure.conf"
    Important

    When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node containers. Therefore, node-config.yaml should be in /etc/origin/node instead of /etc/.
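
Similarly, a hedged way to check that every node host picked up the configuration, assuming SSH access to each host (the host names below are placeholders):

$ for host in node1.example.com node2.example.com; do
      ssh root@$host "grep -A 1 -E 'cloud-provider|cloud-config' /etc/origin/node/node-config.yaml"
  done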

20.7. Applying Configuration Changes

Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Container Platform services:

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# systemctl restart atomic-openshift-node

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider causes OpenShift Container Platform to attempt to delete the node, because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.
  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OpenShift Container Platform service.

    # systemctl restart atomic-openshift-node
  5. Add back any labels that each node previously had; a consolidated sketch of these steps follows.
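
The following is a minimal sketch of steps 2 through 5 for a single node; <node_name> is a placeholder, and the labels in the last command are hypothetical examples of labels you might restore:

$ oc get node <node_name> --show-labels > node_labels.txt
$ oc delete node <node_name>

Then, on the node host:

# systemctl restart atomic-openshift-node

After the node re-registers with the cluster, re-apply the saved labels:

$ oc label node <node_name> region=infra zone=default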