Chapter 1. Creating and deploying a RHOSP overcloud with director Operator

Red Hat OpenShift Container Platform (RHOCP) uses a modular system of Operators to extend the functions of your RHOCP cluster. Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) adds the ability to install and run a RHOSP cloud within RHOCP. OSPdO manages a set of Custom Resource Definitions (CRDs) that deploy and manage the infrastructure and configuration of RHOSP nodes. The basic architecture of an OSPdO-deployed RHOSP cloud includes the following features:

Virtualized control plane
The Controller nodes are virtual machines (VMs) that OSPdO creates in Red Hat OpenShift Virtualization.
Bare-metal machine provisioning
OSPdO uses RHOCP bare-metal machine management to provision the Compute nodes for the RHOSP cloud.
Networking
OSPdO configures the underlying networks for RHOSP services.
Heat and Ansible-based configuration
OSPdO stores custom heat configuration in RHOCP and uses the config-download functionality in director to convert the configuration into Ansible playbooks. If you change the stored heat configuration, OSPdO automatically regenerates the Ansible playbooks.
CLI client
OSPdO creates an openstackclient pod for users to run RHOSP CLI commands and interact with their RHOSP cloud.
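
For example, when the control plane is running, you can open a remote shell into the openstackclient pod and run RHOSP CLI commands from there. The openstack namespace and the pod name shown here are typical for OSPdO deployments but might differ in your environment:

    $ oc rsh -n openstack openstackclient
    $ openstack server list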

You can use the resources specific to OSPdO to provision your overcloud infrastructure, generate your overcloud configuration, and create an overcloud. To create a RHOSP overcloud with OSPdO, you must complete the following tasks:

  1. Install OSPdO on an operational RHOCP cluster.
  2. Create a RHOCP cluster data volume for the base operating system and add authentication details for your remote Git repository.
  3. Create the overcloud networks using the OpenStackNetConfig CRD, including the control plane and any isolated networks.
  4. Create ConfigMaps to store any custom heat templates and environment files for your overcloud, as shown in the example after this list.
  5. Create a control plane, which includes three virtual machines for Controller nodes and a pod to perform client operations.
  6. Create bare-metal Compute nodes.
  7. Create an OpenStackConfigGenerator custom resource to render Ansible playbooks for overcloud configuration.
  8. Apply the Ansible playbook configuration to your overcloud nodes by using the openstackdeploy CR.
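
For example, you can create the ConfigMap in step 4 with a standard oc command that reads a directory of heat environment files. The ConfigMap name heat-env-config, the openstack namespace, and the directory path are placeholders; adjust them for your deployment:

    $ oc create configmap -n openstack heat-env-config \
        --from-file=custom_environment_files/ \
        --dry-run=client -o yaml | oc apply -f -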

1.1. Custom resource definitions for director Operator

The Red Hat OpenStack Platform (RHOSP) director Operator (OSPdO) includes a set of custom resource definitions (CRDs) that you can use to manage overcloud resources.

  • Use the following command to view a complete list of the OSPdO CRDs:

    $ oc get crd | grep "^openstack"
  • Use the following command to view the definition for a specific CRD:

    $ oc describe crd openstackbaremetalset
    Name:         openstackbaremetalsets.osp-director.openstack.org
    Namespace:
    Labels:       operators.coreos.com/osp-director-operator.openstack=
    Annotations:  cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
                  controller-gen.kubebuilder.io/version: v0.3.0
    API Version:  apiextensions.k8s.io/v1
    Kind:         CustomResourceDefinition
    ...
  • Use the following command to view descriptions of the fields you can use to configure a specific CRD:

    $ oc explain openstackbaremetalset.spec
    KIND:     OpenStackBaremetalSet
    VERSION:  osp-director.openstack.org/v1beta1
    
    RESOURCE: spec <Object>
    
    DESCRIPTION:
         <empty>
    
    FIELDS:
       count                <Object>
       baseImageUrl         <Object>
       deploymentSSHSecret  <Object>
       ctlplaneInterface    <Object>
       networks             <[]Object>
       ...
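  • Use a dotted path with the same command to view the description of a nested field, for example the networks field:

    $ oc explain openstackbaremetalset.spec.networks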

OSPdO includes two types of CRD: hardware provisioning and software configuration.

Hardware Provisioning CRDs

openstacknetattachment (internal)
Used by OSPdO to manage the NodeNetworkConfigurationPolicy and NodeSriovConfigurationPolicy CRDs, which attach networks to virtual machines (VMs).
openstacknetconfig
Use to specify openstacknetattachment and openstacknet CRDs that describe the full network configuration. The set of reserved IP and MAC addresses for each node is reflected in the status.
openstackbaremetalset
Use to create sets of bare-metal hosts for specific RHOSP roles, such as "Compute" and "Storage".
openstackcontrolplane
Use to create the RHOSP control plane and manage associated openstackvmset CRs.
openstacknet (internal)
Use to create networks that are used to assign IPs to the openstackvmset and openstackbaremetalset CRs.
openstackipset (internal)
Contains a set of IPs for a given network and role. Used by OSPdO to manage IP addresses.
openstackprovisionservers
Use to serve custom images for provisioning bare-metal nodes with Metal3.
openstackvmset
Use to create sets of OpenShift Virtualization VMs for a specific RHOSP role, such as "Controller", "Database", or "NetworkController".
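
As an illustration of how the hardware provisioning CRDs fit together, the following minimal OpenStackBaremetalSet manifest sketch uses the spec fields shown in the oc explain output earlier in this chapter. The resource name, namespace, and all field values are placeholders, not defaults, and real deployments typically set additional fields:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    metadata:
      name: compute                          # placeholder name for a Compute role
      namespace: openstack                   # namespace where your OSPdO resources live
    spec:
      count: 1                               # number of bare-metal hosts to provision for this role
      baseImageUrl: http://images.example.com/rhel-guest-image.qcow2  # placeholder base image location
      deploymentSSHSecret: osp-controlplane-ssh-keys                  # placeholder secret that holds SSH keys
      ctlplaneInterface: enp1s0                                       # placeholder NIC attached to the ctlplane network
      networks:
        - ctlplane                           # network names come from your openstacknetconfig definition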

Software Configuration CRDs

openstackconfiggenerator
Use to automatically generate Ansible playbooks for deployment when you scale up or make changes to your custom deployment ConfigMaps.
openstackconfigversion
Use to represent a set of executable Ansible playbooks.
openstackdeploy
Use to execute the set of Ansible playbooks defined in the openstackconfigversion CR.
openstackclient
Creates a pod used to run RHOSP deployment commands.
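
One way to follow the software configuration workflow is to list these resources after you create an openstackconfiggenerator CR. The openstack namespace is an example; use the namespace where you deployed OSPdO:

    $ oc get openstackconfiggenerator,openstackconfigversion,openstackdeploy -n openstack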

1.2. CRD naming conventions

Each custom resource definition (CRD) can have multiple names defined with the spec.names parameter. Which name you use depends on the context of the action you perform:

  • Use kind when you create and interact with resource manifests:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackBaremetalSet
    ...

    The kind name in the resource manifest corresponds to the kind name in the respective CRD.

  • Use plural when you interact with multiple resources:

    $ oc get openstackbaremetalsets
  • Use singular when you interact with a single resource:

    $ oc describe openstackbaremetalset/compute
  • Use shortName for any CLI interactions:

    $ oc get osbmset
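
To see every name that a CRD defines, including its shortName, you can query the spec.names field of the CRD directly:

    $ oc get crd openstackbaremetalsets.osp-director.openstack.org -o jsonpath='{.spec.names}'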

1.3. Features not supported by director Operator

Fibre Channel back end
Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat OpenShift Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block Storage drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-volume and use the role to create physical nodes instead of including the service on the virtualized controllers. For more information, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide.
Role-based Ansible playbooks
Director Operator (OSPdO) does not support running Ansible playbooks to configure role-based node attributes after the bare-metal nodes are provisioned. This means that you cannot use the role_growvols_args extra Ansible variable to configure whole disk partitions for the Object Storage service (swift). Role-based Ansible playbook configuration only applies to bare-metal nodes that are provisioned by using a node definition file.
Migration of workloads from Red Hat Virtualization to OSPdO
You cannot migrate workloads from a Red Hat Virtualization environment to an OSPdO environment.
Using a VLAN for the control plane network
TripleO does not support using a VLAN for the control plane (ctlplane) network.
Multiple Compute cells
You cannot add additional Compute cells to an OSPdO environment.
BGP for the control plane
BGP is not supported for the control plane in an OSPdO environment.
PCI passthrough and attaching hardware devices to Controller VMs
You cannot attach SR-IOV devices or FC SAN storage to Controller VMs.

1.4. Limitations with a director Operator deployment

A director Operator (OSPdO) environment has the following support limitations:

  • Single-stack IPv6 is not supported. Only IPv4 is supported on the ctlplane network.
  • You cannot create VLAN provider networks without dedicated networker nodes, because the NMState Operator cannot attach a VLAN trunk to the OSPdO Controller VMs. Therefore, to create VLAN provider networks, you must create dedicated Networker nodes on bare metal. For more information, see https://github.com/openstack/tripleo-heat-templates/blob/stable/wallaby/roles/Networker.yaml.
  • You cannot remove the provisioning network.
  • You cannot use a proxy for SSH connections to communicate with the Git repository.
  • You cannot use HTTP or HTTPS to connect to the Git repository.

1.5. Recommendations for a director Operator deployment

Storage class
For back-end performance, use low-latency SSD or NVMe-backed storage to create the RWX/RWO storage class required by the Controller virtual machines (VMs), the client pod, and images.
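
Storage class choices vary by cluster, so before you reference a class in your OSPdO custom resources, you can list the classes that are available and check which provisioner backs them:

    $ oc get storageclass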

1.6. Additional resources
