Adding the Red Hat OpenShift Data Foundation Managed Service


Red Hat OpenShift Data Foundation Managed Service 2022-Q2

Red Hat OpenShift Data Foundation Managed Service add-ons

Abstract

This section describes how to install the Red Hat OpenShift Data Foundation Managed Service add-ons.

The Red Hat OpenShift Data Foundation Managed Service is layered on Red Hat OpenShift Service on AWS (ROSA) and has two parts: the Red Hat OpenShift Data Foundation Managed Service provider and the consumer. To add the Red Hat OpenShift Data Foundation Managed Service, install the ODF provider service and the ODF consumer add-on.

The ODF provider service and the ODF consumer add-on make the ROSA clusters act as ODF provider and consumer clusters. The ODF provider provides storage services to the ODF consumer.

To learn how to install the provider service and the consumer add-on, see the procedures in the following chapters.

Red Hat OpenShift Data Foundation Managed Service provider is a single-purpose Red Hat OpenShift Service on AWS (ROSA) managed cluster that provides Red Hat OpenShift Data Foundation (ODF) storage services to one or more general-purpose ROSA managed clusters in the same Virtual Private Cloud (VPC) and region.

To install the OpenShift Data Foundation Managed Service provider service, complete the steps in the following sections:

  1. Creating an RSA public-private key pair.
  2. Creating a Virtual Private Cloud in AWS.
  3. Creating an AWS security group.
  4. Installing the Red Hat OpenShift Data Foundation provider service.

2.1. Creating an RSA public-private key pair

The RSA public-private key pair is required to install the Red Hat OpenShift Data Foundation Managed Service provider and consumer add-ons.

Note

The following procedure uses OpenSSL to create the public-private key pair. You can use any other method to create the RSA public-private key pair.

Prerequisites

  • Ensure you have OpenSSL installed on your machine.

Procedure

  1. To create the private key, run the following command using the command-line interface:

    $ openssl genrsa -out key.pem 4096

    Example output

    Generating RSA private key, 4096 bit long modulus(2 primes)
    .......................++++
    .......................++++

  2. To create the public key, run the following command using the command-line interface:

    $ openssl rsa -in key.pem -out pubkey.pem -outform PEM -pubout

    Example output

    writing RSA key

Verification steps

  • To check that the private and public keys are generated, run the following command:

    $ ls

    Example output

    key.pem
    pubkey.pem

2.2. Creating a Virtual Private Cloud in AWS

To install the Red Hat OpenShift Data Foundation Managed Service provider service, create your own Virtual Private Cloud (VPC) using your Amazon Web Services (AWS) account.

Prerequisites

  • Access to AWS account.

Procedure

  1. Log in to your AWS account using the AWS console.
  2. Search for VPC in the search tab. A list of features is displayed.
  3. Select Your VPC.
  4. Select the correct location from the Regions drop-down list.
  5. Click Create VPC. The Create VPC page is displayed.
  6. On the VPC Settings page:

    1. Select Resources to create as VPC, Subnets, etc.
    2. Add a name for the VPC in the Auto Generate field.
    3. Add IPv4 CIDR block as 10.0.0.0/16.
    4. Select IPv6 CIDR block as No IPv6 CIDR block.
    5. Select Default for Tenancy.
    6. Select 3 for Number of Availability Zones (AZs), Number of public subnets, and Number of private subnets.
    7. Select NAT gateways ($) as 1 per AZ.
    8. Select VPC endpoints as S3 Gateway.
  7. Check the option Enable DNS hostnames.
  8. Click Create VPC. A new VPC is created.
  9. Note the 6 subnet IDs that are displayed after you have created the VPC. You need these IDs to install the ODF provider service.
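The rosa create service command used later in this guide expects these subnet IDs as a single comma-separated value for its --subnet-ids flag. A minimal sketch of joining them into one variable (the subnet IDs below are placeholders, not real IDs):

```shell
# Placeholder subnet IDs noted from the AWS console after creating the VPC.
# Replace these with the 3 private and 3 public subnet IDs of your own VPC.
PRIVATE_SUBNETS="subnet-priv1 subnet-priv2 subnet-priv3"
PUBLIC_SUBNETS="subnet-pub1 subnet-pub2 subnet-pub3"

# Join all six IDs with commas, as expected by the --subnet-ids flag.
SUBNET_IDS=$(echo $PRIVATE_SUBNETS $PUBLIC_SUBNETS | tr ' ' ',')
echo "$SUBNET_IDS"
```

The resulting value can then be passed as --subnet-ids "$SUBNET_IDS". For a PrivateLink deployment, include only the 3 private subnet IDs.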

2.3. Creating an AWS security group

The OpenShift Data Foundation provider service requires an Amazon Web Services (AWS) security group. The installation of the provider service fails if this security group is not created.

Prerequisites

  • Access to AWS account.
  • Details of the VPC created in the previous section.

Procedure

  1. Log in to your AWS account using the AWS console.
  2. Search for Security Groups in the search tab. A list of features is displayed.
  3. Under Features, click Security Groups for EC2.
  4. Click Create security group.
  5. On the Create security group page:

    1. Add the Security group name as odf-sec-group.
    2. In the Description field, add a description of your choice.
    3. Select the previously created VPC in the VPC option.
  6. In the Inbound Rules tab, click Add rules.
  7. Add the Source as 10.0.0.0/16 for all 5 ports.

    Important

    If you are using a custom CIDR range, you must use this CIDR range as the Source for the inbound rules. Also, if you are using multiple CIDR ranges, you must create copies of each rule for each CIDR range.

  8. Add Type as Custom TCP for all 5 ports.
  9. Add the 5 ports as mentioned in the following table:

    Table 2.1. Ports required for the deployment of OpenShift Data Foundation Managed Service

    Type         Protocol   Ports       Description
    Custom TCP   TCP        6789        Ceph Monitor
    Custom TCP   TCP        3300        Ceph Monitor
    Custom TCP   TCP        6800-7300   Ceph OSD, MGR, MDS
    Custom TCP   TCP        9283        Ceph MGR Prometheus Exporter
    Custom TCP   TCP        31659       API server

  10. In the Tags tab:

    1. Select Name in the Key field.
    2. Add Value as odf-sec-group.
  11. Click Create security group. A new security group with the name odf-sec-group is created.
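The five inbound rules above can also be added from the command line instead of the console. The following sketch only prints the equivalent aws ec2 authorize-security-group-ingress commands so that you can review them before running anything; the security group ID is a placeholder, and the CIDR must match your VPC:

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder: the ID of odf-sec-group
CIDR="10.0.0.0/16"             # the Source CIDR used for the inbound rules

# Build one command per port (or port range) from Table 2.1.
CMDS=""
for PORT in 6789 3300 6800-7300 9283 31659; do
  CMDS="$CMDS
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $PORT --cidr $CIDR"
done
echo "$CMDS"
```

If you use multiple CIDR ranges, repeat the loop once per range, as noted in the Important box above.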

The ODF provider service deploys a Red Hat OpenShift Service on AWS (ROSA) cluster along with the provider add-on, enabling the provider to provide storage services to other OpenShift managed clusters.

You can also deploy the ODF provider in a private network using the ODF provider service with the --private-link flag. PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet.

Important

You must use the --private-link flag, and add only the 3 private subnet IDs, if you want to deploy the ODF provider in a private network.

Prerequisites

Procedure

  1. Deploy a ROSA cluster along with the provider add-on by running the following command from the ROSA CLI:

    $ rosa create service \
    --name <cluster-name> \
    --type ocs-provider  \
    --size <size in TiB> \
    --onboarding-validation-key "$(cat pubkey.pem | sed 's/-.*PUBLIC KEY-*//')" \
    --machine-cidr <new CIDR> \
    --subnet-ids <3 private and 3 public subnet-ids separated by comma> \
    --notification-email-<x> <email address>

    Command for deploying ODF provider in a private network using the --private-link flag:

    $ rosa create service \
    --name <cluster-name> \
    --type ocs-provider  \
    --size <size in TiB> \
    --onboarding-validation-key "$(cat pubkey.pem | sed 's/-.*PUBLIC KEY-*//')" \
    --private-link \
    --machine-cidr <new CIDR> \
    --subnet-ids <3 private subnet-ids separated by comma> \
    --notification-email-<x> <email address>

    where,

    cluster-name: name of the cluster.

    size: size of the cluster in TiB. You can choose 4, 8, or 20 TiB.

    onboarding-validation-key: public key.

    sed 's/-.*PUBLIC KEY-*//': this part of the command removes the header and footer lines present in the public key file.

    machine-cidr: optional parameter. Use this parameter if you are installing the provider with a CIDR other than the default one generated while creating the Virtual Private Cloud (VPC). However, it is recommended to use only the single default CIDR.

    Important

    Ensure that you use the correct subnet IDs if you are installing the provider cluster with non-default CIDRs. Do not use the default subnet IDs generated while creating the VPC.

    subnet-ids: subnet IDs generated while creating the Virtual Private Cloud (VPC). If you have used a non-default machine-cidr, the subnet IDs are inside that CIDR.

    notification-email-<x>: optional parameter. You can add up to 3 email addresses by using this parameter, where x is 0, 1, or 2. Set x to 0 to add the first email address, and to 1 or 2 to add the second and third email addresses.

    Example output

    $ rosa create service \
    --type ocs-provider \
    --name provider-clstr \
    --size 20 \
    --onboarding-validation-key "$(cat pubkey.pem | sed 's/-.*PUBLIC KEY-*//')" \
    --subnet-ids $SUBNET_IDS \
    --notification-email-1 abc@xyz.com
    
    I: Using "arn:aws:iam::0123456789:role/ManagedOpenShift-Installer-Role" for the Installer role
    I: Using "arn:aws:iam::0123456789:role/ManagedOpenShift-ControlPlane-Role" for the ControlPlane role
    I: Using "arn:aws:iam::0123456789:role/ManagedOpenShift-Worker-Role" for the Worker role
    I: Using "arn:aws:iam::0123456789:role/ManagedOpenShift-Support-Role" for the Support role
    I: Service created!
            Service ID: 287a9hfdBTfta7PocZ2nkWyiT6k
    I: Run the following commands to continue the cluster creation:
            rosa create operator-roles --cluster provider-clstr
            rosa create oidc-provider --cluster provider-clstr

  2. Create the cluster-specific operator IAM roles. The created roles include the relevant prefix for the cluster name. Replace <cluster-name> with the name of the cluster.

    $ rosa create operator-roles --cluster <cluster-name> --yes --mode auto
  3. Create the OpenID Connect (OIDC) provider for the operators to authenticate. Replace <cluster-name> with the name of the cluster.

    $ rosa create oidc-provider --cluster <cluster-name> --yes --mode auto
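The sed filter used with the --onboarding-validation-key flag above strips the PEM header and footer lines so that only the base64 key body is passed to the service. Its effect can be checked locally against a sample file (the key body below is a placeholder, not a real key):

```shell
# Write a sample PEM file with a placeholder body.
cat > sample-pubkey.pem <<'EOF'
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8A
PLACEHOLDERKEYBODY
-----END PUBLIC KEY-----
EOF

# Same filter as in the rosa create service command: lines containing
# "PUBLIC KEY" are emptied, leaving only the base64 body.
KEY_BODY=$(cat sample-pubkey.pem | sed 's/-.*PUBLIC KEY-*//')
echo "$KEY_BODY"
rm -f sample-pubkey.pem
```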

Verification steps

  • To check the status of the service installation, run the following command. While the provider service is installing, the status changes from waiting for cluster to pending. When the installation is complete, the status changes to ready.

    1. Get the service ID of the cluster.

      $ rosa list services

      After the installation is complete, a unique service ID is created.

      Example output

      $ rosa list services
      
      SERVICE_ID                   SERVICE         SERVICE_STATE   CLUSTER_NAME
      287a9hfdBTfta7PocZ2nkWyiT6k  ocs-provider    ready           provider-clstr

    2. Run the following command to get more information on a specific service:

      $ rosa describe service --id <service_ID>

      Example output

      $ rosa describe service --id=287a9hfdBTfta7PocZ2nkWyiT6k
      
      Id:                         287a9hfdBTfta7PocZ2nkWyiT6k
      Href:                       /api/service_mgmt/v1/services/2CVi2QOko5StffxMqdIMYtHKULQ
      Service type:               ocs-provider
      Service State:              ready
      Cluster Name:               provider-clstr
      Created At:                 2022-04-21 03:52:24 +0000 UTC
      Updated At:                 2022-04-21 11:00:11 +0000 UTC

You can expand the usable storage capacity of the provider cluster using the ROSA CLI. The available capacity options are 4, 8, and 20 TiB.

Note

The time to upscale the cluster depends on the time taken to rebalance the Ceph OSDs.

Important

Scaling down of the cluster size is not supported.

Prerequisites

Procedure

  1. Get the service ID of the provider cluster.

    $ rosa list services

    Example output

    $ rosa list services
    
    SERVICE_ID                    SERVICE           SERVICE_STATE   CLUSTER_NAME
    287a9hfdBTfta7PocZ2nkWyiT6k   ocs-provider      ready           provider-clstr

  2. Get the details of the service ID.

    $ rosa describe service --id=<service-id>

    Example output

    $ rosa describe service --id="287a9hfdBTfta7PocZ2nkWyiT6k"
    
    Id:                         287a9hfdBTfta7PocZ2nkWyiT6k
    Href:                       /api/service_mgmt/v1/services/287a9hfdBTfta7PocZ2nkWyiT6k
    Service type:               ocs-provider
    Service State:              ready
    Cluster Name:               provider-clstr
    Created At:                 2022-04-21 14:34:27 +0000 UTC
    Updated At:                 2022-04-21 15:37:31 +0000 UTC

  3. Expand the size of the cluster using the following command:

    $ rosa edit service --id=<service_ID>  --size="<new_size in TiB>"

    Example output

    $ rosa edit service --id=287a9hfdBTfta7PocZ2nkWyiT6k --size="8"
    I: Service "287a9hfdBTfta7PocZ2nkWyiT6k" is now updating. To check the status run rosa describe service --id 287a9hfdBTfta7PocZ2nkWyiT6k
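Because the only valid capacity options are 4, 8, and 20 TiB, and scaling down is not supported, a simple client-side check before running rosa edit service can be sketched as follows (the current and requested sizes are example values):

```shell
CURRENT_SIZE=4    # current cluster size in TiB, from rosa describe service
NEW_SIZE=8        # requested new size in TiB

# The managed service only accepts 4, 8, or 20 TiB.
case "$NEW_SIZE" in
  4|8|20) ;;
  *) echo "invalid size: $NEW_SIZE (must be 4, 8, or 20)"; VALID=no ;;
esac

# Scaling down (or requesting the same size) is not supported.
if [ "$NEW_SIZE" -le "$CURRENT_SIZE" ]; then
  echo "size must be larger than the current size ($CURRENT_SIZE TiB)"
  VALID=no
fi

VALID=${VALID:-yes}
echo "$VALID"
```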

Verification step

  • To verify if the size has increased, run the following command.

    $ rosa describe service --id <service_ID>

    Example output

    $ rosa describe service --id 287a9hfdBTfta7PocZ2nkWyiT6k
    Id:                         287a9hfdBTfta7PocZ2nkWyiT6k
    Href:                       /api/service_mgmt/v1/services/287a9hfdBTfta7PocZ2nkWyiT6k
    Service type:               ocs-provider
    Service State:              ready
    Cluster Name:               provider-clstr
    Created At:                 2022-11-11 07:46:59 +0000 UTC
    Updated At:                 2022-11-11 11:21:10 +0000 UTC
    Parameters:
        "onboarding-validation-key" : "<key>"
        "notification-email-1"      : "abc@xyz.com"
        "notification-email-2"      : "pqr@xyz.com"
        "size"                      : "8"

The consumer add-on is a small footprint add-on that enables general-purpose ROSA clusters to connect to, and consume storage services from a Red Hat OpenShift Data Foundation Managed Service provider.

To install the consumer add-on, complete the steps in the following sections:

  1. Installing a Red Hat OpenShift Service on AWS cluster for Red Hat OpenShift Data Foundation Managed Service consumer.
  2. Creating the onboarding ticket for Red Hat OpenShift Data Foundation Managed Service consumer.
  3. Getting the storage provider API endpoint information for the Red Hat OpenShift Data Foundation Managed Service consumer.
  4. Installing the Red Hat OpenShift Data Foundation Managed Service consumer add-on using the Red Hat OpenShift Service on AWS command-line interface.

To install a Red Hat OpenShift Service on AWS cluster, see Installing a Red Hat OpenShift Service on AWS cluster for Red Hat OpenShift Data Foundation Managed Service consumer add-on in the Getting started with Red Hat OpenShift Data Foundation Managed Service guide.

The onboarding ticket is an alphanumeric string that is required when installing the consumer add-on.

Prerequisites

  • An RSA public-private key pair.

Procedure

  1. To create the onboarding ticket, download the ticketgen script from the Git repository.

    $ wget https://raw.githubusercontent.com/red-hat-storage/ocs-operator/main/hack/ticketgen/ticketgen.sh
  2. Make the ticketgen script executable.

    $ chmod +x ./ticketgen.sh
  3. Execute the ticketgen script. The output of this command is the onboarding ticket, a signed token.

    $ ./ticketgen.sh <private-key>

    Example output

    $ ./ticketgen.sh key.pem
    eyJpZCI6IjI4MWI5Mjc5LWI0OWEtNDVlZS04NzQ3LWQyMWRjM2M1NWJjMSIsImV4cGlyYXRpb25EYXRlIjoiMTY0NTA5OTA0NyJ9.N87tPZ9pJQCyMcObJ5sa1id0drvKx/oQUvrXQTN6AAa16GJC0Za/1rSMf0dHoNuo4rOQuHfkOjq0U2I8yFZ9D8PqlAfhFQLnc1h0rBiTWAkbjHdmrpI7wDH2/1azooFIT5Aug=

Important

You must use a new onboarding ticket every time you attempt to install the consumer add-on.
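Judging by the example output above, the onboarding ticket appears to be a dot-separated token: a base64-encoded JSON payload with an id and an expirationDate, followed by a signature produced with the private key. The exact format is defined by the ticketgen script; the following sketch only illustrates decoding the payload half of such a token, built locally for the example:

```shell
# Build an illustrative ticket: base64(JSON payload) + "." + fake signature.
# This is NOT a real onboarding ticket; it only mimics the shape shown above.
PAYLOAD='{"id":"00000000-0000-0000-0000-000000000000","expirationDate":"1645099047"}'
TICKET="$(printf '%s' "$PAYLOAD" | base64 | tr -d '\n').FAKESIGNATURE"

# The part before the first "." is the payload; decode it to inspect.
DECODED=$(printf '%s' "${TICKET%%.*}" | base64 -d)
echo "$DECODED"
```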

The storage provider API endpoint information is required to install the consumer add-on.

Prerequisites

  • The OpenShift Data Foundation Managed Service provider.
  • Access to AWS account.

Procedure

  1. Log in to your AWS account using the AWS console.
  2. Search for EC2 in the search tab. A list of features is displayed.
  3. Click EC2. A list of EC2 instances is displayed.
  4. From the Regions drop-down list, select the region where you have installed the provider.
  5. Click on any one of the worker nodes and look for the Private IPv4 addresses. Make a note of the IP address.
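When you install the consumer add-on later, this IP address must be combined with the API server port (31659, the port opened in the security group) to form the Storage Provider API Endpoint. A trivial sketch, with a placeholder IP:

```shell
PROVIDER_IP="10.0.0.11"   # placeholder: private IPv4 address of a provider worker node
API_PORT=31659            # API server port from the security group rules

ENDPOINT="${PROVIDER_IP}:${API_PORT}"
echo "$ENDPOINT"
```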

Install the consumer add-on to enable the Red Hat OpenShift Service on AWS (ROSA) cluster to consume the storage services provided by the Red Hat OpenShift Data Foundation Managed Service provider.

Prerequisites

Procedure

  1. From the newly installed ROSA cluster CLI, run the following command to list all the available add-ons:

    $ rosa list addons

    Example output

    $ rosa list addons
    
    ID                              NAME                                                          AVAILABILITY
    cluster-logging-operator        Cluster Logging Operator                                      available
    dbaas-operator                  Red Hat OpenShift Database Access                             available
    managed-api-service             Red Hat Openshift API Management                              available
    managed-api-service-internal    Red Hat Openshift API Management (internal)                   unavailable
    managed-odh                     Red Hat OpenShift Data Science                                available
    ocs-consumer                    Red Hat OpenShift Data Foundation Managed Service Consumer    available
    ocs-converged                   Red Hat OpenShift Data Foundation Managed Service             available
    ocs-provider                    Red Hat OpenShift Data Foundation Managed Service Provider    available

  2. Install the consumer add-on using the following command. After you run the command, an interactive prompt is displayed. Enter the information requested at the prompt.

    Important

    When adding the storage provider API endpoint information, append the API port number (31659) to the IP address.

    For example, if the IP address is 10.10.xxx.xx, append 31659 to this address as 10.10.xxx.xx:31659.

    $ rosa install addon --cluster=<cluster-name> ocs-consumer

    Example output

    $ rosa install addon --cluster=rosa-cluster ocs-consumer
    
    ? Are you sure you want to install add-on ocs-consumer on cluster rosa-cluster? Yes
    ? Consumer Onboarding Ticket: <alpha-numeric string>
    ? Storage Provider API Endpoint: 10.10.xxx.xx:31659
    ? Notification Email (optional): abc@xyz.com
    ? Additional Notification Email (optional):
    ? Additional Notification Email (optional):
    I: Add-on ocs-consumer is now installing. To check the status run rosa list addons -c rosa-cluster
    I: To install this addon again in the future, you can run:
    rosa install addon --cluster rosa-cluster ocs-consumer -y --storage-provider-endpoint 10.0.xxx.xx:31659 --onboarding-ticket <alpha-numeric string> --notification-email-0 abc@xyz.com

Verification steps

  • To check the status of the add-on installation process, run the following command. While the add-on is installing, the status is installing. When the installation is complete, the status changes to installed.

    $ rosa list addons -c <cluster-name>

    Example output

    $ rosa list addons -c rosa-cluster
    
    ID                              NAME                                                          STATE
    cluster-logging-operator        Cluster Logging Operator                                      installed
    dbaas-operator                  Red Hat OpenShift Database Access                             not installed
    managed-api-service             Red Hat Openshift API Management                              not installed
    managed-api-service-internal    Red Hat Openshift API Management (internal)                   not installed
    managed-odh                     Red Hat OpenShift Data Science                                not installed
    ocs-consumer                    Red Hat OpenShift Data Foundation Managed Service Consumer    installing
    ocs-converged                   Red Hat OpenShift Data Foundation Managed Service             not installed
    ocs-provider                    Red Hat OpenShift Data Foundation Managed Service Provider    not installed

A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application.

The volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. To create a volume snapshot, the VolumeSnapshotClass must be created manually.

Important

The name of the VolumeSnapshotClass must be the same as the name of the storageclass.

To create the VolumeSnapshotClass manually, the following details are required from the storageclass:

  • Provisioner
  • ClusterID
  • Provisioner secret name
  • Provisioner secret namespace

Procedure

  1. Get the details of the storageclass for which you want to create the VolumeSnapshotClass. You can create the VolumeSnapshotClass for storageclasses: ocs-storagecluster-cephfs and ocs-storagecluster-ceph-rbd.

    $ oc get sc <storageclass_name> -o yaml
  2. Make a note of the following parameters:

    • provisioner
    • clusterID
    • provisioner-secret-name
    • provisioner-secret-namespace

      Example output for storage class: ocs-storagecluster-cephfs

      $ oc get sc ocs-storagecluster-cephfs -o yaml
      
      allowVolumeExpansion: true
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        annotations:
          description: Provides RWO and RWX Filesystem volumes
        creationTimestamp: "2022-06-29T09:49:55Z"
        name: ocs-storagecluster-cephfs
        resourceVersion: "49859"
        uid: c2e243c2-debd-4003-acd9-83bfc6941010
      parameters:
        clusterID: acc17ba1860acbfab323094c41044821
        csi.storage.k8s.io/controller-expand-secret-name: rook-ceph-client-af1ecb1069297b385e35f6b256bee035
        csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
        csi.storage.k8s.io/node-stage-secret-name: rook-ceph-client-b9b6f5bcb4b159155e540f375d2c27a6
        csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
        csi.storage.k8s.io/provisioner-secret-name: rook-ceph-client-af1ecb1069297b385e35f6b256bee035
        csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
        fsName: ocs-storagecluster-cephfilesystem
      provisioner: openshift-storage.cephfs.csi.ceph.com
      reclaimPolicy: Delete
      volumeBindingMode: Immediate

      Example output for storage class: ocs-storagecluster-ceph-rbd

      $ oc get sc ocs-storagecluster-ceph-rbd -o yaml
      
      allowVolumeExpansion: true
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        annotations:
          description: Provides RWO Filesystem volumes, and RWO and RWX Block volumes
        creationTimestamp: "2022-06-29T09:49:55Z"
        name: ocs-storagecluster-ceph-rbd
        resourceVersion: "49858"
        uid: 3ff3e81e-edfc-4e7b-a6cd-e475d78844f3
      parameters:
        clusterID: openshift-storage
        csi.storage.k8s.io/controller-expand-secret-name: rook-ceph-client-f93f7b537015c19d4a5b93838cf1426f
        csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
        csi.storage.k8s.io/fstype: ext4
        csi.storage.k8s.io/node-stage-secret-name: rook-ceph-client-998fd2bee8383a87e7a48e9642f33b76
        csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
        csi.storage.k8s.io/provisioner-secret-name: rook-ceph-client-f93f7b537015c19d4a5b93838cf1426f
        csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
        imageFeatures: layering
        imageFormat: "2"
        pool: cephblockpool-storageconsumer-23277c14-f608-46b9-8709-97c7b89ba396
      provisioner: openshift-storage.rbd.csi.ceph.com
      reclaimPolicy: Delete
      volumeBindingMode: Immediate

  3. Create a new YAML file with the following content, and add the parameters noted in the above step:

    • provisioner as the value of driver.
    • clusterID as the value of clusterID.
    • provisioner-secret-name as the value of snapshotter-secret-name.
    • provisioner-secret-namespace as the value of snapshotter-secret-namespace.

      Example YAML file for creating VolumeSnapshotClass for ocs-storagecluster-ceph-rbd storageclass:

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotClass
      metadata:
        name: ocs-storagecluster-ceph-rbd
      driver: openshift-storage.rbd.csi.ceph.com
      parameters:
        clusterID: openshift-storage
        csi.storage.k8s.io/snapshotter-secret-name: rook-ceph-client-f93f7b537015c19d4a5b93838cf1426f
        csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
      deletionPolicy: Delete

      Example YAML file for creating VolumeSnapshotClass for ocs-storagecluster-cephfs storageclass:

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotClass
      metadata:
        name: ocs-storagecluster-cephfs
      driver: openshift-storage.cephfs.csi.ceph.com
      parameters:
        clusterID: acc17ba1860acbfab323094c41044821
        csi.storage.k8s.io/snapshotter-secret-name: rook-ceph-client-af1ecb1069297b385e35f6b256bee035
        csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
      deletionPolicy: Delete

  4. Save the file.
  5. Create the VolumeSnapshotClass.

    $ oc create -f <file_name>.yaml

    Example output

    $ oc create -f snapshotclass-file.yaml
    
    volumesnapshotclass.snapshot.storage.k8s.io/ocs-storagecluster-ceph-rbd created
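The manual steps above can also be scripted. A minimal sketch that renders the VolumeSnapshotClass manifest from the four parameters noted from the storageclass (the secret name here is a placeholder; substitute the values from your own oc get sc output):

```shell
SC_NAME="ocs-storagecluster-ceph-rbd"          # storageclass name
DRIVER="openshift-storage.rbd.csi.ceph.com"    # from the provisioner field
CLUSTER_ID="openshift-storage"                 # from the clusterID parameter
SECRET_NAME="rook-ceph-client-placeholder"     # from provisioner-secret-name (placeholder)
SECRET_NS="openshift-storage"                  # from provisioner-secret-namespace

MANIFEST=$(cat <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: $SC_NAME
driver: $DRIVER
parameters:
  clusterID: $CLUSTER_ID
  csi.storage.k8s.io/snapshotter-secret-name: $SECRET_NAME
  csi.storage.k8s.io/snapshotter-secret-namespace: $SECRET_NS
deletionPolicy: Delete
EOF
)
echo "$MANIFEST"
# To create the resource, pipe the manifest to oc: echo "$MANIFEST" | oc create -f -
```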

Verification steps

  • Verify if the VolumeSnapshotClass is created.

    $ oc get volumesnapshotclass <volumesnapshotclass-name>

    Example output

    $ oc get volumesnapshotclass
    
    NAME                                        DRIVER                                  DELETIONPOLICY   AGE
    csi-aws-vsc                                 ebs.csi.aws.com                         Delete           3h
    ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete           105m
    ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete           88m
    ocs-storagecluster-cephfsplugin-snapclass   openshift-storage.cephfs.csi.ceph.com   Delete           158m
    ocs-storagecluster-rbdplugin-snapclass      openshift-storage.rbd.csi.ceph.com      Delete           158m

You must delete the volumesnapshots before you delete the volumesnapshotclass.

Procedure

  1. List the available volumesnapshots to check if there is a volumesnapshot attached to the volumesnapshotclass that you want to delete.

    $ oc get volumesnapshot -A  | grep <volumesnapshotclass_name>

    Example output

    $ oc get volumesnapshot -A | grep ocs-storagecluster-ceph-rbd
    
    test-project rbd-snapshot true rbd 100Gi ocs-storagecluster-ceph-rbd snapcontent-f6017561-5b03-4d74-a3ef-7e64524462e4 19h 19h
    test-project snapshot-rbd-test-project true pvc-rbd-test-project 10Gi ocs-storagecluster-ceph-rbd snapcontent-dfb89b4b-477d-4a5a-819b-d058a07d17e5 19h 19h
    test-project snapshot-pvc-rbd-test-project-restore-snapshot true snapshot-pvc-rbd-test-project-restore 10Gi ocs-storagecluster-ceph-rbd snapcontent-ba727d4a-c3dc-481e-9753-6979940edd8b 19h 19h

  2. Delete all the snapshots listed in step 1.

    $ oc delete volumesnapshot <volumesnapshot_name> -n <namespace>

    Example output

    $ oc delete volumesnapshot rbd-snapshot -n test-project
    
    volumesnapshot.snapshot.storage.k8s.io "rbd-snapshot" deleted

    Note

    Repeat this step to delete all the snapshots listed in step 1.

  3. Delete the VolumeSnapshotClass if there are no volume snapshots attached to it.

    $ oc delete volumesnapshotclass <snapshotclass_name>

    Example output

    $ oc delete volumesnapshotclass ocs-storagecluster-ceph-rbd
    
    volumesnapshotclass.snapshot.storage.k8s.io "ocs-storagecluster-ceph-rbd" deleted

Verification step

  • Verify if the volumesnapshotclass is deleted.

    $ oc get volumesnapshotclass <snapshotclass_name>

    Example output

    $ oc get volumesnapshotclass ocs-storagecluster-ceph-rbd
    
    Error from server (NotFound): volumesnapshotclasses.snapshot.storage.k8s.io "ocs-storagecluster-ceph-rbd" not found

You can update the email addresses entered while installing the ODF provider service and the ODF consumer add-on by using the ROSA CLI. You can also delete the email address if you want to stop receiving the alert notifications to that email address.

The commands to update and delete the email addresses are different for ODF provider and ODF consumer.

To update the email addresses for ODF provider, perform the steps in the procedure.

Prerequisite

  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.

Procedure

  1. Get the service ID of the cluster for which you want to edit the email address.

    $ rosa list services

    Example output

    $ rosa list services
    
    SERVICE_ID                    SERVICE           SERVICE_STATE   CLUSTER_NAME
    287a9hfdBTfta7PocZ2nkWyiT6k   ocs-provider      ready           provider-clstr

  2. Update the notification email address using the following command.

    While installing the ODF provider service, you are given the option to provide 3 email addresses. Use the flag --notification-email-<x>=<new_email_address> to update an email address, where x is 0, 1, or 2.

    Use --notification-email-0 to update the first notification email address, and --notification-email-1 or --notification-email-2 to update the second and third email addresses.

    $ rosa edit service --id=<service_ID> --notification-email-<x>="<new_email_address>"

    Example output

    $ rosa edit service --id=287a9hfdBTfta7PocZ2nkWyiT6k --notification-email-0="abc123@xyz.com"
    
    I: Service "287a9hfdBTfta7PocZ2nkWyiT6k" is now updating. To check the status run rosa describe service --id 287a9hfdBTfta7PocZ2nkWyiT6k

Verification step

  • To verify if the email address is updated, run the following command.

    $ rosa describe service --id <service_ID>

    Example output

    $ rosa describe service --id=287a9hfdBTfta7PocZ2nkWyiT6k
    
    Id:                         287a9hfdBTfta7PocZ2nkWyiT6k
    Href:                       /api/service_mgmt/v1/services/2CVi2QOko5StffxMqdIMYtHKULQ
    Service type:               ocs-provider
    Service State:              ready
    Cluster Name:               provider-clstr
    Created At:                 2022-04-21 03:52:24 +0000 UTC
    Updated At:                 2022-04-21 11:00:11 +0000 UTC
    Parameters:                
        "size"                      : "20"
        "onboarding-validation-key" : "MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAw0JtzdXJx9zw90BFVqx3gXn861kh1YZHKF9YoD6gXO0bSXslvK52TklYcF7fYGx615R/qRraI/N7kg6Igp8n255Yz1Ycdf9C0/ThlynhnNo7HeLA5MZy9L22hzm72PztY0CAwEAAQ=="
        "notification-email-0"      : "abc123@xyz.com"

If you want to stop receiving the alert notifications to a particular email address, you can delete that notification email address using the steps in the procedure.

Prerequisite

  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.

Procedure

  1. Get the service ID of the cluster for which you want to edit the email address.

    $ rosa list services

    Example output

    $ rosa list services
    
    SERVICE_ID                    SERVICE           SERVICE_STATE   CLUSTER_NAME
    287a9hfdBTfta7PocZ2nkWyiT6k   ocs-provider      ready           provider-clstr

  2. Delete the email address by passing an empty value to the --notification-email-<x> flag with the edit command.

    While installing the ODF provider service, you can provide up to three email addresses. To delete one of them, use the flag --notification-email-<x>="", where <x> is 0, 1, or 2.

    Use --notification-email-0 to delete the first notification email address, and --notification-email-1 or --notification-email-2 to delete the second or third email address.

    $ rosa edit service --id=<service_ID> --notification-email-<x>=""

    Example output

    $ ./rosa edit service --id=287a9hfdBTfta7PocZ2nkWyiT6k  --notification-email-0=""
    
    I: Service "287a9hfdBTfta7PocZ2nkWyiT6k" is now updating. To check the status run rosa describe service --id 287a9hfdBTfta7PocZ2nkWyiT6k
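If you need to clear more than one slot, the edit commands above can be scripted. The following sketch is not part of the official procedure: it builds the rosa edit service command for each of the three slots, using the example service ID from this section, and prints the commands for review instead of running them.

```shell
#!/bin/sh
# Sketch: clear all three notification email slots on the provider
# service. SERVICE_ID below is the example ID from this procedure;
# substitute your own.
SERVICE_ID="287a9hfdBTfta7PocZ2nkWyiT6k"

# Build the edit command for one slot (0, 1, or 2).
build_clear_cmd() {
    echo "rosa edit service --id=${SERVICE_ID} --notification-email-$1=\"\""
}

# Print the command for each slot; pipe the output to 'sh' (or replace
# 'echo' in build_clear_cmd with the real invocation) to execute.
for slot in 0 1 2; do
    build_clear_cmd "$slot"
done
```

Printing first, then executing, lets you confirm the generated flags before touching a live service.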

Verification step

  • To verify that the email address is deleted, run the following command:

    $ rosa describe service --id <service_ID>

    Example output

    $ ./rosa describe service --id=287a9hfdBTfta7PocZ2nkWyiT6k
    
    Id:                         287a9hfdBTfta7PocZ2nkWyiT6k
    Href:                       /api/service_mgmt/v1/services/2CVi2QOko5StffxMqdIMYtHKULQ
    Service type:               ocs-provider
    Service State:              ready
    Cluster Name:               provider-clstr
    Created At:                 2022-04-21 03:52:24 +0000 UTC
    Updated At:                 2022-04-21 11:00:11 +0000 UTC
    Parameters:                
        "size"                      : "20"
        "onboarding-validation-key" : "MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAw0JtzdXJx9zw90BFVqx3gXn861kh1YZHKF9YoD6gXO0bSXslvK52TklYcF7fYGx615R/qRraI/N7kg6Igp8n255Yz1Ycdf9C0/ThlynhnNo7HeLA5MZy9L22hzm72PztY0CAwEAAQ=="
        "notification-email-0"      : ""

To update the email addresses for the ODF consumer, perform the steps in the following procedure.

Prerequisite

  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.

Procedure

  • Run the following command to update the email address:

    While installing the ODF consumer add-on, you can provide up to three email addresses. To update one of them, use the flag --notification-email-<x> <new_email_address>, where <x> is 0, 1, or 2.

    Use --notification-email-0 to update the first notification email address, and --notification-email-1 or --notification-email-2 to update the second or third email address.

    $ rosa edit addon -c <cluster-name> ocs-consumer --notification-email-<x> "<new email_address>"

    Example output

    $ rosa edit addon -c cnsumr-clstr ocs-consumer --notification-email-1 "abc123@xyz.com"
    
    I: Add-on ocs-consumer is now updating. To check the status run rosa list addons -c cnsumr-clstr

Verification step

  • To verify that the email address is updated, run the following command:

    $ rosa describe addon-installation --cluster <cluster-name> --addon ocs-consumer

    Example output

    $ rosa describe addon-installation --cluster cnsumr-clstr --addon ocs-consumer
    Id:                          ocs-consumer
    Href:                        /api/clusters_mgmt/v1/clusters/1tn7ic59t3qp96vmbjjqokjfurac11u7/addons/ocs-consumer
    Addon state:                 ready
    Parameters:
    	"notification-email-1"      : "abc123@xyz.com"
    	"storage-provider-endpoint" : "10.x.x.x:31659"
    	"onboarding-ticket"         : "eyJpZCI6ImIxMDk2YmE1LTM5M2QtNGY5OC1hNDk3LTQ0NzE3NzZiN2NjYiIsImV4cGlWq4NaX9//htjCWNmhM547JirUi9YXvxC38H6yrMdm6vXIj0xpRHrU=\n"
    	"notification-email-2"      : "abc@xyz.com"

If you want to stop receiving alert notifications at a particular email address, delete that notification email address using the following procedure.

Prerequisite

  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.

Procedure

  • Delete the email address by passing an empty value to the --notification-email-<x> flag with the edit command.

    While installing the ODF consumer add-on, you can provide up to three email addresses. To delete one of them, use the flag --notification-email-<x> "", where <x> is 0, 1, or 2.

    Use --notification-email-0 to delete the first notification email address, and --notification-email-1 or --notification-email-2 to delete the second or third email address.

    $ rosa edit addon -c <cluster-name> ocs-consumer --notification-email-<x> ""

    Example output

    $ rosa edit addon -c cnsumr-clstr ocs-consumer --notification-email-1 ""
    
    I: Add-on ocs-consumer is now updating. To check the status run rosa list addons -c cnsumr-clstr

Verification step

  • To verify that the email address is deleted, run the following command:

    $ rosa describe addon-installation --cluster <cluster-name> --addon ocs-consumer

    Example output

    $ rosa describe addon-installation --cluster cnsumr-clstr --addon ocs-consumer
    Id:                          ocs-consumer
    Href:                        /api/clusters_mgmt/v1/clusters/1tn7ic59t3qp96vmbjjqokjfurac11u7/addons/ocs-consumer-qe
    Addon state:                 ready
    Parameters:
    	"notification-email-1"      : ""
    	"storage-provider-endpoint" : "10.x.x.x:31659"
    	"onboarding-ticket"         : "eyJpZCI6ImIxMDk2YmE1LTM5M2QtNGY5OC1hNDk3LTQ0NzE3NzZiN2NjYiIsImV4cGlWq4NaX9//htjCWNmhM547JirUi9YXvxC38H6yrMdm6vXIj0xpRHrU=\n"
    	"notification-email-2"      : "abc@xyz.com"

You can uninstall the Red Hat OpenShift Data Foundation Managed Service consumer add-on using the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI). After the add-on is uninstalled, the ROSA cluster can still be used for running other applications.

Important

Initiating the uninstallation process while dependent persistent volumes (PVs) still exist causes the process to hang until those PVs are deleted.

Prerequisites

  • A ROSA cluster with consumer add-on installed on it.
  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.
  • Ensure that no applications are consuming PVCs that use the following storage classes provided by OpenShift Data Foundation Managed Service, and then delete those PVCs:

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
  • Delete volume snapshots after removing any resources that used the snapshots.

Procedure

  • To uninstall the consumer add-on, run the following command from the ROSA CLI:

    $ rosa uninstall addon --cluster=<cluster-name> ocs-consumer

Verification steps

  • To verify the status of the consumer add-on, run the following command from the ROSA CLI.

    When you list the add-ons after the uninstallation process initiates, the state of the consumer add-on changes to deleting. After the uninstallation is complete, the state changes from deleting to not installed.

    $ rosa list addons -c <cluster-name>

    Example output

    $ rosa list addons -c rosa-cluster
    
    ID                          NAME                                                                STATE
    cluster-logging-operator    Cluster Logging Operator                                            installed
    dbaas-operator              Red Hat OpenShift Database Access                                   not installed
    managed-api-service         Red Hat Openshift API Management                                    not installed
    managed-odh                 Red Hat OpenShift Data Science                                      not installed
    mcg-osd                     Red Hat Data Federation Managed Service                             not installed
    ocm-addon-test-operator     OCM Add-On Test Operator                                            not installed
    ocs-converged               Red Hat OpenShift Data Foundation Managed Service                   not installed
    ocs-consumer                Red Hat OpenShift Data Foundation Managed Service Consumer          deleting
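The wait for the state change can be automated by parsing the rosa list addons output. Because the NAME column can span several words and the state itself can be the two words not installed, taking only the last field gives the wrong answer; the sketch below, which is an assumption about the tabular layout shown above rather than a supported interface, handles that case. The polling loop is commented out because it needs a live cluster.

```shell
#!/bin/sh
# Sketch: read the STATE column for one add-on from
# 'rosa list addons' output piped on stdin.
get_addon_state() {
    # $1 = add-on ID
    awk -v id="$1" '
        $1 == id {
            # The state may be the two words "not installed".
            if ($(NF - 1) == "not") print $(NF - 1), $NF
            else print $NF
        }
    '
}

# Polling loop for a live cluster (uncomment to use):
# until rosa list addons -c <cluster-name> | get_addon_state ocs-consumer \
#         | grep -qx "not installed"; do
#     sleep 30
# done
```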

You can uninstall the Red Hat OpenShift Data Foundation Managed Service provider service using the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI).

Important

Uninstalling the provider service deletes the provider add-on and also the underlying ROSA cluster.

Prerequisites

  • Remove the consumers that are connected to the provider.
  • Access to the latest version of the ROSA command-line interface (CLI). For more information about the latest ROSA CLI version, see ROSA CLI versions.

Procedure

  1. To delete the provider add-on, you require the service ID that was created after installation of the provider service. To get the service ID, run the following command from the ROSA CLI:

    $ rosa list services

    Example output

    $ rosa list services
    
    SERVICE_ID                   SERVICE         SERVICE_STATE   CLUSTER_NAME
    287a9hfdBTfta7PocZ2nkWyiT6k  ocs-provider    ready           provider-clstr

  2. To uninstall the provider service, run the following command:

    $ rosa delete service --id=<service_ID>

    Type yes at the prompt Are you sure you want to delete service with id <service_ID>.

Verification steps

  • To verify the status of the service, run the following command from the ROSA CLI. When you list the services after the uninstallation process initiates, the state of the service changes to deleting. After the uninstallation is complete, the deleted service ID is not listed in the services list.

    $ rosa list services

    Example output

    $ rosa list services
    
    SERVICE_ID                  SERVICE         SERVICE_STATE    CLUSTER_NAME
    287a9hfdBTfta7PocZ2nkWyiT6k ocs-provider    deleting service provider-clstr
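Similarly, the wait for the service to disappear from the list can be scripted. The sketch below is a hypothetical helper based on the tabular output shown above: it exits successfully while the given service ID is still listed, so it can drive a polling loop. The loop itself is commented out because it needs a live cluster.

```shell
#!/bin/sh
# Sketch: exit 0 while the given service ID still appears in
# 'rosa list services' output (piped on stdin), non-zero once gone.
service_listed() {
    # $1 = service ID
    awk -v id="$1" 'BEGIN { found = 1 } $1 == id { found = 0 } END { exit found }'
}

# Wait for deletion to finish on a live cluster (uncomment to use):
# while rosa list services | service_listed 287a9hfdBTfta7PocZ2nkWyiT6k; do
#     sleep 30
# done
```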

Legal Notice

Copyright © 2023 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.