Chapter 5. Storage classes


5.1. Storage classes

Storage classes define how object data is placed and managed in the Ceph Object Gateway (RGW). They map objects to specific placement targets and support cost- and performance-optimized tiering, especially when used with S3 bucket lifecycle transitions.

All placement targets include the STANDARD storage class, which is applied to new objects by default. Users can override this default by setting the default_storage_class value. To store an object in a non-default storage class, specify the storage class name in the request header, as shown in the example after the following list:

  • S3 protocol: X-Amz-Storage-Class
  • Swift protocol: X-Object-Storage-Class
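
For example, the AWS CLI sets the X-Amz-Storage-Class header for you when you pass the --storage-class option. This is a minimal sketch; the endpoint URL and bucket name are placeholder values for your environment:

    aws --endpoint-url http://rgw.example.com:8080 s3 cp ./object.bin \
      s3://mybucket/object.bin --storage-class STANDARD_IA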

S3 Object Lifecycle Management can then automate transitions between storage classes using Transition actions, as in the example below.
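
For example, you can create a lifecycle.json file such as the following and apply it with the AWS CLI so that objects transition to STANDARD_IA after 30 days. This is a minimal sketch; the endpoint URL and bucket name are placeholders:

    {
      "Rules": [
        {
          "ID": "transition-to-ia",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}]
        }
      ]
    }

    aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration \
      --bucket mybucket --lifecycle-configuration file://lifecycle.json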

When using AWS S3 SDKs (for example, boto3), storage class names must match AWS naming conventions. Otherwise, the SDK might reject the request or raise an exception. Some SDKs also expect AWS-specific behavior when names such as GLACIER are used, which can cause failures when accessing Ceph RGW.

To avoid these issues, Ceph recommends using AWS-recognized storage class names such as:

  • INTELLIGENT_TIERING
  • STANDARD_IA
  • REDUCED_REDUNDANCY
  • ONEZONE_IA

Custom storage classes, such as CHEAPNDEEP, are accepted by Ceph but might not be recognized by some S3 clients or libraries.

5.1.1. Use cases

Storage classes are commonly used in the following scenarios to optimize data placement, cost, and performance:

  • Moving infrequently accessed objects to low-cost pools using automated lifecycle transitions.
  • Assigning latency-sensitive or frequently accessed workloads to faster pools, such as NVMe-backed pools.
  • Creating custom storage classes for compliance, isolation, or application-specific data placement (for example, APP_LOGS, ML_DATA).
  • Automating multi-tier transitions, such as STANDARD → STANDARD_IA → archival pool, based on object age or access patterns.
  • Applying different durability or resiliency profiles by mapping storage classes to pools with varying replication or erasure coding settings.
  • Separating workloads (analytics, logging, or backup) into pools optimized for compression, durability, or cost models.

The following procedure adds a new storage class to a placement target in the IBM Storage Ceph Object Gateway and maps it to a data pool and compression settings.

5.1.1.1. Prerequisites

Before you begin, ensure that the following requirements are met:

  • You have administrator privileges to run radosgw-admin commands.
  • The zonegroup and zone are already configured in the Ceph Object Gateway.
  • The required data pool (for example, default.rgw.glacier.data) exists in the Ceph cluster.
  • The radosgw-admin tool is installed and available on the system where you run the commands.

5.1.1.2. Procedure

  1. Add the new storage class to the zonegroup placement target.

    Syntax

    radosgw-admin zonegroup placement add \
      --rgw-zonegroup default \
      --placement-id default-placement \
      --storage-class STANDARD_IA

    Example

    radosgw-admin zonegroup placement add \
      --rgw-zonegroup prod-zonegroup \
      --placement-id app-placement \
      --storage-class APP_LOGS

    This command updates the zonegroup placement configuration to include the new storage class.
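
    Optionally, verify that the new storage class is listed under the placement target's storage_classes in the zonegroup configuration. A minimal check:

    radosgw-admin zonegroup get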

  2. Define the zone-specific placement configuration for the storage class.

    Syntax

    radosgw-admin zone placement add \
      --rgw-zone default \
      --placement-id default-placement \
      --storage-class STANDARD_IA \
      --data-pool default.rgw.glacier.data \
      --compression lz4

    Example

    radosgw-admin zone placement add \
      --rgw-zone prod-zone \
      --placement-id app-placement \
      --storage-class APP_LOGS \
      --data-pool prod.rgw.logs.data \
      --compression lz4

    This command maps the storage class to the specified data pool with the selected compression algorithm.
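
    Optionally, verify the mapping by inspecting the zone configuration. In multi-site deployments, commit the configuration change with a period update. A minimal sketch:

    radosgw-admin zone get
    radosgw-admin period update --commit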

5.1.1.3. Result

The new storage class is now available for use in the specified placement target. You can specify it when uploading objects using S3 headers or reference it in S3 Bucket Lifecycle transition rules.
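
To confirm the storage class that was applied to an uploaded object, you can inspect the object's metadata. A minimal sketch using the AWS CLI; the endpoint URL, bucket, and key names are placeholders:

    aws --endpoint-url http://rgw.example.com:8080 s3api head-object \
      --bucket mybucket --key object.bin

The response includes a StorageClass field for objects stored in a class other than STANDARD.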

5.2. High availability for the Ceph Object Gateway

As a storage administrator, you can assign many instances of the Ceph Object Gateway to a single zone, that is, the same zone group and zone, and scale out as the load increases. You do not need a federated architecture to use a highly available proxy. Because each Ceph Object Gateway daemon has its own IP address, you can use the ingress service to balance the load across many Ceph Object Gateway daemons or nodes. The ingress service manages HAProxy and keepalived daemons for the Ceph Object Gateway environment. You can also terminate HTTPS traffic at the HAProxy server and use HTTP between the HAProxy server and the Beast front-end web server instances for the Ceph Object Gateway.

Prerequisites

  • At least two Ceph Object Gateway daemons running on different hosts.
  • Capacity for at least two instances of the ingress service running on different hosts.

5.2.1. High availability service

The ingress service provides a highly available endpoint for the Ceph Object Gateway. The ingress service can be deployed to any number of hosts as needed. Red Hat recommends having at least two supported Red Hat Enterprise Linux servers, each server configured with the ingress service. You can run a high availability (HA) service with a minimum set of configuration options. The Ceph orchestrator deploys the ingress service, which manages the haproxy and keepalived daemons, by providing load balancing with a floating virtual IP address. The active haproxy distributes all Ceph Object Gateway requests to all the available Ceph Object Gateway daemons.

A virtual IP address is automatically configured on one of the ingress hosts at a time, known as the primary host. The Ceph orchestrator selects the first network interface based on existing IP addresses that are configured as part of the same subnet. In cases where the virtual IP address does not belong to the same subnet, you can define a list of subnets for the Ceph orchestrator to match with existing IP addresses. If the keepalived daemon and the active haproxy are not responding on the primary host, then the virtual IP address moves to a backup host. This backup host becomes the new primary host.
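
To see where the ingress daemons are placed and which host currently holds the virtual IP address, you can combine the orchestrator listing with an interface check on each ingress host. A minimal sketch, assuming a hypothetical ingress service named ingress.rgw.foo:

    ceph orch ps --service_name ingress.rgw.foo
    ip addr show

The virtual IP address appears in the ip addr show output only on the current primary host.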

Warning

Currently, you cannot configure a virtual IP address on a network interface that does not have a configured IP address.

Important

To use Secure Sockets Layer (SSL), SSL must be terminated by the ingress service and not at the Ceph Object Gateway.

To configure high availability (HA) for the Ceph Object Gateway, you write a YAML configuration file, and the Ceph orchestrator installs, configures, and manages the ingress service. The ingress service uses the haproxy and keepalived daemons to provide high availability for the Ceph Object Gateway.

Starting with the Ceph 8.0 release, you can deploy an ingress service with RGW as the backend by setting the use_tcp_mode_over_rgw option to true in the spec section of the ingress specification.

Prerequisites

  • A minimum of two hosts running Red Hat Enterprise Linux 9 or later on which to install the ingress service.
  • A healthy running Red Hat Ceph Storage cluster.
  • A minimum of two Ceph Object Gateway daemons running on different hosts.
  • Root-level access to the host running the ingress service.
  • If using a firewall, open port 80 for HTTP and port 443 for HTTPS traffic.

Procedure

  1. Create a new ingress.yaml file:

    Example

    [root@host01 ~]# touch ingress.yaml

  2. Open the ingress.yaml file for editing. Add the following options, and add values applicable to the environment:

    Syntax

    service_type: ingress 1
    service_id: SERVICE_ID 2
    placement: 3
      hosts:
        - HOST1
        - HOST2
        - HOST3
    spec:
      backend_service: SERVICE_ID
      virtual_ip: IP_ADDRESS/CIDR 4
      frontend_port: INTEGER 5
      monitor_port: INTEGER 6
      virtual_interface_networks: 7
        - IP_ADDRESS/CIDR
      ssl_cert: | 8

    1 Must be set to ingress.
    2 Must match the existing Ceph Object Gateway service name.
    3 Where to deploy the haproxy and keepalived containers.
    4 The virtual IP address where the ingress service is available.
    5 The port to access the ingress service.
    6 The port to access the haproxy load balancer status.
    7 Optional list of available subnets.
    8 Optional SSL certificate and private key.

    Example of providing an SSL cert

    service_type: ingress
    service_id: rgw.foo
    placement:
      hosts:
        - host01.example.com
        - host02.example.com
        - host03.example.com
    spec:
      backend_service: rgw.foo
      virtual_ip: 192.168.1.2/24
      frontend_port: 8080
      monitor_port: 1967
      virtual_interface_networks:
        - 10.10.0.0/16
      ssl_cert: |
        -----BEGIN CERTIFICATE-----
        MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0
        gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM
        bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/
        JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm
        j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL
        BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM
        MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj
        czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe
        -----END PRIVATE KEY-----

    Example of not providing an SSL cert

    service_type: ingress
    service_id: rgw.ssl    # adjust to match your existing RGW service
    placement:
      hosts:
        - hostname1
        - hostname2
    spec:
      backend_service: rgw.rgw.ssl.ceph13   # adjust to match your existing RGW service
      virtual_ip: IP_ADDRESS/CIDR           # ex: 192.168.20.1/24
      frontend_port: INTEGER                # ex: 443
      monitor_port: INTEGER                 # ex: 1969
      use_tcp_mode_over_rgw: True

  3. Launch the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml

  4. Configure the latest haproxy and keepalived images:

    Syntax

    ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID
    ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID

    Red Hat Enterprise Linux 9

    [ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
    [ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest

  5. Install and configure the new ingress service using the Ceph orchestrator:

    [ceph: root@host01 /]# ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml
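
    Optionally, confirm that the ingress service and its daemons were scheduled. A minimal check using the orchestrator listing commands:

    ceph orch ls --service_type ingress
    ceph orch ps --daemon_type haproxy
    ceph orch ps --daemon_type keepalived
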
  6. After the Ceph orchestrator completes the deployment, verify the HA configuration.

    1. On the host running the ingress service, check that the virtual IP address appears:

      Example

      [root@host01 ~]# ip addr show

    2. Try reaching the Ceph Object Gateway from a Ceph client:

      Syntax

      wget HOST_NAME

      Example

      [root@client ~]# wget host01.example.com

      If this returns an index.html file with content similar to the following example, the HA configuration for the Ceph Object Gateway is working properly.

      Example

      <?xml version="1.0" encoding="UTF-8"?>
      	<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      		<Owner>
      			<ID>anonymous</ID>
      			<DisplayName></DisplayName>
      		</Owner>
      		<Buckets>
      		</Buckets>
      	</ListAllMyBucketsResult>
