
Chapter 9. Bucket notification in Multicloud Object Gateway


Bucket notifications allow you to send a notification to an external server whenever an event occurs in a bucket. Multicloud Object Gateway (MCG) supports several bucket event notification types. You can receive bucket notifications for activities such as object creation, object removal, lifecycle expiration, object restore, and object tagging. MCG supports the following event types, which can be specified in the notification configuration:

  • s3:TestEvent
  • s3:ObjectCreated:*
  • s3:ObjectCreated:Put
  • s3:ObjectCreated:Post
  • s3:ObjectCreated:Copy
  • s3:ObjectCreated:CompleteMultipartUpload
  • s3:ObjectRemoved:*
  • s3:ObjectRemoved:Delete
  • s3:ObjectRemoved:DeleteMarkerCreated
  • s3:LifecycleExpiration:*
  • s3:LifecycleExpiration:Delete
  • s3:LifecycleExpiration:DeleteMarkerCreated
  • s3:ObjectRestore:*
  • s3:ObjectRestore:Post
  • s3:ObjectRestore:Completed
  • s3:ObjectRestore:Delete
  • s3:ObjectTagging:*
  • s3:ObjectTagging:Put
  • s3:ObjectTagging:Delete

9.1. Configuring bucket notification in Multicloud Object Gateway

Prerequisites

  • Ensure that you have one of the following before configuring bucket notifications:

    • A Kafka cluster is deployed.

      For example, you can deploy Kafka using AMQ Streams (Strimzi) by referring to Getting Started with AMQ Streams on OpenShift.

    • An HTTP(S) server is reachable.

      For example, you can set up an HTTP server to log incoming HTTP requests so that you can observe them using the oc logs command as follows:

      $ cat http_logging_server.yaml
      apiVersion: v1
      kind: List
      metadata: {}
      items:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: http-logger
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: http-logger
          template:
            metadata:
              labels:
                app: http-logger
            spec:
              containers:
              - name: http-logger
                image: registry.redhat.io/ubi9/python-39:latest
                command:
                  - /bin/sh
                  - -c
                  - |
                    set -e  # Fail on any error
                    mkdir -p /tmp/app
                    pip install flask
                    cat <<EOF > /tmp/app/server.py
                    from flask import Flask, request
                    app = Flask(__name__)
                    @app.route("/", methods=["POST", "GET"])
                    def log_request():
                        body = request.get_data(as_text=True)
                        print(body)  # Simple one-line logging per request
                        return "", 200
                    if __name__ == "__main__":
                        app.run(host="0.0.0.0", port=8676, debug=True)
                    EOF
                    exec python /tmp/app/server.py
                ports:
                - containerPort: 8676
                  protocol: TCP
                securityContext:
                  runAsNonRoot: true
                  allowPrivilegeEscalation: false
      - apiVersion: v1
        kind: Service
        metadata:
          name: http-logger
          labels:
            app: http-logger
        spec:
          selector:
            app: http-logger
          ports:
            - port: 8676
              targetPort: 8676
              protocol: TCP
              name: http
      
      $ oc create -f http_logging_server.yaml -n <http-server-namespace>
  • Make sure that the server is accessible from the MCG core pod, which sends the notifications.
  • Fetch MCG's credentials and define an S3 alias:

    NOOBAA_ACCESS_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_ACCESS_KEY_ID --to=- 2>/dev/null); \
    NOOBAA_SECRET_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_SECRET_ACCESS_KEY --to=- 2>/dev/null); \
    S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o json | jq -r ".spec.host")
    alias aws_alias='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $S3_ENDPOINT --no-verify-ssl'

Procedure

  1. Describe the connection using a JSON file, for example, connect.json, on your local machine:

    For Kafka:

    {
      "name": "kafka_notif_conn_file", <-- any string
      "notification_protocol": "kafka",
      "metadata.broker.list": "<kafka-service-name>.<project>.svc.cluster.local:<kafka-service-port>",
      "topic": "my-topic" <-- refers to an existing KafkaTopic resource
    }

    The value of metadata.broker.list must follow the structure <service-name>.<namespace>.svc.cluster.local:9092, and topic must refer to the name of an existing KafkaTopic resource in the namespace.

    For HTTP(s):

    {
      "name": "http_notif_connection_config",
      "notification_protocol": "http", <-- or "https"
      "agent_request_object": {
        "host": "<http-service>.<http-server-namespace>.svc.cluster.local",
        "port": <http-server-port>
      }
    }

    Additional options:

    request_options_object
    The value is a JSON object that is passed to the Node.js http(s) request (optional).

    Any field supported by the Node.js http(s) request options can be used, for example:

    'path'
    Used to specify the URL path.
    'auth'
    Used for HTTP basic auth. The syntax for the value of 'auth' is <name>:<password>.
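    For illustration, a hypothetical HTTPS connection file combining these options might look like the following; the host, port, path, and credentials shown here are placeholder values, not defaults:

    ```json
    {
      "name": "https_notif_connection_config",
      "notification_protocol": "https",
      "agent_request_object": {
        "host": "http-logger.my-namespace.svc.cluster.local",
        "port": 8676
      },
      "request_options_object": {
        "path": "/notifications",
        "auth": "user:password"
      }
    }
    ```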
  2. Create a secret from the file in the openshift-storage namespace:

    $ oc create secret generic <connection-secret> --from-file=connect.json -n openshift-storage
  3. Update the NooBaa CR in the openshift-storage namespace with the connection secret and, optionally, a CephFS RWX PVC. If a PVC is not provided, MCG creates one automatically as needed:

    $ oc get pvc bn-pvc -n openshift-storage
    NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                VOLUMEATTRIBUTESCLASS   AGE
    bn-pvc   Bound    pvc-24683f8d-48e4-4c6b-b108-d507ca2f4fd1   25Gi       RWX            ocs-storagecluster-cephfs   <unset>                 25h
    $ oc patch noobaa noobaa --type='merge' -n openshift-storage -p '{
      "spec": {
        "bucketNotifications": {
          "connections": [
            {
          "name": "<connection-secret>",
              "namespace": "openshift-storage"
            }
          ],
          "enabled": true,
          "pvc": "bn-pvc" <-- optional
        }
      }
    }'
    Important

    The names of the connection secrets must be unique within the list, even if their namespaces differ.

    Wait for the noobaa-core and noobaa-endpoint pods to restart before proceeding with the next step.

  4. Use the S3 PutBucketNotification operation on an MCG bucket, using the noobaa-admin credentials and the S3 endpoint through the S3 alias:

    $ aws_alias s3api put-bucket-notification --bucket first.bucket --notification-configuration '{
      "TopicConfiguration": {
        "Id": "<notif_event_kafka>", <-- Unique string
        "Events": ["s3:ObjectCreated:*"], <-- a filter for events
        "Topic": "<connection-secret/connect.json>"
      }
    }'

Verification steps

  1. Verify that the bucket notification configuration is set on the bucket:

    $ aws_alias s3api get-bucket-notification-configuration --bucket first.bucket
    {
        "TopicConfigurations": [
            {
                "Id": "notif_event_kafka",
                "TopicArn": "kafka-connection-secret/connect.json",
                "Events": [
                    "s3:ObjectCreated:*"
                ]
            }
        ]
    }
  2. Add some objects on the bucket:

    $ echo 'a' | aws_alias s3 cp - s3://first.bucket/a

    Wait for a while and query the topic messages to verify the expected notification has been sent and received.

    For example:

    For Kafka

    $ oc -n your-kafka-project rsh my-cluster-kafka-0 bin/kafka-console-consumer.sh \
      --bootstrap-server my-cluster-kafka-bootstrap.myproject.svc.cluster.local:9092 \
      --topic my-topic \
      --from-beginning --timeout-ms 10000 | grep '^{.*}' | jq -c '.' | jq

    For HTTP(s)

    $ oc logs deployment/http-logger -n <http-server-namespace> | grep '^{.*}' | jq

    Output

    {
      "Records": [
        {
          "eventVersion": "2.3",
          "eventSource": "noobaa:s3",
          "eventTime": "2024-11-27T12:44:21.987Z",
          "s3": {
            "s3SchemaVersion": "1.0",
            "object": {
              "sequencer": 10,
              "key": "a",
              "eTag": "60b725f10c9c85c70d97880dfe8191b3"
            },
            "bucket": {
              "name": "first.bucket",
              "ownerIdentity": {
                "principalId": "admin@noobaa.io"
              },
              "arn": "arn:aws:s3:::first.bucket"
            }
          },
          "eventName": "ObjectCreated:Put",
          "userIdentity": {
            "principalId": "noobaa"
          },
          "requestParameters": {
            "sourceIPAddress": "100.64.0.3"
          },
          "responseElements": {
            "x-amz-request-id": "m3zvo0cm-5239xd-bhj",
            "x-amz-id-2": "m3zvo0cm-5239xd-bhj"
          }
        }
      ]
    }
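    The payload above follows the S3 event record format. As a minimal sketch (not part of the product), a receiving service could parse such a payload and summarize each record as shown below; the embedded payload is a trimmed, hypothetical example containing only the fields the sketch uses:

    ```python
    import json

    # Hypothetical sample payload, trimmed to the fields used below; the field
    # names match the record format shown in the verification output above.
    payload = """
    {
      "Records": [
        {
          "eventName": "ObjectCreated:Put",
          "eventSource": "noobaa:s3",
          "s3": {
            "bucket": {"name": "first.bucket"},
            "object": {"key": "a", "eTag": "60b725f10c9c85c70d97880dfe8191b3"}
          }
        }
      ]
    }
    """

    def summarize(records_json):
        """Return one '<eventName> <bucket>/<key>' summary line per record."""
        doc = json.loads(records_json)
        return [
            "{} {}/{}".format(
                r["eventName"],
                r["s3"]["bucket"]["name"],
                r["s3"]["object"]["key"],
            )
            for r in doc.get("Records", [])
        ]

    print(summarize(payload))
    ```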