Chapter 9. Bucket notification in Multicloud Object Gateway
Bucket notifications allow you to send a notification to an external server whenever an event occurs in the bucket. Multicloud Object Gateway (MCG) supports several bucket event notification types. You can receive bucket notifications for activities such as flow creation, triggering data digestion, and so on. MCG supports the following event types, which can be specified in the notification configuration:
- s3:TestEvent
- s3:ObjectCreated:*
- s3:ObjectCreated:Put
- s3:ObjectCreated:Post
- s3:ObjectCreated:Copy
- s3:ObjectCreated:CompleteMultipartUpload
- s3:ObjectRemoved:*
- s3:ObjectRemoved:Delete
- s3:ObjectRemoved:DeleteMarkerCreated
- s3:LifecycleExpiration:*
- s3:LifecycleExpiration:Delete
- s3:LifecycleExpiration:DeleteMarkerCreated
- s3:ObjectRestore:*
- s3:ObjectRestore:Post
- s3:ObjectRestore:Completed
- s3:ObjectRestore:Delete
- s3:ObjectTagging:*
- s3:ObjectTagging:Put
- s3:ObjectTagging:Delete
9.1. Configuring bucket notification in Multicloud Object Gateway
Prerequisites
Ensure that you have one of the following before configuring bucket notifications:

- A Kafka cluster is deployed. For example, you can deploy Kafka using AMQ Streams (Strimzi) by referring to Getting Started with AMQ Streams on OpenShift.
- An HTTP(S) server is connected. For example, you can set up an HTTP server that logs incoming HTTP requests so that you can observe them using the oc logs command.

Make sure that the server is accessible from the MCG core pod, which sends the notifications.
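For illustration, such a request-logging server can be sketched in Python. This is only an example under the assumption that any server that prints request bodies to its log is sufficient; the handler name and port are arbitrary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    """Prints the body of every POST request so that it appears in `oc logs`."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        print(body, flush=True)  # notification payload goes to the pod log (stdout)
        self.send_response(200)
        self.end_headers()

def serve(port=8080):
    # Bind on all interfaces so that the MCG core pod can reach the server.
    HTTPServer(("", port), LoggingHandler).serve_forever()
```

Running `serve()` as the container entrypoint of a deployment lets you inspect received notifications with the oc logs command.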
Fetch the MCG credentials:

```shell
NOOBAA_ACCESS_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_ACCESS_KEY_ID --to=- 2>/dev/null)
NOOBAA_SECRET_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_SECRET_ACCESS_KEY --to=- 2>/dev/null)
S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o json | jq -r ".spec.host")
alias aws_alias='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $S3_ENDPOINT --no-verify-ssl'
```
Procedure
Describe the connection using a JSON file, for example, connection.json, on your local machine.

For Kafka:
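A Kafka connection file might look like the following sketch. Only the metadata.broker.list and topic fields are described in this section; the name and notification_protocol fields are assumptions based on typical MCG connection files, so verify them against your MCG version:

```json
{
  "name": "kafka-connection",
  "notification_protocol": "kafka",
  "metadata.broker.list": "my-cluster-kafka-bootstrap.myproject.svc.cluster.local:9092",
  "topic": "my-topic"
}
```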
The value of metadata.broker.list must be <service-name>.<namespace>.svc.cluster.local:9092, and topic must refer to the name of an existing kafkatopic resource in the namespace.

For HTTP(S):
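An HTTP(S) connection file might look like the following sketch. Apart from request_options_object, the field names shown here (name, notification_protocol, agent_request_object) are assumptions, so verify them against your MCG version:

```json
{
  "name": "http-connection",
  "notification_protocol": "http",
  "agent_request_object": {
    "host": "http-logger.http-server-namespace.svc.cluster.local",
    "port": 8080
  },
  "request_options_object": {
    "path": "/",
    "auth": "user:passw0rd"
  }
}
```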
Additional options:

- request_options_object: a JSON object that is passed to the Node.js http(s) request (optional). Any field supported by the Node.js http(s) request options can be used, for example:
  - path: used to specify the URL path.
  - auth: used for HTTP basic authentication. The syntax for the value of auth is <name>:<password>.
Create a secret from the file in the openshift-storage namespace:

```shell
$ oc create secret generic <connection-secret> --from-file=connection.json -n openshift-storage
```
Update the NooBaa CR in the openshift-storage namespace with the connection secret and an optional CephFS RWX PVC. If a PVC is not provided, MCG creates one automatically as needed:

```shell
$ oc get pvc bn-pvc -n openshift-storage
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                VOLUMEATTRIBUTESCLASS   AGE
bn-pvc   Bound    pvc-24683f8d-48e4-4c6b-b108-d507ca2f4fd1   25Gi       RWX            ocs-storagecluster-cephfs   <unset>                 25h
```
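The NooBaa CR update can be sketched as follows. The layout of the bucketNotifications field shown here is an assumption based on the noobaa-operator CRD; check it against your installed CRD, for example with oc explain noobaa.spec:

```yaml
# Sketch only: verify field names against your installed NooBaa CRD.
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  bucketNotifications:
    enabled: true
    pvc: bn-pvc                    # optional; MCG creates one if omitted
    connections:
      - name: <connection-secret>  # secret created in the previous step
        namespace: openshift-storage
```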
Important: Names of the connection secrets must be unique in the list even if their namespaces are different.
Wait for the noobaa-core and noobaa-endpoint pods to restart before proceeding with the next step.

Use S3:PutBucketNotification on an MCG bucket using the noobaa-admin credentials and the S3 endpoint under an S3 alias.
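For example, using the aws_alias defined in the prerequisites, the configuration can be applied with the standard s3api subcommand. This is a sketch: the Id value is arbitrary, and using the connection name as the TopicArn is an assumption to verify against your MCG version:

```shell
$ aws_alias s3api put-bucket-notification-configuration --bucket first.bucket \
    --notification-configuration '{
        "TopicConfigurations": [
            {
                "Id": "notif-1",
                "TopicArn": "kafka-connection",
                "Events": ["s3:ObjectCreated:*"]
            }
        ]
    }'
```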
Verification steps
Verify that the bucket notification configuration is set on the bucket:
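For example, using the aws_alias defined in the prerequisites, the configuration can be read back with the standard s3api subcommand:

```shell
$ aws_alias s3api get-bucket-notification-configuration --bucket first.bucket
```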
Add some objects to the bucket:
```shell
$ echo 'a' | aws_alias s3 cp - s3://first.bucket/a
```

Wait for a while, and then query the topic messages to verify that the expected notification has been sent and received.
For example:
For Kafka:

```shell
$ oc -n your-kafka-project rsh my-cluster-kafka-0 bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap.myproject.svc.cluster.local:9092 \
    --topic my-topic \
    --from-beginning --timeout-ms 10000 | grep '^{.*}' | jq -c '.' | jq
```
For HTTP(S):

```shell
$ oc logs deployment/http-logger -n <http-server-namespace> | grep '^{.*}' | jq
```
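The received messages are S3-style event records. The following is an illustrative sketch, not output captured from a live system; field values will differ in your environment:

```json
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "eventTime": "2024-01-01T12:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "first.bucket" },
        "object": { "key": "a", "size": 2 }
      }
    }
  ]
}
```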