4.3. Configuring persistent storage
Metering requires persistent storage to retain the data collected by the Metering Operator and to store the results of reports. Several storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
4.3.1. Storing data in Amazon S3
Metering can use an existing Amazon S3 bucket or create a bucket for storage.
Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
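Because metering never deletes bucket contents, cleanup is a manual step. One way to do this is with the AWS CLI; the sketch below assumes the bucket is named operator-metering-data, so substitute the bucket name used by your installation:

```shell
# Empty the bucket, then delete it.
# "operator-metering-data" is an example name; replace it with the
# bucket configured for your metering installation.
aws s3 rm s3://operator-metering-data --recursive
aws s3 rb s3://operator-metering-data
```

The `rb` command fails if the bucket is not empty, which is why the recursive `rm` runs first.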
Procedure
Edit the spec.storage section in the s3-storage.yaml file:

Example s3-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" 1
        region: "us-west-1" 2
        secretName: "my-aws-secret" 3
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true 4

1 Specify the name of the bucket where you want to store your data. Optional: Specify the path within the bucket.
2 Specify the region of your bucket.
3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
4 Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.

Use the following Secret object as a template:

Example AWS Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="

Note: The values of aws-access-key-id and aws-secret-access-key must be base64 encoded.

Create the secret:

$ oc create secret -n openshift-metering generic my-aws-secret \
  --from-literal=aws-access-key-id=my-access-key \
  --from-literal=aws-secret-access-key=my-secret-key

Note: This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
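If you prefer to populate the data fields of the Secret yourself rather than using oc create secret, you can base64 encode the values manually. This is a minimal sketch using placeholder credentials; substitute your own values:

```shell
# Base64 encode placeholder credential values for the Secret's data fields.
# "my-access-key" and "my-secret-key" are example values, not real credentials.
# printf avoids the trailing newline that echo would add to the encoded value.
encoded_id=$(printf '%s' 'my-access-key' | base64)
encoded_key=$(printf '%s' 'my-secret-key' | base64)
echo "$encoded_id"
echo "$encoded_key"
```

Decoding the values with base64 -d should round-trip to the original strings, which is a quick way to verify a hand-built Secret.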
The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. The following aws/read-write.json file shows an IAM policy that grants the required permissions:
Example aws/read-write.json file
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::operator-metering-data/*",
"arn:aws:s3:::operator-metering-data"
]
}
]
}
If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file that contains permissions for creating and deleting buckets:
Example aws/read-write-create.json file
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::operator-metering-data/*",
"arn:aws:s3:::operator-metering-data"
]
}
]
}
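The policy JSON files above must be attached to the IAM user that owns the credentials in your secret. A minimal sketch using the AWS CLI, assuming a hypothetical IAM user named metering-user and the aws/read-write-create.json file shown above:

```shell
# Attach the bucket policy as an inline policy on the IAM user.
# "metering-user" and "metering-s3-policy" are example names;
# use aws/read-write.json instead if createBucket is set to false.
aws iam put-user-policy \
  --user-name metering-user \
  --policy-name metering-s3-policy \
  --policy-document file://aws/read-write-create.json
```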
4.3.2. Storing data in S3-compatible storage
You can use S3-compatible storage such as Noobaa.
Procedure
Edit the spec.storage section in the s3-compatible-storage.yaml file:

Example s3-compatible-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" 1
        endpoint: "http://example:port-number" 2
        secretName: "my-aws-secret" 3

1 Specify the name of your S3-compatible bucket.
2 Specify the endpoint for your storage.
3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.

Use the following Secret object as a template:

Example S3-compatible Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
4.3.3. Storing data in Microsoft Azure
To store data in Azure blob storage, you must use an existing container.
Procedure
Edit the spec.storage section in the azure-blob-storage.yaml file:

Example azure-blob-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" 1
        secretName: "my-azure-secret" 2
        rootDirectory: "/testDir" 3

1 Specify the container name.
2 Specify a secret in the metering namespace. See the example Secret object below for more details.
3 Optional: Specify the directory where you would like to store your data.

Use the following Secret object as a template:

Example Azure Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="

Create the secret:

$ oc create secret -n openshift-metering generic my-azure-secret \
  --from-literal=azure-storage-account-name=my-storage-account-name \
  --from-literal=azure-secret-access-key=my-secret-key
4.3.4. Storing data in Google Cloud Storage
To store your data in Google Cloud Storage, you must use an existing bucket.
Procedure
Edit the spec.storage section in the gcs-storage.yaml file:

Example gcs-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" 1
        secretName: "my-gcs-secret" 2

1 Specify the name of the bucket. Optional: Specify the path within the bucket.
2 Specify a secret in the metering namespace. See the example Secret object below for more details.

Use the following Secret object as a template:

Example Google Cloud Storage Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="

Create the secret:

$ oc create secret -n openshift-metering generic my-gcs-secret \
  --from-file gcs-service-account.json=/path/to/my/service-account-key.json