
Chapter 2. Integrating filtered Google Cloud data into hybrid committed spend


You can configure a function script in Google Cloud that copies your cost exports to an object storage bucket that hybrid committed spend can access, filtering the data so that only a subset of your billing data is shared with Red Hat.

Note

You must have a Red Hat account user with Cloud Administrator permissions before you can add integrations to hybrid committed spend.

To configure your Google Cloud account to be a hybrid committed spend integration, you must complete the following tasks:

  • Create a Google Cloud project for your hybrid committed spend data.
  • Create a bucket for filtered reports.
  • Have a billing service account member with the correct role to export your data to hybrid committed spend.
  • Create a BigQuery dataset to contain the cost data.
  • Create a billing export that sends the hybrid committed spend data to your BigQuery dataset.

Because you will complete some of the following steps in the Google Cloud Console, and some steps in the hybrid committed spend user interface, keep both applications open in a web browser.

Note

Because third-party products and documentation can change, instructions for configuring the third-party integrations provided are general and correct at the time of publishing. For the most up-to-date information, see the Google Cloud Platform documentation.

Add your Google Cloud integration to hybrid committed spend from the Integrations page.

2.1. Adding your Google Cloud account as an integration

You can add your Google Cloud account as an integration. After adding a Google Cloud integration, the hybrid committed spend application processes the cost and usage data from your Google Cloud account and makes it viewable.

Prerequisites

  • To add data integrations to cost management, you must have a Red Hat account with Cloud Administrator permissions.

Procedure

  1. From Red Hat Hybrid Cloud Console, click the Settings icon > Integrations.
  2. On the Settings page, in the Cloud tab, click Add integration.
  3. In the Add a cloud integration wizard, select Google Cloud as the cloud provider type and click Next.
  4. Enter a name for your integration. Click Next.
  5. In the Select application step, select Hybrid committed spend and click Next.

2.2. Creating a Google Cloud project

Create a Google Cloud project to gather and send your cost reports to hybrid committed spend.

Prerequisites

  • Access to Google Cloud Console with resourcemanager.projects.create permission

Procedure

  1. In the Google Cloud Console, click IAM & Admin > Create a Project.
  2. On the new page that appears, enter a Project name and select your billing account.
  3. Select the Organization.
  4. Enter the parent organization in the Location box.
  5. Click Create.
  6. In the hybrid committed spend Add a cloud integration wizard, on the Project page, enter your Project ID.
  7. To configure Google Cloud to filter your data before it sends the data to Red Hat, select I wish to manually customize the data set sent to hybrid committed spend, and then click Next.
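
If you prefer to script project creation, the following is a minimal sketch that uses the google-cloud-resource-manager client library. The project ID and organization number are placeholders for your own values, and linking the project to your billing account is a separate step that this sketch does not perform.

    from google.cloud import resourcemanager_v3

    # Placeholders: replace with your own project ID and organization number.
    PROJECT_ID = "customer-data-project"
    ORGANIZATION = "organizations/000000000000"

    client = resourcemanager_v3.ProjectsClient()
    project = resourcemanager_v3.Project(
        project_id=PROJECT_ID,
        display_name=PROJECT_ID,
        parent=ORGANIZATION,
    )

    # create_project returns a long-running operation; result() waits for completion.
    operation = client.create_project(project=project)
    created = operation.result()
    print(f"Created {created.name}")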

Verification steps

  1. Navigate to the Google Cloud Console Dashboard.
  2. Verify the project is in the menu bar.

2.3. Creating a Google Cloud bucket

Create a bucket for filtered reports that you will create later. Buckets are containers that store data.

Procedure

  1. In the Google Cloud Console, click Buckets.
  2. Click Create bucket.
  3. Enter your bucket information. Name your bucket. In this example, use customer-data.
  4. Click Create, then click Confirm in the confirmation dialog.
  5. In the hybrid committed spend Add a cloud integration wizard, on the Create cloud storage bucket page, enter your Cloud storage bucket name.
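
As an alternative to the console steps, you can create the bucket with the google-cloud-storage client library. This is a minimal sketch; the project ID and location are assumptions to replace with your own values, and the bucket name mirrors the customer-data example above.

    from google.cloud import storage

    # Placeholders: replace with your project ID and preferred bucket location.
    PROJECT_ID = "<project_id>"
    BUCKET_NAME = "customer-data"

    client = storage.Client(project=PROJECT_ID)

    # Bucket names are globally unique; creation fails if the name is already taken.
    bucket = client.create_bucket(BUCKET_NAME, location="US")
    print(f"Created bucket {bucket.name}")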

Additional resources

  • For additional information about creating buckets, see the Google Cloud documentation on Creating buckets.

2.4. Creating a Google Cloud Identity and Access Management role

A custom Identity and Access Management (IAM) role for hybrid committed spend gives access to the specific cost-related resources that are required to enable a Google Cloud Platform integration, and prohibits access to other resources.

Prerequisites

  • Access to Google Cloud Console with these permissions:

    • resourcemanager.projects.get
    • resourcemanager.projects.getIamPolicy
    • resourcemanager.projects.setIamPolicy
  • Google Cloud project

Procedure

  1. In the Google Cloud Console, click IAM & Admin > Roles.
  2. Select the hybrid committed spend project from the dropdown in the menu bar.
  3. Click + Create role.
  4. Enter a Title, Description and ID for the role. In this example, use customer-data-role.
  5. Click + ADD PERMISSIONS.
  6. Use the Enter property name or value field to search for and select the following permissions for your custom role:

    • storage.objects.get
    • storage.objects.list
    • storage.buckets.get
  7. Click ADD.
  8. Click CREATE.
  9. In the hybrid committed spend Add a cloud integration wizard, on the Create IAM role page, click Next.
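
If you want to create the custom role from code instead of the console, the following sketch assumes the google-cloud-iam client library and its iam_admin_v1 module; the project ID is a placeholder, and the role ID is a hypothetical underscore variant of the example name because custom role IDs allow only letters, digits, underscores, and periods.

    from google.cloud import iam_admin_v1
    from google.cloud.iam_admin_v1 import types

    # Placeholders: replace with your project ID; the role ID mirrors the example above.
    PROJECT_ID = "<project_id>"
    ROLE_ID = "customer_data_role"

    client = iam_admin_v1.IAMClient()
    request = types.CreateRoleRequest(
        parent=f"projects/{PROJECT_ID}",
        role_id=ROLE_ID,
        role=types.Role(
            title="customer-data-role",
            description="Read access to the filtered cost data bucket for hybrid committed spend",
            included_permissions=[
                "storage.objects.get",
                "storage.objects.list",
                "storage.buckets.get",
            ],
        ),
    )
    role = client.create_role(request=request)
    print(f"Created role {role.name}")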

2.5. Adding a billing service account member to your Google Cloud project

You must add a billing service account member to your project so that it can export cost reports to Red Hat Hybrid Cloud Console.

Prerequisites

  • Access to Google Cloud Console with these permissions:

    • resourcemanager.projects.get
    • resourcemanager.projects.getIamPolicy
    • resourcemanager.projects.setIamPolicy
  • Google Cloud project
  • A hybrid committed spend Identity and Access Management (IAM) role

Procedure

  1. In the Google Cloud Console, click IAM & Admin > IAM.
  2. Select the hybrid committed spend project from the dropdown in the menu bar.
  3. Click ADD.
  4. Paste the following Red Hat billing service account into the New principals field:

    billing-export@red-hat-cost-management.iam.gserviceaccount.com
  5. In the Assign roles section, assign the IAM role you created. In this example, use customer-data-role.
  6. Click SAVE.
  7. In the hybrid committed spend Add a cloud integration wizard, on the Assign access page, click Next.

Verification steps

  1. Navigate to IAM & Admin > IAM.
  2. Verify the new member is present with the correct role.

2.6. Creating a Google Cloud BigQuery dataset

Create a BigQuery dataset to collect and store the billing data for hybrid committed spend.

Prerequisites

  • Access to Google Cloud Console with bigquery.datasets.create permission
  • Google Cloud project

Procedure

  1. In the Google Cloud Console, click Big Data > BigQuery.
  2. Select the hybrid committed spend project in the Explorer panel.
  3. Click CREATE DATASET.
  4. Enter a name for your dataset in the Dataset ID field. In this example, use CustomerFilteredData.
  5. Click CREATE DATASET.
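
The same dataset can also be created with the google-cloud-bigquery client library. This is a minimal sketch; the project ID is a placeholder, the dataset ID mirrors the CustomerFilteredData example, and the US location is an assumption you should match to your billing export.

    from google.cloud import bigquery

    # Placeholders: replace with your project ID; the dataset ID mirrors the example above.
    PROJECT_ID = "<project_id>"
    DATASET_ID = f"{PROJECT_ID}.CustomerFilteredData"

    client = bigquery.Client(project=PROJECT_ID)

    dataset = bigquery.Dataset(DATASET_ID)
    dataset.location = "US"  # assumption: choose the location that matches your billing export

    # exists_ok=True makes the call idempotent if the dataset already exists.
    dataset = client.create_dataset(dataset, exists_ok=True)
    print(f"Created dataset {dataset.full_dataset_id}")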

2.7. Exporting Google Cloud billing data to BigQuery

Enabling a billing export to BigQuery sends your Google Cloud billing data (such as usage, cost estimates, and pricing data) automatically to the hybrid committed spend BigQuery dataset.

Prerequisites

  • Google Cloud project
  • A BigQuery dataset

Procedure

  1. In the Google Cloud Console, click Billing > Billing export.
  2. Click the Billing export tab.
  3. Click EDIT SETTINGS in the Detailed usage cost section.
  4. Select the hybrid committed spend Project and Billing export dataset you created in the dropdown menus.
  5. Click SAVE.
  6. In the hybrid committed spend Add a cloud integration wizard, on the Billing export page, click Next.
  7. In the hybrid committed spend Add a cloud integration wizard, on the Review details page, click Add.

Verification steps

  1. Verify that the Detailed usage cost section shows Enabled with a checkmark and displays the correct Project name and Dataset name.
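
Beyond the console checkmark, you can confirm that export data is landing in the dataset with a small query. This is a sketch; the project ID and table ID are placeholders, and the first rows can take several hours to appear after you enable the export.

    from google.cloud import bigquery

    # Placeholders: replace with your project ID, dataset name, and billing export table ID.
    PROJECT_ID = "<project_id>"
    DATASET = "CustomerFilteredData"
    TABLE_ID = "<table_id>"  # the table created by the detailed usage cost export

    client = bigquery.Client(project=PROJECT_ID)
    query = f"""
        SELECT DATE(_PARTITIONTIME) AS partition_date, COUNT(*) AS row_count
        FROM `{PROJECT_ID}.{DATASET}.{TABLE_ID}`
        GROUP BY partition_date
        ORDER BY partition_date DESC
        LIMIT 7
    """

    # Recent partitions with non-zero row counts confirm the export is populating the dataset.
    for row in client.query(query).result():
        print(row.partition_date, row.row_count)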

2.8. Creating a function to post filtered data to your storage bucket

Create a function that filters your data and adds it to the storage bucket that you created to share with Red Hat. You can use the example Python script to gather the cost data from your cost exports related to your Red Hat expenses and add it to the bucket. This script filters the cost data you created with BigQuery, removes non-Red Hat information, creates .csv files, stores them in the bucket you created, and sends the data to Red Hat.

Procedure

  1. In the Google Cloud Console, search for secret and select the Secret manager result to set up a secret to authenticate your function with Red Hat without storing your credentials in your function.

    1. On the Secret Manager page, click Create Secret.
    2. Name your secret, add your Red Hat username, and click Create Secret.
    3. Repeat this process to save a secret for your Red Hat password.
  2. In the Google Cloud Console search bar, search for functions and select the Cloud Functions result.
  3. On the Cloud Functions page, click Create function.
  4. Name the function. In this example, use customer-data-function.
  5. In the Trigger section, click Save to accept the HTTP Trigger type.
  6. In the Runtime, build, connections and security settings section, click the Security and image repository tab, reference the secrets you created, click Done, and then click Next.
  7. On the Cloud Functions Code page, set the runtime to Python 3.9.
  8. Open the requirements.txt file. Paste the following lines at the end of the file.

    requests
    google-cloud-bigquery
    google-cloud-storage
  9. Open the main.py file.

    1. Set the Entry Point to get_filtered_data.
    2. Paste the following Python script. Change the values in the section marked # Required vars to update to the values for your environment.

      import csv
      import datetime
      import uuid
      import os
      import requests
      from google.cloud import bigquery
      from google.cloud import storage
      from itertools import islice
      from dateutil.relativedelta import relativedelta
      
      query_range = 5
      now = datetime.datetime.now()
      delta = now - relativedelta(days=query_range)
      year = now.strftime("%Y")
      month = now.strftime("%m")
      day = now.strftime("%d")
      report_prefix=f"{year}/{month}/{day}/{uuid.uuid4()}"
      
      # Required vars to update
      USER = os.getenv('username')         # Cost management username
      PASS = os.getenv('password')         # Cost management password
      INTEGRATION_ID = "<integration_id>"  # Cost management integration_id
      BUCKET = "<bucket>"                  # Filtered data GCP Bucket
      PROJECT_ID = "<project_id>"          # Your project ID
      DATASET = "<dataset>"                # Your dataset name
      TABLE_ID = "<table_id>"              # Your table ID
      
      gcp_big_query_columns = [
          "billing_account_id",
          "service.id",
          "service.description",
          "sku.id",
          "sku.description",
          "usage_start_time",
          "usage_end_time",
          "project.id",
          "project.name",
          "project.labels",
          "project.ancestry_numbers",
          "labels",
          "system_labels",
          "location.location",
          "location.country",
          "location.region",
          "location.zone",
          "export_time",
          "cost",
          "currency",
          "currency_conversion_rate",
          "usage.amount",
          "usage.unit",
          "usage.amount_in_pricing_units",
          "usage.pricing_unit",
          "credits",
          "invoice.month",
          "cost_type",
          "resource.name",
          "resource.global_name",
      ]
      table_name = ".".join([PROJECT_ID, DATASET, TABLE_ID])
      
      BATCH_SIZE = 200000
      
      def batch(iterable, n):
          """Yields successive n-sized chunks from iterable"""
          it = iter(iterable)
          while chunk := tuple(islice(it, n)):
              yield chunk
      
      def build_query_select_statement():
          """Helper to build query select statement."""
          columns_list = gcp_big_query_columns.copy()
          columns_list = [
              f"TO_JSON_STRING({col})" if col in ("labels", "system_labels", "project.labels", "credits") else col
              for col in columns_list
          ]
          columns_list.append("DATE(_PARTITIONTIME) as partition_date")
          return ",".join(columns_list)
      
      def create_reports(query_date):
          query = f"SELECT {build_query_select_statement()} FROM {table_name} WHERE DATE(_PARTITIONTIME) = {query_date} AND sku.description LIKE '%RedHat%' OR sku.description LIKE '%Red Hat%' OR  service.description LIKE '%Red Hat%' ORDER BY usage_start_time"
          client = bigquery.Client()
          query_job = client.query(query).result()
          column_list = gcp_big_query_columns.copy()
          column_list.append("partition_date")
          daily_files = []
          storage_client = storage.Client()
          bucket = storage_client.bucket(BUCKET)
          for i, rows in enumerate(batch(query_job, BATCH_SIZE)):
              csv_file = f"{report_prefix}/{query_date}_part_{str(i)}.csv"
              daily_files.append(csv_file)
              blob = bucket.blob(csv_file)
              with blob.open(mode='w') as f:
                  writer = csv.writer(f)
                  writer.writerow(column_list)
                  writer.writerows(rows)
          return daily_files
      
      def post_data(files_list):
          # Post CSV's to console.redhat.com API
          url = "https://console.redhat.com/api/cost-management/v1/ingress/reports/"
          json_data = {"source": INTEGRATION_ID, "reports_list": files_list, "bill_year": year, "bill_month": month}
          resp = requests.post(url, json=json_data, auth=(USER, PASS))
          return resp
      
      def get_filtered_data(request):
          files_list = []
          query_dates = [delta + datetime.timedelta(days=x) for x in range(query_range)]
          for query_date in query_dates:
              files_list += create_reports(query_date.date())
          resp = post_data(files_list)
          return f'Files posted! {resp}'
  10. Click Deploy.
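
Before wiring up the scheduler, you can trigger the deployed function manually. This sketch assumes the function requires authentication, that Application Default Credentials backed by a service account are available locally, and that the trigger URL placeholder is replaced with the URL from the function's Trigger tab.

    import requests
    import google.auth.transport.requests
    import google.oauth2.id_token

    # Placeholder: replace with the Trigger URL shown on the function's Trigger tab.
    FUNCTION_URL = "https://<region>-<project_id>.cloudfunctions.net/customer-data-function"

    # Mint an OIDC identity token for the function URL using Application Default Credentials.
    auth_request = google.auth.transport.requests.Request()
    id_token = google.oauth2.id_token.fetch_id_token(auth_request, FUNCTION_URL)

    response = requests.post(
        FUNCTION_URL,
        json={"name": "Manual test"},
        headers={"Authorization": f"Bearer {id_token}"},
    )
    print(response.status_code, response.text)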

2.9. Trigger your function to post filtered data to your storage bucket

Create a scheduler job to run the function you created to send filtered data to Red Hat on a schedule.

Procedure

  1. Copy the Trigger URL for the function you created to post the cost reports. You will need to add it to the Google Cloud Scheduler.

    1. In the Google Cloud Console, search for functions and select the Cloud Functions result.
    2. On the Cloud Functions page, select your function, and click the Trigger tab.
    3. In the HTTP section, click Copy to clipboard.
  2. Create the scheduler job. In the Google Cloud Console, search for cloud scheduler and select the Cloud Scheduler result.
  3. Click Create job.

    1. Name your scheduler job. In this example, use CustomerFilteredDataSchedule.
    2. In the Frequency field, set the cron expression for when you want the function to run. In this example, use 0 9 * * * to run the function daily at 9 AM.
    3. Set the timezone and click Continue.
  4. Configure the execution on the next page.

    1. In the Target type field, select HTTP.
    2. In the URL field, paste the Trigger URL you copied.
    3. In the body field, paste the following code that passes into the function to trigger it.

      {"name": "Scheduler"}
    4. In the Auth header field, select Add OIDC token.
    5. Click the Service account field and click Create to create a service account and role for the scheduler job.
  5. In the Service account details step, name your service account. In this example, use scheduler-service-account. Accept the default Service account ID and click Create and Continue.

    1. In the Grant this service account access to project section, select two roles for your account.
    2. Search for and select Cloud Scheduler Job Runner, click ADD ANOTHER ROLE, and then search for and select Cloud Functions Invoker.
    3. Click Continue.
    4. Click Done to finish creating the service account.
  6. On the Service accounts for your project page, select the service account that you created. In this example, the name is scheduler-service-account.
  7. On the Configure the execution page, click the Service account field and select the scheduler-service-account that you created.
  8. Click Continue and then click Create.
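
If you prefer to create the scheduler job from code rather than the console, the following is a minimal sketch with the google-cloud-scheduler client library. The project ID, region, trigger URL, and service account email are placeholders for your own values; the schedule and request body mirror the example above.

    from google.cloud import scheduler_v1

    # Placeholders: replace with your project ID, region, function trigger URL, and service account.
    PROJECT_ID = "<project_id>"
    LOCATION = "us-central1"
    FUNCTION_URL = "https://<region>-<project_id>.cloudfunctions.net/customer-data-function"
    SERVICE_ACCOUNT = f"scheduler-service-account@{PROJECT_ID}.iam.gserviceaccount.com"

    client = scheduler_v1.CloudSchedulerClient()
    parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"

    job = scheduler_v1.Job(
        name=f"{parent}/jobs/CustomerFilteredDataSchedule",
        schedule="0 9 * * *",  # daily at 9 AM
        time_zone="UTC",
        http_target=scheduler_v1.HttpTarget(
            uri=FUNCTION_URL,
            http_method=scheduler_v1.HttpMethod.POST,
            body=b'{"name": "Scheduler"}',
            headers={"Content-Type": "application/json"},
            # The OIDC token lets the scheduler invoke the authenticated function.
            oidc_token=scheduler_v1.OidcToken(service_account_email=SERVICE_ACCOUNT),
        ),
    )

    created = client.create_job(parent=parent, job=job)
    print(f"Created job {created.name}")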