Chapter 2. Configuring audit logs for Developer Hub on OpenShift Container Platform


Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub:

Logging deployment
Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment.
Logging collector
Configure the spec.collection stanza in the ClusterLogging custom resource (CR) to use a supported modification to the log collector and collect logs from STDOUT. For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector. A minimal sketch of this CR follows this list.
Log forwarding
Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a ClusterLogForwarder CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding.
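
For reference, both the collector resource limits and the spec.collection stanza are set in the ClusterLogging CR. The following is a minimal sketch, assuming the Red Hat OpenShift Logging Operator is installed and the vector collector is in use; the resource values are placeholders to adjust for your environment:

Example ClusterLogging resource YAML file

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    collection:
      type: vector
      resources:
        requests:
          cpu: 100m
          memory: 736Mi
        limits:
          memory: 736Mi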

2.1. Forwarding Red Hat Developer Hub audit logs to Splunk

You can use the Red Hat OpenShift Logging (OpenShift Logging) Operator and a ClusterLogForwarder instance to capture the streamed audit logs from a Developer Hub instance and forward them to the HTTPS endpoint associated with your Splunk instance.

Prerequisites

  • You have a cluster running on a supported OpenShift Container Platform version.
  • You have an account with cluster-admin privileges.
  • You have a Splunk Cloud account or Splunk Enterprise installation.

Procedure

  1. Log in to your OpenShift Container Platform cluster.
  2. Install the OpenShift Logging Operator in the openshift-logging namespace and switch to the namespace:

    Example command to switch to a namespace

    oc project openshift-logging
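
    If you prefer to install the Operator from the command line instead of the OperatorHub UI, you can apply resources like the following sketch; the channel name is an assumption and might differ in your catalog:

    Example Operator installation resources

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-logging
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: cluster-logging
      namespace: openshift-logging
    spec:
      targetNamespaces:
        - openshift-logging
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging
      namespace: openshift-logging
    spec:
      channel: stable
      name: cluster-logging
      source: redhat-operators
      sourceNamespace: openshift-marketplace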

  3. Create a serviceAccount named log-collector and bind the collect-application-logs cluster role to the serviceAccount:

    Example command to create a serviceAccount

    oc create sa log-collector

    Example command to bind a role to a serviceAccount

    oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector

  4. Generate a hecToken in your Splunk instance.
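
    The HTTP Event Collector (HEC) must be enabled in your Splunk instance. You can generate the token in Splunk Web under Settings > Data inputs > HTTP Event Collector, or sketch it through the Splunk management API as follows; the token name rhdh-audit and the admin credentials shown here are placeholders:

    Example command to create an HEC token through the Splunk REST API

    curl -k -u admin:<password> https://<splunk_host>:8089/services/data/inputs/http -d name=rhdh-audit
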
  5. Create a key/value secret in the openshift-logging namespace and verify the secret:

    Example command to create a key/value secret with hecToken

    oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>

    Example command to verify a secret

    oc -n openshift-logging get secret/splunk-secret -o yaml
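
    Optionally, to check the stored token value itself, decode it from the secret:

    Example command to decode the hecToken value

    oc -n openshift-logging get secret/splunk-secret -o jsonpath='{.data.hecToken}' | base64 -d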

  6. Create a basic ClusterLogForwarder resource YAML file as follows:

    Example ClusterLogForwarder resource YAML file

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging

    For more information, see Creating a log forwarder.

  7. Define the following ClusterLogForwarder configuration by using the OpenShift Container Platform web console or the OpenShift CLI (oc); a complete assembled example follows these sub-steps:

    1. Specify log-collector as the serviceAccount in the YAML file:

      Example serviceAccount configuration

      serviceAccount:
        name: log-collector

    2. Configure inputs to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in the specified namespace:

      Example inputs configuration

      inputs:
        - name: my-app-logs-input
          type: application
          application:
            includes:
              - namespace: my-developer-hub-namespace
            containerLimit:
              maxRecordsPerSecond: 100

      For more information, see Forwarding application logs from specific pods.

    3. Configure outputs to specify where the captured logs are sent. In this step, focus on the splunk type. If the Splunk endpoint uses self-signed TLS certificates, you can either set the tls.insecureSkipVerify option (not recommended) or provide the certificate chain by using a Secret, as shown in the sketch that follows the output example.

      Example outputs configuration

      outputs:
        - name: splunk-receiver-application
          type: splunk
          splunk:
            authentication:
              token:
                key: hecToken
                secretName: splunk-secret
            index: main
            url: 'https://my-splunk-instance-url'
            rateLimit:
              maxRecordsPerSecond: 250

      For more information, see Forwarding logs to Splunk in OpenShift Container Platform documentation.
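
      For example, to trust a self-signed certificate chain instead of skipping verification, you can reference a Secret from a tls stanza on the output. This is a sketch only; the splunk-ca secret name and ca-bundle.crt key are illustrative, and the exact tls fields depend on your ClusterLogForwarder API version:

      Example tls configuration for the output

      outputs:
        - name: splunk-receiver-application
          type: splunk
          tls:
            ca:
              key: ca-bundle.crt
              secretName: splunk-ca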

    4. Optional: Filter logs to include only audit logs:

      Example filters configuration

      filters:
        - name: audit-logs-only
          type: drop
          drop:
            - test:
              - field: .message
                notMatches: isAuditLog

      For more information, see Filtering logs by content in OpenShift Container Platform documentation.

    5. Configure pipelines to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple inputRefs and outputRefs in each pipeline:

      Example pipelines configuration

      pipelines:
        - name: my-app-logs-pipeline
          detectMultilineErrors: true
          inputRefs:
            - my-app-logs-input
          outputRefs:
            - splunk-receiver-application
          filterRefs:
            - audit-logs-only
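
    Putting the preceding snippets together, the complete resource looks like the following sketch, assuming the stanzas are nested under spec:

    Example complete ClusterLogForwarder resource YAML file

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: log-collector
      inputs:
        - name: my-app-logs-input
          type: application
          application:
            includes:
              - namespace: my-developer-hub-namespace
            containerLimit:
              maxRecordsPerSecond: 100
      outputs:
        - name: splunk-receiver-application
          type: splunk
          splunk:
            authentication:
              token:
                key: hecToken
                secretName: splunk-secret
            index: main
            url: 'https://my-splunk-instance-url'
            rateLimit:
              maxRecordsPerSecond: 250
      filters:
        - name: audit-logs-only
          type: drop
          drop:
            - test:
                - field: .message
                  notMatches: isAuditLog
      pipelines:
        - name: my-app-logs-pipeline
          detectMultilineErrors: true
          inputRefs:
            - my-app-logs-input
          outputRefs:
            - splunk-receiver-application
          filterRefs:
            - audit-logs-only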

  8. Run the following command to apply the ClusterLogForwarder configuration:

    Example command to apply ClusterLogForwarder configuration

    oc apply -f <ClusterLogForwarder-configuration.yaml>
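
    To confirm that the configuration was accepted, you can inspect the resource status and the collector pods; a quick sketch:

    Example commands to check the forwarder status

    oc -n openshift-logging get clusterlogforwarder instance -o yaml
    oc -n openshift-logging get pods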

  9. Optional: To reduce the risk of log loss, configure your ClusterLogForwarder pods using the following options:

    1. Define the resource requests and limits for the log collector as follows:

      Example collector configuration

      collector:
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
            ephemeral-storage: 250Mi
          limits:
            cpu: 500m
            memory: 128Mi
            ephemeral-storage: 500Mi

    2. Define tuning options for log delivery, including the delivery mode, compression, and the minimum and maximum retry durations. Tuning can be applied per output as needed.

      Example tuning configuration

      tuning:
        delivery: AtLeastOnce
        compression: none
        minRetryDuration: 1s
        maxRetryDuration: 10s

      The AtLeastOnce delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.

Verification

  1. Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard, for example by running a search like the one shown after this list.
  2. Troubleshoot any issues using OpenShift Container Platform and Splunk logs as needed.
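
For example, a search like the following in the Splunk UI surfaces the forwarded audit events, assuming the main index configured above:

Example Splunk search

  index="main" "isAuditLog"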