This documentation is for a release that is no longer maintained. See documentation for the latest supported version.
Chapter 2. Configuring audit logs for Developer Hub on OpenShift Container Platform
Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub:
- Logging deployment
- Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment.
- Logging collector
- Configure the `spec.collection` stanza in the `ClusterLogging` custom resource (CR) to use a supported modification to the log collector and collect logs from `STDOUT`. For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector.
- Log forwarding
- Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a `ClusterLogForwarder` CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding.
2.1. Forwarding Red Hat Developer Hub audit logs to Splunk
You can use the Red Hat OpenShift Logging (OpenShift Logging) Operator and a ClusterLogForwarder instance to capture the streamed audit logs from a Developer Hub instance and forward them to the HTTPS endpoint associated with your Splunk instance.
Prerequisites
- You have a cluster running on a supported OpenShift Container Platform version.
- You have an account with `cluster-admin` privileges.
- You have a Splunk Cloud account or Splunk Enterprise installation.
Procedure
- Log in to your OpenShift Container Platform cluster.
- Install the OpenShift Logging Operator in the `openshift-logging` namespace and switch to the namespace:

  Example command to switch to a namespace:

  ```
  oc project openshift-logging
  ```
- Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` role to the `serviceAccount`:

  Example command to create a `serviceAccount`:

  ```
  oc create sa log-collector
  ```

  Example command to bind a role to a `serviceAccount`:

  ```
  oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
  ```
- Generate a `hecToken` in your Splunk instance.
- Create a key/value secret in the `openshift-logging` namespace and verify the secret:

  Example command to create a key/value secret with `hecToken`:

  ```
  oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
  ```

  Example command to verify a secret:

  ```
  oc -n openshift-logging get secret/splunk-secret -o yaml
  ```
- Create a basic `ClusterLogForwarder` resource YAML file as follows:
  Example `ClusterLogForwarder` resource YAML file:

  ```yaml
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  ```

  For more information, see Creating a log forwarder.
- Define the following `ClusterLogForwarder` configuration by using the OpenShift web console or OpenShift CLI:
  - Specify `log-collector` as the `serviceAccount` in the YAML file:

    Example `serviceAccount` configuration:

    ```yaml
    serviceAccount:
      name: log-collector
    ```
  - Configure `inputs` to specify the type and source of logs to forward, for example, logs from all applications in a provided namespace. For more information, see Forwarding application logs from specific pods.
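An `inputs` stanza that captures application logs from the namespace running Developer Hub might look like the following sketch; the input name and namespace are placeholders for your environment, not values prescribed by this guide:

```yaml
inputs:
  - name: my-app-logs-input        # placeholder input name, referenced later in pipelines
    type: application
    application:
      includes:
        - namespace: my-rhdh-project   # placeholder: the namespace running Developer Hub
```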
  - Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. If the Splunk endpoint uses self-signed TLS certificates, you can either set the `tls.insecureSkipVerify` option (not recommended) or provide the certificate chain by using a secret. For more information, see Forwarding logs to Splunk in the OpenShift Container Platform documentation.
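As a sketch, an `outputs` entry of type `splunk` could look like the following; the output name, index, and URL are placeholders, `splunk-secret` is the secret created earlier, and the exact field layout can vary between OpenShift Logging versions:

```yaml
outputs:
  - name: splunk-receiver-application   # placeholder output name, referenced later in pipelines
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken             # key inside the secret created earlier
          secretName: splunk-secret
      index: main                   # placeholder Splunk index
      url: 'https://my-splunk-instance-url'   # placeholder Splunk HEC endpoint URL
```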
  - Optional: Filter logs to include only audit logs. For more information, see Filtering logs by content in the OpenShift Container Platform documentation.
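One possible sketch uses a `drop` filter to discard records that do not look like audit events. The filter name and the matched marker string are assumptions about your Developer Hub log format, not values from this guide; verify the marker against your actual audit log records before relying on it:

```yaml
filters:
  - name: audit-logs-only              # placeholder filter name
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditEvent   # assumption: audit records contain this marker
```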
  - Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline.
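Assuming an input named `my-app-logs-input`, an output named `splunk-receiver-application`, and an optional filter named `audit-logs-only` (all placeholder names: substitute the names you actually defined), a `pipelines` stanza could be sketched as:

```yaml
pipelines:
  - name: my-app-logs-pipeline        # placeholder pipeline name
    inputRefs:
      - my-app-logs-input             # must match a defined input name
    outputRefs:
      - splunk-receiver-application   # must match a defined output name
    filterRefs:
      - audit-logs-only               # optional: must match a defined filter name
```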
- Run the following command to apply the `ClusterLogForwarder` configuration:

  Example command to apply the `ClusterLogForwarder` configuration:

  ```
  oc apply -f <ClusterLogForwarder-configuration.yaml>
  ```
- Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods by using the following options:
  - Define the resource requests and limits for the log collector as needed.
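A `collector` resources sketch might look like the following; the CPU and memory values are illustrative assumptions, not recommendations from this guide, so size them for your log volume:

```yaml
collector:
  resources:
    requests:
      cpu: 250m      # illustrative values only
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 128Mi
```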
  - Define `tuning` options for log delivery, including `delivery`, `compression`, and `RetryDuration`. Tuning can be applied per output as needed.

    Example `tuning` configuration:

    ```yaml
    tuning:
      delivery: AtLeastOnce # [1]
      compression: none
      minRetryDuration: 1s
      maxRetryDuration: 10s
    ```

    [1] The `AtLeastOnce` delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
Verification
- Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard.
- Troubleshoot any issues using OpenShift Container Platform and Splunk logs as needed.