
Chapter 6. Configuring action log storage for Elasticsearch and Splunk


By default, usage logs are stored in the Red Hat Quay database and exposed through the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can store the usage logs in Elasticsearch and Splunk instead of the Red Hat Quay database backend.

6.1. Configuring action log storage for Elasticsearch

Note

To configure action log storage for Elasticsearch, you must provide your own Elasticsearch stack; it is not included with Red Hat Quay as a customizable component.

Enabling Elasticsearch logging can be done during Red Hat Quay deployment or post-deployment by updating your config.yaml file. When configured, usage log access continues to be provided through the web UI for repositories and organizations.

Use the following procedure to configure action log storage for Elasticsearch:

Procedure

  1. Obtain an Elasticsearch account.
  2. Update your Red Hat Quay config.yaml file to include the following information:

    # ...
    LOGS_MODEL: elasticsearch 1
    LOGS_MODEL_CONFIG:
        producer: elasticsearch 2
        elasticsearch_config:
            host: http://<host.elasticsearch.example>:<port> 3
            port: 9200 4
            access_key: <access_key> 5
            secret_key: <secret_key> 6
            use_ssl: True 7
            index_prefix: <logentry> 8
            aws_region: <us-east-1> 9
    # ...
    1
    The method for handling log data.
    2
    Choose either elasticsearch or kinesis_stream. Setting kinesis_stream directs logs to an intermediate Kinesis stream on AWS; in that case, you must set up your own pipeline, for example Logstash, to send logs from Kinesis to Elasticsearch.
    3
    The hostname or IP address of the system providing the Elasticsearch service.
    4
    The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Red Hat Quay registry. The default is TCP port 9200.
    5
    The access key needed to gain access to the Elasticsearch service, if required.
    6
    The secret key needed to gain access to the Elasticsearch service, if required.
    7
    Whether to use SSL/TLS for Elasticsearch. Defaults to True.
    8
    Choose a prefix to attach to log entries.
    9
    If you are running on AWS, set the AWS region (otherwise, leave it blank).
  3. Optional. If you are using Kinesis as your logs producer, you must include the following fields in your config.yaml file:

        kinesis_stream_config:
            stream_name: <kinesis_stream_name> 1
            access_key: <aws_access_key> 2
            secret_key: <aws_secret_key> 3
            aws_region: <aws_region> 4
    1
    The name of the Kinesis stream.
    2
    The name of the AWS access key needed to gain access to the Kinesis stream, if required.
    3
    The name of the AWS secret key needed to gain access to the Kinesis stream, if required.
    4
    The Amazon Web Services (AWS) region.
  4. Save your config.yaml file and restart your Red Hat Quay deployment.
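Before restarting, a quick structural check of the new fields can catch a missing entry. The following minimal Python sketch is illustrative only and is not part of Red Hat Quay; the required-field lists are assumptions drawn from the callouts above:

```python
# Minimal sanity check for a LOGS_MODEL_CONFIG mapping, as described above.
# The required-field lists are assumptions based on this chapter, not an
# official Red Hat Quay validation routine.

REQUIRED = {
    "elasticsearch": {"host", "port", "index_prefix"},
    "kinesis_stream": {"stream_name", "aws_region"},
}

def check_logs_model_config(config: dict) -> list[str]:
    """Return a list of problems found in a LOGS_MODEL_CONFIG mapping."""
    problems = []
    producer = config.get("producer")
    if producer not in ("elasticsearch", "kinesis_stream"):
        problems.append(f"unknown producer: {producer!r}")
        return problems
    section = ("elasticsearch_config" if producer == "elasticsearch"
               else "kinesis_stream_config")
    sub = config.get(section, {})
    for field in sorted(REQUIRED[producer]):
        if field not in sub:
            problems.append(f"{section}.{field} is missing")
    return problems

config = {
    "producer": "elasticsearch",
    "elasticsearch_config": {"host": "es.example.com", "port": 9200},
}
print(check_logs_model_config(config))  # index_prefix is flagged as missing
```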

6.2. Configuring action log storage for Splunk

Splunk is an alternative to Elasticsearch that can provide log analyses for your Red Hat Quay data.

Enabling Splunk logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. You can forward action logs either directly to Splunk or to the Splunk HTTP Event Collector (HEC).

Use the following procedures to enable Splunk for your Red Hat Quay deployment.

6.2.1. Installing and creating a username for Splunk

Use the following procedure to install and create Splunk credentials.

Procedure

  1. Create a Splunk account by navigating to Splunk and entering the required credentials.
  2. Navigate to the Splunk Enterprise Free Trial page, select your platform and installation package, and then click Download Now.
  3. Install the Splunk software on your machine. When prompted, create a username, for example splunk_admin, and a password.
  4. After you create a username and password, a URL is provided for your Splunk deployment, for example, http://<sample_url>.remote.csb:8000/. Open the URL in your preferred browser.
  5. Log in with the username and password you created during installation. You are directed to the Splunk UI.

6.2.2. Generating a Splunk token

Use one of the following procedures to create a bearer token for Splunk.

6.2.2.1. Generating a Splunk token using the Splunk UI

Use the following procedure to create a bearer token for Splunk using the Splunk UI.

Prerequisites

  • You have installed Splunk and created a username.

Procedure

  1. On the Splunk UI, navigate to Settings → Tokens.
  2. Click Enable Token Authentication.
  3. Ensure that Token Authentication is enabled by clicking Token Settings and selecting Token Authentication if necessary.
  4. Optional: Set the expiration time for your token. The default is 30 days.
  5. Click Save.
  6. Click New Token.
  7. Enter information for User and Audience.
  8. Optional: Set the Expiration and Not Before information.
  9. Click Create. Your token appears in the Token box. Copy the token immediately.

    Important

    If you close out of the box before copying the token, you must create a new token. The token in its entirety is not available after closing the New Token window.

6.2.2.2. Generating a Splunk token using the CLI

Use the following procedure to create a bearer token for Splunk using the CLI.

Prerequisites

  • You have installed Splunk and created a username.

Procedure

  1. In your CLI, enter the following curl command to enable token authentication, passing in your Splunk username and password:

    $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false
  2. Create a token by entering the following curl command, passing in your Splunk username and password:

    $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d
  3. Save the generated bearer token.
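The token endpoint returns a JSON document when output_mode=json is set. The exact response shape sketched below is an assumption for illustration (sample values are placeholders); extracting the token programmatically might look like this:

```python
import json

# The Splunk token endpoint (with output_mode=json) wraps results in an
# "entry" list. The shape and values below are illustrative assumptions,
# not a verbatim Splunk response.
sample_response = json.dumps({
    "entry": [{
        "name": "splunk_admin",
        "content": {"id": "token_id_1", "token": "eyJrdSample..."},
    }]
})

def extract_bearer_token(response_text: str) -> str:
    """Pull the bearer token out of the JSON response body."""
    body = json.loads(response_text)
    return body["entry"][0]["content"]["token"]

print(extract_bearer_token(sample_response))  # eyJrdSample...
```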

6.2.3. Configuring Red Hat Quay to use Splunk

Use the following procedure to configure Red Hat Quay to use Splunk or the Splunk HTTP Event Collector (HEC).

Prerequisites

  • You have installed Splunk and created a username.
  • You have generated a Splunk bearer token.

Procedure

  1. Configure Red Hat Quay to use Splunk or the Splunk HTTP Event Collector (HEC).

    1. If opting to use Splunk, open your Red Hat Quay config.yaml file and add the following configuration fields:

      # ...
      LOGS_MODEL: splunk
      LOGS_MODEL_CONFIG:
          producer: splunk
          splunk_config:
              host: http://<user_name>.remote.csb 1
              port: 8089 2
              bearer_token: <bearer_token> 3
              url_scheme: <http/https> 4
              verify_ssl: False 5
              index_prefix: <splunk_log_index_name> 6
              ssl_ca_path: <location_to_ssl-ca-cert.pem> 7
      # ...
      1
      String. The Splunk cluster endpoint.
      2
      Integer. The Splunk management cluster endpoint port. Differs from the Splunk GUI hosted port. Can be found on the Splunk UI under Settings → Server Settings → General Settings.
      3
      String. The generated bearer token for Splunk.
      4
      String. The URL scheme for accessing the Splunk service. If Splunk is configured to use SSL/TLS, this must be https.
      5
      Boolean. Enable (true) or disable (false) SSL/TLS verification for HTTPS connections. Defaults to true.
      6
      String. The Splunk index prefix. Can be a new or existing index. Can be created from the Splunk UI.
      7
      String. The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL validation.
    2. If opting to use Splunk HEC, open your Red Hat Quay config.yaml file and add the following configuration fields:

      # ...
      LOGS_MODEL: splunk
      LOGS_MODEL_CONFIG:
        producer: splunk_hec 1
        splunk_hec_config: 2
          host: prd-p-aaaaaq.splunkcloud.com 3
          port: 8088 4
          hec_token: 12345678-1234-1234-1234-1234567890ab 5
          url_scheme: https 6
          verify_ssl: False 7
          index: quay 8
          splunk_host: quay-dev 9
          splunk_sourcetype: quay_logs 10
      # ...
      1
      Specify splunk_hec when configuring Splunk HEC.
      2
      Configuration for the Splunk HTTP Event Collector (HEC) logs producer.
      3
      The Splunk cluster endpoint.
      4
      Integer. The Splunk HEC endpoint port. The HEC default port is 8088.
      5
      HEC token for Splunk.
      6
      The URL scheme for accessing the Splunk service. If Splunk is configured to use SSL/TLS, this must be https.
      7
      Boolean. Enable (true) or disable (false) SSL/TLS verification for HTTPS connections.
      8
      The Splunk index to use.
      9
      The hostname recorded for events forwarded from Red Hat Quay.
      10
      The name of the Splunk sourcetype to use.
  2. If you are configuring ssl_ca_path, you must configure the SSL/TLS certificate so that Red Hat Quay will trust it.

    1. If you are using a standalone deployment of Red Hat Quay, provide the SSL/TLS certificate by placing the certificate file inside the extra_ca_certs directory, or at the relative container path specified by ssl_ca_path.
    2. If you are using the Red Hat Quay Operator, create a config bundle secret, including the certificate authority (CA) of the Splunk server. For example:

      $ oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret

      Specify the conf/stack/extra_ca_certs/splunkserver.crt file in your config.yaml. For example:

      # ...
      LOGS_MODEL: splunk
      LOGS_MODEL_CONFIG:
          producer: splunk
          splunk_config:
              host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com
              port: 8089
              bearer_token: eyJra
              url_scheme: https
              verify_ssl: true
              index_prefix: quay123456
              ssl_ca_path: conf/stack/extra_ca_certs/splunkserver.crt
      # ...
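When Red Hat Quay forwards an event to HEC, it posts to the standard /services/collector/event endpoint with a Splunk-scheme Authorization header. The sketch below builds such a request without sending it; the hostname, token, index, and sourcetype are placeholder values taken from the example configuration above, and this is an illustration of the HEC convention rather than Red Hat Quay's internal implementation:

```python
import json
import urllib.request

# Sketch of posting a single event to the Splunk HTTP Event Collector.
# The /services/collector/event endpoint and the "Splunk <token>"
# Authorization scheme are standard HEC conventions; all concrete values
# here are placeholders.
def build_hec_request(host: str, port: int, token: str, event: dict) -> urllib.request.Request:
    url = f"https://{host}:{port}/services/collector/event"
    payload = json.dumps({
        "event": event,
        "index": "quay",            # matches the `index` field in config.yaml
        "sourcetype": "quay_logs",  # matches `splunk_sourcetype`
    }).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )

req = build_hec_request("prd-p-aaaaaq.splunkcloud.com", 8088,
                        "12345678-1234-1234-1234-1234567890ab",
                        {"kind": "push_repo"})
print(req.full_url)
# Sending is omitted here; urllib.request.urlopen(req) would submit the event.
```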

6.2.4. Creating an action log

Use the following procedure to create a user account that can forward action logs to Splunk.

Important

You must use the Splunk UI to view Red Hat Quay action logs. At this time, viewing Splunk action logs on the Red Hat Quay Usage Logs page is unsupported, and returns the following message: Method not implemented. Splunk does not support log lookups.

Prerequisites

  • You have installed Splunk and created a username.
  • You have generated a Splunk bearer token.
  • You have configured your Red Hat Quay config.yaml file to enable Splunk.

Procedure

  1. Log in to your Red Hat Quay deployment.
  2. Click on the name of the organization that you will use to create an action log for Splunk.
  3. In the navigation pane, click Robot Accounts → Create Robot Account.
  4. When prompted, enter a name for the robot account, for example splunkrobotaccount, then click Create robot account.
  5. On your browser, open the Splunk UI.
  6. Click Search and Reporting.
  7. In the search bar, enter the name of your index, for example, <splunk_log_index_name> and press Enter.

    The search results populate on the Splunk UI. Logs are forwarded in JSON format. A response might look similar to the following:

    {
      "log_data": {
        "kind": "authentication", 1
        "account": "quayuser123", 2
        "performer": "John Doe", 3
        "repository": "projectQuay", 4
        "ip": "192.168.1.100", 5
        "metadata_json": {...}, 6
        "datetime": "2024-02-06T12:30:45Z" 7
      }
    }
    1
    Specifies the type of log event. In this example, authentication indicates that the log entry relates to an authentication event.
    2
    The user account involved in the event.
    3
    The individual who performed the action.
    4
    The repository associated with the event.
    5
    The IP address from which the action was performed.
    6
    Might contain additional metadata related to the event.
    7
    The timestamp of when the event occurred.

6.3. Understanding usage logs

By default, usage logs are stored in the Red Hat Quay database. They are exposed through the web UI, on the organization and repository levels, and in the Superuser Admin Panel.

Database logs capture a wide range of events in Red Hat Quay, such as the changing of account plans, user actions, and general operations. Log entries include information such as the action performed (kind_id), the user who performed the action (account_id or performer_id), the timestamp (datetime), and other relevant data associated with the action (metadata_json).

6.3.1. Viewing database logs

The following procedure shows you how to view repository logs that are stored in a PostgreSQL database.

Prerequisites

  • You have administrative privileges.
  • You have installed the psql CLI tool.

Procedure

  1. Enter the following command to log in to your Red Hat Quay PostgreSQL database:

    $ psql -h <quay-server.example.com> -p 5432 -U <user_name> -d <database_name>

    Example output

    psql (16.1, server 13.7)
    Type "help" for help.

  2. Optional. Enter the following command to display the list of tables in your PostgreSQL database:

    quay=> \dt

    Example output

                       List of relations
     Schema |            Name            | Type  |  Owner
    --------+----------------------------+-------+----------
     public | logentry                   | table | quayuser
     public | logentry2                  | table | quayuser
     public | logentry3                  | table | quayuser
     public | logentrykind               | table | quayuser
    ...

  3. Enter the following command to return a list of repository IDs, which are required to retrieve log information:

    quay=> SELECT id, name FROM repository;

    Example output

     id |        name
    ----+---------------------
      3 | new_repository_name
      6 | api-repo
      7 | busybox
    ...

  4. Enter the following command to use the logentry3 relation to show log information about one of your repositories:

    SELECT * FROM logentry3 WHERE repository_id = <repository_id>;

    Example output

     id | kind_id | account_id | performer_id | repository_id |          datetime          |      ip       |                 metadata_json
    ----+---------+------------+--------------+---------------+----------------------------+---------------+-----------------------------------------------
     59 |      14 |          2 |            1 |             6 | 2024-05-13 15:51:01.897189 | 192.168.1.130 | {"repo": "api-repo", "namespace": "test-org"}

    In the above example, the following information is returned:

    {
      "log_data": {
        "id": 59, 1
        "kind_id": "14", 2
        "account_id": "2", 3
        "performer_id": "1", 4
        "repository_id": "6", 5
        "ip": "192.168.1.130", 6
        "metadata_json": {"repo": "api-repo", "namespace": "test-org"}, 7
        "datetime": "2024-05-13 15:51:01.897189" 8
      }
    }
    1
    The unique identifier for the log entry.
    2
    The action that was performed. In this example, kind_id 14 corresponds to the creation of a repository (create_repo), as shown in the table in the following section.
    3
    The account that performed the action.
    4
    The performer of the action.
    5
    The repository that the action was done on. In this example, 6 correlates to the api-repo that was discovered in Step 3.
    6
    The IP address where the action was performed.
    7
    Metadata information, including the name of the repository and its namespace.
    8
    The time when the action was performed.
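The logentrykind table listed in Step 2 maps each kind_id to a human-readable action name, so the raw query above can be joined against it. The following sketch illustrates that join using an in-memory SQLite database as a stand-in for PostgreSQL, with a schema simplified to the columns shown in the example output:

```python
import sqlite3

# Illustration of the logentry3 query from the procedure above, using an
# in-memory SQLite database as a stand-in for PostgreSQL. The schema is
# simplified to the columns shown in the example output.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logentrykind (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE logentry3 (
        id INTEGER PRIMARY KEY, kind_id INTEGER, account_id INTEGER,
        performer_id INTEGER, repository_id INTEGER, datetime TEXT,
        ip TEXT, metadata_json TEXT);
    INSERT INTO logentrykind VALUES (14, 'create_repo');
    INSERT INTO logentry3 VALUES
        (59, 14, 2, 1, 6, '2024-05-13 15:51:01.897189', '192.168.1.130',
         '{"repo": "api-repo", "namespace": "test-org"}');
""")

# Resolve the human-readable action name while filtering by repository.
rows = conn.execute("""
    SELECT l.id, k.name, l.datetime
    FROM logentry3 l JOIN logentrykind k ON l.kind_id = k.id
    WHERE l.repository_id = ?
""", (6,)).fetchall()
print(rows)  # [(59, 'create_repo', '2024-05-13 15:51:01.897189')]
```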

6.3.2. Log entry kind_ids

The following table represents the kind_ids associated with Red Hat Quay actions.

kind_id | Action | Description
--------|--------|------------
1 | account_change_cc | Change of credit card information.
2 | account_change_password | Change of account password.
3 | account_change_plan | Change of account plan.
4 | account_convert | Account conversion.
5 | add_repo_accesstoken | Adding an access token to a repository.
6 | add_repo_notification | Adding a notification to a repository.
7 | add_repo_permission | Adding permissions to a repository.
8 | add_repo_webhook | Adding a webhook to a repository.
9 | build_dockerfile | Building a Dockerfile.
10 | change_repo_permission | Changing permissions of a repository.
11 | change_repo_visibility | Changing the visibility of a repository.
12 | create_application | Creating an application.
13 | create_prototype_permission | Creating permissions for a prototype.
14 | create_repo | Creating a repository.
15 | create_robot | Creating a robot (service account or bot).
16 | create_tag | Creating a tag.
17 | delete_application | Deleting an application.
18 | delete_prototype_permission | Deleting permissions for a prototype.
19 | delete_repo | Deleting a repository.
20 | delete_repo_accesstoken | Deleting an access token from a repository.
21 | delete_repo_notification | Deleting a notification from a repository.
22 | delete_repo_permission | Deleting permissions from a repository.
23 | delete_repo_trigger | Deleting a repository trigger.
24 | delete_repo_webhook | Deleting a webhook from a repository.
25 | delete_robot | Deleting a robot.
26 | delete_tag | Deleting a tag.
27 | manifest_label_add | Adding a label to a manifest.
28 | manifest_label_delete | Deleting a label from a manifest.
29 | modify_prototype_permission | Modifying permissions for a prototype.
30 | move_tag | Moving a tag.
31 | org_add_team_member | Adding a member to a team.
32 | org_create_team | Creating a team within an organization.
33 | org_delete_team | Deleting a team within an organization.
34 | org_delete_team_member_invite | Deleting a team member invitation.
35 | org_invite_team_member | Inviting a member to a team in an organization.
36 | org_remove_team_member | Removing a member from a team.
37 | org_set_team_description | Setting the description of a team.
38 | org_set_team_role | Setting the role of a team.
39 | org_team_member_invite_accepted | Acceptance of a team member invitation.
40 | org_team_member_invite_declined | Declining of a team member invitation.
41 | pull_repo | Pull from a repository.
42 | push_repo | Push to a repository.
43 | regenerate_robot_token | Regenerating a robot token.
44 | repo_verb | Generic repository action (specifics might be defined elsewhere).
45 | reset_application_client_secret | Resetting the client secret of an application.
46 | revert_tag | Reverting a tag.
47 | service_key_approve | Approving a service key.
48 | service_key_create | Creating a service key.
49 | service_key_delete | Deleting a service key.
50 | service_key_extend | Extending a service key.
51 | service_key_modify | Modifying a service key.
52 | service_key_rotate | Rotating a service key.
53 | setup_repo_trigger | Setting up a repository trigger.
54 | set_repo_description | Setting the description of a repository.
55 | take_ownership | Taking ownership of a resource.
56 | update_application | Updating an application.
57 | change_repo_trust | Changing the trust level of a repository.
58 | reset_repo_notification | Resetting repository notifications.
59 | change_tag_expiration | Changing the expiration date of a tag.
60 | create_app_specific_token | Creating an application-specific token.
61 | revoke_app_specific_token | Revoking an application-specific token.
62 | toggle_repo_trigger | Toggling a repository trigger on or off.
63 | repo_mirror_enabled | Enabling repository mirroring.
64 | repo_mirror_disabled | Disabling repository mirroring.
65 | repo_mirror_config_changed | Changing the configuration of repository mirroring.
66 | repo_mirror_sync_started | Starting a repository mirror sync.
67 | repo_mirror_sync_failed | Repository mirror sync failed.
68 | repo_mirror_sync_success | Repository mirror sync succeeded.
69 | repo_mirror_sync_now_requested | Immediate repository mirror sync requested.
70 | repo_mirror_sync_tag_success | Repository mirror tag sync succeeded.
71 | repo_mirror_sync_tag_failed | Repository mirror tag sync failed.
72 | repo_mirror_sync_test_success | Repository mirror sync test succeeded.
73 | repo_mirror_sync_test_failed | Repository mirror sync test failed.
74 | repo_mirror_sync_test_started | Repository mirror sync test started.
75 | change_repo_state | Changing the state of a repository.
76 | create_proxy_cache_config | Creating proxy cache configuration.
77 | delete_proxy_cache_config | Deleting proxy cache configuration.
78 | start_build_trigger | Starting a build trigger.
79 | cancel_build | Cancelling a build.
80 | org_create | Creating an organization.
81 | org_delete | Deleting an organization.
82 | org_change_email | Changing organization email.
83 | org_change_invoicing | Changing organization invoicing.
84 | org_change_tag_expiration | Changing organization tag expiration.
85 | org_change_name | Changing organization name.
86 | user_create | Creating a user.
87 | user_delete | Deleting a user.
88 | user_disable | Disabling a user.
89 | user_enable | Enabling a user.
90 | user_change_email | Changing user email.
91 | user_change_password | Changing user password.
92 | user_change_name | Changing user name.
93 | user_change_invoicing | Changing user invoicing.
94 | user_change_tag_expiration | Changing user tag expiration.
95 | user_change_metadata | Changing user metadata.
96 | user_generate_client_key | Generating a client key for a user.
97 | login_success | Successful login.
98 | logout_success | Successful logout.
99 | permanently_delete_tag | Permanently deleting a tag.
100 | autoprune_tag_delete | Auto-pruning tag deletion.
101 | create_namespace_autoprune_policy | Creating namespace auto-prune policy.
102 | update_namespace_autoprune_policy | Updating namespace auto-prune policy.
103 | delete_namespace_autoprune_policy | Deleting namespace auto-prune policy.
104 | login_failure | Failed login attempt.
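When processing raw database rows or forwarded events outside of Red Hat Quay, a small mapping built from the table above can resolve kind_id values to action names. The subset below is a sketch covering only the IDs used in earlier examples:

```python
# A few entries from the kind_id table above as a Python mapping, enough to
# resolve the kind_id values seen in the earlier examples. Extend as needed.
KIND_NAMES = {
    14: "create_repo",
    41: "pull_repo",
    42: "push_repo",
    97: "login_success",
    104: "login_failure",
}

def describe(kind_id: int) -> str:
    """Return the action name for a kind_id, or a fallback label."""
    return KIND_NAMES.get(kind_id, f"unknown kind_id {kind_id}")

print(describe(14))   # create_repo
print(describe(999))  # unknown kind_id 999
```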
