
Security


Red Hat OpenShift GitOps 1.18

Using security features to configure secure communication and protect potentially sensitive data in transit

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for using Transport Layer Security (TLS) encryption with Red Hat OpenShift GitOps. It also discusses how to configure secure communication with Redis to protect potentially sensitive data in transit.

Chapter 1. Configuring secure communication with Redis

By using Transport Layer Security (TLS) encryption with Red Hat OpenShift GitOps, you can secure the communication between the Argo CD components and the Redis cache and protect potentially sensitive data in transit.

You can secure communication with Redis by using one of the following configurations:

  • Enable the autotls setting to issue an appropriate certificate for TLS encryption.
  • Manually configure the TLS encryption by creating the argocd-operator-redis-tls secret with a key and certificate pair.

Both configurations are possible with or without High Availability (HA) enabled.

1.1. Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • Red Hat OpenShift GitOps Operator is installed on your cluster.

1.2. Configuring TLS for Redis with autotls enabled

You can configure TLS encryption for Redis by enabling the autotls setting on a new or already existing Argo CD instance. The configuration automatically provisions the argocd-operator-redis-tls secret and does not require further steps. Currently, OpenShift Container Platform is the only supported secret provider.

Note

By default, the autotls setting is disabled.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Create an Argo CD instance with autotls enabled:

    1. In the Administrator perspective of the web console, use the left navigation panel to go to Administration → CustomResourceDefinitions.
    2. Search for argocds.argoproj.io and click ArgoCD custom resource definition (CRD).
    3. On the CustomResourceDefinition details page, click the Instances tab, and then click Create ArgoCD.
    4. Edit or replace the YAML similar to the following example:

      Example Argo CD CR with autotls enabled

      apiVersion: argoproj.io/v1beta1
      kind: ArgoCD
      metadata:
        name: argocd 1
        namespace: openshift-gitops 2
      spec:
        redis:
          autotls: openshift 3
        ha:
          enabled: true 4

      1 The name of the Argo CD instance.
      2 The namespace where you want to run the Argo CD instance.
      3 The flag that enables the autotls setting and creates a TLS certificate for Redis.
      4 The flag value that enables the HA feature. If you do not want to enable HA, do not include this line or set the flag value as false.
      Tip

      Alternatively, you can enable the autotls setting on an already existing Argo CD instance by running the following command:

      $ oc patch argocds.argoproj.io <instance-name> --type=merge -p '{"spec":{"redis":{"autotls":"openshift"}}}'
    5. Click Create.
    6. Verify that the Argo CD pods are ready and running:

      $ oc get pods -n <namespace> 1

      1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

      Example output with HA disabled

      NAME                                  READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0       1/1     Running   0          26s
      argocd-redis-84b77d4f58-vp6zm         1/1     Running   0          37s
      argocd-repo-server-5b959b57f4-znxjq   1/1     Running   0          37s
      argocd-server-6b8787d686-wv9zh        1/1     Running   0          37s

      Note

      The HA-enabled TLS configuration requires a cluster with at least three worker nodes. It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration.

      Example output with HA enabled

      NAME                                       READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0            1/1     Running   0          10m
      argocd-redis-ha-haproxy-669757fdb7-5xg8h   1/1     Running   0          10m
      argocd-redis-ha-server-0                   2/2     Running   0          9m9s
      argocd-redis-ha-server-1                   2/2     Running   0          98s
      argocd-redis-ha-server-2                   2/2     Running   0          53s
      argocd-repo-server-576499d46d-8hgbh        1/1     Running   0          10m
      argocd-server-9486f88b7-dk2ks              1/1     Running   0          10m

  3. Verify that the argocd-operator-redis-tls secret is created:

    $ oc get secrets argocd-operator-redis-tls -n <namespace> 1

    1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

    Example output

    NAME                        TYPE                DATA   AGE
    argocd-operator-redis-tls   kubernetes.io/tls   2      30s

    The secret must be of the kubernetes.io/tls type and contain 2 data items.
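
    Optionally, inspect the issued certificate to confirm its subject and expiry. A minimal check, assuming the default openshift-gitops namespace:

    $ oc get secret argocd-operator-redis-tls -n openshift-gitops -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate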

1.3. Configuring TLS for Redis with autotls disabled

You can manually configure TLS encryption for Redis by creating the argocd-operator-redis-tls secret with a key and certificate pair. In addition, you must annotate the secret to indicate that it belongs to the appropriate Argo CD instance. The steps to create a certificate and secret vary for instances with High Availability (HA) enabled.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Create an Argo CD instance:

    1. In the Administrator perspective of the web console, use the left navigation panel to go to Administration → CustomResourceDefinitions.
    2. Search for argocds.argoproj.io and click ArgoCD custom resource definition (CRD).
    3. On the CustomResourceDefinition details page, click the Instances tab, and then click Create ArgoCD.
    4. Edit or replace the YAML similar to the following example:

      Example ArgoCD CR with autotls disabled

      apiVersion: argoproj.io/v1beta1
      kind: ArgoCD
      metadata:
        name: argocd 1
        namespace: openshift-gitops 2
      spec:
        ha:
          enabled: true 3

      1 The name of the Argo CD instance.
      2 The namespace where you want to run the Argo CD instance.
      3 The flag value that enables the HA feature. If you do not want to enable HA, do not include this line or set the flag value as false.
    5. Click Create.
    6. Verify that the Argo CD pods are ready and running:

      $ oc get pods -n <namespace> 1

      1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

      Example output with HA disabled

      NAME                                  READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0       1/1     Running   0          26s
      argocd-redis-84b77d4f58-vp6zm         1/1     Running   0          37s
      argocd-repo-server-5b959b57f4-znxjq   1/1     Running   0          37s
      argocd-server-6b8787d686-wv9zh        1/1     Running   0          37s

      Note

      The HA-enabled TLS configuration requires a cluster with at least three worker nodes. It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration.

      Example output with HA enabled

      NAME                                       READY   STATUS    RESTARTS   AGE
      argocd-application-controller-0            1/1     Running   0          10m
      argocd-redis-ha-haproxy-669757fdb7-5xg8h   1/1     Running   0          10m
      argocd-redis-ha-server-0                   2/2     Running   0          9m9s
      argocd-redis-ha-server-1                   2/2     Running   0          98s
      argocd-redis-ha-server-2                   2/2     Running   0          53s
      argocd-repo-server-576499d46d-8hgbh        1/1     Running   0          10m
      argocd-server-9486f88b7-dk2ks              1/1     Running   0          10m

  3. Create a self-signed certificate for the Redis server by using one of the following options depending on your HA configuration:

    • For the Argo CD instance with HA disabled, run the following command:

      $ openssl req -new -x509 -sha256 \
        -subj "/C=XX/ST=XX/O=Testing/CN=redis" \
        -reqexts SAN -extensions SAN \
        -config <(printf "\n[SAN]\nsubjectAltName=DNS:argocd-redis.<namespace>.svc.cluster.local\n[req]\ndistinguished_name=req") \ 1
        -keyout /tmp/redis.key \
        -out /tmp/redis.crt \
        -newkey rsa:4096 \
        -nodes \
        -sha256 \
        -days 10

      1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

      Example output

      Generating a RSA private key
      ...............++++
      ............................++++
      writing new private key to '/tmp/redis.key'

    • For the Argo CD instance with HA enabled, run the following command:

      $ openssl req -new -x509 -sha256 \
        -subj "/C=XX/ST=XX/O=Testing/CN=redis" \
        -reqexts SAN -extensions SAN \
        -config <(printf "\n[SAN]\nsubjectAltName=DNS:argocd-redis-ha-haproxy.<namespace>.svc.cluster.local\n[req]\ndistinguished_name=req") \ 1
        -keyout /tmp/redis-ha.key \
        -out /tmp/redis-ha.crt \
        -newkey rsa:4096 \
        -nodes \
        -sha256 \
        -days 10

      1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

      Example output

      Generating a RSA private key
      ...............++++
      ............................++++
      writing new private key to '/tmp/redis-ha.key'

  4. Verify that the generated certificate and key are available in the /tmp directory by running the following commands:

    $ cd /tmp
    $ ls

    Example output with HA disabled

    ...
    redis.crt
    redis.key
    ...

    Example output with HA enabled

    ...
    redis-ha.crt
    redis-ha.key
    ...
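
    Optionally, confirm that the generated certificate contains the expected subjectAltName before you create the secret. A minimal check, shown for the certificate with HA disabled (use /tmp/redis-ha.crt for HA):

    $ openssl x509 -in /tmp/redis.crt -noout -text | grep -A1 "Subject Alternative Name"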

  5. Create the argocd-operator-redis-tls secret by using one of the following options depending on your HA configuration:

    • For the Argo CD instance with HA disabled, run the following command:

      $ oc create secret tls argocd-operator-redis-tls --key=/tmp/redis.key --cert=/tmp/redis.crt
    • For the Argo CD instance with HA enabled, run the following command:

      $ oc create secret tls argocd-operator-redis-tls --key=/tmp/redis-ha.key --cert=/tmp/redis-ha.crt

      Example output

      secret/argocd-operator-redis-tls created

  6. Annotate the secret to indicate that it belongs to the Argo CD CR:

    $ oc annotate secret argocd-operator-redis-tls argocds.argoproj.io/name=<instance-name> 1

    1 Specify the name of the Argo CD instance, for example argocd.

    Example output

    secret/argocd-operator-redis-tls annotated
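
    Optionally, confirm that the annotation is present on the secret:

    $ oc get secret argocd-operator-redis-tls -n <namespace> -o jsonpath='{.metadata.annotations}'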

  7. Verify that the Argo CD pods are ready and running:

    $ oc get pods -n <namespace> 1

    1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops.

    Example output with HA disabled

    NAME                                  READY   STATUS    RESTARTS   AGE
    argocd-application-controller-0       1/1     Running   0          26s
    argocd-redis-84b77d4f58-vp6zm         1/1     Running   0          37s
    argocd-repo-server-5b959b57f4-znxjq   1/1     Running   0          37s
    argocd-server-6b8787d686-wv9zh        1/1     Running   0          37s

    Note

    It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration.

    Example output with HA enabled

    NAME                                       READY   STATUS    RESTARTS   AGE
    argocd-application-controller-0            1/1     Running   0          10m
    argocd-redis-ha-haproxy-669757fdb7-5xg8h   1/1     Running   0          10m
    argocd-redis-ha-server-0                   2/2     Running   0          9m9s
    argocd-redis-ha-server-1                   2/2     Running   0          98s
    argocd-redis-ha-server-2                   2/2     Running   0          53s
    argocd-repo-server-576499d46d-8hgbh        1/1     Running   0          10m
    argocd-server-9486f88b7-dk2ks              1/1     Running   0          10m

Chapter 2. Managing secrets securely using Secrets Store CSI driver with GitOps

This guide walks you through the process of integrating the Secrets Store Container Storage Interface (SSCSI) driver with the GitOps Operator in OpenShift Container Platform 4.14 and later.

2.1. Overview of managing secrets using Secrets Store CSI driver with GitOps

Some applications need sensitive information, such as passwords and usernames, which must be concealed as a good security practice. If sensitive information is exposed because role-based access control (RBAC) is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret.

Important

Anyone who is authorized to create a pod in a namespace can use that RBAC to read any secret in that namespace. With the SSCSI Driver Operator, you can use an external secrets store to store and provide sensitive information to pods securely.

2.1.1. Benefits

Integrating the SSCSI driver with the GitOps Operator provides the following benefits:

  • Enhance the security and efficiency of your GitOps workflows
  • Facilitate the secure attachment of secrets into deployment pods as a volume
  • Ensure that sensitive information is accessed securely and efficiently

2.2. Secrets store providers

The following secrets store providers are available for use with the Secrets Store CSI Driver Operator:

  • AWS Secrets Manager
  • AWS Systems Manager Parameter Store
  • Microsoft Azure Key Vault
  • HashiCorp Vault

2.3. Configure AWS Secrets Manager on OpenShift with GitOps

This guide provides instructions with examples to help you use GitOps workflows with the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform.

As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in a GitOps repository that is ready to use the secrets from AWS Secrets Manager:

Example directory structure in GitOps repository

├── config
│   ├── argocd
│   │   ├── argo-app.yaml
│   │   ├── secret-provider-app.yaml 1
│   │   ├── ...
│   └── sscsid 2
│       └── aws-provider.yaml 3
├── environments
│   ├── dev 4
│   │   ├── apps
│   │   │   └── app-taxi 5
│   │   │       ├── ...
│   │   ├── credentialsrequest-dir-aws 6
│   │   └── env
│   │       ├── ...
│   ├── new-env
│   │   ├── ...

1 Configuration file that creates an application and deploys resources for AWS Secrets Manager.
2 Directory that stores the aws-provider.yaml file.
3 Configuration file that installs the AWS Secrets Manager provider and deploys resources for it.
4 Directory that stores the deployment pod and credential requests.
5 Directory that stores the SecretProviderClass resources to define your secrets store provider.
6 Folder that stores the credentialsrequest.yaml file. This file contains the configuration for the credentials request to mount a secret to the deployment pod.
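
The argo-app.yaml file shown in the directory structure is not listed in this guide. The following is a minimal sketch of what it might contain, assuming it deploys the dev environment of the app-taxi application from the same repository; the path and repoURL values are placeholders that you adjust to your layout:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-app
  namespace: openshift-gitops
spec:
  destination:
    namespace: dev
    server: https://kubernetes.default.svc
  project: default
  source:
    path: environments/dev/apps/app-taxi # placeholder path
    repoURL: https://github.com/<my-domain>/<gitops>.git
  syncPolicy:
    automated:
      prune: true
      selfHeal: true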

2.3.1. Storing AWS Secrets Manager resources in GitOps repository

You can store AWS Secrets Manager configurations in the GitOps repository for declarative and version-controlled secret management.

Important

Using the SSCSI Driver Operator with AWS Secrets Manager is not supported in a hosted control plane cluster.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • You have extracted and prepared the ccoctl binary.
  • You have installed the jq CLI tool.
  • Your cluster is installed on AWS and uses AWS Security Token Service (STS).
  • You have configured AWS Secrets Manager to store the required secrets.
  • SSCSI Driver Operator is installed on your cluster.
  • Red Hat OpenShift GitOps Operator is installed on your cluster.
  • You have a GitOps repository ready to use the secrets.
  • You are logged in to the Argo CD instance by using the Argo CD admin account.

Procedure

  1. Install the AWS Secrets Manager provider and add resources:

    1. In your GitOps repository, create a directory and add aws-provider.yaml file in it with the following configuration to deploy resources for the AWS Secrets Manager provider:

      Important

      The AWS Secrets Manager provider for the SSCSI driver is an upstream provider.

      This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality.

      Example aws-provider.yaml file

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: csi-secrets-store-provider-aws
        namespace: openshift-cluster-csi-drivers
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: csi-secrets-store-provider-aws-cluster-role
      rules:
      - apiGroups: [""]
        resources: ["serviceaccounts/token"]
        verbs: ["create"]
      - apiGroups: [""]
        resources: ["serviceaccounts"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: csi-secrets-store-provider-aws-cluster-rolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: csi-secrets-store-provider-aws-cluster-role
      subjects:
      - kind: ServiceAccount
        name: csi-secrets-store-provider-aws
        namespace: openshift-cluster-csi-drivers
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        namespace: openshift-cluster-csi-drivers
        name: csi-secrets-store-provider-aws
        labels:
          app: csi-secrets-store-provider-aws
      spec:
        updateStrategy:
          type: RollingUpdate
        selector:
          matchLabels:
            app: csi-secrets-store-provider-aws
        template:
          metadata:
            labels:
              app: csi-secrets-store-provider-aws
          spec:
            serviceAccountName: csi-secrets-store-provider-aws
            hostNetwork: false
            containers:
              - name: provider-aws-installer
                image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19
                imagePullPolicy: Always
                args:
                    - --provider-volume=/etc/kubernetes/secrets-store-csi-providers
                resources:
                  requests:
                    cpu: 50m
                    memory: 100Mi
                  limits:
                    cpu: 50m
                    memory: 100Mi
                securityContext:
                  privileged: true
                volumeMounts:
                  - mountPath: "/etc/kubernetes/secrets-store-csi-providers"
                    name: providervol
                  - name: mountpoint-dir
                    mountPath: /var/lib/kubelet/pods
                    mountPropagation: HostToContainer
            tolerations:
            - operator: Exists
            volumes:
              - name: providervol
                hostPath:
                  path: "/etc/kubernetes/secrets-store-csi-providers"
              - name: mountpoint-dir
                hostPath:
                  path: /var/lib/kubelet/pods
                  type: DirectoryOrCreate
            nodeSelector:
              kubernetes.io/os: linux

    2. Add a secret-provider-app.yaml file in your GitOps repository to create an application and deploy resources for AWS Secrets Manager:

      Example secret-provider-app.yaml file

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: secret-provider-app
        namespace: openshift-gitops
      spec:
        destination:
          namespace: openshift-cluster-csi-drivers
          server: https://kubernetes.default.svc
        project: default
        source:
          path: path/to/aws-provider/resources
          repoURL: https://github.com/<my-domain>/<gitops>.git 1
        syncPolicy:
          automated:
            prune: true
            selfHeal: true

      1 Update the value of the repoURL field to point to your GitOps repository.
  2. Synchronize resources with the default Argo CD instance to deploy them in the cluster:

    1. Add a label to the openshift-cluster-csi-drivers namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:

      $ oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops
    2. Apply the resources in your GitOps repository to your cluster, including the aws-provider.yaml file you just pushed:
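
      For example, assuming the Application manifests are stored under the config/argocd/ directory in your local clone of the GitOps repository (adjust the path to your repository layout):

      $ oc apply -f config/argocd/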

      Example output

      application.argoproj.io/argo-app created
      application.argoproj.io/secret-provider-app created
      ...

In the Argo CD UI, you can observe that the csi-secrets-store-provider-aws daemonset continues to synchronize resources. To resolve this issue, you must configure the SSCSI driver to mount secrets from the AWS Secrets Manager.

2.3.2. Configuring SSCSI driver to mount secrets from AWS Secrets Manager

To store and manage your secrets securely, use GitOps workflows and configure the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. For example, consider that you want to mount a secret to a deployment pod under the dev namespace which is in the /environments/dev/ directory.

Prerequisites

  • You have the AWS Secrets Manager resources stored in your GitOps repository.

Procedure

  1. Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command:

    $ oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers

    Example output

    clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "csi-secrets-store-provider-aws"

  2. Grant permission to allow the service account to read the AWS secret object:

    1. Create a credentialsrequest-dir-aws folder under a namespace-scoped directory in your GitOps repository because the credentials request is namespace-scoped. For example, create a credentialsrequest-dir-aws folder under the dev namespace which is in the /environments/dev/ directory by running the following command:

      $ mkdir credentialsrequest-dir-aws
    2. Create a YAML file with the following configuration for the credentials request in the /environments/dev/credentialsrequest-dir-aws/ path to mount a secret to the deployment pod in the dev namespace:

      Example credentialsrequest.yaml file

      apiVersion: cloudcredential.openshift.io/v1
      kind: CredentialsRequest
      metadata:
        name: aws-provider-test
        namespace: openshift-cloud-credential-operator
      spec:
        providerSpec:
          apiVersion: cloudcredential.openshift.io/v1
          kind: AWSProviderSpec
          statementEntries:
          - action:
            - "secretsmanager:GetSecretValue"
            - "secretsmanager:DescribeSecret"
            effect: Allow
            resource: "<aws_secret_arn>" 1
        secretRef:
          name: aws-creds
          namespace: dev 2
        serviceAccountNames:
          - default

      1 The ARN of your secret in the region where your cluster is located. The <aws_region> portion of <aws_secret_arn> must match the cluster region. If it does not match, create a replica of your secret in the region where your cluster is located.
      2 The namespace for the secret reference. Update the value of this namespace field according to your project deployment setup.
      Tip

      To find your cluster region, run the command:

      $ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}'

      Example output

      us-west-2

    3. Retrieve the OIDC provider by running the following command:

      $ oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'

      Example output

      https://<oidc_provider_name>

      Copy the OIDC provider name <oidc_provider_name> from the output to use in the next step.

    4. Use the ccoctl tool to process the credentials request by running the following command:

      $ ccoctl aws create-iam-roles \
          --name my-role --region=<aws_region> \
          --credentials-requests-dir=credentialsrequest-dir-aws \
          --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output

      Example output

      2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created
      2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml
      2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds

      Copy the <aws_role_arn> from the output to use in the next step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds.

    5. Check the role policy on AWS to confirm the <aws_region> of "Resource" in the role policy matches the cluster region:

      Example role policy

      {
      	"Version": "2012-10-17",
      	"Statement": [
      		{
      			"Effect": "Allow",
      			"Action": [
      				"secretsmanager:GetSecretValue",
      				"secretsmanager:DescribeSecret"
      			],
      			"Resource": "arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx"
      		}
      	]
      }

    6. Bind the service account with the role ARN by running the following command:

      $ oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn="<aws_role_arn>"

      Example command

      $ oc annotate -n dev sa/default eks.amazonaws.com/role-arn="<aws_role_arn>"

      Example output

      serviceaccount/default annotated
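
      Optionally, verify that the annotation is set on the service account:

      $ oc get sa default -n dev -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'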

  3. Create a namespace-scoped SecretProviderClass resource to define your secrets store provider. For example, you can create a SecretProviderClass resource in the /environments/dev/apps/app-taxi/services/taxi/base/config directory of your GitOps repository.

    1. Create a secret-provider-class-aws.yaml file in the same directory where the target deployment is located in your GitOps repository:

      Example secret-provider-class-aws.yaml

      apiVersion: secrets-store.csi.x-k8s.io/v1
      kind: SecretProviderClass
      metadata:
        name: my-aws-provider 1
        namespace: dev 2
      spec:
        provider: aws 3
        parameters: 4
          objects: |
            - objectName: "testSecret" 5
              objectType: "secretsmanager"

      1 Name of the secret provider class.
      2 Namespace for the secret provider class. The namespace must match the namespace of the resource which will use the secret.
      3 Name of the secret store provider.
      4 Specifies the provider-specific configuration parameters.
      5 The secret name you created in AWS.
    2. Verify that after pushing this YAML file to your GitOps repository, the namespace-scoped SecretProviderClass resource is populated in the target application page in the Argo CD UI.

      Note

      If the Sync Policy of your application is not set to Auto, you can manually sync the SecretProviderClass resource by clicking Sync in the Argo CD UI.

2.3.3. Configuring GitOps managed resources to use mounted secrets

You must configure the GitOps managed resources by adding a volume mount configuration to a deployment and configuring the container pod to use the mounted secret.

Prerequisites

  • You have the AWS Secrets Manager resources stored in your GitOps repository.
  • You have the Secrets Store Container Storage Interface (SSCSI) driver configured to mount secrets from AWS Secrets Manager.

Procedure

  1. Configure the GitOps managed resources. For example, consider that you want to add a volume mount configuration to the deployment of the app-taxi application, and that the 100-deployment.yaml file is in the /environments/dev/apps/app-taxi/services/taxi/base/config/ directory.

    1. Add the volume mount to the deployment YAML file and configure the container pod to use the secret provider class resources and the mounted secret:

      Example YAML file

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: taxi
        namespace: dev 1
      spec:
        replicas: 1
        template:
          metadata:
      # ...
          spec:
            containers:
              - image: nginxinc/nginx-unprivileged:latest
                imagePullPolicy: Always
                name: taxi
                ports:
                  - containerPort: 8080
                volumeMounts:
                  - name: secrets-store-inline
                    mountPath: "/mnt/secrets-store" 2
                    readOnly: true
                resources: {}
            serviceAccountName: default
            volumes:
              - name: secrets-store-inline
                csi:
                  driver: secrets-store.csi.k8s.io
                  readOnly: true
                  volumeAttributes:
                    secretProviderClass: "my-aws-provider" 3
      status: {}
      # ...

      1 Namespace for the deployment. This must be the same namespace as the secret provider class.
      2 The path to mount secrets in the volume mount.
      3 Name of the secret provider class.
    2. Push the updated resource YAML file to your GitOps repository.
  2. In the Argo CD UI, click REFRESH on the target application page to apply the updated deployment manifest.
  3. Verify that all the resources are successfully synchronized on the target application page.
  4. Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount:

    1. List the secrets in the pod mount:

      $ oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/

      Example command

      $ oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/

      Example output

      <secret_name>

    2. View a secret in the pod mount:

      $ oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name>

      Example command

      $ oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testSecret

      Example output

      <secret_value>

2.4. Configure HashiCorp Vault as a secrets provider on OpenShift with GitOps

You can configure HashiCorp Vault as a secrets provider by using the Secrets Store CSI Driver Operator on OpenShift Container Platform. When combined with GitOps workflows managed by Argo CD, this setup enables you to securely retrieve secrets from Vault and inject them into your applications running on OpenShift.

Structure the GitOps repository and configure the Vault CSI provider to integrate with the Secrets Store CSI Driver in OpenShift Container Platform.

The following sample GitOps repository layout is used for integrating Vault with your application.

Example directory structure in GitOps repository

├── config
│   ├── argocd 1
│   │   ├── vault-secret-provider-app.yaml
│   │   ├── ...
├── environments
│   ├── dev
│   │   ├── apps
│   │   │   └── demo-app
│   │   │       ├── manifest 2
│   │   │       │   ├── secretProviderClass.yaml
│   │   │       │   ├── serviceAccount.yaml
│   │   │       │   └── deployment.yaml
│   │   │       └── argocd 3
│   │   │           └── demo-app.yaml

1 config/argocd/: Stores Argo CD Application definitions for cluster-wide tools such as the Vault CSI provider.
2 environments/<env>/apps/<app-name>/manifest/: Contains Kubernetes manifests specific to an application in a particular environment.
3 environments/<env>/apps/<app-name>/argocd/: Contains the Argo CD Application definition that deploys the application and its resources.

2.4.1. Installing the Vault CSI Provider using GitOps

Install the Vault CSI provider by deploying an Argo CD Application that uses HashiCorp’s official Helm chart. This method follows GitOps best practices by managing the installation declaratively through a version-controlled Argo CD Application resource.

Prerequisites

  • You are logged in to the OpenShift Container Platform cluster as an administrator.
  • You have access to the OpenShift Container Platform web console.
  • The SSCSI Driver Operator is installed on your cluster.
  • You installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster.
  • You have a GitOps repository ready to use the secrets.

Procedure

  1. Create the Argo CD Application resource for the Vault CSI provider:

    1. Create an Argo CD Application resource to deploy the Vault CSI provider. Add this resource to your GitOps repository, for example, config/argocd/vault-secret-provider-app.yaml:

      Example vault-secret-provider-app.yaml file

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: vault-secret-provider-app
        namespace: openshift-gitops
      spec:
        destination:
          namespace: vault-csi-provider
          server: https://kubernetes.default.svc
        project: default
        source:
          repoURL: https://helm.releases.hashicorp.com
          chart: vault
          targetRevision: 0.30.0
          helm:
            releaseName: vault
            values: |
              csi:
                enabled: true
              server:
                enabled: true
                dataStorage:
                   enabled: false
              injector:
                enabled: false
        syncPolicy:
          automated:
            prune: true
            selfHeal: true
          syncOptions:
          - CreateNamespace=true
        ignoreDifferences:
        - kind: DaemonSet
          group: apps
          jsonPointers:
            - /spec/template/spec/containers/0/securityContext/privileged

      Note

      The server.enabled: true and dataStorage.enabled: false settings in the Helm values deploy a HashiCorp Vault server instance using ephemeral storage. This setup is suitable for development or testing environments. For production, you can enable dataStorage with a persistent volume (PV) or use an external Vault cluster and set server.enabled to false. If a Vault server is already deployed, you can set server.enabled to false.
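
      For a production-oriented setup, the following is a hedged sketch of the corresponding Helm values with persistent storage enabled; the size and storageClass values are placeholders to adjust for your environment:

          helm:
            releaseName: vault
            values: |
              csi:
                enabled: true
              server:
                enabled: true
                dataStorage:
                  enabled: true
                  size: 10Gi         # placeholder size
                  storageClass: null # null uses the cluster default storage class
              injector:
                enabled: false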

  2. Apply the vault-secret-provider-app.yaml file from the GitOps repository to your cluster:

    $ oc apply -f vault-secret-provider-app.yaml

    After deploying the Vault CSI provider, the vault-csi-provider DaemonSet might fail to run. This issue occurs because OpenShift Container Platform restricts privileged containers by default. In addition, the Vault CSI provider and the Secrets Store CSI Driver require access to hostPath mounts, which OpenShift Container Platform blocks unless the pods run as privileged.

    1. To resolve permission issues in OpenShift Container Platform:

      1. Patch the vault-csi-provider DaemonSet to run its containers as privileged:

        $ oc patch daemonset vault-csi-provider -n vault-csi-provider --type=json --patch='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"privileged":true}}]'
      2. Grant the Secrets Store CSI Driver service account access to the privileged Security Context Constraints (SCC) in OpenShift Container Platform.

        $ oc adm policy add-scc-to-user privileged \
          system:serviceaccount:openshift-cluster-csi-drivers:secrets-store-csi-driver-operator
      3. Grant the Vault CSI Provider service account access to the privileged Security Context Constraints (SCC) in OpenShift Container Platform.

        $ oc adm policy add-scc-to-user privileged \
        system:serviceaccount:vault-csi-provider:vault-csi-provider
        Note

        If server.enabled is set to true in the Helm chart, the Vault server pods run with specific user IDs (UIDs) or group IDs (GIDs) that OpenShift Container Platform blocks by default.

      4. Grant the Vault server service account the required Security Context Constraints (SCC) permissions.

        $ oc adm policy add-scc-to-user anyuid system:serviceaccount:vault-csi-provider:vault

2.4.2. Initializing and configuring Vault to store a Secret

After deploying Vault using Argo CD and applying the necessary SCC permissions and DaemonSet patches, initialize Vault, unseal it, and configure Kubernetes authentication to enable secure secret storage and access.

Procedure

  1. Access the Vault Pod.

    1. If Vault is running within your OpenShift Container Platform cluster, for example, as the vault-0 pod in the vault-csi-provider namespace, run the following command to access the Vault CLI inside the pod:

      $ oc exec -it vault-0 -n vault-csi-provider -- /bin/sh
  2. Initialize Vault.

    1. If your Vault instance is not yet initialized, run the following command:

      $ vault operator init

      This command returns the following credentials:

      • 5 unseal keys, which are required to unseal Vault.
      • An initial root token, which is required to log in and configure Vault.
      Important

      Store these credentials securely. At least 3 out of 5 unseal keys are required to unseal Vault. If the keys are lost, access to stored secrets is permanently blocked.

  3. Unseal Vault.

    1. Vault starts in a sealed state. Run the following commands to use three of the five Unseal Keys obtained in the earlier step:

      $ vault operator unseal <Unseal Key 1>
      $ vault operator unseal <Unseal Key 2>
      $ vault operator unseal <Unseal Key 3>

      Once unsealed, Vault becomes active and ready for use.

  4. Log in to Vault.

    1. To use the root token to log in to Vault, run the following command:

      $ vault login <Initial Root Token>

      This provides administrator access to enable and configure secret engines and authentication methods.

  5. Enable Kubernetes Authentication in Vault.

    1. Run the following command to enable Kubernetes authentication in Vault.

      $ vault auth enable kubernetes

      This allows Kubernetes workloads, for example, pods, to authenticate with Vault using their service accounts.
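
      Optionally, confirm that the kubernetes/ authentication method now appears in the list of enabled methods:

      $ vault auth list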

  6. Configure Kubernetes authentication method in Vault.

    1. To configure Vault for communicating with the Kubernetes API, run the following command:

      $ vault write auth/kubernetes/config \
        issuer="https://kubernetes.default.svc" \
        token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
        kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

      As a result, the following output is displayed.

      Success! Data written to: auth/kubernetes/config

      Where:

      • <issuer> is the name of the Kubernetes token issuer URL.
      • <token_reviewer_jwt> is a JSON Web Token (JWT) that Vault uses to call the Kubernetes TokenReview API and to validate service account tokens.
      • <kubernetes_host> is the URL that Vault uses to communicate with the Kubernetes API server.
      • <kubernetes_ca_cert> is the CA certificate that Vault uses for secure communication with the Kubernetes API server.

2.4.3. Managing Secrets, Policies, and Roles in Vault

To create a secret in Vault, define a Vault policy and configure a Kubernetes authentication role that enables a Kubernetes workload to retrieve the secret securely.

Procedure

  1. Enable the KV Secrets Engine.

    1. Use the Key-Value (KV) Version 2 secrets engine to store arbitrary secrets with versioning support. Run the following command to enable the KV secrets engine at the path secret/:

      $ vault secrets enable -path=secret/ kv
  2. Store a secret in Vault.

    1. Store a secret using the KV Version 2 secrets engine. Run the following command to store the secret data, username and password, at path secret/demo/config:

      $ vault kv put secret/demo/config username="demo-user" password="demo-pass"
  3. Create a Vault policy.

    1. To create a policy that grants read access to the secret, run the following command:

      $ vault policy write demo-app-policy -<<EOF
      path "secret/demo/config" {
        capabilities = ["read"]
      }
      EOF

      This demo-app-policy grants read access to the secret at secret/demo/config and is later linked to a Kubernetes role.
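
      Optionally, read the policy back to confirm its contents:

      $ vault policy read demo-app-policy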

  4. Create a Kubernetes Authentication Role in Vault.

    1. To create a role that binds a Kubernetes service account to the Vault policy, run the following command.

      $ vault write auth/kubernetes/role/app \
          bound_service_account_names=demo-app-sa \
          bound_service_account_namespaces=demo-app \
          policies=demo-app-policy \
          ttl=24h

      This allows any pod using the service account to authenticate to Vault and retrieve the secret.

      Where:

      • <bound_service_account_names> is the name of the Kubernetes service account that Vault trusts.
      • <bound_service_account_namespaces> is the name of the namespace where the service account is located.
      • <policies> is the name of the attached Vault policy.
      • <ttl> is the Time-to-live value issued for the token.
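
      Optionally, review the role configuration:

      $ vault read auth/kubernetes/role/app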

2.4.4. Configuring GitOps managed resources to use Vault-mounted secrets

Securely inject secrets from HashiCorp Vault into GitOps-managed Kubernetes workloads using the Secrets Store CSI driver and Vault provider. The secrets are mounted as files in the pod’s filesystem, allowing applications to access the data without storing it in Kubernetes Secret objects.

Procedure

  1. Create the SecretProviderClass.

    1. Create a SecretProviderClass resource in the application's manifest directory, for example, environments/dev/apps/demo-app/manifest/secretProviderClass.yaml. This resource defines how the Secrets Store CSI driver retrieves secrets from Vault.

      Example secretProviderClass.yaml file

      apiVersion: secrets-store.csi.x-k8s.io/v1
      kind: SecretProviderClass
      metadata:
        name: demo-app-creds
        namespace: demo-app
      spec:
        provider: vault 1
        parameters:
          vaultAddress: http://vault.vault-csi-provider:8200 # <name>.<namespace>:port 2
          roleName: app 3
          objects: | 4
            - objectName: "demoAppUsername"
              secretPath: "secret/demo/config"
              secretKey: "username"
            - objectName: "demoAppPassword"
              secretPath: "secret/demo/config"
              secretKey: "password"

      1 provider - Specifies the secrets store provider, in this case HashiCorp Vault.
      2 vaultAddress - Specifies the network address of the Vault server. Adjust this based on your Vault setup, such as an in-cluster service or an external URL.
      3 roleName - Specifies the Vault Kubernetes authentication role used by the application service account.
      4 objects - Specifies an array that defines which secrets to retrieve and how to map them to file names. The secretPath for KV v2 includes /data/.
  2. Create the application ServiceAccount.

    1. Create a Kubernetes ServiceAccount for the application workload. The ServiceAccount name must match the bound_service_account_names value defined in the Vault Kubernetes authentication role. Store the manifest in the GitOps repository, for example, environments/dev/apps/demo-app/manifest/serviceAccount.yaml.

      Example serviceAccount.yaml file

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: demo-app-sa
        namespace: demo-app

  3. Create the Application deployment:

    1. Modify the application’s deployment to use the designated ServiceAccount and mount secrets using the CSI volume. Store the updated manifest in the GitOps repository, for example, environments/dev/apps/demo-app/manifest/deployment.yaml:

      Example deployment.yaml file

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: app
        namespace: demo-app
        labels:
          app: demo
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: demo
        template:
          metadata:
            labels:
              app: demo
          spec:
            serviceAccountName: demo-app-sa 1
            containers:
              - name: app
                image: nginxinc/nginx-unprivileged:latest
                volumeMounts: 2
                  - name: vault-secrets
                    mountPath: /mnt/secrets-store
                    readOnly: true
            volumes: 3
              - name: vault-secrets
                csi:
                  driver: secrets-store.csi.k8s.io
                  readOnly: true
                  volumeAttributes:
                    secretProviderClass: demo-app-creds

      1 serviceAccountName - Assigns the Kubernetes ServiceAccount name, for example, demo-app-sa, used by the application pod. This ServiceAccount is fundamental for authenticating with HashiCorp Vault, as it is linked to a Vault role that grants permissions to retrieve the necessary secrets.
      2 volumeMounts - Mounts the vault-secrets volume into the container at the /mnt/secrets-store directory.
      3 volumes - Defines the vault-secrets volume using the secrets-store.csi.k8s.io driver and references the demo-app-creds SecretProviderClass.
  4. Define the Argo CD application for the workload:

    1. Define an Argo CD Application resource to deploy the application components, such as the ServiceAccount, SecretProviderClass, and Deployment, from the GitOps repository. Store the Argo CD manifest in a directory location such as environments/dev/apps/demo-app/argocd/demo-app.yaml.

      Example demo-app.yaml file

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: demo-app
        namespace: openshift-gitops
      spec:
        project: default
        source:
          repoURL: https://your-git-repo-url.git
          targetRevision: HEAD
          path: environments/dev/apps/demo-app/manifest
        destination:
          server: https://kubernetes.default.svc
          namespace: demo-app
        syncPolicy:
          automated:
            prune: true
            selfHeal: true
          syncOptions:
            - CreateNamespace=true

2.4.5. Verifying secret injection

Verify the secret injection to ensure that the mounted files contain the expected values from Vault.

Procedure

  1. Check the Pod status.

    1. After the Argo CD Application has synced and all the resources are deployed, verify that the application pod is running successfully in the demo-app namespace. Run the following command:

      $ oc get pods -n demo-app
  2. Open a shell session.

    1. Use the name of the application pod to open a shell session. Replace <your-app-pod-name> with the actual pod name.

      $ oc exec -it <your-app-pod-name> -n demo-app -- sh
  3. Verify mounted secrets.

    1. To verify that the secrets are mounted at the expected path, run the following command:

      $ ls -l /mnt/secrets-store
      $ cat /mnt/secrets-store/demoAppUsername
      $ cat /mnt/secrets-store/demoAppPassword

      Verify that the mounted secret files demoAppUsername and demoAppPassword contain the expected values from Vault.
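
      Optionally, compare the file contents with the values stored in Vault. Run the following command from the Vault pod shell that you used earlier:

      $ vault kv get secret/demo/config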

Chapter 3. Masking sensitive annotations in the Argo CD Web UI

Argo CD can hide sensitive annotation values on Secret resources from the Argo CD user interface (UI) and command-line interface (CLI). You can configure this behavior by specifying the annotation keys to be masked in the Argo CD custom resource (CR). This feature enhances security by preventing accidental exposure of sensitive information, such as tokens or API keys, stored in annotations on Secret resources.

To enable this feature, add the resource.sensitive.mask.annotations key under .spec.extraConfig in the Argo CD CR. Specify a comma-separated list of annotation keys to mask.

Important

Ensure that the annotation keys listed in resource.sensitive.mask.annotations are accurate and relevant to your use case. This feature does not support wildcards and requires explicit configuration in the Argo CD CR.
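
For illustration only, the following is a hypothetical Secret that carries a sensitive annotation. With openshift.io/token-secret.value listed in resource.sensitive.mask.annotations, Argo CD masks the value of this annotation in the UI and CLI:

apiVersion: v1
kind: Secret
metadata:
  name: example-token # hypothetical Secret name
  annotations:
    openshift.io/token-secret.value: <token_value>
type: Opaque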

Prerequisites

  • You have created an Argo CD instance. For more information, see "Installing a user-defined Argo CD instance".

3.1. Enabling sensitive annotations masking in the Argo CD Web UI

To enable masking of sensitive annotations in the Argo CD user interface (UI), add the resource.sensitive.mask.annotations key under the .spec.extraConfig section of the Argo CD custom resource (CR).

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. In the Administrator perspective of the web console, click Operators → Installed Operators.
  3. From the Project list, select the project where the user-defined Argo CD instance is installed.
  4. From the installed Operators list, select Red Hat OpenShift GitOps, and then click the Argo CD tab.
  5. To edit the Argo CD CR, complete the following steps:

    1. Under the .spec.extraConfig section, add the resource.sensitive.mask.annotations key.
    2. Specify the annotation keys to mask as a comma-separated list, as shown in the following YAML snippet:

      apiVersion: argoproj.io/v1beta1
      kind: ArgoCD
      metadata:
        name: example
      spec:
        extraConfig:
          resource.sensitive.mask.annotations: openshift.io/token-secret.value, api-key, token 1

      1 Specify a comma-separated list of sensitive annotation keys, such as openshift.io/token-secret.value, api-key, and token.
  6. To verify that the value in the Argo CD resource has been updated successfully, complete the following steps:

    1. In the Administrator perspective of the web console, click Operators → Installed Operators.
    2. In the Project option, select the Argo CD namespace.
    3. From the installed Operators list, select Red Hat OpenShift GitOps, and then click the Argo CD tab.
    4. Verify that the Status field of the ArgoCD instance shows as Phase: Available.

Argo CD hides the values of the specified annotation keys in the Argo CD UI.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.