Chapter 10. Zero Trust Workload Identity Manager


Important

Zero Trust Workload Identity Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Zero Trust Workload Identity Manager leverages Secure Production Identity Framework for Everyone (SPIFFE) and the SPIFFE Runtime Environment (SPIRE) to provide a comprehensive identity management solution for distributed systems. SPIFFE and SPIRE provide a standardized approach to workload identity, allowing workloads to communicate with other services whether on the same cluster or in another environment.

Zero Trust Workload Identity Manager replaces long-lived, manually managed secrets with cryptographically verifiable identities. It provides strong authentication, ensuring that workloads communicating with each other are who they claim to be. SPIRE automates the issuing, rotating, and revoking of SPIFFE Verifiable Identity Documents (SVIDs), reducing the burden on developers and administrators of managing secrets.

SPIFFE can work across diverse infrastructures, including on-premise, cloud, and hybrid environments. SPIFFE identities are cryptographically verifiable, providing a basis for auditing and compliance.

The following are components of the Zero Trust Workload Identity Manager architecture:

10.1.1. SPIFFE

Secure Production Identity Framework for Everyone (SPIFFE) provides a standardized way to establish trust between software workloads in distributed systems. SPIFFE assigns unique IDs called SPIFFE IDs. These IDs are Uniform Resource Identifiers (URIs) that include a trust domain and a workload identifier.
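For example, a SPIFFE ID such as spiffe://example.org/ns/default/sa/my-workload (a hypothetical ID) encodes the trust domain example.org followed by the workload path. The structure can be sketched with plain shell parameter expansion:

```shell
# Split a hypothetical SPIFFE ID into its trust domain and workload path.
spiffe_id="spiffe://example.org/ns/default/sa/my-workload"

trust_domain="${spiffe_id#spiffe://}"      # strip the scheme
trust_domain="${trust_domain%%/*}"         # keep everything before the first slash

workload_path="/${spiffe_id#spiffe://*/}"  # keep everything after the trust domain

echo "trust domain:  $trust_domain"    # example.org
echo "workload path: $workload_path"   # /ns/default/sa/my-workload
```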

The SPIFFE IDs are contained in the SPIFFE Verifiable Identity Document (SVID). SVIDs are used by workloads to verify their identity to other workloads so that the workloads can communicate with each other. The two main SVID formats are:

  • X.509-SVIDs: X.509 certificates where the SPIFFE ID is embedded in the Subject Alternative Name (SAN) field.
  • JWT-SVIDs: JSON Web Tokens (JWTs) where the SPIFFE ID is included as the sub claim.

For more information, see SPIFFE Overview.
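As a concrete illustration of the X.509-SVID format, the following sketch generates a throwaway self-signed certificate that carries a hypothetical SPIFFE ID in its Subject Alternative Name field, then prints that field. Real SVIDs are issued and signed by the SPIRE Server, not self-signed; this only shows where the ID lives in the certificate.

```shell
# Generate a throwaway self-signed certificate with a hypothetical SPIFFE ID
# in its Subject Alternative Name, then print the SAN field.
# Requires OpenSSL 1.1.1 or later for the -addext option.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/svid.key -out /tmp/svid.pem \
  -subj "/O=example" \
  -addext "subjectAltName=URI:spiffe://example.org/ns/default/sa/my-workload"

# Print the SAN field, which contains the SPIFFE ID as a URI entry
openssl x509 -in /tmp/svid.pem -noout -ext subjectAltName
```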

10.1.2. SPIRE Server

A SPIRE Server is responsible for managing and issuing SPIFFE identities within a trust domain. It stores registration entries (selectors that determine under what conditions a SPIFFE ID should be issued) and signing keys. The SPIRE Server works in conjunction with the SPIRE Agent to perform node attestation via node plugins. For more information, see About the SPIRE Server.

10.1.3. SPIRE Agent

The SPIRE Agent is responsible for workload attestation, ensuring that workloads receive a verified identity when requesting authentication through the SPIFFE Workload API. It accomplishes this by using configured workload attestor plugins. In Kubernetes environments, the Kubernetes workload attestor plugin is used.

SPIRE and the SPIRE Agent perform node attestation via node plugins. The plugins are used to verify the identity of the node on which the agent is running. For more information, see About the SPIRE Agent.

10.1.4. Attestation

Attestation is the process by which the identities of nodes and workloads are verified before SPIFFE IDs and SVIDs are issued. The SPIRE Server gathers attributes of both the workload and the node that the SPIRE Agent runs on, and then compares them to a set of selectors defined when the workload was registered. If the comparison is successful, the entities are provided with credentials. This ensures that only legitimate and expected entities within the trust domain receive cryptographic identities. The two main types of attestation in SPIFFE/SPIRE are:

  • Node attestation: verifies the identity of a machine or a node on a system, before a SPIRE Agent running on that node can be trusted to request identities for workloads.
  • Workload attestation: verifies the identity of an application or service running on an attested node before the SPIRE Agent on that node can provide it with a SPIFFE ID and SVID.

For more information, see Attestation.

The following components are available as part of the initial release of Zero Trust Workload Identity Manager.

10.1.5.1. SPIFFE CSI Driver

The SPIFFE Container Storage Interface (CSI) driver is a plugin that helps pods securely obtain their SPIFFE Verifiable Identity Document (SVID) by delivering the Workload API socket into the pod. The SPIFFE CSI driver is deployed as a daemon set on the cluster, ensuring that a driver instance runs on each node. The driver uses the ephemeral inline volume capability of Kubernetes, allowing pods to request volumes provided directly by the SPIFFE CSI driver. This simplifies their use by applications that need temporary storage.

When the pod starts, the Kubelet calls the SPIFFE CSI driver to provision and mount a volume into the pod’s containers. The SPIFFE CSI driver mounts a directory that contains the SPIFFE Workload API into the pod. Applications in the pod then communicate with the Workload API to obtain their SVIDs. The driver guarantees that each SVID is unique.
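The mount described above can be sketched as a pod spec with a CSI ephemeral inline volume. The pod name, image, and mount path below are hypothetical; csi.spiffe.io is the driver name used by the upstream SPIFFE CSI driver.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-workload                # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    env:
    - name: SPIFFE_ENDPOINT_SOCKET           # where the app finds the Workload API
      value: unix:///spiffe-workload-api/spire-agent.sock
    volumeMounts:
    - name: spiffe-workload-api
      mountPath: /spiffe-workload-api
      readOnly: true
  volumes:
  - name: spiffe-workload-api
    csi:
      driver: csi.spiffe.io              # the SPIFFE CSI driver provisions this volume
      readOnly: true
```

The application inside the pod then talks to the Workload API over the mounted socket to obtain its SVID.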

10.1.5.2. SPIRE OpenID Connect Discovery Provider

The SPIRE OpenID Connect Discovery Provider is a standalone component that makes SPIRE-issued JWT-SVIDs compatible with standard OpenID Connect (OIDC) consumers by exposing an OpenID configuration endpoint and a JWKS URI for token verification. It is essential for integrating SPIRE-based workload identity with systems that require OIDC-compliant tokens, especially external APIs. While SPIRE primarily issues identities for workloads, additional workload-related claims can be embedded into JWT-SVIDs through the SPIRE configuration, which allows these claims to be included in the token and verified by OIDC-compliant clients.
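For illustration, the discovery document served at the standard /.well-known/openid-configuration path has roughly the following shape; the issuer URL and the advertised algorithms shown here are hypothetical:

```json
{
  "issuer": "https://oidc-discovery.apps.example.com",
  "jwks_uri": "https://oidc-discovery.apps.example.com/keys",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256", "ES256"]
}
```

OIDC-compliant clients fetch the public keys from the jwks_uri to verify JWT-SVID signatures.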

10.1.5.3. SPIRE Controller Manager

The SPIRE Controller Manager uses custom resource definitions (CRDs) to facilitate the registration of workloads. To facilitate workload registration, the SPIRE Controller Manager registers controllers against pods and CRDs. When changes are detected on these resources, a workload reconciliation process is triggered. This process determines which SPIRE entries should exist based on the existing pods and CRDs. The reconciliation process creates, updates, and deletes entries on the SPIRE Server as appropriate.

The SPIRE Controller Manager is designed to be deployed on the same pod as the SPIRE Server. The manager communicates with the SPIRE Server API using a private UNIX Domain Socket within a shared volume.
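As a sketch of the registration CRDs the manager reconciles, the upstream SPIRE Controller Manager provides a ClusterSPIFFEID resource similar to the following; the resource name, label selector, and ID template shown here are illustrative assumptions:

```yaml
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: example-workload-id        # hypothetical name
spec:
  # SPIFFE ID assigned to pods matched by the selector below
  spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
  podSelector:
    matchLabels:
      app: example                 # hypothetical label
```

When pods matching the selector appear or disappear, the reconciliation process creates or deletes the corresponding entries on the SPIRE Server.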

10.1.6.1. SPIRE Server and Agent telemetry

SPIRE Server and Agent telemetry provide insight into the health of the SPIRE deployment. The metrics are in the format provided by the Prometheus Operator. The exposed metrics help in understanding server health and lifecycle, SPIRE component performance, attestation and SVID issuance, and plugin statistics.

The following is a high-level workflow of the Zero Trust Workload Identity Manager within the Red Hat OpenShift cluster.

  1. The SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and SPIRE OIDC Discovery Provider operands are deployed and managed by Zero Trust Workload Identity Manager via associated custom resource definitions (CRDs).
  2. Watches are then registered for relevant Kubernetes resources and the necessary SPIRE CRDs are applied to the cluster.
  3. The CR for the ZeroTrustWorkloadIdentityManager resource named cluster is deployed and managed by a controller.
  4. To deploy the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and SPIRE OIDC Discovery Provider, you need to create a custom resource of each type and name it cluster. The custom resource types are as follows:

    • SPIRE Server - SpireServer
    • SPIRE Agent - SpireAgent
    • SPIFFE CSI Driver - SpiffeCSIDriver
    • SPIRE OIDC discovery provider - SpireOIDCDiscoveryProvider
  5. When a node starts, the SPIRE Agent initializes, and connects to the SPIRE Server.
  6. The SPIRE Agent begins the node attestation process. The agent collects information about the node's identity, such as its label, name, and namespace. The agent securely provides the information it gathered through the attestation to the SPIRE Server.
  7. The SPIRE Server then evaluates this information against its configured attestation policies and registration entries. If successful, the server generates an agent SVID and the Trust Bundle (CA Certificate) and securely sends this back to the SPIRE Agent.
  8. A workload starts on the node and needs a secure identity. The workload connects to the agent's Workload API and requests an SVID.
  9. The SPIRE Agent receives the request and begins a workload attestation to gather information about the workload.
  10. After the SPIRE Agent gathers the information, the information is sent to the SPIRE Server and the server checks its configured registration entries.
  11. The SPIRE Agent receives the workload SVID and Trust Bundle and passes them on to the workload. The workload can now present its SVID to other SPIFFE-aware services to communicate with them.

The Zero Trust Workload Identity Manager leverages Secure Production Identity Framework for Everyone (SPIFFE) and the SPIFFE Runtime Environment (SPIRE) to provide a comprehensive identity management solution for distributed systems. Zero Trust Workload Identity Manager supports SPIRE version 1.12.4 running as an operand.

These release notes track the development of Zero Trust Workload Identity Manager.

Important

Zero Trust Workload Identity Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Issued: 2025-09-08

The following advisories are available for the Zero Trust Workload Identity Manager.

This release of Zero Trust Workload Identity Manager is a Technology Preview.

10.2.1.1. New features and enhancements

  • The Operator exposes the SpireOIDCDiscoveryProvider spec through OpenShift Routes under the domain *.apps.<cluster_domain> for the default installation.
  • The managedRoute and externalSecretRef fields have been added to the SpireOIDCDiscoveryProvider spec.
  • The managedRoute field is a Boolean and is set to true by default. If set to false, the Operator stops managing the route, and the existing route is not deleted automatically. If set back to true, the Operator resumes managing the route. If a route does not exist, the Operator creates a new one. If a route already exists, the Operator overrides the user configuration if a conflict exists.
  • The externalSecretRef field references an externally managed Secret that contains the TLS certificate for the oidc-discovery-provider Route host. When provided, this populates the route's .Spec.TLS.ExternalCertificate field. For more information, see Creating a route with externally managed certificate.
  • The following Time-To-Live (TTL) fields have been added to the SpireServer custom resource definition (CRD) API for SPIRE Server certificate management:

    • CAValidity (default: 24h)
    • DefaultX509Validity (default: 1h)
    • DefaultJWTValidity (default: 5m)
  • The default values can be replaced in the server configuration with user-configurable options, giving users the flexibility to customize certificate and SPIFFE Verifiable Identity Document (SVID) lifetimes based on their security requirements.
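A hedged sketch of how these fields might appear in the custom resources; the exact YAML field casing is an assumption based on the API names above, and the Secret name is hypothetical:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: SpireOIDCDiscoveryProvider
metadata:
  name: cluster
spec:
  managedRoute: "true"              # assumed casing; the Operator manages the route
  externalSecretRef: my-tls-secret  # hypothetical Secret holding the route's TLS certificate
---
apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  caValidity: 24h                   # assumed casing of the CAValidity TTL field
  defaultX509Validity: 1h
  defaultJWTValidity: 5m
```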
10.2.1.1.3. Enabling Manual User Configurations
  • The Operator controller switches to create-only mode when the ztwim.openshift.io/create-only=true annotation is present on the Operator's APIs. This allows resource creation while skipping updates, so that a user can update the resources manually to test their configuration. This annotation supports APIs such as SpireServer, SpireAgent, SpiffeCSIDriver, SpireOIDCDiscoveryProvider, and ZeroTrustWorkloadIdentityManager.
  • When the annotation is applied, it applies to all derived resources, including resources created and managed by the Operator.
  • When the annotation is removed and the pod restarts, the Operator tries to return to the required state. The annotation is applied only once, during a start or a restart.

10.2.1.2. Bug fixes

  • Before this update, the jwtIssuer field for both the SpireServer and the SpireOIDCDiscoveryProvider custom resources did not need to be a URL, which could cause configuration errors. With this release, the user must manually enter an issuer URL in the jwtIssuer field in both custom resources. (SPIRE-117)

Issued: 2025-06-16

The following advisories are available for the Zero Trust Workload Identity Manager:

This initial release of Zero Trust Workload Identity Manager is a Technology Preview. This version has the following known limitations:

  • Support for SPIRE federation is not enabled.
  • Key manager supports only the disk storage type.
  • Telemetry is supported only through Prometheus.
  • High availability (HA) configuration for SPIRE Servers or the OpenID Connect (OIDC) Discovery provider is not supported.
  • External datastore is not supported. This version uses the internal sqlite datastore deployed by SPIRE.
  • This version operates using a fixed configuration. User-defined configurations are not allowed.
  • The log level of the operands is not configurable. The default value is DEBUG.
Important

Zero Trust Workload Identity Manager for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The Zero Trust Workload Identity Manager is not installed in OpenShift Container Platform by default. You can install the Zero Trust Workload Identity Manager by using either the web console or CLI.

You can use the web console to install the Zero Trust Workload Identity Manager.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Go to Operators → OperatorHub.
  3. Enter Zero Trust Workload Identity Manager into the filter box.
  4. Select the Zero Trust Workload Identity Manager.
  5. Select the Zero Trust Workload Identity Manager version from the Version drop-down list, and click Install.
  6. On the Install Operator page:

    1. Update the Update channel, if necessary. The channel defaults to tech-preview-v0.1, which installs the latest Technology Preview v0.1 release of the Zero Trust Workload Identity Manager.
    2. Choose the Installed Namespace for the Operator. The default Operator namespace is zero-trust-workload-identity-manager.

      If the zero-trust-workload-identity-manager namespace does not exist, it is created for you.

    3. Select an Update approval strategy.

      • The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
      • The Manual strategy requires a user with appropriate credentials to approve the Operator update.
    4. Click Install.

Verification

  • Navigate to Operators → Installed Operators.

    • Verify that Zero Trust Workload Identity Manager is listed with a Status of Succeeded in the zero-trust-workload-identity-manager namespace.
    • Verify that the Zero Trust Workload Identity Manager controller manager deployment is ready and available by running the following command:

      $ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

      Example output

      NAME                                                            READY   UP-TO-DATE    AVAILABLE  AGE
      zero-trust-workload-identity-manager-controller-manager-6c4djb  1/1     1             1          43m

Prerequisites

  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. Create a new project named zero-trust-workload-identity-manager by running the following command:

    $ oc new-project zero-trust-workload-identity-manager
  2. Create an OperatorGroup object:

    1. Create a YAML file, for example, operatorGroup.yaml, with the following content:

      Example operatorGroup.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-zero-trust-workload-identity-manager
        namespace: zero-trust-workload-identity-manager
      spec:
        upgradeStrategy: Default

    2. Create the OperatorGroup object by running the following command:

      $ oc create -f operatorGroup.yaml
  3. Create a Subscription object:

    1. Create a YAML file, for example, subscription.yaml, that defines the Subscription object:

      Example subscription.yaml

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-zero-trust-workload-identity-manager
        namespace: zero-trust-workload-identity-manager
      spec:
        channel: tech-preview-v0.1
        name: openshift-zero-trust-workload-identity-manager
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic

    2. Create the Subscription object by running the following command:

      $ oc create -f subscription.yaml

Verification

  • Verify that the OLM subscription is created by running the following command:

    $ oc get subscription -n zero-trust-workload-identity-manager

    Example output

    NAME                                             PACKAGE                                SOURCE             CHANNEL
    openshift-zero-trust-workload-identity-manager   zero-trust-workload-identity-manager   redhat-operators   tech-preview-v0.1

  • Verify whether the Operator is successfully installed by running the following command:

    $ oc get csv -n zero-trust-workload-identity-manager

    Example output

    NAME                                         DISPLAY                                VERSION  PHASE
    zero-trust-workload-identity-manager.v0.1.0   Zero Trust Workload Identity Manager   0.1.0    Succeeded

  • Verify that the Zero Trust Workload Identity Manager controller manager is ready by running the following command:

    $ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

    Example output

    NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
    zero-trust-workload-identity-manager-controller-manager   1/1     1            1           43m

Important

Zero Trust Workload Identity Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can deploy the following operands by creating the respective custom resources (CRs). You must deploy the operands in the following sequence to ensure successful installation.

  1. SPIRE Server
  2. SPIRE Agent
  3. SPIFFE CSI driver
  4. SPIRE OIDC discovery provider

10.4.1. Deploying the SPIRE Server

You can configure the SpireServer custom resource (CR) to deploy and configure a SPIRE Server.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireServer CR:

    1. Create a YAML file that defines the SpireServer CR, for example, SpireServer.yaml:

      Example SpireServer.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireServer
      metadata:
        name: cluster
      spec:
        trustDomain: <trust_domain> 1
        clusterName: <cluster_name> 2
        caSubject:
          commonName: example.org 3
          country: "US" 4
          organization: "RH" 5
        persistence:
          type: pvc 6
          size: "5Gi" 7
          accessMode: ReadWriteOnce 8
        datastore:
          databaseType: sqlite3
          connectionString: "/run/spire/data/datastore.sqlite3"
          maxOpenConns: 100 9
          maxIdleConns: 2 10
          connMaxLifetime: 3600 11
        jwtIssuer: <jwt_issuer_domain> 12

      1 The trust domain to be used for the SPIFFE identifiers.
      2 The name of your cluster.
      3 The common name for the SPIRE Server CA.
      4 The country for the SPIRE Server CA.
      5 The organization for the SPIRE Server CA.
      6 The volume type to be used for persistence. The valid options are pvc and hostPath.
      7 The volume size to be used for persistence.
      8 The access mode to be used for persistence. The valid options are ReadWriteOnce, ReadWriteOncePod, and ReadWriteMany.
      9 The maximum number of open database connections.
      10 The maximum number of idle connections in the pool.
      11 The maximum amount of time a connection can be reused. To specify an unlimited time, you can set the value to 0.
      12 The JSON Web Token (JWT) issuer domain. The value must be a valid URL.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireServer.yaml

Verification

  • Verify that the stateful set of SPIRE Server is ready and available by running the following command:

    $ oc get statefulset -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME            READY   AGE
    spire-server    1/1     65s

  • Verify that the status of the SPIRE Server pod is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME               READY   STATUS    RESTARTS        AGE
    spire-server-0     2/2     Running   1 (108s ago)    111s

  • Verify that the persistent volume claim (PVC) is bound, by running the following command:

    $ oc get pvc -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME                        STATUS    VOLUME                                     CAPACITY   ACCESS MODES  STORAGECLASS  VOLUMEATTRIBUTECLASS  AGE
    spire-data-spire-server-0   Bound     pvc-27a36535-18a1-4fde-ab6d-e7ee7d3c2744   5Gi        RWO           gp3-csi       <unset>               22m

10.4.2. Deploying the SPIRE Agent

You can configure the SpireAgent custom resource (CR) to deploy and configure a SPIRE Agent.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireAgent CR:

    1. Create a YAML file that defines the SpireAgent CR, for example, SpireAgent.yaml:

      Example SpireAgent.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireAgent
      metadata:
        name: cluster
      spec:
        trustDomain: <trust_domain> 1
        clusterName: <cluster_name> 2
        nodeAttestor:
          k8sPSATEnabled: "true" 3
        workloadAttestors:
          k8sEnabled: "true" 4
          workloadAttestorsVerification:
            type: "auto" 5

      1 The trust domain to be used for the SPIFFE identifiers.
      2 The name of your cluster.
      3 Enables or disables the projected service account token (PSAT) Kubernetes node attestor. The valid options are true and false.
      4 Enables or disables the Kubernetes workload attestor. The valid options are true and false.
      5 The type of verification to be done against the kubelet. The valid options are auto, hostCert, apiServerCA, and skip. The auto option initially attempts to use hostCert, and then falls back to apiServerCA.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireAgent.yaml

Verification

  • Verify that the daemon set of the SPIRE Agent is ready and available by running the following command:

    $ oc get daemonset -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

    Example output

    NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    spire-agent   3         3         3       3            3           <none>          10m

  • Verify that the status of SPIRE Agent pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

    Example output

    NAME                READY   STATUS    RESTARTS   AGE
    spire-agent-dp4jb   1/1     Running   0          12m
    spire-agent-nvwjm   1/1     Running   0          12m
    spire-agent-vtvlk   1/1     Running   0          12m

You can configure the SpiffeCSIDriver custom resource (CR) to deploy and configure a SPIFFE Container Storage Interface (CSI) driver.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpiffeCSIDriver CR:

    1. Create a YAML file that defines the SpiffeCSIDriver CR object, for example, SpiffeCSIDriver.yaml:

      Example SpiffeCSIDriver.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpiffeCSIDriver
      metadata:
        name: cluster
      spec:
        agentSocketPath: '/run/spire/agent-sockets/spire-agent.sock' 1

      1 The UNIX socket path to the SPIRE Agent.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpiffeCSIDriver.yaml

Verification

  • Verify that the daemon set of the SPIFFE CSI driver is ready and available by running the following command:

    $ oc get daemonset -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

    Example output

    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    spire-spiffe-csi-driver   3         3         3       3            3           <none>          114s

  • Verify that the status of SPIFFE Container Storage Interface (CSI) Driver pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE
    spire-spiffe-csi-driver-gpwcp   2/2     Running   0          2m37s
    spire-spiffe-csi-driver-rrbrd   2/2     Running   0          2m37s
    spire-spiffe-csi-driver-w6s6q   2/2     Running   0          2m37s

You can configure the SpireOIDCDiscoveryProvider custom resource (CR) to deploy and configure the SPIRE OpenID Connect (OIDC) Discovery Provider.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireOIDCDiscoveryProvider CR:

    1. Create a YAML file that defines the SpireOIDCDiscoveryProvider CR, for example, SpireOIDCDiscoveryProvider.yaml:

      Example SpireOIDCDiscoveryProvider.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireOIDCDiscoveryProvider
      metadata:
        name: cluster
      spec:
        trustDomain: <trust_domain> 1
        agentSocketName: 'spire-agent.sock' 2
        jwtIssuer: <jwt_issuer_domain> 3

      1 The trust domain to be used for the SPIFFE identifiers.
      2 The name of the SPIRE Agent UNIX socket.
      3 The JSON Web Token (JWT) issuer domain. The value must be a valid URL.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireOIDCDiscoveryProvider.yaml

Verification

  1. Verify that the deployment of OIDC Discovery Provider is ready and available by running the following command:

    $ oc get deployment -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

    Example output

    NAME                                    READY  UP-TO-DATE  AVAILABLE  AGE
    spire-spiffe-oidc-discovery-provider    1/1    1           1          2m58s

  2. Verify that the status of OIDC Discovery Provider pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

    Example output

    NAME                                                    READY   STATUS    RESTARTS   AGE
    spire-spiffe-oidc-discovery-provider-64586d599f-lcc94   2/2     Running   0          7m15s

Zero Trust Workload Identity Manager integrates with OpenID Connect (OIDC) by allowing a SPIRE Server to act as an OIDC provider. This enables workloads to request and receive verifiable JSON Web Token SPIFFE Verifiable Identity Documents (JWT-SVIDs) from the local SPIRE Agent. External systems, such as cloud providers, can then use the OIDC discovery endpoint exposed by the SPIRE Server to retrieve public keys.

Important

Zero Trust Workload Identity Manager for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following providers are verified to work with SPIRE OIDC federation:

  • Azure Entra ID
  • Vault

10.5.1. About Entra ID OpenID Connect

Entra ID is a cloud-based identity and access management service that centralizes user management and access control. Entra ID serves as the identity provider, verifying user identities and issuing an ID token to the application. This token contains essential user information, allowing the application to confirm who the user is without managing their credentials.

Integrating Entra ID OpenID Connect (OIDC) with SPIRE provides workloads with automatic, short-lived cryptographic identities. The SPIRE-issued identities are sent to Entra ID to securely authenticate the service without any static secrets.

The managed route uses the External Route Certificate feature to set the tls.externalCertificate field to the name of an externally managed Transport Layer Security (TLS) secret.

10.5.1.1. Configuring the managed route with an externally managed certificate

Prerequisites

  • You have installed Zero Trust Workload Identity Manager 0.2.0 or later.
  • You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster.
  • You have installed the cert-manager Operator for Red Hat OpenShift. For more information, see Installing the cert-manager Operator for Red Hat OpenShift.
  • You have created a ClusterIssuer or Issuer configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type Issuer with the "Let’s Encrypt ACME" service. For more information, see Configuring an ACME issuer.

Procedure

  1. Create a Role to provide the router service account permissions to read the referenced secret by running the following command:

    $ oc create role secret-reader \
      --verb=get,list,watch \
      --resource=secrets \
      --resource-name=$TLS_SECRET_NAME \
      -n zero-trust-workload-identity-manager
  2. Create a RoleBinding resource to bind the router service account with the newly created Role resource by running the following command:

    $ oc create rolebinding secret-reader-binding \
      --role=secret-reader \
      --serviceaccount=openshift-ingress:router \
      -n zero-trust-workload-identity-manager
  3. Configure the SpireOIDCDiscoveryProvider custom resource (CR) to reference the externally managed TLS secret by running the following command:

    $ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p='
    spec:
      externalSecretRef: ${TLS_SECRET_NAME}
    '
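The commands in this procedure assume that the TLS_SECRET_NAME environment variable is already set to the name of your externally managed TLS secret, for example the spec.secretName of the cert-manager Certificate that issued the route certificate. The name below is hypothetical:

```shell
# Hypothetical secret name; use the secret created by your cert-manager Certificate.
export TLS_SECRET_NAME=oidc-discovery-tls
echo "$TLS_SECRET_NAME"
```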

Verification

  1. In the SpireOIDCDiscoveryProvider CR, check if the ManagedRouteReady condition is set to True by running the following command:

    $ oc wait --for=jsonpath='{.status.conditions[?(@.type=="ManagedRouteReady")].status}'=True SpireOIDCDiscoveryProvider/cluster --timeout=120s
  2. Verify that the OIDC endpoint can be accessed securely through HTTPS by running the following command:

    $ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration
    
    {
      "issuer": "https://$JWT_ISSUER_ENDPOINT",
      "jwks_uri": "https://$JWT_ISSUER_ENDPOINT/keys",
      "authorization_endpoint": "",
      "response_types_supported": [
        "id_token"
      ],
      "subject_types_supported": [],
      "id_token_signing_alg_values_supported": [
        "RS256",
        "ES256",
        "ES384"
      ]
    }
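External systems validate JWT-SVIDs against the keys served at the jwks_uri. To see what a relying party reads out of a token, you can base64url-decode the payload segment. The following sketch decodes a hypothetical, illustrative token rather than a real SVID:

```shell
# b64url_decode: undo base64url encoding (restore +/ characters and padding).
b64url_decode() {
  local s="${1//-/+}"
  s="${s//_//}"
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done
  printf '%s' "$s" | base64 -d
}

# Hypothetical JWT-SVID payload; a real token is issued by the SPIRE Agent.
PAYLOAD='{"iss":"https://oidc-discovery.example.com","sub":"spiffe://example.org/ns/demo/sa/workload-app","aud":["api://AzureADTokenExchange"]}'
TOKEN="header.$(printf '%s' "$PAYLOAD" | base64 | tr '+/' '-_' | tr -d '=\n').signature"

# A relying party reads the iss, sub, and aud claims from the middle segment:
b64url_decode "$(printf '%s' "$TOKEN" | cut -d. -f2)"
```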

10.5.1.2. Disabling a managed route

If you want to fully control the behavior of exposing the OIDC Discovery Provider service, you can disable the managed Route based on your requirements.

Procedure

  • To manually configure the OIDC Discovery Provider, set managedRoute to false by running the following command:

    $ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p='
    spec:
      managedRoute: "false"
    '
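With the managed route disabled, you expose the OIDC Discovery Provider service yourself. The following is a minimal sketch of a user-managed route; the service name, hostname, and TLS termination are assumptions that you must adapt to your cluster (verify the service name with oc get svc):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: spire-oidc-discovery-provider
  namespace: zero-trust-workload-identity-manager
spec:
  host: oidc-discovery.apps.example.com           # assumption: your issuer hostname
  to:
    kind: Service
    name: spire-spiffe-oidc-discovery-provider    # assumption: verify with `oc get svc`
  tls:
    termination: edge
```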

10.5.1.3. Using Entra ID with Microsoft Azure

After the Entra ID configuration is complete, you can set up Entra ID to work with Azure.

Prerequisites

  • You have configured the SPIRE OIDC Discovery Provider Route to serve the TLS certificates from a publicly trusted CA.

Procedure

  1. Log in to Azure by running the following command:

    $ az login
  2. Configure variables for your Azure subscription and tenant by running the following commands:

    $ export SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" -o tsv) 1
    $ export TENANT_ID=$(az account list --query "[?isDefault].tenantId" -o tsv) 2
    $ export LOCATION=centralus 3

    1 Your unique subscription identifier.
    2 The ID for your Azure Active Directory instance.
    3 The Azure region where your resource is created.
  3. Define resource variable names by running the following commands:

    $ export NAME=ztwim 1
    $ export RESOURCE_GROUP="${NAME}-rg" 2
    $ export STORAGE_ACCOUNT="${NAME}storage" 3
    $ export STORAGE_CONTAINER="${NAME}storagecontainer" 4
    $ export USER_ASSIGNED_IDENTITY_NAME="${NAME}-identity" 5

    1 A base name for all resources.
    2 The name of the resource group.
    3 The name for the storage account.
    4 The name for the storage container.
    5 The name for a managed identity.
  4. Create the resource group by running the following command:

    $ az group create \
      --name "${RESOURCE_GROUP}" \
      --location "${LOCATION}"

10.5.1.4. Configuring Azure blob storage

You need to create a new storage account to be used to store content.

Procedure

  1. Create a new storage account that is used to store content by running the following command:

    $ az storage account create \
      --name ${STORAGE_ACCOUNT} \
      --resource-group ${RESOURCE_GROUP} \
      --location ${LOCATION} \
      --encryption-services blob
  2. Obtain the storage ID for the newly created storage account by running the following command:

    $ export STORAGE_ACCOUNT_ID=$(az storage account show -n ${STORAGE_ACCOUNT} -g ${RESOURCE_GROUP} --query id --out tsv)
  3. Create a storage container inside the newly created storage account to provide a location to support the storage of blobs by running the following command:

    $ az storage container create \
      --account-name ${STORAGE_ACCOUNT} \
      --name ${STORAGE_CONTAINER} \
      --auth-mode login

10.5.1.5. Creating a User Managed Identity

You need to create a new User Managed Identity and then obtain the client ID of the related Service Principal associated with the User Managed Identity.

Procedure

  1. Create a new User Managed Identity by running the following command:

    $ az identity create \
      --name ${USER_ASSIGNED_IDENTITY_NAME} \
      --resource-group ${RESOURCE_GROUP}
  2. Retrieve the client ID of the User Managed Identity and save it as an environment variable by running the following command:

    $ export IDENTITY_CLIENT_ID=$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)
  3. Associate a role with the Service Principal associated with the User Managed Identity by running the following command:

    $ az role assignment create \
      --role "Storage Blob Data Contributor" \
      --assignee "${IDENTITY_CLIENT_ID}" \
      --scope ${STORAGE_ACCOUNT_ID}

10.5.1.6. Creating the demonstration application

The demonstration application provides a way to verify that the entire system works.

Procedure

To create the demonstration application, complete the following steps:

  1. Set the application name and namespace by running the following commands:

    $ export APP_NAME=workload-app
    $ export APP_NAMESPACE=demo
  2. Create the namespace by running the following command:

    $ oc create namespace $APP_NAMESPACE
  3. Create the application Secret by running the following command:

    $ oc apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    stringData:
      AAD_AUTHORITY: https://login.microsoftonline.com/
      AZURE_AUDIENCE: "api://AzureADTokenExchange"
      AZURE_TENANT_ID: "${TENANT_ID}"
      AZURE_CLIENT_ID: "${IDENTITY_CLIENT_ID}"
      BLOB_STORE_ACCOUNT: "${STORAGE_ACCOUNT}"
      BLOB_STORE_CONTAINER: "${STORAGE_CONTAINER}"
    EOF

10.5.1.7. Deploying the workload application

After the demonstration application is created, you can deploy the workload application.

Prerequisites

  • The demonstration application has been created and deployed.

Procedure

  1. Deploy the workload application by running the following command:

    $ oc apply -f - << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    spec:
      selector:
        matchLabels:
          app: $APP_NAME
      template:
        metadata:
          labels:
            app: $APP_NAME
            deployment: $APP_NAME
        spec:
          serviceAccountName: $APP_NAME
          containers:
            - name: $APP_NAME
              image: "registry.redhat.io/ubi9/python-311:latest"
              command:
                - /bin/bash
                - "-c"
                - |
                  #!/bin/bash
                  pip install spiffe azure-cli
    
                  cat << EOF > /opt/app-root/src/get-spiffe-token.py
                  #!/opt/app-root/bin/python
                  from spiffe import JwtSource
                  import argparse
                  parser = argparse.ArgumentParser(description='Retrieve SPIFFE Token.')
                  parser.add_argument("-a", "--audience", help="The audience to include in the token", required=True)
                  args = parser.parse_args()
                  with JwtSource() as source:
                    jwt_svid = source.fetch_svid(audience={args.audience})
                    print(jwt_svid.token)
                  EOF
    
                  chmod +x /opt/app-root/src/get-spiffe-token.py
                  while true; do sleep 10; done
              envFrom:
              - secretRef:
                  name: $APP_NAME
              env:
                - name: SPIFFE_ENDPOINT_SOCKET
                  value: unix:///run/spire/sockets/spire-agent.sock
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                readOnlyRootFilesystem: false
                runAsNonRoot: true
                seccompProfile:
                  type: RuntimeDefault
              ports:
                - containerPort: 8080
                  protocol: TCP
              volumeMounts:
                - name: spiffe-workload-api
                  mountPath: /run/spire/sockets
                  readOnly: true
          volumes:
            - name: spiffe-workload-api
              csi:
                driver: csi.spiffe.io
                readOnly: true
    EOF

Verification

  1. Ensure that the workload-app pod is running successfully by running the following command:

    $ oc get pods -n $APP_NAMESPACE

    Example output

    NAME                             READY     STATUS      RESTARTS      AGE
    workload-app-5f8b9d685b-abcde    1/1       Running     0             60s

  2. Retrieve the SPIFFE JWT token (JWT-SVID):

    1. Get the pod name dynamically by running the following command:

      $ POD_NAME=$(oc get pods -n $APP_NAMESPACE -l app=$APP_NAME -o jsonpath='{.items[0].metadata.name}')
    2. Run the script inside the pod by running the following command:

      $ oc exec -it $POD_NAME -n $APP_NAMESPACE -- \
        /opt/app-root/src/get-spiffe-token.py -a "api://AzureADTokenExchange"

10.5.1.8. Federating Azure with the SPIFFE identity

You can configure Azure with SPIFFE identity federation to enable password-free, automated authentication for the demonstration application.

Procedure

  • Federate the identities between the User Managed Identity and the SPIFFE identity associated with the workload application by running the following command:

    $ az identity federated-credential create \
     --name ${NAME} \
     --identity-name ${USER_ASSIGNED_IDENTITY_NAME} \
     --resource-group ${RESOURCE_GROUP} \
     --issuer https://$JWT_ISSUER_ENDPOINT \
     --subject spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME \
     --audience api://AzureADTokenExchange
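The --subject value is not arbitrary: it is the SPIFFE ID that SPIRE derives from the trust domain, namespace, and service account of the workload. A sketch with hypothetical values:

```shell
# Hypothetical values mirroring the variables used in this procedure.
APP_DOMAIN=example.org
APP_NAMESPACE=demo
APP_NAME=workload-app

SUBJECT="spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME"
echo "$SUBJECT"
```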

10.5.1.9. Verifying access to Azure Blob Storage

You can check whether the application workload can access Azure Blob Storage.

Prerequisites

  • An Azure Blob Storage container has been created.

Procedure

  1. Open a remote shell into the application workload pod by running the following command:

    $ oc rsh -n $APP_NAMESPACE deployment/$APP_NAME
  2. Retrieve a JWT token from the SPIFFE Workload API and export it as the TOKEN environment variable by running the following command:

    $ export TOKEN=$(/opt/app-root/src/get-spiffe-token.py --audience=$AZURE_AUDIENCE)
  3. Log in to Azure by using the Azure CLI included within the pod by running the following command:

    $ az login --service-principal \
      -t ${AZURE_TENANT_ID} \
      -u ${AZURE_CLIENT_ID} \
      --federated-token ${TOKEN}
  4. Create a new file within the application workload pod by running the following command:

    $ echo "Hello from OpenShift" > openshift-spire-federated-identities.txt
  5. Upload the file to Azure Blob Storage by running the following command:

    $ az storage blob upload \
      --account-name ${BLOB_STORE_ACCOUNT} \
      --container-name ${BLOB_STORE_CONTAINER} \
      --name openshift-spire-federated-identities.txt \
      --file openshift-spire-federated-identities.txt \
      --auth-mode login

Verification

  • Confirm that the file uploaded successfully by listing the contents of the container by running the following command:

    $ az storage blob list \
      --account-name ${BLOB_STORE_ACCOUNT} \
      --container-name ${BLOB_STORE_CONTAINER} \
      --auth-mode login \
      -o table

10.5.2. About Vault OpenID Connect

Vault OpenID Connect (OIDC) with SPIRE creates a secure authentication method where Vault uses SPIRE as a trusted OIDC provider. A workload requests a JWT-SVID from its local SPIRE Agent, which has a unique SPIFFE ID. The workload then presents this token to Vault, and Vault validates it against the public keys on the SPIRE Server. If all conditions are met, Vault issues a short-lived Vault token to the workload which the workload can now use to access secrets and perform actions within Vault.

10.5.2.1. Installing Vault

Before you can use Vault as an OIDC provider, you must install Vault.

Prerequisites

  • You have configured a route. For more information, see Route configuration.
  • Helm is installed.
  • A command-line JSON processor, such as jq, is installed for reading the output from the Vault API.
  • The HashiCorp Helm repository is added.

Procedure

  1. Create the vault-helm-value.yaml file:

    global:
      enabled: true
      openshift: true 1
      tlsDisable: true 2
    injector:
      enabled: false
    server:
      ui:
        enabled: true
      image:
        repository: docker.io/hashicorp/vault
        tag: "1.19.0"
      dataStorage:
        enabled: true 3
        size: 1Gi
      standalone:
        enabled: true 4
        config: |
          listener "tcp" {
            tls_disable = 1 5
            address = "[::]:8200"
            cluster_address = "[::]:8201"
          }
          storage "file" {
            path = "/vault/data"
          }
      extraEnvironmentVars: {}

    1 Optimizes the deployment for OpenShift-specific security contexts.
    2 Disables TLS for Kubernetes objects created by the chart.
    3 Creates a 1Gi persistent volume to store Vault data.
    4 Deploys a single Vault pod.
    5 Tells the Vault server not to use TLS.
  2. Run the helm install command:

    $ helm install vault hashicorp/vault \
      --create-namespace -n vault \
      --values ./vault-helm-value.yaml
  3. Expose the Vault service by running the following command:

    $ oc expose service vault -n vault
  4. Set the VAULT_ADDR environment variable to retrieve the hostname from the new route and then export it by running the following command:

    $ export VAULT_ADDR="http://$(oc get route vault -n vault -o jsonpath='{.spec.host}')"
    Note

    http:// is prepended because TLS is disabled.

Verification

  • To ensure your Vault instance is running, run the following command:

    $ curl -s $VAULT_ADDR/v1/sys/health | jq

    Example output

    {
      "initialized": true,
      "sealed": true,
      "standby": true,
      "performance_standby": false,
      "replication_performance_mode": "disabled",
      "replication_dr_mode": "disabled",
      "server_time_utc": 1663786574,
      "version": "1.19.0",
      "cluster_name": "vault-cluster-a1b2c3d4",
      "cluster_id": "5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b"
    }
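A fresh install reports "sealed": true, and the instance cannot serve requests until it is unsealed. A scripted check of the health document might look like the following sketch; the inline JSON is illustrative, and in practice you would pipe the curl output instead:

```shell
# Illustrative health document; in practice: HEALTH=$(curl -s $VAULT_ADDR/v1/sys/health)
HEALTH='{"initialized": true, "sealed": true, "standby": true}'

if printf '%s' "$HEALTH" | grep -q '"sealed": true'; then
  SEALED=yes
else
  SEALED=no
fi
echo "sealed=$SEALED"
```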

10.5.2.2. Initializing and unsealing Vault

A newly installed Vault is sealed. This means that the primary encryption key, which protects all other encryption keys, is not loaded into the server memory upon startup. You must initialize Vault and then unseal it.

The overall workflow is as follows:

  1. Initialize and unseal Vault
  2. Enable the key-value (KV) secrets engine and store a test secret
  3. Configure JSON Web Token (JWT) authentication with SPIRE
  4. Deploy a demonstration application
  5. Authenticate and retrieve the secret

Prerequisites

  • Ensure that Vault is running.
  • Ensure that Vault is not initialized. You can only initialize a Vault server once.

Procedure

  1. Open a remote shell into the vault pod by running the following command:

    $ oc rsh -n vault statefulset/vault
  2. Initialize Vault to get your unseal key and root token by running the following command:

    $ vault operator init -key-shares=1 -key-threshold=1 -format=json
  3. Export the unseal key and root token you received from the earlier command by running the following commands:

    $ export UNSEAL_KEY=<Your-Unseal-Key>
    $ export ROOT_TOKEN=<Your-Root-Token>
  4. Unseal Vault using your unseal key by running the following command:

    $ vault operator unseal -format=json $UNSEAL_KEY
  5. Exit the pod by entering exit.
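Because the init command was run with -format=json, the unseal key and root token can be pulled out of the output programmatically instead of by copy-paste. A sketch with a fake document; real output contains live credentials, so handle it carefully:

```shell
# Fake init output for illustration; never commit real unseal keys or tokens.
INIT_JSON='{"unseal_keys_b64":["abc123=="],"root_token":"hvs.example"}'

UNSEAL_KEY=$(printf '%s' "$INIT_JSON" | sed -n 's/.*"unseal_keys_b64":\["\([^"]*\)".*/\1/p')
ROOT_TOKEN=$(printf '%s' "$INIT_JSON" | sed -n 's/.*"root_token":"\([^"]*\)".*/\1/p')
echo "$UNSEAL_KEY $ROOT_TOKEN"
```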

Verification

  • To verify that the Vault pod is ready, run the following command:

    $ oc get pod -n vault

    Example output

    NAME        READY        STATUS      RESTARTS     AGE
    vault-0     1/1          Running     0            65d

10.5.2.3. Enabling the key-value secrets engine

You enable the key-value (KV) secrets engine to establish a secure, centralized location for managing credentials.

Prerequisites

  • Make sure that Vault is initialized and unsealed.

Procedure

  1. Open another shell session in the Vault pod by running the following command:

    $ oc rsh -n vault statefulset/vault
  2. Export your root token again within this new session and log in by running the following command:

    $ export ROOT_TOKEN=<Your-Root-Token>
    $ vault login "${ROOT_TOKEN}"
  3. Enable the KV secrets engine at the secret/ path and create a test secret by running the following commands:

    $ export NAME=ztwim
    $ vault secrets enable -path=secret kv
    $ vault kv put secret/$NAME version=v0.1.0

Verification

  • To verify that the secret is stored correctly, run the following command:

    $ vault kv get secret/$NAME

10.5.2.4. Configuring JWT authentication with SPIRE

You need to set up JSON Web Token (JWT) authentication so that your applications can securely log in to Vault by using SPIFFE identities.

Prerequisites

  • Make sure that Vault is initialized and unsealed.
  • Ensure that a test secret is stored in the key-value secrets engine.

Procedure

  1. On your local machine, retrieve the SPIRE Certificate Authority (CA) bundle and save it to a file by running the following command:

    $ oc get cm -n zero-trust-workload-identity-manager spire-bundle -o jsonpath='{ .data.bundle\.crt }' > oidc_provider_ca.pem
  2. Back in the Vault pod shell, create a temporary file and paste the contents of oidc_provider_ca.pem into it by running the following command:

    $ cat << EOF > /tmp/oidc_provider_ca.pem
    -----BEGIN CERTIFICATE-----
    <Paste-Your-Certificate-Content-Here>
    -----END CERTIFICATE-----
    EOF
  3. Set up the necessary environment variables for the JWT configuration by running the following commands:

    $ export APP_DOMAIN=<Your-App-Domain>
    $ export JWT_ISSUER_ENDPOINT="oidc-discovery.$APP_DOMAIN"
    $ export OIDC_URL="https://$JWT_ISSUER_ENDPOINT"
    $ export OIDC_CA_PEM="$(cat /tmp/oidc_provider_ca.pem)"
  4. Create a new environment variable by running the following command:

    $ export ROLE="${NAME}-role"
  5. Enable the JWT authentication method by running the following command:

    $ vault auth enable jwt
  6. Configure the OIDC authentication method by running the following command:

    $ vault write auth/jwt/config \
      oidc_discovery_url=$OIDC_URL \
      oidc_discovery_ca_pem="$OIDC_CA_PEM" \
      default_role=$ROLE
  7. Define the policy name, for example ztwim-policy, by running the following command:

    $ export POLICY="${NAME}-policy"
  8. Grant read access to the secret you created earlier by running the following command:

    $ vault policy write $POLICY -<<EOF
    path "secret/$NAME" {
        capabilities = ["read"]
    }
    EOF
  9. Create the following environment variables by running the following commands:

    $ export APP_NAME=client
    $ export APP_NAMESPACE=demo
    $ export AUDIENCE=$APP_NAME
  10. Create a JWT role that binds the policy to workload with a specific SPIFFE ID by running the following command:

    $ vault write auth/jwt/role/$ROLE -<<EOF
    {
      "role_type": "jwt",
      "user_claim": "sub",
      "bound_audiences": "$AUDIENCE",
      "bound_claims_type": "glob",
      "bound_claims": {
        "sub": "spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME"
      },
      "token_ttl": "24h",
      "token_policies": "$POLICY"
    }
    EOF
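The role admits a login only when the token's sub claim matches the bound_claims pattern; with bound_claims_type set to glob, the comparison behaves like shell pattern matching. A local sketch of that check, with hypothetical values (the wildcard pattern is an illustration, not the exact claim bound above):

```shell
# Hypothetical claim and bound glob pattern.
SUB_CLAIM="spiffe://example.org/ns/demo/sa/client"
BOUND_SUB="spiffe://example.org/ns/demo/sa/*"

# Shell case patterns use the same glob semantics.
case "$SUB_CLAIM" in
  $BOUND_SUB) MATCH=yes ;;
  *)          MATCH=no ;;
esac
echo "match=$MATCH"
```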

10.5.2.5. Deploying a demonstration application

When you deploy a demonstration application, you create a simple client application that uses its SPIFFE identity to authenticate with Vault.

Procedure

  1. On your local machine, set the environment variables for your application by running the following commands:

    $ export APP_NAME=client
    $ export APP_NAMESPACE=demo
    $ export AUDIENCE=$APP_NAME
  2. Apply the Kubernetes manifest to create the namespace, service account, and deployment for the demo app by running the following command. This deployment mounts the SPIFFE CSI driver socket.

    $ oc apply -f - <<EOF
    # ... (paste the full YAML from your provided code here) ...
    EOF

Verification

  • Verify that the client deployment is ready by running the following command:

    $ oc get deploy -n $APP_NAMESPACE

    Example output

    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    client    1/1     1            1           60s

10.5.2.6. Authenticating and retrieving the secret

You use the demonstration application to fetch a JWT token from the SPIFFE Workload API and use it to log in to Vault and retrieve the secret.

Procedure

  1. Fetch a JWT-SVID by running the following command inside the running client pod:

    $ oc -n $APP_NAMESPACE exec -it $(oc get pod -o=jsonpath='{.items[*].metadata.name}' -l app=$APP_NAME -n $APP_NAMESPACE) \
      -- /opt/spire/bin/spire-agent api fetch jwt \
      -socketPath /run/spire/sockets/spire-agent.sock \
      -audience $AUDIENCE
  2. Copy the token from the output and export it as an environment variable on your local machine by running the following command:

    $ export IDENTITY_TOKEN=<Your-JWT-Token>
  3. Create a new environment variable by running the following command:

    $ export ROLE="${NAME}-role"
  4. Use curl to send the JWT token to the Vault login endpoint to get a Vault client token by running the following command:

    $ VAULT_TOKEN=$(curl -s --request POST --data '{ "jwt": "'"${IDENTITY_TOKEN}"'", "role": "'"${ROLE}"'"}' "${VAULT_ADDR}"/v1/auth/jwt/login | jq -r '.auth.client_token')

Verification

  • Use the newly acquired Vault token to read the secret from the KV store by running the following command:

    $ curl -s -H "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/$NAME | jq

    You should see the contents of the secret ("version": "v0.1.0") in the output, confirming that the entire workflow is successful.
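The value itself sits under the .data field of the KV version 1 response. Extracting it programmatically, with an illustrative response document standing in for the real API output:

```shell
# Illustrative KV v1 response; in practice pipe the curl output instead.
RESPONSE='{"data":{"version":"v0.1.0"},"lease_duration":2764800}'

SECRET_VERSION=$(printf '%s' "$RESPONSE" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$SECRET_VERSION"
```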

10.6. Pausing reconciliation by using the create-only mode

By enabling the create-only mode, you can pause the Operator reconciliation, which allows you to perform manual configurations or debug without the controller overwriting your changes. You do this by annotating the API resources that are managed by the Operator. The following scenarios are examples of when the create-only mode might be useful:

  • Manual customization required: You need to customize Operator-managed resources, such as ConfigMaps, Deployments, or DaemonSets, with specific configurations that differ from the Operator defaults.
  • Day 2 operations: After initial deployment, you want to prevent the Operator from overwriting your manual changes during subsequent reconciliation cycles.
  • Configuration drift prevention: You want to maintain control over certain resource configurations while still benefiting from the Operator lifecycle management.

Reconciliation by annotation supports the SpireServer, SpireAgent, SpiffeCSIDriver, SpireOIDCDiscoveryProvider, and the ZeroTrustWorkloadIdentityManager custom resources. You can pause the reconciliation process by adding an annotation.

Prerequisites

  • You have installed Zero Trust Workload Identity Manager on your machine.
  • The SPIRE Server, SPIRE Agent, SPIFFE Container Storage Interface (CSI) driver, and the OpenID Connect (OIDC) Discovery Provider operands are deployed and in a running state.

Procedure

  • To pause reconciling the SpireServer custom resource, add the create-only annotation to the named cluster by running the following command:

    $ oc annotate SpireServer cluster -n zero-trust-workload-identity-manager ztwim.openshift.io/create-only=true

Verification

  • Check the status of the SpireServer resource to confirm that the create-only mode is active. The status must be True and the reason must be CreateOnlyModeEnabled.

    $ oc get SpireServer cluster -o yaml

Example output

status:
  conditions:
  - lastTransitionTime: "2025-09-03T12:13:39Z"
    message: Create-only mode is enabled via ztwim.openshift.io/create-only annotation
    reason: CreateOnlyModeEnabled
    status: "True"
    type: CreateOnlyMode

Procedure

Follow these steps to restart the reconciliation process:

  1. Run the oc annotate command, adding a hyphen (-) at the end of the annotation name. This removes the annotation from the cluster resource.

    $ oc annotate SpireServer cluster -n zero-trust-workload-identity-manager ztwim.openshift.io/create-only-
  2. Restart the controller by running the following command:

    $ oc rollout restart deploy/zero-trust-workload-identity-manager-controller-manager -n zero-trust-workload-identity-manager

Verification

  • Check the status of the SpireServer resource to confirm that the create-only mode is disabled. The status must be False and the reason must be CreateOnlyModeDisabled.

    $ oc get SpireServer cluster -o yaml

Example output

status:
  conditions:
  - lastTransitionTime: "2025-09-03T12:13:39Z"
    message: Create-only mode is disabled
    reason: CreateOnlyModeDisabled
    status: "False"
    type: CreateOnlyMode

After create-only mode is enabled, it persists until the Operator pod restarts, even if the annotation is removed. To exit this mode, remove the annotation and restart the Operator pod.

By default, the SPIRE Server and SPIRE Agent components of the Zero Trust Workload Identity Manager emit metrics in the Prometheus format. You can configure OpenShift Monitoring to collect these metrics by using the Prometheus Operator.

10.7.1. Enabling user workload monitoring

You can enable monitoring for user-defined projects by configuring user workload monitoring in the cluster.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure

  1. Create the cluster-monitoring-config.yaml file to define and configure the ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
  2. Apply the ConfigMap by running the following command:

    $ oc apply -f cluster-monitoring-config.yaml

Verification

  • Verify that the monitoring components for user workloads are running in the openshift-user-workload-monitoring namespace:

    $ oc -n openshift-user-workload-monitoring get pod

    Example output

    NAME                                   READY   STATUS    RESTARTS   AGE
    prometheus-operator-6cb6bd9588-dtzxq   2/2     Running   0          50s
    prometheus-user-workload-0             6/6     Running   0          48s
    prometheus-user-workload-1             6/6     Running   0          48s
    thanos-ruler-user-workload-0           4/4     Running   0          42s
    thanos-ruler-user-workload-1           4/4     Running   0          42s

The status of the pods such as prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload must be Running.
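You can also confirm from the command line that the cluster monitoring stack accepted the configuration. The following check is a sketch that assumes a logged-in session with the cluster-admin cluster role; the output must include enableUserWorkload: true:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config -o jsonpath='{.data.config\.yaml}'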

The SPIRE Server operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Server by creating a ServiceMonitor custom resource (CR) that enables the Prometheus Operator to collect custom metrics.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Server operand in the cluster.
  • You have enabled user workload monitoring.

Procedure

  1. Create the ServiceMonitor CR:

    1. Create the YAML file that defines the ServiceMonitor CR:

      Example servicemonitor-spire-server.yaml file

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          app.kubernetes.io/name: server
          app.kubernetes.io/instance: spire
        name: spire-server-metrics
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
        selector:
          matchLabels:
            app.kubernetes.io/name: server
            app.kubernetes.io/instance: spire
        namespaceSelector:
          matchNames:
          - zero-trust-workload-identity-manager

    2. Create the ServiceMonitor CR by running the following command:

      $ oc create -f servicemonitor-spire-server.yaml

      After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Server. The collected metrics are labeled with job="spire-server".

Verification

  1. In the OpenShift Container Platform web console, navigate to Observe → Targets.
  2. In the Label filter field, enter the following label to filter the metrics targets:

    service=spire-server
  3. Confirm that the Status column shows Up for the spire-server-metrics entry.

The SPIRE Agent operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Agent by creating a ServiceMonitor custom resource (CR), which enables the Prometheus Operator to collect custom metrics.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Agent operand in the cluster.
  • You have enabled user workload monitoring.

Procedure

  1. Create the ServiceMonitor CR:

    1. Create the YAML file that defines the ServiceMonitor CR:

      Example servicemonitor-spire-agent.yaml file

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          app.kubernetes.io/name: agent
          app.kubernetes.io/instance: spire
        name: spire-agent-metrics
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
        selector:
          matchLabels:
            app.kubernetes.io/name: agent
            app.kubernetes.io/instance: spire
        namespaceSelector:
          matchNames:
          - zero-trust-workload-identity-manager

    2. Create the ServiceMonitor CR by running the following command:

      $ oc create -f servicemonitor-spire-agent.yaml

      After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Agent. The collected metrics are labeled with job="spire-agent".

Verification

  1. In the OpenShift Container Platform web console, navigate to Observe → Targets.
  2. In the Label filter field, enter the following label to filter the metrics targets:

    service=spire-agent
  3. Confirm that the Status column shows Up for the spire-agent-metrics entry.
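With both scrape targets in place, you can optionally alert when either one goes down. The following PrometheusRule is a minimal sketch, not a product default; the resource name, duration, and severity are illustrative, and up is the standard Prometheus per-target health metric:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: spire-metrics-alerts
      namespace: zero-trust-workload-identity-manager
    spec:
      groups:
      - name: spire.rules
        rules:
        - alert: SpireMetricsTargetDown
          expr: up{job=~"spire-server|spire-agent"} == 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: A SPIRE metrics target has been down for 5 minutes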

As a cluster administrator, or as a user with view access to all namespaces, you can query SPIRE Agent and SPIRE Server metrics by using the OpenShift Container Platform web console or the command line. The query retrieves all the metrics collected from the SPIRE components that match the specified job labels.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Server and SPIRE Agent operands in the cluster.
  • You have enabled monitoring and metrics collection by creating ServiceMonitor objects.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Observe → Metrics.
  2. In the query field, enter the following PromQL expression to query SPIRE Server metrics:

    {job="spire-server"}
  3. In the query field, enter the following PromQL expression to query SPIRE Agent metrics:

    {job="spire-agent"}
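You can also run the same queries from the command line through the thanos-querier route in the openshift-monitoring namespace, which is part of the default OpenShift Monitoring installation. The following commands are a sketch that assumes a logged-in oc session with permission to query user workload metrics:

    $ TOKEN=$(oc whoami -t)
    $ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
    $ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" --data-urlencode 'query={job="spire-server"}'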

You can remove the Zero Trust Workload Identity Manager from OpenShift Container Platform by uninstalling the Operator and removing its related resources.

You can uninstall the Zero Trust Workload Identity Manager by using the web console.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • The Zero Trust Workload Identity Manager is installed.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Uninstall the Zero Trust Workload Identity Manager.

    1. Go to Operators → Installed Operators.
    2. Click the Options menu next to the Zero Trust Workload Identity Manager entry, and then click Uninstall Operator.
    3. In the confirmation dialog, click Uninstall.

After you uninstall the Zero Trust Workload Identity Manager, you can optionally delete its associated resources from your cluster.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. Delete the operands and their associated resources by running each of the following commands:

    1. Delete the ZeroTrustWorkloadIdentityManager cluster by running the following command:

      $ oc delete ZeroTrustWorkloadIdentityManager cluster
    2. Delete the SpireOIDCDiscoveryProvider cluster by running the following command:

      $ oc delete SpireOIDCDiscoveryProvider cluster
    3. Delete the SpiffeCSIDriver cluster by running the following command:

      $ oc delete SpiffeCSIDriver cluster
    4. Delete the SpireAgent cluster by running the following command:

      $ oc delete SpireAgent cluster
    5. Delete the SpireServer cluster by running the following command:

      $ oc delete SpireServer cluster
    6. Delete the Persistent Volume Claim (PVC) by running the following command:

      $ oc delete pvc -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    7. Delete the CSI Driver by running the following command:

      $ oc delete csidriver -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    8. Delete the service by running the following command:

      $ oc delete service -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    9. Delete the namespace by running the following command:

      $ oc delete ns zero-trust-workload-identity-manager
    10. Delete the cluster role binding by running the following command:

      $ oc delete clusterrolebinding -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    11. Delete the cluster role by running the following command:

      $ oc delete clusterrole -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    12. Delete the admission webhook configuration by running the following command:

      $ oc delete validatingwebhookconfigurations -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
  2. Delete the custom resource definitions (CRDs) by running each of the following commands:

    1. Delete the SPIRE Server CRD by running the following command:

      $ oc delete crd spireservers.operator.openshift.io
    2. Delete the SPIRE Agent CRD by running the following command:

      $ oc delete crd spireagents.operator.openshift.io
    3. Delete the SPIFFE CSI Driver CRD by running the following command:

      $ oc delete crd spiffecsidrivers.operator.openshift.io
    4. Delete the SPIRE OIDC Discovery Provider CRD by running the following command:

      $ oc delete crd spireoidcdiscoveryproviders.operator.openshift.io
    5. Delete the SPIRE and SPIFFE cluster federated trust domains CRD by running the following command:

      $ oc delete crd clusterfederatedtrustdomains.spire.spiffe.io
    6. Delete the cluster SPIFFE IDs CRD by running the following command:

      $ oc delete crd clusterspiffeids.spire.spiffe.io
    7. Delete the SPIRE and SPIFFE cluster static entries CRD by running the following command:

      $ oc delete crd clusterstaticentries.spire.spiffe.io
    8. Delete the Zero Trust Workload Identity Manager CRD by running the following command:

      $ oc delete crd zerotrustworkloadidentitymanagers.operator.openshift.io

Verification

To verify that the resources have been deleted, replace each oc delete command with oc get, and then run the command. If no resources are returned, the deletion was successful.
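For example, the following commands are a sketch of that check for the labeled cluster-scoped resources and the CRDs; both must return no results:

    $ oc get clusterrole,clusterrolebinding,csidriver -l app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    $ oc get crd | grep -iE 'spire|spiffe|zerotrustworkloadidentitymanager'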
