
Chapter 10. Zero Trust Workload Identity Manager


10.1. Zero Trust Workload Identity Manager overview

The Zero Trust Workload Identity Manager is an OpenShift Container Platform Operator that manages the lifecycle of SPIFFE Runtime Environment (SPIRE) components. It enables workload identity management based on the Secure Production Identity Framework for Everyone (SPIFFE) standard, providing cryptographically verifiable identities (SVIDs) to workloads running in OpenShift Container Platform clusters.

The following are components of the Zero Trust Workload Identity Manager architecture:

10.1.1. SPIFFE

Secure Production Identity Framework for Everyone (SPIFFE) provides a standardized way to establish trust between software workloads in distributed systems. SPIFFE assigns unique IDs called SPIFFE IDs. These IDs are Uniform Resource Identifiers (URI) that include a trust domain and a workload identifier.

The SPIFFE IDs are contained in the SPIFFE Verifiable Identity Document (SVID). SVIDs are used by workloads to verify their identity to other workloads so that the workloads can communicate with each other. The two main SVID formats are:

  • X.509-SVIDs: X.509 certificates where the SPIFFE ID is embedded in the Subject Alternative Name (SAN) field.
  • JWT-SVIDs: JSON Web Tokens (JWTs) where the SPIFFE ID is included as the sub claim.

For more information, see SPIFFE Overview.
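For example, the parts of a SPIFFE ID look like the following (the trust domain and path shown are illustrative):

```text
spiffe://example.org/ns/payments/sa/api-server
|        |          |
|        |          +-- workload identifier (path)
|        +-- trust domain
+-- scheme
```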

10.1.2. SPIRE Server

A SPIRE Server is responsible for managing and issuing SPIFFE identities within a trust domain. It stores registration entries (selectors that determine under what conditions a SPIFFE ID should be issued) and signing keys. The SPIRE Server works in conjunction with the SPIRE Agent to perform node attestation via node plugins. For more information, see About the SPIRE Server.

10.1.3. SPIRE Agent

The SPIRE Agent is responsible for workload attestation, ensuring that workloads receive a verified identity when requesting authentication through the SPIFFE Workload API. It accomplishes this by using configured workload attestor plugins. In Kubernetes environments, the Kubernetes workload attestor plugin is used.

SPIRE and the SPIRE Agent perform node attestation via node plugins. The plugins are used to verify the identity of the node on which the agent is running. For more information, see About the SPIRE Agent.

10.1.4. Attestation

Attestation is the process by which the identities of nodes and workloads are verified before SPIFFE IDs and SVIDs are issued. The SPIRE Server gathers attributes of both the workload and the node that the SPIRE Agent runs on, and then compares them to a set of selectors defined when the workload was registered. If the comparison is successful, the entities are provided with credentials. This ensures that only legitimate and expected entities within the trust domain receive cryptographic identities. The two main types of attestation in SPIFFE/SPIRE are:

  • Node attestation: verifies the identity of a machine or a node on a system, before a SPIRE Agent running on that node can be trusted to request identities for workloads.
  • Workload attestation: verifies the identity of an application or service running on an attested node before the SPIRE Agent on that node can provide it with a SPIFFE ID and SVID.

For more information, see Attestation.

10.2. Zero Trust Workload Identity Manager components

Review the components available in the initial release of Zero Trust Workload Identity Manager to understand the architecture. These components provide the foundation for identifying and securing your workloads.

10.2.1. SPIFFE CSI Driver

The SPIFFE Container Storage Interface (CSI) driver helps pods securely obtain their SPIFFE Verifiable Identity Document (SVID) by delivering the Workload API socket. By using Kubernetes ephemeral inline volumes, the driver simplifies how applications request temporary storage for identity management.

When the pod starts, the Kubelet calls the SPIFFE CSI driver to provision and mount a volume into the pod’s containers. The SPIFFE CSI driver mounts a directory that contains the SPIFFE Workload API into the pod. Applications in the pod then communicate with the Workload API to obtain their SVIDs. The driver guarantees that each SVID is unique.
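As a sketch, a pod can request the Workload API socket through an ephemeral inline CSI volume. The driver name shown below is the common SPIFFE CSI default, and the pod name, image, and mount path are illustrative assumptions; the plugin name in your cluster may differ if it was customized:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-workload                       # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest     # illustrative image
    env:
    - name: SPIFFE_ENDPOINT_SOCKET             # where the app finds the Workload API
      value: unix:///spiffe-workload-api/spire-agent.sock
    volumeMounts:
    - name: spiffe-workload-api
      mountPath: /spiffe-workload-api
      readOnly: true
  volumes:
  - name: spiffe-workload-api
    csi:
      driver: csi.spiffe.io                    # assumed default plugin name
      readOnly: true
```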

10.2.2. SPIRE OpenID Connect Discovery Provider

Use the SPIRE OpenID Connect (OIDC) Discovery Provider to integrate SPIRE workload identities with OIDC-compliant systems. This component exposes endpoints for token verification. It helps ensure compatibility between SPIRE-issued credentials and external APIs requiring standard OIDC tokens.

While SPIRE primarily issues identities for workloads, additional workload-related claims can be embedded into JWT-SVIDs through SPIRE configuration, which allows these claims to be included in the token and verified by OIDC-compliant clients.

10.2.3. SPIRE Controller Manager

Use the SPIRE Controller Manager to automate workload registration with custom resource definitions (CRDs). The manager monitors pods and CRDs to create, update, or delete entries on the SPIRE Server. This process helps ensure that your SPIRE entries accurately reflect your active resources.

The SPIRE Controller Manager is designed to be deployed on the same pod as the SPIRE Server. The manager communicates with the SPIRE Server API using a private UNIX Domain Socket within a shared volume.
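For illustration, the upstream SPIRE Controller Manager defines CRDs such as ClusterSPIFFEID that map pods to SPIFFE IDs. The following is a minimal sketch; the resource name, template, and labels are illustrative assumptions:

```yaml
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: example-workloads            # illustrative name
spec:
  # Template expanded per pod to produce each SPIFFE ID
  spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
  podSelector:
    matchLabels:
      identity: enabled              # only matching pods get registration entries
```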

10.2.4. SPIRE Server and Agent telemetry

The SPIRE Server and SPIRE Agent emit telemetry that you can use to monitor the health and performance of the SPIRE components. Telemetry is supported through Prometheus.

10.2.5. About the Zero Trust Workload Identity Manager workflow

Understand the high-level workflow of Zero Trust Workload Identity Manager to help you manage secure identities. This process relies on SPIRE components and custom resource definitions (CRDs) to validate nodes and workloads.

The following is a high-level workflow of the Zero Trust Workload Identity Manager within the Red Hat OpenShift cluster.

  1. The SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands are deployed and managed by Zero Trust Workload Identity Manager via associated custom resource definitions (CRDs).
  2. Watches are then registered for relevant Kubernetes resources and the necessary SPIRE CRDs are applied to the cluster.
  3. The CR for the ZeroTrustWorkloadIdentityManager resource named cluster is deployed and managed by a controller.
  4. To deploy the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and SPIRE OIDC Discovery Provider, you need to create a custom resource of each type and name it cluster. The custom resource types are as follows:

    • SPIRE Server - SpireServer
    • SPIRE Agent - SpireAgent
    • SPIFFE CSI Driver - SpiffeCSIDriver
    • SPIRE OIDC discovery provider - SpireOIDCDiscoveryProvider
  5. When a node starts, the SPIRE Agent initializes, and connects to the SPIRE Server.
  6. The SPIRE Agent begins the node attestation process. The agent collects information on the node’s identity such as label name and namespace. The agent securely provides the information it gathered through the attestation to the SPIRE Server.
  7. The SPIRE Server then evaluates this information against its configured attestation policies and registration entries. If successful, the server generates an agent SVID and the Trust Bundle (CA Certificate) and securely sends this back to the SPIRE Agent.
  8. A workload starts on the node and needs a secure identity. The workload connects to the agent’s Workload API and requests a SVID.
  9. The SPIRE Agent receives the request and begins a workload attestation to gather information about the workload.
  10. After the SPIRE Agent gathers the information, the information is sent to the SPIRE Server, and the server checks it against its configured registration entries. If a matching entry is found, the server issues a workload SVID.
  11. The SPIRE Agent receives the workload SVID and Trust Bundle and passes them on to the workload. The workload can now present its SVID to other SPIFFE-aware services to communicate with them.

10.3. Zero Trust Workload Identity Manager release notes

The Zero Trust Workload Identity Manager leverages Secure Production Identity Framework for Everyone (SPIFFE) and the SPIFFE Runtime Environment (SPIRE) to provide a comprehensive identity management solution for distributed systems.

These release notes track the development of Zero Trust Workload Identity Manager.

Issued: 2025-12-17

This release introduces capabilities for enterprise readiness, security, and operational flexibility. It includes SPIRE federation for cross-cluster identity, PostgreSQL support for production persistence, and enhanced security through stricter constraints and API validation.

The following advisories are available for the Zero Trust Workload Identity Manager.

Zero Trust Workload Identity Manager supports the following components and versions:

Component                        Version

SPIRE Server                     1.13.3
SPIRE Agent                      1.13.3
SPIRE Controller Manager         0.6.3
SPIRE OIDC Discovery Provider    1.13.3
SPIFFE CSI Driver                0.2.8

10.3.1.1. New features and enhancements

SPIRE federation support

The Operator now includes support for SPIRE federation, enabling workloads across distinct trust domains to securely communicate and authenticate with each other.

  • Key capabilities:

    • Configuration of bundle endpoints using https_spiffe (TLS) or https_web (Web PKI) profiles.
    • Automatic certificate management via the ACME protocol (e.g., Let’s Encrypt).
    • Automatic OpenShift Container Platform route creation for federation endpoints.
    • Ability to configure relationships with multiple federated trust domains.
  • Customer action required:

    • Review the federation configuration within the SpireServer Custom Resource (CR).
    • Ensure proper DNS resolution and network connectivity to federated trust domains.
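A hedged sketch of what a federation configuration in the SpireServer CR might look like, based on the capabilities listed above. The exact field names, nesting, and values are assumptions; consult the SpireServer CRD schema for the authoritative API:

```yaml
spec:
  federation:
    bundleEndpoint:
      profile: "https_web"          # or "https_spiffe"; ACME (for example, Let's Encrypt)
                                    # applies to the https_web profile
    federatesWith:
    - trustDomain: "partner.example.org"                    # illustrative federated domain
      bundleEndpointURL: "https://spire.partner.example.org"
      bundleEndpointProfile: "https_spiffe"
```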
PostgreSQL database support

SPIRE Server now supports PostgreSQL as an external database backend, accommodating production deployments that necessitate enterprise-grade data persistence and high availability.

  • Supported Types: sqlite3 (default), postgres, mysql.
  • Customer action required:

    • For production, evaluation of migration from SQLite to PostgreSQL is recommended.
    • Creation and configuration of Kubernetes Secrets for database TLS certificates and credentials are required.
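For example, a datastore configuration in the SpireServer CR pointing at PostgreSQL might look like the following. The field names match the SpireServer example later in this chapter; the host, database name, and Secret name are illustrative:

```yaml
spec:
  datastore:
    databaseType: "postgres"
    # Standard PostgreSQL connection string; credentials are typically
    # injected from a Kubernetes Secret rather than written inline.
    connectionString: "dbname=spire host=postgres.example.com port=5432 sslmode=verify-full"
    tlsSecretName: "spire-db-tls"   # Secret holding the database TLS certificates
```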
Configurable agent socket path and Container Storage Interface (CSI) plugin name

The SPIRE Agent socket path and the SPIFFE CSI Driver plugin name are now configurable, providing operational flexibility for environments with specific directory requirements or co-existence with multiple SPIFFE deployments.

  • Key configuration points:

    • SpireAgent.spec.socketPath
    • SpiffeCSIDriver.spec.agentSocketPath
    • SpiffeCSIDriver.spec.pluginName
  • Customer action required:

    • Ensure consistency between socketPath in the SpireAgent CR and agentSocketPath in the SpiffeCSIDriver CR.
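For example, the two CRs can be kept consistent as follows. The socket path and plugin name values are illustrative:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: SpireAgent
metadata:
  name: cluster
spec:
  socketPath: "/run/spire/agent-sockets/spire-agent.sock"
---
apiVersion: operator.openshift.io/v1alpha1
kind: SpiffeCSIDriver
metadata:
  name: cluster
spec:
  # Must match SpireAgent.spec.socketPath
  agentSocketPath: "/run/spire/agent-sockets/spire-agent.sock"
  pluginName: "csi.spiffe.io"
```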
Workload attestors verification API

A new API has been introduced to configure kubelet certificate verification for workload attestation, enhancing security and supporting various OpenShift configurations.

  • Verification types:

    • auto (default): Verification utilizes OpenShift defaults (/etc/kubernetes/kubelet-ca.crt).
    • hostCert: Uses a custom CA certificate path.
    • skip: Skips TLS verification (not recommended for production use).
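As a sketch, the verification type might be expressed in the SpireAgent CR along the following lines. The field names below are assumptions made for illustration; check the SpireAgent CRD schema for the exact API:

```yaml
spec:
  workloadAttestors:
    kubernetes:
      kubeletVerification:
        type: "hostCert"                   # auto | hostCert | skip
        hostCertPath: "/etc/pki/kubelet/ca.crt"   # custom CA path, used with hostCert
```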
Configurable Certificate Authority and JSON Web Token key types

Administrators can now configure the cryptographic key types used for the SPIRE Server Certificate Authority (CA) and JSON Web Token (JWT) signing, ensuring compliance with organizational security policies.

  • Supported Key Types: rsa-2048 (default), rsa-4096, ec-p256, ec-p384.
  • Customer action required:

    • Review organizational security policies to determine required key types.
Custom namespace deployment
  • The Operator and all associated operands can now be deployed within a custom namespace, providing flexibility for organizations with specific namespace governance requirements.
Proxy-aware Operator and operands
  • The Operator and all managed operands are now proxy-aware and automatically inherit cluster-wide proxy settings when configured.
Enhanced Security Context Constraints
  • SPIRE Agent and SPIFFE CSI Driver now run with Security Context Constraints (SCC) that prevent root user execution, though privileged container mode remains enabled for necessary host-level operations.
  • The Operator and all operand containers are configured with the ReadOnlyRootFilesystem set to true.
Enhanced API validation

Comprehensive Common Expression Language (CEL) validation has been integrated into all Custom Resource Definitions (CRDs) to prevent configuration errors during admission control.

  • Key validations:

    • All Operator CRDs are enforced as singletons (must be named cluster).
    • Immutable Fields: Fields including trustDomain, clusterName, bundleConfigMap, federation, bundleEndpoint profile, and all Persistence settings (size, accessMode, and storageClass) are now immutable after initial creation.
  • Customer action required:

    • Review existing CR configurations to ensure compliance with the new validation rules.
Common configuration consolidation
  • Standard configuration options (labels, resources, affinity, tolerations, nodeSelector) are now standardized across all operand CRs via a shared CommonConfig structure.
Configuring log level and log format for the operands

This release introduces flexible logging controls to improve observability and debugging across the platform:

  • SPIRE Components: Users can now configure the logLevel (debug, info, warn, error) and logFormat (text, JSON) independently for SpireServer, SpireAgent, and SpireOIDCDiscoveryProvider directly within their CR specifications. The defaults are set to "info" for the logLevel and "text" for the logFormat.
  • Operator: The operator’s log verbosity is now configurable via the OPERATOR_LOG_LEVEL environment variable using klog’s textlogger.
Refactor for create-only mode
By setting the CREATE_ONLY_MODE environment variable, users can prevent the operator from reconciling updates. This allows for manual resource modification without interference. If this mode is disabled, the Operator resumes enforcing the state and overwrites any manual changes.
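Both environment variables can be set on the Operator through the OLM Subscription config stanza. The following is a sketch; the Subscription name and namespace follow the CLI installation in this chapter, and the OPERATOR_LOG_LEVEL value format is an assumption:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-zero-trust-workload-identity-manager
  namespace: zero-trust-workload-identity-manager
spec:
  channel: stable-v1
  name: openshift-zero-trust-workload-identity-manager
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: OPERATOR_LOG_LEVEL     # klog verbosity (assumed numeric value)
      value: "2"
    - name: CREATE_ONLY_MODE       # stop reconciling updates to managed resources
      value: "true"
```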

10.3.1.2. Status and observability improvements

Enhanced status reporting
  • The main CR now aggregates status information from all operand CRs.
  • New status conditions include Upgradeable (indicating a safe upgrade path) and Progressing (detailing deployment progress).
Operator metrics
  • Operator metrics are now exposed and secured with appropriate RBAC configuration.
  • Integration is supported with the OpenShift monitoring stack.

10.3.1.3. Fixed issues

Enhanced Security Context Constraints for SPIRE Agent

Before this update, the SPIRE Agent and SPIFFE CSI Driver containers were running as root user, leading to potential security violations. With this release, Security Context Constraints (SCC) have been configured to ensure these components no longer run as root. While privileged container mode is still required for necessary capabilities, this change reduces potential security risks for the end user.

(SPIRE-60)

SpireServer updates now propagate without operator restart
  • Before this update, the operator failed to trigger reconciliation after updating the operand CR spec. As a consequence, user updates to SpireServer CR resources were not propagated to the StatefulSet, causing changes to be ignored and leading to inconsistent resource allocation. With this release, the race condition between the manager and the reconciler's cache has been fixed so that CR updates trigger reconciliation. As a result, day-2 patch operations on SpireServer CRs reliably trigger reconciliation, ensuring that updated values are applied to the StatefulSet without a manual operator restart.

    (SPIRE-68)

Removed unnecessary security context constraint for OpenID Connect discovery provider
  • Before this update, the system unnecessarily created a custom security context constraint (SCC) for the OpenID Connect (OIDC) discovery provider, which increased the security footprint and configuration complexity even though the deployment did not require it. With this release, the custom SCC creation logic has been removed, resulting in a configuration where the OIDC discovery provider operates successfully without the extra security constraints.

    (SPIRE-190)

Fixed ConfigMap Reconciliation for SPIRE Controller Manager

Before this update, Spire-controller manager ConfigMap reconciliation failed due to an unhandled edge case in the previous implementation. As a consequence, users experienced configuration inconsistencies. With this release, the Spire-controller manager ConfigMap reconciliation issue has been resolved. As a result, end users now experience seamless Spire-controller manager configuration.

(SPIRE-195)

OIDC discovery provider now restarts automatically on configuration changes
  • Before this update, the SPIRE OIDC discovery provider failed to automatically restart following configmap changes, leading to persistent authentication failures. With this release, updates to the CR now trigger an automatic pod restart, ensuring that configmap changes are applied immediately, providing a seamless experience for end users.

    (SPIRE-225)

Corrected update rollback for DaemonSets, Deployments, and StatefulSets
  • Before this update, DaemonSet, Deployment, and StatefulSet resources were not properly reverted to their original form in all valid scenarios because of an oversight in the update logic. As a consequence, data loss or inconsistency could occur in valid scenarios. With this release, the update logic has been corrected, ensuring that all valid scenarios revert resources to their original form.

    (SPIRE-248)

  • Other bug fixes included:

    • Fixed issues related to continuous reconciliation and unnecessary updates.
    • Eliminated requeue logic for user input validation errors.

Issued: 2025-09-08

The following advisories are available for the Zero Trust Workload Identity Manager.

This release of Zero Trust Workload Identity Manager is a Technology Preview.

10.3.2.1. New features and enhancements

Support for the managed OIDC Discovery Provider Route
  • The Operator exposes the SpireOIDCDiscoveryProvider spec through OpenShift Routes under the domain *.apps.<cluster_domain> for the default installation.
  • The managedRoute and externalSecretRef fields have been added to the spireOidcDiscoveryProvider spec.
  • The managedRoute field is boolean and is set to true by default. If set to false, the Operator stops managing the route and the existing route will not be deleted automatically. If set back to true, the Operator resumes managing the route. If a route does not exist, the Operator creates a new one. If a route already exists, the Operator will override the user configuration if a conflict exists.
  • The externalSecretRef references an externally managed Secret that has the TLS certificate for the oidc-discovery-provider Route host. When provided, this populates the route’s .Spec.TLS.ExternalCertificate field. For more information, see Creating a route with externally managed certificate
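Putting the two fields together, a SpireOIDCDiscoveryProvider spec might look like the following. The Secret name is illustrative, and the exact value types follow the conventions used elsewhere in this chapter:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: SpireOIDCDiscoveryProvider
metadata:
  name: cluster
spec:
  managedRoute: "true"                 # Operator creates and manages the Route
  externalSecretRef: "oidc-route-tls"  # Secret with the externally managed TLS certificate
```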
Enabling the custom Certificate Authority Time-To-Live for the SPIRE bundle
  • The following Time-To-Live (TTL) fields have been added to the SpireServer custom resource definition (CRD) API for SPIRE Server certificate management:

    • CAValidity (default: 24h)
    • DefaultX509Validity (default: 1h)
    • DefaultJWTValidity (default: 5m)
  • The default values can be replaced in the server configuration with user-configurable options that give users the flexibility to customize certificate and SPIFFE Verifiable Identity Document (SVID) lifetimes based on their security requirements.
Enabling Manual User Configurations
  • The Operator controller switches to create-only mode once the ztwim.openshift.io/create-only=true annotation is present on the Operator’s APIs. This allows resource creation while skipping the updates. A user can update the resources manually to test their configuration. This annotation supports APIs such as SpireServer, SpireAgents, SpiffeCSIDriver, SpireOIDCDiscoveryProvider, and ZeroTrustWorkloadIdentityManager.
  • When the annotation is applied, it covers all derived resources, including resources created and managed by the Operator.
  • Once the annotation is removed and the pod restarts, the operator tries to come back to the required state. The annotation is applied only once during start or a restart.
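For example, to switch the SpireServer controller to create-only mode, add the annotation to the resource metadata. The annotation name is taken from above; the rest of the manifest is abbreviated:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
  annotations:
    ztwim.openshift.io/create-only: "true"   # Operator creates resources but skips updates
```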

10.3.2.2. Fixed issues

JSON Web Token Issuer field now requires a valid URL
  • Before this update, the JwtIssuer field for both the SpireServer and the SpireOidcDiscoveryProvider did not need to be a valid URL, which could cause configuration errors. With this release, the user must enter a valid issuer URL in the JwtIssuer field in both custom resources.

    (SPIRE-117)

Important

The Zero Trust Workload Identity Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Issued: 2025-06-16

The following advisories are available for the Zero Trust Workload Identity Manager:

This initial release of Zero Trust Workload Identity Manager is a Technology Preview. This version has the following known limitations:

  • Support for SPIRE federation is not enabled.
  • Key manager supports only the disk storage type.
  • Telemetry is supported only through Prometheus.
  • High availability (HA) configuration for SPIRE Servers or the OpenID Connect (OIDC) Discovery provider is not supported.
  • External datastore is not supported. This version uses the internal sqlite datastore deployed by SPIRE.
  • This version operates using a fixed configuration. User-defined configurations are not allowed.
  • The log level of the operands is not configurable. The default value is DEBUG.

10.4. Installing the Zero Trust Workload Identity Manager

Install Zero Trust Workload Identity Manager to help ensure secure communication between your workloads. You can install the Zero Trust Workload Identity Manager by using either the web console or CLI.

If you install the Operator into a custom namespace (for example, my-custom-namespace), all managed operand resources are deployed within that same namespace. All secrets and ConfigMaps referenced by the Custom Resources (CRs) must also exist in that custom namespace.

Important

The Operator installation is not supported in the openshift-* namespaces and the default namespace.

Use the OperatorHub in the OpenShift Container Platform web console to install the Zero Trust Workload Identity Manager. This process streamlines deployment and helps ensure the Operator is installed in the correct namespace with the appropriate installation mode.

Note

A minimum of 1Gi persistent volume is required to install the SPIRE Server.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Go to Ecosystem → Software Catalog.
  3. Search for Zero Trust Workload Identity Manager.
  4. On the Install Operator page:

    1. Update the Update channel, if necessary. The channel defaults to stable-v1, which installs the latest stable-v1 release of the Zero Trust Workload Identity Manager.
    2. Choose the Installed Namespace for the Operator. The default Operator namespace is zero-trust-workload-identity-manager.

      If the zero-trust-workload-identity-manager namespace does not exist, it is created for you.

      Note

      The Operator and operands are deployed in the same namespace.

    3. Select an Update Approval strategy

      • The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
      • The Manual strategy requires a user with appropriate credentials to approve the Operator update.
  5. Click Install.

Verification

  1. Navigate to Ecosystem → Installed Operators.

    1. Verify that Zero Trust Workload Identity Manager is listed with a Status of Succeeded in the zero-trust-workload-identity-manager namespace.
    2. Verify that Zero Trust Workload Identity Manager controller manager deployment is ready and available by running the following command:

      $ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

      Example output

      NAME                                                           READY UP-TO-DATE AVAILABLE AGE
      zero-trust-workload-identity-manager-controller-manager-6c4djb 1/1   1          1         43m

  2. To check the Operator logs, run the following command:

    $ oc logs -f deployment/zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
Note

A minimum of 1Gi persistent volume is required to install the SPIRE Server.

Procedure

  1. Create a new project named zero-trust-workload-identity-manager by running the following command:

    $ oc new-project zero-trust-workload-identity-manager
  2. Create an OperatorGroup object:

    1. Create a YAML file, for example, operatorGroup.yaml, with the following content:

      Example operatorGroup.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-zero-trust-workload-identity-manager
        namespace: zero-trust-workload-identity-manager
      spec:
        upgradeStrategy: Default

    2. Create the OperatorGroup object by running the following command:

      $ oc create -f operatorGroup.yaml
  3. Create a Subscription object:

    1. Create a YAML file, for example, subscription.yaml, that defines the Subscription object:

      Example subscription.yaml

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-zero-trust-workload-identity-manager
        namespace: zero-trust-workload-identity-manager
      spec:
        channel: stable-v1
        name: openshift-zero-trust-workload-identity-manager
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic

    2. Create the Subscription object by running the following command:

      $ oc create -f subscription.yaml

Verification

  • Verify that the OLM subscription is created by running the following command:

    $ oc get subscription -n zero-trust-workload-identity-manager

    Example output

    NAME                                             PACKAGE                                SOURCE             CHANNEL
    openshift-zero-trust-workload-identity-manager   zero-trust-workload-identity-manager   redhat-operators   stable-v1

  • Verify whether the Operator is successfully installed by running the following command:

    $ oc get csv -n zero-trust-workload-identity-manager

    Example output

    NAME                                         DISPLAY                                VERSION  PHASE
    zero-trust-workload-identity-manager.v1.0.0   Zero Trust Workload Identity Manager   1.0.0    Succeeded

  • Verify that the Zero Trust Workload Identity Manager controller manager is ready by running the following command:

    $ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

    Example output

    NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
    zero-trust-workload-identity-manager-controller-manager   1/1     1            1           43m

10.5. Deploying Zero Trust Workload Identity Manager operands

You can deploy the following operands by creating the respective custom resources (CRs). You must deploy the operands in the following sequence to ensure successful installation.

  • ZeroTrustWorkloadIdentityManager CR
  • SPIRE Server
  • SPIRE Agent
  • SPIFFE CSI driver
  • SPIRE OIDC discovery provider

10.5.1. About the ZeroTrustWorkloadIdentityManager custom resource

The ZeroTrustWorkloadIdentityManager is the primary custom resource that initializes the SPIRE deployments. This primary resource defines the trust domain and cluster name to help ensure secure workload identity management.

Reference the complete YAML specification to correctly structure the ZeroTrustWorkloadIdentityManager CR. This example helps you identify required fields and immutable parameters for your configuration.

apiVersion: operator.openshift.io/v1alpha1
kind: ZeroTrustWorkloadIdentityManager
metadata:
  name: cluster
  labels:
    app.kubernetes.io/name: zero-trust-workload-identity-manager
    app.kubernetes.io/managed-by: zero-trust-workload-identity-manager
spec:
  trustDomain: "example.com"
  clusterName: "production-cluster"
  bundleConfigMap: "spire-bundle"

where:

trustDomain
Specifies the trust domain to be used for the SPIFFE identifiers. The value must be a valid SPIFFE trust domain (lowercase alphanumeric characters, hyphens, and dots) with a maximum length of 255 characters. Once set, this field is immutable.
clusterName
Specifies the name that identifies this cluster within the trust domain. The value must be a valid DNS-1123 subdomain with a maximum length of 63 characters. Once set, this field is immutable.
bundleConfigMap
Specifies the name of the ConfigMap that stores the SPIRE trust bundle. This ConfigMap contains the root certificates for the trust domain and is created and maintained by the Operator. Must be a valid Kubernetes name with a maximum length of 253 characters. This field is optional (defaults to spire-bundle) and once set, is immutable.

10.5.2. Deploying the SPIRE Server

You can configure the SpireServer custom resource (CR) to deploy and configure a SPIRE Server.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireServer CR:

    1. Create a YAML file that defines the SpireServer CR, for example, SpireServer.yaml:

      Example SpireServer.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireServer
      metadata:
        name: cluster
      spec:
        logLevel: "info"
        logFormat: "text"
        jwtIssuer: "https://oidc-discovery.apps.cluster.example.com"
        caValidity: "24h"
        defaultX509Validity: "1h"
        defaultJWTValidity: "5m"
        jwtKeyType: "rsa-2048"
        caSubject:
          country: "US"
          organization: "Example Corporation"
          commonName: "SPIRE Server CA"
        persistence:
          size: "5Gi"
          accessMode: "ReadWriteOnce"
          storageClass: "gp3-csi"
        datastore:
          databaseType: "sqlite3"
          connectionString: "/run/spire/data/datastore.sqlite3"
          tlsSecretName: ""
          maxOpenConns: 100
          maxIdleConns: 10
          connMaxLifetime: 0
          disableMigration: "false"
        federation:
          bundleEndpoint:
            profile: "https_spiffe"
            refreshHint: 300
          federatesWith: []
          managedRoute: "true"

      where:

      name
Specifies that the value must be cluster.
      logLevel
      Specifies the logging level for the SPIRE Server. The valid options are debug, info, warn, and error.
      logFormat
      Specifies the logging format for the SPIRE Server. The valid options are text and json.
      jwtIssuer
      Specifies the JWT issuer URL. Must be a valid HTTPS or HTTP URL with a maximum length of 512 characters.
      caValidity
      Specifies the validity period (Time to Live (TTL)) for the SPIRE Server’s CA certificate. This determines how long the server’s root or intermediate certificate is valid. The format is a duration string (for example, 24h, 168h).
      defaultX509Validity
      Specifies the default validity period (TTL) for X.509 SVIDs issued to workloads. This value is used if a specific TTL is not configured for a registration entry.
      defaultJWTValidity
Specifies the default validity period (TTL) for JWT SVIDs issued to workloads. This value is used if a specific TTL is not configured for a registration entry.
      jwtKeyType
      Specifies the key type used for JWT signing. The valid options are rsa-2048, rsa-4096, ec-p256, and ec-p384. This field is optional.
      country
      Specifies the country for the SPIRE Server certificate authority (CA). Must be an ISO 3166-1 alpha-2 country code (2 characters).
      organization
      Specifies the organization for the SPIRE Server CA. Maximum length is 64 characters.
      commonName
      Specifies the common name for the SPIRE Server CA. Maximum length is 255 characters.
      size
      Specifies the size of the persistent volume (for example, 1Gi, 5Gi). Once set, this field is immutable.
      accessMode
      Specifies the access mode for the persistent volume. The valid options are ReadWriteOnce, ReadWriteOncePod, and ReadWriteMany. Once set, this field is immutable.
      storageClass
      Specifies the storage class to be used for the PVC. Once set, this field is immutable.
      databaseType
      Specifies the type of database to use for the datastore. The valid options are sql, sqlite3, postgres, mysql, aws_postgresql, and aws_mysql.
      connectionString
      Specifies the connection string for the database. For PostgreSQL with SSL, include sslmode and certificate paths (for example, dbname=spire user=spire host=postgres.example.com sslmode=verify-full).
      tlsSecretName
      Specifies the name of a Kubernetes Secret containing TLS certificates for database connections. The Secret will be mounted at /run/spire/db/certs. This field is optional.
      maxOpenConns
      Specifies the maximum number of open database connections. Must be between 1 and 10000.
      maxIdleConns
      Specifies the maximum number of idle database connections in the pool. Must be between 0 and 10000.
      connMaxLifetime
      Specifies the maximum lifetime of a database connection in seconds. A value of 0 means connections are not closed due to age.
      disableMigration
      Specifies whether to disable automatic database migration. The valid options are true and false.
      profile
      Specifies the bundle endpoint authentication profile for federation. The valid options are https_spiffe and https_web.
      refreshHint
      Specifies the hint for bundle refresh interval in seconds. Must be between 60 and 3600.
      federatesWith
      Specifies the list of trust domains this cluster federates with. Each entry requires trustDomain, bundleEndpointUrl, and bundleEndpointProfile.
      managedRoute
Specifies whether to enable automatic route creation for the federation endpoint. Set to true to expose the endpoint through a managed OpenShift Route, or false to configure routing manually.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireServer.yaml

Verification

  • Verify that the stateful set of SPIRE Server is ready and available by running the following command:

    $ oc get statefulset -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME            READY   AGE
    spire-server    1/1     65s

  • Verify that the status of the SPIRE Server pod is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME               READY   STATUS    RESTARTS        AGE
    spire-server-0     2/2     Running   1 (108s ago)    111s

  • Verify that the persistent volume claim (PVC) is bound, by running the following command:

    $ oc get pvc -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

    Example output

    NAME                        STATUS    VOLUME                                     CAPACITY   ACCESS MODES  STORAGECLASS  VOLUMEATTRIBUTECLASS  AGE
    spire-data-spire-server-0   Bound     pvc-27a36535-18a1-4fde-ab6d-e7ee7d3c2744   5Gi        RWO           gp3-csi       <unset>               22m
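
If you plan to use federation, each federatesWith entry pairs a remote trust domain with its bundle endpoint, using the trustDomain, bundleEndpointUrl, and bundleEndpointProfile fields described above. A hypothetical entry (the domain and URL are placeholders) might look like the following:

```yaml
federation:
  federatesWith:
  - trustDomain: "other.example.org"                      # placeholder trust domain
    bundleEndpointUrl: "https://spire.other.example.org"  # placeholder bundle endpoint URL
    bundleEndpointProfile: "https_spiffe"                 # or https_web
```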

10.5.3. Deploying the SPIRE Agent

Use the SpireAgent custom resource to configure the SPIRE Agent DaemonSet on your nodes. This defines how the agent verifies workloads and manages identity attestation across your OpenShift Container Platform cluster.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireAgent CR:

    1. Create a YAML file that defines the SpireAgent CR, for example, SpireAgent.yaml:

      Example SpireAgent.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireAgent
      metadata:
        name: cluster
      spec:
        socketPath: "/run/spire/agent-sockets"
        logLevel: "info"
        logFormat: "text"
        nodeAttestor:
          k8sPSATEnabled: "true"
        workloadAttestors:
          k8sEnabled: "true"
          workloadAttestorsVerification:
            type: "auto"
            hostCertBasePath: "/etc/kubernetes"
            hostCertFileName: "kubelet-ca.crt"
          disableContainerSelectors: "false"
          useNewContainerLocator: "true"

      where:

      name
      Must be named 'cluster'.
      socketPath
      Specifies the directory on the host where the SPIRE agent socket is created. This directory is shared with the SPIFFE CSI driver via the hostPath volume. Must match the SpiffeCSIDriver.spec.agentSocketPath for workloads to access the socket. Must be an absolute path with a maximum length of 256 characters.
      logLevel
Specifies the logging level for the SPIRE Agent. The valid options are debug, info, warn, and error.
      logFormat
Specifies the logging format for the SPIRE Agent. The valid options are text and json.
      k8sPSATEnabled
      Specifies whether Kubernetes Projected Service Account Token (PSAT) node attestation is enabled. When enabled, the SPIRE agent uses K8s PSATs to prove its identity to the SPIRE server during node attestation. The valid options are true and false.
      k8sEnabled
      Specifies whether the Kubernetes workload attestor is enabled. When enabled, the SPIRE agent can verify workload identities using Kubernetes pod information and service account tokens. The valid options are true and false.
      type
      Specifies the kubelet certificate verification mode. The valid options are auto, hostCert, and skip.
      hostCertBasePath
      Specifies the directory containing the kubelet CA certificate. Required when type is hostCert. Optional when type is auto (defaults to /etc/kubernetes if not specified).
      hostCertFileName
      Specifies the file name for the kubelet’s CA certificate. When combined with hostCertBasePath, forms the full path. Required when type is hostCert. Optional when type is auto. Defaults to kubelet-ca.crt if not specified.
      disableContainerSelectors
      Specifies whether to disable container selectors in the Kubernetes workload attestor. Set to true if using holdApplicationUntilProxyStarts in Istio. The valid options are true and false.
      useNewContainerLocator
Specifies whether to enable the new container locator algorithm, which supports cgroups v2. The valid options are true and false.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireAgent.yaml

Verification

  • Verify that the daemon set of the SPIRE Agent is ready and available by running the following command:

    $ oc get daemonset -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

    Example output

    NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    spire-agent   3         3         3       3            3           <none>          10m

  • Verify that the status of SPIRE Agent pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

    Example output

    NAME                READY   STATUS    RESTARTS   AGE
    spire-agent-dp4jb   1/1     Running   0          12m
    spire-agent-nvwjm   1/1     Running   0          12m
    spire-agent-vtvlk   1/1     Running   0          12m

10.5.4. Deploying the SPIFFE Container Storage Interface driver

Configure the Container Storage Interface (CSI) driver using the SpiffeCSIDriver CR. This configuration mounts SPIFFE sockets directly into workload pods, which allows your applications to access the SPIFFE Workload API securely.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpiffeCSIDriver CR:

    1. Create a YAML file that defines the SpiffeCSIDriver CR object, for example, SpiffeCSIDriver.yaml:

      Example SpiffeCSIDriver.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpiffeCSIDriver
      metadata:
        name: cluster
      spec:
        agentSocketPath: "/run/spire/agent-sockets"
        pluginName: "csi.spiffe.io"

      where:

      name
      Specifies that the name must be 'cluster'.
      agentSocketPath
      Specifies the path to the directory containing the SPIRE agent’s Workload API socket. This directory is bind-mounted into workload containers by the CSI driver. The directory is shared between the SPIRE agent and CSI driver via a hostPath volume. Must be an absolute path with a maximum length of 256 characters. This value must match SpireAgent.spec.socketPath for workloads to access the socket.
      pluginName
      Specifies the name of the CSI plugin. This sets the CSI driver name that is deployed to the cluster and used in VolumeMount configurations. Must match the driver name referenced in the workload pods. Must be a valid domain name format (for example, csi.spiffe.io) with a maximum length of 127 characters.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpiffeCSIDriver.yaml

Verification

  • Verify that the daemon set of the SPIFFE CSI driver is ready and available by running the following command:

    $ oc get daemonset -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

    Example output

    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    spire-spiffe-csi-driver   3         3         3       3            3           <none>          114s

  • Verify that the status of SPIFFE Container Storage Interface (CSI) Driver pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE
    spire-spiffe-csi-driver-gpwcp   2/2     Running   0          2m37s
    spire-spiffe-csi-driver-rrbrd   2/2     Running   0          2m37s
    spire-spiffe-csi-driver-w6s6q   2/2     Running   0          2m37s
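
A workload pod consumes the driver by declaring a CSI volume whose driver field matches the configured pluginName and mounting it into the container. A minimal sketch of the relevant pod spec fragment (container name, image, and mount path are illustrative) follows:

```yaml
spec:
  containers:
  - name: workload                                # illustrative container name
    image: registry.example.com/workload:latest   # placeholder image
    volumeMounts:
    - name: spiffe-workload-api
      mountPath: /spiffe-workload-api             # where the Workload API socket appears
      readOnly: true
  volumes:
  - name: spiffe-workload-api
    csi:
      driver: csi.spiffe.io                       # must match SpiffeCSIDriver.spec.pluginName
      readOnly: true
```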

10.5.5. Deploying the SPIRE OpenID Connect Discovery Provider

Deploy the SPIRE OpenID Connect (OIDC) Discovery Provider by configuring the SpireOIDCDiscoveryProvider CR. This allows you to define the trust domain and JSON web token (JWT) issuer for your cluster.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed Zero Trust Workload Identity Manager in the cluster.

Procedure

  1. Create the SpireOIDCDiscoveryProvider CR:

    1. Create a YAML file that defines the SpireOIDCDiscoveryProvider CR, for example, SpireOIDCDiscoveryProvider.yaml:

      Example SpireOIDCDiscoveryProvider.yaml

      apiVersion: operator.openshift.io/v1alpha1
      kind: SpireOIDCDiscoveryProvider
      metadata:
        name: cluster
      spec:
        logLevel: "info"
        logFormat: "text"
        csiDriverName: "csi.spiffe.io"
        jwtIssuer: "https://oidc-discovery.apps.cluster.example.com"
        replicaCount: 1
        managedRoute: "true"
        externalSecretRef: ""

      where:

      name
      Specifies that the value must be 'cluster'.
      logLevel
Specifies the logging level for the OIDC Discovery Provider. The valid options are debug, info, warn, and error.
      logFormat
Specifies the logging format for the OIDC Discovery Provider. The valid options are text and json.
      csiDriverName
      Specifies the name of the CSI driver to use for mounting the Workload API socket. This must match the SpiffeCSIDriver.spec.pluginName value for the OIDC provider to access SPIFFE identities. Must be a valid DNS subdomain format (for example, csi.spiffe.io) with a maximum length of 127 characters.
      jwtIssuer
      Specifies the JWT issuer URL. Must be a valid HTTPS or HTTP URL with a maximum length of 512 characters. This value must match the SpireServer.spec.jwtIssuer value.
      replicaCount
      Specifies the number of replicas for the OIDC Discovery Provider deployment. Must be between 1 and 5.
      managedRoute
      Specifies whether the Operator automatically creates an OpenShift route for the OIDC Discovery Provider endpoints. Set to true to have the Operator automatically create and maintain an OpenShift route for OIDC discovery endpoints (*.apps.). Set to false for administrators to manually configure routes or ingress.
      externalSecretRef
      Specifies a reference to an externally managed secret that contains the TLS certificate for the OIDC Discovery Provider route host. Must be a valid Kubernetes secret reference name with a maximum length of 253 characters. This field is optional.
    2. Apply the configuration by running the following command:

      $ oc apply -f SpireOIDCDiscoveryProvider.yaml

Verification

  1. Verify that the deployment of OIDC Discovery Provider is ready and available by running the following command:

    $ oc get deployment -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

    Example output

    NAME                                    READY  UP-TO-DATE  AVAILABLE  AGE
    spire-spiffe-oidc-discovery-provider    1/1    1           1          2m58s

  2. Verify that the status of OIDC Discovery Provider pods is Running by running the following command:

    $ oc get po -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

    Example output

    NAME                                                    READY   STATUS    RESTARTS   AGE
    spire-spiffe-oidc-discovery-provider-64586d599f-lcc94   2/2     Running   0          7m15s

10.5.6. Verifying the health of the operands

View the status fields to verify the operational health of managed components. This information helps you confirm that the SPIRE Server, SPIRE Agent, SPIFFE CSI driver, and the SPIRE OIDC discovery provider operands are ready and functioning correctly.

  • To verify the operands, run the following command:

    $ oc get ZeroTrustWorkloadIdentityManager cluster -o yaml

    Example output

    status:
      conditions:
      - lastTransitionTime: "2025-12-16T10:59:06Z"
        message: All components are ready
        reason: Ready
        status: "True"
        type: Ready
      - lastTransitionTime: "2025-12-16T10:59:06Z"
        message: All operand CRs are ready
        reason: Ready
        status: "True"
        type: OperandsAvailable
      operands:
      - kind: SpireServer
        message: Ready
        name: cluster
        ready: "true"
      - kind: SpireAgent
        message: Ready
        name: cluster
        ready: "true"
      - kind: SpiffeCSIDriver
        message: Ready
        name: cluster
        ready: "true"
      - kind: SpireOIDCDiscoveryProvider
        message: Ready
        name: cluster
        ready: "true"
       # ...

This status is reflected when all operands are healthy and stable.

Important

The Operator adds the owner reference for the ZeroTrustWorkloadIdentityManager CR on the other operands' CRs. This causes the operands' resources to be deleted when the ZeroTrustWorkloadIdentityManager CR is deleted.

Operator Lifecycle Manager (OLM) automatically configures managed Operators with proxy settings when you use a cluster-wide egress proxy. To support proxying HTTPS connections, you can inject certificate authority (CA) certificates into the Zero Trust Workload Identity Manager. This configuration helps ensure that the Identity Manager can communicate securely when you enable a cluster-wide proxy.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have enabled the cluster-wide proxy for OpenShift Container Platform.
  • You have installed Zero Trust Workload Identity Manager 1.0.0 or later.
  • You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster.

Procedure

  1. Create a config map in the zero-trust-workload-identity-manager namespace by running the following command:

    $ oc create configmap trusted-ca -n zero-trust-workload-identity-manager
  2. Inject the CA bundle that is trusted by OpenShift Container Platform into the config map by running the following command:

    $ oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n zero-trust-workload-identity-manager
  3. Update the subscription for the Zero Trust Workload Identity Manager to use the config map by running the following command:

    $ oc -n zero-trust-workload-identity-manager patch subscription openshift-zero-trust-workload-identity-manager --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_BUNDLE_CONFIGMAP","value":"trusted-ca"}]}}}'

Verification

  1. Verify that the operands have finished rolling out by running the following command:

    $ oc rollout status deployment/zero-trust-workload-identity-manager-controller-manager -n zero-trust-workload-identity-manager && \
      oc rollout status statefulset/spire-server -n zero-trust-workload-identity-manager && \
      oc rollout status daemonset/spire-agent -n zero-trust-workload-identity-manager && \
      oc rollout status deployment/spire-spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

    Example output

    deployment "zero-trust-workload-identity-manager-controller-manager" successfully rolled out
    statefulset "spire-server" successfully rolled out
    daemonset "spire-agent" successfully rolled out
    deployment "spire-spiffe-oidc-discovery-provider" successfully rolled out

  2. Verify that the CA bundle was mounted as a volume by running the following command:

    $ oc get deployment zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
    $ oc get statefulset spire-server -n zero-trust-workload-identity-manager -o jsonpath='{.spec.template.spec.containers[*].volumeMounts[?(@.name=="trusted-ca-bundle")]}'
    $ oc get daemonset spire-agent -n zero-trust-workload-identity-manager -o jsonpath='{.spec.template.spec.containers[*].volumeMounts[?(@.name=="trusted-ca-bundle")]}'
    $ oc get daemonset spire-spiffe-csi-driver -n zero-trust-workload-identity-manager -o jsonpath='{.spec.template.spec.containers[*].volumeMounts[?(@.name=="trusted-ca-bundle")]}'

    Example output

    [{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true}]

  3. Verify that the source of the CA bundle is the trusted-ca config map by running the following command:

    $ oc get deployment zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager -o=jsonpath={.spec.template.spec.volumes}
    $ oc get statefulset spire-server -n zero-trust-workload-identity-manager -o=jsonpath='{.spec.template.spec.volumes}' | jq '.[] | select(.name=="trusted-ca-bundle")'
    $ oc get daemonset spire-agent -n zero-trust-workload-identity-manager -o=jsonpath='{.spec.template.spec.volumes}' | jq '.[] | select(.name=="trusted-ca-bundle")'
    $ oc get deployment spire-spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager -o=jsonpath='{.spec.template.spec.volumes}' | jq '.[] | select(.name=="trusted-ca-bundle")'

    Example output

    {
      "configMap": {
        "defaultMode": 420,
        "items": [
          {
            "key": "ca-bundle.crt",
            "path": "tls-ca-bundle.pem"
          }
        ],
        "name": "trusted-ca"
      },
      "name": "trusted-ca-bundle"
    }

10.7. Zero Trust Workload Identity Manager OIDC federation

Zero Trust Workload Identity Manager integrates with OpenID Connect (OIDC) by allowing a SPIRE server to act as an OIDC provider. This enables workloads to request and receive verifiable JSON Web Tokens - SPIFFE Verifiable Identity Documents (JWT-SVIDs) from the local SPIRE agent. External systems, such as cloud providers, can then use the OIDC discovery endpoint exposed by the SPIRE server to retrieve public keys.
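
The JWT-SVID carries the workload's SPIFFE ID in its sub claim, encoded as a base64url JSON segment of the token. The following sketch uses a made-up claim set (not a real SVID) to show the encoding round trip that a relying party performs on the payload segment:

```shell
# Build a sample JWT-SVID payload segment and decode it back.
# The claims below are illustrative, not issued by a real SPIRE server.
claims='{"sub":"spiffe://example.com/workload","aud":["api://AzureADTokenExchange"]}'

# base64url-encode: standard base64, swap +/ for -_, strip padding and newlines
segment=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '+/' '-_')

# decode as a verifier would: restore the standard alphabet, re-pad, then decode
restored=$(printf '%s' "$segment" | tr '_-' '/+')
while [ $(( ${#restored} % 4 )) -ne 0 ]; do restored="${restored}="; done
decoded=$(printf '%s' "$restored" | base64 -d)

echo "$decoded"
```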

The following providers are verified to work with SPIRE OIDC federation:

  • Azure Entra ID
  • Vault

10.7.1. About the Entra ID OpenID Connect

Entra ID is a cloud-based identity and access management service that centralizes user management and access control. Entra ID serves as the identity provider, verifying user identities and issuing an ID token to the application. This token contains essential user information, allowing the application to confirm who the user is without managing their credentials.

Integrating Entra ID OpenID Connect (OIDC) with SPIRE provides workloads with automatic, short-lived cryptographic identities. The SPIRE-issued identities are sent to Entra ID to securely authenticate the service without any static secrets.

The managed route uses the External Route Certificate feature to set the tls.externalCertificate field to an externally managed Transport Layer Security (TLS) secret’s name.

Prerequisites

  • You have installed Zero Trust Workload Identity Manager 0.2.0 or later.
  • You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster.
  • You have installed the cert-manager Operator for Red Hat OpenShift. For more information, see Installing the cert-manager Operator for Red Hat OpenShift.
  • You have created a ClusterIssuer or Issuer configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type Issuer with the "Let’s Encrypt ACME" service. For more information, see Configuring an ACME issuer.
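
The procedure below refers to a TLS_SECRET_NAME environment variable that names the externally managed certificate secret. As an illustration only (the resource names, host, and issuer are placeholders you must replace), a cert-manager Certificate that produces such a secret could look like the following:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oidc-discovery-cert                    # illustrative name
  namespace: zero-trust-workload-identity-manager
spec:
  secretName: oidc-discovery-tls               # e.g. export TLS_SECRET_NAME=oidc-discovery-tls
  dnsNames:
  - oidc-discovery.apps.cluster.example.com    # placeholder route host
  issuerRef:
    name: letsencrypt-issuer                   # placeholder ClusterIssuer name
    kind: ClusterIssuer
```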

Procedure

  1. Create a Role to provide the router service account permissions to read the referenced secret by running the following command:

    $ oc create role secret-reader \
      --verb=get,list,watch \
      --resource=secrets \
      --resource-name=$TLS_SECRET_NAME \
      -n zero-trust-workload-identity-manager
  2. Create a RoleBinding resource to bind the router service account with the newly created Role resource by running the following command:

    $ oc create rolebinding secret-reader-binding \
      --role=secret-reader \
      --serviceaccount=openshift-ingress:router \
      -n zero-trust-workload-identity-manager
  3. Configure the SpireOIDCDiscoveryProvider custom resource (CR) to reference the externally managed TLS secret by running the following command:

    $ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p='
    spec:
      externalSecretRef: ${TLS_SECRET_NAME}
    '

Verification

  1. In the SpireOIDCDiscoveryProvider CR, check if the ManagedRouteReady condition is set to True by running the following command:

    $ oc wait --for=jsonpath='{.status.conditions[?(@.type=="ManagedRouteReady")].status}'=True SpireOIDCDiscoveryProvider/cluster --timeout=120s
  2. Verify that the OIDC endpoint can be accessed securely through HTTPS by running the following command:

    $ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration
    
    {
      "issuer": "https://$JWT_ISSUER_ENDPOINT",
      "jwks_uri": "https://$JWT_ISSUER_ENDPOINT/keys",
      "authorization_endpoint": "",
      "response_types_supported": [
        "id_token"
      ],
      "subject_types_supported": [],
      "id_token_signing_alg_values_supported": [
        "RS256",
        "ES256",
        "ES384"
      ]
    }

10.7.1.2. Disabling a managed route

If you want full control over how the OIDC Discovery Provider service is exposed, you can disable the managed route.

Procedure

  • To manually configure the OIDC Discovery Provider, set managedRoute to false by running the following command:

    $ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p='
    spec:
      managedRoute: "false"
    '

10.7.1.3. Using Entra ID with Microsoft Azure

After the OIDC Discovery Provider configuration is complete, you can set up Entra ID to work with Azure.

Prerequisites

  • You have configured the SPIRE OIDC Discovery Provider Route to serve the TLS certificates from a publicly trusted CA.

Procedure

  1. Log in to Azure by running the following command:

    $ az login
  2. Configure variables for your Azure subscription and tenant by running the following commands:

    $ export SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" -o tsv)
    $ export TENANT_ID=$(az account list --query "[?isDefault].tenantId" -o tsv)
    $ export LOCATION=centralus

    where:

    SUBSCRIPTION_ID
    Your unique subscription identifier.
    TENANT_ID
    The ID for your Azure Active Directory instance.
    LOCATION
    The Azure region where your resource is created.
  3. Define resource variable names by running the following commands:

    $ export NAME=ztwim
    $ export RESOURCE_GROUP="${NAME}-rg"
    $ export STORAGE_ACCOUNT="${NAME}storage"
    $ export STORAGE_CONTAINER="${NAME}storagecontainer"
    $ export USER_ASSIGNED_IDENTITY_NAME="${NAME}-identity"

    where:

    NAME
    A base name for all resources.
    RESOURCE_GROUP
    The name of the resource group.
    STORAGE_ACCOUNT
    The name for the storage account.
    STORAGE_CONTAINER
    The name for the storage container.
    USER_ASSIGNED_IDENTITY_NAME
    The name for a managed identity.
  4. Create the resource group by running the following command:

    $ az group create \
      --name "${RESOURCE_GROUP}" \
      --location "${LOCATION}"

10.7.1.4. Configuring Azure blob storage

You need to create a new storage account to be used to store content.

Procedure

  1. Create a new storage account that is used to store content by running the following command:

    $ az storage account create \
      --name ${STORAGE_ACCOUNT} \
      --resource-group ${RESOURCE_GROUP} \
      --location ${LOCATION} \
      --encryption-services blob
  2. Obtain the storage ID for the newly created storage account by running the following command:

    $ export STORAGE_ACCOUNT_ID=$(az storage account show -n ${STORAGE_ACCOUNT} -g ${RESOURCE_GROUP} --query id --out tsv)
  3. Create a storage container inside the newly created storage account to provide a location to support the storage of blobs by running the following command:

    $ az storage container create \
      --account-name ${STORAGE_ACCOUNT} \
      --name ${STORAGE_CONTAINER} \
      --auth-mode login

10.7.1.5. Configuring an Azure user managed identity

You need to create a new user managed identity and then obtain the client ID of the service principal associated with it.

Procedure

  1. Create a new User Managed Identity and then obtain the Client ID of the related Service Principal associated with the User Managed Identity by running the following command:

    $ az identity create \
      --name ${USER_ASSIGNED_IDENTITY_NAME} \
      --resource-group ${RESOURCE_GROUP}
    Copy to Clipboard Toggle word wrap
  2. Retrieve the CLIENT_ID of an Azure user-assigned managed identity and save it as an environment variable by running the following command:

    $ export IDENTITY_CLIENT_ID=$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)
  3. Associate a role with the Service Principal associated with the User Managed Identity by running the following command:

    $ az role assignment create \
      --role "Storage Blob Data Contributor" \
      --assignee "${IDENTITY_CLIENT_ID}" \
      --scope ${STORAGE_ACCOUNT_ID}

10.7.1.6. Creating the demonstration application

The demonstration application provides a way to verify that the entire system works.

Procedure

To create the demonstration application, complete the following steps:

  1. Set the application name and namespace by running the following commands:

    $ export APP_NAME=workload-app
    $ export APP_NAMESPACE=demo
  2. Create the namespace by running the following command:

    $ oc create namespace $APP_NAMESPACE
  3. Create the application Secret by running the following command:

    $ oc apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    stringData:
      AAD_AUTHORITY: https://login.microsoftonline.com/
      AZURE_AUDIENCE: "api://AzureADTokenExchange"
      AZURE_TENANT_ID: "${TENANT_ID}"
      AZURE_CLIENT_ID: "${IDENTITY_CLIENT_ID}"
      BLOB_STORE_ACCOUNT: "${STORAGE_ACCOUNT}"
      BLOB_STORE_CONTAINER: "${STORAGE_CONTAINER}"
    EOF

10.7.1.7. Deploying the workload application

After the demonstration application has been created, you can deploy the workload application.

Prerequisites

  • The demonstration application has been created.

Procedure

  1. Deploy the application by running the following command:

    $ oc apply -f - << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: $APP_NAME
      namespace: $APP_NAMESPACE
    spec:
      selector:
        matchLabels:
          app: $APP_NAME
      template:
        metadata:
          labels:
            app: $APP_NAME
            deployment: $APP_NAME
        spec:
          serviceAccountName: $APP_NAME
          containers:
            - name: $APP_NAME
              image: "registry.redhat.io/ubi9/python-311:latest"
              command:
                - /bin/bash
                - "-c"
                - |
                  #!/bin/bash
                  pip install spiffe azure-cli
    
                  cat << 'PYEOF' > /opt/app-root/src/get-spiffe-token.py
                  #!/opt/app-root/bin/python
                  from spiffe import JwtSource
                  import argparse
                  parser = argparse.ArgumentParser(description='Retrieve SPIFFE Token.')
                  parser.add_argument("-a", "--audience", help="The audience to include in the token", required=True)
                  args = parser.parse_args()
                  with JwtSource() as source:
                    jwt_svid = source.fetch_svid(audience={args.audience})
                    print(jwt_svid.token)
                  PYEOF
    
                  chmod +x /opt/app-root/src/get-spiffe-token.py
                  while true; do sleep 10; done
              envFrom:
              - secretRef:
                  name: $APP_NAME
              env:
                - name: SPIFFE_ENDPOINT_SOCKET
                  value: unix:///run/spire/sockets/spire-agent.sock
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                readOnlyRootFilesystem: false
                runAsNonRoot: true
                seccompProfile:
                  type: RuntimeDefault
              ports:
                - containerPort: 8080
                  protocol: TCP
              volumeMounts:
                - name: spiffe-workload-api
                  mountPath: /run/spire/sockets
                  readOnly: true
          volumes:
            - name: spiffe-workload-api
              csi:
                driver: csi.spiffe.io
                readOnly: true
    EOF

Verification

  1. Ensure that the workload-app pod is running successfully by running the following command:

    $ oc get pods -n $APP_NAMESPACE

    Example output

    NAME                             READY     STATUS      RESTARTS      AGE
    workload-app-5f8b9d685b-abcde    1/1       Running     0             60s

  2. Retrieve the SPIFFE JWT Token (SVID-JWT):

    1. Get the pod name dynamically by running the following command:

      $ POD_NAME=$(oc get pods -n $APP_NAMESPACE -l app=$APP_NAME -o jsonpath='{.items[0].metadata.name}')
    2. Run the script inside the pod by running the following command:

      $ oc exec -it $POD_NAME -n $APP_NAMESPACE -- \
        /opt/app-root/src/get-spiffe-token.py -a "api://AzureADTokenExchange"

10.7.1.8. Configuring Azure with the SPIFFE identity federation

You can configure Azure with the SPIFFE identity federation to enable password-free and automated authentication to the demonstration application.

Procedure

  • Federate the identities between the User Managed Identity and the SPIFFE identity associated with the workload application by running the following command:

    $ az identity federated-credential create \
     --name ${NAME} \
     --identity-name ${USER_ASSIGNED_IDENTITY_NAME} \
     --resource-group ${RESOURCE_GROUP} \
     --issuer https://$JWT_ISSUER_ENDPOINT \
     --subject spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME \
     --audience api://AzureADTokenExchange
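The --subject value follows the SPIFFE ID template that the SPIRE deployment uses for Kubernetes workloads: spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>. The following sketch shows how that subject is composed; the trust domain value is illustrative, standing in for your $APP_DOMAIN:

```python
# Compose the SPIFFE ID used as the federated-credential subject.
# Template: spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>
def spiffe_subject(trust_domain: str, namespace: str, service_account: str) -> str:
    return f"spiffe://{trust_domain}/ns/{namespace}/sa/{service_account}"

# Illustrative values matching the environment variables in this procedure.
app_domain = "apps.example.com"  # assumption: stands in for $APP_DOMAIN
subject = spiffe_subject(app_domain, "demo", "workload-app")
print(subject)
# spiffe://apps.example.com/ns/demo/sa/workload-app
```

The subject registered in Azure must match the sub claim of the JWT-SVID exactly, otherwise the token exchange fails.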

You can verify that the application workload can access the Azure Blob Storage.

Prerequisites

  • An Azure Blob Storage container has been created.

Procedure

  1. Open a remote shell session into the application workload pod by running the following command:

    $ oc rsh -n $APP_NAMESPACE deployment/$APP_NAME
  2. Create and export an environment variable named TOKEN by running the following command:

    $ export TOKEN=$(/opt/app-root/src/get-spiffe-token.py --audience=$AZURE_AUDIENCE)
  3. Log in to Azure CLI included within the pod by running the following command:

    $ az login --service-principal \
      -t ${AZURE_TENANT_ID} \
      -u ${AZURE_CLIENT_ID} \
      --federated-token ${TOKEN}
  4. Create a new file within the application workload pod by running the following command:

    $ echo "Hello from OpenShift" > openshift-spire-federated-identities.txt
  5. Upload the file to the Azure Blob Storage by running the following command:

    $ az storage blob upload \
      --account-name ${BLOB_STORE_ACCOUNT} \
      --container-name ${BLOB_STORE_CONTAINER} \
      --name openshift-spire-federated-identities.txt \
      --file openshift-spire-federated-identities.txt \
      --auth-mode login

Verification

  • Confirm that the file uploaded successfully by listing the contents of the container with the following command:

    $ az storage blob list \
      --account-name ${BLOB_STORE_ACCOUNT} \
      --container-name ${BLOB_STORE_CONTAINER} \
      --auth-mode login \
      -o table

10.7.2. About Vault OpenID Connect

Vault OpenID Connect (OIDC) with SPIRE creates a secure authentication method where Vault uses SPIRE as a trusted OIDC provider. A workload requests a JWT-SVID from its local SPIRE Agent, which has a unique SPIFFE ID. The workload then presents this token to Vault, and Vault validates it against the public keys on the SPIRE Server. If all conditions are met, Vault issues a short-lived Vault token to the workload which the workload can now use to access secrets and perform actions within Vault.
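Vault identifies the workload by the sub claim inside the JWT-SVID. The following sketch builds a sample, unsigned token and decodes its payload to show where that claim lives; the token and SPIFFE ID are illustrative, and a real SVID is signed and must be validated against the SPIRE server's public keys, not decoded blindly like this:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample token with an illustrative SPIFFE ID in the sub claim.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(json.dumps(
    {"sub": "spiffe://cluster1.example.com/ns/demo/sa/client", "aud": ["client"]}
).encode()).rstrip(b"=").decode()
sample_token = f"{header}.{payload}."

claims = decode_jwt_payload(sample_token)
print(claims["sub"])
# spiffe://cluster1.example.com/ns/demo/sa/client
```

Vault checks this sub claim and the aud claim against the bound_claims and bound_audiences configured on the JWT role before issuing a Vault token.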

10.7.2.1. Installing Vault

Before Vault can be used as an OIDC provider, you need to install Vault.

Prerequisites

  • A route is configured. For more information, see Route configuration.
  • Helm is installed.
  • A command-line JSON processor, such as jq, is installed for reading the output from the Vault API.
  • The HashiCorp Helm repository is added.

Procedure

  1. Create the vault-helm-value.yaml file.

    global:
      enabled: true
      openshift: true # 1
      tlsDisable: true # 2
    injector:
      enabled: false
    server:
      ui:
        enabled: true
      image:
        repository: docker.io/hashicorp/vault
        tag: "1.19.0"
      dataStorage:
        enabled: true # 3
        size: 1Gi
      standalone:
        enabled: true # 4
        config: |
          listener "tcp" {
            tls_disable = 1 # 5
            address = "[::]:8200"
            cluster_address = "[::]:8201"
          }
          storage "file" {
            path = "/vault/data"
          }
      extraEnvironmentVars: {}
    1 Optimizes the deployment for OpenShift-specific security contexts.
    2 Disables TLS for Kubernetes objects created by the chart.
    3 Creates a 1Gi persistent volume to store Vault data.
    4 Deploys a single Vault pod.
    5 Tells the Vault server to not use TLS.
  2. Run the helm install command:

    $ helm install vault hashicorp/vault \
      --create-namespace -n vault \
      --values ./vault-helm-value.yaml
  3. Expose the Vault service by running the following command:

    $ oc expose service vault -n vault
  4. Set the VAULT_ADDR environment variable to retrieve the hostname from the new route and then export it by running the following command:

    $ export VAULT_ADDR="http://$(oc get route vault -n vault -o jsonpath='{.spec.host}')"
    Note

    http:// is prepended because TLS is disabled.

Verification

  • To ensure your Vault instance is running, run the following command:

    $ curl -s $VAULT_ADDR/v1/sys/health | jq

    Example output

    {
      "initialized": true,
      "sealed": true,
      "standby": true,
      "performance_standby": false,
      "replication_performance_mode": "disabled",
      "replication_dr_mode": "disabled",
      "server_time_utc": 1663786574,
      "version": "1.19.0",
      "cluster_name": "vault-cluster-a1b2c3d4",
      "cluster_id": "5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b"
    }

10.7.2.2. Initializing and unsealing Vault

A newly installed Vault is sealed. This means that the primary encryption key, which protects all other encryption keys, is not loaded into the server memory upon startup. You must initialize Vault and then unseal it with the generated unseal key.

The high-level steps for setting up Vault with SPIRE are:

  1. Initialize and unseal Vault
  2. Enable the key-value (KV) secrets engine and store a test secret
  3. Configure JSON Web Token (JWT) authentication with SPIRE
  4. Deploy a demonstration application
  5. Authenticate and retrieve the secret
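The vault operator init command used below emits JSON containing the unseal keys and root token. The following sketch shows how those fields are laid out; the sample document is illustrative, with placeholder key and token values:

```python
import json

# Illustrative shape of `vault operator init -key-shares=1 -key-threshold=1 -format=json` output.
sample_init_output = json.dumps({
    "unseal_keys_b64": ["exampleUnsealKeyBase64=="],
    "unseal_threshold": 1,
    "root_token": "hvs.exampleRootToken",
})

def parse_init(output: str) -> tuple:
    """Return (first unseal key, root token) from the init JSON."""
    doc = json.loads(output)
    return doc["unseal_keys_b64"][0], doc["root_token"]

unseal_key, root_token = parse_init(sample_init_output)
print(unseal_key, root_token)
```

With -key-shares=1 -key-threshold=1 there is exactly one unseal key; with larger values, any threshold-sized subset of the shares unseals the server.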

Prerequisites

  • Ensure that Vault is running.
  • Ensure that Vault is not initialized. You can only initialize a Vault server once.

Procedure

  1. Open a remote shell into the vault pod by running the following command:

    $ oc rsh -n vault statefulset/vault
  2. Initialize Vault to get your unseal key and root token by running the following command:

    $ vault operator init -key-shares=1 -key-threshold=1 -format=json
  3. Export the unseal key and root token you received from the earlier command by running the following commands:

    $ export UNSEAL_KEY=<Your-Unseal-Key>
    $ export ROOT_TOKEN=<Your-Root-Token>
  4. Unseal Vault using your unseal key by running the following command:

    $ vault operator unseal -format=json $UNSEAL_KEY
  5. Exit the pod by entering exit.

Verification

  • To verify that the Vault pod is ready, run the following command:

    $ oc get pod -n vault

    Example output

    NAME        READY        STATUS      RESTARTS     AGE
    vault-0     1/1          Running     0            65d

10.7.2.3. Enabling the key-value secrets engine

You enable the key-value (KV) secrets engine to establish a secure, centralized location for managing credentials.

Prerequisites

  • Make sure that Vault is initialized and unsealed.

Procedure

  1. Open another shell session in the Vault pod by running the following command:

    $ oc rsh -n vault statefulset/vault
  2. Export your root token again within this new session and log in by running the following commands:

    $ export ROOT_TOKEN=<Your-Root-Token>
    $ vault login "${ROOT_TOKEN}"
  3. Enable the KV secrets engine at the secret/ path and create a test secret by running the following commands:

    $ export NAME=ztwim
    $ vault secrets enable -path=secret kv
    $ vault kv put secret/$NAME version=v0.1.0

Verification

  • To verify that the secret is stored correctly, run the following command:

    $ vault kv get secret/$NAME

10.7.2.4. Configuring JSON Web Token authentication with SPIRE

You need to set up JSON Web Token (JWT) authentication so your applications can securely log in to Vault by using SPIFFE identities.

Prerequisites

  • Make sure that Vault is initialized and unsealed.
  • Ensure that a test secret is stored in the key-value secrets engine.

Procedure

  1. On your local machine, retrieve the SPIRE Certificate Authority (CA) bundle and save it to a file by running the following command:

    $ oc get cm -n zero-trust-workload-identity-manager spire-bundle -o jsonpath='{ .data.bundle\.crt }' > oidc_provider_ca.pem
  2. Back in the Vault pod shell, create a temporary file and paste the contents of oidc_provider_ca.pem into it by running the following command:

    $ cat << EOF > /tmp/oidc_provider_ca.pem
    -----BEGIN CERTIFICATE-----
    <Paste-Your-Certificate-Content-Here>
    -----END CERTIFICATE-----
    EOF
  3. Set up the necessary environment variables for the JWT configuration by running the following commands:

    $ export APP_DOMAIN=<Your-App-Domain>
    $ export JWT_ISSUER_ENDPOINT="oidc-discovery.$APP_DOMAIN"
    $ export OIDC_URL="https://$JWT_ISSUER_ENDPOINT"
    $ export OIDC_CA_PEM="$(cat /tmp/oidc_provider_ca.pem)"
  4. Create an environment variable for the role name by running the following command:

    $ export ROLE="${NAME}-role"
  5. Enable the JWT authentication method by running the following command:

    $ vault auth enable jwt
  6. Configure the OIDC authentication method by running the following command:

    $ vault write auth/jwt/config \
      oidc_discovery_url=$OIDC_URL \
      oidc_discovery_ca_pem="$OIDC_CA_PEM" \
      default_role=$ROLE
  7. Create an environment variable for the policy name by running the following command:

    $ export POLICY="${NAME}-policy"
  8. Grant read access to the secret you created earlier by running the following command:

    $ vault policy write $POLICY -<<EOF
    path "secret/$NAME" {
        capabilities = ["read"]
    }
    EOF
  9. Create the following environment variables by running the following commands:

    $ export APP_NAME=client
    $ export APP_NAMESPACE=demo
    $ export AUDIENCE=$APP_NAME
  10. Create a JWT role that binds the policy to workloads with a specific SPIFFE ID by running the following command:

    $ vault write auth/jwt/role/$ROLE -<<EOF
    {
      "role_type": "jwt",
      "user_claim": "sub",
      "bound_audiences": "$AUDIENCE",
      "bound_claims_type": "glob",
      "bound_claims": {
        "sub": "spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME"
      },
      "token_ttl": "24h",
      "token_policies": "$POLICY"
    }
    EOF
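With "bound_claims_type": "glob", Vault matches the token's sub claim against the configured pattern using glob semantics. A rough illustration using Python's fnmatch (the actual matching happens inside Vault, and the trust domain below is illustrative):

```python
from fnmatch import fnmatchcase

def sub_claim_allowed(sub: str, bound_pattern: str) -> bool:
    """Glob-style check akin to Vault's bound_claims_type=glob matching."""
    return fnmatchcase(sub, bound_pattern)

# An exact SPIFFE ID bound to the role admits only that workload.
pattern = "spiffe://apps.example.com/ns/demo/sa/client"
assert sub_claim_allowed("spiffe://apps.example.com/ns/demo/sa/client", pattern)
assert not sub_claim_allowed("spiffe://apps.example.com/ns/demo/sa/other", pattern)

# A wildcard pattern would admit every service account in the namespace.
assert sub_claim_allowed("spiffe://apps.example.com/ns/demo/sa/other",
                         "spiffe://apps.example.com/ns/demo/sa/*")
```

Binding the exact SPIFFE ID, as in the role above, is the tighter choice; wildcards widen the set of workloads that can assume the role.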

10.7.2.5. Deploying a demonstration application

When you deploy a demonstration application, you create a simple client application that uses its SPIFFE identity to authenticate with Vault.

Procedure

  1. On your local machine, set the environment variables for your application by running the following commands:

    $ export APP_NAME=client
    $ export APP_NAMESPACE=demo
    $ export AUDIENCE=$APP_NAME
  2. Apply the Kubernetes manifest to create the namespace, service account, and deployment for the demo app by running the following command. This deployment mounts the SPIFFE CSI driver socket.

    $ oc apply -f - <<EOF
    # ... (paste the full YAML from your provided code here) ...
    EOF

Verification

  • Verify that the client deployment is ready by running the following command:

    $ oc get deploy -n $APP_NAMESPACE

    Example output

    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    client    1/1     1            1           60s

10.7.2.6. Authenticating and retrieving the secret

You use the demonstration application to fetch a JWT token from the SPIFFE Workload API and use it to log in to Vault and retrieve the secret.

Procedure

  1. Fetch a JWT-SVID by running the following command inside the running client pod:

    $ oc -n $APP_NAMESPACE exec -it $(oc get pod -o=jsonpath='{.items[*].metadata.name}' -l app=$APP_NAME -n $APP_NAMESPACE) \
      -- /opt/spire/bin/spire-agent api fetch jwt \
      -socketPath /run/spire/sockets/spire-agent.sock \
      -audience $AUDIENCE
  2. Copy the token from the output and export it as an environment variable on your local machine by running the following command:

    $ export IDENTITY_TOKEN=<Your-JWT-Token>
  3. Create an environment variable for the role name by running the following command:

    $ export ROLE="${NAME}-role"
  4. Use curl to send the JWT token to the Vault login endpoint to get a Vault client token by running the following command:

    $ VAULT_TOKEN=$(curl -s --request POST --data '{ "jwt": "'"${IDENTITY_TOKEN}"'", "role": "'"${ROLE}"'"}' "${VAULT_ADDR}"/v1/auth/jwt/login | jq -r '.auth.client_token')

Verification

  • Use the newly acquired Vault token to read the secret from the KV store by running the following command:

    $ curl -s -H "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/$NAME | jq

    You should see the contents of the secret ("version": "v0.1.0") in the output, confirming that the entire workflow is successful.

10.8. Zero Trust Workload Identity Manager SPIRE federation

Configure SPIRE federation to enable workloads in different trust domains to securely authenticate each other across clusters, cloud providers, and organizational boundaries. By establishing trust relationships between separate SPIRE deployments, you can build a zero-trust architecture that spans multiple environments without compromising security or sharing secrets.

Federation works by securely sharing trust bundles between SPIRE servers through dedicated federation endpoints. Each SPIRE deployment maintains its own trust domain and cryptographic identity, while being able to verify identities from federated trust domains. This approach enables cross-cluster communication, multi-cloud deployments, and secure integration with external partners.
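Trust bundles are exchanged in JSON Web Key Set (JWKS) format; the refresh_hint tells peers how often to re-fetch the bundle. The following sketch reads the fields a federating SPIRE server consumes from a fetched bundle; the sample document is abbreviated and illustrative:

```python
import json

# Abbreviated example of a SPIFFE trust bundle in JWKS format.
sample_bundle = json.dumps({
    "keys": [{"use": "x509-svid", "kty": "RSA", "n": "...", "e": "AQAB", "x5c": ["..."]}],
    "spiffe_sequence": 1,
    "refresh_hint": 300,
})

def bundle_summary(raw: str) -> dict:
    """Extract the fields a federating SPIRE server cares about."""
    doc = json.loads(raw)
    return {
        "n_keys": len(doc["keys"]),
        "uses": sorted({k["use"] for k in doc["keys"]}),
        "refresh_hint": doc.get("refresh_hint"),
        "sequence": doc.get("spiffe_sequence"),
    }

print(bundle_summary(sample_bundle))
```

The spiffe_sequence value increases as the bundle rotates, which is how peers detect that a cached bundle is stale.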

Setting up SPIRE federation involves the following high-level steps:

  1. Choose an authentication profile: Select either https_spiffe or https_web.
  2. Configure the bundle endpoints: Each cluster exposes its trust bundle through a federation endpoint secured by the chosen authentication profile.
  3. Bootstrap the initial trust: Manually fetch and configure the initial trust bundle from each remote cluster.
  4. Establish federation relationships: Create ClusterFederatedTrustDomain resources to define which clusters trust each other.
  5. Configure automatic synchronization: The SPIRE Controller Manager automatically keeps trust bundles synchronized after initial setup.

10.8.1. Understanding bundle endpoint profiles

The bundle endpoint profile determines how your cluster exposes its trust bundle to other SPIRE deployments and how it authenticates remote clusters accessing the bundle. Choose the profile that best matches your security requirements and infrastructure.

The Zero Trust Workload Identity Manager supports two authentication profiles for federation:

https_spiffe
Uses SPIFFE-based TLS authentication. The SPIRE server presents its own SVID (SPIFFE Verifiable Identity Document) to authenticate itself to remote SPIRE servers. This profile provides strong cryptographic identity verification and is ideal for federation between SPIRE deployments.
https_web
Uses standard Web PKI (X.509 certificates from public or private certificate authorities). This profile supports both automatic certificate management via ACME (Let’s Encrypt) and manual certificate management using tools like cert-manager.

The following table summarizes the key differences between the two profiles:

Criteria                 https_spiffe                         https_web

Authentication method    SPIFFE SVID (TLS)                    X.509 certificate from CA

Certificate management   Automatic (SPIRE-managed)            ACME (automatic) or manual

Trust model              SPIFFE trust domain                  Web PKI / CA trust

Best for                 Internal SPIRE-to-SPIRE federation   External federation, public endpoints

Security level           Very high (cryptographic identity)   High (CA-based trust)

Setup complexity         Medium (requires SPIFFE IDs)         Low (ACME) to Medium (manual certs)

Important

After enablement, federation cannot be disabled. The bundle endpoint profile is immutable once configured. Changing the profile or disabling federation requires reinstallation of the system. However, peer configurations (federatesWith) remain dynamic and can be added or removed at any time. Plan your profile selection carefully based on your long-term federation requirements.

10.8.2. Federation configuration examples

The following examples demonstrate different SPIRE federation configurations. Use these as templates when setting up federation between your clusters.

Example 1: Using ACME for automatic certificate management

The following example shows how to configure federation using Let’s Encrypt for automatic certificate provisioning and renewal:

apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: cluster1.example.com
  federation:
    bundleEndpoint:
      profile: https_web
      refreshHint: 300
      httpsWeb:
        acme:
          directoryUrl: https://acme-v02.api.letsencrypt.org/directory
          domainName: federation.apps.cluster1.example.com
          email: admin@example.com
          tosAccepted: "true"
    federatesWith:
      - trustDomain: cluster2.example.com
        bundleEndpointUrl: https://federation.apps.cluster2.example.com
        bundleEndpointProfile: https_web
      - trustDomain: cluster3.example.com
        bundleEndpointUrl: https://federation.apps.cluster3.example.com
        bundleEndpointProfile: https_web
    managedRoute: "true"
Example 2: Using manual certificate management with cert-manager

The following example shows how to configure federation using externally managed certificates:

apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: cluster1.example.com
  federation:
    bundleEndpoint:
      profile: https_web
      refreshHint: 300
      httpsWeb:
        servingCert:
          fileSyncInterval: 86400
          externalSecretRef: spire-server-federation-tls
    federatesWith:
      - trustDomain: cluster2.example.com
        bundleEndpointUrl: https://federation.apps.cluster2.example.com
        bundleEndpointProfile: https_web
      - trustDomain: cluster3.example.com
        bundleEndpointUrl: https://federation.apps.cluster3.example.com
        bundleEndpointProfile: https_web
    managedRoute: "true"
  • The fileSyncInterval field sets how often, in seconds, to check for certificate updates; 86400 checks every 24 hours.
  • The externalSecretRef field is the name of the Kubernetes Secret containing tls.crt and tls.key.
Example 3: Using https_spiffe profile for SPIRE-to-SPIRE federation

The following example shows how to configure federation using SPIFFE-based TLS authentication:

apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: cluster1.example.com
  federation:
    bundleEndpoint:
      profile: https_spiffe
      refreshHint: 300
    federatesWith:
      - trustDomain: cluster2.example.com
        bundleEndpointUrl: https://federation.apps.cluster2.example.com
        bundleEndpointProfile: https_spiffe
        endpointSpiffeId: spiffe://cluster2.example.com/spire/server
      - trustDomain: cluster3.example.com
        bundleEndpointUrl: https://federation.apps.cluster3.example.com
        bundleEndpointProfile: https_spiffe
        endpointSpiffeId: spiffe://cluster3.example.com/spire/server
    managedRoute: "true"
  • The profile field uses the https_spiffe profile for SPIFFE-based TLS authentication.
  • The endpointSpiffeId field contains the SPIFFE ID of the remote SPIRE server, which is required for identity validation.
Example 4: Mixed federation with multiple authentication profiles

The following example shows a cluster federating with multiple remote clusters using different authentication profiles:

apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: internal-cluster.example.com
  federation:
    bundleEndpoint:
      profile: https_spiffe
      refreshHint: 300
    federatesWith:
      # Internal cluster using SPIFFE TLS
      - trustDomain: dev-cluster.example.com
        bundleEndpointUrl: https://federation.apps.dev-cluster.example.com
        bundleEndpointProfile: https_spiffe
        endpointSpiffeId: spiffe://dev-cluster.example.com/spire/server
      # External partner using Web PKI
      - trustDomain: partner.example.com
        bundleEndpointUrl: https://federation.partner.example.com
        bundleEndpointProfile: https_web
      # Another external partner using Web PKI
      - trustDomain: vendor.example.com
        bundleEndpointUrl: https://spire-federation.vendor.example.com
        bundleEndpointProfile: https_web
    managedRoute: "true"
  • The profile field specifies that this cluster exposes its own bundle endpoint using the https_spiffe profile.
  • The bundleEndpointProfile field specifies the authentication profile to use for each remote cluster, which can differ per peer.

10.8.3. Configuring SPIRE federation with the https_spiffe profile

The Zero Trust Workload Identity Manager includes SPIRE Federation support, allowing multiple independent SPIRE deployments to establish trust relationships. This procedure demonstrates how to configure federation using the https_spiffe profile, which uses SPIFFE-based TLS authentication between SPIRE servers.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have installed the Zero Trust Workload Identity Manager on all clusters that will participate in the federation.
  • You have cluster-admin privileges on all participating clusters.
  • You have network connectivity between the clusters you intend to federate.

Procedure

  1. Configure the SpireServer custom resource on each cluster to enable federation with the https_spiffe profile. The https_spiffe profile uses SPIFFE-based TLS authentication, where SPIRE servers authenticate to each other using their own SVIDs (SPIFFE Verifiable Identity Documents).

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      trustDomain: cluster1.example.com
      federation:
        bundleEndpoint:
          profile: https_spiffe
          refreshHint: 300
        managedRoute: "true"
    • The trustDomain field sets a unique trust domain for each cluster.
    • The profile field uses the https_spiffe profile for SPIFFE-based TLS authentication.
    • The refreshHint field suggests an interval, in seconds, at which remote servers refresh the trust bundle. Range: 60-3600 seconds.
    • The managedRoute field enables automatic route creation by the Operator.
  2. Apply the configuration changes by running the following command:

    $ oc apply -f spire-server.yaml
  3. Check the status of the SPIRE Server by entering the following command. Wait for the Ready status to be returned.

    $ oc get spireserver cluster -w
  4. Verify that the federation route has been created:

    $ oc get route -n zero-trust-workload-identity-manager | grep federation

    Example output

    NAME                      HOST/PORT                                    PATH   SERVICES        PORT    TERMINATION
    spire-server-federation   federation.apps.cluster1.example.com               spire-server     8443    passthrough

  5. Fetch the trust bundle from each remote cluster’s federation endpoint:

    $ curl -k https://federation.apps.cluster2.example.com > cluster2-bundle.json
    Note

    For the https_spiffe profile, you might need to use the -k flag if the certificate is not trusted by your system’s CA bundle.

    The response contains the trust bundle in JSON Web Key Set (JWKS) format:

    Example trust bundle

    {
      "keys": [
        {
          "use": "x509-svid",
          "kty": "RSA",
          "n": "...",
          "e": "AQAB",
          "x5c": ["..."]
        }
      ],
      "spiffe_sequence": 1,
      "refresh_hint": 300
    }

  6. Create ClusterFederatedTrustDomain resources for each remote trust domain.

    1. On Cluster 1, create a resource to federate with Cluster 2:

      apiVersion: spire.spiffe.io/v1alpha1
      kind: ClusterFederatedTrustDomain
      metadata:
        name: cluster2-federation
      spec:
        trustDomain: cluster2.example.com
        bundleEndpointURL: https://federation.apps.cluster2.example.com
        bundleEndpointProfile:
          type: https_spiffe
          endpointSPIFFEID: spiffe://cluster2.example.com/spire/server
        trustDomainBundle: |
          {
            "keys": [
              {
                "use": "x509-svid",
                "kty": "RSA",
                "n": "...",
                "e": "AQAB",
                "x5c": ["..."]
              }
            ],
            "spiffe_sequence": 1
          }
      • The endpointSPIFFEID field contains the SPIFFE ID of the remote SPIRE Server. It is required for the https_spiffe profile to validate the remote server’s identity.
      • The trustDomainBundle field contains the complete trust bundle JSON that you fetched in the previous step.
  7. Apply the ClusterFederatedTrustDomain resource by running the following command:

    $ oc apply -f clusterfederatedtrustdomain.yaml
  8. Repeat steps 5-7 on each cluster for every remote cluster it should federate with. For bidirectional federation, each cluster needs a ClusterFederatedTrustDomain resource for every other cluster.
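With more than two clusters, these per-pair steps become repetitive. The following shell sketch shows one way to script steps 5-7; the helper name, example hostnames, and trust domains are assumptions for illustration, not part of the product:

```shell
# Hypothetical helper: render a ClusterFederatedTrustDomain manifest for the
# https_spiffe profile from a trust bundle file fetched with curl.
generate_cftd() {
  local name="$1" trust_domain="$2" endpoint_url="$3" bundle_file="$4"
  cat <<EOF
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterFederatedTrustDomain
metadata:
  name: ${name}
spec:
  trustDomain: ${trust_domain}
  bundleEndpointURL: ${endpoint_url}
  bundleEndpointProfile:
    type: https_spiffe
    endpointSPIFFEID: spiffe://${trust_domain}/spire/server
  trustDomainBundle: |
$(sed 's/^/    /' "${bundle_file}")
EOF
}

# Example usage (placeholder hostnames):
#   curl -k https://federation.apps.cluster2.example.com > cluster2-bundle.json
#   generate_cftd cluster2-federation cluster2.example.com \
#     https://federation.apps.cluster2.example.com cluster2-bundle.json | oc apply -f -
```

The helper indents the fetched bundle under the trustDomainBundle literal block, which is the error-prone part when editing the manifest by hand.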
  9. Update the SpireServer resource on each cluster to add the federatesWith configuration:

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      trustDomain: cluster1.example.com
      federation:
        bundleEndpoint:
          profile: https_spiffe
          refreshHint: 300
        federatesWith:
          - trustDomain: cluster2.example.com
            bundleEndpointUrl: https://federation.apps.cluster2.example.com
            bundleEndpointProfile: https_spiffe
            endpointSpiffeId: spiffe://cluster2.example.com/spire/server
          - trustDomain: cluster3.example.com
            bundleEndpointUrl: https://federation.apps.cluster3.example.com
            bundleEndpointProfile: https_spiffe
            endpointSpiffeId: spiffe://cluster3.example.com/spire/server
        managedRoute: "true"
    • The federatesWith field lists all remote trust domains this cluster should federate with.
  10. Apply the updated configuration by running the following command:

    $ oc apply -f spireserver.yaml

Verification

  1. Verify that the ClusterFederatedTrustDomain resources have been created by running the following command:

    $ oc get clusterfederatedtrustdomains

    Example output

    NAME                  TRUST DOMAIN           ENDPOINT URL                                      AGE
    cluster2-federation   cluster2.example.com   https://federation.apps.cluster2.example.com     5m
    cluster3-federation   cluster3.example.com   https://federation.apps.cluster3.example.com     5m

  2. Check the status of a ClusterFederatedTrustDomain to ensure bundle synchronization is working by running the following command:

    $ oc describe clusterfederatedtrustdomain cluster2-federation

    Look for successful status conditions indicating that the trust bundle has been synchronized.

  3. Verify that the federation endpoint is accessible by running the following command:

    $ curl https://federation.apps.cluster1.example.com

    You should receive a JSON response containing the trust bundle.

  4. Check the SPIRE Server logs to confirm federation is active by running the following command:

    $ oc logs -n zero-trust-workload-identity-manager \
        statefulset/spire-server -c spire-server --tail=50

    Look for log messages indicating successful bundle synchronization with federated trust domains.

10.8.4. Using SPIRE federation with the Automatic Certificate Management Environment (ACME) protocol

Using SPIRE federation with the Automatic Certificate Management Environment (ACME) protocol provides automatic certificate provisioning from Let’s Encrypt. ACME also enables automatic certificate renewal before expiration, eliminating manual certificate management overhead.

Prerequisites

  • You have installed the Zero Trust Workload Identity Manager on all clusters that will participate in the federation.
  • You have installed the OpenShift CLI (oc).
  • You have cluster-admin privileges on all participating clusters.
  • Your federation endpoints must be publicly accessible for Let’s Encrypt HTTP-01 challenge validation.
  • You have network connectivity between all federated clusters.

Procedure

  1. Configure the SpireServer custom resource on each cluster to enable federation with ACME certificate management.

    Create or update your SpireServer resource with the federation configuration:

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      trustDomain: cluster1.example.com
      federation:
        bundleEndpoint:
          profile: https_web
          refreshHint: 300
          httpsWeb:
            acme:
              directoryUrl: https://acme-v02.api.letsencrypt.org/directory
              domainName: federation.apps.cluster1.example.com
              email: admin@example.com
              tosAccepted: "true"
        managedRoute: "true"
    • The trustDomain field sets a unique trust domain for each cluster (for example, cluster1.example.com, cluster2.example.com).
    • The profile field uses the https_web profile for ACME-based certificate management.
    • The directoryUrl field contains the Let’s Encrypt production directory URL. For testing, use: https://acme-staging-v02.api.letsencrypt.org/directory.
    • The domainName field is the domain name where your federation endpoint is accessible. This is automatically set to federation.<cluster-apps-domain> if managedRoute is set to "true".
    • The email field is your email address for ACME account registration and certificate expiration notifications.
    • The tosAccepted field accepts the Let’s Encrypt Terms of Service.
    • The managedRoute field enables automatic creation of a route for the federation bundle endpoint by the Operator.
  2. Apply the configuration to each cluster by running the following command:

    $ oc apply -f spireserver.yaml
  3. Check the status of the SPIRE Server by entering the following command. Wait for the Ready status to be returned before proceeding to the next step.

    $ oc get spireserver cluster -w

    Example output

    NAME      STATUS   AGE
    cluster   Ready    5m

  4. Verify that the federation route has been created by running the following command:

    $ oc get route -n zero-trust-workload-identity-manager | grep federation

    Example output

    NAME                      HOST/PORT                                          PATH   SERVICES        PORT   TERMINATION
    spire-server-federation   federation.apps.cluster1.example.com                     spire-server     8443    passthrough

  5. On each cluster, fetch the trust bundle from the federation endpoint by running the following command:

    $ curl https://federation.apps.cluster1.example.com > cluster1-bundle.json

    The response contains the trust bundle in JSON Web Key Set (JWKS) format:

    Example trust bundle

    {
      "keys": [
        {
          "use": "x509-svid",
          "kty": "RSA",
          "n": "...",
          "e": "AQAB",
          "x5c": ["..."]
        }
      ],
      "spiffe_sequence": 1,
      "refresh_hint": 300
    }

  6. Create ClusterFederatedTrustDomain resources to establish federation relationships.

    1. On Cluster 1, create resources to federate with Cluster 2 and Cluster 3:

      apiVersion: spire.spiffe.io/v1alpha1
      kind: ClusterFederatedTrustDomain
      metadata:
        name: cluster2-federation
      spec:
        trustDomain: cluster2.example.com
        bundleEndpointURL: https://federation.apps.cluster2.example.com
        bundleEndpointProfile:
          type: https_web
        trustDomainBundle: |
          {
            "keys": [...],
            "spiffe_sequence": 1
          }
      ---
      apiVersion: spire.spiffe.io/v1alpha1
      kind: ClusterFederatedTrustDomain
      metadata:
        name: cluster3-federation
      spec:
        trustDomain: cluster3.example.com
        bundleEndpointURL: https://federation.apps.cluster3.example.com
        bundleEndpointProfile:
          type: https_web
        trustDomainBundle: |
          {
            "keys": [...],
            "spiffe_sequence": 1
          }
      • The trustDomainBundle field contains the complete trust bundle JSON that you fetched using curl in step 5.
  7. Apply the ClusterFederatedTrustDomain resources by running the following command:

    $ oc apply -f cluster-federated-trust-domains.yaml
  8. Repeat steps 6 and 7 on each cluster to establish bidirectional federation. Each cluster needs ClusterFederatedTrustDomain resources for every other cluster it federates with.
  9. Update the SpireServer resource on each cluster to add the federatesWith configuration:

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      # ... existing configuration ...
      federation:
        bundleEndpoint:
          # ... existing bundleEndpoint configuration ...
        federatesWith:
          - trustDomain: cluster2.example.com
            bundleEndpointUrl: https://federation.apps.cluster2.example.com
            bundleEndpointProfile: https_web
          - trustDomain: cluster3.example.com
            bundleEndpointUrl: https://federation.apps.cluster3.example.com
            bundleEndpointProfile: https_web
        managedRoute: "true"
    • The federatesWith field lists all remote trust domains this cluster should federate with.
  10. Apply the updated configuration by running the following command:

    $ oc apply -f spireserver.yaml

Verification

  1. Verify that the ClusterFederatedTrustDomain resources have been created by running the following command:

    $ oc get clusterfederatedtrustdomains

    Example output

    NAME                  TRUST DOMAIN          ENDPOINT URL                                   AGE
    cluster2-federation   cluster2.example.com  https://federation.apps.cluster2.example.com   5m
    cluster3-federation   cluster3.example.com  https://federation.apps.cluster3.example.com   5m

  2. Check the status of a ClusterFederatedTrustDomain to ensure bundle synchronization is working by running the following command:

    $ oc describe clusterfederatedtrustdomain cluster2-federation

    Look for Successful status conditions indicating that the trust bundle has been synchronized.

  3. Verify that the federation endpoint is accessible and serving the trust bundle by running the following command:

    $ curl https://federation.apps.cluster1.example.com

    You should receive a JSON response containing the trust bundle.

  4. Check the SPIRE Server logs to confirm federation is active by running the following command:

    $ oc logs -n zero-trust-workload-identity-manager statefulset/spire-server -c spire-server --tail=50

    Look for log messages indicating successful bundle synchronization with federated trust domains.

  5. Verify that all SPIRE components are running by running the following command:

    $ oc get pods -n zero-trust-workload-identity-manager

    Example output

    NAME                    READY   STATUS    RESTARTS   AGE
    spire-agent-abcde       1/1     Running   0          10m
    spire-server-0          2/2     Running   0          10m

  6. Optional: Test cross-cluster workload authentication by deploying workloads with SPIFFE identities on different clusters and verifying they can authenticate to each other using the federated trust.
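As an additional sanity check, you can inspect which CA issued the certificate presented by a federation endpoint; for an ACME-provisioned production certificate the issuer string contains "Let's Encrypt". A minimal sketch (the helper name and example hostname are assumptions):

```shell
# Hypothetical helper: print the issuer of the TLS certificate served by a
# federation endpoint. Requires network access to the endpoint.
check_issuer() {
  local host="$1"
  echo | openssl s_client -connect "${host}:443" -servername "${host}" 2>/dev/null \
    | openssl x509 -noout -issuer
}

# Example (placeholder hostname):
#   check_issuer federation.apps.cluster1.example.com
```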

10.8.5. Using SPIRE federation with manual certificate management

You can use SPIRE federation with custom certificate management using cert-manager or other certificate providers. This approach provides flexibility for organizations that require control over certificate issuance, support for internal certificate authorities (CAs), or integration with existing certificate management infrastructure.

Prerequisites

  • You have installed the Zero Trust Workload Identity Manager on all clusters that will participate in the federation.
  • You have installed the OpenShift CLI (oc).
  • You have cluster-admin privileges on all participating clusters.
  • You have installed the cert-manager Operator for Red Hat OpenShift. For more information, see cert-manager Operator for Red Hat OpenShift.
  • Your federation endpoints must be publicly accessible for certificate validation.
  • You have network connectivity between all federated clusters.

Procedure

  1. Install the cert-manager Operator on the cluster where you want to use externally managed certificates.

    Create a namespace and install the operator:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: cert-manager-operator
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-cert-manager-operator
      namespace: cert-manager-operator
    spec:
      upgradeStrategy: Default
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-cert-manager-operator
      namespace: cert-manager-operator
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: openshift-cert-manager-operator
      channel: stable-v1
  2. Apply the cert-manager installation by running the following command:

    $ oc apply -f cert-manager-install.yaml
  3. Check the status of the cert-manager Operator by entering the following command:

    $ oc get pods -n cert-manager

    All cert-manager pods should be in Running status.

  4. Create an Issuer for certificate provisioning.

    For Let’s Encrypt with HTTP-01 challenge:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-http01
      namespace: zero-trust-workload-identity-manager
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-account-key
        solvers:
          - http01:
              ingress:
                ingressClassName: openshift-default

    Alternatively, for an internal CA:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: internal-ca
      namespace: zero-trust-workload-identity-manager
    spec:
      ca:
        secretName: internal-ca-key-pair
  5. Apply the Issuer by running the following command:

    $ oc apply -f issuer.yaml
  6. Determine the federation endpoint domain name.

    The federation route follows a predictable naming pattern if managedRoute is set to true. Get your cluster’s application domain by running the following command:

    $ CLUSTER_DOMAIN=$(oc get ingresses.config/cluster -o jsonpath='{.spec.domain}')
    $ FEDERATION_DOMAIN="federation.${CLUSTER_DOMAIN}"
    $ echo "Federation domain will be: $FEDERATION_DOMAIN"

    Example output

    Federation domain will be: federation.apps.cluster1.example.com

    Note

    The federation route is created automatically if managedRoute is set to true when you apply the SpireServer configuration in a later step. The route name is spire-server-federation and the hostname is federation.<cluster-apps-domain>.

  7. Create a Certificate resource to request a TLS certificate.

    Use the federation domain determined in the previous step:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: spire-server-federation-tls
      namespace: zero-trust-workload-identity-manager
    spec:
      secretName: spire-server-federation-tls
      duration: 2160h
      renewBefore: 360h
      commonName: federation.apps.cluster1.example.com
      dnsNames:
        - federation.apps.cluster1.example.com
      usages:
        - server auth
        - digital signature
        - key encipherment
      issuerRef:
        kind: Issuer
        name: letsencrypt-http01
    • The secretName field must match the externalSecretRef value in the SpireServer resource.
    • The duration field sets the certificate validity period. 2160h corresponds to the 90-day validity of Let’s Encrypt certificates.
    • The renewBefore field sets how long before expiration the certificate is renewed. 360h renews the certificate 15 days before it expires.
    • The commonName field must be replaced with your actual federation domain from the previous step.
    • The dnsNames field must match the commonName and the actual route hostname that was created.
    • The issuerRef.name field must reference the Issuer that you created earlier.
  8. Apply the Certificate resource by running the following command:

    $ oc apply -f certificate.yaml
  9. Monitor the certificate issuance by running the following command:

    $ oc get certificate spire-server-federation-tls \
        -n zero-trust-workload-identity-manager -w

    Example output when ready

    NAME                            READY   SECRET                          AGE
    spire-server-federation-tls     True    spire-server-federation-tls     2m

  10. Create RBAC permissions for the OpenShift Ingress Router to access the certificate secret.

    Create a Role by running the following command:

    $ oc create role secret-reader \
        --verb=get,list,watch \
        --resource=secrets \
        --resource-name=spire-server-federation-tls \
        -n zero-trust-workload-identity-manager

    Create a RoleBinding by running the following command:

    $ oc create rolebinding secret-reader-binding \
        --role=secret-reader \
        --serviceaccount=openshift-ingress:router \
        -n zero-trust-workload-identity-manager
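The two oc create commands above are equivalent to the following declarative manifests, shown as a sketch for teams that keep RBAC objects in version control (the names match the commands):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: zero-trust-workload-identity-manager
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["spire-server-federation-tls"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: zero-trust-workload-identity-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
  - kind: ServiceAccount
    name: router
    namespace: openshift-ingress
```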
  11. Configure the SpireServer custom resource to use manual certificate management.

    Now that the certificate is ready, configure the SpireServer to reference it:

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      trustDomain: cluster1.example.com
      federation:
        bundleEndpoint:
          profile: https_web
          refreshHint: 300
          httpsWeb:
            servingCert:
              fileSyncInterval: 86400
              externalSecretRef: spire-server-federation-tls
        managedRoute: "true"
    • The profile field must use the https_web profile for certificate-based authentication.
    • The fileSyncInterval field sets how often the server checks for certificate updates, in seconds. A value of 86400 checks every 24 hours. Valid range: 3600-7776000 seconds.
    • The externalSecretRef field is the name of the secret containing the TLS certificate and private key. It must match the certificate secret created in the previous steps.
  12. Apply the configuration by running the following command:

    $ oc apply -f spireserver.yaml
  13. Wait for the SPIRE Server to be ready:

    $ oc get spireserver cluster -n zero-trust-workload-identity-manager -w

    Wait until the status shows Ready.

  14. Verify that the federation route was created by running the following command:

    $ oc get route spire-server-federation -n zero-trust-workload-identity-manager

    Example output

    NAME                      HOST/PORT                                  PATH   SERVICES        PORT    TERMINATION
    spire-server-federation   federation.apps.cluster1.example.com              spire-server    8443    reencrypt

    Verify that the route hostname matches the domain name used in your certificate.

  15. Verify that the federation endpoint is accessible by running the following command:

    $ curl https://$(oc get route spire-server-federation \
        -n zero-trust-workload-identity-manager \
        -o jsonpath='{.spec.host}')

    You should receive a JSON response containing the trust bundle.

  16. Fetch the trust bundle from each federation endpoint that you want to federate with.

    For each remote cluster, fetch its trust bundle by running the following commands:

    $ curl https://federation.apps.cluster1.example.com > cluster1-bundle.json
    $ curl https://federation.apps.cluster2.example.com > cluster2-bundle.json

    The trust bundle is in JSON Web Key Set (JWKS) format:

    Example trust bundle

    {
      "keys": [
        {
          "use": "x509-svid",
          "kty": "RSA",
          "n": "xGOzB...",
          "e": "AQAB",
          "x5c": ["MIIC..."]
        }
      ],
      "spiffe_sequence": 1,
      "refresh_hint": 300
    }
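Before pasting a fetched bundle into a ClusterFederatedTrustDomain resource, it is worth checking that the file is valid JSON and actually contains X.509 SVID keys. A minimal sketch (the helper name is hypothetical; the file name matches the curl commands above):

```shell
# Hypothetical helper: sanity-check a fetched trust bundle file.
validate_bundle() {
  local bundle_file="$1"
  # Must parse as JSON.
  python3 -m json.tool "${bundle_file}" > /dev/null 2>&1 || { echo "invalid JSON"; return 1; }
  # Must carry at least one x509-svid key entry.
  grep -q '"x509-svid"' "${bundle_file}" || { echo "no x509-svid keys"; return 1; }
  echo "bundle OK"
}

# Example:
#   validate_bundle cluster1-bundle.json
```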

  17. Create ClusterFederatedTrustDomain resources for each remote trust domain you want to federate with:

    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterFederatedTrustDomain
    metadata:
      name: cluster1-federation
    spec:
      trustDomain: cluster1.example.com
      bundleEndpointURL: https://federation.apps.cluster1.example.com
      bundleEndpointProfile:
        type: https_web
      trustDomainBundle: |
        {
          "keys": [
            {
              "use": "x509-svid",
              "kty": "RSA",
              "n": "xGOzB...",
              "e": "AQAB",
              "x5c": ["MIIC..."]
            }
          ],
          "spiffe_sequence": 1
        }
    ---
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterFederatedTrustDomain
    metadata:
      name: cluster2-federation
    spec:
      trustDomain: cluster2.example.com
      bundleEndpointURL: https://federation.apps.cluster2.example.com
      bundleEndpointProfile:
        type: https_web
      trustDomainBundle: |
        {
          "keys": [...],
          "spiffe_sequence": 1
        }
    • The trustDomainBundle field contains the complete trust bundle JSON that you fetched in the previous step.
  18. Apply the ClusterFederatedTrustDomain resources by running the following command:

    $ oc apply -f clusterfederatedtrustdomains.yaml
  19. Update the SpireServer resource to add the federatesWith configuration:

    apiVersion: operator.openshift.io/v1alpha1
    kind: SpireServer
    metadata:
      name: cluster
    spec:
      trustDomain: cluster3.example.com
      federation:
        bundleEndpoint:
          profile: https_web
          refreshHint: 300
          httpsWeb:
            servingCert:
              fileSyncInterval: 86400
              externalSecretRef: spire-server-federation-tls
        federatesWith:
          - trustDomain: cluster1.example.com
            bundleEndpointUrl: https://federation.apps.cluster1.example.com
            bundleEndpointProfile: https_web
          - trustDomain: cluster2.example.com
            bundleEndpointUrl: https://federation.apps.cluster2.example.com
            bundleEndpointProfile: https_web
        managedRoute: "true"
    • The federatesWith field lists all remote trust domains this cluster should federate with.
  20. Apply the updated configuration by running the following command:

    $ oc apply -f spireserver.yaml
  21. Repeat steps 1-20 on each cluster that participates in the federation, ensuring that:

    • Each cluster has cert-manager installed and configured
    • Each cluster has its own certificate created and ready before applying the SpireServer configuration
    • Each cluster has the RBAC for the ingress router configured
    • Each cluster has ClusterFederatedTrustDomain resources for every other cluster it federates with
    • Each cluster’s SpireServer has the complete federatesWith list

Verification

  1. Verify that the certificate has been issued successfully by running the following command:

    $ oc get certificate spire-server-federation-tls \
        -n zero-trust-workload-identity-manager

    Example output

    NAME                            READY   SECRET                          AGE
    spire-server-federation-tls     True    spire-server-federation-tls     5m

  2. Check the certificate details and expiration by running the following command:

    $ oc get secret spire-server-federation-tls \
        -n zero-trust-workload-identity-manager \
        -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates

    Example output

    notBefore=Dec 16 10:00:00 2025 GMT
    notAfter=Mar 16 10:00:00 2026 GMT
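Beyond reading the dates, you can test mechanically whether the certificate is still outside its renewal window with openssl x509 -checkend, which exits non-zero when the certificate expires within the given number of seconds. A sketch (the helper name is hypothetical; 1296000 seconds equals the 15-day renewBefore used above):

```shell
# Hypothetical helper: succeed only if the federation certificate remains
# valid for at least the given number of seconds (default: 15 days).
check_renew_window() {
  local seconds="${1:-1296000}"
  oc get secret spire-server-federation-tls \
      -n zero-trust-workload-identity-manager \
      -o jsonpath='{.data.tls\.crt}' | base64 -d \
    | openssl x509 -noout -checkend "${seconds}"
}

# Example:
#   check_renew_window && echo "certificate valid for at least 15 more days"
```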

  3. Verify that the RBAC permissions are configured correctly by running the following command:

    $ oc get role,rolebinding -n zero-trust-workload-identity-manager \
        | grep secret-reader

    Example output

    role.rbac.authorization.k8s.io/secret-reader
    rolebinding.rbac.authorization.k8s.io/secret-reader-binding

    Verify the RoleBinding references the correct ServiceAccount by running the following command:

    $ oc describe rolebinding secret-reader-binding \
        -n zero-trust-workload-identity-manager

    Example output

    Name:         secret-reader-binding
    Namespace:    zero-trust-workload-identity-manager
    Role:
      Kind:  Role
      Name:  secret-reader
    Subjects:
      Kind            Name    Namespace
      ----            ----    ---------
      ServiceAccount  router  openshift-ingress

  4. Verify that the ClusterFederatedTrustDomain resources have been created by running the following command:

    $ oc get clusterfederatedtrustdomains

    Example output

    NAME                  TRUST DOMAIN           ENDPOINT URL                                      AGE
    cluster1-federation   cluster1.example.com   https://federation.apps.cluster1.example.com     5m
    cluster2-federation   cluster2.example.com   https://federation.apps.cluster2.example.com     5m

  5. Check the status of a ClusterFederatedTrustDomain to ensure bundle synchronization is working by running the following command:

    $ oc describe clusterfederatedtrustdomain cluster1-federation

    Look for successful status conditions indicating that the trust bundle has been synchronized.

  6. Verify that the federation endpoint is accessible and using the correct certificate by running the following command:

    $ curl -v https://$(oc get route spire-server-federation \
        -n zero-trust-workload-identity-manager \
        -o jsonpath='{.spec.host}')

    In the output, verify that the certificate presented is issued by your configured CA (Let’s Encrypt or internal CA).

  7. Check the SPIRE Server logs by running the following command, and confirm that:

    • Federation is active with remote trust domains
    • Trust bundles are being synchronized
    • The bundle endpoint is serving correctly

      $ oc logs -n zero-trust-workload-identity-manager \
          statefulset/spire-server -c spire-server --tail=100

      Look for log messages indicating successful federation bundle synchronization.

  8. Verify that all SPIRE components are running by running the following command:

    $ oc get pods -n zero-trust-workload-identity-manager

    Example output

    NAME                                    READY   STATUS    RESTARTS   AGE
    spire-agent-abc123                      1/1     Running   0          10m
    spire-server-0                          2/2     Running   0          10m

  9. Optional: Test cross-cluster workload authentication by deploying workloads with SPIFFE identities on different clusters and verifying they can authenticate to each other using the federated trust.

10.8.6. Federation configuration field reference

This reference provides detailed information about all configuration fields available for SPIRE federation in the SpireServer custom resource. Use this reference when customizing your federation setup.

Top-level federation fields

  • federation.bundleEndpoint (object, required): Configuration for this cluster’s federation endpoint that exposes the trust bundle to remote clusters.
  • federation.federatesWith (array, optional, default []): List of remote trust domains to federate with.
  • federation.managedRoute (string, optional, default "true"): Enable or disable automatic OpenShift Route creation. Set to "true" for operator-managed routes or "false" for manual route management.

bundleEndpoint configuration fields

  • federation.bundleEndpoint.profile (string enum, required, default https_spiffe): Authentication profile for the bundle endpoint. Valid values: https_spiffe or https_web. This value is immutable after initial configuration.
  • federation.bundleEndpoint.refreshHint (integer, optional, default 300): Suggested interval (in seconds) for remote servers to refresh the trust bundle. Valid range: 60-3600.
  • federation.bundleEndpoint.httpsWeb (object, conditional): Required when profile is https_web. Contains certificate configuration.

httpsWeb configuration fields

  • federation.bundleEndpoint.httpsWeb.acme (object, conditional): ACME configuration for automatic certificate management. Mutually exclusive with servingCert.
  • federation.bundleEndpoint.httpsWeb.servingCert (object, conditional): Manual certificate configuration. Mutually exclusive with acme.

ACME configuration fields

  • federation.bundleEndpoint.httpsWeb.acme.directoryUrl (string, required): ACME directory URL. For Let’s Encrypt production: https://acme-v02.api.letsencrypt.org/directory. For staging: https://acme-staging-v02.api.letsencrypt.org/directory.
  • federation.bundleEndpoint.httpsWeb.acme.domainName (string, required): Fully qualified domain name for the certificate. Typically the federation endpoint hostname.
  • federation.bundleEndpoint.httpsWeb.acme.email (string, required): Email address for ACME account registration and certificate expiration notifications.
  • federation.bundleEndpoint.httpsWeb.acme.tosAccepted (string, optional, default "false"): Accept the ACME provider’s Terms of Service. Must be "true" to obtain certificates.

servingCert configuration fields

  • federation.bundleEndpoint.httpsWeb.servingCert.fileSyncInterval (integer, optional, default 86400): Interval (in seconds) to check for certificate updates. Valid range: 3600-7776000 (1 hour to 90 days).
  • federation.bundleEndpoint.httpsWeb.servingCert.externalSecretRef (string, required): Name of the Kubernetes Secret containing the TLS certificate (tls.crt) and private key (tls.key) for the federation route.

federatesWith configuration fields

  • federation.federatesWith[].trustDomain (string, required): Trust domain name of the remote SPIRE deployment (for example, cluster2.example.com).
  • federation.federatesWith[].bundleEndpointUrl (string, required): HTTPS URL of the remote federation endpoint (for example, https://federation.apps.cluster2.example.com).
  • federation.federatesWith[].bundleEndpointProfile (string enum, required): Authentication profile of the remote endpoint. Valid values: https_spiffe or https_web.
  • federation.federatesWith[].endpointSpiffeId (string, conditional): SPIFFE ID of the remote SPIRE server (for example, spiffe://cluster2.example.com/spire/server). Required when bundleEndpointProfile is https_spiffe.

Field validation rules

The following validation rules are enforced by the operator:

  • Profile immutability: The bundleEndpoint.profile field cannot be changed after initial configuration. To change it, you must delete and recreate the SpireServer resource, which reinstalls the system.
  • Mutual exclusivity: Within httpsWeb, only one of acme or servingCert can be specified.
  • Conditional requirements: When profile is https_web, the httpsWeb object must be present with either acme or servingCert configured.
  • SPIFFE ID requirement: When bundleEndpointProfile is https_spiffe in the federatesWith list, the endpointSpiffeId field is required.
  • Array limits: The federatesWith array supports a maximum of 50 entries.
  • Numeric ranges:

    • refreshHint: 60-3600 seconds
    • fileSyncInterval: 3600-7776000 seconds
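As an illustration, the field reference above maps onto a SpireServer resource roughly as follows. This is a sketch, not a complete resource: the apiVersion is an assumption to check against your installed CRD, the federation block is assumed to sit under spec, and other required SpireServer fields are omitted.

```yaml
apiVersion: operator.openshift.io/v1alpha1  # assumed API version; verify against your installed CRD
kind: SpireServer
metadata:
  name: cluster
spec:
  # ... other required SpireServer fields (for example, the trust domain) omitted
  federation:
    managedRoute: "true"             # operator-managed OpenShift Route
    bundleEndpoint:
      profile: https_spiffe          # immutable after initial configuration
      refreshHint: 300               # seconds; valid range 60-3600
    federatesWith:
      - trustDomain: cluster2.example.com
        bundleEndpointUrl: https://federation.apps.cluster2.example.com
        bundleEndpointProfile: https_spiffe
        endpointSpiffeId: spiffe://cluster2.example.com/spire/server  # required for https_spiffe
```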

10.9. Create-only mode

By enabling create-only mode, you can pause Operator reconciliation, which allows you to perform manual configuration or debugging without the controller overwriting your changes. You enable the mode by annotating the API resources that the Operator manages. The following scenarios are examples of when create-only mode might be useful:

  • Manual customization required: You need to customize operator-managed resources (ConfigMaps, Deployments, DaemonSets, and so on) with configurations that differ from the Operator’s defaults.

  • Day 2 operations: After the initial deployment, you want to prevent the Operator from overwriting your manual changes during subsequent reconciliation cycles.

  • Configuration drift prevention: You want to maintain control over certain resource configurations while still benefiting from the Operator’s lifecycle management.

10.9.1. Pausing Operator reconciliation by annotation

You can pause the reconciliation of the SpireServer, SpireAgent, SpiffeCSIDriver, SpireOIDCDiscoveryProvider, and ZeroTrustWorkloadIdentityManager custom resources by adding an annotation to the resource.

Prerequisites

  • You have installed the Zero Trust Workload Identity Manager in your cluster.
  • You have deployed the SPIRE Server, SPIRE Agent, SPIFFE Container Storage Interface (CSI) driver, and OpenID Connect (OIDC) Discovery Provider operands, and they are in a running state.

Procedure

  • To pause reconciliation of the SpireServer custom resource, add the create-only annotation to the resource named cluster by running the following command:

    $ oc annotate SpireServer cluster -n zero-trust-workload-identity-manager ztwim.openshift.io/create-only=true

Verification

  • Check the status of the SpireServer resource to confirm that the create-only mode is active. The status must be "True" and the reason must be CreateOnlyModeEnabled.

    $ oc get SpireServer cluster -o yaml

Example output

status:
  conditions:
  - lastTransitionTime: "2025-09-03T12:13:39Z"
    message: Create-only mode is enabled via ztwim.openshift.io/create-only annotation
    reason: CreateOnlyModeEnabled
    status: "True"
    type: CreateOnlyMode

10.9.2. Resuming Operator reconciliation by annotation

Procedure

Follow these steps to restart the reconciliation process:

  1. Run the oc annotate command, adding a hyphen (-) at the end of the annotation name. This removes the annotation from the cluster resource.

    $ oc annotate SpireServer cluster -n zero-trust-workload-identity-manager ztwim.openshift.io/create-only-
  2. Restart the controller by running the following command:

    $ oc rollout restart deploy/zero-trust-workload-identity-manager-controller-manager -n zero-trust-workload-identity-manager

Verification

  • Check the status of the SpireServer resource to confirm that the create-only mode is disabled. The status must be "False" and the reason must be CreateOnlyModeDisabled.

    $ oc get SpireServer cluster -o yaml

Example output

status:
  conditions:
  - lastTransitionTime: "2025-09-03T12:13:39Z"
    message: Create-only mode is disabled
    reason: CreateOnlyModeDisabled
    status: "False"
    type: CreateOnlyMode

After create-only mode is enabled, it persists until the Operator pod restarts, even if the annotation is removed. To exit this mode, remove the annotation and then restart the Operator pod.
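To script the verification steps above, you can filter the CreateOnlyMode condition out of the resource status. The following is a minimal sketch; the helper name is hypothetical, and it assumes the condition layout shown in the example output:

```shell
# Hypothetical helper: read `oc get SpireServer cluster -o yaml` output on
# stdin and print the reason of the CreateOnlyMode condition.
create_only_reason() {
  awk '/reason:/ {r=$2} /type: CreateOnlyMode/ {print r}'
}

# Usage against a live cluster:
#   oc get SpireServer cluster -o yaml | create_only_reason
```

The awk script records the most recently seen reason field and prints it when it reaches the matching condition type, so it tolerates other conditions in the list.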

10.10. Monitoring Zero Trust Workload Identity Manager

By default, the SPIRE Server and SPIRE Agent components of the Zero Trust Workload Identity Manager emit metrics in the Prometheus format. You can configure OpenShift Container Platform monitoring to collect these metrics by using the Prometheus Operator.

10.10.1. Enabling user workload monitoring

You can enable monitoring for user-defined projects by configuring user workload monitoring in the cluster.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure

  1. Create the cluster-monitoring-config.yaml file to define and configure the ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
  2. Apply the ConfigMap by running the following command:

    $ oc apply -f cluster-monitoring-config.yaml

Verification

  • Verify that the monitoring components for user workloads are running in the openshift-user-workload-monitoring namespace:

    $ oc -n openshift-user-workload-monitoring get pod

    Example output

    NAME                                   READY   STATUS    RESTARTS   AGE
    prometheus-operator-6cb6bd9588-dtzxq   2/2     Running   0          50s
    prometheus-user-workload-0             6/6     Running   0          48s
    prometheus-user-workload-1             6/6     Running   0          48s
    thanos-ruler-user-workload-0           4/4     Running   0          42s
    thanos-ruler-user-workload-1           4/4     Running   0          42s

The status of the pods such as prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload must be Running.
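The Running check above can be scripted. The following sketch (the helper name is hypothetical) reads `oc get pod --no-headers` output and fails if any pod reports another state:

```shell
# Hypothetical check: read `oc get pod --no-headers` output on stdin and
# exit non-zero if any pod reports a STATUS other than Running.
all_running() {
  awk '$3 != "Running" {bad++} END {exit bad > 0}'
}

# Usage against a live cluster:
#   oc -n openshift-user-workload-monitoring get pod --no-headers | all_running \
#     && echo "user workload monitoring is ready"
```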

10.10.2. Configuring metrics collection for the SPIRE Server

The SPIRE Server operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Server by creating a ServiceMonitor custom resource (CR) that enables the Prometheus Operator to scrape this endpoint, which helps you monitor your SPIRE deployment.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Server operand in the cluster.
  • You have enabled the user workload monitoring.

Procedure

  1. Create the ServiceMonitor CR:

    1. Create the YAML file that defines the ServiceMonitor CR:

      Example servicemonitor-spire-server.yaml file

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          app.kubernetes.io/name: server
          app.kubernetes.io/instance: spire
        name: spire-server-metrics
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
        selector:
          matchLabels:
            app.kubernetes.io/name: server
            app.kubernetes.io/instance: spire
        namespaceSelector:
          matchNames:
          - zero-trust-workload-identity-manager

    2. Create the ServiceMonitor CR by running the following command:

      $ oc create -f servicemonitor-spire-server.yaml

      After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Server. The collected metrics are labeled with job="spire-server".

Verification

  1. In the OpenShift Container Platform web console, navigate to Observe → Targets.
  2. In the Label filter field, enter the following label to filter the metrics targets:

    service=zero-trust-workload-identity-manager-metrics-service
  3. Confirm that the Status column shows Up for the spire-server-metrics entry.

10.10.3. Configuring metrics collection for the SPIRE Agent

The SPIRE Agent operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Agent by creating a ServiceMonitor custom resource (CR), which enables the Prometheus Operator to collect custom metrics.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Agent operand in the cluster.
  • You have enabled the user workload monitoring.

Procedure

  1. Create the ServiceMonitor CR:

    1. Create the YAML file that defines the ServiceMonitor CR:

      Example servicemonitor-spire-agent.yaml file

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          app.kubernetes.io/name: agent
          app.kubernetes.io/instance: spire
        name: spire-agent-metrics
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
        selector:
          matchLabels:
            app.kubernetes.io/name: agent
            app.kubernetes.io/instance: spire
        namespaceSelector:
          matchNames:
          - zero-trust-workload-identity-manager

    2. Create the ServiceMonitor CR by running the following command:

      $ oc create -f servicemonitor-spire-agent.yaml

      After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Agent. The collected metrics are labeled with job="spire-agent".

Verification

  1. In the OpenShift Container Platform web console, navigate to Observe → Targets.
  2. In the Label filter field, enter the following label to filter the metrics targets:

    service=spire-agent
  3. Confirm that the Status column shows Up for the spire-agent-metrics entry.

10.10.4. Configuring metrics collection for the Zero Trust Workload Identity Manager

The Zero Trust Workload Identity Manager exposes metrics by default on port 8443 at the /metrics service endpoint. You can configure metrics collection for the Operator by creating a ServiceMonitor custom resource (CR) that enables the Prometheus Operator to collect custom metrics. For more information, see "Enabling user workload monitoring".

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have enabled the user workload monitoring.

Procedure

  1. Configure the Operator to use HTTP or HTTPS protocols for the metrics server.

    1. Update the subscription object for Zero Trust Workload Identity Manager to configure the HTTP protocol by running the following command:

      $ oc -n zero-trust-workload-identity-manager patch subscription zero-trust-workload-identity-manager-subscription --type='merge' -p '{"spec":{"config":{"env":[{"name":"METRICS_BIND_ADDRESS","value":":8080"}, {"name": "METRICS_SECURE", "value": "false"}]}}}'
    2. Verify that the Zero Trust Workload Identity Manager pod is redeployed and that the configured values for METRICS_BIND_ADDRESS and METRICS_SECURE are updated by running the following command:

      $ oc set env --list deployment/zero-trust-workload-identity-manager-controller-manager -n zero-trust-workload-identity-manager | grep -e METRICS_BIND_ADDRESS -e METRICS_SECURE -e container

      Example output

      deployments/zero-trust-workload-identity-manager-controller-manager, container manager
      METRICS_BIND_ADDRESS=:8080
      METRICS_SECURE=false

  2. Create the Secret resource with the kubernetes.io/service-account.name annotation to inject the token required for authenticating with the metrics server.

    1. Create the secret-zero-trust-workload-identity-manager.yaml YAML file:

      apiVersion: v1
      kind: Secret
      metadata:
        labels:
          name: zero-trust-workload-identity-manager
        name: zero-trust-workload-identity-manager-metrics-auth
        namespace: zero-trust-workload-identity-manager
        annotations:
          kubernetes.io/service-account.name: zero-trust-workload-identity-manager-controller-manager
      type: kubernetes.io/service-account-token
    2. Create the Secret resource by running the following command:

      $ oc apply -f secret-zero-trust-workload-identity-manager.yaml
  3. Create the ClusterRoleBinding resource required for granting permissions to access the metrics.

    1. Create the clusterrolebinding-zero-trust-workload-identity-manager.yaml YAML file:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        labels:
          name: zero-trust-workload-identity-manager
        name: zero-trust-workload-identity-manager-allow-metrics-access
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: zero-trust-workload-identity-manager-metrics-reader
      subjects:
      - kind: ServiceAccount
        name: zero-trust-workload-identity-manager-controller-manager
        namespace: zero-trust-workload-identity-manager
    2. Create the ClusterRoleBinding resource by running the following command:

      $ oc apply -f clusterrolebinding-zero-trust-workload-identity-manager.yaml
  4. Create the following ServiceMonitor CR if the metrics server is configured to use HTTP.

    1. Create the servicemonitor-zero-trust-workload-identity-manager-http.yaml YAML file:

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          name: zero-trust-workload-identity-manager
        name: zero-trust-workload-identity-manager-metrics-monitor
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
          - authorization:
              credentials:
                name: zero-trust-workload-identity-manager-metrics-auth
                key: token
              type: Bearer
            interval: 60s
            path: /metrics
            port: metrics-http
            scheme: http
            scrapeTimeout: 30s
        namespaceSelector:
          matchNames:
            - zero-trust-workload-identity-manager
        selector:
          matchLabels:
            name: zero-trust-workload-identity-manager
    2. Create the ServiceMonitor CR by running the following command:

      $ oc apply -f servicemonitor-zero-trust-workload-identity-manager-http.yaml
  5. Create the following ServiceMonitor CR if the metrics server is configured to use HTTPS.

    1. Create the servicemonitor-zero-trust-workload-identity-manager-https.yaml YAML file:

      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          name: zero-trust-workload-identity-manager
        name: zero-trust-workload-identity-manager-metrics-monitor
        namespace: zero-trust-workload-identity-manager
      spec:
        endpoints:
          - authorization:
              credentials:
                name: zero-trust-workload-identity-manager-metrics-auth
                key: token
              type: Bearer
            interval: 60s
            path: /metrics
            port: metrics-https
            scheme: https
            scrapeTimeout: 30s
            tlsConfig:
              ca:
                configMap:
                  name: openshift-service-ca.crt
                  key: service-ca.crt
              serverName: zero-trust-workload-identity-manager-metrics-service.zero-trust-workload-identity-manager.svc.cluster.local
        namespaceSelector:
          matchNames:
            - zero-trust-workload-identity-manager
        selector:
          matchLabels:
            name: zero-trust-workload-identity-manager
    2. Create the ServiceMonitor CR by running the following command:

      $ oc apply -f servicemonitor-zero-trust-workload-identity-manager-https.yaml

      After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the Zero Trust Workload Identity Manager. The collected metrics are labeled with job="zero-trust-workload-identity-manager-metrics-service".

Verification

  1. In the OpenShift Container Platform web console, navigate to Observe → Targets.
  2. In the Label filter field, enter the following label to filter the metrics targets:

    service=zero-trust-workload-identity-manager-metrics-service
  3. Confirm that the Status column shows Up for the zero-trust-workload-identity-manager entry.

10.10.5. Querying metrics for the Zero Trust Workload Identity Manager

As a cluster administrator, or as a user with view access to all namespaces, you can query SPIRE Agent and SPIRE Server metrics by using the OpenShift Container Platform web console or the command line. The query retrieves all the metrics collected from the SPIRE components that match the specified job labels.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the Zero Trust Workload Identity Manager.
  • You have deployed the SPIRE Server and SPIRE Agent operands in the cluster.
  • You have enabled monitoring and metrics collection by creating ServiceMonitor objects.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Observe → Metrics.
  2. In the query field, enter the following PromQL expression to query SPIRE Server metrics:

    {job="spire-server"}
  3. In the query field, enter the following PromQL expression to query SPIRE Agent metrics.

    {job="spire-agent"}

10.10.6. Zero Trust Workload Identity Manager metrics

Monitor the health and performance of Zero Trust Workload Identity Manager components by reviewing exposed metrics. This reference describes controller, certificate, and runtime metrics that help you maintain system health and troubleshoot errors.

The Zero Trust Workload Identity Manager exposes the following metrics:

Controller runtime metrics
  • controller_runtime_active_workers: Number of currently used workers per controller
  • controller_runtime_max_concurrent_reconciles: Maximum number of concurrent reconciles per controller
  • controller_runtime_reconcile_errors_total: Total number of reconciliation errors per controller
  • controller_runtime_reconcile_time_seconds: Length of time per reconciliation per controller
  • controller_runtime_reconcile_total: Total number of reconciliations per controller
Certificate watcher metrics
  • certwatcher_read_certificate_errors_total: Total number of certificate read errors
  • certwatcher_read_certificate_total: Total number of certificates read
Go runtime metrics

Standard Go runtime metrics including:

  • go_gc_duration_seconds: Garbage collection duration
  • go_goroutines: Number of goroutines
  • go_memstats_*: Memory statistics
  • process_*: Process statistics
Custom Operator metrics

The operator also exposes custom metrics related to:

  • SPIRE Server status and health
  • SPIRE Agent deployment status
  • SPIFFE CSI Driver status
  • OIDC Discovery Provider status
  • Workload identity management operations
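As an illustration, the controller runtime counters above can be combined into PromQL expressions. The following sketch assumes the job label shown earlier for the Operator metrics and graphs the per-controller reconciliation error rate:

```
sum by (controller) (rate(controller_runtime_reconcile_errors_total{job="zero-trust-workload-identity-manager-metrics-service"}[5m]))
```

A sustained non-zero rate indicates failing reconciliations and warrants inspecting the Operator logs.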

10.11. Uninstalling the Zero Trust Workload Identity Manager

You can remove the Zero Trust Workload Identity Manager from OpenShift Container Platform by uninstalling the Operator and removing its related resources.

10.11.1. Uninstalling the Zero Trust Workload Identity Manager

You can uninstall the Zero Trust Workload Identity Manager by using the web console.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.
  • You have access to the OpenShift Container Platform web console.
  • The Zero Trust Workload Identity Manager is installed.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Uninstall the Zero Trust Workload Identity Manager.

    1. Go to Ecosystem Installed Operators.
    2. Click the Options menu next to the Zero Trust Workload Identity Manager entry, and then click Uninstall Operator.
    3. In the confirmation dialog, click Uninstall.

10.11.2. Removing Zero Trust Workload Identity Manager resources

After you have uninstalled the Zero Trust Workload Identity Manager, you can optionally delete its associated resources from your cluster.

Prerequisites

  • You have access to the cluster with cluster-admin privileges.

Procedure

  1. Uninstall the operands by running each of the following commands:

    1. Delete the SpireOIDCDiscoveryProvider cluster by running the following command:

      $ oc delete SpireOIDCDiscoveryProvider cluster
    2. Delete the SpiffeCSIDriver cluster by running the following command:

      $ oc delete SpiffeCSIDriver cluster
    3. Delete the SpireAgent cluster by running the following command:

      $ oc delete SpireAgent cluster
    4. Delete the SpireServer cluster by running the following command:

      $ oc delete SpireServer cluster
    5. Delete the ZeroTrustWorkloadIdentityManager cluster by running the following command:

      $ oc delete ZeroTrustWorkloadIdentityManager cluster
    6. Delete the persistent volume claim (PVC) by running the following command:

      $ oc delete pvc -l=app.kubernetes.io/name=spire-server
    7. Delete the service by running the following command:

      $ oc delete service -l=app.kubernetes.io/name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager
    8. Delete the namespace by running the following command:

      $ oc delete ns zero-trust-workload-identity-manager
    9. Delete the cluster role by running the following command:

      $ oc delete clusterrole -l=app.kubernetes.io/name=zero-trust-workload-identity-manager
    10. Delete the admission webhook configuration by running the following command:

      $ oc delete validatingwebhookconfigurations -l=app.kubernetes.io/name=zero-trust-workload-identity-manager
  2. Delete the custom resource definitions (CRDs) by running each of the following commands:

    1. Delete the SPIRE Server CRD by running the following command:

      $ oc delete crd spireservers.operator.openshift.io
    2. Delete the SPIRE Agent CRD by running the following command:

      $ oc delete crd spireagents.operator.openshift.io
    3. Delete the SPIFFE CSI driver CRD by running the following command:

      $ oc delete crd spiffecsidrivers.operator.openshift.io
    4. Delete the SPIRE OIDC Discovery Provider CRD by running the following command:

      $ oc delete crd spireoidcdiscoveryproviders.operator.openshift.io
    5. Delete the SPIRE and SPIFFE cluster federated trust domains CRD by running the following command:

      $ oc delete crd clusterfederatedtrustdomains.spire.spiffe.io
    6. Delete the cluster SPIFFE IDs CRD by running the following command:

      $ oc delete crd clusterspiffeids.spire.spiffe.io
    7. Delete the SPIRE and SPIFFE cluster static entries CRD by running the following command:

      $ oc delete crd clusterstaticentries.spire.spiffe.io
    8. Delete the Zero Trust Workload Identity Manager CRD by running the following command:

      $ oc delete crd zerotrustworkloadidentitymanagers.operator.openshift.io

Verification

To verify that the resources have been deleted, replace each oc delete command with oc get, and then run the command. If no resources are returned, the deletion was successful.
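For example, the CRD checks can be swept in one small shell helper. This is a sketch: the function name is hypothetical, and it relies only on the fact that oc get exits non-zero when the named resource does not exist.

```shell
# Hypothetical sketch: print any operator CRDs that still exist after
# uninstallation. GETTER is a command that exits non-zero when the named
# CRD is absent (normally: "oc get crd").
leftover_crds() {
  getter="$1"
  for crd in \
      spireservers.operator.openshift.io \
      spireagents.operator.openshift.io \
      spiffecsidrivers.operator.openshift.io \
      spireoidcdiscoveryproviders.operator.openshift.io \
      zerotrustworkloadidentitymanagers.operator.openshift.io; do
    if $getter "$crd" >/dev/null 2>&1; then
      echo "$crd"
    fi
  done
}

# Usage against a live cluster:
#   leftover_crds "oc get crd"   # no output means the CRDs are gone
```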
