Chapter 10. Zero Trust Workload Identity Manager
10.1. Zero Trust Workload Identity Manager overview
Zero Trust Workload Identity Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Zero Trust Workload Identity Manager leverages Secure Production Identity Framework for Everyone (SPIFFE) and the SPIFFE Runtime Environment (SPIRE) to provide a comprehensive identity management solution for distributed systems. SPIFFE and SPIRE provide a standardized approach to workload identity, allowing workloads to communicate with other services whether on the same cluster, or in another environment.
Zero Trust Workload Identity Manager replaces long-lived, manually managed secrets with cryptographically verifiable identities. It provides strong authentication ensuring workloads that are communicating with each other are who they claim to be. SPIRE automates the issuing, rotating, and revoking of a SPIFFE Verifiable Identity Document (SVID), reducing the workload of developers and administrators managing secrets.
SPIFFE can work across diverse infrastructures, including on-premise, cloud, and hybrid environments. SPIFFE identities are cryptographically verifiable, providing a basis for auditing and compliance.
The following are components of the Zero Trust Workload Identity Manager architecture:
10.1.1. SPIFFE
Secure Production Identity Framework for Everyone (SPIFFE) provides a standardized way to establish trust between software workloads in distributed systems. SPIFFE assigns unique IDs called SPIFFE IDs. These IDs are Uniform Resource Identifiers (URI) that include a trust domain and a workload identifier.
The SPIFFE IDs are contained in the SPIFFE Verifiable Identity Document (SVID). SVIDs are used by workloads to verify their identity to other workloads so that the workloads can communicate with each other. The two main SVID formats are:
- X.509-SVIDs: X.509 certificates where the SPIFFE ID is embedded in the Subject Alternative Name (SAN) field.
- JWT-SVIDs: JSON Web Tokens (JWTs) where the SPIFFE ID is included as the sub claim.
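The JWT-SVID layout can be illustrated with a short Python sketch. The token built here is an unsigned stand-in (real JWT-SVIDs are signed by the SPIRE server), and the trust domain and workload path are placeholder values; only the spiffe:// URI shape and the sub claim placement follow the SPIFFE specification.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def spiffe_id_from_jwt(token: str) -> tuple[str, str]:
    """Return (trust_domain, workload_path) from a JWT-SVID's sub claim."""
    payload = json.loads(b64url_decode(token.split(".")[1]))
    spiffe_id = payload["sub"]              # e.g. spiffe://example.org/workload
    rest = spiffe_id.removeprefix("spiffe://")
    trust_domain, _, path = rest.partition("/")
    return trust_domain, "/" + path

# Build a sample unsigned token just to show the claim layout;
# a real JWT-SVID carries a signature from the SPIRE server.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(json.dumps(
    {"sub": "spiffe://example.org/ns/default/sa/frontend", "aud": ["backend"]}
).encode()).rstrip(b"=").decode()
token = f"{header}.{claims}."

print(spiffe_id_from_jwt(token))  # ('example.org', '/ns/default/sa/frontend')
```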
For more information, see SPIFFE Overview.
10.1.2. SPIRE server
A SPIRE server is responsible for managing and issuing SPIFFE identities within a trust domain. It stores registration entries (selectors that determine under what conditions a SPIFFE ID should be issued) and signing keys. The SPIRE server works in conjunction with the SPIRE agent to perform node attestation via node plugins. For more information, see About the SPIRE server.
10.1.3. SPIRE agent
The SPIRE Agent is responsible for workload attestation, ensuring that workloads receive a verified identity when requesting authentication through the SPIFFE Workload API. It accomplishes this by using configured workload attestor plugins. In Kubernetes environments, the Kubernetes workload attestor plugin is used.
The SPIRE server and the SPIRE agent perform node attestation via node plugins. The plugins are used to verify the identity of the node on which the agent is running. For more information, see About the SPIRE Agent.
10.1.4. Attestation
Attestation is the process by which the identities of nodes and workloads are verified before SPIFFE IDs and SVIDs are issued. The SPIRE server gathers attributes of both the workload and the node that the SPIRE agent runs on, and then compares them to a set of selectors defined when the workload was registered. If the comparison is successful, the entities are provided with credentials. This ensures that only legitimate and expected entities within the trust domain receive cryptographic identities. The two main types of attestation in SPIFFE/SPIRE are:
- Node attestation: verifies the identity of a machine or a node on a system, before a SPIRE agent running on that node can be trusted to request identities for workloads.
- Workload attestation: verifies the identity of an application or service running on an attested node before the SPIRE agent on that node can provide it with a SPIFFE ID and SVID.
For more information, see Attestation.
10.1.4.1. Zero Trust Workload Identity Manager workflow
The following is a high-level workflow of the Zero Trust Workload Identity Manager within the Red Hat OpenShift cluster.
- The SPIRE server, SPIRE agent, SPIFFE CSI Driver, and SPIRE OIDC Discovery Provider operands are deployed and managed by Zero Trust Workload Identity Manager via the associated custom resource definitions (CRDs).
- Watches are then registered for relevant Kubernetes resources and the necessary SPIRE CRDs are applied to the cluster.
- The CR for the ZeroTrustWorkloadIdentityManager resource named cluster is deployed and managed by a controller. To deploy the SPIRE server, SPIRE agent, SPIFFE CSI Driver, and SPIRE OIDC Discovery Provider, you must create a custom resource of each corresponding type and name it cluster. The custom resource types are as follows:
  - SPIRE server - SpireServer
  - SPIRE agent - SpireAgent
  - SPIFFE CSI Driver - SpiffeCSIDriver
  - SPIRE OIDC discovery provider - SpireOIDCDiscoveryProvider
- When a node starts, the SPIRE agent initializes, and connects to the SPIRE server.
- The agent begins the node attestation process. The agent collects information on the node's identity, such as its label, name, and namespace. The agent securely provides the information it gathered through the attestation to the SPIRE server.
- The SPIRE server then evaluates this information against its configured attestation policies and registration entries. If successful, the server generates an agent SVID and the Trust Bundle (CA Certificate) and securely sends this back to the agent.
- A workload starts on the node and needs a secure identity. The workload connects to the agent’s Workload API and requests a SVID.
- The agent receives the request and begins a workload attestation to gather information about the workload.
- After the agent gathers the information, the information is sent to the SPIRE server and the server checks its configured registration entries.
- The agent receives the workload SVID and Trust Bundle and passes them on to the workload. The workload can now present its SVID to other SPIFFE-aware workloads to communicate with them.
10.2. Zero Trust Workload Identity Manager release notes
The Zero Trust Workload Identity Manager leverages Secure Production Identity Framework for Everyone (SPIFFE) and the SPIFFE Runtime Environment (SPIRE) to provide a comprehensive identity management solution for distributed systems.
These release notes track the development of Zero Trust Workload Identity Manager.
10.2.1. Zero Trust Workload Identity Manager 0.1.0 (Technology Preview)
Issued: 2025-06-16
The following advisories are available for the Zero Trust Workload Identity Manager:
This initial release of Zero Trust Workload Identity Manager is a Technology Preview. This version has the following known limitations:
- Support for SPIRE federation is not enabled.
- Key manager supports only the disk storage type.
- Telemetry is supported only through Prometheus.
- High availability (HA) configuration for SPIRE servers or the OpenID Connect (OIDC) Discovery provider is not supported.
- External datastore is not supported. This version uses the internal sqlite datastore deployed by SPIRE.
- This version operates using a fixed configuration. User-defined configurations are not allowed.
- The log level of operands is not configurable. The default value is DEBUG.
10.3. Zero Trust Workload Identity Manager components and features
10.3.1. Zero Trust Workload Identity Manager components
The following components are available as part of the initial release of Zero Trust Workload Identity Manager.
10.3.1.1. SPIFFE CSI Driver
The SPIFFE Container Storage Interface (CSI) is a plugin that helps pods securely obtain their SPIFFE Verifiable Identity Document (SVID) by delivering the Workload API socket into the pod. The SPIFFE CSI driver is deployed as a daemonset on the cluster ensuring that a driver instance runs on each node. The driver uses the ephemeral inline volume capability of Kubernetes allowing pods to request volumes directly provided by the SPIFFE CSI driver. This simplifies their use by applications that need temporary storage.
When the pod starts, the Kubelet calls the SPIFFE CSI driver to provision and mount a volume into the pod’s containers. The SPIFFE CSI driver mounts a directory that contains the SPIFFE Workload API into the pod. Applications in the pod then communicate with the Workload API to obtain their SVIDs. The driver guarantees that each SVID is unique.
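As a sketch of how a workload pod might request the Workload API socket through an ephemeral inline CSI volume: the pod name, image, and mount path below are placeholders, and the csi.spiffe.io driver name and spire-agent.sock socket file follow upstream SPIFFE CSI driver defaults, so treat them as assumptions here.

```yaml
# Hypothetical pod spec fragment; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: spiffe-workload-api
      mountPath: /spiffe-workload-api
      readOnly: true
    env:
    - name: SPIFFE_ENDPOINT_SOCKET
      value: unix:///spiffe-workload-api/spire-agent.sock
  volumes:
  - name: spiffe-workload-api
    csi:                      # ephemeral inline volume provided by the driver
      driver: csi.spiffe.io
      readOnly: true
```

Applications in the container then reach the Workload API through the mounted socket to fetch their SVIDs.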
10.3.1.2. SPIRE OpenID Connect Discovery Provider
The SPIRE OpenID Connect Discovery Provider is a standalone component that makes SPIRE-issued JWT-SVIDs compatible with standard OpenID Connect (OIDC) consumers by exposing an OIDC discovery configuration endpoint and a JWKS URI for token verification. It is essential for integrating SPIRE-based workload identity with systems that require OIDC-compliant tokens, especially external APIs. While SPIRE primarily issues identities for workloads, additional workload-related claims can be embedded into JWT-SVIDs through the SPIRE configuration, which allows these claims to be included in the token and verified by OIDC-compliant clients.
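The discovery mechanism can be sketched briefly. The issuer below is a placeholder; only the /.well-known/openid-configuration path is fixed, by the OpenID Connect Discovery specification:

```python
# Sketch of how an OIDC-compliant client locates the provider's metadata.
# The issuer value is a placeholder.
def discovery_url(issuer: str) -> str:
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

url = discovery_url("https://oidc-discovery.example.org")
print(url)
# A verifier then reads keys from the jwks_uri advertised in that document
# and checks each JWT-SVID's signature and sub claim against them.
```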
10.3.1.3. SPIRE Controller Manager
The SPIRE Controller Manager uses custom resource definitions (CRDs) to facilitate the registration of workloads. To facilitate workload registration, the SPIRE Controller Manager registers controllers against pods and CRDs. When changes are detected on these resources, a workload reconciliation process is triggered. This process determines which SPIRE entries should exist based on the existing pods and CRDs. The reconciliation process creates, updates, and deletes entries on the SPIRE server as appropriate.
The SPIRE Controller Manager is designed to be deployed on the same pod as the SPIRE server. The manager communicates with the SPIRE server API using a private UNIX Domain Socket within a shared volume.
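The reconciliation described above can be sketched as a comparison of desired and actual entry sets. The entry keys and dictionary shapes are hypothetical, not the SPIRE Controller Manager's actual types:

```python
# Illustrative reconciliation sketch: compute create/update/delete sets by
# comparing desired entries (derived from pods and CRDs) with entries that
# currently exist on the SPIRE server.
def reconcile(desired: dict, actual: dict):
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_delete = [k for k in actual if k not in desired]
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    return to_create, to_update, to_delete

desired = {
    "frontend": {"spiffe_id": "spiffe://example.org/ns/default/sa/frontend"},
    "backend":  {"spiffe_id": "spiffe://example.org/ns/default/sa/backend"},
}
actual = {
    "frontend": {"spiffe_id": "spiffe://example.org/ns/default/sa/frontend"},
    "stale":    {"spiffe_id": "spiffe://example.org/ns/old/sa/stale"},
}
creates, updates, deletes = reconcile(desired, actual)
print(sorted(creates), sorted(updates), deletes)  # ['backend'] [] ['stale']
```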
10.3.2. Zero Trust Workload Identity Manager features
10.3.2.1. SPIRE server and agent telemetry
SPIRE server and agent telemetry provides insight into the health of the SPIRE deployment. The metrics are exposed in a format that the Prometheus Operator can collect. The exposed metrics help in understanding server health and lifecycle, SPIRE component performance, attestation and SVID issuance, and plugin statistics.
10.4. Installing the Zero Trust Workload Identity Manager
The Zero Trust Workload Identity Manager is not installed in OpenShift Container Platform by default. You can install the Zero Trust Workload Identity Manager by using either the web console or CLI.
10.4.1. Installing the Zero Trust Workload Identity Manager
10.4.1.1. Installing the Zero Trust Workload Identity Manager by using the web console
You can use the web console to install the Zero Trust Workload Identity Manager.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
Procedure
- Log in to the OpenShift Container Platform web console.
- Go to Operators → OperatorHub.
- Enter Zero Trust Workload Identity Manager into the filter box.
- Select the Zero Trust Workload Identity Manager.
- Select the Zero Trust Workload Identity Manager version from the Version drop-down list, and click Install.
On the Install Operator page:
- Update the Update channel, if necessary. The channel defaults to tech-preview-v0.1, which installs the latest Technology Preview v0.1 release of the Zero Trust Workload Identity Manager.
- Choose the Installed Namespace for the Operator. The default Operator namespace is zero-trust-workload-identity-manager. If the zero-trust-workload-identity-manager namespace does not exist, it is created for you.
- Select an Update approval strategy.
- The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual strategy requires a user with appropriate credentials to approve the Operator update.
- Click Install.
Verification
- Navigate to Operators → Installed Operators.
- Verify that Zero Trust Workload Identity Manager is listed with a Status of Succeeded in the zero-trust-workload-identity-manager namespace.
- Verify that the Zero Trust Workload Identity Manager controller manager deployment is ready and available by running the following command:

$ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

Example output

NAME                                                             READY   UP-TO-DATE   AVAILABLE   AGE
zero-trust-workload-identity-manager-controller-manager-6c4djb   1/1     1            1           43m
10.4.1.2. Installing the Zero Trust Workload Identity Manager by using the CLI
Prerequisites
- You have access to the cluster with cluster-admin privileges.
Procedure
Create a new project named zero-trust-workload-identity-manager by running the following command:

$ oc new-project zero-trust-workload-identity-manager

Create an OperatorGroup object:

Create a YAML file, for example, operatorGroup.yaml, with the following content:

Example operatorGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-zero-trust-workload-identity-manager
  namespace: zero-trust-workload-identity-manager
spec:
  upgradeStrategy: Default

Create the OperatorGroup object by running the following command:

$ oc create -f operatorGroup.yaml
Create a Subscription object:

Create a YAML file, for example, subscription.yaml, that defines the Subscription object:

Example subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-zero-trust-workload-identity-manager
  namespace: zero-trust-workload-identity-manager
spec:
  channel: tech-preview-v0.1
  name: openshift-zero-trust-workload-identity-manager
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic

Create the Subscription object by running the following command:

$ oc create -f subscription.yaml
Verification
Verify that the OLM subscription is created by running the following command:

$ oc get subscription -n zero-trust-workload-identity-manager

Example output

NAME                                             PACKAGE                                SOURCE             CHANNEL
openshift-zero-trust-workload-identity-manager   zero-trust-workload-identity-manager   redhat-operators   tech-preview-v0.1

Verify whether the Operator is successfully installed by running the following command:

$ oc get csv -n zero-trust-workload-identity-manager

Example output

NAME                                          DISPLAY                                VERSION   PHASE
zero-trust-workload-identity-manager.v0.1.0   Zero Trust Workload Identity Manager   0.1.0     Succeeded

Verify that the Zero Trust Workload Identity Manager controller manager is ready by running the following command:

$ oc get deployment -l name=zero-trust-workload-identity-manager -n zero-trust-workload-identity-manager

Example output

NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
zero-trust-workload-identity-manager-controller-manager   1/1     1            1           43m
10.5. Deploying Zero Trust Workload Identity Manager operands
You can deploy the following operands by creating the respective custom resources (CRs). You must deploy the operands in the following sequence to ensure successful installation.
- SPIRE Server
- SPIRE Agent
- SPIFFE CSI driver
- SPIRE OIDC discovery provider
10.5.1. Deploying the SPIRE server
You can configure the SpireServer custom resource (CR) to deploy and configure a SPIRE server.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed Zero Trust Workload Identity Manager in the cluster.
Procedure
Create the SpireServer CR:

Create a YAML file that defines the SpireServer CR, for example, SpireServer.yaml:

Example SpireServer.yaml

apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: <trust_domain> 1
  clusterName: <cluster_name> 2
  caSubject:
    commonName: example.org 3
    country: "US" 4
    organization: "RH" 5
  persistence:
    type: pvc 6
    size: "5Gi" 7
    accessMode: ReadWriteOnce 8
  datastore:
    databaseType: sqlite3
    connectionString: "/run/spire/data/datastore.sqlite3"
    maxOpenConns: 100 9
    maxIdleConns: 2 10
    connMaxLifetime: 3600 11
  jwtIssuer: <jwt_issuer_domain> 12

1 The trust domain to be used for the SPIFFE identifiers.
2 The name of your cluster.
3 The common name for the SPIRE server CA.
4 The country for the SPIRE server CA.
5 The organization for the SPIRE server CA.
6 The type of volume to be used for persistence. The valid options are pvc and hostPath.
7 The size of the volume to be used for persistence.
8 The access mode to be used for persistence. The valid options are ReadWriteOnce, ReadWriteOncePod, and ReadWriteMany.
9 The maximum number of open database connections.
10 The maximum number of idle connections in the pool.
11 The maximum amount of time a connection can be reused. To specify an unlimited time, set the value to 0.
12 The JSON Web Token (JWT) issuer domain. The default value is oidc-discovery.$trustDomain.

Apply the configuration by running the following command:

$ oc apply -f SpireServer.yaml
Verification
Verify that the stateful set of the SPIRE server is ready and available by running the following command:

$ oc get statefulset -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

Example output

NAME           READY   AGE
spire-server   1/1     65s

Verify that the status of the SPIRE server pod is Running by running the following command:

$ oc get po -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

Example output

NAME             READY   STATUS    RESTARTS       AGE
spire-server-0   2/2     Running   1 (108s ago)   111s

Verify that the persistent volume claim (PVC) is bound by running the following command:

$ oc get pvc -l app.kubernetes.io/name=server -n zero-trust-workload-identity-manager

Example output

NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTECLASS   AGE
spire-data-spire-server-0   Bound    pvc-27a36535-18a1-4fde-ab6d-e7ee7d3c2744   5Gi        RWO            gp3-csi        <unset>                22m
10.5.2. Deploying the SPIRE agent
You can configure the SpireAgent custom resource (CR) to deploy and configure a SPIRE agent.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed Zero Trust Workload Identity Manager in the cluster.
Procedure
Create the SpireAgent CR:

Create a YAML file that defines the SpireAgent CR, for example, SpireAgent.yaml:

Example SpireAgent.yaml

apiVersion: operator.openshift.io/v1alpha1
kind: SpireAgent
metadata:
  name: cluster
spec:
  trustDomain: <trust_domain> 1
  clusterName: <cluster_name> 2
  nodeAttestor:
    k8sPSATEnabled: "true" 3
  workloadAttestors:
    k8sEnabled: "true" 4
    workloadAttestorsVerification:
      type: "auto" 5

1 The trust domain to be used for the SPIFFE identifiers.
2 The name of your cluster.
3 Enable or disable the projected service account token (PSAT) Kubernetes node attestor. The valid options are true and false.
4 Enable or disable the Kubernetes workload attestor. The valid options are true and false.
5 The type of verification to be done against the kubelet. The valid options are auto, hostCert, apiServerCA, and skip. The auto option initially attempts to use hostCert, and then falls back to apiServerCA.

Apply the configuration by running the following command:

$ oc apply -f SpireAgent.yaml
Verification
Verify that the daemon set of the SPIRE agent is ready and available by running the following command:

$ oc get daemonset -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

Example output

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
spire-agent   3         3         3       3            3           <none>          10m

Verify that the status of the SPIRE agent pods is Running by running the following command:

$ oc get po -l app.kubernetes.io/name=agent -n zero-trust-workload-identity-manager

Example output

NAME                READY   STATUS    RESTARTS   AGE
spire-agent-dp4jb   1/1     Running   0          12m
spire-agent-nvwjm   1/1     Running   0          12m
spire-agent-vtvlk   1/1     Running   0          12m
10.5.3. Deploying the SPIFFE Container Storage Interface driver
You can configure the SpiffeCSIDriver custom resource (CR) to deploy and configure the SPIFFE Container Storage Interface (CSI) driver.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed Zero Trust Workload Identity Manager in the cluster.
Procedure
Create the SpiffeCSIDriver CR:

Create a YAML file that defines the SpiffeCSIDriver CR object, for example, SpiffeCSIDriver.yaml:

Example SpiffeCSIDriver.yaml

apiVersion: operator.openshift.io/v1alpha1
kind: SpiffeCSIDriver
metadata:
  name: cluster
spec:
  agentSocketPath: '/run/spire/agent-sockets/spire-agent.sock' 1

1 The UNIX socket path to the SPIRE agent.

Apply the configuration by running the following command:

$ oc apply -f SpiffeCSIDriver.yaml
Verification
Verify that the daemon set of the SPIFFE CSI driver is ready and available by running the following command:

$ oc get daemonset -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

Example output

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
spire-spiffe-csi-driver   3         3         3       3            3           <none>          114s

Verify that the status of the SPIFFE Container Storage Interface (CSI) driver pods is Running by running the following command:

$ oc get po -l app.kubernetes.io/name=spiffe-csi-driver -n zero-trust-workload-identity-manager

Example output

NAME                            READY   STATUS    RESTARTS   AGE
spire-spiffe-csi-driver-gpwcp   2/2     Running   0          2m37s
spire-spiffe-csi-driver-rrbrd   2/2     Running   0          2m37s
spire-spiffe-csi-driver-w6s6q   2/2     Running   0          2m37s
10.5.4. Deploying the SPIRE OpenID Connect Discovery Provider
You can configure the SpireOIDCDiscoveryProvider custom resource (CR) to deploy and configure the SPIRE OpenID Connect (OIDC) Discovery Provider.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed Zero Trust Workload Identity Manager in the cluster.
Procedure
Create the SpireOIDCDiscoveryProvider CR:

Create a YAML file that defines the SpireOIDCDiscoveryProvider CR, for example, SpireOIDCDiscoveryProvider.yaml:

Example SpireOIDCDiscoveryProvider.yaml

apiVersion: operator.openshift.io/v1alpha1
kind: SpireOIDCDiscoveryProvider
metadata:
  name: cluster
spec:
  trustDomain: <trust_domain> 1
  agentSocketName: 'spire-agent.sock' 2
  jwtIssuer: <jwt_issuer_domain> 3

1 The trust domain to be used for the SPIFFE identifiers.
2 The name of the SPIRE agent UNIX socket.
3 The JSON Web Token (JWT) issuer domain.

Apply the configuration by running the following command:

$ oc apply -f SpireOIDCDiscoveryProvider.yaml
Verification
Verify that the deployment of the OIDC Discovery Provider is ready and available by running the following command:

$ oc get deployment -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

Example output

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
spire-spiffe-oidc-discovery-provider   1/1     1            1           2m58s

Verify that the status of the OIDC Discovery Provider pods is Running by running the following command:

$ oc get po -l app.kubernetes.io/name=spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager

Example output

NAME                                                    READY   STATUS    RESTARTS   AGE
spire-spiffe-oidc-discovery-provider-64586d599f-lcc94   2/2     Running   0          7m15s
10.6. Monitoring Zero Trust Workload Identity Manager
By default, the SPIRE server and SPIRE agent components of the Zero Trust Workload Identity Manager emit metrics in the Prometheus format. You can configure OpenShift monitoring to collect these metrics.
10.6.1. Enabling user workload monitoring
You can enable monitoring for user-defined projects by configuring user workload monitoring in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
Procedure
Create the cluster-monitoring-config.yaml file to define and configure the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

Apply the ConfigMap by running the following command:

$ oc apply -f cluster-monitoring-config.yaml
Verification
- Verify that the monitoring components for user workloads are running in the openshift-user-workload-monitoring namespace:

    $ oc -n openshift-user-workload-monitoring get pod

  Example output

    NAME                                   READY   STATUS    RESTARTS   AGE
    prometheus-operator-6cb6bd9588-dtzxq   2/2     Running   0          50s
    prometheus-user-workload-0             6/6     Running   0          48s
    prometheus-user-workload-1             6/6     Running   0          48s
    thanos-ruler-user-workload-0           4/4     Running   0          42s
    thanos-ruler-user-workload-1           4/4     Running   0          42s
The status of the pods, such as prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload, must be Running.
10.6.2. Configuring metrics collection for SPIRE server by using a Service Monitor
The SPIRE Server operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Server by creating a ServiceMonitor custom resource (CR) that enables the Prometheus Operator to collect custom metrics.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the Zero Trust Workload Identity Manager.
- You have deployed the SPIRE Server operand in the cluster.
- You have enabled user workload monitoring.
Procedure
- Create a YAML file that defines the ServiceMonitor CR:

  Example servicemonitor-spire-server.yaml file

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        app.kubernetes.io/name: server
        app.kubernetes.io/instance: spire
      name: spire-server-metrics
      namespace: zero-trust-workload-identity-manager
    spec:
      endpoints:
      - port: metrics
        interval: 30s
        path: /metrics
      selector:
        matchLabels:
          app.kubernetes.io/name: server
          app.kubernetes.io/instance: spire
      namespaceSelector:
        matchNames:
        - zero-trust-workload-identity-manager

- Create the ServiceMonitor CR by running the following command:

    $ oc create -f servicemonitor-spire-server.yaml

After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Server. The collected metrics are labeled with job="spire-server".
Verification
- In the OpenShift Container Platform web console, navigate to Observe → Targets.
- In the Label filter field, enter the following label to filter the metrics targets:

    service=spire-server

- Confirm that the Status column shows Up for the spire-server-metrics entry.
10.6.3. Configuring metrics collection for SPIRE agent by using a Service Monitor
The SPIRE Agent operand exposes metrics by default on port 9402 at the /metrics endpoint. You can configure metrics collection for the SPIRE Agent by creating a ServiceMonitor custom resource (CR) that enables the Prometheus Operator to collect custom metrics.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the Zero Trust Workload Identity Manager.
- You have deployed the SPIRE Agent operand in the cluster.
- You have enabled user workload monitoring.
Procedure
- Create a YAML file that defines the ServiceMonitor CR:

  Example servicemonitor-spire-agent.yaml file

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        app.kubernetes.io/name: agent
        app.kubernetes.io/instance: spire
      name: spire-agent-metrics
      namespace: zero-trust-workload-identity-manager
    spec:
      endpoints:
      - port: metrics
        interval: 30s
        path: /metrics
      selector:
        matchLabels:
          app.kubernetes.io/name: agent
          app.kubernetes.io/instance: spire
      namespaceSelector:
        matchNames:
        - zero-trust-workload-identity-manager

- Create the ServiceMonitor CR by running the following command:

    $ oc create -f servicemonitor-spire-agent.yaml

After the ServiceMonitor CR is created, the user workload Prometheus instance begins metrics collection from the SPIRE Agent. The collected metrics are labeled with job="spire-agent".
Verification
- In the OpenShift Container Platform web console, navigate to Observe → Targets.
- In the Label filter field, enter the following label to filter the metrics targets:

    service=spire-agent

- Confirm that the Status column shows Up for the spire-agent-metrics entry.
10.6.4. Querying metrics for the Zero Trust Workload Identity Manager
As a cluster administrator, or as a user with view access to all namespaces, you can query SPIRE Agent and SPIRE Server metrics by using the OpenShift Container Platform web console or the command line. The query retrieves all the metrics collected from the SPIRE components that match the specified job labels.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the Zero Trust Workload Identity Manager.
- You have deployed the SPIRE Server and SPIRE Agent operands in the cluster.
- You have enabled monitoring and metrics collection by creating ServiceMonitor objects.
Procedure
- In the OpenShift Container Platform web console, navigate to Observe → Metrics.
- In the query field, enter the following PromQL expression to query SPIRE Server metrics:

    {job="spire-server"}

- In the query field, enter the following PromQL expression to query SPIRE Agent metrics:

    {job="spire-agent"}
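With metrics collection in place, you can also build alerting on top of these queries. The following PrometheusRule sketch fires when a SPIRE metrics target stops reporting; the rule name, duration, and severity shown here are illustrative assumptions, not part of the product:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: spire-metrics-alerts                # hypothetical name
  namespace: zero-trust-workload-identity-manager
spec:
  groups:
  - name: spire.rules
    rules:
    - alert: SpireMetricsTargetDown         # hypothetical alert name
      # `up` is 0 when Prometheus fails to scrape a target
      expr: up{job=~"spire-server|spire-agent"} == 0
      for: 5m                               # illustrative duration
      labels:
        severity: warning
      annotations:
        summary: A SPIRE metrics target has been unreachable for 5 minutes.
```

Because the rule is created in the same namespace as the ServiceMonitor objects, the user workload Prometheus instance evaluates it automatically.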
10.7. Uninstalling the Zero Trust Workload Identity Manager
You can remove the Zero Trust Workload Identity Manager from OpenShift Container Platform by uninstalling the Operator and removing its related resources.
10.7.1. Uninstalling the Zero Trust Workload Identity Manager
You can uninstall the Zero Trust Workload Identity Manager by using the web console.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
- The Zero Trust Workload Identity Manager is installed.
Procedure
- Log in to the OpenShift Container Platform web console.
- Uninstall the Zero Trust Workload Identity Manager:
  - Go to Operators → Installed Operators.
  - Click the Options menu next to the Zero Trust Workload Identity Manager entry, and then click Uninstall Operator.
  - In the confirmation dialog, click Uninstall.
- Go to Operators → Installed Operators and verify that the Zero Trust Workload Identity Manager entry no longer appears.
10.7.2. Uninstalling Zero Trust Workload Identity Manager resources by using the CLI
After you have uninstalled the Zero Trust Workload Identity Manager, you have the option to delete its associated resources from your cluster.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
Procedure
- Delete the operand objects by running each of the following commands:

    $ oc delete ZeroTrustWorkloadIdentityManager cluster
    $ oc delete SpireOIDCDiscoveryProvider cluster
    $ oc delete SpiffeCSIDriver cluster
    $ oc delete SpireAgent cluster
    $ oc delete SpireServer cluster

- Delete the persistent volume claims (PVCs), CSI driver, and services by running each of the following commands:

    $ oc delete pvc -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    $ oc delete csidriver -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    $ oc delete service -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager

- Delete the namespace by running the following command:

    $ oc delete ns zero-trust-workload-identity-manager

- Delete the cluster-wide role-based access control (RBAC) resources by running each of the following commands:

    $ oc delete clusterrolebinding -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager
    $ oc delete clusterrole -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager

- Delete the admission webhook configuration by running the following command:

    $ oc delete validatingwebhookconfigurations -l=app.kubernetes.io/managed-by=zero-trust-workload-identity-manager

- Delete the Custom Resource Definitions (CRDs) by running each of the following commands:

    $ oc delete crd spireservers.operator.openshift.io
    $ oc delete crd spireagents.operator.openshift.io
    $ oc delete crd spiffecsidrivers.operator.openshift.io
    $ oc delete crd spireoidcdiscoveryproviders.operator.openshift.io
    $ oc delete crd clusterfederatedtrustdomains.spire.spiffe.io
    $ oc delete crd clusterspiffeids.spire.spiffe.io
    $ oc delete crd clusterstaticentries.spire.spiffe.io
    $ oc delete crd zerotrustworkloadidentitymanagers.operator.openshift.io
Verification
To verify that the resources have been deleted, replace each oc delete command with oc get, and then run the command. If no resources are returned, the deletion was successful.
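The label-selector checks described above can be scripted. A minimal sketch, assuming the app.kubernetes.io/managed-by label used throughout this procedure, that prints the oc get spot check for each labeled resource type:

```shell
#!/bin/sh
# Print an `oc get` spot check for each labeled resource type deleted above.
# Run the printed commands against your cluster; empty results mean the
# deletion was successful.
selector='app.kubernetes.io/managed-by=zero-trust-workload-identity-manager'
for kind in pvc csidriver service clusterrolebinding clusterrole validatingwebhookconfigurations; do
  echo "oc get $kind -l $selector"
done
```

The operand objects and CRDs are named explicitly rather than labeled, so verify those with oc get on each name as described above.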